Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
I don't really see myself ever using something like this. I'm sure others would find it useful, but I imagine the majority would not.
How would such a detector work?
Specifically, how would it detect this as invalid but not _also_ detect the same M1 and M3 as invalid if M2 were:
class intbox { public int value; }
ref int M2()
{
    intbox y = new intbox { value = 123 };
    return ref M1(ref y.value);
}
Or if it's M2 that would be detected as invalid, what if M1 returned an (unrelated) intbox.value?
If it's conservative enough that it will reject it even with either [or both] of these changes, what cases _will_ it accept?
Good questions. When researching this prototype we did a sketch of how such a detector might work. In the scenario you describe there is no problem because no ref to y is ever passed to M1, and therefore M1 cannot possibly return a ref to y. y.value is a variable on the heap somewhere, so it is perfectly safe to pass around arbitrarily. The actually dangerous scenario is:
struct intbox { public int value; } // NOW A STRUCT
ref int M2()
{
    intbox y = new intbox { value = 123 };
    return ref M1(ref y); // Now M1 takes a ref intbox.
}
M1 might be returning a ref to y.value, which is on the stack and about to die.
The detector would have to keep track of what local variables of value type were being passed by ref, and if the returns coming back could possibly be interior to those locals. If you had:
ref double M2() // now returns a ref double
{
    intbox y = new intbox { value = 123 };
    return ref M1(ref y); // Now M1 takes a ref intbox and returns a ref double
}
then no problem; there's no way M1 could be returning a ref double that came out of the storage of y, because intbox doesn't have any field of type double.
Basically you just need to do a little local flow analysis on every ref local, and see if any ref return can possibly be returning the local or a portion of it. It's not that hard a problem. -- Eric
A little off-topic...
On a desktop system, rich in resources, I'd cut the "ref" accessor. I don't see any good reason to keep it, especially now that the world is going toward async and immutability is gaining importance.
Anyway, C# can run even on the Compact and Micro Frameworks, where resources are like water in the desert.
Now, consider an array of structs (e.g. Point) and a loop to translate all of them of an offset (also a Point):
for (int i=0; i<N; i++)
{
pt[i].X += offset.X;
pt[i].Y += offset.Y;
}
Well, in this trivial case the ref is important, and it would be even more important if I were able to use it "inline". That is because that loop performs poorly: it has to go through an indexer twice.
If I add this helper:
static void Adder(ref Point pt, ref Point offset)
{
    pt.X += offset.X;
    pt.Y += offset.Y;
}
performance rises a lot, because there is only one indexing operation per element, and nothing is copied.
My question is: would it be a valuable feature to be able to take an inline "ref" to a struct element of an array, without having to write a separate function?
Thanks a lot.
Please don't add this to C#. I don't want to maintain code that uses it.
I hear you, but I should point out that we get that feedback from customers for pretty much every single feature we propose adding. People basically say "Well I will know how to use this feature correctly, but my idiot coworkers are going to mess it all up and then I'm going to have to clean up their godawful code, so please don't give those bozos any more power." We got that feedback for generics, LINQ, dynamic, async, you name it.
We take very seriously the fact that features can be misused; we want C# to be a "pit of quality" language, where the language naturally leads you to write the high-quality solution and you really have to climb out of the pit to write something low-quality. But we also trust our customers to follow good practices and to learn how a powerful tool works before they start building with it.
What scares me about this feature is that it makes it easier to write programs that have lots of variable aliasing in them. Aliasing is hard on the compiler because it greatly complicates analysis. And if the compiler is having a hard time, humans are going to have a hard time as well. But if hypothetically we did this -- and like I said, we're probably not going to -- it's not like we're going to go down the C/C++ road and allow you to pass back references to dead variables. The feature will still be memory-safe. -- Eric
I'm with Jeff; I think maintaining this on methods might be hard.
However, I wouldn't mind it on properties; it would be nice to be able to return a struct (such as Point) and you can change a property of that struct without having to copy the struct to a local, change the local and then set it back to the original property.
A passionate plea to NOT add this.
@Mario, there are a number of patterns/use cases where "ref" parameters really make good sense. For example, if you are implementing an immutable system and need to update the caller with multiple new instances. Also it is a very handy paradigm when initializing read-only fields and you want to factor this out of the constructor body itself. [although I wish the definition of readonly was changed... but after 5 years, I have given up even asking]
I can see how it appeals to those who like tricky unreadable code, but we're not counting every byte in an application as much as we did back when C was invented.
I prefer having readable code and letting the compiler/jitter work out the "tricks".
I find that this feature would quickly result in bugs because I feel that it doesn't follow the principle of least surprise.
This sure would reduce the readability/maintainability of code. But that doesn't mean that it should not be implemented. Experts who need this kind of power may use it.
For the life of me I can't think of any uses for this in my day to day usage. I can't even think of uses for it in extreme performance scenarios when I would be willing to tolerate the conceptual complexity incurred.
I'd therefore go with the 'no' option unless someone could point out some compelling use cases I think would benefit me :)
The hit to the reflection/generics layer would also be quite unpleasant (especially since you don't even have the ultimate, if costly, fallback of treating it as a possibly boxed object).
Please don't add this, whenever I see a ref I'm very suspicious of what is going on.
If you need C++ features, use managed C++.
I wonder what would be the practical differences between ref types as you discuss, and a generic class...
public class ValueRef<T> { public T Value; } /* Untested. Ctor etc. omitted. */
The Max function mooted would instead accept two ValueRef<int> object references and return one of those. There's only one copy of the value inside, as long as all access are via x.Value.
Granted, this isn't as tidy as putting the word 'ref' next to the type name, and I'm also side-stepping the question of what practical uses such an object has.
(I've not tested any of this or really thought it through. Please be nice.)
billpg
So how do you make a ValueRef<int> to the tenth element of an integer array, say?
Your idea is not so farfetched though. Something I deliberately did not mention in this article is that something like the ValueRef type you propose actually exists! It is called TypedReference and it is a Very Special Type. It is used only for obscure interop scenarios where you need to be able to pass around a reference to a variable whose type is not known at compile time. This is a subject for another day. -- Eric
I agree with Jeff.
I can't think of any situations where I would actually want this. In the strange event that I need something like this, I'll use something like Eric's ref class ( stackoverflow.com/.../2982037 ). If someone trying to port C++ code hits an issue that requires something like this, I'd prefer solutions that avoid introducing extra syntax; it only serves to add more ways for other people to make code less maintainable. One of the things I like about all the Marshalling functionality is that it's mostly off to the side and ignorable until actually needed.
I do see Sam's point and *have* encountered situations where it would have made my code slightly simpler, but every time I've hit such a situation I was able to work around it very easily. I think supporting even that much would add more problems than it would solve.
Eric, I hope you'll blog about TypedReference. Not long ago, I had to write the equivalent of htmlTextWriter._attrList[index].value = someValue using Reflection. Because _attrList is an array of RenderAttribute structures, I had to get and set the entire array element just to set the value field. It seems like using FieldInfo.SetValueDirect could have made this a little more efficient.
Meh.
As a long (long, long) time C++ user, I tend to avoid aliases. They're very difficult for the compiler/optimizer to reason with, not to mention humans. If I had a dollar for every bug... (including compiler bugs; I found and developed minimal repros for a couple dozen from Borland, a handful from GCC, and countless ones from Microsoft - no offense).
Your Max() example triggers neurons in my brain associated with C preprocessor macros - that's what it mentally feels like. Maybe a better example would show how this would be useful, but I can't think of any case where I'd use this (however, I stayed up all night with my sick 23-month-old, so my brain isn't exactly 100% at the moment).
I'd rather pass everything by value, even going so far as suggesting a Python-esque multiple-return-value syntax:
(resultA, resultB) = Func();
so that the "out" keyword is no longer necessary. Though I guess "ref" could still be used if somebody *really* had to pass a large mutable value type (not something I've ever seen recommended. Or something I've ever done after my first week in C#.). You could even treat this as a syntax-only change, converting additional return values to reference parameters under the covers.
This would be nudging the language in the opposite direction of the "ref return" idea, but to my mind it would result in more clear code.
> doing it properly would require some changes to the CLR. Right now the CLR treats ref-returning methods as legal but unverifiable
In fact, the story is more subtle than that. While Ecma-335 does say that any return by reference is unverifiable, the .NET implementation of the spec does a more stringent analysis. In particular, it is verifiable to return a managed pointer to a field of a reference type (i.e. ldflda immediately followed by ret).
VC++ actually implements such checks during compilation if compiling with /clr:safe. So:
ref class Foo
{
public:
    int x;
    int% GetX() { return x; }
    int% GetY(int% y) { return y; }
};
GetX() will compile successfully and produce verifiable code, but the compiler will bark on GetY().
NP .NET Profiler is designed to assist in troubleshooting performance, memory, and first-chance exception issues in .NET applications. You can download the tool from here.
If you are running an x86 process on an x64 machine, please use the x86 profiler. If the IIS AppPool is configured to run 32-bit web applications, please use the x86 profiler.
Here are common causes:

- The .NET CLR Runtime failed to create the COM object; make sure the profiler is running under admin privileges.
- Use the x86 profiler for a 32-bit process running on a 64-bit machine.
- Make sure the IIS AppPool user account has write access to the output folder.
- Make sure at least one request is submitted.
- Make sure the application is closed before reading the output files.
- Use the x64 profiler to read large log files captured using the x86 profiler.
- Make sure there is enough free hard disk space.
- Make sure the namespace settings used during collection are correct; *.* is not a valid namespace option.
Scala has several ways to deal with error handling, and often times people get confused as to when to use what. This post hopes to address that.
Let me count the ways.
Option
People coming to Scala from Java-like languages are often told Option is a replacement for null or exception throwing. Say we have a function that creates some sort of interval, but only allows intervals where the lower bound comes first.
class Interval(val low: Int, val high: Int) {
  if (low > high)
    throw new Exception("Lower bound must be smaller than upper bound!")
}
Here we want to create an Interval, but we want to ensure that the lower bound is smaller than the upper bound. If it isn't, we throw an exception. The idea here is to have some sort of "guarantee" that if at any point I'm given an Interval, the lower bound is smaller than the upper bound (otherwise an exception would have been thrown).
However, throwing exceptions breaks our ability to reason about a function/program. Control is handed off to the call site, and we hope the call site catches it – if not, it propagates further up until at some point something catches it, or our program crashes. We’d like something a bit cleaner than that.
Enter Option – given our Interval constructor, construction may or may not succeed. Put another way, after we enter the constructor, we may or may not have a valid Interval. Option is a type that represents a value that may or may not be there; it can either be Some or None. Let's use what's called a smart constructor.
final class Interval private(val low: Int, val high: Int)

object Interval {
  def apply(low: Int, high: Int): Option[Interval] =
    if (low <= high) Some(new Interval(low, high))
    else None
}
We make our class final so nothing can inherit from it, and we make our constructor private so nobody can create an instance of Interval without going through our own smart constructor function, Interval.apply. Our apply function takes some relevant parameters, and returns an Option[Interval] that may or may not contain our constructed Interval. Our function does not arbitrarily kick control back to the call site due to an exception, and we can reason about it much more easily.
Either and scalaz.\/
So, Option gives us Some or None, which is all we need if there is only one thing that could go wrong. For instance, the standard library's Map[K, V] has a function get that, given a key of type K, returns Option[V] – clearly if the key exists, the associated value is returned (wrapped in a Some). If the key does not exist, it returns a None.
But sometimes one of several things can go wrong. Let’s say we have some wonky type that wants a string that is exactly of length 5 and another string that is a palindrome.
final class Wonky private(five: String, palindrome: String)

object Wonky {
  def validate(five: String, palindrome: String): Option[Wonky] =
    if (five.size != 5) None
    else if (palindrome != palindrome.reverse) None
    else Some(new Wonky(five, palindrome))
}

/* Somewhere else.. */
val w = Wonky.validate(x, y) // say this returns None
Clearly something went wrong here, but we don't know what. If the strings were sent over from some front end via JSON or something, when we send an error back hopefully we have something more descriptive than "Something went wrong." What we want is, instead of None, something more descriptive. We can look into Either for this, where we use Left to hold some sort of error value (similar to None), and Right to hold a successful one (similar to Some).
To manipulate such values that may or may not exist (presumably obtained from functions that may or may not fail), we use monadic functions such as flatMap, often in the form of monad comprehensions, or for comprehensions as Scala calls them.
val x = ...
val y = ...

for {
  a <- foo(x)
  b <- bar(a)
  c <- baz(y)
  d <- quux(b, c)
} yield d
In the case of Option, if any of foo/bar/baz/quux returns a None, that None simply gets threaded through the rest of the computation – no try/catch statements marching off the right side of the screen!
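To make that short-circuiting concrete, here is a small self-contained sketch (the half function is made up for illustration):

```scala
// Returns half of an even number; None for odd numbers.
def half(n: Int): Option[Int] =
  if (n % 2 == 0) Some(n / 2) else None

for {
  a <- half(8) // Some(4)
  b <- half(a) // Some(2)
} yield b
// Some(2)

for {
  a <- half(7) // None: the remaining steps are skipped entirely
  b <- half(a)
} yield b
// None
```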
For comprehensions in Scala require the type we're working with to have flatMap and map. flatMap, along with pure and some laws, are the requisite functions needed to form a monad – map can be defined in terms of flatMap and pure. With scala.util.Either however, we don't have those – we have to use an explicit conversion via Either#right or Either#left to get a RightProjection or LeftProjection (respectively), which specifies in what direction we bias the map and flatMap calls. The convention, however, is that the right side is the "correct" (or "right", if you will) side and the left represents the failure case, but it is tedious to continuously call Either#right on values of type Either to achieve this.
Thankfully, we have an alternative in the Scalaz library via scalaz.\/ (I just pronounce this "either" – some say disjoint union or just "or"), a right-biased version of scala.util.Either – that is, calling \/#map maps over the value if it's in a "right" (scalaz.\/-), otherwise if it's "left" (scalaz.-\/) it just threads it through without touching it, much like how Option behaves. We can therefore alter the earlier function:
scalaz.\/ also has several useful methods not found on Either.
Try
As of Scala 2.10, we have scala.util.Try, which is essentially an either with the left type fixed as Throwable. There are two problems (that I can think of at this moment) with this:
A big factor in our ability to deal with all these error handling types nicely is using their monadic properties in for comprehensions.
For an explanation of the monad laws, there is a nice post here describing them (using Scala).
Try violates the left identity.
def foo[A, B](a: A): Try[B] = throw new Exception("oops")

foo(1)              // exception is thrown
Try(1).flatMap(foo) // scala.util.Failure
This can cause unexpected behavior when used, perhaps, in a monad/for comprehension. Furthermore, Try encourages the use of Throwables, which breaks control flow and parametricity.
While it certainly may be convenient to be able to wrap an arbitrary code block with the Try constructor and let it catch any exception that may be thrown, we still recommend using an algebraic data type describing the errors and using YourErrorType \/ YourReturnType.
scalaz.Validation
Going back to our previous example with validating wonky strings, we see an improvement that could be made.

Wonky.validate("foo", "bar") // -\/(MustHaveLengthFive("foo"))
The fact that one string must have a length of 5 can be checked and reported separately from the other being palindromic. Note that in the above example "foo" does not satisfy the length requirement, and "bar" does not satisfy the palindromic requirement, yet only "foo"'s error is reported due to how \/ works. What if we want to report any and all errors that could be reported ("foo" does not have a length of 5 and "bar" is not palindromic)?
If we want to validate several properties at once, and return any and all validation errors, we can turn to scalaz.Validation. The modified function would look something like:
sealed abstract class WonkyError
case class MustHaveLengthFive(s: String) extends WonkyError
case class MustBePalindromic(s: String) extends WonkyError

final class Wonky private(five: String, palindrome: String)

object Wonky {
  def checkFive(five: String): ValidationNel[WonkyError, String] =
    if (five.size != 5) MustHaveLengthFive(five).failNel
    else five.success

  def checkPalindrome(p: String): ValidationNel[WonkyError, String] =
    if (p != p.reverse) MustBePalindromic(p).failNel
    else p.success

  def validate(five: String, palindrome: String): ValidationNel[WonkyError, Wonky] =
    (checkFive(five) |@| checkPalindrome(palindrome)) { (f, p) =>
      new Wonky(f, p)
    }
}

/* Somewhere else.. */

// Failure(NonEmptyList(MustHaveLengthFive("foo"), MustBePalindromic("bar")))
Wonky.validate("foo", "bar")

// Failure(NonEmptyList(MustBePalindromic("bar")))
Wonky.validate("monad", "bar")

// Success(Wonky("monad", "radar"))
Wonky.validate("monad", "radar")
Awesome! However, there is one caveat – we cannot in good conscience use scalaz.Validation in a for comprehension. Why? Because there is no valid monad for it. Validation's accumulative nature works via its Applicative instance, but due to how the instance works, there is no consistent monad (every monad is an applicative functor, where monadic bind is consistent with applicative apply). However, you can use the Validation#disjunction function to convert it to a scalaz.\/, which can then be used in a for comprehension.
One more thing to note: in the above code snippet I used ValidationNel, which is just a type alias. ValidationNel[E, A] stands for Validation[NonEmptyList[E], A] – the actual Validation will take anything on the left side that is a Semigroup, and ValidationNel is provided as a convenience, as often times you may want a non-empty list of errors describing the various errors that happened in a function. However, you can do several interesting things with other semigroups.
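For example (a sketch, assuming scalaz imports; note that method names such as failure/fail differ slightly between scalaz versions), plain String is a Semigroup under concatenation, so a Validation[String, A] accumulates failures into one message:

```scala
import scalaz._, Scalaz._

def nonEmpty(s: String): Validation[String, String] =
  if (s.isEmpty) "string was empty; ".failure else s.success

def shortEnough(s: String): Validation[String, String] =
  if (s.size > 5) s"'$s' is too long; ".failure else s.success

// Both checks fail, and the String semigroup concatenates the messages:
(nonEmpty("") |@| shortEnough("abcdefg")) { (a, b) => a + b }
// Failure("string was empty; 'abcdefg' is too long; ")
```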
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.
- By Clay Allsopp
- August 2nd, 2012
Everyone is trying to craft the next beautiful iOS app, but building on Apple's platform has traditionally required experience in a niche programming language, Objective-C. However, with the release of RubyMotion, anyone can make a completely native iOS app using the power of Ruby.
Developers have tried to get around the Objective-C hurdle by making HTML and JavaScript hybrid apps using tools like PhoneGap and Trigger, but the result can be a substandard user experience. Plus, mobile-centric Web development is yet another narrow skill set that potential developers have to learn. RubyMotion is intended to be an alternative solution that produces apps identical to those created in Objective-C, except using a more accessible and popular language.
How does that work? Unlike normal interpreted Ruby, RubyMotion is compiled to machine code and runs incredibly fast. This allows full access to the existing iOS SDK and APIs while preserving the flexibility and plain fun of Ruby. RubyMotion also includes interactive debugging and testing tools that don’t exist in the traditional Xcode and Objective-C workflow.
The hope is that coding in Ruby and using the RubyMotion toolchain will make developing iOS apps easier for new developers, as well as help existing iOS developers be even more productive. I’ve been working on iOS apps since the original SDK, but I still jumped at the chance to write real native apps in the same language that I was already using for my back ends.
If you’ve hit stumbling blocks learning native iOS development or are just curious about what Ruby on iOS looks like, you should read on. We’ll try out RubyMotion by making an app that grabs some data from the Internet and updates the screen’s content accordingly.
But first, we have to install it.
Set Up
RubyMotion is a commercial product from HipByte, a company founded by the developers responsible for MacRuby. Parts of the RubyMotion tools are open source, but the actual Ruby compiler (where the magic happens) is not. Even though it's only a few months old, a strong community has already developed around RubyMotion; I'm involved in maintaining some cool projects but am in no way affiliated with HipByte.
You can purchase a lifetime license to RubyMotion for $200 on the RubyMotion website. The license comes with a graphical installer that sets up everything on your Mac, no commands necessary. If you run into trouble, support is available on @RubyMotion's Twitter account and the mailing list.
RubyMotion also requires you to install Xcode from the Mac App Store to get some developer libraries and tools. However, RubyMotion's tools work in the command shell, and you're free to use any editor or IDE you want. Supplementary packages are available for most editors that could speed up your development.
Having some experience with RubyGems and Rake also helps because RubyMotion uses these tools. RubyGems should come installed on your Mac, and you can use it to install Rake by running gem install rake in the terminal. If you have never seen those words before, no worries; we'll still cover them like fresh topics.
Create A Project
In contrast to the normal iOS toolchain, RubyMotion works in the command line. You can still use Xcode’s Interface Builder to construct your interfaces, but complete Xcode integration is unsupported at this time. So, fire up the terminal and let’s get started.
RubyMotion uses two commands extensively: rake and motion. The motion command creates RubyMotion projects and manages the RubyMotion tools; if you're coming from a Rails background, it's sort of like the rails command. If you enter motion in the shell, you'll see a brief instruction page:
$ motion
Usage:
  motion [-h, --help]
  motion [-v, --version]
  motion <command> [<args…>]

Commands:
  create     Create a new project
  activate   Activate the software license
  update     Update the software
  support    Create a support ticket
We're interested in motion create <Project Name>. This will create the folder <Project Name> in the current directory and fill it with the essential files and folders that you'll need for a RubyMotion project.

Run motion create Smashing to create your first project. You should see some output like this:
$ motion create Smashing
    Create Smashing
    Create Smashing/.gitignore
    Create Smashing/Rakefile
    Create Smashing/app
    Create Smashing/app/app_delegate.rb
    Create Smashing/resources
    Create Smashing/spec
    Create Smashing/spec/main_spec.rb
Run cd ./Smashing to enter the project's directory. You'll be running all subsequent commands from this location, so keeping it open in a dedicated tab or window is a good idea.
Let’s walk through what this command created:
./Rakefile
This is the file that the rake command uses to determine what commands are available. RubyMotion also uses it for settings such as your app's name, resources and source-code location.
./app
This is the directory that contains all of your code. RubyMotion will recursively dig through this folder and load any *.rb files that it finds. You can specify additional directories outside of ./app in Rakefile.
./app/app_delegate.rb
This is your only piece of code right now. It contains your app's delegate. We'll go into more detail soon, but know that every RubyMotion project needs a delegate.
./resources
Files in this directory will be copied into your app. It’s a good place to store images, data and icons.
./spec
This is the directory for your app's automated tests. RubyMotion ships with a port of the Ruby testing framework Bacon, which you can use to write both unit and functional or UI tests. Any *.rb files in this directory will be executed as tests when you invoke the rake spec command.
./spec/main_spec.rb
This is the default test, created as an example.
Compared to larger frameworks such as Rails, this isn't a lot of configuration. The only files we really care about today are Rakefile and app_delegate.rb, so let's dive into those.
Run The App
The Rakefile is the first file loaded when you build your app. Its job is to configure your app's properties and load any additional files that your project might need. By default, it looks something like this:
# -*- coding: utf-8 -*-
$:.unshift("/Library/RubyMotion/lib")
require 'motion/project'

Motion::Project::App.setup do |app|
  # Use 'rake config' to see complete project settings.
  app.name = 'Smashing'
end
You've probably only seen $:.unshift if you're familiar with Ruby. It takes its argument (in this case, /Library/RubyMotion/lib) and adds it to require's search path. This is necessary before require 'motion/project', because that file is actually located in the RubyMotion lib directory. Without the unshift, no RubyMotion code would be found.
The motion/project directory is what actually allows us to write RubyMotion apps. It does a lot of stuff behind the scenes, but most obviously it includes the Motion::Project module that we use immediately afterwards. The App.setup block is where we can edit our app's name, files, identifier and many other options. As the generated comment suggests, you can run rake config to see all possible properties. By default, it uses the Smashing project name that we passed in motion create.
When you run rake, it will load the Rakefile. Requiring motion/project actually creates a bunch of rake "tasks." These tasks allow you to pass arguments to rake to invoke particular actions. You can see a complete list of included tasks by running rake --tasks in your project's directory:
$ rake --tasks
rake archive              # Create archives for everything
rake archive:development  # Create an .ipa archive for development
rake archive:release      # Create an .ipa for release
…
rake spec                 # Run the test/spec suite
rake static               # Create a .a static library
Looks like rake does quite a bit, doesn't it? Most importantly, if you just run rake, it will build and run your app in the Simulator.

Run rake and observe RubyMotion compiling your project's files (just app_delegate.rb for now). When it's done building, it will open your (so far empty) app in the iOS Simulator:
Additionally, you'll see an irb-esque prompt appear in the shell. This allows you to interact with the app in real time without any additional compilation, which is useful for debugging and rapid interface development. Try running some basic commands with it:
$ rake
…
(main)> "a string"
=> "a string"
(main)> h = {hello: "motion"}
=> {:hello=>"motion"}
(main)> h
=> {:hello=>"motion"}
Our app is off to a great start: we’ve installed RubyMotion, created a new project, learned what all the files do, and gotten a (very bare) app running. Next, we’ll make our creation actually display something on the screen.
Little Boxes
We’re going to build an app that displays a colored box on the screen. Sounds pretty simple, right?
Then we'll spice it up with random color changes using the Colr API. It's going to use a mix of Apple-developed APIs and some cutting-edge work from the RubyMotion community, so when it's done you'll have gotten a well-rounded experience with RubyMotion development. We will end up with something like this:
If you want to follow along, the source for this example is available on GitHub.
Open up app_delegate.rb in your favorite code editor. It's pretty barren, implementing only one function, application:didFinishLaunchingWithOptions::
class AppDelegate
  def application(application, didFinishLaunchingWithOptions:launchOptions)
    true
  end
end
Note that we refer to RubyMotion functions by a combination of their usual Ruby name (application) and their named parameters (didFinishLaunchingWithOptions:), all separated by colons. Named parameters were added to RubyMotion to preserve the existing Objective-C APIs, and the extra symbols are required parts of the method name (i.e. you can't just call delegate.application(@app, options)). Without those extra parameters, we wouldn't be able to tell the difference between def application(application, didFinishLaunchingWithOptions:launchOptions) and def application(application, handleOpenURL:url).
Moving on, RubyMotion looks for a class named AppDelegate and makes it the application's delegate object. This special object receives callbacks for different events in the lifecycle of the app, such as starting up, shutting down and receiving push notifications. The application:didFinishLaunchingWithOptions: function is called once the system has finished its own process of starting the app and is ready for us to take control. In most cases, this function should return true and allow everything to start.
We’re going to add some “views” to our app. Each view is a subclass of UIView, and everything you see on the screen is a descendant of that class. The root view of every app is an instance of UIWindow, a special type of UIView. Views are added as subviews to one another; when you move a view, you also move all of its subviews.
Each view has a frame property, which describes its position and dimensions. The position of a view is actually defined relative to its superview. For example, adding a box at (10, 10) as a subview to a view located at (20, 20) in the window means that our new box will really appear at (30, 30).
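The coordinate math can be sketched in plain Ruby (an ordinary struct standing in for a UIView frame; this illustrates the rule, it is not RubyMotion code):

```ruby
# Minimal stand-in for a view frame: an origin plus a size.
Frame = Struct.new(:x, :y, :width, :height)

# A view's absolute window position is its own origin offset by every
# ancestor's origin, accumulated down the superview chain.
def absolute_origin(frames)
  frames.reduce([0, 0]) { |(ax, ay), f| [ax + f.x, ay + f.y] }
end

superview = Frame.new(20, 20, 200, 200)
box       = Frame.new(10, 10, 100, 100)

puts absolute_origin([superview, box]).inspect  # => [30, 30]
```

Moving the superview shifts every accumulated origin, which is exactly why moving a view moves all of its subviews.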
Edit your AppDelegate to include our new views:
class AppDelegate
  def application(application, didFinishLaunchingWithOptions:launchOptions)
    # UIScreen describes the display the app is running on.
    app_frame = UIScreen.mainScreen.applicationFrame
    @window = UIWindow.alloc.initWithFrame(app_frame)
    # This is the special method of UIWindow which lets them exist outside of a parent view
    @window.makeKeyAndVisible

    # This is our blue box
    # CGRectMake(x, y, width, height)
    @box = UIView.alloc.initWithFrame CGRectMake(0, 0, 100, 100)
    @box.backgroundColor = UIColor.blueColor
    @window.addSubview(@box)

    # UIButtonTypeRoundedRect is the standard button style on iOS
    @button = UIButton.buttonWithType(UIButtonTypeRoundedRect)
    # A button has multiple "control states", like disabled, highlighted, and normal
    @button.setTitle("Change Color", forState:UIControlStateNormal)
    # Sizes the button to fit its title
    @button.sizeToFit
    # Position the button below our box.
    @button.frame = CGRectMake(0, @box.frame.size.height + 20,
                               @button.frame.size.width, @button.frame.size.height)
    @window.addSubview(@button)

    true
  end
end
We’ve created our window and made it the “key” window, which means that it’s the window receiving user input. We then added our blue @box and @button as subviews to the window.
Now, this is not the only way to add views to our window. We could have added our views using the Interface Builder, a tool included with Xcode for creating iOS UIs with a drag-and-drop interface. RubyMotion does support Interface Builder, but you really shouldn’t dive into it without understanding what happens behind the scenes.
The other method is to use something called a UIViewController and then to populate its view property with our boxes. This is actually the correct way, because it conforms your app to the model-view-controller design pattern that the SDK follows. You simply tell the window to load the controller, and everything will get handled appropriately. So, why didn’t we do it this way? Because it requires more information overhead, and in this example we’re trying to be as concise as possible; in production code, you will definitely need to use a controller when you start managing more than one or two views.
Moving on, rake our improved delegate and check out the result, which is starting to look better:
Now we need to make @button actually do something besides turn blue when we press it. Buttons can take targets for certain events such as taps. We can add a target and callback (known as an action in iOS SDK parlance) in our AppDelegate like this:
…
    @window.addSubview(@button)

    @button.addTarget(self, action:"button_tapped",
                      forControlEvents:UIControlEventTouchUpInside)

    true
  end

  def button_tapped
    puts "I'm tapped!"
  end
end
UIControlEventTouchUpInside is one of many UIControlEvents that a button can respond to. It sounds complicated, but in plain English it refers to a “touch” that lifts “up” and is still “inside” of the button’s rectangle. When that occurs, button_tapped will be called. The puts prints a string to the terminal when we run the app in the simulator.
Give it a rake and confirm for yourself that our message prints out:
$ rake
    Build ./build/iPhoneSimulator-5.1-Development
  Compile ./app/app_delegate.rb
     Link ./build/iPhoneSimulator-5.1-Development/Smashing.app/Smashing
   Create ./build/iPhoneSimulator-5.1-Development/Smashing.dSYM
 Simulate ./build/iPhoneSimulator-5.1-Development/Smashing.app
(main)> I'm tapped!
We’ve thrown some views onto our previously barren app and added some very basic interactivity. Time to hook it up to the good ol’ information superhighway.
HTTP
Now that we’ve got a box and a button on the screen, let’s grab some data from the Internet. To make our networking code painless, we’ll use BubbleWrap, a popular RubyMotion library filled with many idiomatic Ruby wrappers. It’ll also handle JSON de-serialization, so it’s the only external component we need.
To install BubbleWrap, run gem install bubble-wrap. In our Rakefile, we need to require bubble-wrap:
…
require 'motion/project'
require 'bubble-wrap'
…
This makes the BubbleWrap::HTTP and BubbleWrap::JSON libraries available to us. The default BubbleWrap set-up also includes helpers for UIColor, which we’ll use to convert a hexadecimal color code like #f8f8f8 into a UIColor object.
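BubbleWrap’s String#to_color does that conversion for us; a rough plain-Ruby equivalent of the parsing step might look like this (a sketch under my own assumptions, not BubbleWrap’s actual implementation):

```ruby
# Parse a hex color code like "#f8f8f8" (or "f8f8f8") into its
# red/green/blue components in the 0..255 range.
def hex_to_rgb(hex)
  digits = hex.sub(/\A#/, "")        # drop a leading "#" if present
  digits.scan(/../).map { |pair| pair.to_i(16) }
end

puts hex_to_rgb("#f8f8f8").inspect  # => [248, 248, 248]
```

A UIColor would then be built from those three components scaled to 0.0..1.0.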
Now on to the dirty work. The Colr API has an endpoint that returns information about a random color in JSON. We’re going to query that, get the hex representation of that color, and set our box’s backgroundColor to that value. In AppDelegate, let’s add the code to make this HTTP call in our button callback:
def button_tapped
  BubbleWrap::HTTP.get("") do |response|
    color_hex = BubbleWrap::JSON.parse(response.body.to_str)["colors"][0]["hex"]
    # ensure that color_hex is a String when we run .to_color
    @box.backgroundColor = String.new(color_hex).to_color
    @button.setTitle(color_hex, forState: UIControlStateNormal)
  end
end
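The JSON-digging step can be exercised in plain Ruby with the standard library. The sample payload below is an assumption about the response’s shape, inferred from the keys used in the code above:

```ruby
require "json"

# A response body shaped like the one the callback above expects:
# a "colors" array whose first entry carries a "hex" field.
body = '{"colors":[{"hex":"a4c639","id":1}]}'

color_hex = JSON.parse(body)["colors"][0]["hex"]
puts color_hex  # => "a4c639"
```

Everything else in the callback is just handing that string to the view layer.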
This seemingly more complex task takes fewer lines of code than setting up our views. Run rake and check out the fruits of our labor:
We can make one more small change to start adding that signature iOS polish. Right now, the user doesn’t get any feedback while the HTTP request is loading, aside from the tiny network activity indicator in the top bar. We can improve that experience by changing the button title while that’s going on. It’s also a good idea to be fault tolerant and show an error if the Colr API returns some bad data. Let’s make those changes:
def application(application, didFinishLaunchingWithOptions:launchOptions)
  …
  @button.setTitle("Change Color", forState:UIControlStateNormal)
  @button.setTitle("Loading…", forState:UIControlStateDisabled)
  @button.setTitleColor(UIColor.lightGrayColor, forState:UIControlStateDisabled)
  …
end

def button_tapped
  @button.enabled = false

  BubbleWrap::HTTP.get("") do |response|
    color_hex = BubbleWrap::JSON.parse(response.body.to_str)["colors"][0]["hex"]

    # check if bad data was returned
    if color_hex and color_hex.length > 0
      @box.backgroundColor = String.new(color_hex).to_color
      @button.setTitle(color_hex, forState: UIControlStateNormal)
    else
      @button.setTitle("Error :(", forState: UIControlStateNormal)
    end

    @button.enabled = true
  end
end
Let’s rake one last time to see our better UX in action. Not too shabby for our first app, right? You can look at the source for this example on GitHub.
“An Object In Motion…”
We’ve created an iOS app with dynamic data and a solid user experience in under 40 lines of clean Ruby. Making this app in Objective-C is definitely possible, but we would have lost the readability and brevity that Ruby affords. We also could have done it using a hybrid Web and native app; however, once you venture out of the basic UI building blocks and into a feature-complete iOS app, perfectly replicating a native experience becomes incredibly difficult.
Is RubyMotion right for your next iOS project? If you don’t have much experience in Objective-C but want to try app development, Ruby definitely has a gentler learning curve. Or if you’re already using Ruby somewhere else in your stack, you might reap the benefits of code portability. What about porting an existing app? Well, RubyMotion allows for middle ground: you can export your Ruby code as a static library to use in an Objective-C app, or you can use existing Objective-C code in a RubyMotion app.
This is not to say that RubyMotion is perfect. Its biggest flaw is debuggability: when you hit a nasty bug originating in the RubyMotion compiler, you can’t do a whole lot to trace or fix it. Because RubyMotion is currently closed source, we’ll have to wait for these issues to be remedied by HipByte. But if you don’t do anything tricky or “magical” with your code, then this shouldn’t be a problem. Unfortunately, I run into these issues fairly often because those “fun” bits are what make Ruby so compelling to me.
Fundamentally, RubyMotion hasn’t completely done away with the original Objective-C API; you’ll still need to learn the iOS SDK to make really great iOS apps. That in itself tends to make up 80% of the hard work of developing any mobile app. And if you need to whip something up for multiple platforms, RubyMotion definitely won’t make your life any easier; it’s still very nascent and has some maturing to do. But efforts to Ruby-fy Apple’s APIs using RubyMotion are yielding interesting new directions for iOS development. On a platform defined by constraints and restrictions, having more choice is never a bad thing.
Further Reading
- Developer Center, RubyMotion
The official developer documentation for RubyMotion.
- iOS Developer Library, Apple
RubyMotion uses the existing iOS SDK; it doesn’t provide any new classes out of the box. If you run into problems with your UIViews or other SDK classes, consult this.
- @RubyMotion
The official Twitter account, where RubyMotion responds to support requests and publicizes new RubyMotion libraries.
- RubyMotion Tutorials
A community-curated database of RubyMotion tutorials and writing.
- MacRuby
RubyMotion is technically a port of MacRuby to iOS. If you enjoy using Ruby for your iOS apps, then you might like writing Mac apps the same way.
Source: http://www.smashingmagazine.com/2012/08/02/get-started-writing-ios-apps-with-rubymotion/
If you plan to obtain the Certified for Windows Vista logo for a .NET application, this article will be useful for you. Here you will find a simple but complete (certifiable) VB.NET + C# desktop application, including the full source code of both the application and the installer, all packaged in a Visual Studio 2005 solution.
Some time ago, I was assigned a task that seemed daunting to me: to obtain the Certified for Windows Vista logo for one of our applications. All I had at my disposal was Visual Studio, the source code of the application, and an Internet connection. Well, and the necessary funding when required.
The work has been finished and the application is now certified. While "daunting" is not actually the right word to describe the process, it was not at all easy. The problem was the lack of information provided by Microsoft; I'm not referring to all the administrivia (registering at Winqual, obtaining a digital certificate for your organization, asking for waivers, and the like; all of this is pretty well explained in the Innovate on Windows Vista and Winqual pages), but to the purely technical part. Microsoft throws the test cases document in your face and... good luck, you are alone.
For example, take test case 25: Verify the application properly handles files in use during install. You open Orca, browse the installer you have created for your application and... ooops, no trace of the MsiRMFilesInUse dialog. The installer created with Visual Studio does not create it. What to do now?
Luckily, we have the Internet, we have search engines, and we have quite a few people who already fought the certification process. For example, the solution for the MsiRMFilesInUse issue is in this forum post. Ok ok, I said that Microsoft does not provide any help and now I post a link to an MSDN blog. I meant that Microsoft should have given this information in the first place, in a more complete test case document or in a separate FAQ.
Well, enough Microsoft Good/Evil discussion. The fact is that after I finished the certification work, I thought that it would be a good idea to share the knowledge I have obtained during the whole process in order to make life a bit easier for other people having the same task assigned. Who knows, maybe "you" could save "my" life tomorrow, so I want you to be happy with me by then.
First of all, I assume that you have already gotten your feet wet with the certification process. I mean, I assume that you are aware that in order to be certified, an application must pass a number of test cases as described in the test cases document provided by Microsoft (download it from the Innovate on Windows Vista page), that you need to purchase an organizational digital certificate, that you need to register your application for error reporting at Winqual, and that once your application is ready, you need to submit it to a testing authority. Except for the test cases, I'll not explain the details of the whole process here.
Second, I assume that your application is structurally similar to the one I certified. "Structurally similar" is a nice buzzword that I have invented right now to say that your application should be as follows:
What I will explain here applies to applications that have the above features. This is not to say that if your application differs somewhere from mine, you must stop reading right now. It is only to say that you must then be extra careful, since some of the things I will say here might not be true for you and/or you would need to search for extra information somewhere else. For example, if your application supports concurrent user sessions, you must ensure that sounds from one session are not heard in another one. I'll not cover this issue here, as I didn't need to address it.
This is common sense, but anyway, here goes: whatever you read here, you MUST test your application against all applicable test cases before submitting it to the testing authority. I'll not accept any complaint of the type, "I wasted $1000 on a failed certification because you said XXX, but in my application it turned out to be ZZZ!" I'm human and therefore I make mistakes, not to mention that I don't know absolutely everything about Visual Studio and .NET applications. I want to help you, but I'm not God.
We will dissect an application that I have created especially for this article. The application name is Killer Application and is developed by the fictitious company Capsule Corporation. The source code, in the form of a Visual Studio 2005 solution, is available for download at the top of this page.
What Killer Application does is, essentially, nothing useful (it consists of a parent MDI form with a couple of menus that show some data and allow you to create and read a text file). However, it will pass all the test cases and could obtain the Certified for Windows Vista logo; that is, if you are Bill Gates and have a $1000 note in your pocket ready to spend on anything. I have created it so that it mimics the basic structure of the real application that I prepared for certification. For this reason, there are VB.NET and C# projects mixed.
After you have read this article, you can use the Killer Application solution as a skeleton for creating your own application, or you can just grab some source code snippets, or you can simply get some ideas about how to do things and do all your code from scratch; whatever best fits your needs. It could even happen that you find a better way to do things (really!). In this case, it would be nice if you could drop a comment explaining your discoveries.
Let's start to move by preparing the environment needed to compile Killer Application (NOT the testing environment: you do not need Windows Vista for the development process; in fact, I use Windows XP). First you need, obviously, Visual Studio 2005. I guess that Visual Studio 2008 would also work via the appropriate solution conversion.
Second, I'll assume that you have obtained your organizational certificate. If not, you will not be able to compile the solution unless you modify the solution post-build events in Visual Studio (more details on this later). I'll assume that the credentials and private key file names are, respectively, capsulecred.spc and capsulekey.pvk, but of course you can use whatever names you like.
Third, create a directory named vistatools at the root of your home drive (that is, C:\vistatools). This is where we will put some extra files needed when compiling the application.
Fourth, copy the following files in the vistatools directory:
(You can surely find these files for download by searching on the Internet, but I prefer to point to the "official" source.)
Fifth, you need to create a digital certificate file for your organization from the credentials file and the private key file. You need to do this only once for all your applications. Here are the steps:
pvkimprt -PFX capsulecred.spc capsulekey.pvk
A GUI will then appear asking for the private key password. Later, it will ask you if you want to export the private key. Say yes. Select default parameters on the next screen (PKCS #12 format, allow secure protection) and enter a new password for the generated certificate file (I'll assume that you enter kaitokun here). Finally, when asked for the name of the certificate file, browse to the vistatools directory and select an appropriate name (I'll assume capsulekey.pfx).
When you are done, you can delete the capsulecred.spc and capsulekey.pvk keys from the vistatools directory if you want (but ensure that you have stored them somewhere else, of course!). The contents of the directory should be: mt.exe, MsiTran.exe, signtool.exe and capsulekey.pfx. With all of this, you are ready to compile Killer Application.
Here we will take an overview to Killer Application: what it is composed of and what it does. Later, we'll dive into the details of the source code and the project settings. The Killer Application solution consists of four projects:
When you run the application, you will see an MDI form with a menu containing three main entries. This menu allows you to perform some simple actions that will help you in exercising the test cases (or at least that was the intent):
That's all. It's not very much, but enough to exercise all the applicable test cases. Now let's go to the details.
There are three main focus areas to look at when performing the application dissection: the solution/project settings, the source code and the installer project. In this section, we'll see the details of the first one. I'll do this by enumerating the steps that should be followed if the solution were to be created from scratch. All of this was, of course, done by me when creating the Killer Application solution. I believe that this information will be useful for you even if you are preparing for certification of an already existing solution.
Note that, of course, you can use whatever solution and project names you like, and create more or less projects depending on your needs.
Open Visual Studio and create a new project of type Other Project Types -> Visual Studio Solutions -> Blank Solution. Name it Killer Application. Add three new projects to the solution: a VB Windows Forms Application named Killer Application, a C# Class Library Project named Capsule.KillerApplication.Support, and another C# Class Library Project named Capsule.KillerApplication.Install. Remove the default Form1 and Class1 items that Visual Studio adds to the projects by default. Don't create the installer project at this moment. Add a reference to the support class library in the main VB.NET project. Add a new class to the main VB.NET project and name it Program. Add the following placeholder code to the class:
<STAThread()> _
Public Shared Sub Main(ByVal args() As String)
End Sub
Open the properties dialog of the main VB.NET project and perform the following changes in the Application tab:
Note that if your main application project is a C# project, the Program class with the placeholder Main method is created automatically. You still need to change the root namespace anyway. You may ask, "Why do we bother with a Main method instead of placing the startup code in the main form load event?" As we'll see when we take a look at the application source code, we need to check a number of conditions before our application starts, and some of them may even prevent the application from running. So, we need to be able to execute code before any form is created.
Now we'll fill in some information about the application in the AssemblyInfo files of each project. In C# projects, it is in the Properties folder. In the VB.NET project, it is in the My Project folder, but in this case you first need to activate the Show All Files icon in the solution explorer window. For the main application assembly, that's the data we will set up:
<Assembly: AssemblyTitle("Killer Application")>
<Assembly: AssemblyDescription( _
    "Simple example of an application that could obtain the ""Certified for Windows Vista"" logo.")>
<Assembly: AssemblyCompany("Capsule Corporation")>
<Assembly: AssemblyProduct("Killer Application")>
<Assembly: AssemblyCopyright("© Capsule Corporation 2007, 2008")>
For the support DLL, the data will be:
[assembly: AssemblyTitle("Capsule.KillerApplication.Support")]
[assembly: AssemblyDescription("Support DLL for Killer Application")]
[assembly: AssemblyCompany("Capsule Corporation")]
[assembly: AssemblyProduct("Capsule.KillerApplication.Support")]
[assembly: AssemblyCopyright("© Capsule Corporation 2007, 2008")]
...and very similar for the install custom actions DLL; just change the assembly title and description. Setting up metadata is not strictly necessary, but it is good practice. Killer Application uses this data in the about box.
Test case 1 states that all application executable files must "contain an embedded manifest that define its execution level." With Visual Studio 2005, there is no direct way of including manifest files in assemblies. Visual Studio 2008 makes this task easier, by the way, but we'll assume we are all poor Visual Studio 2005 users here. Luckily, there is an indirect way to do this. In the application main assembly (the VB.NET executable), add a new text file and name it Killer Application.exe.manifest. The file name must be the same as the assembly name, plus .exe.manifest. Ensure that in the properties page for the file, the Build Action is set to None. Then open the file and paste the following inside:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="asInvoker" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
Note that if our application had more executable (*.exe) projects, we would have to repeat this step for each one. The contents of the manifest file are always the same; only the file name itself changes. This manifest file will be processed by the project post-build events, as we'll see right now.
Remember the vistatools directory we created once upon a time? Well, it's now time to use it. We will set the post-build events of the projects so that some nice things are done with the generated assemblies. In the properties window of the VB.NET executable project, select the Compile tab, click the Build events button, and in the Post-build event command line box paste the following:
%HOMEDRIVE%\vistatools\Mt.exe -manifest "$(ProjectDir)$(TargetFileName).manifest"
-outputresource:"$(ProjectDir)obj\$(ConfigurationName)\$(TargetFileName);1"
%HOMEDRIVE%\vistatools\signtool.exe sign
/f %HOMEDRIVE%\vistatools\capsulekey.pfx /p kaitokun /v
/t
"$(ProjectDir)obj\$(ConfigurationName)\$(TargetFileName)"
Note: there are actually only two commands to execute. I have split each of them across several lines to improve readability; in the post-build event window of Visual Studio, each command must be on a single line.
In the first command, we use the manifest tool to embed the manifest file in the resulting assembly. The second command will sign the assembly, thus fulfilling test case 5 ("verify application installed executables and files are signed"). Note that you will have to change the capsulekey.pfx file name and the kaitokun password for your real values. If you don't have an organizational certificate yet, but still want to compile the application, just remove this line or, better, disable it by prepending a rem command to it.
Note that we target the file that is created in the $(ProjectDir)obj\$(ConfigurationName) directory, instead of the file created in the target generation directory (usually bin\ConfigName). That's because, when generating the installer, the executable file to be packed will actually be taken from the obj directory. That's the file we want to be "manifested" and signed, rather than the one used for debugging and testing in the development machine.
Now let's go with the support assemblies. For both KillerApplication.Support and KillerApplication.Install, open the properties window, select the Build Events tab and, in the Post-build event command line box, paste the following (note again that it is a single command divided into four lines):
%HOMEDRIVE%\vistatools\signtool.exe sign /f
%HOMEDRIVE%\vistatools\capsulekey.pfx /p kaitokun /v
/t
"$(ProjectDir)obj\$(ConfigurationName)\$(TargetFileName)"
Yes, it is exactly the same as in the case of the executable file, but without the manifest stuff. Again, if you don't have an organizational certificate yet, remove or comment the command temporarily. The first version of post-build events (two commands) must be set on all projects that generate an EXE file. The second version (one command) must be set on all projects that generate a DLL file.
That's all you need regarding project setup (except for the installer project, which we will dissect later). Now you need to add the application code and the auxiliary data (images, texts, datasets, whatever) to the projects. You can use whatever code and data you need... except, of course, that you need to do a few special things, as we will see right now.
Before we plunge into the intricacies of the source code, we'll take a look to another subject that also needs attention: the application data. That is, all application files that are part of the project but are not code.
Visual Studio lets you add these kinds of files to your projects. Just right-click on the project and select either Add new item or Add existing item. Then in the properties page for the item, make sure that Build Action is set to Content. Killer Application has three files of this kind inside the data folder of the main project: one photograph, one text file and one database file, as shown in the following image:
The question is: where do these files go when the application is generated? The answer is that it depends on how you generate the application:

- If you generate the solution from Visual Studio, the content files are copied next to the generated executable, under the bin\ConfigName directory.
- If you generate the installer, the content files are packaged so that they are installed under the application data folder, as we will see below.
In both cases, the original directory structure is preserved. This means that, in the case of Killer Application, when generating the solution there will be a bin\Release\Killer Application.exe file together with a bin\Release\Data\Texts\Agreement.txt, as well as two more files (the photo and the database) within their original relative paths:
It is now the moment to take a look at test case 15: "verify application installs to the correct folders by default." For application-wide data, the correct folder is the one pointed to by the %ALLUSERSPROFILE% variable (also called CommonApplicationData and CommonAppDataFolder in .NETesque). This is usually C:\Documents and Settings\All Users in Windows XP and C:\ProgramData in Windows Vista. If you cheat a little and scroll down (or better, look into the Killer Application solution), you will see that the Killer Application installer creates a directory named, of course, Killer Application, and puts all the project content files here:
So with all of this in mind, you probably think that it would be nice to use a single code base to access the application data in all cases, wherever the data is placed. And you are right. Here is how I have achieved it in Killer Application: the Program class exposes a static string property named DataDirectory, which is set at startup to the root of the application data (the project's data folder when running from the development environment, or the installation folder once deployed). Any code that needs a data file builds its path from that property, for example Path.Combine(Program.DataDirectory, "Texts\Agreement.txt").
There are a couple of extra tricks here, but we'll see them when looking at the source code.
We are now really ready to look at the source code of Killer Application. We'll look at the boot sequence, then we'll see what the menu entries on the Killer Application main window do, and finally we'll examine the custom actions contained in the installer support project.
We'll start the source code dissection at the boot sequence, that is, the code that Killer Application executes at startup. Open the Program.cs file in Visual Studio, look for the Main method, and here is what you will find:
You may be surprised to see that the Main method is as follows:
<STAThread()> _
Public Shared Sub Main(ByVal args() As String)
Try
_Main(args)
Catch ex As Exception
Dim text As String = _
String.Format("Unexpected exception in Killer Application:{0}({1}){0}{2}", _
Environment.NewLine, ex.GetType().Name, ex.Message)
EventLog.WriteEntry("Application Error", text, EventLogEntryType.Error, 1000)
Throw
End Try
End Sub
Test case 32 says: "verify that the application only handles exceptions that are known and expected." Then, why are we doing just the opposite here? Shouldn't we just let alone unexpected exceptions here?
The problem is what you can read in the "verification" part of the test case: "There must be both an Error message with 'Source' listed as Application Error and an Information message with 'Source' listed as Windows Error Reporting for each executable above in order to pass this test case." It happens that the information message is generated, but there is no trace of the error message in the event log. Hence, we must generate it by hand, and that's exactly what this weird piece of code does. After the logging is done, the Throw statement rethrows the exception unmodified, so everything is OK and we pass the test case.
Note that at the end of the catch block, we must use Throw and not Throw ex. The former will rethrow the original exception with the original call stack preserved, while the latter will generate a new exception that will cause the call stack to be lost (and will also cause test case 32 to fail; I admit I have no idea why).
The rest of the initialization code is inside the _Main method.
Before doing anything else, the following piece of code is required:
Application.SetUnhandledExceptionMode(UnhandledExceptionMode.ThrowException)
Without this, you will receive this ugly error window instead of the fancy WER window in case of unhandled exception:
And, you guessed it, test case 32 will fail if this happens.
Test case 9 says: "verify application launches and executes properly using Remote Desktop." But as explained earlier, Killer Application does not support remote desktop execution. How can the test case pass then? The answer is in the small print. There is a note in the test case description that says: "if application does not support Remote Desktop, it must pop-up a message indicating this to the User and write a message to the Windows NT Event Log in order to pass this test case." And here is where our life gets saved:
If SystemInformation.TerminalServerSession OrElse _
Command.ToLower().Contains("failremote") Then
MessageBox.Show("Terminal Server execution is not allowed.", _
"Killer Application", MessageBoxButtons.OK, MessageBoxIcon.Exclamation)
        Dim evlog As New EventLog("Application", Environment.MachineName, "Killer Application")
        evlog.WriteEntry( _
            "Terminal Server execution was attempted. User was notified and application terminated.", _
            EventLogEntryType.Information)
Return
End If
The failremote command line switch is a trick I use to exercise this functionality without actually having to set up a Terminal Server connection. It can be removed in real applications. By the way, credit for this piece of code goes to mister Amitava.
Test case 8 says: "verify application launches and executes properly using Fast User Switching." Again, this is a feature not supported by Killer Application. And again, the small print saves us: "if application does not support concurrent user sessions, it must pop-up a message indicating this to the User and write a message to the Windows NT Event Log in order to pass this test case."
So we'll do something similar to the case of the remote desktop execution, but a little more complex. We will first check if Killer Application is already being run by another user; if so, we show an error message, we create the appropriate event log, and we terminate. If not, we check if Killer Application is already being run by ourselves; if so, we activate the main window of the already running instance.
We will need a little "advanced" code to achieve this. In the support project you can find the AlreadyRunningChecker class (it should be in the main project, but I had the code in C# and I was too lazy to convert it to VB). This class has two static methods: ActivateProcessMainWindow, which activates the main window of a process given its process ID (by using unmanaged APIs); and GetSameNameProcess, which returns the process ID of an already existing instance of Killer Application (by using instrumentation). With the help of this class, we can properly check for the presence of other application instances in the following way:
Dim pid As Long = AlreadyRunningChecker.GetSameNameProcess(False)
If pid <> 0 OrElse Command.ToLower().Contains("failmultiuser") Then
MessageBox.Show("This application is already being run by another user.", _
"Killer Application", MessageBoxButtons.OK, MessageBoxIcon.Exclamation)
Dim evlog As New EventLog _
    ("Application", Environment.MachineName, "Killer Application")
evlog.WriteEntry( _
    "Multiple user execution was attempted. " & _
    "User was notified and application terminated.", _
    EventLogEntryType.Information)
Return
End If
pid = AlreadyRunningChecker.GetSameNameProcess(True)
If pid <> 0 Then
AlreadyRunningChecker.ActivateProcessMainWindow(pid)
Return
End If
I got most of the code of the AlreadyRunningChecker class from somewhere on the Internet, but I can't remember where. Sorry.
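For reference, the process-lookup half of this class can be approximated with purely managed code. This is a hedged sketch, not the actual implementation: the real GetSameNameProcess uses instrumentation (WMI) precisely so it can also tell which user owns each process, something System.Diagnostics.Process alone cannot do:

```vb
' Find another running instance of this executable and return its
' process ID, or 0 if no other instance exists.
Function FindOtherInstancePid() As Integer
    Dim current As Process = Process.GetCurrentProcess()
    For Each p As Process In Process.GetProcessesByName(current.ProcessName)
        If p.Id <> current.Id Then Return p.Id
    Next
    Return 0
End Function
```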
We have spoken about this before: the application data files (all application files that are not code) are placed in different locations depending on whether the application is run from within Visual Studio or installed using the generated MSI file. We will use the global variable Program.DataDirectory to store the actual path of the data files. The following code will set up the contents of this variable:
DataDirectory = ConfigurationManager.AppSettings("DataDirectory")
If String.IsNullOrEmpty(DataDirectory) Then
DataDirectory = Path.Combine(Application.StartupPath, "Data")
If Not Directory.Exists(DataDirectory) Then
DataDirectory = Path.Combine(Environment.GetFolderPath( _
Environment.SpecialFolder.CommonApplicationData), "Killer Application\Data\")
End If
Else
DataDirectory = Path.Combine(Application.StartupPath, DataDirectory)
End If
If Not DataDirectory.EndsWith("\") Then DataDirectory += "\"
What this code does is the following:

1. If the application configuration file defines a DataDirectory setting, use it (combined with the application startup path).
2. Otherwise, if a Data subdirectory exists under the application startup path, use it.
3. Otherwise, use the Killer Application\Data subdirectory of the common application data folder.
Case 2 will apply when the application is run or built from within Visual Studio, case 3 will apply when the application is installed, and case 1 is reserved for special purposes where you want to use a different set of data with an already installed application (for example, for debugging on a production machine).

There is an extra thing to do after the DataDirectory field has been set. One of the data files of Killer Application is a database file, accessed via a typed dataset. If you look at the settings file, you will see that the connection string used is as follows:
Data Source=.\SQLEXPRESS;AttachDbFilename=
"|DataDirectory|Database\KillerApplicationDatabase.mdf";
Integrated Security=True;User Instance=True
This connection string assumes SQL Server 2005 Express is installed locally. But look at the AttachDbFilename key: it points to the directory where application data files are placed. Can this value be changed? Yes, and that's what we do right now:
AppDomain.CurrentDomain.SetData("DataDirectory", DataDirectory)
The DataDirectory value that the SQL Server engine uses is an application domain wide setting that can be set with the SetData method of the AppDomain class. By default, this setting is not set (i.e. its value is null), so the SQL Server engine assumes the application executable directory as its value. Since we can have the database file in a number of different places, we need to appropriately set this setting so that SQL Server can find the database file.
Moreover, if you hate global variables, you could directly use this application domain setting instead of the Program.DataDirectory variable to obtain the data files path. Simply use this code to obtain its value: AppDomain.CurrentDomain.GetData("DataDirectory").
There is however a problem with this approach. With this connection string, whenever you try to edit a typed dataset (to add a new TableAdapter, for example), Visual Studio will complain that it can't find the database file. That's because Visual Studio always assumes the default value for the DataDirectory setting; therefore, while editing the dataset, you need to make the following modification in the connection string:
AttachDbFilename="|DataDirectory|Data\Database\KillerApplicationDatabase.mdf";
I'm sure there must be a better solution, but I got used to this one, so I have kept this scheme.
This is actually not necessary, but I do it as an example of modifying a file in the data directory (which, theoretically, is an issue when uninstalling the application; we'll see later why, and how it can be solved). A line of text containing the current time and user name is simply appended to a file named log.txt in the root of the data directory (the file is not part of the solution nor of the install package; it is created the first time it is written to):
Dim log As String = String.Format("Application run by {0} on {1}{2}", _
Environment.UserName, DateTime.Now, Environment.NewLine)
File.AppendAllText(Path.Combine(DataDirectory, "log.txt"), log)
Nothing unusual here, we just pass control to the application main window:
Application.EnableVisualStyles()
Application.SetCompatibleTextRenderingDefault(False)
Application.Run(New FormMain())
Test case 2 says "verify Least-Privilege Users cannot modify other users documents or files", and test case 3 says "verify Least-Privilege user is not able to save files to Windows System directory". The good news is that you don't need to do anything special to fulfill these test cases, as the operating system will grant or deny access to files and folders as appropriate. You simply must be sure to properly handle the exceptions that will be thrown when trying to read or write where you (the user, actually) are not authorized to.
The File menu in Killer Application will help you with these test cases. It contains two entries, Open and Save, that allow you to create and open a text file. Any generated exception will be caught and its associated information displayed.
The save dialog window looks like this:
There is a text box where you must enter the path where a text file will be created. Three buttons allow you to populate this text box with three "interesting" locations: the Windows system directory, and the home directories for the logouser1 and logouser2 users (these users must be created for testing the application, as explained in the test cases specification document). You can also select any directory by using the directory tree that appears when clicking the "..." button. Finally, the Create file button will create a file named KILLERAPP.TXT in the selected directory, containing a fixed text.
Note that we don't use a SaveFileDialog control, which would make life easier for us. This is on purpose, since the file save and open dialog controls do not even let the user browse any unauthorized directory, thus making these test cases trivial to fulfill. The real challenge (well, not quite a challenge actually) is to pass the test cases when dealing with files by code, and that's why this custom save dialog is so ugly.
This is the code attached to the Create file button click event:
Try
File.WriteAllText(Path.Combine(txtPath.Text, "KILLERAPP.TXT"), _
    "Congratulations! You have successfully created a text file " & _
    "with Killer Application.")
MessageBox.Show(Me.MdiParent, "Text file created successfully.", _
"Killer Application", MessageBoxButtons.OK, MessageBoxIcon.Information)
Catch ex As UnauthorizedAccessException
MessageBox.Show(Me.MdiParent, "Sorry, you don't have the necessary " & _
    "permissions for creating a file here.", _
    "Killer Application", MessageBoxButtons.OK, MessageBoxIcon.Warning)
Catch ex As Exception
Dim text As String = String.Format("Ooops. Unexpected error:{0}{0}({1}){0}{2}", _
Environment.NewLine, ex.GetType().Name, ex.Message)
MessageBox.Show(Me.MdiParent, text, "Killer Application", _
MessageBoxButtons.OK, MessageBoxIcon.Error)
End Try
When trying to write a file in an unauthorized place, we will get an UnauthorizedAccessException that we must handle appropriately (in this case, we just translate it into a human-readable error message). In this simple application we blindly catch any other possible exception and simply show it; in real applications we would probably do more fine-grained exception handling.
The brother of the save dialog is the open dialog, whose details I'll not show here because it is very similar to the save dialog. It just adds one more textbox to enter the name of the file to open (which can be populated with KILLERAPP.TXT) and another one to show the file contents; as for the code, the File.WriteAllText invocation is changed to File.ReadAllText.
With all of this, these are the steps you can follow to exercise test cases 2 and 3:
A note of caution here. As explained here, you may find that you actually can write to the Windows directory. If this happens, it means that your executable file does not have a valid manifest. Check whether you forgot to include the manifest file in the project and/or to appropriately set the project post-build event to embed the manifest in the assembly with mt.exe.
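If you want to double-check, mt.exe can also extract an embedded manifest so you can inspect it. This is a hypothetical invocation assuming the executable name used in this article:

```
mt.exe -inputresource:"Killer Application.exe;#1" -out:extracted.manifest
```

If the command fails or the output file does not contain the expected requestedExecutionLevel element, the manifest was not properly embedded.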
The Do stuff menu on Killer Application main window contains three entries that are related to handling data files: View photo, View text and View data.
Everything worth saying about the application data files has been said already: we have seen that test case 15 forces us to put all the application data files in a given directory when the application is installed, we have seen how to do this while still having these data files at hand at development and debugging time, and we have seen what to do when SQL Server 2005 Express database files are involved. These menu entries allow us to see these concepts working.
For example, look at FormViewPhoto. It loads the photography file with the following code:
Dim photoPath As String = Path.Combine(Program.DataDirectory, "Photos\KaitoCute.jpg")
pictureBox.Load(photoPath)
As for FormViewText, it loads the text file in this way:
Dim textPath As String = Path.Combine(Program.DataDirectory, "Texts\Agreement.txt")
textBox.Text = File.ReadAllText(textPath)
...and so on. If you had more data files, that's how to access them: take the file path in the project removing the initial "Data", combine it with the root data directory whose path is at Program.DataDirectory, and you are done. In the case of the SQL Server database, this is handled via the connection string and the DataDirectory application domain setting.
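The rule just described could be wrapped in a small helper; this is only an illustrative sketch, not code from the actual project:

```vb
' Map a project-relative data path such as "Data\Photos\KaitoCute.jpg"
' to its location at run time, wherever the data directory ended up.
Function ResolveDataFile(ByVal projectRelativePath As String) As String
    Const prefix As String = "Data\"
    If projectRelativePath.StartsWith(prefix) Then
        projectRelativePath = projectRelativePath.Substring(prefix.Length)
    End If
    Return Path.Combine(Program.DataDirectory, projectRelativePath)
End Function
```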
There is an extra menu entry, Crash, that will just force a null reference exception. This will allow you to easily exercise test case 32 ("verify that the application only handles exceptions that are known and expected"), but this is only for convenience and it does not exonerate you from actually using threadhijacker to make your application crash.
Last but not least within the source code dissection, we will take a look at the KillerApplication.Install project. This project contains one single class (plus a couple of classes with auxiliary code), Installer, that contains custom actions for the installer.
Custom actions are pieces of code that execute when the application is installed and/or uninstalled. They are useful to perform tasks outside of the standard functionality provided by the Windows Installer technology (which is basically to copy files, to create registry entries, and to create program menu and desktop shortcuts). Custom actions are defined inside a .NET class which must have the RunInstaller attribute, and must be added to the custom action editor on the installer project. We'll see more on that later.
Before going ahead, let's answer a question we asked ourselves some time ago: why must these custom actions be in a separate assembly? Why can't they be part of the main assembly or the support assembly? The answer: because this would cause the application not to uninstall cleanly.
More precisely, what will happen if you put custom actions inside one of the assemblies of the application itself? Something very ugly: after the application is uninstalled, you will see that the folder where the application was installed still exists, and that it contains one single file: yes, the assembly containing the custom actions. And while no test case explicitly says that the application must perform a clean uninstall (although one could infer it from test case 23), it is common sense; no one wants their application to leave garbage on the user's hard disk after being uninstalled.
What is the solution for this problem? Easy: put the code for the custom actions in a separate assembly, and install this assembly anywhere but in the application folder. In my case, I chose the common files folder (we'll see later how to do this) and it worked just fine: custom actions are properly executed and the application is uninstalled cleanly.
I admit that I don't know exactly why this happens, and that I found this solution by trial and error. Suggestions about alternative approaches are welcome.
Having said that, let's see what custom actions we are using in our project and what they are for. Note that for convenience, the Installer class defines two text constants to store the application name, as well as one property that will tell us where the application is installed:
private const string APPNAME="Killer Application";
private const string APPFILE="KILLER APPLICATION.EXE";
private string ApplicationDataPath
{
get { return Path.Combine(Environment.GetFolderPath
(Environment.SpecialFolder.CommonApplicationData), APPNAME); }
}
First of all, we need a custom action that will be executed when the application is installed. At install time, we need to perform two tasks:
The piece of code that does this is as follows:
public override void Install(System.Collections.IDictionary stateSaver)
{
if(!EventLog.SourceExists(APPNAME))
{
EventLog.CreateEventSource(APPNAME, "Application");
}
string userGroupName=FindUserForSid.GetNormalUsersGroupName();
AclManager manager=new AclManager(ApplicationDataPath, userGroupName, "F");
manager.SetAcl();
base.Install(stateSaver);
}
The FindUserForSid.GetNormalUsersGroupName invocation obtains for us the name of the standard users group, which differs depending on the language of Windows (for example, it is Users in English and Usuarios in Spanish). I took the code for this class from pinvoke.net (changing the administrators group SID to the users group SID, of course). The AclManager class changes the directory permissions for the given users group. I took it from Rick Strahl's blog.
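For the curious, a managed sketch of what such an ACL change amounts to follows. The real AclManager is more general; the names and the full-control right used here are assumptions for illustration:

```vb
' Grant the given group full control over a directory and everything
' created inside it, using System.Security.AccessControl.
Sub GrantFullControl(ByVal path As String, ByVal groupName As String)
    Dim dirInfo As New DirectoryInfo(path)
    Dim security As DirectorySecurity = dirInfo.GetAccessControl()
    security.AddAccessRule(New FileSystemAccessRule( _
        groupName, FileSystemRights.FullControl, _
        InheritanceFlags.ContainerInherit Or InheritanceFlags.ObjectInherit, _
        PropagationFlags.None, AccessControlType.Allow))
    dirInfo.SetAccessControl(security)
End Sub
```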
Test case 23 says: "verify the application rolls back the install and restores machine back to previous state." To achieve this, we need a rollback custom action that will undo any actions performed during installation. In this case, we only need to remove the event log source, since any installed files will be automatically removed by the standard installer code:
public override void Rollback(System.Collections.IDictionary savedState)
{
if(EventLog.SourceExists(APPNAME))
{
EventLog.DeleteEventSource(APPNAME);
}
base.Rollback(savedState);
}
Three actions are needed when our application is being uninstalled:
The data files directory is created at install time, therefore it should be automatically removed by the installer, so we should not need to worry about it. Well, it happens that this is true only for the contained files that were originally created by the installer. If you create new files in this directory, they will remain after the application has been uninstalled. Killer Application does indeed create one new file in the data directory, to log the application execution (see the boot sequence), therefore we remove this directory by hand.
No mystery here; note that even if we remove the event source, the generated events will remain.
Windows XP and Vista have a directory named Prefetch that contains shortcuts to the most recently used applications, to shorten application startup time. These shortcuts can safely be removed by hand, and that's what we do with the prefetched file for Killer Application, thus achieving a completely clean uninstall.
Here is the code that does all of this. Note that if for some reason the removal of the data directory or the prefetch file fails (this can happen if the involved folders are open in the explorer while the uninstallation takes place), the user is warned so that they can at least delete these files manually:
protected override void OnAfterUninstall(System.Collections.IDictionary savedState)
{
//* Delete data files
string path=ApplicationDataPath;
if(Directory.Exists(path))
{
try
{
Directory.Delete(path, true);
}
catch(Exception ex)
{
MessageBox.Show(string.Format(
@"Error when trying to delete the program data folder:
({0}) {1}
Uninstall process will continue, but the folder will not be deleted.
The folder path is:
{2}", ex.GetType().Name, ex.Message, path), APPNAME + " uninstaller",
MessageBoxButtons.OK, MessageBoxIcon.Warning);
}
}
//* Delete the event source
if(EventLog.SourceExists(APPNAME))
{
EventLog.DeleteEventSource(APPNAME);
}
//* Delete file from Prefetch folder
string prefetchPath=Path.Combine
(Environment.ExpandEnvironmentVariables("%windir%"), "Prefetch");
try
{
string[] files=Directory.GetFiles(prefetchPath, APPFILE+"*.*");
foreach(string file in files)
{
File.Delete(file);
}
}
catch
{
    MessageBox.Show("Some "+APPNAME+" files may remain in the "+
        prefetchPath+" directory. "+
        "You can delete these files manually.",
        APPNAME + " uninstaller", MessageBoxButtons.OK, MessageBoxIcon.Warning);
}
base.OnAfterUninstall(savedState);
}
And with this, we have finished dissecting the Killer Application source code. Let's go to the installer project now.
Preparing the application source code and tuning the application projects settings was only one part of the work we had to do on our way to the certification. There is a last important thing to do before our application is completely logo-able: to create and adjust an installer for our application.
In order to be certifiable, our application must use the Windows Installer technology to install (ClickOnce deployment is also accepted, but we'll not discuss that possibility here). This means that we must generate an installer file in MSI format that will contain our application appropriately packed, including all the install logic (the GUI to show at install time, the custom actions, and the necessary metadata that will be written to the Windows registry so that Windows knows how to properly uninstall the application).
The good news is that we can create such an installer with Visual Studio, but of course, we need to cheat a little if we want a completely test cases-proof installer. And how to achieve this is what we will see right now.
The installer project is part of the application solution, but when explaining how to create the solution I didn't mention that on purpose, because I wanted the installer to have a section on its own in this article. So, this is what I did to create an installer project for Killer Application:
With this, you have created an empty installer project for our application. Now we need to make some initial adjustments.
When the installer project is selected in the solution explorer, the properties window of the project will show us some configuration values that we can modify. Some of these already have appropriate values by default, others can be filled in but can also be left blank, and lastly there are some values that must be properly set if we want things to work the right way. These values are the following:
There is an extra setting to change, but it is not in the project properties window. You will need to open the user interface editor (it is an icon in the solution explorer), select the Installation Folder window, and in the properties window set InstallAllUsersVisible to False. That's the minimum setup to make things work, but you will probably also want to set the value of other useful properties like Description, Manufacturer, ManufacturerUrl and SupportUrl. Once the application is installed, this information is displayed in the Windows control panel, more precisely in the Add/Remove Programs window.
Our installer project is so far quite useless, since it does not install anything. We need to tell Visual Studio which files must be packed into the MSI file, and this is done via the File System Editor, accessible from the solution explorer. Let's go then, hold your breath, and:
Phew. Well, you can see the results of this piece of work in the Killer Application installer project itself. Just remember that we have already seen one part of what the file system editor should look like:
A final note on this point. The first time you compile the installer project, you will see the following in the results window:
WARNING: Two or more objects have the same target location ('[targetdir]\capsule.killerapplication.support.dll')
Now look at the Detected Dependencies folder in the installer project and you will see that a reference to KillerApplication.Support.dll has appeared. What has happened is that Visual Studio has detected the support DLL as a dependency of the main project and has thus added it to the list of files to be packaged. But we had already done this when we added the primary output of the support project to the application folder in the file system editor; hence we have it twice. The solution: right-click the file reference in the dependencies folder and select Exclude.
We have created code for installer custom actions in the KillerApplication.Install assembly, but the installer will not actually treat this code as install custom actions unless we explicitly instruct it to do so. To achieve this, we need to do the following:
Actually, you can give whatever names you like to custom actions, but it is a good idea to use meaningful names to keep things clear for yourself. Here's how the custom actions editor should look like when you're done:
You may want to make your application require a minimum version of the Windows operating system to work. We can instruct the installer to refuse to work if a lower version is detected, here are the required steps:
This will make your application installable only on Windows XP SP2 or later (Windows Server 2003, Windows Vista, Windows Server 2008, and newer versions as they appear). If you want to be more restrictive and make your application Vista-or-later only, set the condition as follows: VersionNT>=600. More details about the operating system version conditions here.
The MSI file that will be generated by our install project will lack some important data that is needed to obtain the certification. More precisely:

- the InstallLocation property is not set; and
- the MsiRMFilesInUse dialog, needed to support the Restart Manager, is missing.
To solve these issues, we need to create two transform files by using Orca, and to apply them to the MSI file. More precisely:
These transform files will patch the resulting MSI file when it is generated. To achieve this, of course we need a post-build event, and that's what we will create now. Select the installer project in the solution explorer, open the properties page, and in the post-build event box paste the following:
%HOMEDRIVE%\vistatools\MsiTran.exe -a "$(ProjectDir)VistaPatch2.mst" "$(BuiltOuputPath)"
%HOMEDRIVE%\vistatools\MsiTran.exe -a "$(ProjectDir)AddMsiRMFilesInUse.mst"
"$(BuiltOuputPath)"
Note that in order to be able to create the transform files, you will need to open the MSI file in Orca... which we have not yet generated. To solve this fish-that-eats-its-own-tail problem, compile the installer project once without the transforms (remove or comment out the post-build event commands) so that you obtain an initial MSI file to work with. Warning: the AddMsiRMFilesInUse.mst included in the Killer Application solution that you can download on this page has all its UI messages in Spanish. You should create your own transform file in your language.
There is a subtle problem with the generated MSI file. If you try to install your application by directly running it (instead of running setup.exe), you will get a nasty and strange unexpected error 2869 window, and the whole install process will stop. This has to do with the interaction of the custom actions with the wonderful, superb, amazing Vista User Account Control.
The solution to this problem is on this blog entry by Hunter555. You need to create a file named NoImpersonate.js in the installer project directory. Paste in the file the script code that you will find in this blog entry, and then proceed as with the manifest file: add it to the installer project and set its Exclude property to True. The script will appropriately modify the MSI file after it is generated, so that it does not produce any error when executed directly. To achieve this we need to add the following command to the installer project post-build event:
cscript.exe "$(ProjectDir)NoImpersonate.js" "$(BuiltOuputPath)"
cscript.exe is the Windows Script Host and is already included in the operating system.
The adjustments that will be explained in this section are needed only if you plan to distribute the setup.exe file that Visual Studio generates together with the MSI file. Theoretically, it is enough to distribute the MSI file alone, but just in case I submitted setup.exe along with the MSI file to the testing authority.
Whatever. If you plan to distribute the setup.exe file, these are the additional steps you will need to perform in the installer project:
Test case 13 says: "verify application’s installer contains an embedded manifest". So let's go for it: open your favourite text editor and create a file named setup.exe.manifest in the installer project directory. The contents of this file must be as follows:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
(Well, my text editor supports UTF-8 encoding. If yours does not, modify the encoding attribute on the XML declaration appropriately).
Now right click the installer project in the solution explorer, select Add -> File, browse to the installer project folder, and select the setup.exe.manifest file you just created. Select the file in the solution explorer, and in the properties page set Exclude to True. Note that this file is different from the one we embed in the application main executable file in that the requested execution level is requireAdministrator. Indeed, we will need administrator privileges to install our application (this is required for the stuff that the custom actions do).
We have created a nice manifest file and now we need to create the command that will process it. So add the following two commands to the installer's post-build event box (note that as usual, the commands are split across two/three lines each for ease of reading):
%HOMEDRIVE%\vistatools\Mt.exe -manifest "$(ProjectDir)setup.exe.manifest"
-outputresource:"$(ProjectDir)$(Configuration)\setup.exe;1"
%HOMEDRIVE%\vistatools\signtool sign /f %HOMEDRIVE%\vistatools\capsulekey.pfx
/p kaitokun /v /t
"$(ProjectDir)$(Configuration)\setup.exe"
Note that I have included code to sign the setup.exe file. I believe that this is not necessary (it is not mentioned in the test cases document), but anyway signing this file does not hurt and it can be even useful if you plan to do advanced things like this one with your installer.
Believe it or not, but we are done. If you followed all the steps explained in this rather long text, you have a nice and 100% Vista certifiable application with a not less nice installer. You can have a beer or two (just remember: if you drink, don't code). Just to summarize, here is how the solution explorer looks for the Killer Application solution after having done all the work:
To finish this article (yes, I swear this is the last section), we'll take the reverse approach to test cases. We have dissected a sample application, mentioning the involved test cases as appropriate. Now we will list the test cases and we'll mention what we did for fulfilling them. Remember: don't trust me, I'm bad, so test your application against all applicable test cases before sending it to the testing authority.
There are a few cases that do not apply, that is, you don't even need to bother about them. Remember that this is true for applications that are structurally similar (nice word, I have to use it more) to Killer Application; be sure to double check if you actually need to check any of these cases.
And now, let's see the test cases that we really care about.
Note that there is only one line or two about each test case, since we are just referring to concepts that we have seen in detail in the rest of the article.
We have manually included a manifest file on the project that generates an EXE file, and we have set up the project post-build event to embed the manifest in the executable file by using mt.exe after the file is generated.
Whenever we write to a file, we catch a possible UnauthorizedAccessException and perform the appropriate corrective action, for example telling the user about the lack of appropriate permissions.
Same as test case 2, anyway a well designed application should not attempt to write to the Windows directory.
We use a post-build event to sign all the generated EXE and DLL files by using signtool.exe and our organizational certificate after the files are generated.
The .NET Framework will do it for us, we don't have to do anything special.
In the startup code we check if the application is already being run by another user, if so we show an error message, we write a message in the Windows event log and we terminate the execution.
In the startup code we check if the application is being run by using Remote Desktop, if so we show an error message, we write a message in the Windows event log and we terminate the execution.
Yes, the installer created by Visual Studio relies on the Windows Installer technology.
Do the test case steps and you will see that you don't receive any errors. Note however that you may receive warnings (NOT errors) within the range of ICEs specified in the test case specification, but this is not an issue for certification.
This one applies if you distribute the setup.exe file generated by Visual Studio together with the MSI file. We manually include a manifest in this file the same way as in the application main executable file (see test case 1).
We achieve this by installing the application data files in the common application data folder. At startup time we obtain and store the correct path for these data files.
The MSI file generated by Visual Studio indeed contains these properties.
The MSI file generated by Visual Studio contains these properties except for InstallLocation. To solve this, we use a post-build event to apply a transform to the MSI file after it is generated, by using msitran.exe.
InstallLocation
The installer generated by Visual Studio will not try to do such ugly things. Anyway here is an application that will parse the AppVerifier logs for you and will tell you if there is something that will make this test case fail.
We use custom actions but these are not of the forbidden types.
This one is rather tedious to check. Anyway, no, the installer generated by Visual Studio does not add these ugly custom columns, tables or properties.
We have added a rollback custom action so that the Windows event log source we create at install time is deleted if the install process fails. Note that when performing the test case steps, you must use the FailInstallFromDeferredCustomAction.msm module instead of FailInstallFromCommitCustomAction.msm.
No reboot will be forced after install, no special action is required here.
There is no trace of the required MsiRMFilesInUse dialog in the MSI file generated by Visual Studio. To solve this, again we use a post-build event to apply a transform to the MSI file after it is generated, by using msitran.exe.
Our application will pass this test case because we force a per-machine install; if we do a per-user install, the test case will fail. If you really need it, for sure there must be a workaround to pass this test when doing per-user installs, but you will have to find it by yourself, my friend.
There are no null values in the specified table on the installer generated by Visual Studio.
Again, our installer will pass this test case with no required action in our side.
Same as above.
Amitava and Simon Williams explain what to do to make a .NET application Restart Manager aware. Basically, you capture the WM_QUERYENDSESSION and WM_ENDSESSION Windows messages, and do the appropriate actions, such as saving the user data for later recovery.
WM_QUERYENDSESSION
WM_ENDSESSION
You may have noticed that there is nothing about this in the Killer Application code. And the fact is that, while capturing the end session messages and properly preparing for shutdown is of course good practice, it is not actually necessary to do it to certificate an application. In short, actually no special action is required to pass this test case.
We have enclosed the execution of the application's Main method in a try-catch block so that we can generate the required entry in the Windows event log in case of unexpected exception. After we perform the log, the original exception is rethrown via a Throw command (NOT Throw ex or Throw new Exception).
try
Throw new Exception
Also, at application startup, we set the unhandled exception mode to UnhandledExceptionMode.ThrowException so that the WER window will actually appear when an exception arises.
UnhandledExceptionMode.ThrowException
WER
I hope that all of this blah blah would have been somewhat useful for you. And remember that what I have explained is a way to obtain the certification, not the way. If you think there are better ways to do this or that, you have the comments section at your. | http://www.codeproject.com/Articles/22795/Certification-by-Example?fid=969507&df=90&mpp=25&sort=Position&spc=Relaxed&tid=2780488 | CC-MAIN-2015-27 | refinedweb | 9,514 | 53.71 |
!
Does this only work for a specific version of Deskbar? I have 2.14.2 (Ubuntu 6.06) and it’s not doing anything.
Posted by joeljkp on September 6th, 2006.
joeljkp: that’s the version I’ve got. Have you enabled the plugin in the preferences window for Deskbar?
Posted by sil on September 7th, 2006.
Ah, forgot about that. Thanks!
Posted by joeljkp on September 7th, 2006.
Your download link above is broken, it gives a WebDAV error: “Could not open the requested SVN filesystem”.
Posted by Alex on November 25th, 2006.
Am I doing something wrong? I had this working back when you first released it, but I can’t get it going now on feisty (Deskbar 2.17.90). I can’t see it in the list of plugins in the preferences.
Posted by sheepeatingtaz on February 8th, 2007.
I have much the same problem as the last poster (fiesty beta). The converter entry does not appear in the list of plugins in the preferences. Note I have tried a different plugin (hostname lookup I think) which works ok.
Would you mind checking this issue out?
Thanks for writing this – it saves me running ‘units’ in a terminal.
Posted by Aaron Hochwimmer on March 26th, 2007.
I’ll take a look when I upgrade to feisty, which will be shortly.
Posted by sil on March 26th, 2007.
Hi
Just alter line 14 of the script. Turn
import deskbar.Handler, deskbar
into
import deskbar.Handler, deskbar.Match
to make the script work under feisty. Don’t forget to install the units package and enable the plugin in the preferencies dialog.
Stuart, thanks for that cool plugin :)
Posted by papo on May 20th, 2007.
The suggestion from papo don’t work for me in Feisty.
Converter is always hidden.
Posted by Nick on May 26th, 2007.
Hi
You can run
/usr/lib/deskbar-applet/deskbar-applet -wfrom a shell to gather more information about the problem. This runs th deskbar applet in standalone mode and gives a lot of information about what’s going on. No need to instsall the debug package.
With the original plugin, I got this:
Error loading the file: /home/mathias/.gnome2/deskbar-applet/handlers/converter.py.
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/deskbar/ModuleLoader.py", line 91, in import_module
mod = pydoc.importfile (filename)
File "pydoc.py", line 259, in importfile
raise ErrorDuringImport(path, sys.exc_info())
ErrorDuringImport: problem in /home/mathias/.gnome2/deskbar-applet/handlers/converter.py - : 'module' object has no attribute 'Match'
After having altered line 14 as discussed, I get this:
Providing your output would be a big help to whomever is going to debug this.
Please also note that Converter is hidden as long as the regexp does not match: Here be dragons does not trigger Converter, while as 10 ly in parsecs does.
Also note that Converter has to be enabled in the prefs.
Posted by papo on May 27th, 2007.
Great stuff. Thanks for that.
Any way to make it support the “€” character for euros ?
Posted by Romain on May 27th, 2007.
About the plugin not showing on Feisty, it seems that Python requires a line:
# coding=utf-8
added at the top of the file as per PEP 0263:
Kudos for writing such an awesomely useful plugin, by the way. :)
Posted by zanglang on May 28th, 2007.
Thanks zanglang–that worked perfectly!!
Posted by Robby on June 10th, 2007.
In addition to the line 14 change and the utf-8 change, if you just installed units, you have to restart to get it to work.
To really emulate Google Calculator, you need to get together with this guy and make a plugin that does both, like Google Calculator. Then I can type “3+4″ and get “7″, type “2 inches in mm” and get “50.8 millimeters”, or mix it up and type “15 V / 3 ohm” and get “5 amperes”. ;-)
(And while you’re at it, recognize things like uF that Google doesn’t recognize.)
Posted by Endolith on August 9th, 2007.
And a date and time calculator would be good. Like
Google calculator can’t do this yet, but it’s hard to do this stuff in your head.
Posted by Endolith on August 12th, 2007.
[...] Pero aún hay más y es que si esto del terminal no es lo tuyo o no te termina de convencer, también existe un plugin para el deskbar-applet, llamado converter deskbar. [...]
Posted by La otra bola de cristal - on August 15th, 2007.
[...] Fuente-> La Otra Bola De Cristal Fuente del Script-> Converter Deskbar Enlace Relacionado->Curioseando Con Units [...]
Posted by Linux Music » Blog Archive » Instalando el applet Converter en Ubuntu on August 19th, 2007.
[...] La Otra Bola De Cristal Fuente del Script-> Converter Deskbar Enlace Relacionado->Curioseando Con [...]
Posted by Instalando el applet Converter en Ubuntu: Linux Music 2.0 on January 26th, 2008.
Hi! I kinda stole this from you and made it 2.20-compatible (new API).
And I changed the link in live.gnome.org.
Feel free to steal it back if you want to update it.
;-) Johannes
Posted by JohannesBuchner on February 11th, 2008.
Johannes: that’s absolutely fine with me! Good work.
Posted by sil on February 12th, 2008. | http://www.kryogenix.org/days/2006/09/06/converter-deskbar | crawl-002 | refinedweb | 881 | 77.53 |
... some hints here on the great programming language Java.
One hell of an IDE is the Open Source Project Eclipse.
Give it a try, you'll love it!
You might want to check out "Callisto" (an effort of the eclipse foundation to synchronise software releases of their products, including the IDE).
What's New in Eclipse 3.2 Java Development Tools.
If you have a directory with .class files and want Eclipse to use this directory for a
specific project as another entry to CLASSPATH:
Project->Properties->Java Build Path, Tab Libraries->Add Class Folder
I wrote a mini-HOWTO on setting up and creating a Java-based web-application, using Tomcat, Turbine, Torque, Velocity and MySQL.
If the environment variable CLASSPATH is not set, more recent "Java's" are smart enough to include the Java-default class-files and the current directory. If you set the CLASSPATH variable to some value, the Java-defaults will still be available, but mind to include your current directory too (if you want to) by adding the directory "." to the path. Otherwise you may be able to compile your programme ("javac Test.java"), but trying to start it ("java Test") will raise the following error (because the class file Test.class is in the current directory which is not in the CLASSPATH):
Exception in thread "main" java.lang.NoClassDefFoundError: Test
You can also temporarily set the classpath using
javac's or
java's
-classpath option.
If you compare two objects by simply calling "object1 == object2", you
do not test if the two objects hold the same content, but you
only compare if the two references "object1" and "object2" refer to the same object!
Similarly, "object1 = object2;" does not fill "object1" with the contents of "object2", but instead you'll end up with both identifiers referencing to the "object2"-object and none to the "object1"-object (which is lost and will be garbage-collected at some point).
Having said all this, here's the string stuff:
String a = "Java"; String b = "Java"; ==> (a==b) returns 'true'. a.equals(b) returns 'true'. String a = new String("Java"); String b = new String("Java"); ==> (a==b) returns 'false'. a.equals(b) returns 'true'. There is a table for all string-constants, which holds each string just once. C will do the same: char *c = "C-code"; char *d = "C-code"; ==> (c==d) returns 'true'. Every Java string can be looked up in the table: String a = "Java"; String b = "Ja" + "va"; ==> (a==b) returns 'true'. And: b = b.intern(); ==> (a==b) returns 'true'.
Ever wondered what hides behind Java's "Static Initializers"?
Here's a good
article on what you can do with them.
Thanks to Gerhard for this link.
Threads are a gift ... once you understand them. ;)
Here are possibilities of how to create threads in Java:
/* Extending class Thread */ public class MyThread extends Thread { public void run() { System.out.println("I'm a thread."); } public static void main(String[] args) { new MyThread().start(); } } /* Implementing Runnable, version 1 */ //public class ImpRunnable1 implements Runnable { public class ImpRunnable1 extends SomeClass implements Runnable { public void run() { System.out.println("I'm a thread."); } public static void main(String[] args) { new Thread(new ImpRunnable1()).start(); } } /* Implementing Runnable, version 2 */ //public class ImpRunnable2 implements Runnable { public class ImpRunnable2 extends SomeClass implements Runnable { private Thread thread; public void run() { System.out.println("I'm a thread."); } public void start() { /* Make this object a Thread: */ thread = new Thread(this); // thread.setPriority(Thread.MAX_PRIORITY); thread.start(); } public static void main(String[] args) { // Create a new object and run its start() method: ImpRunnable2 r = new ImpRunnable2(); r.start(); } }
As you can see, implementing the Runnable interface enables our class to inherit from SomeClass and become a thread.
A great tool to have your programmes create (easily customizable) logging output is
Log4j from the Apache Jakarta Project.
Many thanks to Rick for this hint!
import org.apache.log4j.BasicConfigurator; import org.apache.log4j.Logger; public class MyLog4j { private static final Logger log = (Logger) Logger.getInstance(MyLog4j.class.getName()); public static void main(String[] args) { BasicConfigurator.configure(); // Using DOMConfigurator, it's possible to read a log4j-config-file from a URL! // This way, changing this (possibly remote) config-file, you can change the // programme's logging behaviour, e.g. from "normal" to // "with debug information". log.info("Possible levels are: debug, info, warn, error, fatal."); // In case of "extreme performance needed" and lots of debug messages // it's recommended to use if-clauses to prevent unnecessary // string-concetanations etc.: if (log.isDebugEnabled()) log.debug("Output debug messages only if they are enabled."); } }
Ant is the "defacto standard" build tool in the Java world -- something like "make" for C.
build.xml is the Ant configuration file (similar to make's "Makefile").
The basic structure is explained and an example buildfile is shown in the Apache Ant User Manual, section "Using Ant".
Maven is another tool, slightly similar to Ant. Here's an article on Maven 2.0.
Despite your most probable first thought, "Real-time and Java !? -- This doesn't mix!":
There are real-time specs for Java, and one approach (the RTSJ) is officially
supported by Sun.
I wrote more on RTSJ here.
(By the way, RTSJ is a vital part of my diploma thesis.)
Some other Real-Time approach to Java is Javolution. | http://max.home.subnet.at/java/index.php | CC-MAIN-2018-05 | refinedweb | 887 | 58.99 |
Python has support for working with databases via a simple API. Modules included with Python include modules for SQLite and Berkeley DB. Modules for MySQL , PostgreSQL , FirebirdSQL and others are available as third-party modules. The latter have to be downloaded and installed before use. The package MySQLdb can be installed, for example, using the debian package "python-mysqldb".
DBMS SpecificsEdit
MySQLEdit
An Example with MySQL would look like this:
import MySQLdb db = MySQLdb.connect("host machine", "dbuser", "password", "dbname") cursor = db.cursor() query = """SELECT * FROM sampletable""" lines = cursor.execute(query) data = cursor.fetchall() db.close()
On the first line, the Module MySQLdb is imported. Then a connection to the database is set up and on line 4, we save the actual SQL statement to be executed in the variable query. On line 5 we execute the query and on line 6 we fetch all the data. After the execution of this piece of code, lines contains the number of lines fetched (e.g. the number of rows in the table sampletable). The variable data contains all the actual data, e.g. the content of sampletable. In the end, the connection to the database would be closed again. If the number of lines are large, it is better to use row = cursor.fetchone() and process the rows individually:
#first 5 lines are the same as above while True: row = cursor.fetchone() if row == None: break #do something with this row of data db.close()
Obviously, some kind of data processing has to be used on row, otherwise the data will not be stored. The result of the fetchone() command is a Tuple.
In order to make the initialization of the connection easier, a configuration file can be used:
import MySQLdb db = MySQLdb.connect(read_default_file="~/.my.cnf") ...
Here, the file .my.cnf in the home directory contains the necessary configuration information for MySQL.
SqliteEdit
An example with sqlite is very similar to the one above and the cursor provides many of the same functionalities.
import sqlite3 db = sqlite3.connect("/path/to/file") cursor = db.cursor() query = """SELECT * FROM sampletable""" lines = cursor.execute(query) data = cursor.fetchall() db.close()
When writing to the db, one has to remember to call db.commit(), otherwise the changes are not saved:
import sqlite3 db = sqlite3.connect("/path/to/file") cursor = db.cursor() query = """INSERT INTO sampletable (value1, value2) VALUES (1,'test')""" cursor.execute(query) db.commit() db.close()
PostgresEdit
import psycopg2 conn = psycopg2.connect("dbname=test") cursor = conn.cursor() cursor.execute("select * from test"); for i in cursor.next(): print i conn.close()
FirebirdEditEdit
Parameter QuotingEditEditEditEditEdit
External linksEdit
- APSW module — SQLite3 for Python 2.x and 3.x
- SQLite documentation
- Psycopg2 (PostgreSQL module - newer)
- PyGreSQL (PostgreSQL module - older)
- MySQL module
- FirebirdSQL module | http://en.m.wikibooks.org/wiki/Python_Programming/Databases | CC-MAIN-2015-14 | refinedweb | 457 | 52.87 |
In C language, you need to compute the results and then output the results to the console. In order to do that we need specific built-in functions from C library.
The most common library function are printf(), putchar() and puts(). In this article we will discuss each one of them in details and see some examples.Note that the above three functions belong to the stdio.h header file which is part of the standard C library.
Output with printf()
The printf() function is the most popular and common way to output the results of a C program to console. It is a built-in function from standard C library.
Syntax for printf()
The syntax for printf() is very simple and given below. You can just printf a simple statement like the following.
printf(“”);
For example,
Or
printf(“Hello World!”);
Print a variable using the following syntax. For example,
printf(“%d”, result);
The second syntax example has %d as the format specifier that says we are going to print an integer value and the name result is an integer that has some integer value.
you can have more than one variables in the printf() statement, however, the format specifier given should match with the variable data type in the order they are given.
printf( “int specifier float specifier char specifier”, integer variable, float variable, char variable);
In the above example first in the format specifier is an integer type , then variable listing should have an integer first, the second format is float , then the variable listing should be float and so on.
Format specifiers
There are many format specifiers in C language, but most commonly used are
- %d for integers
- %f for floats
- %c for character
- %s for strings of character like a name.
Plus or minus sign for outputs
You can use more details between % and ‘d’,’f’,’c’ or ‘s’ to describe how you want to display your output.
For example,
printf(“%+d”, result); printf(“%-d”,result);
The result will contain a plus or a minus in front of integer result.
+2344 or – 2334
Right or Left justification of output
You can insert a digit in between % and ‘d’ and it will display result left or right justified to the console. For example, a positive value will right justify and a negative value will left justify.
printf(“%10d”,result);
The above will justify 10 space to the right.
printf(“%-10d”result);
The negative value above will justify 10 space to the left if space is available.
Precision control
When you are working with float, the output will be a real value with decimal places depending on your system – 32 bit or 64 bit. You can control the decimal places with precision control. For example,
printf(“%.2f”,result);
In the above example,
- .2 will limit the decimal places to 2 digits.
- f indicates that this is a float type variable
- result is the float type variable.
The output is 23.454566, the output will be 23.45 after the .2 is applied limiting the decimal to 2 digits.
printf(“%10.2f”,result);
The above will result in
- 10 place right justified
- .2 limit decimal place to 2 digits
- Variable is float type
- Result is the variable of float type
New Line and tab
New line (\n) and (\t) are frequently used with printf() statement. The newline will print the output and begin the cursor in a new line, ready to print the next output statement.
a = 10 b = 20 printf("%d\n",a); printf("%d\n",b);
Will give following output.
10 20
But the with new line (\n) you get each output in its own line.
printf(“%d\n”,a); printf(“%d\n”,b);
Will print following.
10 20
The tab(\t) will move the output to one or more tab space to right depending on how many time you have used it in the printf() statement.
printf(“%d\t”,result); /* one time tab */ printf(“%d\t\t\t\t”,result); /* 4 time tab */
Example Program
The example program below will demonstrate all the concepts we discussed above.
/* Program to demonstrate the printf statement with different format specifiers */ #include <stdio.h> int main() { int number1, number2 , result; float number3,number4,result2; /* initialize variables */ number1 = 200; number2 = 300; number3 = 2.5; number4 = 5.2; result = number1 + number2; result2 = number3 + number4; /* Normal output */ printf("Normal Result\n"); printf("%d\n",result); /* Result with + or - sign */ printf("Result with plus sign\n"); printf("%+f\n",result2); /* Result justified */ printf("Result 10 space right justified\n"); printf("%10d\n",result); /*result justified to 12 place right and precision set to 3 digits */ printf("Result 12 place justified and precision set to 3 digits\n"); printf("%12.3f\n",result2); /* printf with newline and tab */ printf("result with tab\n"); printf("\t%d",result); printf("\n"); printf("result with tab \n"); printf("\t%f",result2); printf("\n\n"); system("pause"); return 0; }
Output- printf()
Normal Result 500 Result with plus sign +7.700000 Result 10 place right justified 500 Result 12 place justified and precision set to 3digits 7.700 result with tab 500 result with tab 7.700000 Press any key to continue . . .
Output with putchar()
The putchar() is also a library function belongs to stdio which output a single character to the console.The putchar() need a single argument to print it on screen.
Syntax
int x = ‘5’; putchar(x);
The above statement is incorrect because the datatype is an integer.
char x = ‘5’; putchar(x);
The above statement will give output 5 which is not an integer but a character.Check the example program below.
/* program to print a single character using putchar() function */ #include <stdio.h> #include <stdlib.h> int main() { char x = '5'; putchar(x); putchar('\n'); system("PAUSE"); return 0; }
Output – putchar()
5 Press any key to continue . . . _
Output with puts()
The puts() is another library function that will print a string of characters until null character is reached or end of file (EOF).
Syntax
The syntax for puts() is as follows
char name[10] = “puppy”; puts(name);
The above will result in following output.
puppy
Example Program
You can check the example program that demonstrate the usage of puts().
#include <stdio.h> #include <stdlib.h> int main() { char name[10] = "PUPPY"; puts(name); system("PAUSE"); return 0; }
Output-puts()
PUPPY Press any key to continue . . . _. | https://notesformsc.org/c-printing-outputs/ | CC-MAIN-2022-33 | refinedweb | 1,062 | 64 |
Sort Arrays in NumPy
Hi Enthusiastic Learners! Have you ever dealt with jumbled numbers? Ever thought of removing randomness & see a clean sequential order? I think we all did. And when it comes to that we can always sort arrays in NumPy. To sort arrays in NumPy, it has provided us many in built functions for doing that. Advantage to sort arrays in NumPy is huge, because as shown in our previous articles efficiency of NumPy in-built functions is very high (Why use Universal Function in NumPy?). Let’s begin with Sorting Arrays!
NumPy provides us 2 major functions for sorting our desired arrays:
- sort()
- argsort()
Watch video tutorial here:
Sort arrays in NumPy using SORT()
SORT() function syntax is as follows:
sort(arr, axis=-1, kind=’quicksort’, order=None)
Where:
arr is the array to be sorted.
axis is used to define the number of axis along which array need to be flattened — we will discuss it with examples below.
kind allows us to choose type of sorting technique we want to use.
By default it is set to ‘quicksort’. You can choose from following list of algorithms [‘quicksort’, ‘mergesort’, ‘heapsort’ ]
order takes list type input. When ‘arr’ is a structured array, it specifies which fields to compare 1st, 2nd, and so on.
Let’s begin with creating a random array & sort it.
import numpy as np arr = np.array([18, 9, 3, 4, 2, 8, 11, 6, 1, 4, 2, 43, 57, 3, 6, 22, 17]) arr
array([18, 9, 3, 4, 2, 8, 11, 6, 1, 4, 2, 43, 57, 3, 6, 22, 17])
Let’s sort this array using sort() function.
sorted_arr = np.sort(arr) # by default quicsort algo, last axis, and order is optional print("-- Sorted Array --") print(sorted_arr)
-- Sorted Array -- [ 1 2 2 3 3 4 4 6 6 8 9 11 17 18 22 43 57]
Choosing algorithm for sorting
Now, let’s use a huge array of random values and try sorting using each algorithm and time-it that, how much time is consumed by each algo.
This will help us determine whether we should let our
np.sort() function use default algorithm or something else.
List of algorithms:
- Quicksort (default algo for np.sort())
- Mergesort
- Heapsort
Creating a huge array in NumPy.
huge_array = np.random.randint(1, 100, size=10000000) print("Number of elements in huge array = " + str(len(huge_array)) ) print("Printing some values of array --> " + str(huge_array))
Number of elements in huge array = 10000000 Printing some values of array --> [77 20 94 ... 62 12 29]
QuickSort
Sorting array using default algo ‘quicksort’.
Function definition –>
np.sort(huge_array, kind='quicksort') OR simply use
np.sort(huge_array)
# Calculating time consumed by quicksort to sort an array %timeit np.sort(huge_array)
1 loop, best of 3: 506 ms per loop
Mergesort
Sorting array using algo ‘mergesort’.
Function definition –>
np.sort(huge_array, kind='mergesort')
# Calculating time consumed by mergesort to sort an array %timeit np.sort(huge_array, kind='mergesort')
1 loop, best of 3: 853 ms per loop
Heapsort
Sorting array using algo ‘heapsort’.
Function definition –>
np.sort(huge_array, kind='heapsort')
# Calculating time consumed by mergesort to sort an array %timeit np.sort(huge_array, kind='heapsort')
1 loop, best of 3: 1.47 s per loop
From above results it is clear that following is the order in means of speed:
QuickSort
< MergeSort
< HeapSort
Sort arrays in NumPy using ARGSORT()
Unlike
np.sort(), the
np.argsort() function provides us indices (positions) of elements in order they should be placed to get a sorted array.
Just line sort() function it also takes same set of arguments.
np.argsort(arr, axis=-1, kind='quicksort', order=None)
Using argsort() to sort an array ‘arr’.
sorted_arr_indices = np.argsort(arr) sorted_arr_indices
array([ 8, 4, 10, 2, 13, 3, 9, 7, 14, 5, 1, 6, 16, 0, 15, 11, 12])
To check the values of array at those locations we can use following syntax:
arr[np.argsort(arr)]
arr[np.argsort(arr)]
array([ 1, 2, 2, 3, 3, 4, 4, 6, 6, 8, 9, 11, 17, 18, 22, 43, 57])
We can clearly see that we are getting a sorted array from this as well, however it is adding a overhead to fetch sorted array values, so it is slower in comparison to that of np.sort().
Use np.argsort() only if you ever need to get indices of sorted array values.
Sort arrays in NumPy across different Axis
Axis can be vertical axis or horizontal axis or in 3-D array some third axis.
For better understanding we will be using only 2-D arrays in this article & for vertical axis we will be dealing with Columns, similarly, for horizontal axis we will be dealing with rows.
Sort arrays in NumPy across Columns
For columns or vertical axis we set value of
AXIS = 0
Let’s create a 2-D array and sort it along columns only.
arr_2d = np.random.randint(5, 15, (4, 4)) arr_2d
array([[14, 8, 7, 10], [10, 14, 13, 9], [ 8, 10, 14, 6], [11, 11, 11, 5]])
# Sort values column wise only sorted_arr_axis_0 = np.sort(arr_2d, axis=0) print("-- Sorted values per column --") print(sorted_arr_axis_0)
-- Sorted values per column -- [[ 8 8 7 5] [10 10 11 6] [11 11 13 9] [14 14 14 10]]
From above example, it is clear that we have sorted values in Ascending oreder in each column & have not tried to sort values in rows of array.
Sort arrays in NumPy across Rows
For rows or horizontal axis we set value of
AXIS = 1
# Sort values column wise only sorted_arr_axis_1 = np.sort(arr_2d, axis=1) print("-- Sorted values per row --") print(sorted_arr_axis_1)
-- Sorted values per row -- [[ 7 8 10 14] [ 9 10 13 14] [ 6 8 10 14] [ 5 11 11 11]]
Data across all rows have been sorted & no value in columns have been tried to be sorted.
In our next articles we will be covering more topics related to sorting in NumPy, such as, Partial Sorting in NumPy, indirect sorting & many more. To know about indirect sorting please check this article lexsort() – Indirect Sort in NumPy
Stay tuned & keep learning!
One thought on “Sort Arrays in NumPy” | https://mlforanalytics.com/2020/04/10/sort-arrays-in-numpy/ | CC-MAIN-2021-21 | refinedweb | 1,032 | 61.36 |
GD::SecurityImage - Security image (captcha) generator.
use GD::SecurityImage;

# Create a normal image
my $image = GD::SecurityImage->new(
    width   => 80,
    height  => 30,
    lines   => 10,
    gd_font => 'giant',
);
$image->random( $your_random_str );
$image->create( normal => 'rect' );
my($image_data, $mime_type, $random_number) = $image->out;
or
# use external ttf font
my $image = GD::SecurityImage->new(
    width    => 100,
    height   => 40,
    lines    => 10,
    font     => "/absolute/path/to/your.ttf",
    scramble => 1,
);
$image->random( $your_random_str );
$image->create( ttf => 'default' );
$image->particle;
my($image_data, $mime_type, $random_number) = $image->out;
or you can just say (most of the public methods can be chained)
my($image, $type, $rnd) = GD::SecurityImage->new->random->create->particle->out;
to create a security image with the default settings. But that may not be useful. If you
require the module, you must import it:
require GD::SecurityImage;
GD::SecurityImage->import;
The module also supports
Image::Magick, but the default interface uses the
GD module. To enable
Image::Magick support, you must call the module with the
use_magick option:
use GD::SecurityImage use_magick => 1;
If you
require the module, you must import it:
require GD::SecurityImage; GD::SecurityImage->import(use_magick => 1);
The module does not export anything actually. But
import loads the necessary sub modules. If you don' t
import, the required modules will not be loaded and probably, you'll
die().
This document describes version 1.72 of GD::SecurityImage released on 27 August 2012.
The (so called) "Security Images" are very popular. Most internet software uses these in their registration screens to block robot programs (which may register tons of fake member accounts). Security images are basically graphical CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). This module gives you a basic interface to create such an image. The final output is the actual graphic data, the mime type of the graphic and the created random string. The module also has some "styles" that are used to create the background (or foreground) of the image.
If you are an
Authen::Captcha user, see GD::SecurityImage::AC for migration from
Authen::Captcha to
GD::SecurityImage.
This module is just an image generator. Not a captcha handler. The validation of the generated graphic is left to your programming taste. But there are some captcha handlers for several Perl FrameWorks. If you are an user of one of these frameworks, see "GD::SecurityImage Implementations" in "SEE ALSO" section for information.
This module can use both RGB and HEX values as the color parameters. HEX values are recommended, since they are widely used and recognised.
$color = '#80C0F0'; # HEX $color2 = [15, 100, 75]; # RGB $i->create($meth, $style, $color, $color2) $i->create(ttf => 'box', '#80C0F0', '#0F644B')
RGB values must be passed as an array reference including the three Red, Green and Blue values.
Color conversion is transparent to the user. You can use hex values under both
GD and
Image::Magick. They' ll be automagically converted to RGB if you are under
GD.
The constructor.
new() method takes several arguments. These arguments are listed below.
The width of the image (in pixels).
The height of the image (in pixels).
Numerical value. The point size of the ttf character. Only necessarry if you want to use a ttf font in the image.
The number of lines that you' ll see in the background of the image. The alignment of lines can be vertical, horizontal or angled or all of them. If you increase this parameter' s value, the image will be more cryptic.
The absolute path to your TrueType (.ttf) font file. Be aware that relative font paths are not recognized due to problems in the
libgd library.
If you are sure that you've set this parameter to a correct value and you get warnings or you get an empty image, be sure that your path does not include spaces in it. It looks like libgd also have problems with this kind of paths (eg: '/Documents and Settings/user' under Windows).
Set this parameter if you want to use ttf in your image.
If you want to use the default interface, set this parameter. The recognized values are
Small,
Large,
MediumBold,
Tiny,
Giant. The names are case-insensitive; you can pass lower-cased parameters.
The background color of the image.
If has a true value, the random security code will be displayed in the background and the lines will pass over it. (send_ctobg = send code to background)
If has a true value, a frame will be added around the image. This option is enabled by default.
If set, the characters will be scrambled. If you enable this option, be sure to use a wider image, since the characters will be separated with three spaces.
Sets the angle for scrambled/normal characters. Beware that, if you pass an
angle parameter, the characters in your random string will have a fixed angle. If you do not set an
angle parameter, the angle(s) will be random.
When the scramble option is not enabled, this parameter still controls the angle of the text. But, since the text will be centered inside the image, using this parameter without scramble option will require a taller image. Clipping will occur with smaller height values.
Unlike the GD interface,
angle is in
degrees and can take values between
0 and
360.
Sets the line drawing width. Can take numerical values. Default values are
1 for GD and
0.6 for Image:Magick.
The minimum length of the random string. Default value is
6.
Default character set used to create the random string is
0..9. But, if you want to use letters also, you can set this parameter. This parameter takes an array reference as the value.
Not necessary and will not be used if you pass your own random string.
Creates the random security string or sets the random string to the value you have passed. If you pass your own random string, be aware that it must be at least six (defined in
rndmax) characters long.
Returns the random string. Must be called after
random().
This method creates the actual image. It takes four arguments, but none are mandatory.
$image->create($method, $style, $text_color, $line_color);
$method can be
normal or
ttf.
$style can be one of the following (some of the styles may not work if you are using a really old version of GD):
The default style. Draws horizontal, vertical and angular lines.
Draws horizontal and vertical lines
Draws two filled rectangles.
The
lines option passed to new, controls the size of the inner rectangle for this style. If you increase the
lines, you'll get a smaller internal rectangle. Using smaller values like
5 can be better.
Draws circles.
Draws ellipses.
This is the combination of ellipse and circle styles. Draws both ellipses and circles.
Draws nothing. See "OTHER USES".
Note: if you have a (too) old version of GD, you may not be able to use some of the styles.
You can use this code to get all available style names:
my @styles = grep {s/^style_//} keys %GD::SecurityImage::Styles::;
The last two arguments (
$text_color and
$line_color) are the colors used in the image (text and line color -- respectively):
$image->create($method, $style, [0,0,0], [200,200,200]); $image->create($method, $style, '#000000', '#c8c8c8');
Must be called after create.
Adds random dots to the image. They'll cover all over the surface. Accepts two parameters; the density (number) of the particles and the maximum number of dots around the main dot.
$image->particle($density, $maxdots);
Default value of
$density is dependent on your image' s width or height value. The greater value of width and height is taken and multiplied by twenty. So; if your width is
200 and height is
70,
$density is
200 * 20 = 4000 (unless you pass your own value). The default value of
$density can be too much for smaller images.
$maxdots defines the maximum number of dots near the default dot. Default value is
1. If you set it to
4, The selected pixel and 3 other pixels near it will be used and colored.
The color of the particles are the same as the color of your text (defined in create).
This method must be called after create. If you call it early, you'll die.
info_text adds an extra text to the generated image. You can also put a strip under the text. The purpose of this method is to display additional information on the image. Copyright information can be an example for that.
$image->info_text( x => 'right', y => 'up', gd => 1, strip => 1, color => '#000000', scolor => '#FFFFFF', text => 'Generated by GD::SecurityImage', );
Options:
Controls the horizontal location of the information text. Can be either
left or
right.
Controls the vertical location of the information text. Can be either
up or
down.
If has a true value, a strip will be added to the background of the information text.
This option can only be used under
GD. Has no effect under Image::Magick. If has a true value, the standard GD font
Tiny will be used for the information text.
If this option is not present or has a false value, the TTF font parameter passed to
new will be used instead.
The ptsize value of the information text to be used with the TTF font. TTF font parameter can not be set with
info_text(). The value passed to
new() will be used instead.
The color of the information text.
The color of the strip.
This parameter controls the displayed text. If you want to display long texts, be sure to adjust the image, or clipping will occur.
This method finally returns the created image, the mime type of the image and the random number(s) generated. Older versions of GD only support
gif type, while new versions support
jpeg and
png (update: beginning with v2.15, GD resumed gif support).
The returned mime type is
png or
gif or
jpeg for
GD and
gif for
Image::Magick (if you do not
force some other format).
out method accepts arguments:
@data = $image->out(%args);
You can set the output format with the
force parameter:
@data = $image->out(force => 'png');
If
png is supported by the interface (via
GD or
Image::Magick); you'll get a png image, if the interface does not support this format,
out() method will use it's default configuration.
And with the
compress parameter, you can define the compression for
png and quality for
jpeg:
@data = $image->out(force => 'png' , compress => 1); @data = $image->out(force => 'jpeg', compress => 100);
When you use
compress with
png format, the value of
compress is ignored and it is only checked if it has a true value. With
png the compression will always be
9 (maximum compression). eg:
@data = $image->out(force => 'png' , compress => 1); @data = $image->out(force => 'png' , compress => 3); @data = $image->out(force => 'png' , compress => 5); @data = $image->out(force => 'png' , compress => 1500);
All will default to
9. But this will disable compression:
@data = $image->out(force => 'png' , compress => 0);
But the behaviour changes if the format is
jpeg; the value of
compress will be used for
jpeg quality; which is in the range
1..100.
Compression and quality operations are disabled by default.
Depending on your usage of the module; returns the raw
GD::Image object:
my $gd = $image->raw; print $gd->png;
or the raw
Image::Magick object:
my $magick = $image->raw; $magick->Write("gif:-");
Can be useful, if you want to modify the graphic yourself. If you want to get an image type see the
force option in
out.
See "path bug" in "GD bug" for usage and other information on this method.
Returns a list of available GD::SecurityImage back-ends.
my @be = GD::SecurityImage->backends;
or
my @be = $image->backends;
If called in a void context, prints a verbose list of available GD::SecurityImage back-ends:
Available back-ends in GD::SecurityImage v1.55 are: GD Magick Search directories: /some/@INC/dir/containing/GDSI
you can see the output with this command:
perl -MGD::SecurityImage -e 'GD::SecurityImage->backends'
or under windows:
perl -MGD::SecurityImage -e "GD::SecurityImage->backends"
See the tests in the distribution. Also see the demo program "eg/demo.pl" for an
Apache::Session implementation of
GD::SecurityImage.
Download the distribution from a CPAN mirror near you, if you don't have the files.
All TTF samples generated with the bundled font StayPuft.ttf, unless stated otherwise. Most of the samples here can be generated with running the test suite that comes with the GD::SecurityImage distribution. However, images that are generated with random angles will indeed be a little different after you run the test suite on your system.
All random codes have a length of six (6) characters, unless stated otherwise.
So, (for example) there is no clipping in
ELLIPS.
1: This image is generated with this code:
use GD::SecurityImage backend => 'Magick'; my($data, $mime, $rnd) = GD::SecurityImage ->new( width => 420, height => 100, ptsize => 40, lines => 20, thickness => 4, rndmax => 5, scramble => 1, send_ctobg => 1, bgcolor => '#009999', font => 'StayPuft.ttf', ) ->random('BURAK') ->create( qw/ ttf ec #0066CC #0066CC / ) ->particle(300, 500) ->out;
Images hosted by ImageShack.
GD::SecurityImage drawing capabilities can also be used for counter image generation or displaying arbitrary messages:
use CGI qw(header); use GD::SecurityImage 1.64; # we need the "blank" style my $font = "StayPuft.ttf"; my $rnd = "10.257"; # counter data my $image = GD::SecurityImage->new( width => 140, height => 75, ptsize => 30, rndmax => 1, # keeping this low helps to display short strings frame => 0, # disable borders font => $font, ); $image->random( $rnd ); # use the blank style, so that nothing will be drawn # to distort the image. $image->create( ttf => 'blank', '#CC8A00' ); $image->info_text( text => 'You are visitor number', ptsize => 10, strip => 0, color => '#0094CC', ); $image->info_text( text => '( c ) 2 0 0 7 m y s i t e', ptsize => 10, strip => 0, color => '#d7d7d7', y => 'down', ); my($data, $mime, $random) = $image->out; binmode STDOUT; print header -type => "image/$mime"; print $data;
Image hosted by ImageShack.
die is called in some methods if something fails. You may need to
eval your code to catch exceptions.
If you look at the demo program (not just look at it, try to run it) you'll see that the random code changes after every request (successful or not). If you do not change the random code after a failed request and display the random code inside HTML (like "Wrong! It must be <random>"), then you are doing a logical mistake, since the user (or robot) can now copy & paste the random code into your validator without looking at the security image and will pass the test. Just don't do that. Random code must change after every validation.
If you want to be a little more strict, you can also add a timeout key to the session (this feature currently does not exits in the demo) and expire the related random code after the timeout. Since robots can call the image generator directly (without requiring the HTML form), they can examine the image for a while without changing it. A timeout implemetation may prevent this.
See the "SUPPORT" section if you have a bug or request to report.
There is a bug in PerlMagick' s
QueryFontMetrics() method. ImageMagick versions smaller than 6.0.4 is affected. Below text is from the ImageMagick 6.0.4 Changelog:.
"2004-05-06 PerlMagick's
QueryFontMetrics() incorrectly reports `unrecognized attribute'` for the `font' attribute."
Please upgrade to ImageMagick 6.0.4 or any newer version, if your ImageMagick version is smaller than 6.0.4 and you want to use Image::Magick as the backend for GD::SecurityImage.
libgd and GD.pm don't like relative paths and paths that have spaces in them. If you pass a font path that is not an exact path or a path that have a space in it, you may get an empty image.
To check if the module failed to find the ttf font (when using
GD), a new method added:
gdbox_empty(). It must be called after
create():
$image->create; die "Error loading ttf font for GD: $@" if $image->gdbox_empty;
gdbox_empty() always returns false, if you are using
Image::Magick.
I got some error reports saying that GD::SecurityImage dies with this error:
Can't locate object method "new" via package "GD::Image" (perhaps you forgot to load "GD::Image"?) at ...
This is due to a wrong installation of the GD module. GD includes
XS code and it needs to be compiled. You can't just copy/paste the GD.pm and expect it to work. It will not. If you are under Windows and don't have a C compiler, you have to add new repositories to install GD, since ActiveState' s own repositories don't include GD. Randy Kobes and J-L Morel have ppm repositories for both 5.6.x and 5.8.x and they both have GD:
bribes.org also has a GD::SecurityImage ppd, so you can just install GD::SecurityImage from that repository.
There are some issues related to wrong/incomplete compiling of libgd and old/new version conflicts.
If your libgd is compiled without TTF support, you'll get an empty image. The lines will be drawn, but there will be no text. You can check it with "gdbox_empty" method.
If your GD has a
gif method, but you get empty images with
gif() method, you have to update your libgd or compile it with GIF enabled.
You can test if
gif is working from the command line:
perl -MGD -e '$_=GD::Image->new;$_->colorAllocate(0,0,0);print$_->gif'
or under windows:
perl -MGD -e "$_=GD::Image->new;$_->colorAllocate(0,0,0);print$_->gif"
Conclusions:
GDis a better choice. Since it is faster and does not use that much memory, while
Image::Magickis slower and uses more memory.
Support for compression level argument to png() added in v2.07. If your GD version is smaller than this, compress option to
out() will be silently ignored.
setThickness implemented in GD v2.07. If your GD version is smaller than that and you set thickness option, nothing will happen.
ellipse() method added in GD 2.07.
If your GD version is smaller than 2.07 and you use
ellipse, the
default style will be returned.
If your GD is smaller than 2.07 and you use
ec, only the circles will be drawn.
ImageCodePerl Module (commercial):.
Authen::Captchadrop-in replacement module.
If your software uses
GD::SecurityImage for captcha generation and want to appear in this document, contact the author.
All bug reports and wishlist items must be reported via the CPAN RT system. It is accessible at.
CPAN::Forum is a place for discussing
CPAN modules. It also has a
GD::SecurityImage section at.
If you like or hate or have some suggestions about
GD::SecurityImage, you can comment/rate the distribution via the
CPAN Ratings system:.
Burak Gursoy <burak@cpan.org>.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.10.1 or, at your option, any later version of Perl 5 you may have available. | http://search.cpan.org/~burak/GD-SecurityImage-1.72/lib/GD/SecurityImage.pm | CC-MAIN-2014-42 | refinedweb | 3,202 | 57.47 |
What would you do if hackers were abusing your software in production?
This is not a hypothetical question. They are probably doing it right now.
You might be thinking about all the secure design choices you have made, or preventative techniques you applied, so there’s nothing to worry about.
If so, that's great. Even so, things always get overlooked, so you should keep thinking about the security of your system.
But there’s a huge difference between preventing security bugs and forgiving malicious attempts.
How about we catch and act upon the hackers who are trying to break into our software? In this post, I’ll try to give you practical and simple examples of catching typical hacker behaviors in your code early.
Why Catch Malicious Attempts?
Isn’t preventing security bugs enough? I can hear you saying, “As long as I write secure code, I don’t care whether hackers play with my rock-solid software or not. So, why should I care about malicious attempts?”
Let’s first answer this valid question.
A somewhat complex piece of software is difficult to keep secure all the time. More complexity means more potential weaknesses for a hacker to abuse while you're designing, implementing, deploying, or maintaining the code.
Just look at how the number of published CVEs has grown over the years. It's a lot.
Moreover, because of its nature, a security bug is not just a regular item in your backlog. There are some nasty consequences if a vulnerability gets exploited: a loss of trust, a bad reputation, or even financial loss.
So, security best practices such as the OWASP Application Security Verification Standard (ASVS) or Mozilla’s Secure Coding Guidelines exist in order to help developers produce secure software.
However, since new ways of bypassing existing security controls or new weaknesses emerge on an almost daily basis, there’s a consensus around the security community that “There’s no 100% security.” So we always have to be alert and responsive to security news and improvements.
There’s also one more thing we can do to ensure secure software: noticing hackers as early as possible, before they do something that we don’t expect or even know about. Moreover, keeping track of their malicious behavior over a long period of time makes us more proactive.
There's a popular notion of a Security Operations Center (SOC) along these lines. An SOC is a team, either in-house or outsourced, whose job is to continuously monitor an organization's security state by detecting, analyzing, and responding to cyber security incidents.

SOC teams look for abnormal activities, including software security anomalies. Noticing and responding to a successful or failed cyber attack gives organizations the upper hand against threats, because continuous monitoring ultimately reduces the response time to attacks.
An SOC is only as strong as the rich, high-quality input it gets from the different IT components it watches. Since our software is an important part of that inventory, the security alarms it raises on abnormal behavior are invaluable to SOC teams.
How to Check for Abnormal Behaviors
Here are a number of checks and controls we can implement throughout our code that reveal malicious and abnormal behaviors.
Before we start, I'd like to emphasize that I'm not presenting complicated solutions like a Web Application Firewall (WAF) here. Instead, I will show you that simple conditionals, smart exception handling, and similar low-effort actions in your code can help you notice abnormal behaviors as soon as they occur.
Let’s dig in.
Zero Length or Null Returns
The first action we can take to detect a malicious attempt is checking for zero-length aggregates or null returns.
Here’s a simple code block to illustrate the point:
Receipt receipt = GetReceipt(transferId);
if (receipt == null)
{
    // what does this mean?
    // log, notify, alarm
}
Here, we try to access the receipt of a certain transfer provided by our end users through the
transferId parameter.
In order to prevent anyone from accessing someone else’s receipts, let’s assume that inside the
GetReceipt method, our developer is smart enough to check whether the
transferId really belongs to the current user.
Checking ownership is a good security best practice.
Let’s further assume that we are sure by design that every transfer should have at least one related receipt, so getting none at runtime is suspicious. Why? Because getting an empty receipt means the provided
transferId doesn’t belong to any transfer executed by the current user.
In other words, the current user provided a forged
transferId to our code and waits to see the content if that
transferId happens to relate to someone else’s transaction.
And since we have the appropriate ownership control, the
GetReceipt method returns an empty or null receipt. That’s where we have to take some security actions.
I won't go into the details of these security actions in this post. However, security logging and sending detailed notifications to Security Information and Event Management (SIEM) systems are two of them.
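To make the pattern concrete, here's a minimal, self-contained sketch in Java. The `ReceiptService` class, its in-memory maps, and the `lastAlert` field are all hypothetical stand-ins for a real repository and a real logging/SIEM call; the point is where the tripwire goes:

```java
import java.util.HashMap;
import java.util.Map;

class ReceiptService {
    // Hypothetical in-memory store: transferId -> owning user, transferId -> receipt.
    private final Map<String, String> ownerByTransfer = new HashMap<>();
    private final Map<String, String> receiptByTransfer = new HashMap<>();
    String lastAlert; // stand-in for a real security log / SIEM notification

    void addTransfer(String transferId, String userId, String receipt) {
        ownerByTransfer.put(transferId, userId);
        receiptByTransfer.put(transferId, receipt);
    }

    // Returns the receipt only if transferId belongs to userId; null otherwise.
    String getReceipt(String userId, String transferId) {
        if (!userId.equals(ownerByTransfer.get(transferId))) {
            return null; // ownership check failed
        }
        return receiptByTransfer.get(transferId);
    }

    String showReceipt(String userId, String transferId) {
        String receipt = getReceipt(userId, transferId);
        if (receipt == null) {
            // Every transfer has a receipt by design, so a null here means the
            // user supplied a transferId that is not theirs: log, notify, alarm.
            lastAlert = "forged transferId=" + transferId + " from user=" + userId;
            return "Error";
        }
        return receipt;
    }
}
```

In a real system the alert would go to your logging pipeline or SIEM rather than a field, but the shape is the same: the null branch is exactly where the tripwire belongs.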
Here’s another example of how checking the null value allows us to seize a malicious attempt.
Consider that we have the following three endpoints,
ShowReceipt,
Success, and
Error:
// ShowReceipt endpoint
if (CurrentUser.Owns(receiptId))
{
    Session["receiptid"] = receiptId;
    redirect "Success";
}
else
{
    redirect "Error";
}
// Success endpoint
receiptId = Session["receiptid"];
return ReadReceipt(receiptId);
// Error endpoint
return "Error";
This is a simple application that shows the content of a user’s receipt.
In
ShowReceipt, the first line is an important one. It checks whether the end user is sending us a valid
receiptId to see its contents. Without this control, a malicious user can provide any
receiptId and access the content.
The placement of the statement in the third line is equally important, though. Moving it just before the if statement wouldn't break anything functionally, but it would recreate the very security problem we were trying to avoid: the session would end up holding a receiptId that the current user doesn't own.
Please take a moment to make sure you understand why this is the case.
Now, placing that line where we did creates another opportunity to notice malicious attempts. In the Success endpoint, what does it mean if we get a null receiptId from the Session?
That means someone is calling this endpoint right after making a request to the ShowReceipt endpoint with someone else's receiptId, even though the ownership check redirected them to Error!
Of course, with the control on the first line, actually reading someone else's receipt is impossible.
So, the
Success endpoint is a nice place to write a security log entry and send any notifications to our monitoring solutions when we get a null
receiptId from the
Session.
// Success endpoint (Revisited)
receiptId = Session["receiptid"];
if (receiptId == null)
{
    // log, notify, alarm
}
return ReadReceipt(receiptId);
Targeted Exception Handling
Exception handling is maybe the most important mechanism for developers to respond to any anomalous condition during the execution of the program.
Most of the time, the main opportunity it provides is cleaning up borrowed resources such as file/network streams or database connections when unexpected problems occur. This fail-safe behavior lets us write more reliable programs.

In parallel, we can effectively use runtime exceptions to notice malicious attempts against our software.
Here are some popular sources of weakness where we can utilize related exceptions to notice fishy behavior:
- Deserialization
- Cryptography
- XML Parsing
- Regular Expression
- Arithmetic Operations
The list is not complete, of course. And here I’ll go through only a few of these APIs.
Let’s start with Regular Expressions. Here’s a code block that applies a strict validation method on a user input:
if (!Regex.IsMatch(query.Search, @"^([a-zA-Z0-9]+ ?)+$"))
{
    return RedirectToAction("Error");
}
The regular expression pattern used here is a solid whitelist one: it checks for what is expected as input, rather than the insecure way around, which is checking for what is known to be bad.
Still, here's a more secure version of the same code block:
if (!Regex.IsMatch(query.Search, @"^([a-zA-Z0-9]+ ?)+$",
                   RegexOptions.Compiled, TimeSpan.FromSeconds(10)))
{
    return RedirectToAction("Error");
}
This is an overloaded version of the IsMatch method, whose last argument is the key.
It enforces that the execution of the regular expression at runtime cannot exceed 10 seconds. If it does, something suspicious is going on, since the pattern used is not that complicated.
There's an actual security weakness that can be used to abuse such patterns, called ReDoS (Regular Expression Denial of Service), though I won't go into its details here. In short, an end user can send the following string as the search parameter and make our back end miserable, spending an awful amount of CPU power in vain.

Notice the exclamation mark at the end (and don't try this in production!):
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA!
The question is, what happens when the execution time actually exceeds 10 seconds?
The .NET environment throws an exception, namely
RegexMatchTimeoutException. So, if we specifically catch this exception, we now have the opportunity to report this suspicious incident or do something about it.
Here’s the final code block to that end:
try
{
    if (!Regex.IsMatch(query.Search, @"^([a-zA-Z0-9]+ ?)+$",
                       RegexOptions.Compiled, TimeSpan.FromSeconds(10)))
    {
        return RedirectToAction("Error");
    }
}
catch (RegexMatchTimeoutException rmte)
{
    // log, notify, alarm
}
Another important venue where we can utilize exceptions is XML parsing. Here’s an example code block:
XmlReader xmlReader = XmlReader.Create(input);
var root = XDocument.Load(xmlReader, LoadOptions.PreserveWhitespace);
The input XML is fed into
XmlReader.Create, and then we get the root element. Hackers can abuse this piece of code by providing a malicious XML file which, when parsed by the code above, can give them ownership of our servers.
Scary, right? The security bug is called XML External Entity (XXE) attack, and as with the Regular Expression exploit, I won't go into all the details here.
However, in order to prevent that super-critical weakness, we disable the processing of Document Type Definitions (DTDs) through the XmlReaderSettings. With DTDs ignored, there's no possibility of XXE security bugs anymore.
Here’s the secure version:
XmlReaderSettings settings = new XmlReaderSettings();
settings.DtdProcessing = DtdProcessing.Ignore;
XmlReader xmlReader = XmlReader.Create(input, settings);
var root = XDocument.Load(xmlReader, LoadOptions.PreserveWhitespace);
We could leave the code like this and move on. However, if a hacker still tries this attack in vain, it's better to catch that behavior and produce an invaluable security alert:
try
{
    XmlReaderSettings settings = new XmlReaderSettings();
    settings.DtdProcessing = DtdProcessing.Ignore;
    XmlReader xmlReader = XmlReader.Create(input, settings);
    var root = XDocument.Load(xmlReader, LoadOptions.PreserveWhitespace);
}
catch (XmlException xe)
{
    // log, notify, alarm
}
Moreover, in order to prevent false positives, you can further customize the catch block by using the message content provided by the
XmlException instance.
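For comparison, the same harden-then-detect idea looks like this in Java with the standard JAXP API. This is a sketch: the `SafeXml` class name and the `lastAlert` field are hypothetical stand-ins for real alerting, while the parser feature shown is the standard one for refusing DOCTYPE declarations:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.SAXException;

class SafeXml {
    String lastAlert; // stand-in for a real security log / SIEM notification

    Document parse(String xml) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            // Refuse DOCTYPE declarations entirely: no DTDs means no XXE.
            dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            DocumentBuilder db = dbf.newDocumentBuilder();
            return db.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        } catch (SAXException se) {
            // A DOCTYPE in user-supplied XML is a strong XXE signal: log, notify, alarm.
            lastAlert = "possible XXE attempt: " + se.getMessage();
            return null;
        } catch (Exception e) {
            // Parser configuration or I/O problems: not a security signal by itself.
            return null;
        }
    }
}
```

Rejecting the DOCTYPE outright removes the XXE risk, and the catch block turns any attempt into a security signal instead of a silent failure.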
There's a general programming best practice that advises against catching generic Exception types, and what we've shown here is a good supporting case for it. The same goes for another best practice, which advises against empty catch blocks: they effectively do nothing when abnormal behavior occurs in our code.

Apparently, instead of empty catch blocks, here we have a very solid opportunity to react to malicious attempts.
Normalization on Inputs
By definition, normalization means reducing something to its simplest form. Strictly speaking, canonicalization is the term used for this purpose, but it's hard to pronounce, so let's stick with normalization.
Of course, “the simplest form of something” is a little bit abstract. What do we mean by the “simplest form”?
It is always good to show by example. Here is a string:
%3cscript%3e
According to the URL encoding, this string is not in its simplest form. Because if we apply URL decoding on it, we get this one:
<script>
This is the simplest form of the original string according to URL encoding transformation standard.
How do we know that? Not because the string is understandable to us now. We know it because if we apply URL decoding again, we get the same string back:
<script>
And that means URL decoding no longer transforms it; we have reached the simplest form. Normalization can take more than one step, since the encoding might originally have been applied more than once.
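Here's a minimal sketch of that decode-until-stable idea in Java, using only URL decoding (a full implementation would run several codecs in the same loop: URL, HTML, JavaScript, and CSS). The `Normalizer` class name is hypothetical:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

class Normalizer {
    // Apply URL decoding repeatedly until the string stops changing and
    // return how many passes actually changed the input.
    static int decodePasses(String input) {
        int passes = 0;
        String current = input;
        while (true) {
            String decoded = URLDecoder.decode(current, StandardCharsets.UTF_8);
            if (decoded.equals(current)) {
                return passes; // fixpoint reached: the simplest form
            }
            current = decoded;
            passes++;
        }
    }
}
```

A pass count of two or more is the suspicious signal: the input was encoded more than once. Note that URLDecoder also turns + into a space and throws on malformed % sequences, so a production version would account for both.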
URL encoding is just one example of the transformation used for normalization, or in other words, decoding. HTML encoding, JavaScript encoding, and CSS encoding are other important encoding/decoding methods widely used for normalization.
Over the years, attackers have found ingenious techniques to bypass defense systems, and one of the most prevalent techniques they utilize is encoding. They apply crazy encoding schemes to their malicious inputs in order to fool the defenses around applications.
History is full of these demonstrations, and you can read the details of one of the most famous ones called Microsoft’s infamous IIS dotdot attack that took place in the early 2000s.
Since hackers rely on encoding techniques substantially when they are sending malicious inputs, normalization can be one of the most effective and easy ways to seize them.
Here is the rule of thumb: we recursively apply URL/HTML/CSS/JavaScript decoding to user input until the output no longer changes. If the final output is a different string than the original input, we may have a malicious request on our hands.
Here’s a simplified version of legendary OWASP ESAPI Java that implements this idea:
int foundCount = 0;
boolean clean = false;
while (!clean) {
    clean = true;
    // whatever codecs you want: URL/JavaScript/HTML/...
    Iterator i = codecs.iterator();
    while (i.hasNext()) {
        Codec codec = (Codec) i.next();
        String old = input;
        input = codec.decode(input);
        if (!old.equals(input)) {
            if (clean) {
                foundCount++;
            }
            clean = false;
        }
    }
}
When the code block ends, what does it mean if the value of foundCount is greater than or equal to 2? It means someone is sending multiply-encoded input to our application, and the odds of that happening innocently are really low.

Normal users do not send multiply-encoded strings to our application; there is a high probability that this is a malicious user. We have to log this event with the original input for further analysis.
The above mechanism, while part of the software itself, functions like a filter in front of the application. It runs on every untrusted input and gives us an opportunity to know about malicious attempts.
However, you may be wary of the additional delay this kind of validation incurs. I understand if you don't want to opt in.
Here's another example of using normalization as a means to seize malicious attempts during file uploads or downloads. Consider the following code:
if (!String.IsNullOrEmpty(fileName)) { fileName = new FileInfo(fileName).Name; string path = @"E:\uploaded_files\" + fileName; if (File.Exists(path)) { response.ContentType = "image/jpg"; response.BinaryWrite(File.ReadAllBytes(path)); } }
Here we get a
fileName parameter from our client, locate the image it points to, read, and present the content. This is a download example. It might also have been an upload scenario.
Nevertheless, in order to prevent the client manipulating the
fileName parameter to their heart’s content, we utilize the
Name property of the
FileInfo class. This will only get the name part of the
fileName, even if the client sends us anything other than what we expect (i.e. a file name with forged paths such as below):
../../WebSites/Cross/Web.config
Here the malicious client wants to read the contents of a sensitive
Web.Config file by using our code. Getting only the file name part, we get rid of this possibility.
That is good but there is still something we can do:
if (!String.IsNullOrEmpty(fileName)) { string normalizedFileName = new FileInfo(fileName).Name; if (normalizedFileName != fileName) { // log, notify, alarm response = ResponseStatus.Unauthorized; } string path = @"E:\uploaded_files\" + fileName; if (File.Exists(path)) { response.ContentType = "image/jpg"; response.BinaryWrite(File.ReadAllBytes(path)); } }
We compare the normalized version of
fileName with itself (the original input). If they differ, that means someone is trying to send us a manipulated
fileName and we take appropriate action.
Normally the browser just sends the uploaded file name in its simplest form with no transformation.
For the sake of the argument, we may not even use the file name when the user uploads a file. We may be generating a
GUID and use that instead.
Nevertheless, applying this control to the provided file name still matters, because hackers will definitely poke with that parameter no matter what.
Invalid Input Against Whitelists
Whitelisting is “accepting only what is expected”. In other words, if we come across some input that we do not expect, we reject it.
This input validation strategy is one of the most secure and effective strategies we have to this date. By using this strategy consistently throughout your software, you can close a lot of known and unknown venues that a malicious user can attack you.
This way of building a software is like building a closed castle with only thoroughly controlled doors opening outside, if that makes any sense.
OK, back to our topic.
Let’s analyze whitelisting with a simple scenario. Assume that our users have the freedom to choose their own, specific usernames when registering. And prior to coding, as a requirement we were informed how a username should look like.
Then, in order to comply with this requirement we can easily devise some rigid rules to apply against the username input before we accept it. If the input passes the test, we take in. Otherwise, we reject the input.
The whitelist rules may be in different forms, though. Some may contain a list of expected hard-coded values, others may check whether the input is integer or not. And others may be in the form of regular expressions.
Here is an example regular expression for usernames:
^[a-zA-Z0-9]{4,15}$
This regular expression is a very rigid whitelisting pattern. It matches with every string whose characters are nothing but a-z, A-Z, or 0-9. Not only this, but the length of the input should be minimum of 4 characters and maximum of 15 characters.
The hat at the beginning and dollar character at the end of the regular expression denote that the match should occur for the whole input.
Now assume that at runtime we get the following input which won’t pass our regular expression test:
o'neal
Does that mean our software is facing a hacker?
The input seems innocent. However, it might also be the case that a malicious user is just trying the existence of an SQL injection security bug before getting into the action, which is also known as reconnaissance.
Anyway, it’s still hard to deduce any malice from this particular case.
However, we can still seize the hackers using other forms of failed whitelists, such as failed input attempts against a list of expected hard-coded values.
An excellent example is JSON Web Token (JWT) standard. We use JWT when we want third parties to send us a claim that we can validate and then trust the data inside.
The standard has a simple JSON structure: a header, a body and a signature. The header contains how this particular claim should be produced and therefore validated. The body contains the claim itself. The signature is there for, well, validation.
For instance, when we get the following token from a third part, such as a user, we validate it using the algorithm it presents in the header value.
In this instance, the token itself tells us that we should use cryptographic hash HMACSHA256 algorithm (HS256 in the token is a short version) on both the header and body data to test whether it produces the same signature given.
If it produces the same signature value, then the token is authentic and we can trust the body:
// Header { "alg" : "HS256", "typ" : "JWT" } // Body { "userid": "johndoe@gmail.com", "name": "John Doe", "iat": 1516239022 } // Signature AflcxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5g
There are various external libraries that we can easily use to produce and validate JWTs. Some of them had a serious security bug which let any JWT to be taken as an authentic token.
Here’s what went wrong with those libraries.
What happens when a token that we should validate contains a header like below? I just present the header here, but it also contains body and signature parts:
// Header { "alg" : "None", "typ" : "JWT" }
It seems that for that specific token some of those JWT validation libraries just accept the body as it is without any validation, because
None says that no algorithm is applied for signature production.
To put this into perspective, that means any end user can send us any
userid inside the token and we will not apply any validation against it and let them login.
The best way to avoid this and similar security problems is to keep a valid list of algorithms on our side. In this case the list may contain only one valid algorithm.
Moreover, it's better not to process the algorithm we get inside the header part of the JSON Web Token, whatever it might be.
But as you might have already guessed, there’s a huge opportunity here. We may just get the algorithm value from the header part and check even if we won’t use it. If the value is anything other than we expect, let’s say HS256, that means someone is messing around with us.
The same method can be used for any list of hard-coded values presented to the end user and one of which we expect to get as an input.
For example, if we provide a list of cities in a select box, we are sure that we will get back one of them when the form is posted. If we get a completely different value, there’s surely something wrong with the behavior of the user or automated tool we are facing.
Actions Against AuthN and AuthZ
One of the most critical parts of software from a security point of view are the authentication and the authorization mechanisms. These are places where we enforce that only the parties we know of access the application and they access certain parts within their roles.
In other words, our users shouldn’t use certain parts of our application without any credential validation and they shouldn’t access parts where they don’t have any privileges.
There are various attack scenarios against both of the mechanisms, however, the most obvious one against authentication is brute forcing. It is trying a set of pre-populated or generated on the fly credentials one after another in the hope that one or more of them would work.
Of course there are well-known ways to prevent such attacks: using CAPTCHAs or applying throttling on problematic IP addresses or usernames.
Usually authentication attacks are well-known and when noticed are already logged and possibly fed into the security monitoring systems.
The same is possible with attacks against authorization.
It’s easy to produce a security log and an alarm when our application returns an 403 response to our users. This well-known HTTP response is the indicator of an authorization problem, so it’s wise to log it.
However, both the authentication and the authorization cases so far have the potential to produce false alarms. However, I still encourage logging and producing alarms whenever these occur.
Now, let’s concentrate on a more solid case. Whenever we use Model-View-Controller (MVC) frameworks, we utilize the built-in auto-binding feature for our
Action method parameters. So, the MVC framework we are using is in charge of binding parameters in HTTP requests onto our model objects automatically.
This is a great relief for us since getting each user input by using the low-level APIs of a framework really becomes tedious after some time.
What happens if this auto-binding becomes too permissive? Assume that we have a
User model. It would probably have at least ten or twenty member fields. But for clarity, let’s say it has a
FullName and a
IsAdmin member fields.
The second member field will denote if a particular user is administrator or not:
public class User { public string FullName { get; set; } public bool IsAdmin { get; set; } }
In order for users to update their own profile, we prepare a
View including the appropriate form and bindings.
At last, when the form is submitted, a controller action will auto-bind the HTTP parameters to a
User class instance. Then, perhaps it will save it to the database just like below:
[HttpPost] public Result Update(User user) { UserRepository.Store(user); return View("Success"); }
Obviously here, a malicious non-administrative user may also set values of unwanted model members, such as
IsAdmin. Since the binding is automatic, our malicious user can make themselves administrator by requesting a simple HTTP POST request to this action!
By using the MVC pattern, every model we use in action method parameters becomes fully visible and editable to end-users.
The best way to prevent this is using extra
ViewModels or DTO objects for
Views and
Actions and include only the permitted fields. For example, here is a
UserViewModel that only contains editable fields of
User model class.
public class UserViewModel { public string FullName { get; set; } }
So, the end user, albeit she can add an additional
IsAdmin parameter to the HTTP POST request, that value will not be used at all to result in a security problem. Excellent!
But wait, there’s a golden opportunity here to seize sophisticated hackers. How about we still include
IsAdmin property in our
UserViewModel, but produce a security log and maybe alarms when the setter is called:
public class UserViewModel { public string FullName { get; set; } public bool IsAdmin { set { // log, alarm, notify } } }
Just make sure that we don’t use this member field when we are creating a
User model class instance out of this
UserViewModel instance.
Miscellaneous
It is impossible to list or classify every possible case where we can place our little controls to notice any hacking attempts as early as possible. However, here are some of the other opportunities we have:
- If our application provides a flow of actions which should be followed in a specific order, then any invalid order of calling indicates an abnormal behaviour.
- Injection attacks are one of the most severe security bug categories that stem from insecure code and data concatenation. Cross Site Scripting (XSS), SQL Injection, and Directory Traversal are some common bugs in this category. Once we use secure constructs like contextual encodings, whitelist validation, and prepared statements, then we get rid of them. However, unfortunately, there are no simple and non-blacklist ways to seize the hackers who are still trying to abuse these security bugs once they are fixed.
- Setting up traps is also a valid way of catching malicious attempts but I’m against this if the effort takes a huge amount of time or is likely to produce false alarms. For example, it’s possible to include hidden links (display:none) in our web pages and trigger security logging when these links are accessed by automated security scanners (because they try to access every link that they can extract). However, this may also produce false alarms for legitimate crawlers, such as Google. Still, this is a design choice and there are a lot of traps that can be set, such as non-existing but easy to guess:
- username, password pairs, e.g. the infamous admin:admin
- administrative URL paths, e.g. /admin
- HTTP headers, parameters, e.g. IsAdmin
Conclusion
“Forgiveness isn’t approving what happened. It’s choosing to rise above it.” Robin Sharma
It is unforgivably naive to let malicious attempts towards our software go unnoticed while we already have the tools under our belt to do otherwise. Forgiveness is such a supreme moral quality, but we have to be on top of risky activities around our code.
Despite chaotic facets of software development, developing secure code is an important survival skill in this hacker-loaded world.
Moreover, we have the chance to improve this skill even further by noticing malicious activities in a precise manner in our code and producing security log entries and alarms for SOC teams.
Doing something about malicious behaviors in our code, like you read in this article is just one of the coding mistakes that lead to hacker abuse. I encourage you to check my Coding Mistakes that Hackers Abuse online training in order to master the rest of them. | https://www.freecodecamp.org/news/how-to-catch-hackers-in-your-code/ | CC-MAIN-2021-49 | refinedweb | 4,800 | 54.02 |
Just.
XML Europe is over and it has been one of the best conferences I have ever seen, with lot of food for thoughts and subject of stories for xmlhack and XMLfr...
The ISO DSDL group seems to be really on its track by now and I have been lucky enough to be in the wagon and have the opportunity (and honnor) to work with people such as James Clark, Murata Makoto, Rick Jelliffe or Ken Holman in a working group chaired by Charles Goldfarb.
XML Europe has been tremendously active for me with the ISO meetings, a tutorial and a presentation about XML schema languages, an expert panel and an afternoon being track chair but my presentation which I consider as the most promissing is my project of Namespaces warehouse. To make it short, it's a proposal to create a search engine to find all you need to know about how XML namespaces are being used on the web.
This is a high potential project for which I am looking for sponsors and partners.
XML.com has published my interview about my W3C XML Schema book yesterday.
It's not that usual for an author (at least in technical domains) to bash the subject of his book, but I couldn't honnestly be too positive about W3C XML Schema.
My only fear is that people might take it personaly: the most amazing thing about this recommendation is that although I have great respect for each member of the W3C XML Schema Working Group (at least for those I know), I find the result of their work less than perfect... The better proof is that a book like mine which tries to guide users around the land mines is needed at all!
I see it as an indication that the current organization of the W3C doesn't scale.
Published a first early release of a WikiMl web site including:
The idea is to define a "wiki markup language" which would be a non XML syntax for a subset of XHTML following the rules defined by the various WikiWikiWeb systems and to treat it as a full class markup language supported by parsers and writers allowing to use it directly with XML tools (as opposed to developping a full system).
It's not ready for full public announcement and there is not much on the site yet (it's more a raw structure to support some hacking which could be done during XML Europe), but I have already used the parser which is published here to write many documents including 2 of the presentations which I will be doing at XML Europe.
Stay tuned and feel free to join the lists and add stuff in the wiki! | http://www.advogato.org/person/vdv/diary.html?start=13 | CC-MAIN-2015-35 | refinedweb | 457 | 61.74 |
table of contents
NAME¶
SDL::CookBook::PDL -- CookBook for SDL + PDL
PDL provides great number crunching capabilities to Perl and SDL provides game-developer quality real-time bitmapping and sound. You can use PDL and SDL ''together'' to create real-time, responsive animations and simulations. In this section we will go through the pleasures and pitfalls of working with both powerhouse libraries.
CATEGORY¶
Cookbook
Creating a SDL Surface piddle¶
PDL's core type is a piddle. Once a piddle is provided to PDL it can go do a numerous amounts of things. Please see the example in 'examples/cookbook/pdl.pl' that came with this module.
Creating a simple piddle¶
First lets get the right modules.
use PDL; use SDL::Rect; use SDL::Video; use SDL::Surface; use SDL::PixelFormat;
Suppose you want a surface of size (200,400) and 32 bit (RGBA).
my ( $bytes_per_pixel, $width, $height ) = ( 4, 200, 400 );
Define the $width, $height and $bytes_per_pixel. Your $bytes_per_pixel is the number of bits (in this case 32) divided by 8 bits per byte. Therefore for our 32 bpp we have 4 Bpp;
my $piddle = zeros( byte, $bytes_per_pixel, $width, $height );
Create a normal $piddle with zeros, byte format and the Bpp x width x height dimensions.
my $pointer = $piddle->get_dataref();
Here is where we get the actual data the piddle is pointing to. We will have SDL create a new surface from this function.
my $surface = SDL::Surface->new_from( $pointer, $width, $height, 32, $width * $bytes_per_pixel );
Using the same dimensions we create the surface using new_form. The width * Bpp is the scanline (pitch) of the surface in bytes.
warn "Made surface of $width, $height and ". $surface->format->BytesPerPixel; return ( $piddle, $surface );
Finally make sure that the surface actually has the correct dimensions we gave.
NOTE: $surface->format->BytesPerPixel must return 1,2,3,4. !!
Now you can blit and use the surface as needed; and do PDL operations as required.
Operating on the Surface safely¶
To make sure SDL is in sync with the data. You must call SDL::Video::lock before doing PDL operations on the piddle.
SDL::Video::lock_surface($surface); $piddle->mslice( 'X', [ rand(400), rand(400), 1 ], [ rand(200), rand(200), 1 ] ) .= pdl( rand(225), rand(225), rand(255), 255 );
After that you can unlock the surface to blit.
SDL::Video::unlock_surface($surface);
Error due to BPP at blitting¶
When blitting the new surface check for the return value to see if there has been a problem.
my $b = SDL::Video::blit_surface( $surface, SDL::Rect->new( 0, 0, $surface->w, $surface->h ), $app, SDL::Rect->new( ( $app->w - $surface->w ) / 2, ( $app->h - $surface->h ) / 2, $app->w, $app->h ) ); die "Could not blit: " . SDL::get_error() if ( $b == -1 );
If the error message is 'Blit combination not supported' that means that the BPP is incorrect or inconsistent with the dimensions. After that a simple update_rect will so your new surface on the screen.
AUTHORS¶
See "AUTHORS" in SDL. | https://manpages.debian.org/testing/libsdl-perl/SDL::Cookbook::PDL.3pm.en.html | CC-MAIN-2022-40 | refinedweb | 489 | 64.71 |
ECE597 Project Auto HUD
Contents
- 1 Project Overview
- 2 Team Members
- 3 Steps
- 4 Installing OpenCV (Development Machine)
- 5 Installing OpenCV on the Beagle
- 6 OpenCV Haar Training
- 7 Display Buffer Mapping
- 8 Pico Projector Integration
- 9 Future Directions
Project Overview
The goal of this project is to use the beagle board to run image recognition on a camera feed located inside a car, and then signaling to the driver via a pico projector various objects of interest.
Team Members.
Create Index File
There are two index files needing to be created in order for the system to train on the images, a background index file creating a list of filelocations, and a positive index file containing the positive file locations, the number of objects in the picture and the rectangular locations for the object.
Creating the Negative Index File
Use the following automated script from within the background images folder:
#!/bin/bash find ./*.jpg -maxdepth 2 -print > background.idx find ./*.png -maxdepth 2 -print >> background.idx find ./*.bmp -maxdepth 2 -print >> background.idx find ./*.jpeg -maxdepth 2 -print >> background.idx
Creating the Positive Index File
Use the following source code to create a training program that allows the user to click on the upper left and lower right of a object to select it and then press a key to return.
//////////////////////////////////////////////////////////////////////// // // Calibrate.cpp // Date: 4/20/10 // Author: Christopher Routh // This program brings up the necessary images and allows the user to // select correct training spots for objects and generates a index file. // //////////////////////////////////////////////////////////////////////// #include <stdlib.h> #include <stdio.h> #include <math.h> #include <cv.h> #include <highgui.h> #include <sys/types.h> #include <dirent.h> #include <errno.h> #include <vector> #include <string> #include <iostream> #include <fstream> using namespace std; static CvSize imageSize; static CvPoint old_click_pt; static CvPoint new_click_pt; static std::vector<CvPoint> pointsClicked; /*function... might want it in some class?*/ int getdir (std::string dir, std::vector<string> &files) { DIR *dp; struct dirent *dirp; if((dp = opendir(dir.c_str())) == NULL) { std::cout << "Error(" << errno << ") opening " << dir << std::endl; return errno; } while ((dirp = readdir(dp)) != NULL) { files.push_back(string(dirp->d_name)); } closedir(dp); files.erase(files.begin(), files.begin()+2); return 0; } // handle mouse clicks here void mouse_callback(int event, int x, int y, int flags, void* obj) { if (event == CV_EVENT_LBUTTONDOWN) { cout << "(x,y) = (" << x << "," << y << ")" << endl; // reset old_click_pt old_click_pt.x = new_click_pt.x; old_click_pt.y = new_click_pt.y; // get new click point -- note the coordinate change in y new_click_pt.x = x; // coming in from the window system new_click_pt.y = imageSize.height-y; // window system and images have different y axes pointsClicked.push_back(cvPoint(new_click_pt.x, new_click_pt.y)); } } int getCorrectObjectLocation(string file, vector<CvRect> &results) { IplImage* img = 0; // load an image cout << "Trying to load: " << file << endl; img=cvLoadImage((file.insert(0, "./good/").c_str())); if(!img){ printf("Could not load image file: %s\n",(file.insert(0, "./good/").c_str())); exit(0); } // 
get the image data int height = img->height; int width = img->width; int step = img->widthStep; int channels = img->nChannels; uchar* data = (uchar *)img->imageData; printf("Processing a %dx%d image with %d channels\n",height,width,channels); // create a window cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE); // show the image cvShowImage("mainWin", img ); cvSetMouseCallback( "mainWin", mouse_callback ); // wait for a key cvWaitKey(0); int properClicks = pointsClicked.size() % 2; if(properClicks != 0 || pointsClicked.size() == 0){ cout << "Improper Number of Points Clicked: " << pointsClicked.size() << " Selected" << endl; return -1; }else{ //Process new rectangles for(unsigned int i = 0; i < pointsClicked.size(); i++){ cout << "Recieved Points X: " << pointsClicked.at(i).x << " Y: " << pointsClicked.at(i).y << endl; } for(unsigned int i = 0; i < pointsClicked.size(); i=i+2){ CvPoint topLeft = pointsClicked.at(i); CvPoint bottomRight = pointsClicked.at(i+1); results.push_back( cvRect(topLeft.x,abs(topLeft.y), (bottomRight.x - topLeft.x), abs((topLeft.y - bottomRight.y)) ) ); cout << "Rectangle Created --- X: " << topLeft.x << " Y: " << abs(topLeft.y) << " Width: " << (bottomRight.x - topLeft.x) << " Height: " << abs((topLeft.y - bottomRight.y)) << endl; } pointsClicked.clear(); } // release the image cvReleaseImage(&img ); return 0; } int main(int argc, char *argv[]) { IplImage* img = 0; int height,width,step,channels; uchar *data; int i,j,k; string dir = string("./good/"); vector<string> files = vector<string>(); getdir(dir,files); vector<CvRect> objectRects = vector<CvRect>(); ofstream vecFile; vecFile.open ("./good/signs.idx"); for (unsigned int i = 0;i < files.size();i++) { int success = getCorrectObjectLocation(files.at(i), objectRects); if(success != 0){ i--; pointsClicked.clear(); }else{ vecFile << "good/" << files.at(i) << " " << objectRects.size(); for(int j = 0; j < objectRects.size(); j++) { vecFile << " " << objectRects.at(j).x << " " << 
objectRects.at(j).y << " " << objectRects.at(j).width << " " << objectRects.at(j).height; } vecFile << endl; objectRects.clear(); } } //Close the Vector File vecFile.close(); cout << endl << endl << "****** INDEX FILE CREATED ******"; return 0; }
Also, there is another way to crop the positive samples from sample set. Using images clipper tools can help to crop the positive samples directly from the image sets. And the find command and identify command (from ImageMagick) will find and gather images' information like width and height.
$ find <dir> -name '*.<ext>' -exec identify -format '%i 1 0 0 %w %h' \{\} \; > <description_file>
Generate Samples
Using the positive samples the creasamples cammand can apply transforms to the images and add them to the background images creating a wider range of images to train on. The syntax for this command is:
Usage: ./createsamples [-info <description_file_name>] [-img <image_file_name>] [-vec <vec_file_name>] [-bg <background_file_name>] [-num <number_of_samples = 1000>] [-bgcolor <background_color = 0>] [-inv] [-randinv] [-bgthresh <background_color_threshold = 80>] [-maxidev <max_intensity_deviation = 40>] [-maxxangle <max_x_rotation_angle = 1.100000>] [-maxyangle <max_y_rotation_angle = 1.100000>] [-maxzangle <max_z_rotation_angle = 0.500000>] [-show [<scale = 4.000000>]] [-w <sample_width = 24>] [-h <sample_height = 24>]
There are two way to achieve it. The first and simple one is to generate the samples from one positive image. With background images, rotating angle and insensitivity deviation setup, the positive samples can be generated quickly by computer. However, comparing with gathering samples from tons of real images, the accuracy for this sample set will be low.
The example command is as below.
$ opencv-createsamples -img stopSign.jpeg -num 1000 -bg background.idx -vec signs.vec -maxxangle 0.6 -maxyangle 1.1 -maxzangle 0.6 -maxidev 100 -bgcolor 0 -bgthresh 0 -w 40 -h 40
The second one is to create samples from multiple images which either selected or cropped from the method described above. Using the good images' description file, the command is like below:
$ opencv-createsamples -info signs.idx -vec signs.vec -w 40 -h 40
Run Haar Training Program
The utility of Haar training is as below:>]
Commands differ based upon application and intended sample size, the command my group used was:
opencv-haartraining -data signs_drivedata_rev1 -vec signs.vec -bg background.idx -nstages 20 -nsplits 2 -minhitrate 0.995 -maxfalsealarm .5 -npos 730 -nneg 835 -w 40 -h 40 -nonsym -mem 2048 -mode ALL. | http://elinux.org/index.php?title=ECE597_Project_Auto_HUD&oldid=21309 | CC-MAIN-2016-18 | refinedweb | 1,118 | 51.14 |
Back in Visual Studio 2010, developers had the built-in option to generate unit tests via a quick right click. This feature was removed in VS2012 and VS2013 Preview until now. The Visual Studio ALM Rangers have produced a new extension that restores a great deal of this functionality with the 1.0 release of Unit Test Generator.
The team is quick to note that this is not resurrection, it is replacement that is inspired by the previous tool. Among the project’s goals are:
- Supports .NET MS-Test, NUnit and XUnit Test Frameworks and generation of VB/C# test code.
- Presents a "reference implementation" of how to do this for a particular test framework.
- It focuses on project and reference management and not on code generation.
By supporting 3 different test frameworks, (MS-Test, NUnit, and XUnit), developers are able to use the framework that works best for their project. The tool also allows for the customization of the project that gets created as well as the names for the namespace, class, method, and text of the method body.
Using the defaults would produce a default generated class that stubs out test methods set to fail with Assert.Fail(), so that they can be detected and replaced with valid tests.
Note that by design, the generator will only generate stubs for a public methods on a public class. It will not generate any private methods or for a private class. Combining this tool’s support for VS2012/VS2013 with project round tripping means teams should not have any difficulty incorporating the tool now and upgrading when ready.
Channel 9 has provided a brief walkthrough while a previous post by the ALM Rangers has a written tutorial based on release candidate.
Community comments | https://www.infoq.com/news/2013/09/unittestgenerator/ | CC-MAIN-2021-04 | refinedweb | 293 | 62.98 |
1.) For the while response not in range(low, high) loop, even when the number entered is correctly within range the loop still goes through for some reason.
2.) After that when you re-enter your number again, the "lower.../higher..." condition statements will go through, but the loop will end after that.
I'd like to know how I'm putting together loops in the function, especially where I place return response.
- Code: Select all
#.")
# set the initial values
low = 1
high = 100
the_number = random.randint(1, 100)
def ask_number(guess, low, high):
"""Ask for a number within a range."""
response = None
# Loop whether the number is not within range
while response not in range(low, high):
response = int(input("\nNot Valid. Please take another guess: "))
# Loop whether the number is
while guess != the_number:
if guess > the_number:
print("\nLower...")
return response
elif guess == the_number:
print("You guessed it! The number was", the_number)
return response
else:
print("\nHigher...")
return response
def main():
ask_number(int(input("\nTake a guess: ")),low, high)
main()
input("\n\nPress the enter key to exit.") | http://www.python-forum.org/viewtopic.php?f=6&t=5965&p=7654 | CC-MAIN-2017-17 | refinedweb | 179 | 75.91 |
A Python Mattermost Driver
Python Mattermost Driver (APIv4)
Info
The repository will try to keep up with the master of the official Mattermost API.

If something changes here, it is most likely because the official Mattermost API changed.
Python 3.5 or later is required.
Installation
pip install mattermostdriver
Documentation
Documentation can be found at .
Usage
from mattermostdriver import Driver

foo = Driver({
    # Required options
    #
    # Instead of the login/password, you can also use a personal access token.
    # If you have a token, you don't need to pass login/pass.
    # It is also possible to use 'auth' to pass an auth header in directly,
    # for an example, see:
    'url': 'mattermost.server.com',
    'login_id': 'user.name',
    'password': 'verySecret',
    'token': 'YourPersonalAccessToken',

    # Optional options
    #
    # These options already have useful defaults or are just not needed in
    # every case. In most cases, you won't need to modify these, especially
    # the basepath. If you can only use a self signed/insecure certificate,
    # you should set verify to your CA file or to False. Please double check
    # this if you have any errors while using a self signed certificate!
    'scheme': 'https',
    'port': 8065,
    'basepath': '/api/v4',
    'verify': True,  # Or /path/to/file.pem
    'mfa_token': 'YourMFAToken',

    # Setting this will pass your auth header directly to the request
    # library's 'auth' parameter. You probably only want that if token or
    # login/password is not set, or if you want to set a custom auth header.
    'auth': None,

    # If for some reason you get regular timeouts after a while, try to
    # decrease this value. The websocket will ping the server in this
    # interval to keep the connection alive. If you have access to your
    # server configuration, you can of course increase the timeout there.
    'timeout': 30,

    # This value controls the request timeout.
    # See the requests documentation for more information.
    # The default value is None here, because it is the default in the
    # requests library, too.
    'request_timeout': None,

    # To keep the websocket connection alive even if it gets disconnected
    # for some reason, you can set the keepalive option to True.
    # The keepalive_delay defines how long to wait in seconds before
    # attempting to reconnect the websocket.
    'keepalive': False,
    'keepalive_delay': 5,

    # This option allows you to provide additional keyword arguments when
    # calling websockets.connect(). By default it is None, meaning we will
    # not add any additional arguments. An example of an additional argument
    # you can pass is one used to disable the client side pings:
    # 'websocket_kw_args': {"ping_interval": None},
    'websocket_kw_args': None,

    # Setting debug to True will activate very verbose logging.
    # This also activates the logging for the requests package, so you can
    # see every request you send. Be careful: this SHOULD NOT be active in
    # production, because this logs a lot! Even the password for your
    # account when doing driver.login()!
    'debug': False,
})

# Most of the requests need you to be logged in, so calling login() should
# be the first thing you do after you created your Driver instance.
# login() returns the raw response.
# If using a personal access token, you still need to run login().
# In this case, it does not make a login request, but a `get_user('me')`,
# and sets everything up in the client.
foo.login()

# You can make api calls by calling `Driver.endpointofchoice`.
# Using api[''] is deprecated for 5.0.0!
# So, for example, if you used `Driver.api['users'].get_user('me')` before,
# you now just do `Driver.users.get_user('me')`.
# The names of the endpoints and requests are almost identical to the names
# on the api.mattermost.com/v4 page.
# API calls always return the json the server sent as a response.
foo.users.get_user_by_username('another.name')

# If the api request needs additional parameters, you can pass them to the
# function in the following way:
# - Path parameters are always simple parameters you pass to the function
foo.users.get_user(user_id='me')

# - Query parameters are always passed by passing a `params` dict to the function
foo.teams.get_teams(params={...})

# - Request bodies are always passed by passing an `options` dict or array
#   to the function
foo.channels.create_channel(options={...})

# See the mattermost api documentation to see which parameters you need to pass.
foo.channels.create_channel(options={
    'team_id': 'some_team_id',
    'name': 'awesome-channel',
    'display_name': 'awesome channel',
    'type': 'O'
})

# If you want to make a websocket connection to the mattermost server, you
# can call the init_websocket method, passing an event_handler.
# Every websocket event sent by mattermost will be sent to that
# event_handler. See the API documentation for which events are available.
foo.init_websocket(event_handler)

# Use `disconnect()` to disconnect the websocket
foo.disconnect()

# To upload a file you will need to pass a `files` dictionary
channel_id = foo.channels.get_channel_by_name_and_team_name('team', 'channel')['id']
file_id = foo.files.upload_file(
    channel_id=channel_id,
    files={'files': (filename, open(filename, 'rb'))}
)['file_infos'][0]['id']

# Track the file id and pass it in `create_post` options to attach the file
foo.posts.create_post(options={
    'channel_id': channel_id,
    'message': 'This is the important file',
    'file_ids': [file_id]
})

# If needed, you can make custom requests by calling `make_request`
foo.client.make_request('post', '/endpoint', options=None, params=None,
                        data=None, files=None, basepath=None)

# If you want to call a webhook/execute it, use the `call_webhook` method.
# This method does not exist on the mattermost api AFAIK; I added it for
# ease of use.
foo.webhooks.call_webhook('myHookId', options)
# Options are optional
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/mattermostdriver/ | CC-MAIN-2021-39 | refinedweb | 900 | 56.05 |
After a recent comparison of Python, Ruby, and Golang for a command-line application I decided to use the same pattern to compare building a simple web service. I have selected Flask (Python), Sinatra (Ruby), and Martini (Golang) for this comparison. Yes, there are many other options for web application libraries in each language but I felt these three lend well to comparison.
Library Overviews
Here is a high-level comparison of the libraries by Stackshare.
Flask (Python)
Flask is a micro-framework for Python based on Werkzeug, Jinja2 and good intentions.
For very simple applications, such as the one shown in this demo, Flask is a great choice. The basic Flask application is only 7 lines of code (LOC) in a single Python source file. The draw of Flask over other Python web libraries (such as Django or Pyramid) is that you can start small and build up to a more complex application as needed.
from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello World!" if __name__ == "__main__": app.run()
Sinatra (Ruby)
Sinatra is a DSL for quickly creating web applications in Ruby with minimal effort.
Just like Flask, Sinatra is great for simple applications. The basic Sinatra application is only 4 LOC in a single Ruby source file. Sinatra is used instead of libraries such as Ruby on Rails for the same reason as Flask - you can start small and expand the application as needed.
require 'sinatra' get '/hi' do "Hello World!" end
Martini (Golang)
Martini is a powerful package for quickly writing modular web applications/services in Golang.
Martini comes with a few more batteries included than both Sinatra and Flask but is still very lightweight to start with - only 9 LOC for the basic application. Martini has come under some criticism by the Golang community but still has one of the highest rated Github projects of any Golang web framework. The author of Martini responded directly to the criticism here. Some other frameworks include Revel, Gin, and even the built-in net/http library.
package main import "github.com/go-martini/martini" func main() { m := martini.Classic() m.Get("/", func() string { return "Hello world!" }) m.Run() }
With the basics out of the way, let’s build an app!
Service Description
The service created provides a very basic blog application. The following routes are constructed:
GET /: Return the blog (using a template to render).
GET /json: Return the blog content in JSON format.
POST /new: Add a new post (title, summary, content) to the blog.
The external interface to the blog service is exactly the same for each language. For simplicity MongoDB will be used as the data store for this example as it is the simplest to set up and we don’t need to worry about schemas at all. In a normal “blog-like” application a relational database would likely be necessary.
Add A Post
POST /new
$ curl --form title='Test Post 1' \ --form summary='The First Test Post' \ --form content='Lorem ipsum dolor sit amet, consectetur ...' \ http://[IP]:[PORT]/new
View The HTML
GET /
View The JSON
GET /json
[ { content:"Lorem ipsum dolor sit amet, consectetur ...", title:"Test Post 1", _id:{ $oid:"558329927315660001550970" }, summary:"The First Test Post" } ]
Application Structure
Each application can be broken down into the following components:
Application Setup
- Initialize an application
- Run the application
Request
- Define routes on which a user can request data (GET)
- Define routes on which a user can submit data (POST)
Response
- Render JSON (
GET /json)
- Render a template (
GET /)
Database
- Initialize a connection
- Insert data
- Retrieve data
Application Deployment
- Docker!
The rest of this article will compare each of these components for each library. The purpose is not to suggest that one of these libraries is better than the other - it is to provide a specific comparison between the three tools:
Project Setup
All of the projects are bootstrapped using docker and docker-compose. Before diving into how each application is bootstrapped under the hood we can just use docker to get each up and running in exactly the same way -
docker-compose up
Seriously, that’s it! Now for each application there is a
Dockerfile and a
docker-compose.yml file that specify what happens when you run the above command.
Python (flask) - Dockerfile
FROM python:3.4 ADD . /app WORKDIR /app RUN pip install -r requirements.txt
This
Dockerfile says that we are starting from a base image with Python 3.4 installed, adding our application to the
/app directory and using pip to install our application requirements specified in
requirements.txt.
Ruby (sinatra)
FROM ruby:2.2 ADD . /app WORKDIR /app RUN bundle install
This
Dockerfile says that we are starting from a base image with Ruby 2.2 installed, adding our application to the
/app directory and using bundler to install our application requirements specified in the
Gemfile.
Golang (martini)
FROM golang:1.3 ADD . /go/src/github.com/kpurdon/go-blog WORKDIR /go/src/github.com/kpurdon/go-blog RUN go get github.com/go-martini/martini && \ go get github.com/martini-contrib/render && \ go get gopkg.in/mgo.v2 && \ go get github.com/martini-contrib/binding
This
Dockerfile says that we are starting from a base image with Golang 1.3 installed, adding our application to the
/go/src/github.com/kpurdon/go-blog directory and getting all of our necessary dependencies using the
go get command.
Initialize/Run An Application
Python (Flask) - app.py
# initialize application from flask import Flask app = Flask(__name__) # run application if __name__ == '__main__': app.run(host='0.0.0.0')
$ python app.py
Ruby (Sinatra) - app.rb
# initialize application require 'sinatra'
$ ruby app.rb
Golang (Martini) - app.go
// initialize application package main import "github.com/go-martini/martini" import "github.com/martini-contrib/render" func main() { app := martini.Classic() app.Use(render.Renderer()) // run application app.Run() }
$ go run app.go
Define A Route (GET/POST)
Python (Flask)
# get @app.route('/') # the default is GET only def blog(): # ... #post @app.route('/new', methods=['POST']) def new(): # ...
Ruby (Sinatra)
# get get '/' do # ... end # post post '/new' do # ... end
Golang (Martini)
// define data struct type Post struct { Title string `form:"title" json:"title"` Summary string `form:"summary" json:"summary"` Content string `form:"content" json:"content"` } // get app.Get("/", func(r render.Render) { // ... } // post import "github.com/martini-contrib/binding" app.Post("/new", binding.Bind(Post{}), func(r render.Render, post Post) { // ... }
Render A JSON Response
Python (Flask)
Flask provides a jsonify() method but since the service is using MongoDB the mongodb bson utility is used.
from bson.json_util import dumps return dumps(posts) # posts is a list of dicts [{}, {}]
Ruby (Sinatra)
require 'json' content_type :json posts.to_json # posts is an array (from mongodb)
Golang (Martini)
r.JSON(200, posts) // posts is an array of Post{} structs
Render An HTML Response (Templating)
Python (Flask)
return render_template('blog.html', posts=posts)
<!doctype HTML> <html> <head> <title>Python Flask Example</title> </head> <body> {% for post in posts %} <h1> {{ post.title }} </h1> <h3> {{ post.summary }} </h3> <p> {{ post.content }} </p> <hr> {% endfor %} </body> </html>
Ruby (Sinatra)
erb :blog
<!doctype HTML> <html> <head> <title>Ruby Sinatra Example</title> </head> <body> <% @posts.each do |post| %> <h1><%= post['title'] %></h1> <h3><%= post['summary'] %></h3> <p><%= post['content'] %></p> <hr> <% end %> </body> </html>
Golang (Martini)
r.HTML(200, "blog", posts)
<!doctype HTML> <html> <head> <title>Golang Martini Example</title> </head> <body> {{range . }} <h1>{{.Title}}</h1> <h3>{{.Summary}}</h3> <p>{{.Content}}</p> <hr> {{ end }} </body> </html>
Database Connection
All of the applications are using the mongodb driver specific to the language. The environment variable
DB_PORT_27017_TCP_ADDR is the IP of a linked docker container (the database ip).
Python (Flask)
from pymongo import MongoClient client = MongoClient(os.environ['DB_PORT_27017_TCP_ADDR'], 27017) db = client.blog
Ruby (Sinatra)
require 'mongo' db_ip = [ENV['DB_PORT_27017_TCP_ADDR']] client = Mongo::Client.new(db_ip, database: 'blog')
Golang (Martini)
import "gopkg.in/mgo.v2" session, _ := mgo.Dial(os.Getenv("DB_PORT_27017_TCP_ADDR")) db := session.DB("blog") defer session.Close()
Insert Data From a POST
Python (Flask)
from flask import request post = { 'title': request.form['title'], 'summary': request.form['summary'], 'content': request.form['content'] } db.blog.insert_one(post)
Ruby (Sinatra)
client[:posts].insert_one(params) # params is a hash generated by sinatra
Golang (Martini)
db.C("posts").Insert(post) // post is an instance of the Post{} struct
Retrieve Data
Python (Flask)
posts = db.blog.find()
Ruby (Sinatra)
@posts = client[:posts].find.to_a
Golang (Martini)
var posts []Post db.C("posts").Find(nil).All(&posts)
Application Deployment (Docker!)
A great solution to deploying all of these applications is to use docker and docker-compose.
Python (Flask)
Dockerfile
FROM python:3.4 ADD . /app WORKDIR /app RUN pip install -r requirements.txt
docker-compose.yml
web: build: . command: python -u app.py ports: - "5000:5000" volumes: - .:/app links: - db db: image: mongo:3.0.4 command: mongod --quiet --logpath=/dev/null
Ruby (Sinatra)
Dockerfile
FROM ruby:2.2 ADD . /app WORKDIR /app RUN bundle install
docker-compose.yml
web: build: . command: bundle exec ruby app.rb ports: - "4567:4567" volumes: - .:/app links: - db db: image: mongo:3.0.4 command: mongod --quiet --logpath=/dev/null
Golang (Martini)
Dockerfile
FROM golang:1.3 ADD . /go/src/github.com/kpurdon/go-todo WORKDIR /go/src/github.com/kpurdon/go-todo RUN go get github.com/go-martini/martini && go get github.com/martini-contrib/render && go get gopkg.in/mgo.v2 && go get github.com/martini-contrib/binding
docker-compose.yml
web: build: . command: go run app.go ports: - "3000:3000" volumes: # look into volumes v. "ADD" - .:/go/src/github.com/kpurdon/go-todo links: - db db: image: mongo:3.0.4 command: mongod --quiet --logpath=/dev/null
Conclusion
To conclude lets take a look at what I believe are a few categories where the presented libraries separate themselves from each other.
Simplicity
While Flask is very lightweight and reads clearly, the Sinatra app is the simplest of the three at 23 LOC (compared to 46 for Flask and 42 for Martini). For these reasons Sinatra is the winner in this category. It should be noted however that Sinatra’s simplicity is due to more default “magic” - e.g., implicit work that happens behind the scenes. For new users this can often lead to confusion.
Here is a specific example of “magic” in Sinatra:
params # the "request.form" logic in python is done "magically" behind the scenes in Sinatra.
And the equivalent Flask code:
from flask import request params = { 'title': request.form['title'], 'summary': request.form['summary'], 'content': request.form['content'] }
For beginners to programming Flask and Sinatra are certainly simpler, but for an experienced programmer with time spent in other statically typed languages Martini does provide a fairly simplistic interface.
Documentation
The Flask documentation was the simplest to search and most approachable. While Sinatra and Martini are both well documented, the documentation itself was not as approachable. For this reason Flask is the winner in this category.
Community
Flask is the winner hands down in this category. The Ruby community is more often than not dogmatic about Rails being the only good choice if you need anything more than a basic service (even though Padrino offers this on top of Sinatra). The Golang community is still nowhere near a consensus on one (or even a few) web frameworks, which is to be expected as the language itself is so young. Python however has embraced a number of approaches to web development including Django for out-of-the-box full-featured web applications and Flask, Bottle, CheryPy, and Tornado for a micro-framework approach.
Final Determination
Note that the point of this article was not to promote a single tool, rather to provide an unbiased comparison of Flask, Sinatra, and Martini. With that said, I would select Flask (Python) or Sinatra (Ruby). If you are coming from a language like C or Java perhaps the statically-typed nature of Golang may appeal to you. If you are a beginner, Flask might be the best choice as it is very easy to get up and running and there is very little default “magic”. My recommendation is that you be flexible in your decisions when selecting a library for your project.
Questions? Feedback? Please comment below. Thank you!
Also, let us know if you’d be interested in seeing some benchmarks. | https://realpython.com/python-ruby-and-golang-a-web-service-application-comparison/ | CC-MAIN-2021-25 | refinedweb | 2,050 | 51.55 |
Hi.
I'm dealing a small program that has a SocketServer.
I have a specific simple protocol that I'm implementing by myself.
For the server I'm openeing a thread for every new session = socket
(for a user connecting to the server).
The problem is with the protocol design:
I want to have an abstract class for "Message" - a protocol message between the server and
the client.
Tryed to create inside a "Session" class (which identifies each connection = independant socket)
an abstract class for Message, so each session can instantiate new messages for every request (=message).
The problem - something with the generics doesn't seem to work.
These are the relevant headers:
Session.java
-------------------
Code :
public class Session <M extends Message> extends Thread { . . public abstract class Message { public abstract String treatMessage(InetAddress ip, int port); } . . }
Introduce.java
-------------------
Code :
public class Introduce extends Message { }
inside "Introduce" class I get error messages from eclipse:
"no enclosing instance of type Session<M> is available due to some intermediate constructor invocation."
"Session.Message is a raw type. References to generic type Session<M>. Message should be parametrized."
The design is in this way because all sessions share the same database, and this is the design I found
for this situation (inner class). If any new good idea for design I'd love to hear..
thanks.. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/4572-problem-designing-threads-abstraction-printingthethread.html | CC-MAIN-2014-42 | refinedweb | 222 | 57.16 |
Converting one data type into another is a frequent problem in python programming and it is essential to deal with it properly. Dictionary is one of the data types in python which stores the data as a key-value pair. However, the exchange of data between the server and client is done by python json which is in string format by default.
As it is necessary to convert the python string to dictionary while programming, we have presented a detailed guide with different approaches to making this conversation effective and efficient. But before jumping on the methods, let us quickly recall python string and dictionary in detail.
What is Strings in Python?
Python string is an immutable collection of data elements. It is a sequence of Unicode characters wrapped inside the single and double-quotes. Python does not have a character data type and therefore the single character is simply considered as a string of length 1. To know more about the string data type, please refer to our article "4 Ways to Convert List to String in Python".
Check out the below example for a better understanding of strings in python
For Example
a = "Python Programming is Fun" print(a)
Output
Python Programming is Fun
What is Dictionary in Python?
A dictionary is an unordered collection of data elements that is mutable in nature. Python dictionary stores the data in the form of key-value pair.
Hence we can say that dictionaries are enclosed within the curly brackets including the key-value pairs separated by commas. The key and value are separated by the colon between them.
The most important feature of the python dictionaries is that they don’t allow polymorphism. Also, the keys in the dictionary are case-sensitive. Therefore, the uppercase and lowercase keys are considered different from each other. Later, you can access the dictionary data by referring to its corresponding key name.
Check out the below example for a better understanding of dictionaries in python.
For Example
sample_dict = { "vegetable": "carrot", "fruit": "orange", "chocolate": "kitkat" } print(sample_dict)
Output
{'vegetable': 'carrot', 'fruit': 'orange', 'chocolate': 'kitkat'}
Convert String to Dict in Python
Below are 3 methods to convert string to the dictionary in python:
1) Using json.loads()
You can easily convert python string to the dictionary by using the inbuilt function of loads of json library of python. Before using this method, you have to import the json library in python using the “import” keyword.
The below example shows the brief working of json.loads() method:
For Example
import json original_string = '{"John" : 01, "Rick" : 02, "Sam" : 03}' # printing original string print("The original string is : " + str(original_string)) # using json.loads() method result = json.loads(original_string) # print result print("The converted dictionary is : " + str(result))
Output
The original string is : {"John" : 01, "Rick" : 02, "Sam" : 03} The converted dictionary is : {'John': 01, 'Rick': 02, 'Sam': 03}
2) Using ast.literal.eval()
The ast.literal.eval() is an inbuilt python library function used to convert string to dictionary efficiently. For this approach, you have to import the ast package from the python library and then use it with the literal_eval() method.
Check out the below example to understand the working of ast.literal.eval() method.
For Example
import ast original_String = '{"John" : 01, "Rick" : 02, "Sam" : 03}' # printing original string print("The original string is : " + str(original_String)) # using ast.literal_eval() method result = ast.literal_eval(original_String) # print result print("The converted dictionary is : " + str(result))
Output
The original string is : {"John" : 01, "Rick" : 02, "Sam" : 03} The converted dictionary is : {'John': 01, 'Rick': 02, 'Sam': 03}
3) Using generator expression
In this method, we will first declare the string values paired with the hyphen or separated by a comma. Later we will use the strip() and split() method of string manipulation in the for loop to get the dictionary in the usual format. Strip() method will help us to remove the whitespace from the strings. This method is not as efficient for the conversion of string to dictionary as it requires a lot of time to get the output.
Check out the below example for the string to dictionary conversion using a generator expression
For Example
original_String = "John - 10 , Rick - 20, Sam - 30" print("The original string is ",original_String) #using strip() and split() methods result = dict((a.strip(), int(b.strip())) for a, b in (element.split('-') for element in original_String.split(', '))) #printing converted dictionary print("The resultant dictionary is: ", result)
Output
The original string is John - 10 , Rick - 20, Sam - 30 The resultant dictionary is: {'John': 10, 'Rick': 20, 'Sam': 30}
Conclusion
String and dictionary data type has its own importance when it comes to programming in python. But when we wish to share the data over the network as a client-server connection, it is very important to convert a string into the dictionary for error-free data transfer. We have mentioned the three common methods to explicitly convert the string into a dictionary which will help you to make your programming faster and efficient. To learn more about dictionary and JSON in python, check our detailed guide on “5 Ways to Convert Dictionary to JSON in Python”. | https://favtutor.com/blogs/string-to-dict-python | CC-MAIN-2022-05 | refinedweb | 860 | 52.6 |
In <w8CdnQGPtuvdGRLVnZ2dnUVZ_r7inZ2d at comcast.com> Larry Bates <larry.bates at websafe.com`> writes: >kj wrote: >> Yet another noob question... >> >> Is there a way to mimic C's static variables in Python? Or something >> like it? The idea is to equip a given function with a set of >> constants that belong only to it, so as not to clutter the global >> namespace with variables that are not needed elsewhere. >> >> For example, in Perl one can define a function foo like this >> >> *foo = do { >> my $x = expensive_call(); >> sub { >> return do_stuff_with( $x, @_ ); >> } >> }; >> >> In this case, foo is defined by assigning to it a closure that has >> an associated variable, $x, in its scope. >> >> Is there an equivalent in Python? >> >> Thanks! >> >> kynn >First names in Python are just that, names that point to objects. Those objects >can contain any type of information including other objects. They are NOT >buckets where things are stored. >1) Names (variables in Perl/C) defined within a Python function are placed in >its local namespace. They are not visible in the global namespace. >2) Yes you can have a local name point to a global. This is often used in >classes with attributes because looking up local is somewhat quicker than >looking up the class attribute. >def foo(): > x = expensive_call > return do_stuff_with(x()) Maybe I'm missing your point, the goal is to have a "runtime constant" associated with the function. In the your definition of foo, expensive_call gets called every time that foo gets called; this is what I'm trying to avoid! Maybe it's easier to see what I mean with JavaScript: function foo() { if (foo.x === undefined) foo.x = expensive_call(); return do_stuff_with(foo.x); } Here, expensive_call is called only once (assuming it never returns undefined). OK, I guess that in Python the only way to do what I want to do is with objects... 
kynn -- NOTE: In my address everything before the first period is backwards; and the last period, and everything after it, should be discarded. | https://mail.python.org/pipermail/python-list/2008-July/487041.html | CC-MAIN-2014-15 | refinedweb | 333 | 74.9 |
Hi, I have been ask to do this:
Write a function wallOfEyes that takes three parameters (radius,
width and height) and displays, in a window of exactly the right size,
a “wall” of brown eyes of the given radius and dimensions. E.g., the
function call wallOfEyes(50, 4, 2) should give a 400 × 200
graphics window
This is my code so far:
def wallOfEyes(radius, width, height): radius = win = GraphWin("Wall Of Eyes", width, height) width = win.width height = win.height
I'm not sure how you change the size of the graphics window according to the dimensions given by the user. I'm also a bit stuck with radius.
Thanks :) | https://www.daniweb.com/programming/software-development/threads/325971/creating-circles | CC-MAIN-2018-47 | refinedweb | 112 | 75.74 |
Sometimes,.
I wrote about what a spanning tree is and why you might want one a few months ago, while promoting my wares. But forget all that fancy stuff. If you need to find a plain-old minimum spanning tree, and you like speaking Python, then you want MinimumSpanningTree.py from David Eppstein’s PADS library (Python Algorithms and Datastructures).
PADS doesn’t have an
easy_install package that I know of, but for finding MSTs, there are only two files you need: UnionFind.py and MinimumSpanningTree.py. Put these somewhere that Python can find them, like in your working directory.
Python makes Kruskal’s algorithm so short that I’ll just quote Eppstein’s entire
MinimumSpanningTree function here:
def MinimumSpanningTree(G): """ Return the minimum spanning tree of an undirected graph G. G should be represented in such a way that G[u][v] gives the length of edge u,v, and G[u][v] should always equal G[v][u]. The tree is returned as a list of edges. """ # Kruskal's algorithm: sort edges by weight, and add them one at a time. # We use Kruskal's algorithm, first because it is very simple to # implement once UnionFind exists, and second, because the only slow # part (the sort) is sped up by being built in to Python. subtrees = UnionFind() tree = [] edges = [(G[u][v],u,v) for u in G for v in G[u]] edges.sort() for W,u,v in edges: if subtrees[u] != subtrees[v]: tree.append((u,v)) subtrees.union(u,v) return tree
So, for example, if you have ever had a desire to find the minimum spanning tree of complete graph with uniformly random edge weights, you could do it like this:
from random import random from MinimumSpanningTree import MinimumSpanningTree as mst n = 10 G = {} for u in range(n): G[u] = {} for u in range(n): for v in range(u): r = random() G[u][v] = r G[v][u] = r T = mst(G) mst_weight = sum([G[u][v] for u,v in T])
We might as well get some beautiful pictures out of this, since it’s not much more work. For the above code, but tweaked so that every point has a random position in the unit square with distances as-the-crow-flies between them, behold.
MST of 200 points uniformly random in the unit square (according to euclidean distances).
For fun times, ask yourself, what if I wanted 2 disjoint spanning trees on this set of points? The minimum cost solution can be very different from the spanning trees you find if you yank out the MST and use Eppstein’s code on the remaining edges.
Not necessarily the minimum two disjoint spanning trees of 200 uniformly random points: formed by finding the MST (according to euclidean distances), removing it, and finding the MST in the remaining graph.
p.s. It looks like Aric Hagberg just added this mst code to NetworkX, so if you have the most-most-most recent version of that, maybe you can build up any
XGraph and then just say
T = networkx.algorithms.mst(G).
10 responses to “ACO in Python: PADS for Minimum Spanning Trees”
Lecture notes on matching algorithms:
Interesting. Re the caption of your figure with two trees, saying they’re not necessarily the minimum pair: if you find a MST, remove its edges, and find another MST of the remaining graph, then actually you do always get the two disjoint trees with the minimum total weight, due to the matroid properties of spanning trees.
I wish that were true, but try running your algorithm on this graph:
The union of disjoint spanning trees has the matroid property, but the matroid property isn’t always what it seems like it should be.
This made Matt Cary’s thesis considerably bulkier (and made the SODA version of that part take an extra year to complete).
Oops, you’re right. Sorry for the mistake.
No problem. Thanks for reading, and especially for commenting. And double-especially for making PADS available! :)
BTW, ACO stands for Algorithms, Combinatorics, and Optimization. (I’ve been told that I use too many insider acronyms.)
Pingback: ACO in Python: Minimum Weight Perfect Matchings (a.k.a. Matching Algorithms and Reproductive Health: Part 4) « Healthy Algorithms
Can you please post the complete python code you used to produce the minimum two disjoint spanning trees of 200 uniformly random points and including the code to produce the image.
How can one one generalize to three, four, five … n disjoint MSTs using Prim’s algorithm.
Jesu
Jesu: As I discussed with David above, those probably aren’t the minimum two disjoint spanning trees; generalizing from one MST to multiple disjoint MSTs seems like a tricky (but polynomial-time) business. There is a section on it in Schrijver’s combinatorial optimization bible, but I don’t know of any implementation.
As for producing the images, I’ll post it if I can find it… it’s just some monkeying around with Matplotlib, but it looks cool, huh?
Pingback: MCMC in Python: Sudoku is a strange game « Healthy Algorithms | http://healthyalgorithms.com/2009/01/13/aco-in-python-pads-for-minimum-spanning-trees/ | CC-MAIN-2014-42 | refinedweb | 855 | 67.49 |
I've written a CAN-bus ECU simulator:
You can make your own board or buy a ready-made board with Teensy 3.2 installed:...
In the Transmitter and Receiver you need to specify the baud rate like this:
Can1.begin(500000);
In the receiver, try this:

void loop() {
  CAN_message_t rxmsg;
  while (Can1.read(rxmsg)) {
    // process the received frame here
  }
}
Your schematic on the Teensy doesn't show which pins you have used. There are two CAN ports on the 3.6.
Show how you wired up CAN_TX and CAN_RX.
Wiring looks ok.
Post the code for both Teensy.
You need to post the schematic. Can't see a Vref pin on the TJA1051T/3.
I would like to see photos of the anti-static bag and the printed card as well. If they are knockoffs they might not have bothered with these details.
Wow, with Open() it compiled. That means they have not compiled their example.
I can now move to the next stage.
Thank you for your help.
I'm using Teensy 3.2 and trying to compile the example from this library:
The library uses inheritance from this library:
...
There is a spreadsheet of the pinout but Teensy is not on there:
Looks like everybody is using the M.2 connector.
Wow, interesting..
The MicroMod M.2 connector has 75 pins. That means more IO pins from the Teensy can be brought out.
Yes, the default for extended frame is disabled. So you have to manually enable it.
I'm using this Teensy 3.2 board:...
I know what it is now. It is the extended frame filter.
Change to this:
void setup() {
pinMode(LED_BUILTIN, OUTPUT);
Serial.begin(115200);
while (!Serial);
Can0.begin(250000); //...
Strange your program won't compile in my version 1.8.12
It doesn't have digitalToggle() function.
I had to change it to:
#include <FlexCAN.h>
static CAN_message_t msg;
void setup() {
Is there a terminator on the transceiver board ?
Try and change:
if (Can0.available()) {
Can0.read(msg);
To
if(Can0.read(msg))
{
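Putting the pieces from these snippets together, a minimal FlexCAN receive sketch for a Teensy 3.x might look like the following (an illustrative, untested sketch; the 250000 baud rate and the printed fields are assumptions for demonstration):

```cpp
#include <FlexCAN.h>

static CAN_message_t msg;

void setup() {
  Serial.begin(115200);
  while (!Serial);          // wait for the serial monitor
  Can0.begin(250000);       // baud rate must match the bus
}

void loop() {
  // Can0.read() returns true only when a frame was actually fetched,
  // so polling it directly avoids the separate available()/read() pair.
  if (Can0.read(msg)) {
    Serial.print("ID: 0x");
    Serial.print(msg.id, HEX);
    Serial.print(" len: ");
    Serial.println(msg.len);
  }
}
```

Remember the transceiver still needs both CAN TX and RX wired, and a terminated bus, even for receive-only use.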
You need to connect the CAN Tx pin as well even if you only want to receive.
Pins 14 & 15 are not CAN3.
Try and use
FlexCAN_T4<CAN1, RX_SIZE_256, TX_SIZE_16> can1;
On pin
CRX1 pin 23
CTX1 pin 22.
How are you connecting it to your car ?
Are you sure the baud rate is 1000000 on your car ?
I'd like to see the T5 have the ability to drive larger LCDs, 800x480 or higher. SPI would be too slow; parallel RGB would take up too many IOs.
MIPI-DSI would be a good option to go for.
I've seen...
Do you have the schematic we can have a look at ?
Do you have any surge protection supply ?
You could use a SPI FRAM:
Best to have a 100nF close to pin 20 of the shifter.
First you need to change your display driver from:
#include <Adafruit_SSD1306.h>
to
#include <RA8875.h>
Then setup the SPI pins to your board. Might be worth trying out some RA8875...
My tools for SMD work:
Fine tip soldering iron
x7 magnifier, use for inspection
0.3mm lead free solder
Tweezers
Hi MichaelMeissner,
Is there any simple sketch to test the psram and flash after they been soldered?
I've seen the long thread about T4.1 but would nice to have a simple sketch to see they are...
Sparkfun have them in stock now. No restriction on how many you can buy. When the Teensy 4.0 was released it was only 2 per customer.
Just ordered some.
Got an email from Sparkfun. Teensy 4.1 should be available from today !!
Oh its pre-order.
Nice project.
I like the combination of electronics and wood.
The SPI is quite slow. The driver won't go above 22MHz.
Check in the file RA8875UserSettings.h about line 190
const static uint32_t MAXSPISPEED = 22000000UL;
I'm sure the RA8875 can go...
Code is on Github:
Ensure the following library are installed first:...
I've got it working with the Teensy 4.0 and a 3.5" LCD with ILI9488_t3 driver. Managed to get the SPI running at 60MHz. Its fast.
The Button and Label test code are at:
Ensure you have installed these library first:...
Agree, I had the SPI working at 60MHz connected to an ILI9488 LCD. Its really fast and I'm surprised it works at 60MHz.
The LCD is working at 60MHz. I'm thinking do I really need to drive it in 8...
Are there any 8 GPIO pins that are mapped directly to memory so they can be written in one byte on the Teensy 4.1? That is, if I want to control 8 GPIO at once by writing one byte to memory. There is a...
Thank you KurtE for the pinout spreadsheet.
How come there is MISO1 on D1 and another MISO1 on D39 ?
SOLVED !!
I've changed it to use writeRect instead:
/* Display flushing */
void my_disp_flush(lv_disp_drv_t *disp, const lv_area_t *area, lv_color_t *color_p)
{
uint16_t width =...
Ok, I've deleted the original RA8875 library and install this one:
but still the same problem. I think something is not right with drawPixels()...
I'm trying to use the LittlevGL library with RA8875 LCD on Teensy 4.0. The LCD is a 800x480 and the RA8875 library is from sumotoy v0.70b11p11
There is no LittlevGL port for the Teensy 4.0 so I've...
Hi @prickle I'm try to get LittleVGL to compile for Teensy 4.0 and looking at your example in post #1.
The LV_VDB_SIZE_IN_BYTES is not defined. I've made it to 307200. (480x320*16)/8 = 307200
...
Boris is in intensive care now. Hope he pulls through.
Is there somewhere I can download an example code for use with the Teensy 4 and ILI9341 ?
I wonder if the postoffice will shut. If it does then I will have to shut my online store.
Actually it wouldn't make that much difference, sales has been right down.
It is not very clear from the photo, but have you actually soldered some header pins onto the Teensy and then plugged it onto the breadboard?
Looks like the RX1 and TX1 on the Teensy is not connected to...
Wow, that is smooth.
Would it be possible for you to share the code ?
@mjs513 Using frame buffer works. No flicker on screen now.
@KurtE Using opaque fonts also works.
Now I have choice of which one to use. I will continue with my project.
Thanks for...
I'm using a 2.8" ILI9341 LCD on Teensy 4.0
To update a counter I first fill a rectangle with the background colour to erase the old number, then print the new number.
There is always a small annoying...
The Pyboard D-series uses an interesting concept. Big thru holes at 0.1" pitch, then smaller holes between the 0.1" holes, making the pitch 0.05".
You can get 0.05" pin header and sockets from Mouser....
Using high density connectors like the one on the Portenta H7 is going to make prototyping an add on board quite difficult. Not only the fine pitch but spacing between the two connector is quite...
Yes, a preview of the layout would be nice.
#include <math.h>
int isfinite(real-floating x);
The isfinite() macro shall determine whether its argument has a finite value (zero, subnormal, or normal, and not infinite or NaN). First, an argument represented in a format wider than its semantic type is converted to its semantic type. Then determination is based on the type of the argument.
The following sections are informative.
The Base Definitions volume of POSIX.1-2008, <math.h>
Any typographical or formatting errors that appear in this page are most likely to have been introduced during the conversion of the source files to man page format. To report such errors, see .
import "github.com/godoctor/godoctor/engine"
Package engine is the programmatic entrypoint to the Go refactoring engine.
AddDefaultRefactorings invokes AddRefactoring on each of the Go Doctor's built-in refactorings.
Clients implementing a custom Go Doctor may: 1. not invoke this at all, 2. invoke it after adding custom refactorings, or 3. invoke it before adding custom refactorings. The choice between #2 and #3 will determine whether the client's custom refactorings are listed before or after the built-in refactorings when "godoctor -list" is run.
func AddRefactoring(shortName string, newRefac refactoring.Refactoring) error
AddRefactoring allows custom refactorings to be added to the refactoring engine. Invoke this method before starting the command line or protocol driver.
AllRefactoringNames returns the short names of all refactorings in an order suitable for display in a menu.
ClearRefactorings removes all registered refactorings from the engine. This should only be used for testing.
func GetRefactoring(shortName string) refactoring.Refactoring
GetRefactoring returns a Refactoring keyed by the given short name. The short name must be one of the keys in the map returned by AllRefactorings.
Package engine imports 2 packages and is imported by 5 packages. Updated 2018-05-18.
README
Tegola
Tegola is a vector tile server delivering Mapbox Vector Tiles with support for PostGIS and GeoPackage data providers. User documentation can be found at tegola.io
Features
- Native geometry processing (simplification, clipping, make valid, intersection, contains, scaling, translation)
- Mapbox Vector Tile v2 specification compliant.
- Embedded viewer with auto generated style for quick data visualization and inspection.
- Support for PostGIS and GeoPackage data providers. Extensible design to support additional data providers.
- Support for several cache backends: file, s3, redis, azure blob store.
- Cache seeding and invalidation via individual tiles (ZXY), lat / lon bounds and ZXY tile list.
- Parallelized tile serving and geometry processing.
- Support for Web Mercator (3857) and WGS84 (4326) projections.
- Support for AWS Lambda.
Usage
tegola is a vector tile server Version: v0.10.2 Usage: tegola [command] Available Commands: cache Manipulate the tile cache help Help about any command serve Use tegola as a tile server version Print the version number of tegola Flags: --config string path to config file (default "config.toml") -h, --help help for tegola Use "tegola [command] --help" for more information about a command.
Running tegola as a vector tile server
- Download the appropriate binary of tegola for your platform via the release page.
- Set up your config file and run. By default tegola looks for a config.toml in the same directory as the binary. You can set a different location for the config.toml using a command flag:
./tegola serve --config=/path/to/config.toml
Server Endpoints
/
The server root will display a built in viewer with an auto generated style. For example:
/maps/:map_name/:z/:x/:y
Return vector tiles for a map. The URI supports the following variables:
:map_name is the name of the map as defined in the config.toml file.
:z is the zoom level of the map.
:x is the row of the tile at the zoom level.
:y is the column of the tile at the zoom level.
/maps/:map_name/:layer_name/:z/:x/:y
Return vector tiles for a map layer. The URI supports the same variables as the map URI with the additional variable:
:layer_name is the name of the map layer as defined in the config.toml file.
/capabilities
Return a JSON encoded list of the server's configured maps and layers with various attributes.
/capabilities/:map_name
Return TileJSON details about the map.
/maps/:map_name/style.json
Return an auto generated Mapbox GL Style for the configured map.
Configuration
The tegola config file uses the TOML format. The following example shows how to configure a PostGIS data provider with two layers. The first layer includes a tablename, geometry_field and an id_field. The second layer uses a custom sql statement instead of the tablename property.

Under the maps section, map layers are associated with data provider layers and their min_zoom and max_zoom values are defined. Optionally, default_tags can be set up, which will be encoded into the layer. If the same tags are returned from a data provider, the data provider's values will take precedence.
[webserver] port = ":9090" # port to bind the web server to. defaults ":8080" [webserver.headers] Access-Control-Allow-Origin = "*" Cache-Control = "no-cache, no-store, must-revalidate" [cache] # configure a tile cache type = "file" # a file cache will cache to the local file system basepath = "/tmp/tegola" # where to write the file cache # register data providers [[providers]] name = "test_postgis" # provider name is referenced from map layers (required) type = "postgis" # the type of data provider. currently only supports postgis (required) host = "localhost" # postgis database host (required) port = 5432 # postgis database port (required) database = "tegola" # postgis database name (required) user = "tegola" # postgis database user (required) password = "" # postgis database password (required) srid = 3857 # The default srid for this provider. Defaults to WebMercator (3857) (optional) max_connections = 50 # The max connections to maintain in the connection pool. Default is 100. (optional) ssl_mode = "prefer" # PostgreSQL SSL mode*. Default is "disable". (optional) [[providers.layers]] name = "landuse" # will be encoded as the layer name in the tile tablename = "gis.zoning_base_3857" # sql or tablename are required geometry_fieldname = "geom" # geom field. default is geom id_fieldname = "gid" # geom id field. default is gid srid = 4326 # the srid of table's geo data. Defaults to WebMercator (3857) [[providers.layers]] name = "roads" # will be encoded as the layer name in the tile tablename = "gis.zoning_base_3857" # sql or tablename are required geometry_fieldname = "geom" # geom field. default is geom geometry_type = "linestring" # geometry type. if not set, tables are inspected at startup to try and infer the gemetry type id_fieldname = "gid" # geom id field. default is gid fields = [ "class", "name" ] # Additional fields to include in the select statement. 
[[providers.layers]] name = "rivers" # will be encoded as the layer name in the tile geometry_fieldname = "geom" # geom field. default is geom id_fieldname = "gid" # geom id field. default is gid # Custom sql to be used for this layer. Note: that the geometery field is wraped # in a ST_AsBinary() and the use of the !BBOX! token sql = "SELECT gid, ST_AsBinary(geom) AS geom FROM gis.rivers WHERE geom && !BBOX!" [[providers.layers]] name = "buildings" # will be encoded as the layer name in the tile geometry_fieldname = "geom" # geom field. default is geom id_fieldname = "gid" # geom id field. default is gid # Custom sql to be used for this layer as a sub query. ST_AsBinary and # !BBOX! filter are applied automatically. sql = "(SELECT gid, geom, type FROM buildings WHERE scalerank = !ZOOM! LIMIT 1000) AS sub" # maps are made up of layers [[maps]] name = "zoning" # used in the URL to reference this map (/maps/:map_name) [[maps.layers]] name = "landuse" # name is optional. If it's not defined the name of the ProviderLayer will be used. # It can also be used to group multiple ProviderLayers under the same namespace. provider_layer = "test_postgis.landuse" # must match a data provider layer min_zoom = 12 # minimum zoom level to include this layer max_zoom = 16 # maximum zoom level to include this layer [maps.layers.default_tags] # table of default tags to encode in the tile. SQL statements will override class = "park" [[maps.layers]] name = "rivers" # name is optional. If it's not defined the name of the ProviderLayer will be used. # It can also be used to group multiple ProviderLayers under the same namespace. provider_layer = "test_postgis.rivers" # must match a data provider layer dont_simplify = true # optionally, turn off simplification for this layer. Default is false. dont_clip = true # optionally, turn off clipping for this layer. Default is false. min_zoom = 10 # minimum zoom level to include this layer max_zoom = 18 # maximum zoom level to include this layer
* more on PostgreSQL SSL mode here. The postgis config also supports "ssl_cert" and "ssl_key" options, corresponding semantically with "PGSSLKEY" and "PGSSLCERT". These options do not check for environment variables automatically. See the section below on injecting environment variables into the config.
Environment Variables
Config TOML
Environment variables can be injected into the configuration file. One caveat is that the injection has to be within a string, though the value it represents does not have to be a string.
The above config example could be written as:
# register data providers [[providers]] name = "test_postgis" type = "postgis" host = "${POSTGIS_HOST}" # postgis database host (required) port = "${POSTGIS_PORT}" # recall this value must be an int database = "${POSTGIS_DB}" user = "tegola" password = "" srid = 3857 max_connections = "${POSTGIS_MAX_CONN}"
SQL Debugging
The following environment variables can be used for debugging:
TEGOLA_SQL_DEBUG specifies the type of SQL debug information to output. It currently supports two values:
LAYER_SQL: print layer SQL as they are parsed from the config file.
EXECUTE_SQL: print SQL that is executed for each tile request and the number of items it returns or an error.
Usage
$ TEGOLA_SQL_DEBUG=LAYER_SQL tegola serve --config=/path/to/conf.toml
The following environment variables can be used to control various runtime options:
TEGOLA_OPTIONS specifies a set of options, comma or space delimited. It supports the following options:
DontSimplifyGeo to turn off simplification for all layers.
SimplifyMaxZoom={{int}} to set the max zoom that simplification will apply to. (14 is default)
Client side debugging
When debugging client side, it's often helpful to see an outline of a tile along with its Z/X/Y values. To encode a debug layer into every tile, add the query string variable debug=true to the URL template being used to request tiles. For example:{z}/{x}/{y}.vector.pbf?debug=true
The requested tile will encode a layer with the name value set to debug and include two features:
debug_outline: a line feature that traces the border of the tile
debug_text: a point feature in the middle of the tile with the following tags:
zxy: a string with the Z, X and Y values formatted as: Z:0, X:0, Y:0
Building from source
Tegola is written in Go and requires Go 1.x to compile from source. (We support the three newest versions of Go.) To build tegola from source, make sure you have Go installed and have cloned the repository to your
$GOPATH. Navigate to the repository then run the following commands:
cd cmd/tegola/ go build
You will now have a binary named
tegola in the current directory which is ready for running.
Build Flags The following build flags can be used to turn off certain features of tegola:
noAzblobCache - turn off the Azure Blob cache back end.
noS3Cache - turn off the AWS S3 cache back end.
noRedisCache - turn off the Redis cache back end.
noPostgisProvider - turn off the PostGIS data provider.
noGpkgProvider - turn off the GeoPackage data provider. Note, GeoPackage uses CGO and will be turned off if the environment variable CGO_ENABLED=0 is set prior to building.
noViewer - turn off the built-in viewer.
pprof - enable the Go profiler. Start the profile server by setting the TEGOLA_HTTP_PPROF_BIND environment variable (e.g. TEGOLA_HTTP_PPROF_BIND=localhost:6060).
Example of using the build flags to turn off the Redis cache back end, the GeoPackage provider and the built-in viewer.
go build -tags 'noRedisCache noGpkgProvider noViewer'
Turning off CGO Tegola uses CGO for certain functionality (i.e. GeoPackage support). To build tegola without CGO use the following command:
CGO_ENABLED=0 go build
License
See license file in repo.
Documentation
Overview
Package tegola describes the basic geometries that can be used to convert to and from.
Index
- Constants
- Variables
- func GeometeryDecorator(g Geometry, ptsPerLine int, comment string, ptDecorator func(pt Point) string) string
- func GeometryAsJSON(g Geometry, w io.Writer) error
- func GeometryAsMap(g Geometry) map[string]interface{}
- func GeometryAsString(g Geometry) string
- func IsCollectionEqual(c1, c2 Collection) bool
- func IsGeometryEqual(g1, g2 Geometry) bool
- func IsLineStringEqual(l1, l2 LineString) bool
- func IsMultiLineEqual(ml1, ml2 MultiLine) bool
- func IsMultiPointEqual(mp1, mp2 MultiPoint) bool
- func IsMultiPolygonEqual(mp1, mp2 MultiPolygon) bool
- func IsPoint3Equal(p1, p2 Point3) bool
- func IsPointEqual(p1, p2 Point) bool
- func IsPolygonEqual(p1, p2 Polygon) bool
- func LineAsPointPairs(l LineString) (pp []float64)
- func Tile2Lat(y, z uint64) float64
- func Tile2Lon(x, z uint64) float64
- type Collection
- type Geometry
- type LineString
- type MultiLine
- type MultiPoint
- type MultiPolygon
- type Point
- type Point3
- type Polygon
- type Tile
-
- func (t *Tile) Bounds() [4]float64
- func (t *Tile) Deg2Num() (x, y int)
- func (t *Tile) FromPixel(srid int, pt [2]float64) (npt [2]float64, err error)
- func (t *Tile) Init()
- func (t *Tile) Num2Deg() (lat, lng float64)
- func (t *Tile) PixelBufferedBounds() (bounds [4]float64, err error)
- func (t *Tile) ToPixel(srid int, pt [2]float64) (npt [2]float64, err error)
- func (t *Tile) ZEpislon() float64
- func (t *Tile) ZLevel() uint
- func (t *Tile) ZRes() float64
Constants
const ( WebMercator = 3857 WGS84 = 4326 )
const ( DefaultEpislon = 10.0 DefaultExtent = 4096 DefaultTileBuffer = 64.0 MaxZ = 22 )
Variables
var ( WebMercatorBounds = &geom.Extent{-20026376.39, -20048966.10, 20026376.39, 20048966.10} WGS84Bounds = &geom.Extent{-180.0, -85.0511, 180.0, 85.0511} )
var UnknownConversionError = fmt.Errorf("do not know how to convert value to requested value")
Functions
func GeometeryDecorator
func GeometryAsMap
func GeometryAsString
func IsCollectionEqual
func IsCollectionEqual(c1, c2 Collection) bool
CollectionIsEqual will check to see if the provided collections are equal. This function does not check to see if the collections contain any recursive structures, and if there are any recursive structures it will hang. If the collections contains any unknown geometries it will be assumed to not match.
func IsGeometryEqual
GeometryIsEqual will check to see if the two given geometries are equal. This function does not check to see if there are any recursive structures; if there are any recursive structures it will hang. If the type of the geometry is unknown, it is assumed that it does not match any other geometries.
func IsLineStringEqual
func IsLineStringEqual(l1, l2 LineString) bool
IsLineStringEqual will check to see if the two linesstrings provided are equal.
func IsMultiLineEqual
IsMultiLineEqual will check to see if the two Multilines that are provided are equal.
func IsMultiPointEqual
func IsMultiPointEqual(mp1, mp2 MultiPoint) bool
IsMultiPointEqual will check to see if the two provided multipoints are equal
func IsMultiPolygonEqual
func IsMultiPolygonEqual(mp1, mp2 MultiPolygon) bool
MultiPolygonIsEqual will check to see if the two provided multi-polygons are equal.
func IsPoint3Equal
IsPoint3Equal will check to see if the two 3d tegola points are equal.
func IsPointEqual
IsPointEqual will check to see if the two tegola points are equal.
func IsPolygonEqual
PolygonIsEqual will check to see if the two provided polygons are equal.
func LineAsPointPairs
func LineAsPointPairs(l LineString) (pp []float64)
Types
type Collection
Collection is a collections of different geometries.
type LineString
LineString is a Geometry of a line.
type MultiLine
type MultiLine interface { Geometry Lines() []LineString }
MultiLine is a Geometry with multiple individual lines.
type MultiPoint
MultiPoint is a Geometry with multiple individual points.
type MultiPolygon
MultiPolygon describes a Geometry multiple intersecting polygons. There should only one exterior polygon, and the rest of the polygons should be interior polygons. The interior polygons will exclude the area from the exterior polygon.
type Point
Point is how a point should look like.
type Point3
Point3 is a point with three dimensions; at current is just converted and treated as a point.
type Polygon
type Polygon interface { Geometry Sublines() []LineString }
Polygon is a multi-line Geometry where all the lines connect to form an enclose space.
type Tile
type Tile struct { Z uint X uint Y uint Lat float64 Long float64 Tolerance float64 Extent float64 Buffer float64 // contains filtered or unexported fields }
Tile slippy map tilenames
func NewTileLatLong
NewTileLatLong will return a non-nil tile object.
func (*Tile) Bounds
Bounds returns the bounds of the Tile as defined by the North most Longitude, East most Latitude, South most Longitude, West most Latitude.
func (*Tile) FromPixel
func (*Tile) PixelBufferedBounds
func (*Tile) ZRes
ZRes takes a web mercator zoom level and returns the pixel resolution for that scale, assuming t.Extent x t.Extent pixel tiles. Non-integer zoom levels are accepted. ported from: 40075016.6855785 is the equator in meters for WGS84 at z=0
How to define Global Variables in C#.Net
This article gives you an idea about how to define global variables in C#.NET and access these variables from anywhere in a project. Unlike ASP.NET, there are no Session or Application variables in C#.NET that will help us to store values globally.
Every project needs some data transfer from one page to another.
Depending upon the coding language and environment, data transfer methods vary.
Let's take the example of ASP.NET, which has its own state management techniques that are helpful for data transfer.
The following four simple questions will clear up the whole idea.
Q1. Can I transfer data using C#.NET?
Yes, we can transfer data by creating constructors of forms, and various other methods are there. Here is my article
ASP.NET provides us a very easy way to define global variables: Session variables, which are unique to every logged-in user. Session can be used for global variables.
Q2. Can I declare a global variable in C#.NET?
Yes, we can define global variables in C#.NET which are accessible everywhere in the project.
Q3. How can I declare a global variable in C#.NET?
Follow these steps to declare a global variable in C#.NET.
Step 1. Open up your form (the one on which you have to declare the global variable).
Step 2. You will see the class declaration under the namespace. Just define the variable there.
e.g.
namespace namespace1
{
public class clsAccess
{
public static int iGlobal;
}
}
Q4. How can I use it anywhere in the project?
Access the variable like ClassName.VariableName
e.g.
MessageBox.Show("Global Variables = " + clsAccess.iGlobal.ToString());
No need to create extra class object.
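For instance, a button handler on any other form (a hypothetical Form2 here, with a btnShow button) can read and update the same value, with no object of clsAccess ever created:

```csharp
using System;
using System.Windows.Forms;

namespace namespace1
{
    public partial class Form2 : Form
    {
        private void btnShow_Click(object sender, EventArgs e)
        {
            // Same static variable as everywhere else in the project.
            clsAccess.iGlobal += 1;
            MessageBox.Show("Global Variables = " + clsAccess.iGlobal.ToString());
        }
    }
}
```

Because iGlobal is static, every form sees the same value; from code outside namespace1 you would qualify it as namespace1.clsAccess.iGlobal or add a using directive.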
Hope it helps all.
Regards
koolprasad2003
Nice post, more or less the same as
Copyright © 2007 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This Working Draft defines features of the Scalable Vector Graphics (SVG) Language that are specifically for printing environments. The W3C Membership and other interested parties are invited to review the document and send comments to public-svg-print@w3.org (archives) until 8 February 2008. There is an accompanying SVG Print 1.2, Part 1: Primer that lists the ways SVG Print, along with other SVG documents, may be used for SVG Printing.
This document has been produced by the W3C SVG Working Group as part of the W3C Graphics Activity within the Interaction Domain. It is a snapshot of a work in progress:
Interaction with other print standards. The SVG Working Group believe that some design decisions would be best made in collaboration with other print standards bodies, and would welcome liaison statements in this area.
This document lists features that may be used in the context of printing. The various usage scenarios are listed in the SVG Print Requirements document.
This document is normative.
This document contains explicit conformance criteria that overlap with some RNG definitions in requirements. If there is any conflict between the two, the explicit conformance criteria are the definitive reference.
This document contains conformance criteria for a number of different entities:
Some conformance criteria allow or require the User Agent for Printing and the User Agent for Print Preview to communicate with each other.
The conformance criteria have awkward names - any suggestions for replacing "A User Agent for Printing", "A User Agent for Print Preview", and "A User Agent for Printing or Print Preview" would be welcome.
TODO: The above terms will probably be replaced as follows:
A User Agent for Print Preview SHOULD, when requested to print, provide a mechanism for passing details for that printing to a User Agent for Printing. The mechanisms for doing so are covered by other specifications and are outside the scope of the present one.
A User Agent for Printing SHOULD provide a mechanism for accepting details for that printing (from a User Agent for Print Preview, or other sources). The mechanisms for doing so are covered by other specifications and are outside the scope of the present one.
A User Agent for Printing MUST NOT print any content if there is no pageSet element and if there is no content that would otherwise mark the SVG canvas.
A User Agent for Printing MUST comply with the Static Profile of SVG Tiny 1.2.
A User Agent for Print Preview MUST comply with the Static Profile of SVG Tiny 1.2.
The pageSet element is the container for a set of page elements.
<define name='pageSet'>
  <element name='pageSet'>
    <ref name='pageSet.AT'/>
    <zeroOrMore><ref name='page'/></zeroOrMore>
  </element>
</define>

<define name='pageSet.AT' combine='interleave'>
  <ref name='svg.Core.attr'/>
</define>
A User Agent for Printing or Print Preview MUST treat all pageSet elements beyond the first as unsupported elements.
A User Agent for Printing or Print Preview MUST treat all children of a pageSet element as unsupported elements except for page elements, masterPage elements and use elements.
A User Agent for Printing or Print Preview MUST treat all elements from the SVG namespace between the closing pageSet tag and the closing svg tag as unsupported elements.
The page element appears only as a child of a pageSet element.
<define name='page'>
  <element name='page'>
    <ref name='page.AT'/>
    <zeroOrMore><ref name='svg.G.group'/></zeroOrMore>
  </element>
</define>

<define name='page.AT' combine='interleave'>
  <ref name='svg.Core.attr'/>
  <ref name='svg.page-orientation.attr'/>
</define>
Conceptually, the page element is similar to an svg element but without transformation and positioning. All pages are in the coordinate system of their pageSet, which is in the coordinate system of the root svg element.
A User Agent for Printing or Print Preview MUST, when rendering a page, render only the children of the page, and any relevant Master Pages.
A User Agent for Printing or Print Preview MUST treat page elements that are not inside the pageSet element as unsupported elements.
Content that has references inside a page element MUST refer only to elements within the same page or in the SVG document before the pageSet element.
A User Agent for Printing or Print Preview MUST treat any references in a page that refer to content in a different page in the same document as unsupported.
Content MAY have references inside a page element that refer to external documents, but authors SHOULD use the externalResourcesRequired feature in this case.
A User Agent for Printing MAY discard a page element and all its children after that page element has been dealt with.
The masterPage element appears only as a child of a pageSet element.
<define name='masterPage'>
  <element name='masterPage'>
    <ref name='masterPage.AT'/>
    <zeroOrMore><ref name='svg.G.group'/></zeroOrMore>
  </element>
</define>

<define name='masterPage.AT' combine='interleave'>
  <ref name='svg.Core.attr'/>
  <ref name='svg.rendering-order.attr'/>
</define>
A User Agent for Printing or Print Preview MUST NOT directly render any masterPage element or any of its children.
The rendering-order attribute may appear only on the masterPage element.
Attribute definition:
A Foreground Master Page is a Master Page with rendering-order set to "over".
A Background Master Page is a Master Page with rendering-order set to "under".
It is possible to have multiple Background Master Pages and/or Foreground Master Pages in an SVG Print document, though only one of each can be active at any given time. Whenever the renderer encounters a Background or Foreground Master Page, it replaces the current Background or Foreground Master Page with the new one, and subsequent pages use the newer one.
A User Agent for Printing or Print Preview MUST apply one Background Master Page and one Foreground Master Page to a displayed page. The Background Master Page and Foreground Master Page MUST be the ones most closely preceding the page in document order.
A User Agent for Printing or Print Preview MUST render content that is part of the Background Master Page on each displayed page, before the page content.
A User Agent for Printing or Print Preview MUST render content that is part of the Foreground Master Page on each displayed page, after the page content.
A User Agent for Printing or Print Preview MUST discard the Foreground Master Page and all its children when the next Foreground Master Page is parsed.
A User Agent for Printing or Print Preview MUST discard the Background Master Page and all its children when the next Background Master Page is parsed.
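As a non-normative sketch, the master-page mechanism described above might be used like this (element and attribute names are those defined in this section; the drawing content is illustrative only):

```xml
<svg xmlns="http://www.w3.org/2000/svg" version="1.2">
  <!-- Content before the pageSet forms the initial Background Master Page -->
  <text x="10" y="20">Draft watermark (appears under every page)</text>
  <pageSet>
    <masterPage rendering-order="under">
      <!-- Replaces the initial background for subsequent pages -->
      <rect x="0" y="0" width="100" height="10"/>
    </masterPage>
    <masterPage rendering-order="over">
      <!-- Foreground Master Page: rendered after each page's content -->
      <text x="10" y="290">Confidential</text>
    </masterPage>
    <page>
      <text x="10" y="60">Content of page 1</text>
    </page>
    <page>
      <text x="10" y="60">Content of page 2</text>
    </page>
  </pageSet>
</svg>
```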
The default master pages are made available as a simple default that gives sensible printing behaviour for documents that do not have pageSet elements.
A User Agent for Printing or Print Preview MUST use all elements in the SVG document after the opening svg tag and before the opening pageSet tag as the initial Background Master Page, applying to all page elements that appear before the next Background Master Page in document order.
A User Agent for Printing or Print Preview MUST set the initial Foreground Master Page to be empty.
Note that, as a consequence of the foregoing conformance statements, the default background master page (all the elements between the svg element and the pageSet element) is no longer used as a default master page when another background master page is set. Elements in the defs element of an svg document will be available to all pages in the pageSet element.
Content that has masterPage elements SHOULD NOT make use of the default background master page (i.e. should not have content between the svg element and the pageSet element).
A User Agent for Printing MAY allow control over the job information (e.g. JDF), by means of an encapsulation of an SVG file or document fragment.
A User Agent for Printing or Print Preview MAY, when requested to print an SVG document, and in the absence of conflicting Job Control information, provide a means for selecting which pages to print.
A User Agent for Printing SHOULD, when requested to print an SVG document, and in the absence of conflicting Job Control information, print all pages, in the order in which the pages appear in the SVG document.
If an SVG document fragment with multiple pages is embedded in, or referenced by, a document from another namespace, such as XHTML, then typically it is up to that namespace to define the processing of pages.
The orientation of each page element can be controlled by the page-orientation property. This enables the content to define whether a portrait or landscape mode is used for display and printing.
A User Agent for Printing or Print Preview MUST display each page with the orientation as specified by page-orientation.
The use-master-page attribute may appear only on a use element referencing an external page element.
Attribute definition:
Both the current Foreground Master Page and Background Master Page of the referencing document must be used when use-master-page is set to current.
Both the current Foreground Master Page and Background Master Page of the externally referenced document must be used when use-master-page is set to external.
By default, if the use-master-page attribute is not specified on the use element referencing an external page, then both the current Foreground Master Page and Background Master Page of the referencing document must be used.
A User Agent for Printing or Print Preview MUST use the current Foreground Master Page and Background Master Page of the referencing document when use-master-page is set to a value of current.
A User Agent for Printing or Print Preview MUST use the current Foreground Master Page and Background Master Page of the externally referenced document when use-master-page is set to a value of external.
A User Agent for Printing or Print Preview MUST use the current Foreground Master Page and Background Master Page of the referencing document if use-master-page is not set on the use element.
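A sketch of the attribute in use (the file name and fragment identifier are hypothetical):

```xml
<!-- Pull one page in from another document, keeping the referencing
     document's current master pages (the default behavior) -->
<use xlink:href="appendix.svg#pageA" use-master-page="current"/>

<!-- Use the master pages of the externally referenced document instead -->
<use xlink:href="appendix.svg#pageA" use-master-page="external"/>
```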
TODO: Need feedback on this feature. Are there compelling use cases for having this extra complexity? What benefit does it provide?
Attribute definition:
A User Agent for Printing or Print Preview MUST, for the purposes of printing or print preview, treat a value of noPrint for the print-display attribute as though the element has display=none.
TODO: Need feedback on this feature. display-print is a replacement for noPrint. The attribute was originally a boolean. This change is still under consideration. The following alternatives are available:
Content that uses CSS SHOULD define all styles for use in the entire document at the start of the SVG document, prior to any page elements.
Content intended for print applications SHOULD be authored using presentation attributes exclusively.
A User Agent for Printing MUST NOT run any script or start any animation in the SVG document.
A User Agent for Print Preview MUST NOT run any script or start any animation in any page being displayed.
SVG Print extends the control of color, relative to SVG Tiny 1.2, in two ways.
The color-interpolation property, not in SVG Tiny 1.2 but used in SVG 1.1 [SVG11], is extended by this specification to add new values using the CIE L*a*b* color space. Both the cartesian (CIE-Lab) and polar (CIE-LCHab) forms are supported.
A User Agent for Printing or Print Preview MUST use the color space defined by color-interpolation for calculations when interpolating between gradient stops and alpha compositing of graphics elements into the current background.
The 'color-interpolation' property specifies the color space for gradient interpolations and alpha compositing.
When a child element is blended into a background, the value of the 'color-interpolation' property on the child determines the type of blending, not the value of the 'color-interpolation' on the parent.
The definition of paint in SVG Tiny 1.2 [SVGT12] is extended by this specification to add the icc-color values from paint in SVG Full 1.1 [SVG11] and also to add new values for named colors and uncalibrated (device) colors.
TODO: Need feedback on this feature. Looking into the possibility of having ICC or Name ICC fallback for Device Color as well. ICC or Named ICC should have priority over an sRGB fallback if both types of fallback are specified.
Note that by default, color interpolation occurs in the sRGB color space, even if an ICC-based color specification is provided, unless the 'color-interpolation' property is set to one of the CIE Lab color spaces. If an sRGB color interpolation space is specified, all ICC-based colors used for the interpolation must be converted into the sRGB interpolation colorspace.
As with SVG Tiny 1.2, colors may be specified in the sRGB color space (see [sRGB]). Since all SVG Print and SVG Print Preview User Agents are color management capable, the rendering requirements are more strict than for SVG User Agents [SVG11] where color management is optional.
When an sRGB color is used - because it is the sole color specification, or in a permitted fallback situation - a conformant SVG Print or SVG Print Preview User Agent shall render it in conformance with the ICC profile for sRGB to obtain the desired color appearance.
As with SVG Full 1.1, SVG Print content may specify color using an ICC profile (see [ICC42]). For resiliency and for compatibility with SVG Tiny 1.2, an sRGB fallback must still be provided.
If ICC-based colors are provided, a conformant SVG Print or SVG Print Preview user agent MUST use the ICC-based color in preference to the sRGB fallback color, unless the ICC color profile is unavailable, malformed, or uses a profile connection space other than CIE XYZ or CIE LAB.
When an ICC color is used, a conformant SVG Print or SVG Print Preview User Agent shall render it in conformance with the specified ICC profile to obtain the desired color appearance.
SVG Print introduces the ability to specify a color using a 'Named Color Profile'.
If ICC-based named colors are provided, a conformant SVG Print or SVG Print Preview User Agent shall render them in conformance with the specified ICC profile to obtain the desired color appearance.
SVG Print also introduces a method of specifying uncalibrated device colors. This is sometimes useful in print workflows. This feature utilises the deviceColor element and the device-color keyword.
As these colors are uncalibrated, any interpolation or compositing occurs using the fallback sRGB color space.
A User Agent for Printing or Print Preview MUST first attempt to locate the profile by using the specifications in the @color-profile rules.
<define name='color-profile'>
  <element name='color-profile'>
    <ref name='color-profile.AT'/>
    <empty/>
  </element>
</define>
<define name='color-profile.AT' combine='interleave'>
  <ref name='svg.Core.attr'/>
  <ref name='svg.local.attr'/>
  <ref name='svg.name.attr'/>
  <ref name='svg.rendering-intent.attr'/>
</define>
TODO: Need some conformance criteria. Also, this reiterates some of the color-profile element stuff, so we should reference that more here.
TODO: Need some conformance criteria. Also, need to reference color-profile element.
Certain print applications can improve printing quality by specifying colors by name or in an alternative color format.
TODO: This needs to be converted to RNG
A URI used to identify the device-specific information included in this element. If the User Agent does not recognize the URI (i.e. is not able to recognize the particular device parameters), then the element should be ignored and should not be part of the rendering process.
The name of this device-specific color information. The name attribute is used within the device-color specification within <paint> to reference this deviceColor element.
TODO: Need some conformance criteria. | http://www.w3.org/TR/SVGPrint12/ | crawl-002 | refinedweb | 2,700 | 54.42 |
The Data Science Lab
You don't have to resort to writing C++ to work with popular machine learning libraries such as Microsoft's CNTK and Google's TensorFlow. Instead, we'll use some Python and NumPy to tackle the task of training neural networks.
Over the past year or so, among my colleagues, the use of sophisticated machine learning (ML) libraries, such as Microsoft's CNTK and Google's TensorFlow, has increased greatly. Most of the popular ML libraries are written in C++ for performance reasons, but have a Python API interface. This means that if you want to work with ML, it's becoming increasingly important to have a familiarity with the Python language and with basic neural network concepts.
In this article, I'll explain how to implement the back-propagation (sometimes spelled as one word without the hyphen) neural network training algorithm from scratch, using just Python 3.x and the NumPy (numerical Python) package. After reading this article you should have a solid grasp of back-propagation, as well as knowledge of Python and NumPy techniques that will be useful when working with libraries such as CNTK and TensorFlow.
A good way to see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. The demo Python program uses back-propagation to create a simple neural network model that can predict the species of an iris flower using the famous Iris Dataset. The demo begins by displaying the versions of Python (3.5.2) and NumPy (1.11.1) used. Although it is possible to install Python and NumPy separately, it’s becoming increasingly common to use an Anaconda distribution (4.1.1) as I did.
The Iris Dataset has 150 items. Each item has four numeric predictor variables (often called features): sepal length and width, and petal length and width, followed by the species ("setosa," "versicolor" or "virginica"). The demo program uses 1-of-N label encoding, so setosa = (1,0,0) and versicolor = (0,1,0) and virginica = (0,0,1). The goal is to predict species from sepal and petal length and width.
The 150-item dataset was split into a 120-item set for training and a 30-item set for testing. The demo loaded the training and test data into two matrices.
The back-propagation algorithm is iterative and you must supply a maximum number of iterations (50 in the demo) and a learning rate (0.050) that controls how much each weight and bias value changes in each iteration. Small learning rate values lead to slow but steady training. Large learning rates lead to quicker training at the risk of overshooting good weight and bias values. The max-iteration and learning rate values are free parameters.
The demo displays the value of the mean squared error every 10 iterations during training. As you'll see shortly, there are two error types that are commonly used with back-propagation, and the choice of error type affects the back-propagation implementation. After training completed, the demo computed the classification accuracy of the resulting model on the training data (0.9333 = 112 out of 120 correct) and on the test data (0.9667 = 29 out of 30 correct). The classification accuracy on a set of test data is a very rough approximation of the accuracy you'd expect to see on new, previously unseen data.
This article assumes you have a solid knowledge of the neural network input-output mechanism, and intermediate or better programming skill with a C-family language (C#, Python, Java), but doesn’t assume you know much about the back-propagation algorithm. The demo program is too long to present in its entirety in this article, but the complete source code is available in the accompanying file download.
Understanding Back-Propagation
Back-propagation is arguably the single most important algorithm in machine learning. A complete understanding of back-propagation takes a lot of effort. But from a developer's perspective, there are only a few key concepts that are needed to implement back-propagation. In the discussion that follows, for simplicity I leave out many important details, and take many liberties with the underlying mathematics.
Take a look at the two math equations for back-propagation in Figure 2. The top equation defines a sum of squares error metric and is the starting point for back-propagation. The tj stands for a target value and the oj stands for a computed output value. Suppose a target value is (1, 0, 0) corresponding to setosa. And suppose that for a given set of weight and bias values, and a set of four input values, the computed output values are (0.70, 0.10, 0.20). The squared error is 1/2 * [ (1 - 0.70)^2 + (0 - 0.10)^2 + (0 - 0.20)^2 ] = 1/2 * (0.09 + 0.01 + 0.04) = 0.07. Notice the seemingly arbitrary 1/2 term.
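The arithmetic above can be verified with a short sketch (a standalone helper, not code from the demo program):

```python
# E = 1/2 * sum_j (t_j - o_j)^2  -- sum-of-squares error for one item
def squared_error(targets, outputs):
    return 0.5 * sum((t - o) ** 2 for t, o in zip(targets, outputs))

targets = [1.0, 0.0, 0.0]     # 1-of-N encoding for "setosa"
outputs = [0.70, 0.10, 0.20]  # computed output values
print(squared_error(targets, outputs))  # ~0.07
```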
The goal of back-propagation training is to minimize the squared error. To do that, the gradient of the error function must be calculated. The gradient is a calculus derivative with a value like +1.23 or -0.33. The sign of the gradient tells you whether to increase or decrease the weights and biases in order to reduce error. The magnitude of the gradient is used, along with the learning rate, to determine how much to increase or decrease the weights and biases.
Using some very clever mathematics, you can compute the gradient. The bottom equation in Figure 2 is the weight update rule for a single output node. The amount to change a particular weight is the learning rate (alpha) times the gradient. The gradient has four terms. The xi is the input associated with the weight that’s being examined. The (oj - tj) is the derivative of the outside part of the error function: the 2 exponent drops to the front, canceling the 1/2 (which is the only reason the 1/2 term is there), then you multiply by the derivative of the inside, which is -1 times the derivative of the function used to compute the output node.
The third and fourth terms of the gradient come from the activation function used for the output nodes. For classification, this is the softmax function. As it turns out, the derivative of an output node oj is, somewhat surprisingly, oj * (1 - oj). To summarize, the back-propagation weight update rule depends on the derivative of the error function and the derivative of the activation function.
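Putting the pieces of the bottom equation together for a single hidden-to-output weight gives a sketch like this (the function name is mine; the demo computes the same quantities inside its training loop):

```python
def output_weight_delta(learn_rate, x_i, o_j, t_j):
    # gradient = x_i * (o_j - t_j) * o_j * (1 - o_j)
    # (input term, error-derivative term, softmax-derivative terms)
    gradient = x_i * (o_j - t_j) * o_j * (1.0 - o_j)
    return -learn_rate * gradient  # move against the gradient

# Output 0.70 is below target 1.0, so the weight should increase:
delta = output_weight_delta(0.05, 0.60, 0.70, 1.0)
print(delta > 0)  # True
```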
There are some important additional details. The squared error term can be defined using (target - output)^2 instead of (output - target)^2 and give the same error because of the squaring operation. But reversing the order will change the sign of the resulting (target - output) term in the gradient. This in turn affects whether you should add the delta-w term or subtract it when you update weights and biases.
OK, so updating the weights and biases for hidden-to-output weights isn't too difficult. But what about the weight update rule for input-to-hidden weights? That equation is more complicated and in my opinion is best understood using code rather than a math equation, as I'll present shortly. The Wikipedia article on back-propagation has a very good derivation of the weight update rule for both output and hidden nodes.
Overall Demo Program Structure
The overall demo program uses four import statements to gain access to the NumPy package's array and matrix data structures, and the math and random modules. The sys module is used only to programmatically display the Python version, and can be omitted in most scenarios.
# nn_backprop.py
# Python 3.x
import numpy as np
import random
import math
import sys
# ------------------------------------
def loadFile(df): . . .
def showVector(v, dec): . . .
def showMatrix(m, dec): . . .
def showMatrixPartial(m, numRows, dec, indices): . . .
# ------------------------------------
class NeuralNetwork: . . .
# ------------------------------------
def main():
print("\nBegin NN back-propagation demo \n")
pv = sys.version
npv = np.version.version
print("Using Python version " + str(pv) +
"\n and NumPy version " + str(npv))
numInput = 4
numHidden = 5
numOutput = 3
print("\nCreating a %d-%d-%d neural network " %
(numInput, numHidden, numOutput) )
nn = NeuralNetwork(numInput, numHidden, numOutput, seed=3)

I created a main function to hold all program control logic. I started by displaying some version information:
def main():
print("\nBegin NN back-propagation demo \n")
pv = sys.version
npv = np.version.version
print("Using Python version " + str(pv) +
"\n and NumPy version " + str(npv))
...
Next, I created the demo neural network, like so:
numInput = 4
numHidden = 5
numOutput = 3
print("\nCreating a %d-%d-%d neural network " %
(numInput, numHidden, numOutput) )
nn = NeuralNetwork(numInput, numHidden, numOutput, seed=3)
The NeuralNetwork constructor accepts a seed value to initialize a class-scope random number generator object. The RNG object is used to initialize all weights and bias values to small random numbers between -0.01 and +0.01 using class method initializeWeights. The RNG object is also used during training to scramble the order in which training items are processed. The seed value of 3 is arbitrary.
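A standalone approximation of that initialization scheme (not the demo's exact initializeWeights code):

```python
import random

def init_weights(num_weights, seed=3, lo=-0.01, hi=0.01):
    rnd = random.Random(seed)  # analog of the class-scope RNG object
    return [rnd.uniform(lo, hi) for _ in range(num_weights)]

w = init_weights(5)
# All values lie in [-0.01, +0.01]; the same seed reproduces the same values.
```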
The constructor assumes that the tanh function is used for hidden node activation. As you'll see shortly, if you use a different activation function such as logistic sigmoid or rectified linear unit (ReLU), the back-propagation code for updating the hidden node weights and bias values will be affected.
The demo loads the training and test data using the loadFile helper function.
Helper function loadFile does all the work. The function is hardcoded to assume that the source data is comma-delimited, is ordered with features followed by encoded species, and does not have a header line. Writing code from scratch allows you to be very concise, as opposed to writing general-purpose library code, which requires you to take into account all kinds of scenarios and add huge amounts of error-checking code.
The back-propagation training is invoked by calling the train method with the training data, the maximum number of epochs, and the learning rate.
Behind the scenes, method train uses the back-propagation algorithm and displays a progress message with the current mean squared error, every 10 iterations. It's important to monitor progress during neural network training because it's not uncommon for training to stall out completely, and if that happens you don't want to wait for an entire training run to complete.
In non-demo scenarios, the maximum number of training iterations/epochs can be in the thousands, so printing errors every 10 iterations might be too often. You might want to consider passing a parameter to the train method that controls when to print progress messages.

The definition of the NeuralNetwork class begins:

class NeuralNetwork:
def __init__(self, numInput, numHidden, numOutput, seed): ...
def setWeights(self, weights): ...
def getWeights(self): ...
def initializeWeights(self): ...
def computeOutputs(self, xValues): ...
def train(self, trainData, maxEpochs, learnRate): ...
def accuracy(self, tdata): ...
def meanSquaredError(self, tdata): ...
@staticmethod
def hypertan(x): ...
@staticmethod
def softmax(oSums): ...
@staticmethod
def totalWeights(nInput, nHidden, nOutput): ...
# end class NeuralNetwork
The NeuralNetwork.train method implements the back-propagation algorithm. The method's definition uses the float32 rather than the float64 data type.
Next, two scratch arrays are instantiated:
oSignals = np.zeros(shape=[self.no], dtype=np.float32)
hSignals = np.zeros(shape=[self.nh], dtype=np.float32)
Each hidden and output node has an associated signal that's essentially a gradient without its input term. These arrays are mostly for coding convenience. An array named indices holds the integers 0 through 119 and is used to shuffle the order in which training items are processed. The training loop begins with:
while epoch < maxEpochs:
self.rnd.shuffle(indices)
for ii in range(numTrainItems):
idx = indices[ii]
...
The built-in shuffle function uses the Fisher-Yates mini-algorithm to scramble the order of the training indices. Instead of copying the input values from the matrix of training items into an intermediate x_values array and then transferring those values to the input nodes, you could copy the input values directly. The computeOutputs method stores and returns the output values, but the explicit return value is ignored here.
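For reference, the Fisher-Yates technique mentioned above can be sketched in a few lines (this is essentially what random.shuffle does internally):

```python
import random

def fisher_yates(items, rnd):
    # Walk backward; swap each slot with a randomly chosen earlier (or same) slot.
    for i in range(len(items) - 1, 0, -1):
        j = rnd.randint(0, i)  # 0 <= j <= i
        items[i], items[j] = items[j], items[i]

indices = list(range(10))
fisher_yates(indices, random.Random(0))
# indices is now a permutation of 0..9
```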
The first step in back-propagation is to compute the output node signals:
# 1. compute output node signals
for k in range(self.no):
derivative = (1 - self.oNodes[k]) * self.oNodes[k]
oSignals[k] = derivative * (self.oNodes[k] - t_values[k])
Recall that the derivative variable holds the derivative of the softmax activation function. The oSignals variable includes that derivative and the output minus target value. Next, the hidden-to-output weight gradients are computed:
# 2. compute hidden-to-output weight gradients using output signals
for j in range(self.nh):
for k in range(self.no):
hoGrads[j, k] = oSignals[k] * self.hNodes[j]
The output node signal is combined with the associated input from the associated hidden node to give the gradient. As I mentioned earlier, the oSignals array is mostly for convenience and you can compute the values into the hoGrads matrix directly if you wish. Next, the gradients for the output node biases are computed:
# 3. compute output node bias gradients using output signals
Computing the hidden node signals is the trickiest part of back-propagation. The sum variable accumulates the product of output node signals and hidden-to-output weights. This isn't at all obvious. You can find a good explanation of how this works by reading the Wikipedia article on back-propagation. Recall that the NeuralNetwork class has a hardcoded tanh hidden node activation function. The derivative variable holds the calculus derivative of the tanh function. So, if you change the hidden node activation function to logistic sigmoid or ReLU, you'd have to change the calculation of this derivative variable.
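The hidden-signal computation described here can be sketched as a standalone function (the names are mine, not the demo's; the (1 - h)(1 + h) term is the tanh derivative 1 - tanh^2):

```python
import math

def compute_hidden_signals(h_nodes, ho_weights, o_signals):
    # hSignals[j] = tanh'(hNodes[j]) * sum_k oSignals[k] * hoWeights[j][k]
    signals = []
    for j in range(len(h_nodes)):
        s = sum(o_signals[k] * ho_weights[j][k] for k in range(len(o_signals)))
        derivative = (1.0 - h_nodes[j]) * (1.0 + h_nodes[j])  # tanh derivative
        signals.append(derivative * s)
    return signals

h = [math.tanh(0.5), math.tanh(-0.2)]
sig = compute_hidden_signals(h, [[0.1, 0.2], [0.3, 0.4]], [0.05, -0.01])
```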
Next, the input-to-hidden weight gradients, and the hidden node bias gradients are calculated:
# 5. compute input-to-hidden weight gradients using hidden signals
for i in range(self.ni):
for j in range(self.nh):
ihGrads[i, j] = hSignals[j] * self.iNodes[i]
# 6. compute hidden node bias gradients using hidden signals
for j in range(self.nh):
hbGrads[j] = hSignals[j] * 1.0
As before, a gradient is composed of a signal and an associated input term, and the dummy 1.0 input value for the hidden biases can be dropped.
If you imagine the input-to-output mechanism as going from left to right (input to hidden to output), the gradients must be computed from right to left (hidden-to-output gradients, then input-to-hidden gradients). After all the gradients have been computed, you can update the weights in either order. The demo program starts by updating the input-to-hidden weights:
# 1. update input-to-hidden weights
for i in range(self.ni):
for j in range(self.nh):
delta = -1.0 * learnRate * ihGrads[i,j]
self.ihWeights[i, j] += delta
The weight delta is the learning rate times the gradient. Here, I multiply by -1 and then add the delta because error is assumed to use (target - output)^2 and so the gradient has an (output - target) term. I used this somewhat awkward approach to follow the Wikipedia entry on back-propagation. Of course you can drop the multiply by -1 and just subtract the delta if you wish.
Next, the hidden node biases are updated:
# 2. update hidden node biases
for j in range(self.nh):
delta = -1.0 * learnRate * hbGrads[j]
self.hBiases[j] += delta
If you look at the loop structures carefully, you'll notice that you can combine updating the input-to-hidden weights and updating the hidden biases if you wish. Next, the hidden-to-output weights and the output node biases are updated using these statements:
# 3. update hidden-to-output weights
for j in range(self.nh):
for k in range(self.no):
delta = -1.0 * learnRate * hoGrads[j,k]
self.hoWeights[j, k] += delta
# 4. update output node biases
for k in range(self.no):
delta = -1.0 * learnRate * obGrads[k]
self.oBiases[k] += delta
Notice that all updates use the same learning rate. An advanced version of back-propagation called Adam ("adaptive moment estimation") was developed in 2015. Adam uses different learning rates and a few other tricks, and is considered state-of-the-art. In the train method, the mean squared error will be displayed every 10 iterations. You might want to parameterize the interval. You can also print additional diagnostic information here. The final values of the weights and biases are fetched by class method getWeights and returned by method train as a convenience.
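As a rough sketch of the Adam idea for a single weight (the constants are the commonly published defaults; this is not part of the demo program):

```python
import math

def adam_step(w, grad, m, v, t, alpha=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Running averages of the gradient and the squared gradient
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)  # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = adam_step(0.5, grad=0.2, m=0.0, v=0.0, t=1)
# A positive gradient still moves the weight down, but the step size adapts.
```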
Wrapping Up
The Python language is too slow to create serious neural networks from scratch. But implementing a neural network in Python gives you a complete understanding of what goes on behind the scenes when you use a sophisticated machine learning library like CNTK or TensorFlow. And the ability to implement a neural network from scratch gives you the ability to experiment with custom algorithms.
The version of back-propagation presented in this article is basic. In future articles I'll show you how to implement momentum and mini-batch training -- two important techniques that increase training speed. Another important variation is to use a different measure of error called cross entropy error.
Print the next greater element in C++
In this tutorial, we will learn how to print the next greater element in C++. We are given an array and we have to print the next greater element of each element.
For Example :
- Given array: {1, 4, 0, -1, 5, 2}
- If there is no greater element to the right, then print -1.
- For the rightmost element, print -1.
- This is how we can find the output of this problem.
- Output: 1 –> 4
4 –> 5
0 –> 5
-1 –> 5
5 –> -1
2 –> -1
- Given array: { 13, 45, 12, 9}
- Then, output:
13 –> 45
45 –> -1
12 –> -1
9 –> -1
We can solve this problem using either of the following two methods.
Simple Method:
- Using two for loops.
- The outer loop will traverse the elements one by one.
- Then, the inner loop will scan the remaining elements to the right and find the first one greater than the current element.
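The two-loop approach can be sketched like this (our own helper function, O(n²) time; the stack version below is the tutorial's actual solution):

```cpp
#include <vector>
#include <cstddef>

// For each element, scan the elements to its right and keep the first greater one.
std::vector<int> nextGreaterBruteForce(const std::vector<int>& a) {
    std::vector<int> res(a.size(), -1);  // -1 means "no greater element"
    for (std::size_t i = 0; i < a.size(); ++i) {          // outer loop: each element
        for (std::size_t j = i + 1; j < a.size(); ++j) {  // inner loop: rest of array
            if (a[j] > a[i]) { res[i] = a[j]; break; }
        }
    }
    return res;
}
```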
Since the two-loop approach takes O(n²) time, we will instead use a stack to solve the problem more efficiently.
Method using stack:
- Firstly, push the first element to the stack.
- Then, pick the remaining elements from the stack and apply the following steps.
- We have to mark the current element as next
- Now, check for the stack if it is empty or not. If it is not empty, then compare the top element of the stack with the next.
- If next is greater than the top element of the stack, then Pop the element from the stack.
- Now, next is the next greater element for the popped element of the stack.
- We have to keep popping elements from the stack while the popped element becomes smaller than the next element.
- Again, next becomes the next greater element for all popped element of the stack.
- Now we have to push the next in the stack.
- The time complexity of this method is O(n).
You may also like:
Breadth-first search (BFS) and Depth-first search (DFS) for a Graph in C++
C++ program to print the next greater element
Implementation:
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

// print element function
void printelement(int arr[], int sn)
{
    stack<int> st;
    /* push first element */
    st.push(arr[0]);
    // iterate for remaining
    for (int i = 1; i < sn; i++) {
        if (st.empty()) {
            st.push(arr[i]);
            continue;
        }
        // check the above conditions here
        while (st.empty() == false && st.top() < arr[i]) {
            cout << st.top() << " --> " << arr[i] << endl;
            st.pop();
        }
        // push next element to stack
        st.push(arr[i]);
    }
    // if the stack is not empty
    while (st.empty() == false) {
        cout << st.top() << " --> " << -1 << endl;
        st.pop();
    }
}

int main()
{
    int arr[] = { 13, 45, 12, 9 };
    int sn = sizeof(arr) / sizeof(arr[0]);
    printelement(arr, sn);
    return 0;
}
Output Explanation:
INPUT: { 13, 45, 12, 9 }

OUTPUT:
13 --> 45
9 --> -1
12 --> -1
45 --> -1
Quickstart¶
This page explains how to create a simple Kivy “Hello world” program. This assumes you already have Kivy installed. If you do not, head over to the Installation section. We also assume basic Python 2.x knowledge throughout the rest of this documentation.
Create an application¶
The base code for creating an application looks like this:
import kivy
kivy.require('1.0.6')  # replace with your current kivy version !

from kivy.app import App
from kivy.uix.button import Button


class MyApp(App):

    def build(self):
        return Button(text='Hello World')


if __name__ == '__main__':
    MyApp().run()
Save it as main.py.
To run the application, follow the instructions for your operating system:
- Linux
-
Follow the instructions for running a Kivy application on Linux:

$ python main.py
- Windows
-
Follow the instructions for running a Kivy application on Windows:

$ python main.py
# or
C:\appdir>kivy.bat main.py
- Mac OS X
-
Follow the instructions for running a Kivy application on MacOSX:

$ kivy main.py
- Android
- Your application needs some complementary files to be able to run on Android. See Kivy on Android for further reference.
A window should open, showing a sole button (with the label ‘Hello World’) that covers the entire window’s area. That’s all there is to it.
So what does that code do?
- First, we import Kivy and check that the currently installed version is sufficient for our application. If not, an exception is automatically raised, preventing your application from crashing at runtime. You can read the documentation of the kivy.require() function for more information.
- We import the App class, to be able to subclass it. By subclassing this class, your own class gains several features that we already developed for you to make sure it will be recognized by Kivy.
- Next, we import the Button class, to be able to create an instance of a button with a custom label.
- Then, we create our application class, based on the App class. We override the build() method so that it returns an instance of Button. This instance will be used as the root of the widget tree (because we returned it).
- Finally, we call run() on our application instance to launch the Kivy process with our application inside. | http://kivy.org/docs/guide/quickstart.html | CC-MAIN-2014-35 | refinedweb | 374 | 68.06 |
Maybe more like this. (Sorry, but I couldn't test it).

void FindFile(std::wstring dir)
{
    dir += L"\\*";
    WIN32_FIND_DATAW file;
    HANDLE search = FindFirstFileW(dir.c_str(),...
AX_DEFINE_INTEGER_BITS (TYPE [, CANDIDATE-TYPE]...)
Given a TYPE of the form "int##_t" or "uint##_t", see if the datatype TYPE is predefined. If not, then define TYPE – both with AC_DEFINE and as a shell variable – to the first datatype of exactly ## bits in a list of CANDIDATE-TYPEs. If none of the CANDIDATE-TYPEs contains exactly ## bits, then set the TYPE shell variable to "no".
For example, the following ensures that uint64_t is defined as a 64-bit datatype:
AX_DEFINE_INTEGER_BITS(uint64_t, unsigned long long, unsigned __int64, long)
if test "$uint64_t" = no; then
  AC_MSG_ERROR([unable to continue without a 64-bit datatype])
fi
You should then put the following in your C code to ensure that all datatypes defined by AX_DEFINE_INTEGER_BITS are visible to your program:
#include "config.h"
#if HAVE_INTTYPES_H # include <inttypes.h> #else # if HAVE_STDINT_H # include <stdint.h> # endif #endif
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty. | http://www.gnu.org/software/autoconf-archive/ax_define_integer_bits.html | CC-MAIN-2015-48 | refinedweb | 197 | 54.73 |
On Apr 28, 5:39 pm, Li Wang <li.wan... at gmail.com> wrote:
> 2009/4/29 Tim Chase <python.l... at tim.thechases.com>:
>
> >> I want to concatenate two bits string together: say we have '1001' and
> >> '111' which are represented in integer. I want to concatenate them to
> >> '1001111' (also in integer form), my method is:
> >> ('1001' << 3) | 111
> >> which is very time consuming.
>
> > You omit some key details -- namely how do you know that "1001" is 4 bits
> > and not "00001001" (8-bits)? If it's a string (as your current code shows),
> > you can determine the length. However, if they are actually ints, your code
> > should work fine & be O(1).
>
> Actually, what I have is a list of integer numbers [3,55,99,44], and
> by using Huffman coding or fixed length coding, I will know the
> bits-length for each number. When I try to concatenate them (say
> 10,000 items in the list) all together, the speed is going down
> quickly (because of the shifting operations of python long).
>
> > This can be abstracted if you need:
> >
> > def combine_bits(int_a, int_b, bit_len_b):
> >     return (int_a << bit_len_b) | int_b
> >
> > a = 0x09
> > b = 0x07
> > print combine_bits(a, b, 3)
> >
> > However, if you're using gargantuan ints (as discussed before), it's a lot
> > messier. You'd have to clarify the storage structure (a byte string? a
> > python long?)
>
> I am using a single python long to store all the items in the list
> (say, 10,000 items), so the work does become messier...

Using GMPY may offer a performance improvement. When shifting
multi-thousand bit numbers, GMPY is several times faster than Python
longs. GMPY also supports functions to scan for 0 or 1 bits.

> > -tkc
> >
> > PS: You may want to CC the mailing list so that others get a crack at
> > answering your questions... I've been adding it back in, but you've been
> > replying just to me.
>
> Sorry, this is the first time I am using mail-list... and always
> forgot "reply to all"
>
> Thank you very much :D
>
> --
> Li
> ------
> Time is all we have
> and you may find one day
> you have less than you think
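The shift-and-or trick quoted above generalizes to a whole list of variable-length codes. Below is a minimal Python sketch; the function name and the (value, bit_length) pair representation are my own, not from the thread:

```python
def concatenate_codes(codes):
    """Pack a list of (value, bit_length) pairs into a single integer.

    Each code is shifted in from the right, so the first pair ends up in
    the most significant bits -- the same (a << n) | b operation from the
    thread, applied repeatedly.
    """
    result = 0
    for value, bit_length in codes:
        result = (result << bit_length) | value
    return result

# '1001' (4 bits) followed by '111' (3 bits) gives '1001111'
print(bin(concatenate_codes([(0b1001, 4), (0b111, 3)])))  # 0b1001111
```

As the reply notes, repeatedly shifting an ever-growing Python long is what makes this slow for thousands of items; gmpy's integer type can speed up the large shifts.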
Pure functions always return the same result for the same argument values. They only return the result and there are no extra side effects like argument modification, I/O stream, output generation etc.
Some pure functions are sin(), strlen(), sqrt(), max(), pow(), floor() etc. Some impure functions are rand(), time() etc.
Some programs to demonstrate some of the pure functions are as follows −
The strlen() function is used to find the length of a string. This is demonstrated in the following program −
#include <iostream>
#include <cstring>
using namespace std;

int main()
{
    char str[] = "Rainbows are beautiful";
    cout << "The string is " << str << endl;
    cout << "The length of the string is " << strlen(str);
    return 0;
}
The output of the above program is as follows −
The string is Rainbows are beautiful The length of the string is 22
The sqrt() function is used to find the square root of a number. This is demonstrated in the following program −
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    int num = 9;
    cout << "Square root of " << num << " is " << sqrt(num);
    return 0;
}
The output of the above program is as follows −
Square root of 9 is 3 | https://www.tutorialspoint.com/pure-function-in-cplusplus | CC-MAIN-2021-43 | refinedweb | 196 | 64.95 |
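The same pure/impure distinction can be sketched in Python for contrast (illustrative names; random.randint stands in for C++'s rand()):

```python
import random

def pure_max(a, b):
    """Pure: the result depends only on the arguments, with no side effects."""
    return a if a > b else b

def impure_roll():
    """Impure: identical (empty) argument lists can yield different results,
    because the function reads hidden random-generator state."""
    return random.randint(1, 6)

print(pure_max(3, 7))   # always 7
print(impure_roll())    # varies from call to call
```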
Notes on Software Engineering from Code Complete
Lessons from “Code Complete: A Practical Handbook of Software Construction” with applications to data science
When people ask about the hardest part of my job as a data scientist, they often expect me to say building machine learning models. Given that all of our ML modeling is done in about 3 lines:
from sklearn.linear_model import LinearRegression
model = LinearRegression().fit(training_features, training_targets)
predictions = model.predict(testing_features)
I reply that machine learning is one of the easier parts of the job. Rather, the hardest part of being a data scientist in industry is the software engineering required to build the infrastructure that goes into running machine learning models continuously in production.
Starting out (at Cortex Building Intel), I could write a good Jupyter Notebook for a one-time machine learning project, but I had no idea what it meant to "run machine learning in production," let alone how to do it. Half a year in, and having built several ML systems making predictions around the clock to help engineers run buildings more efficiently, I've learned it takes a whole lot of software construction and a tiny bit of data science.
With a relative lack of software engineering skills entering my job, I’ve had to learn quickly. Much of that came from interacting with other software engineers and soaking up their knowledge, but some of it has also come from resources such as textbooks and online tutorials. One of those textbooks is the 900-page masterwork on constructing quality software, Code Complete: A Practical Handbook of Software Construction by Steve McConnell. In this article, I wanted to outline the high-level points regarding software construction I took away from reading this book. These are as follows:
- Thoroughly plan your project before touching a keyboard
- Write readable code because it’s read more than it’s written
- Reduce the complexity of your programs to free mental capacity
- Test and review every line of code in a program
- Be an egoless programmer
- Iterate on your designs and repeatedly measure progress
Although the second edition of this book was written in 2004, the proven ideas in software engineering haven’t changed in the intervening years and the book is still highly relevant to software engineers today. You won’t learn any specific details about data science in Python from this book, but you will learn the fundamentals of how to plan, structure, build, test, and review software projects. I’ve tried to keep the ideas here at a fairly high-level and plan to go more into depth on specific points in later articles.
While these practices may seem irrelevant to data science as it’s currently taught, I think data science is hurt by the current lack of emphasis on software engineering. After all, a machine learning system is a software project. If there are tested, best practices for delivering successful software projects, then we as data scientists should be practicing these methods to move beyond Jupyter Notebooks and start delivering machine learning solutions. The six topics covered here are not as exciting as the newest machine learning algorithm, but they are critical ideas for data scientists in industry.
1. Thoroughly plan your project before touching a keyboard
Just as you would never begin building a house without a blueprint, you should not start coding without a detailed, written design for your program. If you don’t know the intended outcome of your program, you will waste lots of time writing muddled code that accomplishes no particular objective. The problem I often ran into with regards to data science is that diving into a dataset is quite fun. It’s hard to resist the temptation to get your hands on the keyboard, making graphs, looking for anomalies, and hacking together models. After a few hours, you inevitably end up with a lot of messy code and no clear deliverable, a problem that could have been averted by taking the time to plan out your program.
Every software project you take on — and this includes machine learning models — should start with a problem definition: a high-level statement of the problem to solve. An example from my work regarding a demand forecasting model is “We want to predict building energy demand accurately 7 days in advance to help building engineers prepare for the next week.” This statement captures the desired outcome and the business value of the project.
After the problem statement comes the project requirements, a set of objectives —at a lower level than the problem statement— that a solution must meet. These can cover metrics — an error of less than 10% — or the end user experience — graphs must clearly show best estimates and uncertainty. The requirements will guide the detailed design of your program and allow you to assess if the project is a success.
After the description and requirements comes the architecture specification where you start planning out the files, classes, and routines (functions) that will make up your program. At this point, you can start getting into the details of your design such as error handling, input/output, user interface, and data flow through the program.
These documents should be reviewed and discussed just as much as the actual code because errors and decisions made in the design phase affect the rest of the project. Here is where you need to understand what is being asked of your program, and how you will approach the problem. Plan out where you foresee difficulties, make time estimates for the project, outline alternative approaches, and assign responsibility during the design phase of a project, Only after everyone has agreed to the description, requirements, and architecture plan should you even think of hitting the keyboard.
The exact steps above may change, but the important idea is that you should never start off a project by writing a bunch of code. Even on small, one-off personal data science projects, I now take the time to plan out — and write down — my overall goal and a set of requirements for my program. It’s a good habit to start; remember that large projects don’t arise out of people hacking away on a keyboard, they are planned out and built a piece at a time, following a detailed blueprint drawn up during the design process.
At my current company, all of our machine learning projects involve a substantial design phase where we have discussions with the business side of the company, our clients, and customer success to make sure we meet the needs of our end users. This process usually ends up creating several dozen pages of documentation that we refer to throughout the rest of the project. While it’s true that requirements will change over the course of the project, it’s crucial to have a checklist of what you need your code to accomplish because otherwise you’ll just be hacking and building something that ultimately will not be useable. Design tends to take up about 30% of the average project time for us, a worthwhile investment. Planning and writing down our design ahead of time means that instead of purposelessly banging together hammers and nails when we start coding, we follow an outline and build a sturdy structure a piece at a time.
2. Write readable code because it’s read more than it’s written
Code is going to be read many more times than it will be written, so never sacrifice read-time convenience — how understandable your code is — for write-time convenience — how quickly you can write the code.
Data science teaches some bad practices around code readability, most notably with variable names. While it might be obvious to you that X and y stand for features and target because you've seen this several hundred times, why not call the variables features and target to help those less familiar with ML syntax? Reading code should not be an exercise in trying to decipher the cryptic made-up language of whoever wrote it.
Improving code readability means using descriptive names for functions, classes, data objects, and any variable in your program! As an example, never use i, j, and k for loop variables. Instead, use what they actually represent: row_index, column_index , and color_channel. Yes, it takes half a second more to type, but using descriptive variable names will save you, and anyone who reads your code, dozens of hours down the line when debugging or trying to modify the code.
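As a small illustration (the image here is a made-up nested list, not real data), compare two versions of the same triple loop:

```python
# A tiny fake "image": 2 rows x 3 columns of (red, green, blue) pixels.
image = [[(1, 2, 3) for _ in range(3)] for _ in range(2)]

# Cryptic: the reader has to decode what i, j, and k stand for.
total = 0
for i in range(2):
    for j in range(3):
        for k in range(3):
            total += image[i][j][k]

# Self-documenting: the loop variables say exactly what they index.
total = 0
for row_index in range(2):
    for column_index in range(3):
        for color_channel in range(3):
            total += image[row_index][column_index][color_channel]

print(total)  # 36: six pixels, each pixel's channels summing to 6
```

Both loops compute the same value; only the second one can be read without stopping to translate.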
Other practices encompassed by the idea of making code readable are that the visual structure of your code should reflect the logical structure (for example when writing loops), that your comments should show the intent of code rather than just stating what the code does, minimizing the span and live time of a variable, grouping together related statements, avoiding overly complicated if-else statements, and keeping functions as short as possible.
If you ever find yourself thinking “I’ll keep these variable names short to save time” or “I’ll go back and document this code when I’m done writing it”, stop, do your future self a favor, and make the code more readable. As McConnell emphasizes, readable code has a positive effect on all the following:
- Understandability
- Modifiability
- Error Rate
- Debugging
- Development time — more readable code improves development time over the course of a project
- External quality — more readable code creates a better product for the end user as a result of the above factors
I think the practice of writing tangled code in data science is a result of individuals developing code intended to run only a single time. While this might be fine for a personal project, in industry, code readability is much more valuable than how quickly you can put together a model. Much of the first code I wrote on the job is unintelligible even to me because I did not think about the people who would be reading it in the future. As a result, this code is not fit for production and sits languishing in unused branches on GitHub. Remember, don’t ever prioritize writing speed over reading comprehension.
3. Reduce the complexity of your programs to free mental capacity
As emphasized throughout Code Complete, the primary imperative of software construction is to manage complexity. What does this mean? It’s about limiting the amount of information you have to hold in your head while programming and reducing arbitrary decisions. Instead of expanding your intellectual ability to write more complicated code, simplify your existing code to a level where you can understand it with your current intellect.
As an example of limiting information to recall, consider the situation where you have 2 functions, one to email users and one to send them a text. If you want to make things really hard on yourself, you do the following:
def notify_user_method_one(user, message):
"""Send a text to a user"""
...
def notify_user_method_two(message, user):
"""Send an email to a user"""
...
The problem with this code is that you have to remember which function corresponds to which method and the order of variables. A much better approach, resulting in fewer pieces of information to recall is:
def text_user(user, message):
...
def email_user(user, message):
...
Now the function name describes exactly what the function does and the arguments are consistent so you don’t waste mental energy thinking about their order.
The concept of consistency is crucial for reducing code complexity. The argument for having standards/conventions is you don’t have to make multiple small decisions about things tangentially related to coding such as formatting. Pick a standard and apply it across your entire project. Rather than worrying about what capitalization to use for variable names, apply the same rules to all variables in your project and you don’t have to make a decision. The choice of a standard often matters less than the actual standard itself so don’t get too caught up arguing about whether you should use 2 spaces or 4. Just pick one, set up your development environment to automatically apply it, and go to work.
(I don’t want to get too into specific technologies, but if you use Python, I have to recommend the black autoformatter. This tool has completely solved our team’s issues with code formatting and styling. We set it up to auto format our code and never have to worry about the length of lines or whether we should put spaces after commas. I have it set to auto-run on save in vscode).
Other ways you can reduce complexity are providing consistent interfaces to all your functions and classes (sklearn is a great example of this), using the same error handling method everywhere, avoiding deeply nested loops, adopting conventions when possible, and keeping functions short. On the subject of functions, make sure that each function does a single task and does it well! The name of a function should be self-documenting and describe exactly the single action done by the function (like email_user). If you find yourself writing a function doc-string with the word "and" describing what the function does, you need to simplify the function. Shorter functions that do only one thing are easier to remember, easier to test, reduce the opportunity for errors, and allow for greater modifiability.
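To make the "one task per function" rule concrete, here is a sketch of a hypothetical load-clean-summarize pipeline split into single-purpose pieces (the names, file format, and threshold are all illustrative):

```python
def load_measurements(path):
    """Read one numeric measurement per line from a text file."""
    with open(path) as file:
        return [float(line) for line in file if line.strip()]

def remove_outliers(measurements, max_value):
    """Drop readings above a plausibility threshold."""
    return [m for m in measurements if m <= max_value]

def summarize(measurements):
    """Return the mean of the measurements."""
    return sum(measurements) / len(measurements)

# Each function is short, does the one thing its name describes, and can
# be tested in isolation -- unlike a single function that did all three.
```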
You can’t really make yourself much smarter, but you can make your code much simpler, thereby freeing your mental resources to concentrate on solving tough issues. When explaining technical concepts, the mark of a master is not using complicated jargon, but using simple language that anyone can understand. Likewise, when writing code, an experienced developer’s code may perform a complex task, but it will hide that complexity allowing others to understand and build on it. It can be momentarily satisfying to write tricky code that only you understand, but eventually, you’ll realize that an effective programmer writes the simplest code. Reducing complexity increases code quality and limits the number of decisions you have to make so you can focus on the difficult parts of a program.
4. Test and review every line of code in a program
Testing is one of the most poorly covered areas in data science education, yet it's absolutely crucial for production code. Even code written by professional programmers has 15–50 errors in every 1000 lines of code. Testing is one of several techniques to try and find errors or, at the least, assert your program works as intended. Without testing, we cannot release our machine learning models into production due to the risk of unintended failures. One of the quickest ways to lose customers would be to have mission-critical code fail because it was not thoroughly tested.
A good rule of thumb for testing (this technique is called structural basis testing) is that you need one test for every if, elif, for, while, and, and or in your code. At a minimum, you want to test every statement in a program at least once. Our codebase has testing for every function, from loading data, transforming data, feature engineering, modeling, predicting, storing predictions, generating model explanations, and validating models, which together cover every line of code in our codebase.
Testing deserves at least its own article (or probably book), but a good place to start is with Pytest. Fortunately, these modern libraries make setting up and developing tests much less tedious. Furthermore, you can set up pytest (or other frameworks) to automatically run your testing suite with every commit to GitHub through a Continuous Integration service like CircleCI.
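As a sketch of the structural-basis rule above: a function with two if statements gets at least three tests, one per path. With pytest, each test is simply a function whose name starts with test_ containing plain assert statements (clip_reading is a made-up example, not from our codebase):

```python
def clip_reading(value, lower, upper):
    """Clamp a sensor reading into the range [lower, upper]."""
    if value < lower:
        return lower
    if value > upper:
        return upper
    return value

# One test per branch -- pytest discovers and runs these automatically.
def test_returns_lower_bound_when_below_range():
    assert clip_reading(-5, 0, 10) == 0

def test_returns_upper_bound_when_above_range():
    assert clip_reading(15, 0, 10) == 10

def test_returns_value_when_in_range():
    assert clip_reading(7, 0, 10) == 7
```

Saved as test_clip.py, running `pytest` on the command line would execute all three tests and report any failures.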
In addition to (not as a substitute for) testing, every line of code written for a project should be reviewed by multiple programmers. This can be through formal code inspections or informal code reviews where the purpose is to get multiple eyes on the code to flush out errors, check the code logic, enforce consistent conventions, and improve general code quality through feedback. Code reviews are some of the best opportunities to learn, especially if you are inexperienced. Experience is a great teacher, but it requires a long time to acquire. As a shortcut, you can let others hindsight — the mistakes they’ve made — be your foresight by listening to their constructive criticism.
Also, when you know that others will look at your code in a review, it forces you to write better code (think about the difference in quality between the things you write in private and in public). Moreover, once you start thinking about the tests you need to run over your code, it improves initial code quality (some people even recommend writing tests before you write the code).
On our team, we typically spend almost as long testing and reviewing code as we do writing it in the first place to make sure it does exactly what we want with no side effects. At first, this was incredibly frustrating — my typical response was “I wrote the code and it ran once on my machine so why should I test it” — before I realized all the errors I wasn’t catching in my code because I wasn’t testing it. Testing may be foreign to many data scientists, but it’s a proven and universal method in software engineering because it improves code quality and reduces errors.
If you are working on an individual project, you can still add testing and solicit feedback. However, that sometimes can be hard so an alternative is to start contributing to open source. Most libraries, especially the major data-science ones, have strict testing and code review requirements. These may be intimidating at first, but realize that procedures exist for a reason — to ensure that code continues running as intended — and that you can’t get better without trying and failing. I think testing has been overlooked in data science because of the lack of deployed machine learning systems. You don’t need testing to compile a Jupyter Notebook, but you sure need testing when your code is helping to run the largest office buildings in Manhattan!
5. Be an egoless programmer
Before every code review, I take time to tell myself: “you’ve made some mistakes in your code. They are going to be pointed out in this review, but don’t take it personally. Own up to your mistakes and use this experience to learn how to become a better programmer.” I have to do this because I find it very hard to admit when I’m wrong and as with most people, I tend to have an initial negative reaction to criticism. However, over time, I’ve learned failing to admit you are wrong and need to change is one of the greatest blockers to getting better at coding (and at any activity).
My interpretation of being an egoless programmer means accepting your failures as a chance to learn. Don’t take feedback on your code personally, and realize that others are genuinely trying to help in code reviews. Egoless programming also means being willing to let go of your beloved frameworks or standards when they become out of date (in other words, don’t be resistant to change). McConnell makes the point that software engineering is a field where 10 years of experience can be worse than 1 year if the person with more experience has not updated her knowledge since she started. The best tools are constantly changing — especially in data science — and standards can also change over time.
This doesn’t mean jump ship for the newest technology immediately, but it does mean if there is a proven benefit to switching, then don’t be so set in your ways that you refuse to change. Software development is not a deterministic process, it is heuristic — driven by rules of thumb — and you have to be willing to try many different approaches rather than sticking with the exact same method. Sometimes this means abandoning one model when it’s clearly not working — even if you’ve spent dozens of hours on it — and accepting other’s solutions when they are demonstrably better than your own.
To extend the “building a house metaphor” from earlier, construction workers do not use a single tool — the hammer — to build a house. Instead, they have a complete toolbox full of different implements for the varied tasks involved in construction. Likewise, in software engineering, or data science, the person who only knows one method will not get very far. If you are an egoless programmer, you’ll be open to learning from others, respond constructively to feedback, and fill your toolbox with the appropriate techniques. You’ll get much farther much quicker by admitting your mistakes than by asserting you can never make them.
6. Iterate on your designs and repeatedly measure progress
Software development (and data science) is fundamentally an iterative process. Great solutions do not emerge fully-formed from one individual’s fingers the first time they touch a keyboard. Rather, they are developed over long processes, with many repetitions of earlier stages as the design is refined and features are added. Writing good software requires a willingness to keep working at a problem, making code more readable, more efficient, and less error-prone over time by responding to feedback and thinking deeply about problems (sometimes the best tool is a pencil and paper for writing down your thoughts). Don’t expect to get things completely right the first time!
Iteration — and this is probably a familiar concept to data scientists — should be informed by repeated measurements. Without measuring, you are effectively blind. How do you know whether an optimization increased the speed of your code? How do you know which parts of the code are the most error-prone? How do you know which features users spend the most time with? How do you know which parts of a project take the most time? The answer is that you collect data and analyze it. Based on the results, you then improve your approach on the next iteration.
Throughout Code Complete, McConnell stresses the need for measurement to make better decisions. Whenever we see a process that could be made more efficient, we need to look for opportunities where a little data can help us optimize. For example, when I started tracking my time in my first few months on the job, I noticed I was spending more than 75% of my coding time on writing and debugging tests. This was an unacceptably large share, so I decided to spend time reading how to write good unit tests, practicing writing tests, and I started to think about the tests I would write before coding. As a result, I reduced the percent of time writing tests down to less than 50% and was able to spend more time understanding the problem domain (another critical aspect of data science that is hard to teach).
The most important part of tracking data is to look at relative changes over time. In most cases, the absolute value of an observation is not as important as the change of that value relative to the last time you measured it. Noticed that your model performance has been decreasing over time? Well, maybe that’s because one of the building’s power meters has gone down and needs to be fixed. Tracking outcomes over time requires only setting up a system that records data and making sure someone is checking it periodically.
Measurements should help inform all aspects of the software construction process from design to code tuning. When estimating how long a project will take, you should look at past estimates and see why they were inaccurate. If you want to try and optimize your code (always make sure your code is working before trying to improve the performance) you have to measure each incremental change. There are numerous examples in the book that point out supposed performance enhancements that actually had the opposite effect! If you don’t measure the effects of a change, you cannot know if what you are doing is really worthwhile.
The goal of data science isn’t to collect data and build nice graphs, it’s to make better decisions and improve processes through data. You can apply this to your own work by tracking your development habits, figuring out where you are weakest, and focusing on that area for improvement. Track changes over time to make sure you’re headed in the right direction and course correct as often as necessary.
Putting These Ideas into Practice
We’ve walked through some of the key ideas on software construction at a high level and the next step is to put them into practice by actually writing (better) code. First, realize that you won’t be able to adopt these all at once: as with any profession, improving at coding takes time. (Peter Norvig has a great essay on how to learn to program in 10 years, a more realistic goal than “learning Python in 24 hours”.) Instead, focus on one or two ideas at a time, and try to put them into practice either at work or on personal projects.
If you have the opportunity to learn from others at work, then take full advantage of that (assuming they are using best practices) by adopting an egoless attitude. If you are learning on your own, take a look at getting involved with open source. There are plenty of projects looking for help. If that’s a little intimidating, you can try just reading some of the code in well-written libraries. (Some Python examples listed in the Hitchhiker’s Guide to Python are: Flask, Werkzeug, Requests, and Diamond).
Conclusions
The overall theme I took away from the 900+ pages of Code Complete is that quality software is produced through a rigorous design and development process. That rigor is often missing from data science, which tends towards convoluted code to get a solution once, rather than code that can be run millions of times without error. Many people who come into data science — myself included — lack the formal training in computer science and software engineering best practices. However, these programming practices are relatively simple to pick up and will pay off far down the road in terms of your ability to write production-level data science code.
Quality software development is a process, and I’m hoping that data scientists start to adopt thorough processes that allow them to translate their work into deliverable products. Sure it’s exciting when you develop a new AI that can play computer games better than a human, but it’s even cooler when your code helps the Empire State Building to save almost a million dollars a year. The field of data science will move past the hype stage of the curve when it proves it can deliver useful products with business value, as software engineering has done for several decades. Data science can have a massive impact on the real world but that won’t happen until data scientists use practices that allow our code to withstand the rigors of the real world.
I write about data science and sometimes other interesting activities. The best place to follow me is on Twitter @koehrsen_will.
Notes on Software Construction from Code Complete was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. | https://laptrinhx.com/notes-on-software-construction-from-code-complete-626090023/ | CC-MAIN-2020-45 | refinedweb | 4,610 | 55.17 |
18 December 2007 04:51 [Source: ICIS news]
BEIJING (ICIS news)--Sinopec and its South Korean partner SK Energy are expected to break ground on their cracker project in central China in the next few days for start-up in 2011, sources from the Chinese energy major said on Tuesday.
The companies, which received approval for the yuan (CNY) 1.9bn ($304m) project in ?xml:namespace>
The project is Sinopec's fourth in its 11th five-year plan (2006-2010) which will add close to 4m tonnes of ethylene.
Construction on the other three crackers in
A
Derivatives at Wuhan could include a 500,000 tonne/year benzene, toluene and mixed xylenes (BTX) unit, two 300,000 tonne/year polyethylene (PE) units and a 400,000 tonne/year polypropylene (PP) unit, a second source said, adding that low density PE (LDPE) could be produced.
A 300,000 tonne/year ethylene oxide/monoethylene glycol (EO/MEG) unit was also planned as there were many EO-derivative factories in the region, he said.
However, negotiations on the derivatives were still ongoing as SK was keen to include more products, the sources said, adding that the current plan could change later.
SK Energy could not be reached for comment.
($1=CNY. | http://www.icis.com/Articles/2007/12/18/9087456/sinopec+sk+energy+to+start+work+at+wuhan+cracker.html | CC-MAIN-2013-20 | refinedweb | 208 | 64.54 |
Type: Posts; User: shani09
I'm creating an agenda program using windows form application, I'm trying to load data from a file into a hash table and display this data into text boxes. The labels on the text boxes are dates but...
Although it was a past exam question i got from a friend
okay then... i'll try to write it using structures...
i dont know that's why i'm asking.....
yeah i know you're right and i also know using a class or struct is a possible solution but i want help on how to use a function for the same solution.. that's what the question is really all about
no i dont actually, i didnt try that coz the question said i use a function..
Hello, any help of any kind will be very much appreciated.
I've done the other part of the question but my code is actually useless without this first part
here it is:
"Write down a function...
please tell me my error
Okay here is what i wrote. it is giving me wrong results:
#include <iostream>
using namespace std;
int SIZEE (int [],int size, int&, int&);
int main()
{
const int SIZE = 100;
anyone pls?
Hello eveyone;
Please does any one know how to return the location of the maximum and minimum elements in an array from a function to the main. I mean the locations not the values. Like if the...
thank you
okay thanks....
But one more question: it says in the question that X returns the integers back to its caller.
What is going to be the return statement assuming i use your 1st guideline.?
Hi everyone
Please i need help with this question. it says:
Write down a function that reads in a set of integers into an array X and returns it to its caller . Assume that the maximum size of... | http://forums.codeguru.com/search.php?s=761a92a04b83464ce620ec4ef1f9d56c&searchid=9125473 | CC-MAIN-2016-30 | refinedweb | 317 | 81.12 |
I have a script named test1.py which is not in a module. It just has code that should execute when the script itself is run. There are no functions, classes, methods etc. I have another script which runs as a service. I want to call test1.py from the script running as a service.
eg:
test1.py
print "I am a test"
print "see! I do nothing productive."
# lots of stuff here
test1.py # do whatever is in test1.py
The usual way to do this is something like the following.
test1.py
def some_func():
    print 'in test 1, unproductive'

if __name__ == '__main__':
    # test1.py executed as script
    # do something
    some_func()
service.py
import test1

def service_func():
    print 'service func'

if __name__ == '__main__':
    # service.py executed as script
    # do something
    service_func()
    test1.some_func()
The OpenWhisk runtime specification defines the expected behavior of an OpenWhisk runtime; you can choose to implement a new runtime from scratch by just following this specification. However, the fastest way to develop a new, compliant runtime is by reusing the ActionLoop proxy which already implements most of the specification and requires you to write code for just a few hooks to get a fully functional (and fast) runtime in a few hours or less.
The ActionLoop proxy is a runtime "engine", written in the Go programming language, originally developed specifically to support the OpenWhisk Go language runtime. However, it was written in a generic way such that it has since been adopted to implement OpenWhisk runtimes for Swift, PHP, Python, Rust, Java, Ruby, and Crystal. Even though it was developed with compiled languages in mind, it works equally well with scripting languages.
Using it, you can develop a new runtime in a fraction of the time needed for authoring a full-fledged runtime from scratch. This is because you only have to write a command-line protocol, not a fully featured web server (with only a small number of corner cases to consider). The result is a runtime that is fairly fast and responsive. In fact, the ActionLoop proxy has also been adopted to improve the performance of existing runtimes like Python, Ruby, PHP, and Java, where performance has improved by a factor between 2x and 20x.
In addition to being the basis for new runtime development, ActionLoop runtimes can also support offline “precompilation” of OpenWhisk Action source files into a ZIP file that contains only the compiled binaries which are very fast to start once deployed. More information on this approach can be found here: Precompiling Go Sources Offline which describes how to do this for the Go language, but the approach applies to any language supported by ActionLoop.
This section contains a stepwise tutorial which will take you through the process of developing a new ActionLoop runtime using the Ruby language as the example.
The general procedure for authoring a runtime with the ActionLoop proxy requires the following steps:
To facilitate the process, there is an actionloop-starter-kit in the openwhisk-devtools GitHub repository that implements a fully working runtime for Python. It contains a stripped-down version of the real Python runtime (with some advanced features removed) along with guided, step-by-step instructions on how to translate it to a different target runtime language, using Ruby as an example.
In short, the starter kit provides templates you can adapt in creating an ActionLoop runtime for each of the steps listed above. These include:
- checking out the actionloop-starter-kit from the openwhisk-devtools repository
- editing the Dockerfile to create the target environment for your target language
- converting (rewriting) the launcher.py script to an equivalent script for your target language
- editing the compile script to compile your action in your target language
- writing the mandatory tests for your target language, by adapting the ActionLoopPythonBasicTests.scala file
As a starting language, we chose Python since it is one of the more human-readable languages (it can be treated as pseudo-code). Do not worry: you should only need just enough Python knowledge to be able to rewrite launcher.py and edit the compile script for your target language.
Finally, you will need to update the ActionLoopPythonBasicTests.scala test file which, although written in the Scala language, only serves as a wrapper in which to embed your target-language tests.
In each step of this tutorial, we typically show snippets of either terminal transcripts (i.e., commands and results) or “diffs” of changes to existing code files.
Within terminal transcript snippets, comments are prefixed with the # character and commands are prefixed with the $ character. Lines that follow commands may include sample output (from their execution) which can be used to verify against results in your local environment.
When snippets show changes to existing source files, lines without a prefix should be left "as is", lines prefixed with - should be removed, and lines prefixed with + should be added.
# Verify docker version
$ docker --version
Docker version 18.09.3

# Verify docker is running
$ docker ps
# The result should be a valid response listing running processes
So let's start to create our own actionloop-demo-ruby-2.6 runtime. First, check out the devtools repository to access the starter kit, then move it into your home directory to work on it.
$ git clone
$ mv openwhisk-devtools/actionloop-starter-kit ~/actionloop-demo-ruby-v2.6
Now, take the directory python3.7 and rename it to ruby2.6, and use sed to fix the directory name references in the Gradle build files.
$ cd ~/actionloop-demo-ruby-v2.6
$ mv python3.7 ruby2.6
$ sed -i.bak -e 's/python3.7/ruby2.6/' settings.gradle
$ sed -i.bak -e 's/actionloop-demo-python-v3.7/actionloop-demo-ruby-v2.6/' ruby2.6/build.gradle
Let's check that everything is fine by building the image.
# building the image
$ ./gradlew distDocker
# ... intermediate output omitted ...
BUILD SUCCESSFUL in 1s
2 actionable tasks: 2 executed

# checking the image is available
$ docker images actionloop-demo-ruby-v2.6
REPOSITORY                  TAG     IMAGE ID      CREATED        SIZE
actionloop-demo-ruby-v2.6   latest  df3e77c9cd8f  2 minutes ago  94.3MB
At this point, we have built a new image named actionloop-demo-ruby-v2.6. However, despite having Ruby in the name, internally it is still a Python language runtime, which we will need to change to one supporting Ruby as we continue in this tutorial.
Our language runtime's Dockerfile has the task of preparing an environment for executing OpenWhisk Actions. Using the ActionLoop approach, we use a multistage Docker build to:
1. derive our image from the ruby:2.6.2-alpine3.9 image from the Official Docker Images for Ruby on Docker Hub.
2. take the openwhisk/actionloop-v2 image on Docker Hub, from which we will "extract" the ActionLoop proxy (i.e., copy the /bin/proxy binary) our runtime will use to process Activation requests from the OpenWhisk platform and execute Actions by using the language's tools and libraries from step #1.
Let's edit the ruby2.6/Dockerfile to use the official Ruby image on Docker Hub as our base image, instead of a Python image, and add our Ruby launcher script:
FROM openwhisk/actionloop-v2:latest as builder
-FROM python:3.7-alpine
+FROM ruby:2.6.2-alpine3.9
RUN mkdir -p /proxy/bin /proxy/lib /proxy/action
WORKDIR /proxy
COPY --from=builder /bin/proxy /bin/proxy
-ADD lib/launcher.py /proxy/lib/launcher.py
+ADD lib/launcher.rb /proxy/lib/launcher.rb
ADD bin/compile /proxy/bin/compile
+RUN apk update && apk add python3
ENV OW_COMPILER=/proxy/bin/compile
ENTRYPOINT ["/bin/proxy"]
Next, let's rename the launcher.py (a Python script) to one that indicates it is a Ruby script, named launcher.rb.
$ mv ruby2.6/lib/launcher.py ruby2.6/lib/launcher.rb
Note that:
- we changed the base image to the Ruby language image
- we renamed the launcher script from Python to Ruby
- we added the python3 package to our Ruby image, since our compile script will be written in Python for this tutorial. Of course, you may choose to rewrite the compile script in Ruby if you wish to, as your own exercise.
This section will take you through how to convert the contents of launcher.rb (formerly launcher.py) to the target Ruby programming language and implement the ActionLoop protocol.
Let's recap the steps the launcher must accomplish to implement the ActionLoop protocol:
1. Import the Action's main method for execution. (The compile script will make the function available to the launcher.)
2. Open file descriptor 3, which will be used to output the function's response.
3. Read stdin, line-by-line. Each line is parsed as a JSON string and produces a JSON object (not an array nor a scalar) to be passed as the input arg to the function.
4. Within that object, the value key contains the user parameter data to be passed to your functions. All the other keys are made available as process environment variables; these need to be uppercased and prefixed with "__OW_".
5. Invoke the main function with the JSON object payload.
6. Encode the function's result as a single line of JSON and write it to file descriptor 3.
7. Once the function returns the result, flush the contents of stdout, stderr and file descriptor 3 (FD 3).
Now, let's look at the protocol described above, codified within the launcher script launcher.rb, and work to convert its contents from Python to Ruby.
Skipping the first few library import statements within launcher.rb, which we will have to resolve later after we determine which ones Ruby may need, we see the first significant line of code importing the actual Action function.
# now import the action as process input/output
from main__ import main as main
In Ruby, this can be rewritten as:
# requiring user's action code
require "./main__"
Note that you are free to decide the path and filename for the function's source code. In our examples, we chose a base filename that includes the word "main" (since it is OpenWhisk's default function name) and appended two underscores to better assure uniqueness.
The ActionLoop proxy expects to read the results of invoking the Action function from File Descriptor (FD) 3.
The existing Python:
out = fdopen(3, "wb")
would be rewritten in Ruby as:
out = IO.new(3)
Each time the function is invoked via an HTTP request, the ActionLoop proxy passes the message contents to the launcher via STDIN. The launcher must read STDIN line-by-line and parse it as JSON.
The launcher's existing Python code reads STDIN line-by-line as follows:
while True:
    line = stdin.readline()
    if not line:
        break
    # ...continue...
would be translated to Ruby as follows:
while true
  # JSON arguments get passed via STDIN
  line = STDIN.gets()
  break unless line
  # ...continue...
end
Each line is parsed as JSON, and the payload is extracted from the contents of the "value" key. Other keys and their values are exported as uppercased, "__OW_"-prefixed environment variables:
The existing Python code for this is:
# ... continuing ...
args = json.loads(line)
payload = {}
for key in args:
    if key == "value":
        payload = args["value"]
    else:
        os.environ["__OW_%s" % key.upper()] = args[key]
# ... continue ...
would be translated to Ruby:
# ... continuing ...
args = JSON.parse(line)
payload = {}
args.each do |key, value|
  if key == "value"
    payload = value
  else
    # set environment variables for other keys
    ENV["__OW_#{key.upcase}"] = value
  end
end
# ... continue ...
We are now at the point of invoking the Action function and producing its result. Note we must also capture exceptions and produce an {"error": <result>} if anything goes wrong during execution.
The existing Python code for this is:
# ... continuing ...
res = {}
try:
    res = main(payload)
except Exception as ex:
    print(traceback.format_exc(), file=stderr)
    res = {"error": str(ex)}
# ... continue ...
would be translated to Ruby:
# ... continuing ...
res = {}
begin
  res = main(payload)
rescue Exception => e
  puts "exception: #{e}"
  res["error"] = "#{e}"
end
# ... continue ...
Finally, we need to write the function's result to File Descriptor (FD) 3 and “flush” standard out (stdout), standard error (stderr) and FD 3.
The existing Python code for this is:
out.write(json.dumps(res, ensure_ascii=False).encode('utf-8'))
out.write(b'\n')
stdout.flush()
stderr.flush()
out.flush()
would be translated to Ruby:
STDOUT.flush()
STDERR.flush()
out.puts(res.to_json)
out.flush()
Congratulations! You just completed your ActionLoop request handler.
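Putting the snippets above together gives a complete launcher. The sketch below restructures the loop body into two helper methods, handle_line and run_loop (names chosen here only so the JSON-parsing logic can be exercised outside the proxy; the proxy itself does not require them), and leaves the require of the compiled action as a comment since main__.rb only exists after the compile step:

```ruby
require "json"

# In the real launcher the compiled action is loaded here:
#   require "./main__"

# Parse one request line: the "value" key is the action's payload,
# every other key becomes an uppercased __OW_ environment variable.
def handle_line(line)
  args = JSON.parse(line)
  payload = {}
  args.each do |key, value|
    if key == "value"
      payload = value
    else
      ENV["__OW_#{key.upcase}"] = value.to_s
    end
  end
  payload
end

# Process requests until stdin closes, writing one JSON response
# per line to `out` (file descriptor 3 when run under the proxy).
def run_loop(input = STDIN, out = IO.new(3))
  while (line = input.gets)
    payload = handle_line(line)
    res = {}
    begin
      res = main(payload)
    rescue Exception => e
      puts "exception: #{e}"
      res = { "error" => "#{e}" }
    end
    STDOUT.flush
    STDERR.flush
    out.puts(res.to_json)
    out.flush
  end
end

# The real launcher simply ends with:
#   run_loop
```

Because the loop takes its input and output streams as parameters, it can be driven from any IO-like object, which is handy when experimenting without the proxy.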
Now, we need to write the compilation script. It is basically a script that will prepare the uploaded sources for execution, adding the launcher code and generating the final executable.
For interpreted languages, the compilation script will only “prepare” the sources for execution. The executable is simply a shell script to invoke the interpreter.
For compiled languages, like Go, it will actually invoke a compiler in order to produce the final executable. There are also cases like Java where we still need to execute a compilation step that produces intermediate code, but the executable is just a shell script that will launch the Java runtime.
The OpenWhisk user can upload actions with the wsk Command Line Interface (CLI) tool as a single file. This single file can be:
- a source file
- an executable file
- a zip file collecting sources
- a zip file collecting an executable and support files
Important: an executable for ActionLoop is either a Linux binary (an ELF executable) or a script. Following Linux conventions, a script is anything starting with #!; the first line is interpreted as the command to use to launch the script: #!/bin/bash, #!/usr/bin/python, etc.
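To make the convention concrete, the check can be sketched in a few lines of Ruby. The real proxy implements this in Go; the function name actionloop_executable? below is made up for the example. A file counts as an executable if it starts with the four-byte ELF magic number or with a #! shebang:

```ruby
# Illustrative check mirroring the ActionLoop convention:
# an "executable" is an ELF binary or a script starting with "#!".
ELF_MAGIC = "\x7FELF".b

def actionloop_executable?(path)
  head = File.binread(path, 4) || "".b   # first 4 bytes (nil on empty file)
  head == ELF_MAGIC || head.start_with?("#!".b)
end
```

Anything else, such as a bare Ruby source file, is treated as source code and handed to the compilation script.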
The ActionLoop proxy accepts any file, prepares a work folder with two folders in it named "src" and "bin", and then detects the format of the uploaded file. For each case, the behavior is different.
- If the file is an executable, it is stored as bin/exec and executed.
- If the file is a source file, it is stored as src/exec, then the compilation script is invoked.
- If the file is a zip file, it is unzipped into the src folder, then the src/exec file is checked.
  - If it exists and is executable, the src folder is renamed to bin and then, again, the bin/exec is executed.
  - If src/exec is missing or is not an executable, then the compiler script is invoked.
The compilation script is invoked only when the upload contains sources. According to the description in the previous paragraph, if the upload is a single file, we can expect the file to be in src/exec, without any prefix. Otherwise, sources are spread across the src folder and it is the task of the compiler script to find them. A runtime may impose that, when a zip file is uploaded, there is a fixed file with the main function; for example, the Python runtime expects the file __main__.py. However, it is not a rule: the Go runtime does not require any specific file as it compiles everything. It only requires a function with the name specified.
The compiler script's goal is ultimately to leave in bin/exec an executable (implementing the ActionLoop protocol) that the proxy can launch. Also, if the executable is not standalone, other files must be stored in this folder, since the proxy can also zip all of them and send them to the user when using the pre-compilation feature.
The compilation script is the script pointed to by the OW_COMPILER environment variable (you may have noticed it in the Dockerfile) and will be invoked with 3 parameters:
1. <main> is the name of the main function specified by the user on the wsk command line
2. <src> is the absolute directory with the sources already unzipped
3. <bin> is the directory where we are expected to place our final executables
Note that both the <src> and <bin> folders are disposable, so we can do things like removing the <bin> folder and renaming the <src>.
Since the user generally only sends a function specified by the <main> parameter, we have to add the launcher we wrote and adapt it to execute the function.
Implementing compile for Ruby
This is the algorithm that the compile script in the kit follows for Python:
1. If there is a file <src>/exec, it must be renamed to the main file; I use the name main__.py.
2. If there is a file <src>/__main__.py, it will be renamed to the main file main__.py.
3. It copies launcher.py to exec__.py, replacing the main(arg) with <main>(arg); this file imports the main__.py and invokes the function <main>.
4. It writes a startup script in <src>/exec.
5. Finally, it removes the <bin> folder and renames <src> to <bin>.
We can adapt this algorithm easily to Ruby with just a few changes.
The script defines the functions sources and build, then starts the execution at the end of the script.
Start from the end of the script, where the script collects parameters from the command line. Instead of launcher.py, use launcher.rb:
- launcher = "%s/lib/launcher.py" % dirname(dirname(sys.argv[0])) + launcher = "%s/lib/launcher.rb" % dirname(dirname(sys.argv[0]))
Then the script invokes the source function. This function renames the exec file to main__.py; you will rename it instead to main__.rb:
- copy_replace(src_file, "%s/main__.py" % src_dir)
+ copy_replace(src_file, "%s/main__.rb" % src_dir)
If instead there is a __main__.py, the function will rename it to main__.py (the launcher always invokes this file). The Ruby runtime will use a main.rb as its starting point. So the next change is:
- # move __main__ in the right place if it exists
- src_file = "%s/__main__.py" % src_dir
+ # move main.rb in the right place if it exists
+ src_file = "%s/main.rb" % src_dir
Now, the source function copies the launcher as exec__.py, replacing the line from main__ import main as main (which invokes the main function) with from main__ import <main> as main. In Ruby you will want to replace the line res = main(payload) with res = <main>(payload). In code it is:
- copy_replace(launcher, "%s/exec__.py" % src_dir,
-              "from main__ import main as main",
-              "from main__ import %s as main" % main)
+ copy_replace(launcher, "%s/exec__.rb" % src_dir,
+              "res = main(payload)",
+              "res = %s(payload)" % main)
We are almost done. We just need the startup script to invoke Ruby instead of Python. So in the build function make this change:
write_file("%s/exec" % tgt_dir, """#!/bin/sh cd "$(dirname $0)" -exec /usr/local/bin/python exec__.py +exec ruby exec__.rb """)
For an interpreted language, that is all: we move the src folder to bin. For a compiled language, instead, we may want to actually invoke the compiler to produce the executable.
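If you do take up the earlier suggestion to rewrite compile in Ruby as an exercise, the two core helpers translate directly. This is a sketch, not the full script: copy_replace mirrors the Python kit's helper of the same name, and write_exec (a name invented here) writes the same bin/exec wrapper that build produces:

```ruby
# Copy a file, optionally replacing one string on the way,
# mirroring the Python kit's copy_replace helper.
def copy_replace(src, dst, from = nil, to = nil)
  body = File.read(src)
  body = body.gsub(from, to) if from
  File.write(dst, body)
end

# Write the startup script that launches the Ruby launcher,
# and mark it executable so the proxy can run it.
def write_exec(tgt_dir)
  File.write("#{tgt_dir}/exec", <<~SCRIPT)
    #!/bin/sh
    cd "$(dirname $0)"
    exec ruby exec__.rb
  SCRIPT
  File.chmod(0755, "#{tgt_dir}/exec")
end
```

The rest of the script is bookkeeping: renaming the uploaded file to main__.rb, copying the launcher as exec__.rb with the function name substituted, and renaming src to bin.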
Now that we have completed both the launcher and compile scripts, it is time to test them.
Here we will learn how to start the runtime locally in debug mode, deploy test actions to it with the invoke.py tool, and troubleshoot from a shell inside the container.
In the starter kit, there is a Makefile that can help with our development efforts.
We can build the Docker image using the provided Makefile. Since it has a reference to the image we are building, let's change it:
sed -i.bak -e 's/actionloop-demo-python-v3.7/actionloop-demo-ruby-v2.6/' ruby2.6/Makefile
We should now be able to build the image and enter it with make debug. It will rebuild the image for us and put us into a shell so we can access the image environment for testing and debugging:
$ cd ruby2.6
$ make debug
# results omitted for brevity ...
Let's start with a couple of notes about this test environment.
First, use --entrypoint=/bin/sh when starting the image to have a shell available at our image entrypoint. Generally, this is true by default; however, in some stripped down base images a shell may not be available.
Second, the /proxy folder is mounted in our local directory, so that we can edit the bin/compile and the lib/launcher.rb using our editor outside the Docker image.
NOTE: It is not necessary to rebuild the Docker image with every change when using make debug, since directories and environment variables used by the proxy indicate where the code outside the Docker container is located.
Once at the shell prompt that we will use for development, we will have to start and stop the proxy. The shell will help us to inspect what happened inside the container.
It is time to test. Let's write a very simple test first, converting example/hello.py into example/hello.rb to appear as follows:
def hello(args)
  name = args["name"] || "stranger"
  greeting = "Hello #{name}!"
  puts greeting
  { "greeting" => greeting }
end
Now change into the ruby2.6 subdirectory of our runtime project and in one terminal type:
$ cd <projectdir>/ruby2.6
$ make debug
# results omitted for brevity ...
# (you should see a shell prompt of your image)
$ /bin/proxy -debug
2019/04/08 07:47:36 OpenWhisk ActionLoop Proxy 2: starting
Now the runtime is started in debug mode, listening on port 8080, and ready to accept Action deployments.
Open another terminal (while leaving the first one running the proxy) and go into the top-level directory of our project to test the Action by executing an init and then a couple of run requests using the tools/invoke.py test script.
These steps should look something like this in the second terminal:
$ cd <projectdir>
$ python tools/invoke.py init hello example/hello.rb
{"ok":true}
$ python tools/invoke.py run '{}'
{"greeting":"Hello stranger!"}
$ python tools/invoke.py run '{"name":"Mike"}'
{"greeting":"Hello Mike!"}
We should also see debug output from the first terminal running the proxy (with the -debug flag), which should have successfully processed the init and run requests above.
The proxy's debug output should appear something like:
/proxy # /bin/proxy -debug
2019/04/08 07:54:57 OpenWhisk ActionLoop Proxy 2: starting
2019/04/08 07:58:00 compiler: /proxy/bin/compile
2019/04/08 07:58:00 it is source code
2019/04/08 07:58:00 compiling: ./action/16/src/exec main: hello
2019/04/08 07:58:00 compiling: /proxy/bin/compile hello action/16/src action/16/bin
2019/04/08 07:58:00 compiler out: , <nil>
2019/04/08 07:58:00 env: [__OW_API_HOST=]
2019/04/08 07:58:00 starting ./action/16/bin/exec
2019/04/08 07:58:00 Start:
2019/04/08 07:58:00 pid: 13
2019/04/08 07:58:24 done reading 13 bytes
Hello stranger!
XXX_THE_END_OF_A_WHISK_ACTIVATION_XXX
XXX_THE_END_OF_A_WHISK_ACTIVATION_XXX
2019/04/08 07:58:24 received::{"greeting":"Hello stranger!"}
2019/04/08 07:58:54 done reading 27 bytes
Hello Mike!
XXX_THE_END_OF_A_WHISK_ACTIVATION_XXX
XXX_THE_END_OF_A_WHISK_ACTIVATION_XXX
2019/04/08 07:58:54 received::{"greeting":"Hello Mike!"}
Of course, it is very possible something went wrong. Here are a few debugging suggestions:
The ActionLoop runtime (proxy) can only be initialized once using the init command from the invoke.py script. If we need to re-initialize the runtime, we need to stop the runtime (i.e., with Control-C) and restart it.
We can also check what is in the action folder. The proxy creates a numbered folder under action and then src and bin folders within it.
For example, using a terminal window, we would see a directory and file structure created by a single action:
$ find action/
action/1
action/1/bin
action/1/bin/exec__.rb
action/1/bin/exec
action/1/bin/main__.rb
Note that the exec starter, exec__.rb launcher, and main__.rb action code have all been copied under a directory numbered 1.
In addition, we can try to run the action directly and see if it behaves properly:
$ cd action/1/bin
$ ./exec 3>&1
{"value":{"name":"Mike"}}
Hello Mike!
{"greeting":"Hello Mike!"}
Note we redirected file descriptor 3 to stdout to check what is happening; note that logs appear in stdout too.
Also, we can test the compiler by invoking it directly.
First, let's prepare the environment as it appears when we have just uploaded the action:
$ cd /proxy
$ mkdir -p action/2/src action/2/bin
$ cp action/1/bin/main__.rb action/2/src/exec
$ find action/2
action/2
action/2/bin
action/2/src
action/2/src/exec
Now compile and examine the results again:
$ /proxy/bin/compile main action/2/src action/2/bin
$ find action/2
action/2/
action/2/bin
action/2/bin/exec__.rb
action/2/bin/exec
action/2/bin/main__.rb
If we have reached this point in the tutorial, the runtime is able to run and execute a simple test action. Now we need to validate the runtime against a set of mandatory tests both locally and within an OpenWhisk staging environment. Additionally, we should author and automate additional tests for language specific features and styles.
The starter kit includes two handy makefiles that we can leverage for some additional tests. In the next sections, we will show how to update them for testing our Ruby runtime.
So far we tested only an Action comprised of a single file. We should also test multi-file Actions (i.e., those with relative imports) sent to the runtime in both source and binary formats.
First, let's try a multi-file Action by creating a Ruby Action script named example/main.rb that invokes our hello.rb as follows:
require "./hello" def main(args) hello(args) end
Within the example/Makefile makefile, change the name of the image to ruby-v2.6 as well as the name of the main action:
-IMG=actionloop-demo-python-v3.7:latest
-ACT=hello-demo-python
-PREFIX=docker.io/openwhisk
+IMG=actionloop-demo-ruby-v2.6:latest
+ACT=hello-demo-ruby
+PREFIX=docker.io/<docker username>
Now, we are ready to test the various cases. Again, start the runtime proxy in debug mode:
$ cd ruby2.6
$ make debug
$ /bin/proxy -debug
On another terminal, try to deploy a single file:
$ make test-single
python ../tools/invoke.py init hello ../example/hello.rb
{"ok":true}
python ../tools/invoke.py run '{}'
{"greeting":"Hello stranger!"}
python ../tools/invoke.py run '{"name":"Mike"}'
{"greeting":"Hello Mike!"}
Now, stop and restart the proxy and try to send a ZIP file with the sources:
$ make test-src-zip
zip src.zip main.rb hello.rb
  adding: main.rb (deflated 42%)
  adding: hello.rb (deflated 42%)
python ../tools/invoke.py init ../example/src.zip
{"ok":true}
python ../tools/invoke.py run '{}'
{"greeting":"Hello stranger!"}
python ../tools/invoke.py run '{"name":"Mike"}'
{"greeting":"Hello Mike!"}
Finally, test the pre-compilation: the runtime builds a zip file with the sources ready to be deployed. Again, stop and restart the proxy then:
$ make test-bin-zip
docker run -i actionloop-demo-ruby-v2.6:latest -compile main <src.zip >bin.zip
python ../tools/invoke.py init ../example/bin.zip
{"ok":true}
python ../tools/invoke.py run '{}'
{"greeting":"Hello stranger!"}
python ../tools/invoke.py run '{"name":"Mike"}'
{"greeting":"Hello Mike!"}
Congratulations! The runtime works locally! Time to test it on the public cloud. So as the last step before moving forward, let's push the image to Docker Hub with
make push.
To run this test you need to configure access to OpenWhisk with
wsk. A simple way is to get access is to register a free account in the IBM Cloud but this works also with our own deployment of OpenWhisk.
Edit the Makefile as we did previously:
IMG=actionloop-demo-ruby-v2.6:latest ACT=hello-demo-ruby PREFIX=docker.io/<docker username>
Also, change any reference to
hello.py and
main.py to
hello.rb and
main.rb.
Once this is done, we can re-run the tests we executed locally on “the real thing”.
Test single:
$ make test-single wsk action update hello-demo-ruby hello.rb --docker docker.io/linus/actionloop-demo-ruby-v2.6:latest --main hello ok: updated action hello-demo-ruby wsk action invoke hello-demo-ruby -r { "greeting": "Hello stranger!" } wsk action invoke hello-demo-ruby -p name Mike -r { "greeting": "Hello Mike!" }
Test source zip:
$ make test-src-zip zip src.zip main.rb hello.rb adding: main.rb (deflated 42%) adding: hello.rb (deflated 42%) wsk action update hello-demo-ruby src.zip --docker docker.io/linus/actionloop-demo-ruby-v2.6:latest ok: updated action hello-demo-ruby wsk action invoke hello-demo-ruby -r { "greeting": "Hello stranger!" } wsk action invoke hello-demo-ruby -p name Mike -r { "greeting": "Hello Mike!" }
Test binary ZIP:
$ make test-bin-zip docker run -i actionloop-demo-ruby-v2.6:latest -compile main <src.zip >bin.zip wsk action update hello-demo-ruby bin.zip --docker docker.io/actionloop/actionloop-demo-ruby-v2.6:latest ok: updated action hello-demo-ruby wsk action invoke hello-demo-ruby -r { "greeting": "Hello stranger!" } wsk action invoke hello-demo-ruby -p name Mike -r { "greeting": "Hello Mike!" }
Congratulations! Your runtime works also in the real world.
Before you can submit your runtime you should ensure your runtime pass the validation tests.
Under
tests/src/test/scala/runtime/actionContainers/ActionLoopPythonBasicTests.scala there is the template for the test.
Rename to
tests/src/test/scala/runtime/actionContainers/ActionLoopRubyBasicTests.scala, change internally the class name to
class ActionLoopRubyBasicTests and implement the following test cases:
testNotReturningJson
testUnicode
testEnv
testInitCannotBeCalledMoreThanOnce
testEntryPointOtherThanMain
testLargeInput
You should convert Python code to Ruby code. We do not do go into the details of each test, as they are pretty simple and obvious. You can check the source code for the real test here.
You can verify tests are running properly with:
$ ./gradlew test Starting a Gradle Daemon, 1 busy Daemon could not be reused, use --status for details > Task :tests:test runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should handle initialization with no code PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should handle initialization with no content PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should run and report an error for function not returning a json object PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should fail to initialize a second time PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should invoke non-standard entry point PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should echo arguments and print message to stdout/stderr PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should handle unicode in source, input params, logs, and result PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should confirm expected environment variables PASSED runtime.actionContainers.ActionLoopPythoRubyTests > runtime proxy should echo a large input PASSED BUILD SUCCESSFUL in 55s
Big congratulations are in order having reached this point successfully. At this point, our runtime should be ready to run on any OpenWhisk platform and also can be submitted for consideration to be included in the Apache OpenWhisk project. | https://apache.googlesource.com/openwhisk/+/HEAD/docs/actions-actionloop.md | CC-MAIN-2020-45 | refinedweb | 4,783 | 58.08 |
Introduction to Haskell IO/Actions (
let or
where).
The arrow indicates that the result of an action is being bound. The
type of
getLine is
IO String, and the arrow
binds the result of the action to
line which will be of
type
String.
<-) is used in the binding and not an equal sign (as is done when binding
getLine to make another action, which can be combined with
putStrLn "Enter two lines" to make another more complicated)
In this example
return (line1 ++ " and " ++ line2) is an
action of type
IO String that doesn't affect the outside world in any way, but results in a string that combines
line1 and
line2.)
let which does not require the
in keyword! | https://wiki.haskell.org/index.php?title=Introduction_to_Haskell_IO/Actions&direction=next&oldid=62799&printable=yes | CC-MAIN-2022-21 | refinedweb | 120 | 72.9 |
Post your Comment
VoIP Quality Review
VoIP Quality Review
... line.
VoIP Of Quality with Assenment and Prediction
Speech Quality of VoIP is an essential guide to assessing the speech quality of VoIP
Review
Review Friends please help e on this
How to write java code to display the next question onclicking review button
Project Quality Management
, and hence allows you to report on the status of quality at periodic project review...The Project Quality Management Plan is an essential part of project management, which is required for the effective management of project quality from
rate and review app iphone
rate and review app iphone How to rate and review app in iphone
AJAX Review
AJAX Review
AJAX and web 2.0 dissected. The purpose of this site is simple. Take nifty
web-tools, analyze them, post screenshots, and help people find useful (and not
so useful
pls review my code - Struts
pls review my code Hello friends,
This is the code in struts. when i click on the submit button.
It is showing the blank page. Pls respond soon its urgent.
Thanks in advance.
public class LOGINAction extends Action
Quality Assurance Management
Quality Assurance Management Deming, Juran and Crosby all believe in striving toward world class quality. Each gentleman has his own approach.... This point must be backed up by specific aspects from each quality improvement plans
Software Quality with JAVA
Software Quality with JAVA
Quality achievements in any software i.e. "quality of source code"
it matters a lot to any project.
Software Quality depends
Total Quality Management
competitive time, branding and quality has become a crucial factor for marketing... for your lapses.
Total Quality Management (TQM) is a part of business management, which deals with improving the quality of every aspects of business whether its
HELP! Design Review & Risk Management topics (in Java projects) required - Development process
HELP! Design Review & Risk Management topics (in Java projects) required Hi,
I have to take 2 one hour sessions on the following topics:
* Project Design Review
* Project Risk Management
Can you please recommend any
How to attach file to HP Quality Center [QC] using java
How to attach file to HP Quality Center [QC] using java Hello All,
Do any one know how to upload attachments to QC Test Case Run Instance using java
In QC we have <BR>
Test Lab<BR>
|_Test Set Instcance<
Understanding quality of service for Web services Improving the performance of your Web services
Understanding quality of service for Web
services Improving...
services, quality of service (QoS) will become a significant factor... of providing service quality, transactional services, and a
simple method
TheOpenCD is a collection of high quality Free and Open Source Software
TheOpenCD v2.0
Now Available TheOpenCD v2.0
TheOpenCD
is a collection of high quality Free and Open Source Software. The programs run
in Windows... browsing, web design, and image manipulation. We
include only the highest quality
Mindreef?s most powerful SOA Quality Management solution
Mindreef?s most powerful SOA
Quality Management solution
Multiple teams can leverage robust features while
collaborating to improve overall SOA quality;
class historyquiz{
public static void main(String args
Outsourcing Content Writing to India ? Quality vs. Costs
Outsourcing Content Writing to India – Quality vs. Costs
Outsourcing... more to the freelancers
is worth to get a good quality of work.
Does price determine the quality?
The thinking that content writing done at low cost
Java Beans Books
Java Beans Books
Java Beans
book review
The book... views clear on the quality of what's coming out of
Redmond.
VoIP Reviews
in testing, evaluating and identifying the best VoIP
.this review is actually... VoIP Reviews
Voice
Over IP Reviews
The most recent VoIP reviews say
GPS in Health Improvement & Physical Training
GPS in Health Improvement & Physical Training
The quality of health is now one... of time and the progress can be seen by making a time-by-time review of your
VoIP Providers
the cheapest Voice over IP service and compare, review and rank VoIP services from almost... VoIP Providers
VoIP Provided PC to phone services
With our free VoIP
Android Gaming
and high quality gaming experience is one important aspect of this. While not many... review the Android gaming experience as well as provide a short list of some
Open Source Intelligence
, uniquely adapted to the Internet, to developing high-quality informational products...: peer review, reputation- rather than sanctions-based authority, the free sharing
Code Collaborator
of peer review so developers
perform reviews in half the time while managers.... Review before or after check-in.
Before local file changes are checked into version control, upload them for review with a single click. Or upload
EasyEclipse for Python
EasyEclipse for Python
There are currently 10 comments for this distribution.
You can review them and add more here.
Composition
This distribution includes
EasyEclipse for Ruby and Rails
EasyEclipse for Ruby and Rails
There are currently 9 comments for this distribution. You can review them and add more here.
Composition
This distribution includes
Jupiter
Jupiter
Jupiter is a code review plug-in tool for the Eclipse
IDE. It is currently... and searching: Jupiter provides filters and
sorting to facilitate issue review
Parasoft Jtest
a team is trying to build quality into new code or extend a legacy code
base...).
Gain instant visibility into Java code's quality... towards quality and schedule targets.
Features
Automatically creates
The Quintessential in JSP
The Quintessential in JSP
Quentessential means representing the perfect example
of a class or quality. It is pure and concentrated essence of a substance
Post your Comment | http://www.roseindia.net/discussion/21350-VoIP-Quality-Review.html | CC-MAIN-2015-22 | refinedweb | 925 | 54.12 |
import "golang.org/x/crypto/xt. This package does not implement ciphertext-stealing so sectors must be a multiple of 16 bytes.
Note that XTS is usually not appropriate for any use besides disk encryption. Most users should use an AEAD mode like GCM (from crypto/cipher.NewGCM) instead.
Cipher contains an expanded key structure. It is safe for concurrent use if the underlying block cipher is safe for concurrent use.
NewCipher creates a Cipher given a function for creating the underlying block cipher (which must have a block size of 16 bytes). The key must be twice the length of the underlying cipher's key.
Decrypt decrypts a sector of ciphertext and puts the result into plaintext. Plaintext and ciphertext must overlap entirely or not at all. Sectors must be a multiple of 16 bytes and less than 2²⁴ bytes.
Encrypt encrypts a sector of plaintext and puts the result into ciphertext. Plaintext and ciphertext must overlap entirely or not at all. Sectors must be a multiple of 16 bytes and less than 2²⁴ bytes.
Package xts imports 5 packages (graph) and is imported by 2 packages. Updated 2019-07-01. Refresh now. Tools for package owners. | https://godoc.org/golang.org/x/crypto/xts | CC-MAIN-2019-35 | refinedweb | 198 | 67.25 |
Imagine a simple program like this:
def main(args: String[]):
val hostLocalValue = args(0).toInt
val someRdd = getSomeIntRdd
val mySum = someRdd
.map(x => if (x < 0) 1 else hostLocalValue)
.reduce(_ + _)
print(mySum)
In your example 'hostLocalValue' will be serialized and send to each worker nodes with 'map' closure. If you have 1000 partitions this variable will be distributed to workers 1000 times. Your variable is Int, so it's ok. But if you variable would be dictionary Map ~100mb, you'll have to send 100 gigs over network.
But if you'll wrap your dictionary in broadcast you have to ship it only once => Benefit! | https://codedump.io/share/V2c8BVQ6AQ0N/1/apache-spark-what-happens-when-one-uses-a-host-object-value-within-a-worker-that-has-not-been-broadcasted | CC-MAIN-2017-43 | refinedweb | 107 | 67.45 |
Shells that are more or less POSIX compliant are listed under #POSIX compliant, while shells that have a different syntax are under #Alternative shells.
POSIX compliant
These shells can all be linked from
/usr/bin/sh. When Bash, mkshAUR.
- Oil Shell (OSH) — Oil Shell is a Bash-compatible UNIX command-line shell. OSH can be run on most UNIX-like operating systems, including GNU/Linux. It is written in Python (v2.7), but ships with a native executable. The dialect of Bash recognized by OSH is called the OSH language.
- Yash — Yet another shell, is a POSIX-compliant command line shell written in C99 (ISO/IEC 9899:1999). Yash is intended to be the most POSIX-compliant shell in the world while supporting features for daily interactive and scripting use.
- || yash. Additionally, fish features significantly simplified programming syntax and control flow (similar to ruby). For more information, see the tutorial.
- ion — Ion is a modern system shell that features a simple, yet powerful, syntax. It is written entirely in Rust, which greatly increases the overall quality and security of the shell, eliminating the possibilities of a ShellShock-like vulnerability, and making development easier. It also offers a level of performance that exceeds that of Dash, when taking advantage of Ion's features. While it is developed alongside, and primarily for, RedoxOS, it is a fully capable on other *nix platforms. For more details lookup its manual.
- Nash — Nash is a system shell, inspired by plan9 rc, that makes it easy to create reliable and safe scripts taking advantages of operating systems namespaces (on linux and plan9) in an idiomatic way.
- nushell — Nu draws inspiration from functional programming languages, and modern CLI tools. Rather than thinking of files and services as raw streams of text, Nu looks at each input as something with structure.
-OS and Linux.
- rc — Command interpreter for Plan 9 that provides similar facilities to UNIX’s Bourne shell, with some small additions and less idiosyncratic syntax.
- xonsh — Python-powered shell with additional shell primitives that you are used to from Bash and IPython.
Changing your default shell
After installing one of.
/etc/shellsas reference. If a recently installed shell is not present on the list, it can be manually added to this file.
Uninstalling shell
Change the default shell before removing the package of the shell.
Alternatively, modify the user database.
Use it for every user with zsh set as their login shell (including root if needed). When completed, the package can be removed.
Login shell
A login shell is an invocation mode, in which the shell reads files intended for one-time initialization, such as system-wide
/etc/profile or the user's
~/.profile or other shell-specific file(s). These files set up the initial environment, which is inherited by all other processes started from the shell (including other non-login shells or graphical programs). Hence, they are read only once at the beginning of a session, which is, for example, when the user logs in to the console or via SSH, changes the user with sudo or su using the
--login parameter, or when the user manually invokes a login shell (e.g. by
bash --login).
See #Configuration files and the links therein for an overview of the various initialization files. For more information about login shell, see also Difference between Login Shell and Non-Login Shell? and Why a "login" shell over a "non-login" shell? on Stackexchange. for a comparison of various configuration files of various shells.
.
Standardisation
It is possible to make (some) shells configuration files follow the same naming convention, as well as supporting some common configuration between the shells.
See the article about this and the related repository. See also xsh. | https://wiki.archlinux.org/title/Elvish | CC-MAIN-2022-40 | refinedweb | 622 | 56.76 |
Opened 10 years ago
Closed 10 years ago
#7316 closed (fixed)
Use ugettext_lazy instead of ugettext in contrib.localflavor
Description
There are many files in
localflavor that uses
ugettext for defining fields and widgets, for example:
from django.utils.translation import ugettext class NOSocialSecurityNumber(Field): """ Algorithm is documented at """ default_error_messages = { 'invalid': ugettext(u'Enter a valid Norwegian social security number.'), } ...
This can produce a serious problem (in production server), because two things:
- String translation is got in import time, for example when web server starts. This string will never change.
- This is the most important problem: You could have error with circulars import, because
gettextcall will try import all applications installed on Django, but this file haven't loaded yet. I had this problem with Apache for a import cycle, because a
contrib.localflavorimport. This is not acceptable.
I will attach a patch for fix this problem.
Attachments (1)
Change History (3)
Changed 10 years ago by
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
Note: See TracTickets for help on using tickets.
Patch that fixes problem. | https://code.djangoproject.com/ticket/7316 | CC-MAIN-2018-34 | refinedweb | 183 | 55.54 |
Information about friend trees of a certain TTree or TChain object.
Definition at line 42 of file InternalTreeUtils.hxx.
#include <ROOT/InternalTreeUtils.hxx>
Names of the subtrees of a friend TChain.
fFriendChainSubNames[i] is the list of names of the trees that make a friend TChain whose information is stored at fFriendNames[i] and fFriendFileNames[i]. If instead the friend tree at position
i is a TTree, fFriendChainSubNames[i] will be just a vector with a single empty string.
Definition at line 57 of file InternalTreeUtils.hxx.
Names of the files where each friend is stored.
fFriendFileNames[i] is the list of files for friend with name fFriendNames[i].
Definition at line 49 of file InternalTreeUtils.hxx.
Pairs of names and aliases of friend trees/chains.
Definition at line 44 of file InternalTreeUtils.hxx. | https://root.cern/doc/master/structROOT_1_1Internal_1_1TreeUtils_1_1RFriendInfo.html | CC-MAIN-2021-25 | refinedweb | 134 | 60.31 |
Partest is a custom parallel testing tool that we use to run the test suite for the Scala compiler and library. Go the scala project folder from your local checkout and run it via
ant or standalone as follows.
The test suite can be run by using ant from the command line:
$ ant test.suite
There are launch scripts
partest and
partest.bat in the
test folder of the scala project. To have partest run failing tests only and print details about test failures to the console, you can use
./test/partest --show-diff --show-log --failed
You can get a summary of the usage by running partest without arguments.
--all,
--pos,
--negor
--run.
.scalafiles) as options. Several files can be tested if they are from the same category, e.g.,
pos.
-show-logand
-show-diffoptions.
--verbose. This info is useful as part of bug reports.
-classpath <path>and
-buildpath <path>.
SCALAC_OPTSenvironment variable to pass command line options to the compiler.
JAVA_OPTSenvironment variable to pass command line options to the runner (e.g., for
run/jvmtests).
The launch scripts run partest as follows:
scala -cp <path to partest classes> scala.tools.partest.nest.NestRunner <options>
Partest classes from a
quick build, e.g., can be found in
./build/quick/classes/partest/.
Partest will tell you where it loads compiler/library classes from by adding the
partest.debug property:
scala -Dpartest.debug=true -cp <path to partest classes> scala.tools.partest.nest.NestRunner <options>
Tests that depend on ScalaCheck can be added under folder
./test/files/scalacheck. A sample test:
import org.scalacheck._ import Prop._ object Test { val prop_ConcatLists = property{ (l1: ListInt, l2: ListInt) => l1.size + l2.size == (l1 ::: l2).size } val tests = List(("prop_ConcatLists", prop_ConcatLists)) }
Some tests might fail because line endings in the
.check files and the produced results do not match. In that case, set either
git config core.autocrlf false
or
git config core.autocrlf input
Contents | http://docs.scala-lang.org/tutorials/partest-guide.html | CC-MAIN-2014-52 | refinedweb | 322 | 70.29 |
Hey guys, I just got back into java after taking a class on it in high school 2 years ago. I'm trying to make this simple program that tells you in how many hours you'll reach your desired level in an MMORPG. So far my code looks like this:
import java.util.Scanner; public class enter { public static void main(String[] args) { double ExpToLevel; double ExpPerHour; double time; Scanner input = new Scanner(System.in); System.out.println("Enter your experience until level up"); ExpToLevel = input.nextInt(); System.out.println("How much experience do you get an hour?"); ExpPerHour = input.nextInt(); ExpToLevel / ExpPerHour = time; System.out.println("You'll reach your level in " + time); } }
It's telling me the equation is an unexpected type. How do I fix this problem? | http://www.javaprogrammingforums.com/whats-wrong-my-code/9587-really-simple-program-problem.html | CC-MAIN-2015-35 | refinedweb | 131 | 50.53 |
Thanks for stopping by to check out SwitchYard. This article provides a brief summary of what's inside SwitchYard 0.1. If you are completely new to SwitchYard and wondering what it is, this blog post provides some good background and pointers to additional information.
Getting Started
There a number of options for checking out what SwitchYard 0.1 has to offer. Listed in order of increasing time investment:
- Read the rest of this article. It provides an overview of the features in 0.1.
- Check out our brand new Getting Started and User Guide.
- Download and install a SwitchYard 0.1 release for JBoss AS 6 or AS 7.
- We have quintupled the number of available Quickstart applications! Run them as is and see SwitchYard in action. Or change some stuff and see what blows up.
- Build an application from scratch using fancy new Forge tooling.
- Build SwitchYard from source and contribute!
Feature Highlights
SwitchYard 0.1 has lots of cool features in it. This section highlights the main ones.;
See this feature in action in the demo/orders quickstart.
Camel Services
For more complex composition and routing use cases, we have added support for using Apache Camel as a routing engine within SwitchYard. This adds the powerful EIP support of Camel to SwitchYard and allows you to create routes using the Java DSL or XML routing languages.
Java DSL Route
@Route(HelloService.class) public class HelloBuilder extends RouteBuilder { public void configure() { from("switchyard://HelloService") .log("Message received in HelloService Route") .log("${body}") .split(body(String.class).tokenize("\n")) .filter(body(String.class).startsWith("sally:")) .to("switchyard://AnotherService?operationName=acceptMessage"); } }
XML Route
<implementation.camel> <route xmlns="" id="Camel Test Route"> <log message="ItemId [${body}]"/> <to uri="switchyard://WarehouseService?operationName=hasItem"/> <log message="Title Name [${body}]"/> </route> </implementation.camel>
See this feature in action in the camel-service quickstart..
The following Transform types are available in 0.1:
- Java
- Smooks
- JSON
See this feature in action in the transform-smooks, transform-json, and demo/orders quickstarts.
Gateway Bindings.
There are two gateways available in 0.1:
- SOAP - binds services and references to HTTP/SOAP
- Camel - provides the ability to use Camel config URIs to configure Camel components as gateway providers
See this feature in action in the camel-binding and demo/orders quickstarts.
Tooling
SwitchYard integrates with Forge to support rapid development of service-oriented applications. The SwitchYard distribution contains a set of Forge plugins which can be installed into an existing Forge install.
This page includes examples of how to get started with SwitchYard projects in Forge.:
- A base unit test class which bootstraps an embedded SwitchYard runtime and deploys your application
- A variety of Test MixIns which allow you to add specific testing support based on the requirements of your application
- A simplified Invoker interface which provides a fluent test client contract for invoking services in SwitchYard
See this feature in action in the transform-smooks and demo/orders quickstarts..
Runtimes
We have integrated SwitchYard with JBoss AS 6.0 and AS 7.0 Beta 3 for the 0.1 release. The release distributions contain a complete AS with SwitchYard pre-installed. | https://developer.jboss.org/wiki/ReleaseOverview-01 | CC-MAIN-2016-36 | refinedweb | 524 | 59.5 |
- bullseye-backports 5.14.9-2~bpo11+1
NAME¶
perf-probe - Define new dynamic tracepoints
SYNOPSIS¶¶
This command defines dynamic tracepoint events, by symbol and registers without debuginfo, or by C expressions (C line numbers, C function names, and C local variables) with debuginfo.
OPTIONS¶
-k, --vmlinux=PATH
-m, --module=MODNAME|PATH
-s, --source=PATH
-v, --verbose
Be more verbose (show parsed arguments, etc). Can not use with -q.
-q, --quiet
-a, --add=
-d, --del=
-l, --list[=[GROUP:]EVENT]
-L, --line=
-V, --vars=
--externs
--no-inlines
-F, --funcs[=FILTER]
-D, --definition=
--filter=FILTER
-f, --force
-n, --dry-run
--cache
--max-probes=NUM
--target-ns=PID: Obtain mount namespace information from the target pid. This is used when creating a uprobe for a process that resides in a different mount namespace from the perf(1) utility.
-x, --exec=PATH
--demangle
--demangle-kernel
In absence of -m/-x options, perf probe checks if the first argument after the options is an absolute path name. If its an absolute path, perf probe uses it as a target module/target user space binary to probe.
PROBE SYNTAX¶¶¶¶¶,¶¶’t start with "foo" and end with "bar", like "fizzbar". But "foobar" is filtered out.
EXAMPLES¶'
PERMISSIONS AND SYSCTL¶
Since perf probe depends on ftrace (tracefs) and kallsyms (/proc/kallsyms), you have to care about the permission and some sysctl knobs.
SEE ALSO¶
perf-trace(1), perf-record(1), perf-buildid-cache(1) | https://manpages.debian.org/testing/linux-perf-5.14/perf_5.14-probe.1.en.html | CC-MAIN-2022-21 | refinedweb | 236 | 52.9 |
Hello!
I'm using Xamarin Forms for the first time to develop an iOS and Android app.
I'd like to add a side menu. From what I understand, the way to do this is to use MasterDetailPage.
However, in my project, when I add the Master Detail Page, it comes instantly with some type errors :
The type 'pages:DrawerDetail' was not found. Verify that you are not missing an assembly reference and that all referenced assemblies have been built.
I'm adding the Master Detail Page under Xamarin Forms project, not directly on Android or iOS projects.
I'm doing this by going to Add -> New Item -> Master Detail Page
The CS Files are already in the correct namespace.
For XAML files, I'm not sure.
Hopefully this is some trivial issue, even though it doesn't seem like that to me.
The core view is already built.
I checked some tutorials online and it seems that other users don't have any kind of errors once they add the Master Detail Page.
Any help is appreciated, thanks!
Answers
Try to create a blank project with
MasterDetailPagetemplate to check if problem persists, see the following image.
Hey,
The template project compiles.
Adding the Master Detail Page item using the same steps as in the first post however, causes an error.
Could you try to reproduce it at your end? It could be something with my installation.
Right Click on "App2" -> Add -> New Item -> Master Detail Page.
P.S. In the meantime I looked up some examples and managed to create what I needed manually.
Worked fine on my side .
I suspect something wrong with your installation ,Try to reinstall the visual studio .
@Hamster I had the similar problem, My problem was that I left the name of the master page as "MasterDetailPage". I deleted it and created new one with different name and it worked.
Make sure to restart visual studio
| https://forums.xamarin.com/discussion/comment/368994/ | CC-MAIN-2019-43 | refinedweb | 322 | 75.2 |
Hi, Daniel Thank you for your reviewing. I agree your fixes. Also I agree this issue should be handled by hypervisor. But for Xen, if # of vcpus are out of range, XEN_DOMCTL_setvcpu_context return the -EINVAL. So the inactive domain cannot boot. For this circumstances, it is better to handle # of vcpus error by libvirt. c.f. Then I go to Next Bug fixes. Thanks Atsushi SAKAI Daniel Veillard <veillard redhat com> wrote: > On Wed, Aug 15, 2007 at 05:01:04PM +0900, Atsushi SAKAI wrote: > > Hi, > > > > This patch adds virsh setvcpus range check for negative value case. > > > > for example > > to the inactive domain > > virsh setvcpus -1 > > sets vcpus=4294967295 > > And cannot boot the inactive domain. > > I would rather change the test > > if (!count) { > > to > > if (count <= 0) { > > rather than use the unsigned cast to catch it. > > There is 2 things to note: > - virDomainSetVcpus actually do a check but since the argument is an > unsigned int we have a problem > if (nvcpus < 1) { > virLibDomainError(domain, VIR_ERR_INVALID_ARG, __FUNCTION__); > return (-1); > } > I would be tempted to do an (internal ?) > #define MAX_VCPUS 4096 > and change that check to > if ((nvcpus < 1) || (nvcpus > MAX_VCPUS)) { > to guard at the API against unreasonnable values. > > - There is actually a bug a few lines down in virsh, when checking for the > maximum number of CPUs for the domain: > maxcpu = virDomainGetMaxVcpus(dom); > if (!maxcpu) { > as -1 is the error values for the call. so the test there really ought to be > if (maxcpu <= 0) > one could argue that 0 should be the error value returned by > virDomainGetMaxVcpus but since it's defined as -1 in the API, the test > must be fixed. > > I have made the 2 changes to virsh but not the one to virDomainSetVcpus > where it could be argued it's the hypervisor responsability to check the > given value. Opinions ? > > Thanks for raising the problem ! 
> > Daniel > > -- > Red Hat Virtualization group > Daniel Veillard | virtualization library > veillard redhat com | libxml GNOME XML XSLT toolkit > | Rpmfind RPM search engine | https://www.redhat.com/archives/libvir-list/2007-August/msg00136.html | CC-MAIN-2015-11 | refinedweb | 328 | 62.88 |
# Just for Fun: PVS-Studio Team Came Up With Monitoring Quality of Some Open Source Projects
Static code analysis is a crucial component of all modern projects. Its proper application is even more important. We decided to set up a regular check of some open source projects to see the effect of running the analyzer frequently. We use the PVS-Studio analyzer to check the projects, and we chose SonarQube to view the results. As a result, our subscribers will learn about new interesting bugs in freshly written code. We hope you'll have fun.
After all, why is it necessary to check projects regularly? If you rarely run static analysis, for example, just before the release, you may be snowed under with a great number of warnings. Looking through them all, you can miss the most significant warnings, the ones that indicate serious errors. If you run the analysis regularly, for example, every day, there won't be so many warnings at a time, and you can easily identify the crucial problems. Another reason is the cost of an error: the sooner a problem is detected, the less costly it is to fix. For example, if you run static analysis only just before the release, most of the bugs will by then have been found and fixed by the testing department, and such fixes cost more. That is, the only right way to use static analysis is to use it regularly.
As you probably know, our team often publishes articles about checks of open source projects. Such articles are interesting to read, and they benefit the checked projects themselves: we always report suspicious code to the developers. However, such one-off checks have the same disadvantages as the scenario described above, where code is checked irregularly, only before the release. It's difficult to digest a large report, and many errors end up being fixed at other levels of quality control (for example, by tests) instead of being found and fixed immediately after they hit the code.
Therefore, we decided to try a new work format with open source projects. That is a regular, daily code review of one (for a start) project. In this case, the check will be set up in a way that we'd have to view analyzer warnings covering only changed code or newly written code. It's faster than viewing the full analyzer report, and most importantly, it will allow discovering a potential error very quickly. When we find something really exciting, we'll make short notes or even write a post on Twitter.
We hope that this format will help us promote the correct practice of regular static analysis use and will bring additional benefit to the open-source community.
We decided to choose the [Blender](https://www.blender.org/) project as the first project to analyze. You can tell us what additional projects you'd like us to analyze. Also, we'll describe the errors found in them.
### Regular analysis configuration
For our task, we consider the combination of PVS-Studio and SonarQube to be the best solution for regular analysis. Below, we'll walk through the configuration of the selected tools: how to run and configure SonarQube, how to analyze the project, and how to upload the results for display.
#### Why we chose SonarQube
PVS-Studio can do a lot: analyze code, send out notifications about warnings, and filter them. It can also integrate with different systems to display warnings. To get the check results, and also to exercise more of PVS-Studio's operating modes, we decided to configure the display of results for our task in SonarQube.
You can find more information about this application [here](https://www.sonarqube.org/). Now let's proceed to the deployment. SonarQube stores all the data in the database. You can use different databases, but the recommended one is PostgreSQL. Let's set it up first.
#### Configuring PostgreSQL
Download the latest version [here](https://www.postgresql.org/download/windows/). Install it and create a database for SonarQube. To do this, first, create a user named sonar. Run the following command in the psql command line:
```
CREATE USER sonar WITH PASSWORD '12345';
```
You can also use pgAdmin for this and other operations. Now we need to create the database named sonarqube using the CREATE DATABASE command. It looks like this in our case:
```
CREATE DATABASE sonarqube OWNER sonar;
```
The database is ready, let's start configuring SonarQube.
#### SonarQube configuration
Download and install SonarQube. You can get the latest version [here](https://www.sonarqube.org/success-download-community-edition). The distribution itself is an archive. We need to unpack the archive to the directory C:\sonarqube\sonarqube-8.5.1.38104.
Then, edit the file C:\sonarqube\sonarqube-8.5.1.38104\conf\sonar.properties. We'll add there the following info on our created database:
```
sonar.jdbc.username=sonar
sonar.jdbc.password=12345
sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube
```
SonarQube will see the database that we created and will start working with it. Next, you'll need to install the plugin for PVS-Studio. The plugin is in the directory where PVS-Studio is installed. It is C:\Program Files (x86)\PVS-Studio by default. We need a sonar-pvs-studio-plugin.jar file. Copy it to the directory with SonarQube C:\sonarqube\sonarqube-8.5.1.38104\extensions\plugins. You also need to download the sonar-cxx-plugin, click [here](https://github.com/SonarOpenCommunity/sonar-cxx/releases) to do it. At the time of writing, this is sonar-cxx-plugin-1.3.2.1853.jar. We need to copy this plugin to the C:\sonarqube\sonarqube-8.5.1.38104\extensions\plugins directory.
Now you can run SonarQube. To do this, run C:\sonarqube\sonarqube-8.5.1.38104\bin\windows-x86-64\StartSonar.bat.
Let's start setting up via the web interface. Go to the browser at sonarServer:9000. Here sonarServer is the name of the machine where SonarQube is installed.
#### Quality Profile configuration
The quality profile is a key component of SonarQube, which defines a set of rules for the codebase. The PVS-Studio plugin provides a set of rules that correspond to the analyzer warnings. We can add all of them to the quality profile or disable any rules if necessary. According to the configured quality profile, SonarQube will display or not display warnings after analyzing our code.
Now, we need to configure the Quality Profile. To do so go to the Quality Profiles tab and click Create as shown in the picture below.
In the window that appears, enter a profile name (any name will do). In our case, the name is PVS-Studio Way. Then select the language; C++ is relevant for us now. After that, click Create.
Then go to the Rules tab, select the Repository category, and select PVS-Studio C++. Next, click Bulk Change and Activate In, in the appeared window select our created profile, that is, PVS-Studio Way.
SonarQube is set up and ready to go.
### Analysis
Then, we'll configure the project analysis directly using the PVS-Studio analyzer.
Download the source code with the following command:
```
git clone https://github.com/blender/blender.git
```
generate the project files:
```
make.bat full nobuild
```
and generate the necessary additional files; to do that, build the build\_windows\_Full\_x64\_vc15\_Release\INSTALL.vcxproj project.
Run the analysis with the following command
```
"C:\Program Files (x86)\PVS-Studio\PVS-Studio_Cmd.exe" ^
  -t build_windows_Full_x64_vc15_Release\Blender.sln ^
  -o blender.plog --sonarqubedata -r
```
So, we have the files blender.plog and sonar-project.properties, and we can push the results of our analysis to SonarQube. Use the sonar-scanner utility to do this.
### Sonar scanner
You can download the utility [here](https://docs.sonarqube.org/latest/analysis/scan/sonarscanner/). Download the archive by the link, unzip it. For example, in our case, it is placed in the directory D:\sonar\sonar-scanner-4.5.0.2216-windows. Edit the D:\sonar\sonar-scanner-4.5.0.2216-windows\conf\sonar-scanner.properties file by adding the following line to it:
```
sonar.host.url=http://sonarServer:9000
```
Where sonarServer is the name of the machine where SonarQube is installed.
Run the following command:
```
D:\sonar\sonar-scanner-4.5.0.2216-windows\sonar-scanner.bat ^
  -Dsonar.projectKey=blender -Dsonar.projectName=blender ^
  -Dsonar.projectVersion=1.0 ^
  -Dsonar.pvs-studio.reportPath=blender.plog
```
Note that the command is called from the directory with the analysis results (blender.plog and sonar-project.properties).
To run the analysis on a project regularly, all of the above commands can easily be automated with a Continuous Integration server, such as Jenkins.
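As a sketch of that automation (the agent label, nightly schedule, and tool paths here are assumptions, not part of the original setup), a declarative Jenkins pipeline might tie the steps above together like this:

```groovy
pipeline {
  agent { label 'windows' }        // assumed Windows build agent
  triggers { cron('H 2 * * *') }   // run nightly
  stages {
    stage('Checkout') {
      steps { bat 'git clone https://github.com/blender/blender.git .' }
    }
    stage('Generate and build') {
      steps {
        bat 'make.bat full nobuild'
        bat 'msbuild build_windows_Full_x64_vc15_Release\\INSTALL.vcxproj'
      }
    }
    stage('Analyze') {
      steps {
        bat '"C:\\Program Files (x86)\\PVS-Studio\\PVS-Studio_Cmd.exe" -t build_windows_Full_x64_vc15_Release\\Blender.sln -o blender.plog --sonarqubedata -r'
      }
    }
    stage('Upload') {
      steps {
        bat 'sonar-scanner.bat -Dsonar.projectKey=blender -Dsonar.projectName=blender -Dsonar.projectVersion=1.0 -Dsonar.pvs-studio.reportPath=blender.plog'
      }
    }
  }
}
```

Any CI server with scheduled builds would do just as well; the point is that every step is a plain command.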
### Conclusion
Regular project analysis allows you to eliminate errors at the earliest stage, when the cost of a fix is minimal. We hope that this new format of checking open source projects, and the articles about it, will be interesting to our readers and will diversify the "usual" check articles, as well as benefit the open source community. Let me remind you once again that we accept requests for the inclusion of additional projects in our regular review. We can't guarantee that we'll add a project, but we will definitely consider all your suggestions.
Outlook Backup Assistant 4.2.16.36
Download location for Outlook Backup Assistant 4.2.16.36
Outlook Backup Assistant 4.2.16.36.
NOTE: You are now downloading Outlook Backup Assistant 4.2.16.36. This trial download is provided to you free of charge. Please purchase it to get the full version of this software.
Outlook Backup Assistant 4.2.16.36 description
Outlook Backup Assistant 4.2.16.36 Screenshot
Outlook Backup Assistant 4.2.16.36 Keywords
Outlook Backup Assistant 4.2.16.36 Outlook Backup Assistant step by step backup assistant Outlook Backup
Outlook Backup Assistant 4.2.16.36 Copyright
WareSeeker.com do not provide cracks, serial numbers etc for Outlook Backup Assistant 4.2.16.36. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited.
Related Software
ABF Outlook Backup is a handy tool to backup Microsoft Outlook. It enables user to save all important Outlook data, including email messages, contacts, mail accounts, calendar, journal, tasks and notes, rules and alerts, signatures and stationeries. Free Download
Backup Standard provides data protection for small-to-medium enterprises Free Download
import you data from Outlook Free Download
Data Backup and Storage - Finding information about data backup and storage has never been easier. With this application, you can easily and quickly find the information you are looking for. Data Backup and Storage. Free Download
extract old data from Outlook & import it automatically to T&C 6. Free Download
Mass Data Backup and Storage - This is a simple and easy-to-use application that can help you find the information about mass data backup and storage. The application will also help you find service providers. Mass Data Backup and Storage. Free Download
Disable access to Outlook Express and prevent your Outlook Expresss data from stealing or spoiling. It lets you lock Outlook Express and password-protect the message base files and the address book.
You’ve just installed Red Hat OpenShift Container Platform 3 - now what?
Red Hat OpenShift Container Platform 3 is a complex product with a lot of components. This article is going to go over steps to validate your installation was successful, that your applications are responsive, and what to look for if things aren’t working as expected.
Step 1: Authenticate
Depending on what installation method you used, you may or may not already have some form of external authentication enabled. If you don’t, you’ll want to set one up, with HTPasswd being the fastest and easiest method to install and configure.
Logging in from the command line is as simple as running "oc login" and supplying the name of the master (in a single-master environment) or the master cluster name.
# oc login
Authentication required for (openshift)
Username: jritenour
Password:
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-system
    management-infra
    openshift
    openshift-infra

Using project "default".
Authentication issues against an external provider can be challenging to troubleshoot. First, you need to verify you can actually authenticate against the provider outside the context of Red Hat OpenShift Container Platform. For example, if you’re using IdM/FreeIPA as an LDAP source, verify you can login with your credentials on a host joined to the IdM domain. If so, verify you are using the correct parameters in your master-config.yaml for all OpenShift masters - the most common issue I tend to see is trying to use secure LDAP, but without a trusted certificate.
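For reference, a hypothetical LDAPPasswordIdentityProvider block in master-config.yaml might look like the sketch below. The server URL, bind DN, and CA path are placeholders, not values from my lab; note the ca field, which is what's usually missing when secure LDAP fails with certificate errors:

```yaml
oauthConfig:
  identityProviders:
  - name: idm_ldap
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id: ['dn']
        email: ['mail']
        name: ['cn']
        preferredUsername: ['uid']
      bindDN: 'uid=svc-openshift,cn=users,cn=accounts,dc=example,dc=lab'
      bindPassword: 'changeme'
      ca: /etc/origin/master/ldap-ca.crt   # trusted CA for ldaps
      insecure: false
      url: 'ldaps://idm.example.lab:636/cn=users,cn=accounts,dc=example,dc=lab?uid'
```

Remember that this block must match on every master, and the masters must be restarted for changes to take effect.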
If everything looks correct in the config, then you need to verify your authentication request is actually reaching the external provider. Check whether there are any failed authentication attempts in the provider's logs. If the attempt isn't even getting to the provider, then you might have to roll up your sleeves and do some network troubleshooting with utilities such as tcpdump, nmap, and traceroute. That's a bit outside the scope of this article, however.
Validate nodes
Running “oc get nodes” will return information about the nodes in an OpenShift cluster.
# oc get nodes
NAME                 STATUS                     AGE
jr-ose001.home.lab   Ready,SchedulingDisabled   19m
jr-ose002.home.lab   Ready,SchedulingDisabled   2m
jr-ose003.home.lab   Ready,SchedulingDisabled   19m
jr-ose004.home.lab   Ready                      19m
jr-ose005.home.lab   Ready                      19m
jr-ose006.home.lab   Ready                      19m
jr-ose007.home.lab   Ready                      19m
jr-ose008.home.lab   Ready                      19m
All nodes should be listed as “Ready”, and masters should also have “SchedulingDisabled” in their status. If any nodes are in any state other than “Ready”, verify the host is up and responsive. Some quick checks you can perform are:
Is the host responding on the correct IP address? Does it match up with the hostname configured in DNS?
Is the atomic-openshift-node service running?
Is the firewall properly configured? Sample output:
Chain OS_FIREWALL_ALLOW (1 references)
target     prot opt source     destination
ACCEPT     tcp  --  anywhere   anywhere    state NEW tcp dpt:10250
ACCEPT     tcp  --  anywhere   anywhere    state NEW tcp dpt:http
ACCEPT     tcp  --  anywhere   anywhere    state NEW tcp dpt:https
ACCEPT     tcp  --  anywhere   anywhere    state NEW tcp dpt:10255
ACCEPT     udp  --  anywhere   anywhere    state NEW udp dpt:10255
ACCEPT     udp  --  anywhere   anywhere    state NEW udp dpt:4789
Validate default project status
The “default” project is where all the essential containerized OpenShift services run, specifically the router and registry. Verifying this namespace is in a healthy state is essential. Run ‘oc status’ while in the default project (oc project default). You should have:
A Kubernetes service
A Docker registry service with at least one pod
A router service with at least one pod.
# oc status
In project default on server

svc/docker-registry - 172.30.24.180:5000
  dc/docker-registry deploys registry.access.redhat.com/openshift3/ose-docker-registry:v3.3.0.35
    deployment #1 deployed 25 minutes ago - 1 pod

svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053

svc/router - 172.30.253.41 ports 80, 443, 1936
  dc/router deploys docker.io/openshift3/ose-haproxy-router:v3.3.0.35
    deployment #1 deployed 2 minutes ago - 2 pods

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
You can also run “oc get all” to show the current state of the deployment configs, replication configs, services, and pods in this project.
# oc get all
NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/docker-registry   1          1         1         config
dc/router            1          2         2         config

NAME                   DESIRED   CURRENT   AGE
rc/docker-registry-1   1         1         29m
rc/router-1            2         2         5m

NAME                  CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
svc/docker-registry   172.30.24.180   <none>        5000/TCP                  29m
svc/kubernetes        172.30.0.1      <none>        443/TCP,53/UDP,53/TCP     1h
svc/router            172.30.253.41   <none>        80/TCP,443/TCP,1936/TCP   5m

NAME                         READY   STATUS    RESTARTS   AGE
po/docker-registry-1-on709   1/1     Running   0          28m
po/router-1-baety            1/1     Running   0          4m
po/router-1-bmdy2            1/1     Running   0          5m
Validate registry
Now that we’ve verified the overall state of the default project, we’ll want to verify the registry is working, and allows pushes and pulls with an authenticated user. You’ll need to login to test the registry, and for that you need to know what the registry’s service IP is, and the value of your authentication token.
# oc get svc docker-registry
NAME              CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
docker-registry   172.30.24.180   <none>        5000/TCP   35m
# oc whoami -t
*redacted token string*
With that info, I can now login to my registry service.
# docker login -u jritenour -e jritenour@redhat.com -p *redacted token string* 172.30.24.180:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
Now I can try to push an image. Pull the “busybox” image down from docker.io, as it’s small and easy to test with. Tag it with your registry ip/port and put it in the “openshift” namespace, then push.
# docker images
REPOSITORY          TAG      IMAGE ID       CREATED       SIZE
docker.io/busybox   latest   e02e811dd08f   2 weeks ago   1.093 MB
# docker tag docker.io/busybox 172.30.24.180:5000/openshift/busybox
# docker push 172.30.24.180:5000/openshift/busybox
The push refers to a repository [172.30.24.180:5000/openshift/busybox]
e88b3f82283b: Pushed
latest: digest: sha256:b321c7c9c643778fbe22de13a01fdfbac0f21c6c5452d8164de9367d96235d0c size: 2099
# docker images
REPOSITORY                             TAG      IMAGE ID       CREATED       SIZE
172.30.24.180:5000/openshift/busybox   latest   e02e811dd08f   2 weeks ago   1.093 MB
docker.io/busybox                      latest   e02e811dd08f   2 weeks ago   1.093 MB
A successful push means all is well. If it fails, then you likely have a problem with whatever storage your registry is using on the backend. Verify the storage is accessible, writable, and mounted by the docker registry pods.
Create an application
To continue, we’ll next create application. First, let’s create a project/namespace to run this app in:
# oc new-project jr-test
Now using project "jr-test" on server "".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~

to build a new example application in Ruby.
And to keep it simple, we’ll deploy the Ruby application that the “new-project” command suggested.
# oc new-app centos/ruby-22-centos7~
--> Found Docker image 0449712 (3 days
I can run “oc status” to see what stage the deployment is in, or log into the Web UI to actually watch the console output of the build process.
When the build is complete, I can then run “oc get service” to obtain the service IP, and verify I can connect to it using the curl command.
# oc get service
NAME      CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
ruby-ex   172.30.138.103   <none>        8080/TCP   5m
# curl 172.30.138.103:8080 -I
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 39590
Validate router
Now I know that I can connect to the application by the service IP, but in the real world, we need to route hostnames to applications, and that's where our router comes in. In most cases, you're going to have a wildcard DNS record for your OpenShift/PaaS subdomain pointed at the infrastructure node(s) your router pod(s) live on. For example, I have "*.paas.home.lab" pointing at jr-ose005 and jr-ose006, with router pods running on both. So any name in the .paas.home.lab domain that isn't explicitly defined in DNS will get directed to these nodes. From there, the OpenShift router checks its exposed services and sends traffic to the appropriate pod/node.
Let’s expose the ruby-ex service we just created:
# oc expose svc/ruby-ex --hostname ruby.paas.home.lab
Now, we can curl against that hostname, just as we did with the IP.
# curl ruby.paas.home.lab -I
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 39590
Set-Cookie: 46835afb0f3cb981bcb9d80703f10156=8a5b6b851a46307c10c5a52a2859aad5; path=/; HttpOnly
Cache-control: private
Another “200 OK” status response - this application is healthy, and we can get traffic to it.
Run diagnostics
Finally, you can run “oadm diagnostics” on the cluster. This is more or less the “kitchen sink” as far as Red Hat OpenShift Container Platform health checks go, and will examine the running environment, along with your master/node config files. It’s verbose, so I won’t include the full output here, but here is a small sample in which I’m warning about the fact that I haven’t configured metrics or aggregated logging.
WARN: [DH0005 from diagnostic MasterConfigCheck@openshift/origin/pkg/diagnostics/host/check_master_config.go:52]
      Validation of master config file '/etc/origin/master/master-config.yaml' warned:
      assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console
      assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console
In this post, we went through several steps to validate that a Red Hat OpenShift Container Platform environment has been deployed correctly and to verify basic operation.
#include <stdio.h>
The printf() function places output on the standard output stream stdout.
The fprintf() function places output on on the named output stream stream.
The sprintf() function places output, followed by the null byte (\0), in consecutive bytes starting at s; it is the user's responsibility to ensure that enough storage is available.
The snprintf() function is identical to sprintf() with the addition of the argument n, which specifies the size of the buffer referred to by s. The buffer is always terminated with the null byte. Each conversion specification in the format results in the fetching of zero or more arguments. The results are undefined if there are insufficient arguments for the format. If the format is exhausted while arguments remain, the excess arguments are evaluated but are otherwise ignored.
Conversions can be applied to the nth argument after the format in the argument list, rather than to the next unused argument. In this case, the conversion specification is introduced by the character sequence %n$, where n is a decimal integer giving the position of the argument in the argument list (instead of by the % character alone).
In format strings containing the %n$ form of conversion specifications, numbered arguments in the argument list can be referenced from the format string as many times as required.
In format strings containing the % form of conversion specifications, each argument in the argument list is used exactly once.
All forms of the printf() functions allow for the insertion of a language-dependent radix character in the output string. The radix character is defined by the program's locale (category LC_NUMERIC). In the POSIX locale, or in a locale where the radix character is not defined, the radix character defaults to a period (.).
Each conversion specification is introduced by the % character or by the character sequence %n$, after which the following appear in sequence:
If the conversion character is s, a standard-conforming application (see standards(5)) interprets the field width as the minimum number of bytes to be printed; an application that is not standard-conforming interprets the field width as the minimum number of columns of screen display. For an application that is not standard-conforming, %10s means that if the converted value has a screen width of 7 columns, 3 spaces would be padded on the right.
If the format is %ws, then the field width should be interpreted as the minimum number of columns of screen display.
If the conversion character is s or S, a standard-conforming application (see standards(5)) interprets the precision as the maximum number of bytes to be written; an application that is not standard-conforming interprets the precision as the maximum number of columns of screen display. For an application that is not standard-conforming, %.5s would print only the portion of the string that would display in 5 screen columns. Only complete characters are written.

For %ws, the precision should be interpreted as the maximum number of columns of screen display. The precision takes the form of a period (.) followed by a decimal digit string; a null digit string is treated as zero. Padding specified by the precision overrides the padding specified by the field width.
A field width, or precision, or both, may be indicated by an asterisk (*). In this case, an argument of type int supplies the field width or precision. Arguments specifying field width, or precision, or both must appear in that order before the argument (if any) to be converted. A format can contain either numbered argument specifications (that is, %n$ and *m$), or unnumbered argument specifications (that is, % and *), but normally not both.
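A short, self-contained illustration of the %n$ and * forms described above (the helper functions are ours, for demonstration, not part of this interface):

```c
#include <stdio.h>
#include <string.h>

/* Format "day month" as "month day" by renumbering the same two
   arguments with the %n$ form. */
int reorder(char *buf, size_t n, const char *day, const char *month) {
    return snprintf(buf, n, "%2$s %1$s", day, month);
}

/* Supply the field width at run time with '*': the int argument
   right-justifies the value in a field of that many columns. */
int star_width(char *buf, size_t n, int width, int value) {
    return snprintf(buf, n, "[%*d]", width, value);
}
```

Note that each format string here uses only one of the two styles, numbered or unnumbered, in keeping with the rule above.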
The flag characters and their meanings are:
Each conversion character results in fetching zero or more arguments. The results are undefined if there are insufficient arguments for the format. If the format is exhausted while arguments remain, the excess arguments are ignored.
The conversion characters and their meanings are:
If an l (ell) qualifier is present, the wint_t argument must be a pointer to an array of type wchar_t. Wide-characters from the array are converted to characters (each as if by a call to the wcrtomb(3C) function, with the conversion state described by an mbstate_t object initialized to zero before the first wide-character is converted) up to and including a terminating null wide-character. The resulting characters are written up to (but not including) the terminating null character (byte). If no precision is specified, the array must contain a null wide-character. If a precision is specified, no more than that many characters (bytes) are written (including shift sequences, if any), and the array must contain a null wide-character if, to equal the character sequence length given by the precision, the function would need to access a wide-character one past the end of the array. In no case is a partial character written.
If a conversion specification does not match one of the above forms, the behavior is undefined.
If a floating-point value is the internal representation for infinity, the output is [+-]Infinity, where Infinity is either Infinity or Inf, depending on the desired output string length.
Printing of the sign follows the rules described above.
If a floating-point value is the internal representation for "not-a-number," the output is [+-]NaN. Printing of the sign follows the rules described above.
In no case does a non-existent or small field width cause truncation of a field; if the result of a conversion is wider than the field width, the field is simply expanded to contain the conversion result. Characters generated by printf() and fprintf() are printed as if the putc(3C) function had been called.
The st_ctime and st_mtime fields of the file will be marked for update between the call to a successful execution of printf() or fprintf() and the next successful completion of a call to fflush(3C) or fclose(3C) on the same stream or a call to exit(3C) or abort(3C).
The printf(), fprintf(), and sprintf() functions return the number of bytes transmitted (excluding the terminating null byte in the case of sprintf()).
The snprintf() function returns the number of characters formatted, that is, the number of characters that would have been written to the buffer if it were large enough. If the value of n is 0 on a call to snprintf(), an unspecified value less than 1 is returned.
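The return-value behavior described above yields a simple truncation check, sketched here (the helper function is illustrative, not part of the interface):

```c
#include <stdio.h>

/* snprintf reports the length the fully formatted string would have
   had, so a return value >= the buffer size signals truncation. */
int fits(char *dst, size_t n, const char *path, int line) {
    int needed = snprintf(dst, n, "%s:%d", path, line);
    return needed >= 0 && (size_t)needed < n;   /* 1 if it fit, 0 otherwise */
}
```

Because of the caveat above about n being 0, a portable caller should use a buffer of at least one byte when probing for the needed length.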
Each function returns a negative value if an output error was encountered.
For the conditions under which printf() and fprintf() will fail and may fail, refer to fputc(3C) or fputwc(3C).
In addition, all forms of printf() may fail if:
In addition, printf() and fprintf() may fail if:
If the application calling the printf() functions has any objects of type wint_t or wchar_t, it must also include the header <wchar.h> to have these objects defined.
The sprintf() and snprintf() functions are MT-Safe in multithreaded applications. The printf() and fprintf() functions can be used safely in multithreaded applications, as long as setlocale(3C) is not being called to change the locale.
It is common to use the following escape sequences built into the C language when entering format strings for the printf() functions, but these sequences are processed by the C compiler, not by the printf() function.
In addition, the C language supports character sequences of the form
\octal-number
\hex-number
printf(format, weekday, month, day, hour, min);
Sonntag, 3. Juli, 10:02
printf("%s, %s %i, %d:%.2d", weekday, month, day, hour, min);
printf("pi = %.5f", 4 * atan(1.0));
printf("%20s%20s%20s", lastname, firstname, middlename);
See attributes(5) for descriptions of the following attributes:
exit(2), lseek(2), write(2), abort(3C), ecvt(3C), exit(3C), fclose(3C), fflush(3C), fputwc(3C), putc(3C), scanf(3C), setlocale(3C), stdio(3C), wcstombs(3C), wctomb(3C), attributes(5), environ(5), standards(5)
Using the Channel API on App Engine for instant traffic analysis
Posted by Nick Johnson | Filed under python, channels, app-engine, javascript, prospective-search
In last week's post, we introduced Clio, a system for getting live insight into your site's traffic patterns, and we described how the Prospective Search API lets us filter the site's traffic to get just the records we care about.
This week, we'll cover the other part of the system: delivering results in real-time. For this, we'll be using the Channel API to stream new log entries to admin users in real-time. As with last week's post, where there's differences between our demo implementation and what you'd use in a real-world system, I'll point those out.
The admin interface
First up, we need to provide a simple admin interface to which we'll stream results. Here's the handler for that:
class IndexHandler(webapp.RequestHandler):
  """Serve up the Clio admin interface."""

  def get(self):
    client_id = os.urandom(16).encode('hex')
    channel_key = channel.create_channel(client_id)
    template_path = os.path.join(os.path.dirname(__file__),
                                 'templates', 'index.html')
    self.response.out.write(template.render(template_path, {
        'config': config,
        'client_id': client_id,
        'channel_key': channel_key,
    }))
The only thing of significance we do here relates to the Channel API. First, we generate a random client ID by getting some random data and hex-encoding it. We pass that to the Channel API's channel.create_channel function to create a new channel, and are given back the channel_key, a unique value that lets our client connect to the channel. Then, we render a standard template, passing in those values (along with some site-wide config information).
If this were a more complete project, we'd likely not take this approach, and instead have the page make an AJAX call back to the server to request a channel key. That way, when the channel expires after 2 hours, it can request a new one and keep on seamlessly serving results, rather than timing out and having to be reloaded (and requiring the user to re-add all his subscriptions).
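As a sketch of that more seamless approach (the function names and the token-endpoint wiring are hypothetical, not part of Clio as written), the page could re-request a token and reconnect whenever the channel closes:

```javascript
// Keep a channel alive past the two-hour token lifetime. fetchToken(cb)
// would be an AJAX call to a server endpoint that calls
// channel.create_channel(); makeChannel(token) would wrap
// `new goog.appengine.Channel(token)`. Both are passed in as
// parameters so the reconnect logic stays testable on its own.
function openChannelWithRefresh(fetchToken, makeChannel, onmessage) {
  fetchToken(function (token) {
    var socket = makeChannel(token).open();
    socket.onmessage = onmessage;
    socket.onclose = function () {
      // Token expired or the connection dropped: start over with a
      // fresh token instead of requiring a page reload.
      openChannelWithRefresh(fetchToken, makeChannel, onmessage);
    };
  });
}
```

The user would also need their subscriptions re-registered against the new client ID after each reconnect, which is why tracking them in the subscriptions array pays off.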
The admin interface, while straightforward, has a reasonable amount of Javascript code. Let's have a look at the basic page layout first, then we'll examine the Javascript. Here's the bulk of the page:
{% extends "base.html" %}
{% block title %}Clio{% endblock %}
{% block head %}
<script type="text/javascript" src="/_ah/channel/jsapi"></script>
<script type="text/javascript" src=""></script>
<style type="text/css">
  tr { border-bottom: 1px solid black; }
</style>
{% endblock %}
{% block body %}
<h1>Clio Console</h1>
<table>
  <thead>
    <tr><th>Method</th><th>Path</th><th>Status Code</th><th>Wall Time</th><th>CPU Time</th></tr>
  </thead>
  <tbody id="results">
  </tbody>
</table>
<div id="querybox">
  Enter query: <input type="text" id="query" />
  <input type="button" id="querybutton" value="Submit" />
</div>
{% endblock %}
We include two Javascript snippets here. The first is the Channel API, found in /_ah/channel/jsapi. The second is JQuery, which will allow us to write much neater, cleaner javascript than would be the case if we didn't have it available. The rest of the page is a pretty bare skeleton: We define a header, an empty table that will be filled with log entries, and a very simple form for sending new queries. This form submits to the SubscribeHandler, which we covered in last week's post.
Let's take a look at the javascript that makes the admin interface do its thing. First up, we define a few variables we'll need:
<script type="text/javascript">
  client_id = '{{client_id}}';
  channel_key = '{{channel_key}}';
  subscriptions = [];
  columns = ['method', 'path', 'status_code', 'wall_time', 'cpu_time'];
client_id and channel_key are both replaced by the template engine with the actual client id and channel key we supplied. subscriptions will be an array of subscription IDs; we don't actually need to do anything with this, but a more sophisticated implementation would track these so users can remove existing subscriptions, as well as categorize incoming results by the subscriptions they matched. Finally, we define a list of columns present in our table so we can easily generate markup for it.
function add_message(message) { var row = $('<tr />'); $('<td />', { 'colspan': columns.length, 'text': message, }).appendTo(row); row.appendTo('#results'); } $(document).ready(function () { channel = new goog.appengine.Channel(channel_key); socket = channel.open(); socket.onopen = function() { add_message('Channel established.'); }; socket.onmessage = function(message) { var data = jQuery.parseJSON(message.data) var row = $('<tr />'); for(var i = 0; i < columns.length; i++) { $('<td />', { 'text': data.data[columns[i]], }).appendTo(row); } row.appendTo('#results'); }; socket.onerror = function(error) { add_message('Channel error: ' + error.description); }; socket.onclose = function() { add_message('Channel closed.'); };
Here's where we do most of the work, but it should still be fairly easy to understand. We define a utility function, add_message, which allows us to add informative messages to the table we defined. We call it from the socket.onopen, socket.onclose and socket.onerror events to keep the user informed of these conditions. The socket.onmessage event handles incoming communications from the Channel API, and converts each message to a new table row using JQuery's excellent DOM generation support, adding it to the results table. We already covered in last week's post how matched results are handled with the MatchHandler, which then calls channel.send_message - messages sent there are received directly by the onmessage handler.
Finally, here's the code that handles clicks on the 'new query' button:
$("#querybutton").click(function(event) { var subdata = { 'query': $("#query").val(), 'client_id': client_id, }; $.post('{{config.BASE_URL}}/subscribe', subdata, function(data) { subscriptions.push(data); add_message('Subscription added with ID ' + data); }); }); }); </script>
All we do here is construct a new subscription request, consisting of the query and our client ID, and send it to the SubscriptionHandler. When we get a response, we log this as an informative message to the results table.
That, surprisingly, is all we have to do to provide the admin interface of our traffic inspector. All the major parts are in place now: The middleware that intercepts requests and logs them to the Prospective Search API, the subscription handler to put new subscriptions in place to match results, the admin interface which establishes a channel, and the match handler that sends matched results to the client.
There's one final piece of cleanup we should do, however: Whenever a client disconnects from the Channel API, we should delete any subscriptions it has, so they don't sit around cluttering up the datastore. Although the matcher subscription will eventually expire, we should delete that too, to save on resources. Here's the code that does that, given a client ID:
def handle_disconnection(client_id): """Handles a channel disconnection for a Clio channel.""" # Find all their subscriptions and delete them. q = model.Subscription.all().filter('client_id =', client_id) subscriptions = q.fetch(1000) for sub in subscriptions: prospective_search.unsubscribe(model.RequestRecord, str(sub.key())) db.delete(subscriptions)
Note we unsubscribe individually from each subscription, since there's no 'bulk unsubscribe' option for the Prospective Search API, but we delete all the subscriptions from the datastore in a single batch, to cut down on RPCs.
This function needs to be called from somewhere, of course, and the answer to that is the new connection and disconnection notification support in the Channel API. First, we define a new incoming service in app.yaml:
inbound_services: - channel_presence
Once we've done that, we'll get channel connection and disconnection notifications on /_ah/channel/connected/ and /_ah/channel/disconnected/ respectively. Since these are app-wide, and an application might use the channel API for more than just Clio, we've provided the above function for another handler to call. We'll also define our own implementations as part of Clio that can be used directly if you're not using the channel API, for convenience:
class ChannelConnectHandler(webapp.RequestHandler): def post(self): pass class ChannelDisconnectHandler(webapp.RequestHandler): def post(self): handle_disconnection(self.request.get('from'))
Using these requires us to add another mapping to app.yaml, since Clio currently only handles requests to /_clio/.*:
- url: /_ah/channel/.* script: clio/handler.py
A more sophisticated implementation might take advantage of the fact that we're already intercepting requests to most handlers via middleware to avoid the need to manually hook this component up.
And with that, we're done - we have a complete, if rather basic, system for monitoring site traffic in realtime. The source, as before, can be found here, though it would require some expansion in order to be useful in a real production environment, starting with a better and more flexible UI. Anyone keen? ;)Previous Post Next Post | http://blog.notdot.net/2011/07/Using-the-Channel-API-on-App-Engine-for-instant-traffic-analysis | CC-MAIN-2017-13 | refinedweb | 1,440 | 54.83 |
Opened 10 months ago
Closed 10 months ago
Last modified 10 months ago
#11814 closed defect (wontfix)
XML RPC with Bloodhound
Description (last modified by rjollos)
Hello everybody,
I started this issue on the SCM Manager bug report list - but it now seems to be a problem of the XML-RPC plugin ()
I tried to link the SCM-Manager with bloodhound using XML-RPC (this was initially designed for the communication with trac). After I failed to set it up I tried the Python examples from the plugin homepage (I set the privileges for XML_RPC to anonymous) Python 2.7.6 Bloodhound 0.7 (with trac 1.0.1)
import xmlrpclib server = xmlrpclib.ServerProxy("") print server.ticket.query() # returns [1,2] because I added to tickets manually -> working server.wiki.putAttachment('WikiStart/trac.bak', xmlrpclib.Binary(open('trac.bak').read())) # will add an attachment to the WikiStart page -> working server.ticket.get(1) # <Fault 404: 'Ticket 1 does not exist.'> -> failed a = server.ticket.create("foo","bar") server.ticket.get(a) # -> working BUT breaks the Bloodhound tickets page!!! # My Tickets # Info Items list is empty # Widget error # Error Exception raised while rendering widget. Contact your administrator for further details # Widget name # TicketQuery # Exception type # TracError # Log entry ID # 492056e7-8a5c-46da-ac12-9d6e6b607b69 # until I delete that ticket again server.ticket.get(a) # which returns 0
I tried those things with the dedicated bloodhound xml rpc plugin and with the trac XMLRPC plugin - both times the same effect.
Perhaps someone could help me fix that, Markus
Attachments (0)
Change History (3)
comment:1 in reply to: ↑ description Changed 10 months ago by olemis
comment:2 Changed 10 months ago by olemis
- Resolution set to wontfix
- Status changed from new to closed
Replying to markus.fuger@…:
I cannot reply there ... sorry .
btw , Bloodhound RPC plugin is tested against BH trunk (i.e. currently 0.8-dev) so it's possible to find some incompatibilities when using it with BH=0.7 .
[...]
The reason for this to happen is that you are using the wrong RPC URL . See these messages . There are other interesting details mentioned in other messages in that thread .
[...]
please try again using product RPC URL , upgrading to BH=0.8-dev is recommended
unless something new is noticed there's nothing to fix , afaict . | http://trac-hacks.org/ticket/11814 | CC-MAIN-2015-18 | refinedweb | 386 | 56.76 |
On 5 May 2003, Bruno Dumon wrote:
> On Mon, 2003-05-05 at 14:52, David Crossley wrote:
> > Bruno Dumon wrote:
> > > Vadim Gritsenko wrote:
> > > > Steven Noels wrote:
> > > > >Bruno Dumon wrote:
> > > > >
> > > > >> I think Xalan may be an exception to this rule, since it's not
as if we
> > > > >> depend on specific features or API's of this newer version. We
can
> > > > >> always go back to 2.4.1 if we would make a final Cocoon 2.1 release
> > > > >> before Xalan 2.6 is released.
> > > > >
> > > > > Then again, what was the original reason for adding 2.5 instead of
> > > > > 2.4.1?
> > >
> > > The question should be: why not?
> > >
> > > > > We might as well revert now and hopefully 2.6 will be out soon
> > > > > enough.
> > > >
> > > > I'm more in favor of reverting to older version than adding CVS version.
> > >
> > > Hmmm, that's then two against one.
> > >
> > > If we revert to 2.4.1 now, we'll never spot any other bugs in 2.5, and
> > > so 2.6 may also not be usable for us.
> > >
> > > But I'm happy to follow the majority, if nobody else speaks up, I'll
> > > revert to 2.4.1 by, say, monday evening.
> >
> > I will level the score then - i am in favour of using newer versions
> > of important jars.
> >
>
> Where do we go from here? Wait some more? We should make a decission
> before the M2 release.
>
> To summarize the situation:
> Xalan 2.5 contains a blocking bug (namespaces in endElement events can
> be incorrect when mixing namespaced and non-namespaced elements). This
> bug is fixed in Xalan CVS. So now we can either go back to 2.4.1 while
> waiting for a 2.5.1, or we can work with a CVS snapshot.
So for me there is only one option.
So +1 for using a CVS snapshot, and nagging the Xalan people for
a new release ;-)
Stephan. | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200305.mbox/%3CPine.LNX.4.44.0305051555230.9934-100000@vern.chem.tu-berlin.de%3E | CC-MAIN-2018-22 | refinedweb | 312 | 87.31 |
As discussed in the maintenance book, typing your code can be valuable in many ways. In part, it's about communication. Having the type information available makes it easier to develop tools that make it easier to manipulate code (think refactoring, intelligent parameters).
To understand the topic more in-depth, this time around I'm interviewing Charles Pick, the author of Flow Runtime.
codemix, I live in the countryside with my wife and kids near York, UK.I'm Charles, and I run a JavaScript consultancy called
My first exposure to programming was with BASIC on the BBC Micro at school when I was seven, ever since then I've been hooked. I worked as a nightclub DJ before becoming a full-time web developer about twelve years ago.
Since 2013 I've been entirely focused on JavaScript, and I love it. I'm interested in how to make JavaScript faster, safer, less error-prone and more comfortable to refactor.
It's a type system for JavaScript that works while the application is running, not at compile time like TypeScript or Flow do. The core idea is that types become first-class values that you can reference and pass around like any other.
Flow Runtime can represent the type of any possible JavaScript value; numbers, objects, classes, functions, etc. and verifies that the input your program receives in reality matches what you were expecting when you wrote it.
The goal is to be 100% compatible with Flow - Flow catches errors at compile time, Flow Runtime catches errors when your code interacts with untyped code or user input.
There are two main packages:
flow-runtime
flow-runtime represents types and does the actual verification. It provides a simple, composable API for defining types and matching values against them:
import t from "flow-runtime"; const stringOrNumber = t.union(t.string(), t.number()); stringOrNumber.assert(123); stringOrNumber.assert("this is fine"); stringOrNumber.assert(false); // throws an error
You can use this standalone and as well as type checking it enables some pretty cool stuff, like pattern matching.
babel-plugin-flow-runtime
babel-plugin-flow-runtime takes code written with Flow annotations and turns those annotations into flow-runtime API calls.
So when you write code like this:
type Thing = { id: number; name: string; }; const widget: Thing = { id: 123, name: "Widget" };
the plugin produces this:
import t from "flow-runtime"; const Thing = t.type("Thing", t.object( t.property("id", t.number()), t.property("name", t.string()) )); const widget = Thing.assert({ id: 123, name: "Widget" });
You can try this out in the online demo.
The vast majority of JS validation libraries have a focus on validating user input of one kind or another, whereas Flow Runtime is all about program correctness. To do this, we have to be able to represent the type of any possible JavaScript value, e.g., the shape of a class, or whether a generator function yields the right type of object.
Most popular validation libraries don't handle these kinds of scenarios, the closest alternative is tcomb by Giulio Canti, it's a vast library but pre-dates Flow and therefore can't handle some complicated cases.
We were modernizing a pretty large, sprawling JavaScript codebase for one of our customers back in 2014 when Facebook launched Flow, and after a bit of experimentation we were sold entirely - it's an excellent technology. However, at the time it was still pretty rough around the edges and didn't support a lot of the newer ES6 features we were using.
We also found introducing a type system to an existing project pretty challenging. You have to make a lot of assumptions about the untyped code, and you don't start seeing the benefit until the overwhelming majority of the codebase is converted.
The core problem is that your nice, newly typed codebase touches untyped code so often that static analysis is defeated - it's entirely possible to write fully annotated code that Flow happily accepts and is completely wrong because the real-world input does not match your expectations. So if we can't find these problems at compile time, the only way to find them is at runtime.
Out of this idea came my first effort - babel-plugin-typecheck which compiles Flow type annotations into type checks. It generates all the code inline which makes it very hard to develop for and maintain. As Flow matured and continued getting better, it became clear that we needed a different approach if we were ever going to be compatible, and so flow-runtime was born.
I'd like to produce a webpack plugin to make it easier to work with external type definitions. Right now you have to use a separate package called
flow-runtime-cli which generates a file that you can later import, and it's all a bit messy. I also want to simplify some of the internals to make it easier for people to contribute.
In general, I think we're going to see TypeScript and Flow become more and more popular, the benefits of optional static typing are pretty clear at this point. I'd like to see the ecosystem around Flow mature, I think it's the technically superior option but TypeScript offers a lot better tooling at the moment.
Eventually, I think we'll see Flow's type information start being incorporated into other projects, which will enable a lot of cool things. If that information were available directly to Babel, webpack or uglify, etc. it would be possible to generate much faster safely, smaller production builds.
Now that Babel supports TypeScript it is possible to support TypeScript in flow-runtime. I'm pretty excited to try that out.
Take every prescriptive blog post or article you read with a pinch of salt and be particularly suspicious of anyone who tells you to always/never do X, Y or Z.
Stick with well-established tools at first and don't worry about keeping up with the cutting edge - excellent documentation and support matter most.
Seek out and work closely with people smarter and more experienced than you, but remember that those intelligent people are still going to be wrong a lot of the time.
Comment your code, for your future benefit and because you'll spot a bunch of lurking bugs in the process.
I think Benjamin Gruenbaum is an unsung hero in the Open Source JavaScript community. Benjamin contributes to so many projects and discussions that it's hard to keep up, he's one of those people that is always there, helping people on Stack Overflow, supporting other developers in GitHub issues, being pragmatic and helping keep discussions productive.
Thanks for the interview Charles! I think your work complements Flow well and will allow people using it already to get more out of the approach.
Check out flow-runtime site to learn more. See the project in GitHub as well. | https://survivejs.com/blog/flow-runtime-interview/ | CC-MAIN-2018-30 | refinedweb | 1,158 | 59.94 |
1.49 - fix to Z-axis color filling in 3D pie charts (Debian Bug #489184) - bump ExtUtils::MakeMaker dependency - tiny improvement in the code of the samples 1.48 02 Aug 2013 - no code changes, just release enginering cleanup - adjust MANIFEST.SKIP file so MANIFEST can be generated once again - ship sample58.pl file, so `make samples` stop failing - mention the current and past maintainers in META files as authors - use newer CPAN::Meta and ExtUtils::MakeMaker, older versions generated META files without runtime prerequisites 1.47 28 Jun 2013 - experimental hide_overlapping_values option for bar graphs 1.46 26 Jun 2013 - This release is based on old work by Martien that was sitting in his repo - x_last_label_skip option - new samples and tweaks to old 1.45 21 Jun 2013 - read DISTRIBUTION STATUS in perldoc GD::Graph - no code changes since 1.44 25 Apr 2007 - Patched bugs 21610, 20792, 20802, 23755 and 22932 - Updated POD to clarify current maintenance status, and encourage bug reporting via RT (and to point out some external help resources) - Release 1.44 17 May 2006 - Patched bugs 7307, 2944 (extended the fix to mixed graphs), 4177, and 16863. - Fixed a fencepost error in pie.pm that caused an occasional segfault (reported by John Westlund and Hank Leininger) - Fixed a bug that broke bar charts with no visible 0-axis (reported by Steve Foster) 19 Feb 2006 - Patched bugs 2218, 4632 (which had two duplicates), 7881, and 15374 - Failed to update CHANGES file before releasing 1.4307 to CPAN (oops) 4 Feb 2006 - Patched bugs 16880 and 16791 19 Dec 2005 - Resolved the following bug reports/feature requests from RT: 1363, 2944, 3346, 3850, 4104, 4380, 4384, 4469, 5275, 5282, 6751, 6786, 7287, 7819 - Straightened out $my_graph/$graph confusion in docs - Added five new sample scripts - Individual module versions now reflect a branch from original CVS versioning, in case things need to be merged at some later date. 
- Uploading to CPAN as 1.4305 under ~BWARFIELD because I can't seem to get in touch with Martien Verbruggen. 1 July 2003 - Fixed yet another division by 0 problem, for two_axes. - Added more tests to axestype.t - Cleaned up other test files. - Release 1.43 19 June 2003 - Fixed another division by 0 problem, introduced in 1.41 - Added test t/axestype.pm, which now tests for division by 0 error. - Released 1.42 17 June 2003 - Removed file BUGS from distribution. Too much work to keep up to date. Use rt.cpan.org, or email. - Fixed skip() calls in tests to work with ancient versions of Test.pm. - Made GD::Graph::Data::read() work with file handles under Perl 5.005. - Released 1.41 16 June 2003 - Fixed when zero axis inclusion is done for bar and are charts. - Fixed code to reserve area for hbar charts last y axis label 11 June 2003 - Added no_axes option, changed sample56 to reflect this 30 May 2003 - Added version number for GD::Text PREREQ_PM - Allow GD::Graph::Data::read() to read from file handle, instead of file - Added tests for data file reading, and test data - Release version 1.40 24 Feb 2003 - How come I never noticed this before? Right axis was disappearing when r_margin was zero. 22 Feb 2003 - Added patch by Ben Tilly from RT ticket 203 (manually, and much too late) to fix problems with picking decent values for axes when two_axes set to true. Added sample 57 20 Feb 2003 - Removed cetus font, because of unknown copyright status. - Release 1.39 (skip 1.38, internal release) 11 Feb 2003, continued after 1.37 release - Fixed version numbering - Added limited, preliminary get_feature_coordinates support. 11 Feb 2003 - Fixed =head1 in GD/Graph/FAQ.pod - release 1.37 10 Feb 2003 - Added some tests, mainly to make the CPAN testers happy :) The best way to test is still to use the samples. - Preparation for 1.36 Jan 2003 - Made detection of output formats more robust. Newer GDs break on simply testing with UNIVERSAL::can. 
- Added some documentation on error handling. 12 Jun 2002 - 21 Jun 2002 - Fixed various bugs: - Area charts don't allow undefined variables, die on hotspot code - allow "0.00" to be equal to 0 when determining min and max values for axes - fixed shadows for cumulative bars - Preparation for release 1.35 9 Jun 2002 - I just realised this file is severely out of date. I'll only keep track of the really big changes here, since I really can't remember what I've fixed and changed sine 1.33, and it's really too much work to go through all CVS comments. - Added hbars.pm, and put most of the framework in place to allow the other charts to be plotted rotated as well. - preparation of release 1.34 7 Oct 2000 - Addition of undefined data set colour. - Addition of bar_width option - Preparation of release 1.33 May - Sep 2000 - Various small bug fixes 7 May 2000 - Finalised code for value display - prepared for release of 1.32 6 May 2000 - Added FAQ to distribution - Fixed bug with calculation of bottom of plotting area (causing axis tick values to be cut off, when no label present) - Some 5.004 code retrogression 30 Apr 2000 - Fixed problems with overzealousness of correct_width attribute for mixed charts (report Jimmy Sieben) - Fixed problem with zero_axis and zero_axis_only drawing when y values are all on one side of 0 (report Matthew Darwin) - Fixed GD::Graph::Data::get_min_max_y and ppoints charts with numerical X axes (thanks Peter) - Fixed problem when data sets contain only zeroes (report Joe Pepin) - Added experimental support for hotspots 15 Apr 2000 - Added some code (thanks Gary) to deal better with 'numerical' X axes. - Prepared version 1.30 for release, mainly to make sure that patches that come in are done against the new code base 27 Feb 2000 - Added cumulate attribute, needs more code changes to fully replace 'overwrite == 2'. For now, it will work. 20 Feb 2000 - Added correct_width option - Fixed bug in pie. 
If one of the slices ended up at exactly 90 degrees, or very close to it, The fill would cause GD to core dump. Ugly. Introduced half a degree of relaxation there. There still are troubles with this. I really need to think of another way to do this. 15 - 17 Feb 2000 - Added Error class, removed error checking code from Data - Added $VERSION to each module - Fixed bug in can_do_ttf. Can now _really_ be called as static or class method. 14 Feb 2000 - Fixed bug in bar width. Removed some roundings, and always add 1 to left boundary. Also, adapt r_margin on the fly to make sure bars are always exactly the same width. This changes the look of graphs slightly, but it looks so much better in general. - Fixed bug in drawing of vertical x labels with x_tick_number enabled - Cleaned up code in setup_coords in axestype.pm a bit 12 Feb 2000 - Removed ReadFile sub from GD::Graph.pm. I never got around to making it part of the interface anyway. replaced by GD::Graph::Data. - Rewrote lots, and lots of code. General cleanup. Removal of direct work with data array. Now only works with interface of GD::Graph::Data object (hopefully) 09 Feb 2000 - Added GD::Graph::Data to distribution 27 Jan 2000 - Added patch from Hildebrand, which allows non-drawing of accents on bars with accent_treshold option - Fixed rounding problems with overwrite == 2, alert from Hildebrand, also a patch supplied which couldn't be used because of other code changes related to sample17 - Added sample18, supplied by Hildebrand as illustration of rounding problem. - Fixed off-by one error in pick_data_color whenever color pick is based on point instead of data set (cycle_clrs and pie) - Added accent_treshold option to area charts 09 Jan 2000 - Rewrote some documentation to reflect new GD::Text behaviour. 08 Jan 2000 - Rewrote set method to only accept documented attributes. - Made sure all documented attributes are part of the %Defaults hashes. 
- Changed inheritance slightly, because changing the set method and using access to file scoped variables didn't work very well with multiple inheritance in mixed and linespoints. All file scoped material now is kept in the common ancestor of these, i.e. axestype. 06 Jan 2000 - Added shadow patch from Jeremy Wadsack, changed a few things: Allow negative shadows, move his code out a bit more. - Added color cycling patch from Jeremy. Adapted it to new code, and added code to also allow cycling of colour of bar borders. - Documentation. 05 Jan 2000 - removed all references to perl version from any code. Leaving it up to users to find out for themselves. They'll just need to read the documentation. - Applied patch from Jeremy Wadsack fixing text sticking out of pie slices causing fills to fail. - cleanup of code in pie.pm: draw_data, filling front of pie, and _get_pie_front_coords. 03 Jan 2000 - Fixed two_axes and negative values - released 1.22 Dec 1999, mgjv - Development and support back to me, mgjv. GIFgraph and Chart::PNGgraph (hopefully) will both become wrappers around this new set of modules. - Renamed to GD::Graph - Removed direct font handling stuff, and moved font handling to GD::Text::Align objects - Removed methods for writing to files directly. it is now up to the user of the module to save the image to a file. GIFgraph and Chart::PNGgraph will still do this - plot method now returns a reference to a gd object, no longer the image data. GIFgraph and Chart::PNGgraph still exhibit old beahviour. - Added some new options, and split up some others. - Added new methods, mainly to detect the useability level of the current GD. - Changed any die and warn statements to croak and carp Thu October 21 1999 by sbonds@agora.rdrop.com - Chart::PNGgraph version 1.13 - Added primitive support for TrueType fonts - Fixed bug where 3d charts with very large slices would not have both parts of the front filled properly. 
Thu October 7 1999 by sbonds@agora.rdrop.com - Chart::PNGgraph version 1.12 - Changed namespace to Chart:PNGgraph so that CPAN will index it properly - Added "base" tests-- still very incomplete - Changed sample 5-2 to have multiple lines since this is what the html file described it as having. - Checks the read-only attributes 'pngx' and 'pngy' so these will not get accidently set. $g->set() returns undef if they are present, but the rest of the attributes will be set. Tue August 31 1999 by sbonds@agora.rdrop.com - Please contact me rather than the original author for problems unless you are convinced that the problems are not a result of the port to PNG - Converted GIFgraph-1.10 to PNGgraph-1.10 so it works properly with GD-1.20 - UNISYS has been increasingly defensive of the LZW patent used in GIF compression, which necessitates these kinds of fixes. Tue August 25 1998 - Fixed bug with undef values for first value in lines graphs - Changed one or two samples, and samples Makefile dependencies Tue August 25 1998 - Added bar_spacing option. - Fixed a slight drawing 'bug' while doing that. - Changed a few of the samples to use bar_spacing - Implemented numerical X axis, based on a changed axestype.pm from Scott Prahl <prahl@ece.ogi.edu>. many thanks. - Added sample54 to illustrate Tue August 18 1998 - Added rudimentary mixed type graph Mon August 17 1998 - Added control over axis label placement through x_label_position and y_label_position. - Added possibility to call a coderef to format the y labels. See y_number_format. (Idea: brian d foy <comdog@computerdog.com>) - Fixed some bugs (see file BUGS, version 1.04) Fri August 14 1998 - Uploaded version 1.03 to CPAN - Finally able to make some fixes - Changed defaults for zero_axis and zero_axis_only to 0. Were both 1. 
Needed to do this, because of all the confusion they cause - Test for defined $s->{y_(min|max)_value} (Honza Pazdziora <adelton@informatics.muni.cz>) (Vegard Vesterheim <vegardv@runit.sintef.no>) - Fixed handling of negative values (I hope) (brian d foy <comdog@computerdog.com>) - From now on, require 5.004. 5.003 is dead, and should be deprecated now that 5.005 is out. - Added 5.005 specific MakeMaker fields to Makefile.PL Tue May 12 1998 - Cleaned up a bit, finalised version 1.02, because of time contraints, and the need to get these bug fixes out. Didn't succeed. Mon Jan 19 1998 - Fixed some bugs (see file BUGS, version 1.02) - Added option x_all_ticks, to force all x ticks to be printed, even if x_label_skip is set. - Added option x_labels_vertical, to print x labels vertical (Thanks to DBelcher <dbelcher@cyberhino.com> for a patch) Fri Jan 9 1998 - Fixed some bugs (see file BUGS, version 1.01) - Added formatting for y labels (option y_label_format) Tue Dec 23 1997 - Changed PERL=perl5 to PERL=perl in samples/Makefile (D'OH!) - Added read_rgb to GIFgraph::colour to allow definition of own colours - Added t/colour.t - Removed a lot of unnecessary quotes all over the place Mon Dec 22 1997 - Center graph titles on axes, not on png - Added line types - Moved initialise() to each module with $self->SUPER inheritance - Added check for duplicate GD::Image object initialisation - Added binmode() to t/ff.pl to make tests compatible with Win32 platforms (D'OH). 
Thu Dec 18 1997 - Allow undef values in data sets, and skip them - Added prototyping of functions - Legends, finally requests from Petter Reinholdtsen <pere@link.no> Giorgos Andreadakis <gand@forthnet.gr> Tue Dec 16 1997 - Added documentation for dclrs and markers options - removed line_width and ReadFile from the documentation (unimplemented) - Started on port to win32 - Changed Area to use Polygon and to work with negative numbers Mon Dec 15 1997 - Added support for negative numbers in lines, points and linespoints graphs - Added new options: zero_axis, and zero_axis_only (code and documentation) - Added new options: y_min_value, y1_min_value, y2_min_value Fri Dec 12 1997 - Changed 0 angle for pies to be at front/bottom, to make calculations a bit easier - Added test scripts for 'make test' Before Fri Dec 05 1997 - Lots of minor tuning - Added x_ticks option requests from Will Mitayai Keeso Rowe - thelab@nmarcom.com mparvataneni@quadravision.com (Murali Parvataneni) - Added binmode() here and there to accommodate for platforms that need it. | https://metacpan.org/changes/distribution/GDGraph | CC-MAIN-2015-22 | refinedweb | 2,378 | 64.1 |
CodePlex: Project Hosting for Open Source Software
Hi,
I was wondering whether it is possible to configure the summary length of the recent posts widget. I noticed that the default blog posts list only displays a certain number of characters; is it possible to control this summary limit? Also, is it possible to include images in this summary?
Many thanks,
Ian
Seems like a good idea. Could be an interesting patch ;)
Indeed. But in the short term I just need to extend the character limit, not done any orchard development yet. I'm assuming the summary will get set in the core rather than the view template. Is that right?
I have the same requirement. I'll cook up something.
OK, it's actually fairly simple. Add a file named Parts.Common.Body.Summary.cshtml into the Views directory of your theme, with the following contents:
    @{
        Orchard.ContentManagement.ContentItem contentItem = Model.ContentPart.ContentItem;
        string bodyHtml = Model.Html.ToString();
        var body = new HtmlString(
            Html.Excerpt(bodyHtml, 400).ToString()
                .Replace(Environment.NewLine, "</p>" + Environment.NewLine + "<p>"));
    }
    <p>@body</p>
    <p>@Html.ItemDisplayLink(T("Read more...").ToString(), contentItem)</p>
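A note on the numbers in that snippet: the 400 passed to Html.Excerpt is the character limit the original question was about, so extending the summary is just a matter of changing that argument. For example, here is a variant of the same template with a longer excerpt (800 is an arbitrary example value; everything else is unchanged from the snippet above):

```razor
@{
    Orchard.ContentManagement.ContentItem contentItem = Model.ContentPart.ContentItem;
    string bodyHtml = Model.Html.ToString();
    // Only the excerpt length differs from the snippet above:
    // 800 characters instead of 400.
    var body = new HtmlString(
        Html.Excerpt(bodyHtml, 800).ToString()
            .Replace(Environment.NewLine, "</p>" + Environment.NewLine + "<p>"));
}
<p>@body</p>
<p>@Html.ItemDisplayLink(T("Read more...").ToString(), contentItem)</p>
```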
Thanks Bertrand. It's starting to make sense, but what I don't quite understand is how you worked out that the file must be called Parts.Common.Body.Summary. I can't find any documentation about the common parts.
Also, if I override this part to match my recent blog posts design, haven't I then also overridden the body summaries of other content types? Is there a way to target just a specific module, in this case the blog?
I may have totally missed where this is documented.
Cheers,
You are absolutely right that documentation is somewhat lacking in that area, as is good tooling to make this more discoverable. I'll explain how I figured it out, and some of it will be a little technical but I hope I can keep it usable.
I know that the body of a content item is handled by the body part that is in the Core.Common module. I open the views directory of that volume and hunt for template files in there. So you determine what part is responsible for what you want to customize
and try to find the right template in the views folder of that module.
Once you've done that, you can also look for alternates. You do that by looking for a class implementing IShapeTableProvider and see if anything is handling the shape you're looking for and adding alternates. You can see examples of that by searching the
code for AddAlternates. If you don't find an alternate that already exists that would enable you to specialize the template the way you want. That's where the technique I exposed in post enters into play and enables you to add your own alternates. Between
all those techniques (plus placement.info) you should be able to customize pretty much anything in any way you can think of.
Does this help?
Ok. Say I want to customize the metadata information displayed on a recent blog post item.
I know that the recent blog post item is displayed by the Content.Summary.cshtml which in turn calls
<div class="metadata">
@Display(Model.Meta)
</div>
I can see that in Shapes.cs there is an Alternate for blog posts.
if (contentItem != null) {
//Content-BlogPost
displaying.ShapeMetadata.Alternates.Add("Content__" + contentItem.ContentType);
So in order to target the meta data on a blog post only I tried creating a chtml template in my theme named:
Content.BlogPost.Summary.Metadata.chtml
Content.BlogPost.Metadata.cshtml
Not working. What am I missing...
I'm sure this will all click soon.
Thanks for all your help.
Uh. I was pretty sure I had answered this but it appears my answer got lost in the intertubes... Oh well.
Yeah, you have two shapes at play here: the content item's shape (that does have an alternate with the content type) and the metadata part that is provided by the Orchard.Common module. That one is called Parts_Common_MetaData and has no alternates that
I know of. This is where the technique I showed in my post comes in handy as it enables you to add your own alternates, for example based on the content type name.
Makes sense?
Oh, and the metadata shape is just one of the shapes that will get rendered inside of the more global content item shape. It's a hierarchy in case that wasn't clear.
Still struggling with this. I've added the below but a breakpoint on the ContentItem doesn't get hit. I've found this shape mentioned by name in the src so I know its correct.
public class MetaDataShapeProvider : IShapeTableProvider
{
private readonly IWorkContextAccessor workContextAccessor;
public MetaDataShapeProvider(IWorkContextAccessor workContextAccessor)
{
this.workContextAccessor = workContextAccessor;
}
public void Discover(ShapeTableBuilder builder)
{
builder
.Describe("Parts_Common_Metadata")
.OnDisplaying(displaying =>
{
ContentItem contentItem = displaying.Shape.ContentItem;
if (contentItem != null)
displaying.ShapeMetadata.Alternates.Add("Metadata__" + contentItem.ContentType);
});
}
}
Any ideas?
If I'm understanding this concept correctly this should create an alternate with the name "Metadata_BlogPost that I can use to override the display of that part.
I'm working off the below post for anyone that's interested.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://orchard.codeplex.com/discussions/242866 | CC-MAIN-2017-09 | refinedweb | 915 | 67.76 |
I’m actually building a web app using custom Polymer elements (which is based on web-components and shadow-dom) but I encountered a serious problem. I want this app to run on an Android 4.4 WebView, but I experienced some issues with the shadow-dom when testing the app on the WebView, although it works perfectly fine on […]
When I touch the html textfield, the soft keyboard pops up, and hides the textfield, so that I was not able to see what I had typed. How to add vertical scroll for scrolling hidden content?
I am getting this warning when getting HTML content from server to WebView and HTML contains img src tag Example: sequences represents the increasing order of the polarizing power of the cationic species img src=”” 11-06 01:35:44.129: W/PicturePileLayerContent(2179): Warning: painting PicturePile without content! 11-06 01:35:44.139: W/PicturePileLayerContent(2179): Warning: painting PicturePile without content! 11-06 01:35:44.149: W/PicturePileLayerContent(2179): Warning: […]
I have an Ionic project where I want to display a video, and on top of it the content of the cordova WebView. To have the transparent WebView on top of the video view, in a plugin I use: webView.getView().setBackgroundColor(0x00000000); ((ViewGroup) webView.getView()).bringToFront(); The VideoView is initialized and added in a FrameLayout like this: FrameLayout containerView […]
I placed a WebView in a NestedScrollView. To hide/show the toolbar when the user scrolls down/up in the WebView. Unforunatly, there are issues: When click on a url in a webpage that navigates to a smaller webpage (in height), a blank area appears at the bottom of the webview. When i remove this a javascript […]
FIrst off I am working on an application that needs to run JavaScript when the app is not in the foreground; the problem appears to be that when the app is put into the background/ the webview is detached from the screen the webview’s onPause method is called which per docs does the following: “Pauses […]
I am trying to have interactive buttons in an android WebView that contains flash. As a test, I set up a HTML to load in a flash through with a set x/y size. public class webz extends WebView { private Drawable image; public webz(Context context, AttributeSet attrs) { super(context, attrs); image=context.getResources().getDrawable(R.drawable.icon); getSettings().setPluginsEnabled(true); } @Override protected […]
I have a document reader project in android. Main Activity includes a WebView. The texts are reading from html. In Top Options menu includes a button for increasing text size dynamically (text are wrapping). So far so clear but when the button pressed, the text size is some increases but all text are shifting down […]
Problem : I want to pre load a web page inside an Android WebView and attach it to an Activity when the Activity is ready. The trigger point of loading the webpage is before the actual Activity gets created. So I create a webview object in a service the following way. MutableContextWrapper contextWrapper = new […]
I have successfully implemented crosswalk webview project inside an Android studio project. Basically by following this link: People familiar with implementing crosswalk know that the app size gets increased by +- 20-30 mb. For this reason i have been trying to integrate the lite version of crosswalk. Which is +- 10 mb, Unfortunaly without […] | http://babe.ilandroid.com/android/webview/page/5 | CC-MAIN-2018-22 | refinedweb | 566 | 51.78 |
Create and Use a Custom Finder
The MATLAB Report Generator report generation API supports creation of finders that search data containers for specified objects and return the results in reportable form. Finders allow you to separate search logic from report logic in your report generators. Finders also promote reuse of search logic, thereby speeding development of report generators. This example shows how to develop and use a finder to generate a report.
Define a Finder
Creating a finder entails creating a MATLAB class that defines the finder's properties and behavior. The following sections explain the steps needed to create a finder class. The explanation uses a class named
GrantFinder as an example. The code for the class resides in a file,
GrantFinder.m, that accompanies this script. The
GrantFinder class defines a finder capable of finding and formatting grants awarded by the U.S. National Endowment for the Humanities (NEH).
Create a Skeleton Class Definition File
Use the MATLAB Editor (not the Live Editor) to create a skeleton class definition for your finder, for example
Specify Finder Base Class
Specify the Report API's
mlreportgen.finder.Finder class as the base class for your finder.
This base class defines properties that are common to finders, including
Container: a property used to reference the container to be searched by the finder. For example, the
GrantFinderuse this property to store a reference to a grant database that it creates.
Properties: a property used by finder clients to specify property values that an object must have to satisfy a search. For example, this property allows a GrantFinder client to specify the grant properties that a a grant must have to be returned as a result of a search of the NEH grant database.
The
mlreportgen.finder.Finder class specifies other properties and methods that your finder class definition must define. This ensures that your finder works with the Report API.
Define a Finder Constructor
Define a function that creates an instance of your finder, for example,
The
GrantFinder constructor uses MATLAB's
xmlread function to read the grant XML file from disk and convert it to a Java DOM document. It then passes the DOM document to the
mlreportgen.finder.Finder constructor, which sets the Java DOM document as the value of the finder's
Container property. Storing the NEH database as a Java DOM document allows the finder to use third-party Java software to search the data base. See Xerces Java DOM API for more information.
The constructor also calls a
reset function that initializes variables used to search the grant database. The GrantFinder class defines this function. Similarly, your class must define a reset function. The
reset function ensures that a client can use your finder to conduct multiple searches of its container. See Define a Reset Function for more information.
Define a
find Method
Define a method to search the finder container for objects that meet user-specified constraints. The find method must return an array of result objects that contain the objects that it finds. Result objects are objects of base type
mlreportgen.finder.Result. Returning the find results as result objects allows a user of your finder to add the results to a report or report chapter. See Define a Finder Result for more information. Returning the find results as a MATLAB array allows you to use for loop to process search results, for example
The
GrantFinder's
find method illustrates definition of a
find method.
This find method uses
getNodeList, a search utility function that the
GrantFinder class defines (see Define a Search Utility Method) to search the grant database for grants that meet property value constraints specified by the finder's
Properties property. The
getNodeList function sets an internal property named
NodeList to the result of its search. The result is a Java DOM
NodeList object (
NodeList) that contains the search results as a list of Java DOM elements (
Element).
The
find method then converts this node list to an array of result objects of type
GrantResult. It uses the
GrantResult constructor to create a grant result object from the Java DOM
Element object that contains the grant data.
Define
hasNext and
next Methods
Your finder class definition must define
hasNext and
next methods. On its first invocation, your
hasNext method must create a queue of search results and return true if the queue is not empty. On subsequent invocations the
hasNext method must return
true if the queue is empty,
false otherwise. Your
next method must return the first result in the queue on its first invocation, the next result, on its next invocation, and so on, until the queue is empty.
These methods are intended to allow a client of your finder to use a MATLAB
while loop to search your finder's container, for example,
The
GrantFinder class illustrates a
hasNext method.
This method first checks whether it has already created a search queue as indicated by the finder's
IsIterating property. If the queue already exists and is not empty, this method returns
true. if the queue exists and is empty, this method returns
false. If the queue does not yet exist (i.e., this is the method's first invocation), the
hasNext method creates a result queue as follows. First, it uses its internal
getNodeList method to get the grants that meet the search criteria specified by the finder's
Properties property. The
getNodeList method sets an internal finder property named
NodeCount to the number of results found. If
NodeCount is greater than zero, the
hasNext method sets an internal property named
NextNodeIndex to 1. The finder's
next method uses this property to save the state of the search queue, that is the next item in the queue. Finally, if the queue is not initially empty, the finder returns
true; otherwise,
false.
The
GrantFinder's
next method operates on the queue created by the
hasNext method.
Define a Search Utility Method
Your finder's
find and
hasNext methods must search your finder's container for objects that satisfy search constraints. You should consider defining a search utility that both methods can use. For example, the
GrantFinder
hasNext and
next methods both delegate searching to an internal utility named
getNodeList. The
getNodeList method in turn delegates searching to an XML document search API named
XPath (see XPath Tutorial).
Create an
InvalidPropertyNames Property
Your finder must define a property named
InvalidPropertyNames that specifies object properties that cannot be used to constrain a search. The
mlreportgen.finder.Finder base class uses this property to verify that user-specified search properties specified by your finder's
Properties property are valid. If not, the base class throws an error. In other words, if a client sets your finder's
Properties property to invalid properties, the base class throws an error. In this way, the Report API's base finder handles property validity checking for your finder.
If your finder can use any search object property as a search constraint, it should set the
InvalidPropertyNames property empty. For example, the
GrantFinder can handle any grant property. It therefore sets this property empty:
Define a
reset Method
A finder must be able to support multiple searches to avoid the need to create a finder for every search. For this reason, the Report API's base finder class forces your finder class to define a
reset method that resets variables used by your finder's search logic, for example,
Define a Finder Result
If a suitable definition does not exist, you must create a class to define the result objects returned by your finder. This section shows how to define a finder result object. It uses a class named
GrantResult as an example. The
GrantResult class defines results returned by the
GrantFinder class used as an example in the Define a Finder section. The
GrantResult.m file that accompanies this script contains the code for the
GrantResult class. Defining a finder result entails the following tasks.
Specify the Result Base Class
Define
mlreportgen.finder.Result as the base class for your result class, for example,
Define
Object Property
Define a property named
Object that clients of your result object can use to access the found object that your result object contains. Specify
protected as the
SetAccess value of your finder's
Object property. This ensures that only your result can specify the found object that it contains.
Your result constructor must set the found object as the value of its
Object property. Your result constructor can use the base class constructor to perform this task, for example,
Expose Found Object Properties
Your result's
Object property allows a client to access the found object and therefore its properties. However, accessing the properties can requires extra code or specialized knowledge. You may want to expose some or all of the found object's properties as properties of the result object. For example, the
GrantResult class exposes the following subset of a grant's properties.
This saves the client of the grant finder result object from having to extract these properties itself. Your result's constructor should extract the values of the properties to be exposed and set the corresponding result properties to the extracted values, for example,
Note that
GrantResult combines some of the grant properties into a single exposed property. For example, it exposes a grant's
InstCity,
InstState,
InstPostalCode, and
InstCountry properties into a single result property named
Location.
In this example, the constructor uses internal methods to extract the grant properties from the grant object, which is a Java DOM
Element object, for example,
Define a
getReporter Method
You must define a getReporter method for your result object that returns a reporter object that reports on the found object that the result object contains. This method allows a client of your finder to report on a result of a find operation simply by adding the result to a
Section, or
Chapter object. For example,
A report or chapter's
add method knows that a result object must have a
getReporter method that returns a reporter that formats the data the result contains. So if you add a result object to a report or chapter, the
add method invokes the result's
getReporter method to get the result reporter and adds the result reporter to the report or reporter, causing the result data to be formatted and included in the report.
The
GrantResult class definition defines a
getReporter method that returns a customized version of the Report API's
mlreportgen.report.BaseTable reporter. The
BaseTable reporter generates a table with a numbered title. The
GrantResult class customizes the
BaseTable reporter to generate a table of grant properties, for example,
The following code shows how the
GrantResult class customizes the
BaseReporter to generate a numbered grant properties table:
Use a Finder
This script shows how to use a finder to generate a report. This script uses the GrantFinder example used in the Define a Finder section to generate a PDF report on NEH grants to institutions in selected states from 2010 through 2012. The script performs the following tasks.
Import the Report Generator API
Import the classes included in the MATLAB Report Generator's Report API. Importing the classes allows the script to use unqualified (i.e., abbreviated) names to refer to the classes.
import mlreportgen.report.* import mlreportgen.dom.*
Create a Report Container
Create a PDF container for the report, using the Report API's
mlreportgen.report.Report class. Note that because the script imports the Report API, it can refer to the class by its unqualified name.
rpt = Report('grant', 'pdf');
Create the Report Title Page
Add a title page to the report, using the Report API's
TitlePage class.
add(rpt, TitlePage( ... 'Title', 'NEH Grants', ... 'Subtitle', 'By State from 2010-2012', ... 'Image', 'neh_logo.jpg', ... 'Author', 'John Doe' ... ));
Create the Report Table of Contents
Add a table of contents, using the Report API's
TableOfContents class.
add(rpt, TableOfContents);
Find the Report Data
Use an array of structures to specify the states to be included in this report. Each structure contains the data for a specific state:
Name: name of the state
Grants: grants made to institutions in the state. This field is initially empty.
NGrants: number of grants made to institutions in this state (initially empty)
states = struct( ... 'Name', {'California', 'Massachusetts', 'New York'}, ... 'PostalCode', {'CA', 'MA', 'NY'}, ... 'Grants', cell(1,3), ... 'NGrants', cell(1,3) ... );
Use a grant finder to populate the
Grants and
NGrants fields of the state structures. Create the grant finder.
f = GrantFinder;
Loop through the state array. For each state, use the finder's
Properties property to constrain the search for grants awarded to the state. Use these grant properties to constrain the search:
InstState: Specifies the postal code of the state in which the institution that received the grant is located.
YearAwarded: Specifies the year in which the grant was awarded.
The finder treats property values as regular expressions. Exploit this fact to specify a range of values, 2010-2012, as the value of the
YearAwarded property.
n = numel(states); for i = 1:n f.Properties = [ {'InstState', states(i).PostalCode}, ... {'YearAwarded', '201[0-2]'}]; states(i).Grants = find(f); states(i).NGrants = numel(states(i).Grants); end
Create the Grant Summary Chapter
Create a grant summary as the first chapter of the report. The grant summary chapter contains a title and a grant summary table. Each row of the table lists the total number of grants and total amount of money awarded to institutions in the state for the years 2010-2012. States appear in the table in descending order of number of grants. Each state is hyperlinked to the chapter that details the grants awarded to it.
Create the Summary Chapter Container
Start by creating a chapter container. .
ch = Chapter('Title', 'Grant Summary');
Create the Contents of the Grant Summary Table
Start by creating a Java currency formatter. Use this object to format the dollar amount of the grants awarded to a state.
currencyFormatter = java.text.NumberFormat.getCurrencyInstance();
Create a cell array containing the contents of the table header.
header = {'State', 'Grants Awarded', 'Amount Awarded'};
Preallocate a cell array to contain the table body contents. The cell array has Rx3 rows and columns where R is the number of states and 3 is the number of summary items reported for each state.
body = cell(numel(states), 3);
Sort the states array by the number of grants awarded to them, using the MATLAB
sort function. The
sort function returns
ind, an array of indices to the states array. The first index of the
ind array is the index of the state with the most grants, the second, with the second most number of grants, etc.
[~, ind] = sort([states.NGrants], 'descend');
Loop through the states by number of grants, filling in the summary information for each state. Use a variable,
rowIdx, as an index to the cell array row corresponding to the current state.
rowIdx = 0;
The following line rearranges the
states array in order of grants received and creates a
for loop that assigns each structure in the sorted
states array to the variable
state on each iteration of the loop.
for state = states(ind)
Update the row index to point to the cell array row corresponding to the current state.
rowIdx = rowIdx+1;
The script enters a hyperlink to the grant details chapter for the state as the first entry in the table for the state, for example,
The following line uses a DOM
InternalLink constructor to create the hyperlink. The
InternalLink constructor takes two arguments, a link target id and the text of the hyperlink. The script uses the current state's postal code as the link target id and the state's name as the link text. Later on, when the script creates the grant details chapter, it inserts a link target in the chapter title whose id is the state's postal code. This completes creation of the hyperlink.
body(rowIdx, 1) = {InternalLink(state.PostalCode, state.Name)};
Assign the total number of grants for this state to the second item in its cell array row.
body(rowIdx, 2) = {state.NGrants};
Compute the total amount awarded to this state.
totalAwarded = 0; for grant = state.Grants totalAwarded = totalAwarded + str2double(grant.AwardAmount); end
Use the currency formatter to format the total amount as a dollar amount, for example,
and assign the formatted result as the third and final item in the cell array for this state.
body(rowIdx,3) = {char(currencyFormatter.format(totalAwarded))}; end
To create the summary table, pass the header and body cell arrays to the constructor of a
mlreportgen.dom.FormalTable object.
table = FormalTable(header, body);
A formal table is a table that has a header and a body. The
FormalTable constructor takes two arguments: a cell array that specifies the contents of the table's header and a cell array that specifies the contents of its body. The constructor converts the cell array contents to DOM
TableRow and
TableEntry objects that define the table, saving the script from having to create the necessary table objects itself.
Format the Grant Summary Table
At this point, the summary table looks like this:
This is not very readable. The heading has the same format as the body and the columns are not spaced apart.
In the following steps, the script adjusts the header text formatting to look like this:
First the script specifies the width and alignment of the table columns, using a DOM
TableColSpecGroup object. A
TableColSpecGroup object specifies the format of a group of columns. The summary table has only one group of columns so the script needs to create only one
TableColSpecGroup object.
grp = TableColSpecGroup;
The
TableColSpecGroup object lets the script specify the default style of the table's columns. The script specifies
1.5in as the default width of the columns and center alignment as the default column alignment.
grp.Style = {HAlign('center'), Width('1.5in')};
The script uses a
TableColSpec object to override the default column alignment for the first column.
specs(1) = TableColSpec; specs(1).Style = {HAlign('left')}; grp.ColSpecs = specs; table.ColSpecGroups = grp;
Note The script could use as many as three
TableColSpec objects, one for each table column, to override the group column styles. The first
TableColSpec object applies to the first column, the second to the second column, etc. The script needs to assign only one column spec object to the group because it is overriding the default style only for the first column. However, if it needed to change only the third column, it would have to assign three column spec objects, leaving the
Style property of the first two column spec objects empty.
The default table style crowds the table entries. So the script uses a DOM
InnerMargin format object to create some space above the entries to separate them from the entries in the row above them. An
InnerMargin object creates space (inner margin) between a document object and the object that contains it, for example, between the text in a table entry and the table entry's borders. The
InnerMargin constructor optionally takes four arguments, the left, right, top, bottom inner margins of a document object.
The script use this constructor to create a top inner margin format of 3 points. It then assigns this format to the style of the entries in the summary table's body section.
table.Body.TableEntriesStyle = {InnerMargin('0pt', '0pt', '3pt', '0pt')};
Finally the script format the table header to consist of bold, white text on a gray background:
table.Header.row(1).Style = {Bold, Color('white'), BackgroundColor('gray')};
Add the Summary Chapter to the Report
add(ch, table); add(rpt, ch);
Create the Grant Details Chapters
Loop through the state structures.
for state = states
For each state create a chapter to hold the state's grant details. Insert a link target into the chapter title to serve as a target for the hyperlink in the summary table in the first chapter.
ch = Chapter('Title', {LinkTarget(state.PostalCode), state.Name});
Loop through the grant results for the state.
for grant = state.Grants
For each grant result, add the result to the chapter.
add(ch, grant);
A grant result has a
getReporter method that returns a reporter that creates a table of selected grant properties. The chapter
add method is preconfigured to get a result's reporter and add it the chapter. Thus, adding a grant to a chapter is tantamount to adding the result property table to the chapter, for example,
end add(rpt, ch); end
Close the Report Object
Closing the report object generates the PDF output file (
grant.pdf) that the report object specifies.
close(rpt);
Display the report
rptview(rpt);
Appendix: The NEH Grant Database
The source of the database used in this example is the National Endowment for the Humanities (NEH). The database contains information on NEH grants for the period 2010-2019. It contains about 6000 records in XML format. It is available at NEH Grant Data. This example uses a local copy of the database XML file,
NEH_Grants2010s.xml.
The database consists of a
Grants element that contains a set of
Grant elements each of which contains a set of grant data elements. The following is an extract from the database that illustrates its structure:
See Also
mlreportgen.finder.Finder |
mlreportgen.report.Report |
mlreportgen.report.Chapter |
rptview | https://ch.mathworks.com/help/rptgen/ug/create-and-use-a-custom-finder.html | CC-MAIN-2022-21 | refinedweb | 3,565 | 54.12 |
Code First Data Annotations
Note
EF4.1 Onwards Only - The features, APIs, etc. discussed in this page were introduced in Entity Framework 4.1. If you are using an earlier version, some or all of this information does not apply.
The content on this page is adapted from an article originally written by Julie Lerman (<>).
Entity Framework Code First allows you to use your own domain classes to represent the model that EF relies on to perform querying, change tracking, and updating functions. Code First leverages a programming pattern referred to as 'convention over configuration.' Code First will assume that your classes follow the conventions of Entity Framework, and in that case, will automatically work out how to perform its job. However, if your classes do not follow those conventions, you have the ability to add configurations to your classes to provide EF with the requisite information.
Code First gives you two ways to add these configurations to your classes. One is using simple attributes called DataAnnotations; the second is Code First's Fluent API, which lets you describe configurations imperatively in code. This article focuses on DataAnnotations.
The model
I’ll demonstrate Code First DataAnnotations with a simple pair of classes: Blog and Post.

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string BloggerName { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateTime DateCreated { get; set; }
    public string Content { get; set; }
    public int BlogId { get; set; }
    public ICollection<Comment> Comments { get; set; }
}
As they are, the Blog and Post classes conveniently follow code first convention and require no tweaks to enable EF compatibility. However, you can also use the annotations to provide more information to EF about the classes and the database to which they map.
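To exercise these classes you need a context that exposes them. The following is a minimal sketch — the BlogContext name is illustrative, not part of the article, and it assumes the System.Data.Entity namespace from EF 4.1 or later:

```csharp
using System.Data.Entity;

// A minimal context: each DbSet tells Code First to include that class
// (and anything it references) in the model.
public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}
```

With this in place, Code First builds the model from the Blog and Post classes by convention, applying any annotations it finds along the way.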
Key
Entity Framework relies on every entity having a key value that is used for entity tracking. One convention of Code First is implicit key properties; Code First will look for a property named “Id”, or a combination of class name and “Id”, such as “BlogId”. This property will map to a primary key column in the database.
The Blog and Post classes both follow this convention. But suppose they didn't — for example, suppose Blog used a property named PrimaryTrackingKey as its identifier. Because Code First requires a key property, it would throw an exception; you would then apply the Key annotation to tell EF which property to use. Primary keys created by Code First are Identity by default.
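As a sketch of that scenario (the PrimaryTrackingKey name is hypothetical, taken from the situation just described; the Key attribute lives in System.ComponentModel.DataAnnotations):

```csharp
public class Blog
{
    [Key]
    public int PrimaryTrackingKey { get; set; }   // mapped as the primary key; Identity by default
    public string Title { get; set; }
    public string BloggerName { get; set; }
    public virtual ICollection<Post> Posts { get; set; }
}
```

With database generation, the Blog table gets a primary key column named PrimaryTrackingKey.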
Composite keys
Entity Framework supports composite keys - primary keys that consist of more than one property. For example, you could have a Passport class whose primary key is a combination of PassportNumber and IssuingCountry.
public class Passport
{
    [Key]
    public int PassportNumber { get; set; }

    [Key]
    public string IssuingCountry { get; set; }

    public DateTime Issued { get; set; }
    public DateTime Expires { get; set; }
}
Attempting to use the above class in your EF model would result in an
InvalidOperationException:
Unable to determine composite primary key ordering for type 'Passport'. Use the ColumnAttribute or the HasKey method to specify an order for composite primary keys.
In order to use composite keys, Entity Framework requires you to define an order for the key properties. You can do this by using the Column annotation to specify an order.
Note
The order value is relative (rather than index based) so any values can be used. For example, 100 and 200 would be acceptable in place of 1 and 2.
public class Passport { [Key] [Column(Order=1)] public int PassportNumber { get; set; } [Key] [Column(Order = 2)] public string IssuingCountry { get; set; } public DateTime Issued { get; set; } public DateTime Expires { get; set; } }
If you have entities with composite foreign keys, then you must specify the same column ordering that you used for the corresponding primary key properties.
Only the relative ordering within the foreign key properties needs to be the same, the exact values assigned to Order do not need to match. For example, in the following class, 3 and 4 could be used in place of 1 and 2.
public class PassportStamp { [Key] public int StampId { get; set; } public DateTime Stamped { get; set; } public string StampingCountry { get; set; } [ForeignKey("Passport")] [Column(Order = 1)] public int PassportNumber { get; set; } [ForeignKey("Passport")] [Column(Order = 2)] public string IssuingCountry { get; set; } public Passport Passport { get; set; } }
Required
The Required annotation tells EF that a particular property is required.
Adding Required to the Title property will force EF (and MVC) to ensure that the property has data in it.
[Required] public string Title { get; set; }
With no additional code or markup changes in the application, an MVC application will perform client side validation, even dynamically building a message using the property and annotation names.
The Required attribute will also affect the generated database by making the mapped property non-nullable. Notice that the Title field has changed to “not null”..
MaxLength and MinLength.
MVC.
NotMapped
Code first convention dictates that every property that is of a supported data type); } }
ComplexType including the properties contained in its BlogDetail property. By default, each one is preceded with the name of the complex type, BlogDetail.
ConcurrencyCheck
The ConcurrencyCheck annotation allows you to flag one or more properties to be used for concurrency checking in the database when a user edits or deletes an entity. If you've been working with the EF Designer,.
TimeStamp.
Table and Column
If you are letting Code First create the database, you may want to change the name of the tables and columns it is creating..
DatabaseGenerated
An important database features is the ability to have computed properties. If you're mapping your Code First classes to tables that contain computed columns,Generated that is an integer will become an identity key in the database. That would be the same as setting DatabaseGenerated to DatabaseGeneratedOption.Identity. If you do not want it to be an identity key, you can set the value to DatabaseGeneratedOption.None.
Index
Note
EF6.1 Onwards Only - The Index attribute was introduced in Entity Framework 6.1. If you are using an earlier version the information in this section does not apply.
You can create an index on one or more columns using the IndexAttribute. Adding the attribute to one or more properties will cause EF to create the corresponding index in the database when it creates the database, or scaffold the corresponding CreateIndex calls if you are using Code First Migrations.
For example, the following code will result in an index being created on the Rating column of the Posts table in the database.
public class Post { public int Id { get; set; } public string Title { get; set; } public string Content { get; set; } [Index] public int Rating { get; set; } public int BlogId { get; set; } }
By default, the index will be named IX_<property name> (IX_Rating in the above example). You can also specify a name for the index though. The following example specifies that the index should be named PostRatingIndex.
[Index("PostRatingIndex")] public int Rating { get; set; }
By default, indexes are non-unique, but you can use the IsUnique named parameter to specify that an index should be unique. The following example introduces a unique index on a User's login name.
public class User { public int UserId { get; set; } [Index(IsUnique = true)] [StringLength(200)] public string Username { get; set; } public string DisplayName { get;Id; } }
Relationship Attributes: InverseProperty and ForeignKey
Note
This page provides information about setting up relationships in your Code First model using Data Annotations. For general information about relationships in EF and how to access and manipulate data using relationships, see Relationships & Navigation Properties.*; } }
The constraint in the database shows a relationship between InternalBlogs.PrimaryTrackingKey and Posts.BlogId..
Summary .
Feedback | https://docs.microsoft.com/en-us/ef/ef6/modeling/code-first/data-annotations | CC-MAIN-2019-35 | refinedweb | 1,172 | 52.09 |
HDP 2.5.0 provides Tez 0.7.0 and the following Apache patches:
TEZ-814: Improve heuristic for determining a task has failed output.
TEZ-1248: Reduce slow-start should special case 1 reducer runs.
TEZ-1314: Port MAPREDUCE-5821 to Tez.
TEZ-1529: ATS and TezClient integration in secure kerberos enabled cluster.
TEZ-1911: MergeManager's unconditionalReserve() should check for memory limits before allocating.
TEZ-1961: Remove misleading exception "No running dag" from AM log.
TEZ-2076: Tez framework to extract/analyze data stored in ATS for specific dag.
TEZ-2097: TEZ-UI Add dag logs backend support.
TEZ-2198: Fix sorter spill count.
TEZ-2211: Tez UI: Allow users to configure timezone.
TEZ-2291: TEZ UI: Improper vertex name in tables.
TEZ-2307: Possible wrong error message when submitting new dag.
TEZ-2311: AM can hang if kill received while recovering from previous attempt.
TEZ-2391: TestVertexImpl timing out at times on Jenkins builds.
TEZ-2398: Flaky test: TestFaultTolerance.
TEZ-2409: Allow different edges to have different routing plugin.
TEZ-2436: Tez UI: Add cancel button in column selector.
TEZ-2440: Sorter should check for indexCacheList.size() in flush(.
TEZ-2447: Tez UI: Generic changes based on feedbacks.
TEZ-2453: Tez UI: show the dagInfo is the application has set the same.
TEZ-2455: Tez UI: Dag view caching, error handling and minor layout change.
TEZ-2460: Temporary solution for issue due to YARN-256.
TEZ-2461: tez-history-parser compile fails.
TEZ-2468: Change the minimum Java version to Java 7.
TEZ-2474: The old taskNum is logged incorrectly when parallelism is changed.
TEZ-2475: Fix a potential hang in Tez local mode caused by incorrectly handled interrupts.
TEZ-2478: Move OneToOne routing to store events in Tasks.
TEZ-2481: Tez UI: graphical view does not render properly on IE1.
TEZ-2482: Tez UI: Mouse events not working on IE1.
TEZ-2483: TEZ-2843 Tez UI: Show error if in progress fails due to AM not reachable.
TEZ-2489: Disable warn log for Timeline ACL error when tez.allow.disabled.timeline-domains set to true.
TEZ-2504: Tez UI: tables - show status column without scrolling, numeric 0 shown as Not available.
TEZ-2505: PipelinedSorter uses Comparator objects concurrently from multiple threads.
TEZ-2509: YarnTaskSchedulerService should not try to allocate containers if AM is shutting down.
TEZ-2513: Tez UI: Allow filtering by DAG ID on All dags table.
TEZ-2523: Tez UI: derive applicationId from dag/vertex id instead of relying on json date.
TEZ-2527: Tez UI: Application hangs on entering erroneous RegEx in counter table search bot.
TEZ-2528: Tez UI: Column selector buttons gets clipped, and table scroll bar not visible in mac.
TEZ-2535: Tez UI: Failed task attempts link in vertex details page is broken.
TEZ-2538: ADDITIONAL_SPILL_COUNT wrongly populated for DefaultSorter with multiple partition.
TEZ-2539: Tez UI: Pages are not updating in IE.
TEZ-2541: DAGClientImpl enable TimelineClient check is wrong.
TEZ-2545: It is not necessary to start the vertex group commit when DAG is in TERMINATING
TEZ-2546: Tez UI: Fetch hive query text from timeline if dagInfo is not set.
TEZ-2547: Tez UI: Download Data fails on secure, cross-origin cluster.
TEZ-2548: TezClient submitDAG can hang if the AM is in the process of shutting down.
TEZ-2549: Reduce Counter Load on the Timeline Server.
TEZ-2552: CRC errors can cause job to run for very long time in large jobs.
TEZ-2553: Tez UI: Tez UI Nits.
TEZ-2554: Tez UI: View log link does not correctly propagate login credential to read log from yarn web.
TEZ-2560: Fix Tez-ui build for maven 3.3+.
TEZ-2561: Port for TaskAttemptListenerImpTezDag should be configurable.
TEZ-2567: Tez UI: download dag data does not work within ambari.
TEZ-2568: V_INPUT_DATA_INFORMATION may happen after vertex is initialized.
TEZ-2575: Handle KeyValue pairs size which do not fit in a single block in PipelinedSorte.
TEZ-2579: Incorrect comparison of TaskAttemptId.
TEZ-2602: Throwing EOFException when launching MR job.
TEZ-2629: LimitExceededException in Tez client when DAG has exceeds the default max counters.
TEZ-2635: Limit number of attempts being downloaded in unordered fetch.
TEZ-2636: MRInput and MultiMRInput should work for cases when there are 0 physical inputs.
TEZ-2660: Tez UI: need to show application page even if system metrics publish is disabled.
TEZ-2662: Provide a way to check whether AM or task opts are valid and error if not.
TEZ-2663: SessionNotRunning exceptions are wrapped in a ServiceException from a dying AM.
TEZ-2687: Tez should release/kill all held containers before stopping services during the shutdown phase.
TEZ-2719: Consider reducing logs in unordered fetcher with shared-fetch option.
TEZ-2730: tez-api missing dependency on org.codehaus.jettison for json.
TEZ-2732: DefaultSorter throws ArrayIndex exceptions on 2047 Mb size sort buffer.
TEZ-2734: Add a test to verify the filename generated by OnDiskMerge.
TEZ-2742: VertexImpl.finished() terminationCause hides member var of the same name.
TEZ-2745: ClassNotFound in InputInitializer causes AM to crash.
TEZ-2752: logUnsuccessful completion in Attempt should write original finish time to ATS.
TEZ-2754: Tez UI: StartTime & EndTime is not displayed with right format in Graphical View.
TEZ-2756: MergeManager close should not try merging files on close if invoked after a shuffle exception.
TEZ-2758: Remove append API in RecoveryService after TEZ-190.
TEZ-2761: Addendum fix build failure for java.
TEZ-2761: Tez UI: update the progress on the dag and vertices pages with info from AM.
TEZ-2766: Tez UI: Add vertex in-progress info in DAG detail.
TEZ-2767: Make TezMxBeanResourceCalculator the default resource calculator.
TEZ-2768: Log a useful error message when the summary stream cannot be closed when shutting down an AM.
TEZ-2780: Tez UI: Update All Tasks page while in progress.
TEZ-2781: Fallback to send only TaskAttemptFailedEvent if taskFailed heartbeat fails.
TEZ-2787: Tez AM should have java.io.tmpdir=./tmp to be consistent with tasks.
TEZ-2789: Backport events added in TEZ-2612.
TEZ-2792: Add AM web service API for task.
TEZ-2792: Addendum fix build failure for java.
TEZ-2807: Log data in the finish event instead of the start event.
TEZ-2808: Race condition between preemption and container assignment.
TEZ-2812: Tez UI: Update task & attempt tables while in progress.
TEZ-2813: Tez UI: add counter data for rest api calls to AM Web Services v2.
TEZ-2816: Preemption sometimes does not respect heartbeats between preemption.
TEZ-2817: Tez UI: update in progress counter data for the dag vertices and tasks table.
TEZ-2825: Report progress in terms of completed tasks to reduce load on AM for Tez U.
TEZ-2829: Tez UI: minor fixes to in-progress update of UI from AM.
TEZ-2830: Backport TEZ-2774. Improvements to logging in the AM and part of the runtime.
TEZ-2834: Make Tez preemption resilient to incorrect free resource reported by YARN.
TEZ-2842: Tez UI: Update Tez App details page while in-progress.
TEZ-2844: Backport TEZ-2775. Improve and consolidate logging in Runtime components.
TEZ-2846: Flaky test: TestCommit.testVertexCommit_OnDAGSuccess.
TEZ-2847: Tez UI: Task details doesn't gets updated on manual refresh after job complete.
TEZ-2850: Tez MergeManager OOM for small Map Outputs.
TEZ-2851: Support a way for upstream applications to pass in a caller context to Tez.
TEZ-2853: Tez UI: task attempt page is coming empty.
TEZ-2857: Fix flakey tests in TestDAGImpl.
TEZ-2863: Container, node, and logs not available in UI for tasks that fail to launch.
TEZ-2866: Tez UI: Newly added columns wont be displayed by default in table.
TEZ-2868: Fix setting Caller Context in Tez Examples.
TEZ-2874: Improved logging for caller context.
TEZ-2876: Tez UI: Update vertex, task & attempt details page while in progress.
TEZ-2878: Tez UI: AM error handling - Make the UI handle cases in which AM returns unexpected/no date.
TEZ-2882: Consider improving fetch failure handling.
TEZ-2885: Remove counter logs from AMWebController.
TEZ-2886: Ability to merge AM credentials with DAG credentials.
TEZ-2887: Tez build failure due to missing dependency in pom files.
TEZ-2893: Tez UI: Retain vertex info displayed in DAG details page even after completion.
TEZ-2894: Tez UI: Disable sorting for few columns while in progress. Display an alert on trying to sort them.
TEZ-2895: Tez UI: Add option to enable and disable in-progress.
TEZ-2896: Fix thread names used during Input/Output initialization.
TEZ-2898: Tez tools : swimlanes.py is broken.
TEZ-2899: Backport graphical view fix from TEZ-2899.
TEZ-2900: Ignore V_INPUT_DATA_INFORMATION when vertex is in Failed/Killed/Error.
TEZ-2907: NPE in IFile.Reader.getLength during final merge operation.
TEZ-2908: Tez UI: Errors are logged, but not displayed in the UI when AM fetch fails.
TEZ-2909: Tez UI: Application link in All DAGs table is disable when applicationhistory is unavailable.
TEZ-2910: Tez should invoke HDFS Client API to set up caller context.
TEZ-2915: Tez UI: Getting back to the DAG details page is difficult.
TEZ-2923: Tez Live UI counters view empty for vertices, tasks, attempt.
TEZ-2927: Tez UI: Graciously fail when system-metrics-publisher is disabled.
TEZ-2929: Tez UI: Dag details page displays vertices to be running even when dag have completed.
TEZ-2930: Tez UI: Parent controller is not polling at time.
TEZ-2933: Tez UI: Load application details from RM when available.
TEZ-2936: Support HDFS-based Timeline writer.
TEZ-2946: Tez UI: At times RM return a huge error message making the yellow error bar to fill the whole screen.
TEZ-2947: Tez UI: Timeline, RM & AM requests gets into a consecutive loop in counters page without any delay.
TEZ-2949: Allow duplicate dag names within session for Tez.
TEZ-2960: Tez UI: Move hardcoded url namespace to the configuration file.
TEZ-2963: RecoveryService#handleSummaryEvent exception with HDFS transparent encryption + kerberos authentication.
TEZ-2968: Counter limits exception causes AM to crash.
TEZ-2970: Re-localization in TezChild does not use correct UGI.
TEZ-2975: Bump up apache commons dependency.
TEZ-2988: DAGAppMaster:shutdownTezAM should return with a no-op if it has been invoked earlier.
TEZ-2995: Timeline primary filter should only be on callerId and not type.
TEZ-2997: Tez UI: Support searches by CallerContext ID for DAGs.
TEZ-3017: HistoryACLManager does not have a close method for cleanup.
TEZ-3025: InputInitializer creation should use the dag UGI.
TEZ-3032: Incorrect start time in different events for DAG history events.
TEZ-3036: Tez AM can hang on startup with no indication of error.
TEZ-3037: History URL should be set regardless of which history logging service is enabled.
TEZ-3052: Task internal error due to Invalid event: T_ATTEMPT_FAILED at FAILED.
TEZ-3063: Tez UI : Display Input, Output, Processor, Source and Sink configurations under a vertex.
TEZ-3066: TaskAttemptFinishedEvent ConcurrentModificationException in recovery or history logging services.
TEZ-3086: Tez UI: Backward compatibility changes.
TEZ-3101: Tez UI: Task attempt log link doesn't have the correct protocol.
TEZ-3103: Shuffle can hang when memory to memory merging enabled.
TEZ-3105: TezMxBeanResourceCalculator does not work on IBM JDK 7 or 8 causing Tez failures.
TEZ-3107: tez-tools: Log warn messages in case ATS has wrong values (e.g. startTime > finishTime).
TEZ-3114: Shuffle OOM due to EventMetaData flood.
TEZ-3117: Deadlock in Edge and Vertex code.
TEZ-3123: Containers can get re-used even with conflicting local resources.
TEZ-3126: Log reason for not reducing parallelism.
TEZ-3128: Avoid stopping containers on the AM shutdown thread.
TEZ-3131: Support a way to override test_root_dir for FaultToleranceTestRunner.
TEZ-3137: Tez task failed with illegal state exception.
TEZ-3147: Intermediate mem-to-mem: Fix early exit when only one segment can fit into memory.
TEZ-3155: Support a way to submit DAGs to a session where the DAG plan exceeds hadoop ipc limits.
TEZ-3156: Tez client keeps trying to talk to RM even if RM does not know about the application.
TEZ-3166: Counters aren't fully updated and sent for failed task.
TEZ-3173: Update Tez AM REST APIs for more information for each vertex.
TEZ-3175: Add tez client submit host.
TEZ-3177: Non-DAG events should use the session domain or no domain if the data does not need protection.
TEZ-3189: Pre-warm dags should not be counted in submitted dags count by DAGAppMaster.
TEZ-3192: IFile#checkState creating unnecessary objects though auto-boxing.
TEZ-3193: Deadlock in AM during task commit request.
TEZ-3196: java.lang.InternalError from decompression codec is fatal to a task during shuffle.
TEZ-3202: Reduce the memory need for jobs with high number of segments.
TEZ-3203: DAG hangs when one of the upstream vertices has zero tasks.
TEZ-3213: Uncaught exception during vertex recovery leads to invalid state transition loop.
TEZ-3223: Support a NullHistoryLogger to disable history logging if needed.
TEZ-3224: User payload is not initialized before creating vertex manager plugin.
TEZ-3233: Tez UI: Have LLAP information reflect in Tez UI.
TEZ-3254: Tez UI: Consider downloading Hive/Pig explain plan.
TEZ-3255: Tez UI: Hide swimlane while displaying running DAGs from old versions of Tez.
TEZ-3256: [Backport HADOOP-11032] Remove Guava Stopwatch dependency.
TEZ-3258: JVM Checker does not ignore DisableExplicitGC when checking JVM GC options.
TEZ-3259: Tez UI: Build issue - File saver package is not working well with bower.
TEZ-3262: Tez UI : zip.js is not having a bower friendly versioning system.
TEZ-3264: Tez UI: UI discrepancies.
TEZ-3276: Tez Example MRRSleep job fails when tez.staging-dir fs is not same as default FS.
TEZ-3281: Tez UI: Swimlane improvement.
TEZ-3286: Allow clients to set processor reserved memory per vertex (instead of per container).
TEZ-3288: Tez UI: Display more details in the error bar.
TEZ-3289: Tez Example MRRSleep job does not set Staging dir correctly on secure cluster.
TEZ-3291: Optimize splits grouping when locality information is not available.
TEZ-3292: Tez UI: UTs breaking with timezone change.
TEZ-3293: Fetch failures can cause a shuffle hang waiting for memory merge that never starts.
TEZ-3294: DAG.createDag() does not clear local state on repeat calls.
TEZ-3295: TestOrderedWordCount should handle relative input/output path.
TEZ-3297: Deadlock scenario in AM during ShuffleVertexManager auto reduce.
TEZ-3304: TestHistoryParser fails with Hadoop 2.7.
TEZ-3305: TestAnalyzer fails with Hadoop 2.7.
TEZ-3308: Add counters to capture input split length.
TEZ-3314: Double counting input bytes in MultiMRInput.
TEZ-3318: Tez UI: Polling is not restarted after RM recovery.
TEZ-3325: Flaky test in TestDAGImpl.testCounterLimits.
TEZ-3327: ATS Parser: Populate config details available in dag.
TEZ-3329: Tez ATS data is incomplete for a vertex which fails or gets killed before initialization.
TEZ-3331: Add operation specific HDFS counters to ATS.
TEZ-3333: Tez UI: Handle cases where Vertex/Task/Task Attempt data is missing.
TEZ-3337: Do not log empty fields of TaskAttemptFinishedEvent to avoid confusion.
TEZ-3357: Change TimelineCachePlugin to handle DAG grouping.
TEZ-3370: Tez UI: Display the log link as N/A if the app does not provide a log line.
TEZ-3374: Change TEZ_HISTORY_LOGGING_TIMELINE_NUM_DAGS_PER_GROUP conf key name.
TEZ-3376: Fix groupId generation to account for dagId starting with 1.
TEZ-3379: Tez analyzer: Move sysout to log4j.
TEZ-3382: Tez analyzer: Should be resilient to new counters.
TEZ-3398: Tez UI: Bread crumb link to Application from Application details dag/configuration tab is broken | https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/patch_tez.html | CC-MAIN-2022-27 | refinedweb | 2,592 | 69.58 |
--------------------------------------------------------------------------------
Fedora Update Notification
FEDORA-2012-19752
2012-12-05 06:29:58
--------------------------------------------------------------------------------
Name : dovecot
Product : Fedora 16
Version : 2.0.21
Release : 4.fc16:
- do not crash during mail search (CVE-2012-5620)
--------------------------------------------------------------------------------
ChangeLog:
* Tue Dec 4 2012 Michal Hlavinka - 1:2.0.21-4
- do not crash during mail search (CVE-2012-5620)
* Mon Nov 12 2012 Michal Hlavinka - 1:2.0.21-3
- fix network still not ready race condition (#871623)
* Fri Nov 2 2012 Michal Hlavinka - 1:2.0.21-2
- add reload command to service file
* Tue Jul 3 2012 Michal Hlavinka - 1:2.0.21-1
- dovecot updated to 2.0.21
-
* Tue Apr 10 2012 Michal Hlavinka - 1:2.0.20-1
- dovecot updated to 2.0.20
- doveadm import didn't import messages' flags
- Make sure IMAP clients can't create directories when accessing
nonexistent users' mailboxes via shared namespace.
- Dovecot auth clients authenticating via TCP socket could have failed
with bogus "PID already in use" errors.
* Fri Mar 16 2012 Michal Hlavinka - 1:2.0.19-1
- dovecot updated to 2.0.19, pigeonhole updated to 0.2.6
- IMAP: ENABLE CONDSTORE/QRESYNC + STATUS for a mailbox might not
have seen latest external changes to it, like new mails.
- imap_id_* settings were ignored before login.
- doveadm altmove did too much work sometimes, retrying moves it had
already done.
- mbox: Fixed accessing Dovecot v1.x mbox index files without errors.
* Mon Feb 13 2012 Michal Hlavinka - 1:2.0.18-1
- dovecot updated to 2.0.18
-.
* Mon Jan 9 2012 Michal Hlavinka - 1:2.0.17-1
- dovecot updated to 2.0.17, pigeonhole updated to 0.2.5
- Fixed memory leaks in login processes with SSL connections
- vpopmail support was broken in v2.0.16
* Fri Dec 2 2011 Michal Hlavinka - 1:2.0.16-2
- call systemd reload in postun
* Mon Nov 21 2011 Michal Hlavinka - 1:2.0.16-1
- dovecot updated to 2.0.16
* Mon Oct 24 2011 Michal Hlavinka - 1:2.0.15-2
- do not use obsolete settings in default configuration (#743444)
--------------------------------------------------------------------------------
References:
[ 1 ] Bug #883060 - CVE-2012-5620 dovecot: DoS when handling a search for
multiple keywords
-------------------------------------------------------------------------------- | http://article.gmane.org/gmane.linux.redhat.fedora.package.announce/94994 | CC-MAIN-2017-26 | refinedweb | 363 | 70.39 |
So far we have seen that the derivative of a function is the instantaneous rate of change of that function. In other words, how does a function's output change as we change one of the variables. In this lesson, we will learn about the chain rule, which allows us to see how a function's output changes as we change a variable that the function does not directly depend on. The chain rule may seem complicated, but it is just a matter of following a prescribed procedure. Learning about the chain rule will allow us to take the derivative of more complicated functions that we will encounter in machine learning.
Ok, now let's talk about the chain rule. Imagine that we would like to take the derivative of the following function:
$$f(x) = (0.5x + 3)^2 $$
Doing something like that can be pretty tricky right off the bat. Lucky for us, we can use the chain rule. The chain rule is essentially a trick that can be applied when our functions get complicated. The first step is using functional composition to break our function down. Ok, let's do it.
$$g(x) = 0.5x + 3 $$ $$f(x) = (g(x))^2$$
Let's turn these two into functions while we are at it.
def g_of_x(x): return 0.5*x + 3
g_of_x(2) # 4
4.0
def f_of_x(x): return (g_of_x(x))**2
f_of_x(2) # 16
16.0
Looking at both the mathematical and code representations of $f(x)$ and $g(x)$, we can see that the $f(x)$ function wraps the $g(x)$ function. So let's call $f(x)$ the outer function, and $g(x)$ the inner function.
def g_of_x(x): return 0.5*x + 3 def f_of_x(x): # outer function f(x) return (g_of_x(x))**2 #inner function g(x)
Let's plot these two functions.
from plotly.offline import iplot, init_notebook_mode init_notebook_mode(connected=True) from graph import trace_values, plot x_values = list(range(0, 10)) f_of_x_values = list(map(lambda x: f_of_x(x),x_values)) g_of_x_values = list(map(lambda x: g_of_x(x),x_values)) f_of_x_trace = trace_values(x_values, f_of_x_values, mode = 'lines', name = 'f(x) = (g(x))^2') g_of_x_trace = trace_values(x_values, g_of_x_values, mode = 'lines', name = 'g(x) = 0.5*x + 3') plot([g_of_x_trace, f_of_x_trace])
Ok, so now that we have a sense of how our function $g(x) = 0.5x + 3$ and $f(x) = (g(x))^2$ look, let's begin to take derivatives of these functions, starting with the derivative of $g(x)$, the inner function.
From our rules about derivatives we know that the power rule tells us that the derivative of $g(x) = 0.5x +3 $ is
$$g'(x) = 1*0.5x^0 + 0 = 0.5$$
Now a trickier question is what is the derivative of, our outer function $f(x)$? So how does the output of our outer function, $f(x)$, change as we vary $x$.
Notice that the outer function $f(x)$'s output does not directly vary with $x$. Instead, it's output varies based on the output, $g(x)$, whose output varies with $x$.
def g_of_x(x): return 0.5*x + 3 def f_of_x(x): # outer function f(x) return (g_of_x(x))**2 #inner function g(x)
The chain rule: So in taking the derivative, $\frac{df}{dx}$ of an outer function, $f(x)$, which depends on an inner function $g(x)$, which depends on $x$, the derivative equals the derivative of the outer function times the derivative of the inner function.
Or:
$$ f'(g(x)) = f'g(x)*g'(x) $$
Ok, so that is the chain rule. Let's apply this to our example.
Remember we started with the function $f(x) = (0.5x + 3)^2 $. Then we recast this as two functions:
$$g(x) = 0.5x + 3$$ $$f(x) = (g(x))^2$$
2. Find the derivatives, $f'(x)$ and $g'(x)$
3. Substitute into our chain rule
We have:
Then substituting for $g(x)$, which we already defined, we have:
$f'(g(x)) = g(x) = 0.5x + 3$
So the derivative of the function $f(x) = (0.5x + 3)^2 $ is $f'(x) = 0.5x + 3 $
The chain rule is allows us to determine the rate of change of a function that does not directly depend on a variable, $x$, but rather depends on a separate function that depends on $x$. For example, the function $f(x)$ below.
def g_of_x(x): return 0.5*x + 3 def f_of_x(x): # outer function f(x) return (g_of_x(x))**2 #inner function g(x)
It does not directly depend on $x$, but depends on a function $g(x)$, which varies with different outputs of $x$. So now we want to take the derivative of $f(x)$.
Remember, taking a derivative means changing a variable $x$ a little, and seeing the change in the output. The chain rule allows us to solve the problem of seeing the change in output when our function does not directly depend on that changing variable, but depends on *a function * that depends on a variable.
We can take the derivative of a function that indirectly depends on $x$, by taking the derivative of the outer function and multiplying it by the derivative of the inner function, or
$f'(x) = f'(g(x))*g'(x)$
Let's go through some more examples.
$$ f(x) = (3x^2 + 10x)^3$$
Stop here, and give this a shot on your own. The answer will always be waiting for you right below, so you really have nothing to lose. No one will know if you struggle - and it's kinda the point.
1.Divide the function into two components
$$g(x) = 3x^2 + 10x $$ $$f(x) = (g(x))^3$$
2. Take the derivative of each of the component functions
$$g'(x) = 6x + 10 $$ $$f'(x) = 3(g(x))^2$$
3. Substitution
$$f'(x) = f'(g(x))g'(x) = 3(g(x))^2(6x+10)$$
Then substituting in $g(x) = 3x^2 + 10x $ we have:
$$f'(x) = 3*(3x^2 + 10x)^2*(6x+10) $$
And we can leave it there for now.
In this lesson, we learned about the chain rule. The chain rule allows us to take the derivative of a function that that comprises of another function that depends on $x$. We apply the chain by taking the derivative of the outer function and multiplying that by the derivative of the inner function. We'll see the chain rule in the future when in our work with gradient descent. | https://learn.co/lessons/derivative-chain-rule | CC-MAIN-2019-43 | refinedweb | 1,076 | 72.87 |
Learning Objectives
What Is a Cache?
A cache is temporary storage. In the computer world, cache is temporary storage for frequently accessed data from a database. Here’s an analogy. Suppose you’re a chipmunk looking for nuts and acorns for dinner. It’s 5:00 and you’re ready to eat. Are you going to use the nuts and acorns stored in your cheeks (cache), or are you going back to the forest to gather more from trees (database)? If you access the temporary cache of food in your cheeks, your dinner is closer and you get to eat it faster! Also, you accomplish your goal more efficiently. A data cache has similar advantages, but for people, not chipmunks.
What Is Platform Cache?
Platform Cache is a memory layer that stores Salesforce session and org data for later access. When you use Platform Cache, your applications can run faster because they store reusable data in memory. Applications can quickly access this data; they don’t need to duplicate calculations and requests to the database on subsequent transactions. In short, think of Platform Cache as RAM for your cloud application.
With Platform Cache, you can also allocate cache space so that some apps or operations don’t steal capacity from others. You use partitions to distribute space. We’ll get to partitions later.
Before We Go Any Further
Let’s pause for a moment so you can request a trial of Platform Cache. By default, your Developer org has 0 MB of cache capacity. You can request a trial cache of 10 MB.
To request a trial, go to Setup in your Developer org. In the Quick Find box, enter cache, and then click Platform Cache. Click Request Trial Capacity and wait for the email notifying you that your Platform Cache trial is active. Salesforce approves trial requests immediately, but it can take a few minutes for you to receive the email.
If you don’t have a cache trial, you can still execute cache operations to learn how to use the cache. However, cache storage is bypassed, and retrieved values are null (cache misses).
Okay, now that you’ve requested a Platform Cache trial, let’s learn some more concepts.
When Can I Use Platform Cache?
You can use Platform Cache in your code almost anywhere you access the same data over and over. Using cached data improves the performance of your app and is faster than performing SOQL queries repetitively, making multiple API calls, or computing complex calculations.
The best data to cache is:
- Reused throughout a session, or reused across all users and requests
- Static (not rapidly changing)
- Expensive to compute or retrieve
Store Data That Doesn’t Change Often
Use the cache to store static data or data that doesn’t change often. This data is typically retrieved through API calls to a third party or locally through SOQL queries. Even if the data changes occasionally, you can still cache it when the values don’t have to be highly accurate at all times.
Examples of such static data are:
- Public transit schedule
- Company shuttle bus schedule
- Tab headers that all users see
- A static navigation bar that appears on every page of your app
- A user’s shopping cart that you want to persist during a session
- Daily snapshots of exchange rates (rates fluctuate during a day)
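The daily exchange-rate snapshot is a good fit for a time-to-live (TTL): store the rates once and let the entry expire on its own. In this sketch, the fully qualified key `local.MyPartition.usdRates` assumes a partition named `MyPartition` in the default namespace, and the third argument to `put` is the TTL in seconds.

```apex
// Cache today's snapshot of USD exchange rates for 24 hours.
Map<String, Decimal> rates = new Map<String, Decimal>{
    'EUR' => 0.92, 'JPY' => 151.3
};
Cache.Org.put('local.MyPartition.usdRates', rates, 86400);

// Later, in the same or another transaction, read the cached snapshot.
Map<String, Decimal> cached =
    (Map<String, Decimal>) Cache.Org.get('local.MyPartition.usdRates');
```

With a TTL, you don’t need cleanup code: tomorrow’s first request sees a cache miss and stores a fresh snapshot.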
Store Data Obtained from Complex Calculations
Values that are a result of complex calculations or long queries are good candidates for cache storage. Examples of such data are:
- Total sales over the past week
- Total volunteering hours company employees did as a whole
- Top sales ranking
For clues on where to use Platform Cache, inspect your code. For example, do you currently store app data by overloading a Visualforce view state? These stored values are all candidates for Platform Cache.
Not every use case is a Platform Cache use case. For example, data that changes often and that is real-time, such as stock quotes, isn’t a good candidate for caching. Also, ensure you familiarize yourself with Platform Cache limitations. For example, if your data is accessed by asynchronous Apex, it can’t be stored in a cache that is based on the user’s session.
Cache Allocations by Edition
Platform Cache is available to customers with Enterprise Edition orgs and above. The following editions come with some default cache space, but often, adding more cache gives even greater performance enhancements.
- Enterprise Edition (10 MB by default)
- Unlimited Edition (30 MB by default)
- Performance Edition (30 MB by default)
Experiment with Trial Cache
You can purchase additional cache for your org. To determine how much extra cache would be beneficial for your applications, you can request trial cache and try it out. Also, request trial cache for Professional Edition before purchasing cache. Use trial cache in your Developer Edition org to develop and test your applications with Platform Cache. When your request is approved, you receive 30 MB of trial cache space (10 MB for Developer Edition). If you need more trial cache space, contact Salesforce.
What Are the Types of Platform Cache?
There are two types of Platform Cache: org cache and session cache.
Org Cache
Org cache stores org-wide data that anyone in the org can use. Org cache is accessible across sessions, requests, and org users and profiles.
For example, weather data can be cached and displayed for contacts based on their location. Or daily snapshots of currency exchange rates can be cached for use in an app.
Session Cache
Session cache stores data for an individual user and is tied to that user’s session. The maximum life of a session is 8 hours.
For example, suppose that your app calculates the distance from a user’s location to all customers the user wishes to visit on the same day. The location and the calculated distances can be stored in the session cache. That way, if the user wants to get this information again, the distances don’t need to be recalculated. Or, you might have an app that enables users to customize their navigation tab order and reuse that order as they visit other pages in the app.
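To make this concrete, a session-cache lookup in Apex might look something like the following sketch. The partition name myPartition and the computeDistances() helper are hypothetical, and local is the namespace used when an org has no namespace prefix; adjust all three to match your org.

```apex
// Sketch only: 'myPartition' and computeDistances() are placeholders.
// Keys use the namespace.partition.key form described later.
List<Decimal> distances =
    (List<Decimal>)Cache.Session.get('local.myPartition.distances');
if (distances == null) {
    // Cache miss (expired, evicted, or no trial capacity yet): recompute,
    // then store for up to 2 hours (TTL in seconds). A session itself
    // lives at most 8 hours.
    distances = computeDistances();
    Cache.Session.put('local.myPartition.distances', distances, 7200);
}
```

Treating a null result as a cache miss and recomputing is the usual pattern, since cached values can be evicted at any time.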
What Are the Performance Gains When Using the Cache?
You might be wondering how much performance your app gains by using Platform Cache. Retrieving data from the cache is much faster than retrieving it through an API call, and likewise much faster than running the equivalent SOQL queries.
The following chart shows the retrieval times, in milliseconds, of data through an API call and the cache. It is easy to notice the huge performance gain when fetching data locally through the cache, especially when retrieving data in multiple transactions. In the sample used for the graph, the cache is hundreds of times faster than API calls. In this graph, cache retrieval times are just a few milliseconds but appear as almost zero due to the scale used for the time value. Keep in mind that this chart is a sample test, and actual numbers might vary for other apps.
This next graph compares SOQL with org and session cache retrieval times. As you can see, SOQL is slower than the cache. In this example, the cache is two or more times faster than SOQL for data retrieval in a single transaction. When performing retrievals in multiple transactions, the difference is even larger. (Note that this graph is a sample and actual numbers might vary for other apps.)
What Are Cache Partitions?
Remember when we mentioned earlier that with Platform Cache you can allocate space using partitions? Partitions let you allocate cache space to balance usage and performance across apps. Caching data to designated partitions ensures that the cache space isn’t overwritten by other apps or by less critical data.
Before you can use cache space in your org, you must create partitions to define how much capacity you want for your apps. Each partition capacity is broken down between org cache and session cache. Session and org cache allocations can be zero, 5 MB, or greater, and must be whole numbers. The minimum size of a partition, including its org and session cache allocations, is 5 MB. For example, say that your org has a total of 10-MB cache space and you created a partition with a total of 5 MB, 5 MB of which is for session cache and 0 MB for org cache. Or you can create a partition of 10-MB space, with 5 MB for org cache and 5 MB for session cache. The sum of all partitions, including the default partition, equals the Platform Cache total allocation.
The following image shows charts of cache capacity and partition allocation. In this example, we haven’t used the cache yet as evidenced by the 0% cache usage (1) and two partitions have been created with equal allocations (2).
Default Partition
You can define any partition as the default partition, but you can have only one default partition. The default partition enables you to use shorthand syntax to perform cache operations on that partition. This means that you don’t have to fully qualify the key name with the namespace and partition name when adding a key-value pair. For example, instead of calling Cache.Org.put('namespace.partition.key', 0); you can just call Cache.Org.put('key', 0);
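In Apex, the two forms might look like this sketch; the namespace ns, partition myPartition, and key dailyRates are placeholder names:

```apex
Map<String, Decimal> rates = new Map<String, Decimal>{ 'EUR' => 0.85 };

// Fully qualified form: namespace.partition.key
Cache.Org.put('ns.myPartition.dailyRates', rates);
Object cached = Cache.Org.get('ns.myPartition.dailyRates');

// If myPartition is the default partition, the shorthand form reads
// and writes the same entry:
Cache.Org.put('dailyRates', rates);
cached = Cache.Org.get('dailyRates');
```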
In the next unit, you’ll create a partition in Setup to get started using Platform Cache! | https://trailhead.salesforce.com/en/content/learn/modules/platform_cache/platform_cache_get_started | CC-MAIN-2021-43 | refinedweb | 1,593 | 63.49 |
How deep a recurse?
By clive on Aug 01, 2009
Chris has been exploring various limits of a lab M8000. Inspired by this (well, umm, also maybe bored on a conf call) and prompted by a Twitter update on Google and recursion from Alec (don't recall if I read it first on his blog or Twitter), I got to wondering how deep you can recurse on a modern system. So I wrote some code. The marginally tricky bit was setting up the alternate stack to handle the signal on, using sigaltstack.
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/mman.h>

static void handler(int);
static void recurse(void);
static int depth = 0;

int
main(void)
{
    struct sigaction act;
    struct rlimit rlp;
    stack_t ss;

    getrlimit(RLIMIT_STACK, &rlp);
    printf("RLIMIT_STACK = %llu:%llu\n",
        (unsigned long long)rlp.rlim_cur,
        (unsigned long long)rlp.rlim_max);

    /* Catch the SIGSEGV from stack exhaustion on the alternate stack.
       Note sa_handler (not sa_sigaction) is used, so SA_SIGINFO is
       deliberately not set. */
    act.sa_handler = handler;
    sigemptyset(&act.sa_mask);
    act.sa_flags = SA_RESETHAND | SA_ONSTACK;
    if (sigaction(SIGSEGV, &act, NULL) < 0) {
        perror("sigaction failed");
        exit(1);
    }

    if ((ss.ss_sp = mmap(NULL, SIGSTKSZ, PROT_READ | PROT_WRITE,
        MAP_PRIVATE | MAP_ANON, -1, 0)) == MAP_FAILED) {
        perror("mmap failed");
        exit(1);
    }
    ss.ss_size = SIGSTKSZ;
    ss.ss_flags = 0;
    if (sigaltstack(&ss, NULL) < 0) {
        perror("sigaltstack failed");
        exit(1);
    }

    recurse();
    return 0;
}

static void
recurse(void)
{
    depth++;
    recurse();
}

static void
handler(int sig)
{
    printf("depth = %d\n", depth);
    exit(0);
}
A first attempt on a MacBook running OS X gave a number of 524030. We then moved to Solaris Nevada build 110 running on one of our x86 lab systems, and also tried the stable Solaris 10 Sparc server. The Sparc numbers are a lot smaller than the x86 numbers, and are similar on Solaris 10 and Nevada. What a great microbenchmark this would make to base purchases on: how deep can a system recurse with no function arguments passed? Many purchasing decisions have been made on the results of benchmarks of similar relevance to the business problem in hand, so let's not dismiss it totally. Anyway, back to reality.
On a Solaris 10 Sparc box we get
ebusy(5.10)$ cc -o recurse recurse.c
ebusy(5.10)$ ./recurse
RLIMIT_STACK = 1000000000:1000000000
depth = 10416627
ebusy(5.10)$ cc -m64 -o recurse recurse.c
ebusy(5.10)$ ./recurse
RLIMIT_STACK = 1000000000:1000000000
depth = 5681792
ebusy(5.10)$ uname -a
SunOS ebusy 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire
and on the Nevada x86 lab system we get
exdev(5.11)$ cc -o recurse recurse.c
exdev(5.11)$ ./recurse
RLIMIT_STACK = 2147483647:2147483647
depth = 16812966
exdev(5.11)$ cc -m64 -o recurse recurse.c
exdev(5.11)$ ./recurse
RLIMIT_STACK = 1000000000:1000000000
depth = 62499512
exdev(5.11)$ uname -a
SunOS exdev 5.11 snv_110 i86pc i386 i86pc
I am sure there are some games to play with increasing the hard stack limit, or allocating the alternate stack a huge segment of memory and recursing through that as well. However, over 62 million stack frames is adequate for most recursive situations that will ever complete.
Interestingly, compiling with -xO2 or higher leads to assembler in which the recursive call has been optimized away (tail call elimination): the code just sits in a loop without ever calling down another level.
exdev(5.11)$ pstack `pgrep recur`
17659:  ./recurse
 0000000000400f80 recurse ()
exdev(5.11)$
So a few interesting questions that will have to wait for the next conf call where I don't need to pay too much attention.
All the includes have gone. If you build it 64-bit then you will be limited just by the amount of memory in the system.
Posted by Chris Gerhard on August 01, 2009 at 11:18 AM BST #
Thanks Chris, includes fixed. Even a 64-bit build still has a hard rlimit for the stack, though a bigger one. The second set of results were built 64-bit.
Posted by Clive King on August 01, 2009 at 11:38 AM BST #
Just don't ustack() it :-). Actually, we'd stop at 2048 frames by default, so that's a long way from where you got to...
Posted by Jon Haslam on August 01, 2009 at 03:08 PM BST #
Posted by Clive King on August 03, 2009 at 05:13 AM BST # | https://blogs.oracle.com/clive/entry/how_deep_can_you_recurse | CC-MAIN-2015-40 | refinedweb | 679 | 76.52 |
The Objective-C language is a simple computer language designed to enable sophisticated object-oriented programming. Objective-C is defined as a small but powerful set of extensions to the standard ANSI C language. Its additions to C are mostly based on Smalltalk, one of the first object-oriented programming languages. Objective-C is designed to give C full object-oriented programming capabilities, and to do so in a simple and straightforward way.

Most object-oriented development environments consist of several parts:

■ An object-oriented programming language
■ A library of objects
■ A suite of development tools
■ A runtime environment

This document is about the first component of the development environment—the programming language. It fully describes the Objective-C language, and provides a foundation for learning about the second component, the Mac OS X Objective-C application frameworks—collectively known as Cocoa. You can start to learn more about Cocoa by reading Getting Started with Cocoa. The two main development tools you use are Xcode and Interface Builder, described in Xcode Workspace Guide and Interface Builder respectively. The runtime environment is described in a separate document, Objective-C Runtime Programming Guide.

Important: This document describes the version of the Objective-C language released in Mac OS X v10.6, which introduces the associative references feature (see “Associative References” (page 83)). To learn about version 1.0 of the Objective-C language (available in Mac OS X v10.4 and earlier), read Object Oriented Programming and the Objective-C Programming Language 1.0.

Who Should Read This Document

This document concentrates on the Objective-C extensions to C, not on the C language itself. Because this isn’t a document about C, it assumes some prior acquaintance with that language; it doesn’t have to be an extensive acquaintance, though. Object-oriented programming in Objective-C is sufficiently different from procedural programming in ANSI C that you won’t be hampered if you’re not an experienced C programmer.

2010-07-13 | © 2010 Apple Inc. All Rights Reserved.
Organization of This Document

This document is divided into several chapters and one appendix. The following chapters cover all the features Objective-C adds to standard C, beginning with “Objects, Classes, and Messaging.” The appendix contains reference material that might be useful for understanding the language:

■ “Language Summary” (page 117) lists and briefly comments on all of the Objective-C extensions to the C language.

Conventions

Where this document discusses functions, methods, and other programming elements, it makes special use of computer voice and italic fonts. Computer voice denotes words or characters that are to be taken literally (typed as they appear). Italic denotes words that represent something else or can be varied. For example, the syntax:

@interfaceClassName(CategoryName)

means that @interface and the two parentheses are required, but that you can choose the class name and category name.

Objective-C syntax is a superset of GNU C/C++ syntax, and the Objective-C compiler works for C, C++ and Objective-C source code. The compiler recognizes Objective-C source files by the filename extension .m, just as it recognizes files containing only standard C syntax by filename extension .c. Similarly, the compiler recognizes C++ files that use Objective-C by the extension .mm. Other issues when using Objective-C with C++ are covered in “Using C++ With Objective-C” (page 111).
Where example code is shown, ellipsis points indicate the parts, often substantial parts, that have been omitted:

- (void)encodeWithCoder:(NSCoder *)coder {
    [super encodeWithCoder:coder];
    ...
}

The conventions used in the reference appendix are described in that appendix.

See Also

If you have never used object-oriented programming to create applications before, you should read Object-Oriented Programming with Objective-C. You should also consider reading it if you have used other object-oriented development environments such as C++ and Java, since those have many different expectations and conventions from Objective-C. Object-Oriented Programming with Objective-C is designed to help you become familiar with object-oriented development from the perspective of an Objective-C developer. It spells out some of the implications of object-oriented design and gives you a flavor of what writing an object-oriented program is really like.

Runtime

Objective-C Runtime Programming Guide describes aspects of the Objective-C runtime and how you can use it. Objective-C Runtime Reference describes the data structures and functions of the Objective-C runtime support library. Your programs can use these interfaces to interact with the Objective-C runtime system; for example, you can add classes or methods, or obtain a list of all class definitions for loaded classes. Objective-C Release Notes describes some of the changes in the Objective-C runtime in the latest release of Mac OS X.

Memory Management

Objective-C supports two environments for memory management: automatic garbage collection and reference counting:

■ Garbage Collection Programming Guide describes the garbage collection system used by Cocoa. (Not available on iOS—you cannot access this document through the iOS Dev Center.)
■ Memory Management Programming Guide describes the reference counting system used by Cocoa.
Objects, Classes, and Messaging

This chapter describes the fundamentals of objects, classes, and messaging as used and implemented by the Objective-C language. It also introduces the Objective-C runtime.

Runtime

The Objective-C language defers as many decisions as it can from compile time and link time to runtime. Whenever possible, it dynamically performs operations such as creating objects and determining what method to invoke. This means that the language requires not just a compiler, but also a runtime system to execute the compiled code. The runtime system acts as a kind of operating system for the Objective-C language; it’s what makes the language work. Typically, however, you don’t need to interact with the runtime directly. To understand more about the functionality it offers, see Objective-C Runtime Programming Guide.

Objects

As the name implies, object-oriented programs are built around objects. An object associates data with the particular operations that can use or affect that data. In Objective-C, these operations are known as the object’s methods; the data they affect are its instance variables (in other environments they may be referred to as ivars or member variables). In essence, an object bundles a data structure (instance variables) and a group of procedures (methods) into a self-contained programming unit.

In Objective-C, an object’s instance variables are internal to the object; generally, you get access to an object’s state only through the object’s methods (you can specify whether subclasses or other objects can access instance variables directly by using scope directives; see “The Scope of Instance Variables” (page 40)). For others to find out something about an object, there has to be a method to supply the information. For example, a Rectangle would have methods that reveal its size and its position. Moreover, an object sees only the methods that were designed for it; it can’t mistakenly perform methods intended for other types of objects. Just as a C function protects its local variables, hiding them from the rest of the program, an object hides both its instance variables and its method implementations.

id

In Objective-C, object identifiers are a distinct data type: id. This is the general type for any kind of object regardless of class, and can be used for both instances of a class and class objects themselves. id is defined as pointer to an object data structure:

typedef struct objc_object {
    Class isa;
} *id;

All objects thus have an isa variable that tells them of what class they are an instance. Since the Class type is itself defined as a pointer:

typedef struct objc_class *Class;

the isa variable is frequently referred to as the “isa pointer.”

The keyword nil is defined as a null object, an id with a value of 0. id, nil, and the other basic types of Objective-C are defined in the header file objc/objc.h.

Dynamic Typing

The id type is completely nonrestrictive. By itself, it yields no information about an object, except that it is an object. At some point, a program typically needs to find more specific information about the objects it contains. Since the id type designator can’t supply this information to the compiler, each object has to be able to supply it at runtime.

The isa instance variable identifies the object’s class—what kind of object it is. Objects with the same behavior (methods) and the same kinds of data (instance variables) are members of the same class. Objects are thus dynamically typed at runtime. Whenever it needs to, the runtime system can find the exact class that an object belongs to, just by asking the object. (To learn more about the runtime, see Objective-C Runtime Programming Guide.) Dynamic typing in Objective-C serves as the foundation for dynamic binding, discussed later.

The isa variable also enables objects to perform introspection—to find out about themselves (or other objects). The compiler records information about class definitions in data structures for the runtime system to use; the functions of the runtime system use isa to find this information at runtime. Using the runtime system, you can, for example, determine whether an object implements a particular method, or discover the name of its superclass. Object classes are discussed in more detail under “Classes” (page 23).

It’s also possible to give the compiler information about the class of an object by statically typing it in source code using the class name. Classes are particular kinds of objects, and the class name can serve as a type name. See “Class Types” (page 26) and “Enabling Static Behavior” (page 91).

For the object-oriented constructs of Objective-C, such as method return values, id replaces int as the default data type. (For strictly C constructs, such as function return values, int remains the default type.)
Memory Management

In any program, it is important to ensure that objects are deallocated when they are no longer needed—otherwise your application’s memory footprint becomes larger than necessary. It is also important to ensure that you do not deallocate objects while they’re still being used. Objective-C offers two environments for memory management that allow you to meet these goals:

■ Reference counting, where you are ultimately responsible for determining the lifetime of objects. Reference counting is described in Memory Management Programming Guide.
■ Garbage collection, where you pass responsibility for determining the lifetime of objects to an automatic “collector.” Garbage collection is described in Garbage Collection Programming Guide. (Not available on iOS—you cannot access this document through the iOS Dev Center.)

Object Messaging

This section explains the syntax of sending messages, including how you can nest message expressions. It also discusses the “visibility” of an object’s instance variables, and the concepts of polymorphism and dynamic binding.

Message Syntax

To get an object to do something, you send it a message telling it to apply a method. In Objective-C, message expressions are enclosed in brackets:

[receiver message]

The receiver is an object, and the message tells it what to do. In source code, the message is simply the name of a method and any arguments that are passed to it. When a message is sent, the runtime system selects the appropriate method from the receiver’s repertoire and invokes it.

For example, this message tells the myRectangle object to perform its display method, which causes the rectangle to display itself:

[myRectangle display];

The message is followed by a “;” as is normal for any line of code in C.

The method name in a message serves to “select” a method implementation. For this reason, method names in messages are often referred to as selectors.

Methods can also take parameters, or “arguments.” A message with a single argument affixes a colon (:) to the name and puts the argument right after the colon:

[myRectangle setWidth:20.0];

For methods with multiple arguments, Objective-C’s method names are interleaved with the arguments such that the method’s name naturally describes the arguments expected by the method. The imaginary message below tells the myRectangle object to set its origin to the coordinates (30.0, 50.0):

[myRectangle setOriginX: 30.0 y: 50.0]; // This is a good example of multiple arguments

A selector name includes all the parts of the name, including the colons, so the method is named setOriginX:y:. It has two colons as it takes two arguments. The selector name does not, however, include anything else, such as return type or parameter types.

Important: The sub-parts of the method name—of the selector—are not optional, nor can their order be varied. “Named arguments” and “keyword arguments” often carry the implication that the arguments to a method can vary at runtime, can have default values, can be in a different order, and can possibly have additional named arguments. This is different from the named or keyword arguments available in a language like Python:

def func(a, b, NeatMode=SuperNeat, Thing=DefaultThing): pass

where Thing (and NeatMode) might be omitted or might have different values when called. This is not the case with Objective-C. For all intents and purposes, an Objective-C method declaration is simply a C function that prepends two additional arguments (see Messaging in the Objective-C Runtime Programming Guide).

In principle, the Rectangle class could instead implement a setOrigin:: method, which would be invoked as follows:

[myRectangle setOrigin:30.0 :50.0]; // This is a bad example of multiple arguments

This particular method does not interleave the method name with the arguments and, thus, the second argument is effectively unlabeled and it is difficult to determine the kind or purpose of the method’s arguments.

Methods that take a variable number of arguments are also possible, though they’re somewhat rare. Extra arguments are separated by commas after the end of the method name. (Unlike colons, the commas aren’t considered part of the name.) In the following example, the imaginary makeGroup: method is passed one required argument (group) and three that are optional:

[receiver makeGroup:group, memberOne, memberTwo, memberThree];

Like standard C functions, methods can return values. The following example sets the variable isFilled to YES if myRectangle is drawn as a solid rectangle, or NO if it’s drawn in outline form only:

BOOL isFilled;
isFilled = [myRectangle isFilled];

Note that a variable and a method can have the same name.

One message expression can be nested inside another. Here, the color of one rectangle is set to the color of another:

[myRectangle setPrimaryColor:[otherRect primaryColor]];

Objective-C also provides a dot (.) operator that offers a compact and convenient syntax for invoking an object’s accessor methods. This is typically used in conjunction with the declared properties feature (see “Declared Properties” (page 67)), and is described in “Dot Syntax” (page 19).
Sending Messages to nil

In Objective-C, it is valid to send a message to nil—it simply has no effect at runtime. There are several patterns in Cocoa that take advantage of this fact. The value returned from a message to nil may also be valid:

■ If the method returns an object, then a message sent to nil returns 0 (nil), for example:

Person *motherInLaw = [[aPerson spouse] mother];

If aPerson’s spouse is nil, then mother is sent to nil and the method returns nil.

■ If the method returns any pointer type, any integer scalar of size less than or equal to sizeof(void*), a float, a double, a long double, or a long long, then a message sent to nil returns 0.
■ If the method returns a struct, as defined by the Mac OS X ABI Function Call Guide to be returned in registers, then a message sent to nil returns 0.0 for every field in the data structure. Other struct data types will not be filled with zeros.
■ If the method returns anything other than the aforementioned value types, the return value of a message sent to nil is undefined.

The following code fragment illustrates valid use of sending a message to nil:

id anObjectMaybeNil = nil;

// this is valid
if ([anObjectMaybeNil methodThatReturnsADouble] == 0.0) {
    // implementation continues...
}

Note: The behavior of sending messages to nil changed slightly with Mac OS X v10.5. On Mac OS X v10.4 and earlier, a message to nil also is valid, as long as the message returns an object, any pointer type, void, or any integer scalar of size less than or equal to sizeof(void*); if it does, a message sent to nil returns nil. If the message sent to nil returns anything other than the aforementioned value types (for example, if it returns any struct type, any floating-point type, or any vector type) the return value is undefined. You should therefore not rely on the return value of messages sent to nil unless the method’s return type is an object, any pointer type, or any integer scalar of size less than or equal to sizeof(void*).

The Receiver’s Instance Variables

A method has automatic access to the receiving object’s instance variables. You don’t need to pass them to the method as arguments. For example, the primaryColor method illustrated above takes no arguments, yet it can find the primary color for otherRect and return it. Every method assumes the receiver and its instance variables, without having to declare them as arguments.

This convention simplifies Objective-C source code. It also supports the way object-oriented programmers think about objects and messages. Messages are sent to receivers much as letters are delivered to your home. Message arguments bring information from the outside to the receiver; they don’t need to bring the receiver to itself.
A method has automatic access only to the receiver’s instance variables. If it requires information about a variable stored in another object, it must send a message to the object asking it to reveal the contents of the variable. The primaryColor and isFilled methods shown above are used for just this purpose. See “Defining a Class” (page 35) for more information on referring to instance variables.

Polymorphism

As the examples above illustrate, messages in Objective-C appear in the same syntactic positions as function calls in standard C. But, because methods “belong to” an object, messages behave differently than function calls. In particular, an object can be operated on by only those methods that were defined for it. It can’t confuse them with methods defined for other kinds of object, even if another object has a method with the same name. This means that two objects can respond differently to the same message. For example, each kind of object sent a display message could display itself in a unique way. A Circle and a Rectangle would respond differently to identical instructions to track the cursor.

This feature, referred to as polymorphism, plays a significant role in the design of object-oriented programs. Together with dynamic binding, it permits you to write code that might apply to any number of different kinds of objects, without you having to choose at the time you write the code what kinds of objects they might be. They might even be objects that will be developed later, by other programmers working on other projects. If you write code that sends a display message to an id variable, any object that has a display method is a potential receiver.

Dynamic Binding

A crucial difference between function calls and messages is that a function and its arguments are joined together in the compiled code, but a message and a receiving object aren’t united until the program is running and the message is sent. Therefore, the exact method that’s invoked to respond to a message can only be determined at runtime, not when the code is compiled.

The precise method that a message invokes depends on the receiver. Different receivers may have different method implementations for the same method name (polymorphism). For the compiler to find the right method implementation for a message, it would have to know what kind of object the receiver is—what class it belongs to. This is information the receiver is able to reveal at runtime when it receives a message (dynamic typing), but it’s not available from the type declarations found in source code.

The selection of a method implementation happens at runtime. When a message is sent, a runtime messaging routine looks at the receiver and at the method named in the message. It locates the receiver’s implementation of a method matching the name, “calls” the method, and passes it a pointer to the receiver’s instance variables. (For more on this routine, see Messaging in the Objective-C Runtime Programming Guide.)

This dynamic binding of methods to messages works hand-in-hand with polymorphism to give object-oriented programming much of its flexibility and power. Since each object can have its own version of a method, a program can achieve a variety of results, not by varying the message itself, but by varying just the object that receives the message. Receivers can be decided “on the fly” and can be made dependent on external factors such as user actions.
When executing code based upon the Application Kit, users determine which objects receive messages from menu commands like Cut, Copy, and Paste. The message goes to whatever object controls the current selection. An object that displays text would react to a copy message differently from an object that displays scanned images. An object that represents a set of shapes would respond differently from a Rectangle. Since messages don't select methods (methods aren't bound to messages) until runtime, these differences are isolated in the methods that respond to the message. The code that sends the message doesn't have to be concerned with them; it doesn't even have to enumerate the possibilities. Each application can invent its own objects that respond in their own way to copy messages.

Objective-C takes dynamic binding one step further and allows even the message that's sent (the method selector) to be a variable that's determined at runtime. This is discussed in the section Messaging in the Objective-C Runtime Programming Guide.

Dynamic Method Resolution

You can provide implementations of class and instance methods at runtime using dynamic method resolution. See Dynamic Method Resolution in the Objective-C Runtime Programming Guide for more details.

Dot Syntax

Objective-C provides a dot (.) operator that offers a compact and convenient syntax you can use as an alternative to square bracket notation ([]s) to invoke accessor methods. It is particularly useful when you want to access or modify a property that is a property of another object (that is a property of another object, and so on).

Using the Dot Syntax

Overview

You can use the dot syntax to invoke accessor methods using the same pattern as accessing structure elements, as illustrated in the following example:

    myInstance.value = 10;
    printf("myInstance value: %d", myInstance.value);

The dot syntax is purely "syntactic sugar"—it is transformed by the compiler into invocation of accessor methods (so you are not actually accessing an instance variable directly). The code example above is exactly equivalent to the following:

    [myInstance setValue:10];
    printf("myInstance value: %d", [myInstance value]);

General Use

You can read and write properties using the dot (.) operator, as illustrated in the following example:

Listing 1-1 Accessing properties using the dot syntax

    Graphic *graphic = [[Graphic alloc] init];
    NSColor *color = graphic.color;
    CGFloat xLoc = graphic.xLoc;
    BOOL hidden = graphic.hidden;
    int textCharacterLength = graphic.text.length;
    if (graphic.textHidden != YES) {
        graphic.text = @"Hello";
    }
    graphic.bounds = NSMakeRect(10.0, 10.0, 20.0, 120.0);

(@"Hello" is a constant NSString object—see "Compiler Directives" (page 118).)

Accessing a property calls the get method associated with the property (by default, property) and setting it calls the set method associated with the property (by default, setProperty:). You can change the methods that are invoked by using the Declared Properties feature (see "Declared Properties" (page 67)). Despite appearances to the contrary, the dot syntax therefore preserves encapsulation—you are not accessing an instance variable directly.

The following statements compile to exactly the same code as the statements shown in Listing 1-1 (page 19), but use square bracket syntax:

Listing 1-2 Accessing properties using bracket syntax

    Graphic *graphic = [[Graphic alloc] init];
    NSColor *color = [graphic color];
    CGFloat xLoc = [graphic xLoc];
    BOOL hidden = [graphic hidden];
    int textCharacterLength = [[graphic text] length];
    if ([graphic isTextHidden] != YES) {
        [graphic setText:@"Hello"];
    }
    [graphic setBounds:NSMakeRect(10.0, 10.0, 20.0, 120.0)];

An advantage of the dot syntax is that the compiler can signal an error when it detects a write to a read-only property, whereas at best it can only generate an undeclared method warning that you invoked a non-existent setProperty: method, which will fail at runtime.

For properties of the appropriate C language type, the meaning of compound assignments is well-defined. For example:

    data.length += 1024;
    data.length *= 2;
    data.length /= 4;

which is equivalent to:

    [data setLength:[data length] + 1024];
    [data setLength:[data length] * 2];
    [data setLength:[data length] / 4];

There is one case where properties cannot be used. Consider the following code fragment:

    id y;
    x = y.z; // z is an undeclared property

Note that y is untyped and the z property is undeclared. There are several ways in which this could be interpreted. Since this is ambiguous, the statement is treated as an undeclared property error. If z is declared, then it is not ambiguous if there's only one declaration of a z property in the current compilation unit. If there are multiple declarations of a z property, as long as they all have the same type (such as BOOL) then it is legal. One source of ambiguity would also arise from one of them being declared readonly.

nil Values

If a nil value is encountered during property traversal, the result is the same as sending the equivalent message to nil. For example, the following pairs are all equivalent:

    // each member of the path is an object
    x = person.address.street.name;
    x = [[[person address] street] name];

    // an example of using a setter
    person.address.street.name = @"Oxford Road";
    [[[person address] street] setName: @"Oxford Road"];

    // the path contains a C struct
    // will crash if window is nil or -contentView returns nil
    y = window.contentView.bounds.origin.y;
    y = [[window contentView] bounds].origin.y;

self

If you want to access a property of self using accessor methods, you must explicitly call out self as illustrated in this example:

    self.age = 10;

If you do not use self., you access the instance variable directly. In the following example, the set accessor method for the age property is not invoked:

    age = 10;

Performance and Threading

The dot syntax generates code equivalent to the standard method invocation syntax. As a result, code using the dot syntax performs exactly the same as code written directly using the accessor methods. Since the dot syntax simply invokes methods, no additional thread dependencies are introduced as a result of its use.

Usage Summary

    aVariable = anObject.aProperty;

Invokes the aProperty method and assigns the return value to aVariable. The type of the property aProperty and the type of aVariable must be compatible, otherwise you get a compiler warning.

    anObject.name = @"New Name";

Invokes the setName: method on anObject, passing @"New Name" as the argument. You get a compiler warning if setName: does not exist, if the property name does not exist, or if setName: returns anything but void.

    xOrigin = aView.bounds.origin.x;

Invokes the bounds method and assigns xOrigin to be the value of the origin.x structure element of the NSRect returned by bounds.

    NSInteger i = 10;
    anObject.integerProperty = anotherObject.floatProperty = ++i;

Assigns 11 to both anObject.integerProperty and anotherObject.floatProperty. That is, the right hand side of the assignment is pre-evaluated and the result is passed to setIntegerProperty: and setFloatProperty:. The pre-evaluated result is coerced as required at each point of assignment.

    flag = aView.lockFocusIfCanDraw;

Invokes lockFocusIfCanDraw and assigns the return value to flag. This does not generate a compiler warning unless flag's type mismatches the method's return type.

Incorrect Use

The following patterns are strongly discouraged.

    anObject.retain;

Generates a compiler warning (warning: value returned from property not used.).

    /* method declaration */
    - (BOOL) setFooIfYouCan: (MyClass *)newFoo;

    /* code fragment */
    anObject.fooIfYouCan = myInstance;

Generates a compiler warning that setFooIfYouCan: does not appear to be a setter method because it does not return (void).

    /* property declaration */
    @property(readonly) NSInteger readonlyProperty;

    /* method declaration */
    - (void) setReadonlyProperty: (NSInteger)newValue;

    /* code fragment */
    self.readonlyProperty = 5;

Since the property is declared readonly, this code generates a compiler warning (warning: assignment to readonly property 'readonlyProperty'). Because the setter method is present, it will work at runtime, but simply adding a setter for a property does not imply readwrite.
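To tie the dot syntax back to declared properties, here is a minimal, hypothetical class (the Person class and its property names are invented for this sketch) whose synthesized accessors are exactly what the dot expressions compile into:

```objectivec
#import <Foundation/Foundation.h>

@interface Person : NSObject {
    NSString *name;   // backing instance variable
    NSInteger age;
}
@property(copy) NSString *name;   // dot access maps to -name / -setName:
@property NSInteger age;          // and to -age / -setAge:
@end

@implementation Person
@synthesize name;
@synthesize age;
- (void)dealloc { [name release]; [super dealloc]; }
@end

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    Person *person = [[Person alloc] init];
    person.name = @"Alice";     // compiles to [person setName:@"Alice"]
    person.age = 30;            // compiles to [person setAge:30]
    NSLog(@"%@ is %ld", person.name, (long)person.age);

    [person release];
    [pool drain];
    return 0;
}
```

Because the dot expressions go through the synthesized setName: and setAge: methods, encapsulation is preserved even though the syntax resembles direct structure access.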
Classes

An object-oriented program is typically built from a variety of objects. A program based on the Cocoa frameworks might use NSMatrix objects, NSWindow objects, NSDictionary objects, NSFont objects, NSText objects, and many others. Programs often use more than one object of the same kind or class—several NSArray objects or NSWindow objects, for example.

In Objective-C, you define objects by defining their class. The class definition is a prototype for a kind of object; it declares the instance variables that become part of every member of the class, and it defines a set of methods that all objects in the class can use.

The compiler creates just one accessible object for each class, a class object that knows how to build new objects belonging to the class. (For this reason it's traditionally called a "factory object.") The class object is the compiled version of the class; the objects it builds are instances of the class. The objects that do the main work of your program are instances created by the class object at runtime.

All instances of a class have the same set of methods, and they all have a set of instance variables cut from the same mold. Each object gets its own instance variables, but the methods are shared.

By convention, class names begin with an uppercase letter (such as "Rectangle"); the names of instances typically begin with a lowercase letter (such as "myRectangle").

Inheritance

Class definitions are additive; each new class that you define is based on another class from which it inherits methods and instance variables. The new class simply adds to or modifies what it inherits. It doesn't need to duplicate inherited code.

Inheritance links all classes together in a hierarchical tree with a single class at its root. When writing code that is based upon the Foundation framework, that root class is typically NSObject. Every class (except a root class) has a superclass one step nearer the root, and any class (including a root class) can be the superclass for any number of subclasses one step farther from the root. Figure 1-1 illustrates the hierarchy for a few of the classes used in a drawing program.

Figure 1-1 Some Drawing Program Classes

    NSObject
        Graphic
            Image
            Text
            Shape
                Line
                Rectangle
                    Square
                Circle
This figure shows that the Square class is a subclass of the Rectangle class, the Rectangle class is a subclass of Shape, Shape is a subclass of Graphic, and Graphic is a subclass of NSObject. Inheritance is cumulative. So a Square object has the methods and instance variables defined for Rectangle, Shape, Graphic, and NSObject, as well as those defined specifically for Square. This is simply to say that a Square object isn't only a Square, it's also a Rectangle, a Shape, a Graphic, and an NSObject.

Every class but NSObject can thus be seen as a specialization or an adaptation of another class. Each successive subclass further modifies the cumulative total of what's inherited. The Square class defines only the minimum needed to turn a Rectangle into a Square.

When you define a class, you link it to the hierarchy by declaring its superclass; every class you create must be the subclass of another class (unless you define a new root class). Plenty of potential superclasses are available. Cocoa includes the NSObject class and several frameworks containing definitions for more than 250 additional classes. Some are classes that you can use "off the shelf"—incorporate into your program as is. Others you might want to adapt to your own needs by defining a subclass.

Some framework classes define almost everything you need, but leave some specifics to be implemented in a subclass. You can thus create very sophisticated objects by writing only a small amount of code, and reusing work done by the programmers of the framework.

The NSObject Class

NSObject is a root class, and so doesn't have a superclass. It defines the basic framework for Objective-C objects and object interactions. It imparts to the classes and instances of classes that inherit from it the ability to behave as objects and cooperate with the runtime system.

A class that doesn't need to inherit any special behavior from another class should nevertheless be made a subclass of the NSObject class. Instances of the class must at least have the ability to behave like Objective-C objects at runtime. Inheriting this ability from the NSObject class is much simpler and much more reliable than reinventing it in a new class definition.

Note: Implementing a new root class is a delicate task and one with many hidden hazards. The class must duplicate much of what the NSObject class does, such as allocate instances, connect them to their class, and identify them to the runtime system. For this reason, you should generally use the NSObject class provided with Cocoa as the root class. For more information, see the Foundation framework documentation for the NSObject class and the NSObject protocol.

Inheriting Instance Variables

When a class object creates a new instance, the new object contains not only the instance variables that were defined for its class but also the instance variables defined for its superclass and for its superclass's superclass, all the way back to the root class. Thus, the isa instance variable defined in the NSObject class becomes part of every object; isa connects each object to its class.

Figure 1-2 shows some of the instance variables that could be defined for a particular implementation of Rectangle, and where they may come from. Note that the variables that make the object a Rectangle are added to the ones that make it a Shape, and the ones that make it a Shape are added to the ones that make it a Graphic, and so on.
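As a concrete illustration of linking a class into the hierarchy by declaring its superclass, here is a minimal, hypothetical interface and implementation pair (the classes and method names are invented for this sketch, not taken from a framework):

```objectivec
#import <Foundation/Foundation.h>

// Hypothetical superclass for this sketch.
@interface Shape : NSObject
- (float)area;
@end

@implementation Shape
- (float)area { return 0.0f; }   // a generic Shape has no area of its own
@end

// Rectangle declares Shape as its superclass, so it inherits everything
// Shape (and NSObject) defines, and adds its own instance variables.
@interface Rectangle : Shape {
    float width;    // instance variables added by Rectangle
    float height;
}
- (void)setWidth:(float)w height:(float)h;
- (float)area;
@end

@implementation Rectangle
- (void)setWidth:(float)w height:(float)h { width = w; height = h; }
- (float)area { return width * height; }
@end
```

A Rectangle instance created from this definition carries the instance variables of NSObject (isa), Shape, and Rectangle together, exactly as described above.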
Figure 1-2 Rectangle Instance Variables

    Class    isa;              // declared in NSObject
    NSPoint  origin;           // declared in Graphic
    NSColor  *primaryColor;    // declared in Graphic
    Pattern  linePattern;      // declared in Graphic
    float    width;            // declared in Shape
    float    height;           // declared in Shape
    BOOL     filled;           // declared in Rectangle
    NSColor  *fillColor;       // declared in Rectangle

A class doesn't have to declare instance variables. It can simply define new methods and rely on the instance variables it inherits, if it needs any instance variables at all. For example, Square might not declare any new instance variables of its own.

Inheriting Methods

An object has access not only to the methods defined for its class, but also to methods defined for its superclass, and for its superclass's superclass, all the way back to the root of the hierarchy. For instance, a Square object can use methods defined in the Rectangle, Shape, Graphic, and NSObject classes as well as methods defined in its own class.

Any new class you define in your program can therefore make use of the code written for all the classes above it in the hierarchy. This type of inheritance is a major benefit of object-oriented programming. When you use one of the object-oriented frameworks provided by Cocoa, your programs can take advantage of the basic functionality coded into the framework classes. You have to add only the code that customizes the standard functionality to your application.

Class objects also inherit from the classes above them in the hierarchy. But because they don't have instance variables (only instances do), they inherit only methods.

Overriding One Method With Another

There's one useful exception to inheritance: When you define a new class, you can implement a new method with the same name as one defined in a class farther up the hierarchy. The new method overrides the original; instances of the new class perform it rather than the original, and subclasses of the new class inherit it rather than the original.

For example, Graphic defines a display method that Rectangle overrides by defining its own version of display. The Graphic method is available to all kinds of objects that inherit from the Graphic class—but not to Rectangle objects, which instead perform the Rectangle version of display.

Although overriding a method blocks the original version from being inherited, other methods defined in the new class can skip over the redefined method and find the original (see "Messages to self and super" (page 43) to learn how).

A redefined method can also incorporate the very method it overrides. When it does, the new method serves only to refine or modify the method it overrides, rather than replace it outright. When several classes in the hierarchy define the same method, but each new version incorporates the version it overrides, the implementation of the method is effectively spread over all the classes.
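The refine-rather-than-replace pattern described above can be sketched as follows; the Graphic and Rectangle classes here are hypothetical stand-ins, and the message to super is covered in detail in "Messages to self and super" (page 43):

```objectivec
#import <Foundation/Foundation.h>

@interface Graphic : NSObject
- (void)display;
@end

@implementation Graphic
- (void)display { NSLog(@"Setting up the drawing environment"); }
@end

@interface Rectangle : Graphic
- (void)display;
@end

@implementation Rectangle
- (void)display
{
    // Incorporate the overridden Graphic version rather than replace it:
    [super display];
    NSLog(@"Drawing the rectangle itself");
}
@end
```

Sending display to a Rectangle runs the Graphic setup first and then the Rectangle-specific work, so the implementation of display is effectively spread over both classes.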
Although a subclass can override inherited methods, it can't override inherited instance variables. Since an object has memory allocated for every instance variable it inherits, you can't override an inherited variable by declaring a new one with the same name. If you try, the compiler will complain.

Abstract Classes

Some classes are designed only or primarily so that other classes can inherit from them. These abstract classes group methods and instance variables that can be used by a number of different subclasses into a common definition. The abstract class is typically incomplete by itself, but contains useful code that reduces the implementation burden of its subclasses. (Because abstract classes must have subclasses to be useful, they're sometimes also called abstract superclasses.)

Unlike some other languages, Objective-C does not have syntax to mark classes as abstract, nor does it prevent you from creating an instance of an abstract class.

The NSObject class is the canonical example of an abstract class in Cocoa. You never use instances of the NSObject class in an application—it would be a generic object with the ability to do nothing in particular. The NSView class, on the other hand, provides an example of an abstract class instances of which you might occasionally use directly.

Abstract classes often contain code that helps define the structure of an application. When you create subclasses of these classes, instances of your new classes fit effortlessly into the application structure and work automatically with other objects.

Class Types

A class definition is a specification for a kind of object. The class, in effect, defines a data type. The type is based not just on the data structure the class defines (instance variables), but also on the behavior included in the definition (methods).

A class name can appear in source code wherever a type specifier is permitted in C—for example, as an argument to the sizeof operator:

    int i = sizeof(Rectangle);

Static Typing

You can use a class name in place of id to designate an object's type:

    Rectangle *myRectangle;

Because this way of declaring an object type gives the compiler information about the kind of object it is, it's known as static typing. Just as id is actually a pointer, objects are statically typed as pointers to a class. Objects are always typed by a pointer. Static typing makes the pointer explicit; id hides it.

Static typing permits the compiler to do some type checking—for example, to warn if an object could receive a message that it appears not to be able to respond to—and to loosen some restrictions that apply to objects generically typed id. In addition, it can make your intentions clearer to others who read your source code. However, it doesn't defeat dynamic binding or alter the dynamic determination of a receiver's class at runtime.

An object can be statically typed to its own class or to any class that it inherits from. For example, since inheritance makes a Rectangle a kind of Graphic, a Rectangle instance could be statically typed to the Graphic class:

    Graphic *myRectangle;

This is possible because a Rectangle is a Graphic. It's more than a Graphic since it also has the instance variables and method capabilities of a Shape and a Rectangle, but it's a Graphic nonetheless. For purposes of type checking, the compiler considers myRectangle to be a Graphic, but at runtime it's treated as a Rectangle.

See "Enabling Static Behavior" (page 91) for more on static typing and its benefits.

Type Introspection

Instances can reveal their types at runtime. The isMemberOfClass: method, defined in the NSObject class, checks whether the receiver is an instance of a particular class:

    if ( [anObject isMemberOfClass:someClass] )
        ...

The isKindOfClass: method, also defined in the NSObject class, checks more generally whether the receiver inherits from or is a member of a particular class (whether it has the class in its inheritance path):

    if ( [anObject isKindOfClass:someClass] )
        ...

The set of classes for which isKindOfClass: returns YES is the same set to which the receiver can be statically typed.

Introspection isn't limited to type information. Later sections of this chapter discuss methods that return the class object, report whether an object can respond to a message, and reveal other information. See the NSObject class specification in the Foundation framework reference for more on isKindOfClass:, isMemberOfClass:, and related methods.

Class Objects

A class definition contains various kinds of information. The compiler creates just one object, a class object, to represent the class. The class object has access to all the information about the class, which means mainly information about what instances of the class are like. It's able to produce new instances according to the plan put forward in the class definition.
Although a class object keeps the prototype of a class instance, it's not an instance itself. It has no instance variables of its own and it can't perform methods intended for instances of the class. However, a class definition can include methods intended specifically for the class object—class methods as opposed to instance methods. A class object inherits class methods from the classes above it in the hierarchy, just as instances inherit instance methods.

In source code, the class object is represented by the class name. In the following example, the Rectangle class returns the class version number using a method inherited from the NSObject class:

    int versionNumber = [Rectangle version];

However, the class name stands for the class object only as the receiver in a message expression. Elsewhere, you need to ask an instance or the class to return the class id. Both respond to a class message:

    id aClass = [anObject class];
    id rectClass = [Rectangle class];

As these examples show, class objects can, like all other objects, be typed id. But class objects can also be more specifically typed to the Class data type:

    Class aClass = [anObject class];
    Class rectClass = [Rectangle class];

All class objects are of type Class. Using this type name for a class is equivalent to using the class name to statically type an instance.

Class objects are thus full-fledged objects that can be dynamically typed, receive messages, and inherit methods from other classes. They're special only in that they're created by the compiler, lack data structures (instance variables) of their own other than those built from the class definition, and are the agents for producing instances at runtime.

Note: The compiler also builds a "metaclass object" for each class. It describes the class object just as the class object describes instances of the class. But while you can send messages to instances and to the class object, the metaclass object is used only internally by the runtime system.

Creating Instances

A principal function of a class object is to create new instances. This code tells the Rectangle class to create a new Rectangle instance and assign it to the myRectangle variable:

    id myRectangle;
    myRectangle = [Rectangle alloc];

The alloc method dynamically allocates memory for the new object's instance variables and initializes them all to 0—all, that is, except the isa variable that connects the new instance to its class. For an object to be useful, it generally needs to be more completely initialized. That's the function of an init method. Initialization typically follows immediately after allocation:

    myRectangle = [[Rectangle alloc] init];

This line of code, or one like it, would be necessary before myRectangle could receive any of the messages that were illustrated in previous examples in this chapter. The alloc method returns a new instance and that instance performs an init method to set its initial state. Every class object has at least one method (like alloc) that enables it to produce new objects, and every instance has at least one method (like init) that prepares it for use.
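The alloc/init pattern extends naturally to initializers that take arguments. The following hypothetical initializer (initWithWidth:height: is an invented name for this sketch) shows the common shape of such a method: invoke the superclass initializer, set up this class's own state, and return self.

```objectivec
#import <Foundation/Foundation.h>

@interface Rectangle : NSObject {
    float width;
    float height;
}
- (id)initWithWidth:(float)w height:(float)h;  // hypothetical initializer
@end

@implementation Rectangle

- (id)initWithWidth:(float)w height:(float)h
{
    self = [super init];     // let NSObject do its part first
    if (self) {
        width = w;           // then set this class's own state
        height = h;
    }
    return self;
}

@end

// Typical use—allocation and initialization nested in one expression:
// Rectangle *myRectangle = [[Rectangle alloc] initWithWidth:20.0 height:10.0];
```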
Initialization methods often take arguments to allow particular values to be passed and have keywords to label the arguments (initWithPosition:size:, for example, is a method that might initialize a new Rectangle instance), but they all begin with "init".

Customization With Class Objects

It's not just a whim of the Objective-C language that classes are treated as objects. It's a choice that has intended, and sometimes surprising, benefits for design. It's possible, for example, to customize an object with a class, where the class belongs to an open-ended set. In the Application Kit, for example, an NSMatrix object can be customized with a particular kind of NSCell object.

An NSMatrix object can take responsibility for creating the individual objects that represent its cells. It can do this when the matrix is first initialized and later when new cells are needed. The visible matrix that an NSMatrix object draws on the screen can grow and shrink at runtime, perhaps in response to user actions. When it grows, the matrix needs to be able to produce new objects to fill the new slots that are added.

But what kind of objects should they be? Each matrix displays just one kind of NSCell, but there are many different kinds. The inheritance hierarchy in Figure 1-3 shows some of those provided by the Application Kit. Should they be NSButtonCell objects to display a bank of buttons or switches, NSTextFieldCell objects to display fields where the user can enter and edit text, or some other kind of NSCell? The NSMatrix object must allow for any kind of cell, even types that haven't been invented yet.

One solution to this problem is to define the NSMatrix class as an abstract class and require everyone who uses it to declare a subclass and implement the methods that produce new cells. Because they would be implementing the methods, users of the class could be sure that the objects they created were of the right type. But this requires others to do work that ought to be done in the NSMatrix class, and it unnecessarily proliferates the number of classes. Since an application might need more than one kind of NSMatrix, each with a different kind of NSCell, it could become cluttered with NSMatrix subclasses. Every time you invented a new kind of NSCell, you'd also have to define a new kind of NSMatrix. Moreover, programmers on different projects would be writing virtually identical code to do the same job, all to make up for NSMatrix's failure to do it.

A better solution, and the solution the NSMatrix class actually adopts, is to allow NSMatrix instances to be initialized with a kind of NSCell—with a class object. It defines a setCellClass: method that passes the class object for the kind of NSCell object an NSMatrix should use to fill empty slots:

    [myMatrix setCellClass:[NSButtonCell class]];

The NSMatrix object uses the class object to produce new cells when it's first initialized and whenever it's resized to contain more cells. This kind of customization would be difficult if classes weren't objects that could be passed in messages and assigned to variables.

Variables and Class Objects

When you define a new class, you can specify instance variables. Every instance of the class can maintain its own copy of the variables you declare—each object controls its own data. There is, however, no "class variable" counterpart to an instance variable. Only internal data structures, initialized from the class definition, are provided for the class. Moreover, a class object has no access to the instance variables of any instances; it can't initialize, read, or alter them.

For all the instances of a class to share data, you must define an external variable of some sort. The simplest way to do this is to declare a variable in the class implementation file as illustrated in the following code fragment.

    int MCLSGlobalVariable;

    @implementation MyClass
    // implementation continues

In a more sophisticated implementation, you can declare a variable to be static, and provide class methods to manage it. Declaring a variable static limits its scope to just the class—and to just the part of the class that's implemented in the file. (Thus unlike instance variables, static variables cannot be inherited by, or directly manipulated by, subclasses.) This pattern is commonly used to define shared instances of a class (such as singletons; see "Creating a Singleton Instance" in Cocoa Fundamentals Guide).

    static MyClass *MCLSSharedInstance;

    @implementation MyClass

    + (MyClass *)sharedInstance
    {
        // check for existence of shared instance
        // create if necessary
        return MCLSSharedInstance;
    }
    // implementation continues

Static variables help give the class object more functionality than just that of a "factory" producing instances; it can approach being a complete and versatile object in its own right. A class object can be used to coordinate the instances it creates, dispense instances from lists of objects already created, or manage other processes essential to the application. In the case when you need only one object of a particular class, you can put all the object's state into static variables and use only class methods. This saves the step of allocating and initializing an instance.
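One plausible way to fill in the sharedInstance skeleton shown above—a minimal, non-thread-safe sketch in the manual retain/release style of this document; production code would follow "Creating a Singleton Instance" in Cocoa Fundamentals Guide:

```objectivec
#import <Foundation/Foundation.h>

@interface MyClass : NSObject
+ (MyClass *)sharedInstance;
@end

// Static variable: visible only to the part of the class in this file,
// and not inherited by subclasses.
static MyClass *MCLSSharedInstance = nil;

@implementation MyClass

+ (MyClass *)sharedInstance
{
    // Check for existence of the shared instance; create it if necessary.
    // Note: not thread-safe—adequate only if first use is on one thread.
    if (MCLSSharedInstance == nil) {
        MCLSSharedInstance = [[MyClass alloc] init];
    }
    return MCLSSharedInstance;
}

@end
```

Every caller of [MyClass sharedInstance] then receives the same object, with the class object acting as more than a simple instance factory.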
Note: It is also possible to use external variables that are not declared static, but the limited scope of static variables better serves the purpose of encapsulating data into separate objects.

Initializing a Class Object

If you want to use a class object for anything besides allocating instances, you may need to initialize it just as you would an instance. Although programs don’t allocate class objects, Objective-C does provide a way for programs to initialize them.

The runtime system sends an initialize message to every class object before the class receives any other messages and after its superclass has received the initialize message. This gives the class a chance to set up its runtime environment before it’s used. If no initialization is required, you don’t need to write an initialize method to respond to the message.

If a class makes use of static or global variables, the initialize method is a good place to set their initial values. For example, if a class maintains an array of instances, the initialize method could set up the array and even allocate one or two default instances to have them ready.

Because of inheritance, an initialize message sent to a class that doesn’t implement the initialize method is forwarded to the superclass, even though the superclass has already received the initialize message. For example, assume class A implements the initialize method, and class B inherits from class A but does not implement the initialize method. Just before class B is to receive its first message, the runtime system sends initialize to it. But, because class B doesn’t implement initialize, class A’s initialize is executed instead. Therefore, class A should ensure that its initialization logic is performed only once, and for the appropriate class.

To avoid performing initialization logic more than once, use the template in Listing 1-3 when implementing the initialize method.

Listing 1-3  Implementation of the initialize method

    + (void)initialize
    {
        if (self == [ThisClass class]) {
            // Perform initialization here.
            ...
        }
    }

Note: Remember that the runtime system sends initialize to each class individually. Therefore, in a class’s implementation of the initialize method, you must not send the initialize message to its superclass.

Methods of the Root Class

All objects, classes and instances alike, need an interface to the runtime system. Both class objects and instances should be able to introspect about their abilities and to report their place in the inheritance hierarchy. It’s the province of the NSObject class to provide this interface.
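As a concrete sketch of the guarded template above, a class might use initialize to set up static storage. The class name SomeClass and the variable instanceRegistry are hypothetical, not from the Foundation framework:

```objc
// Shared by all instances; set up exactly once, in +initialize.
static NSMutableArray *instanceRegistry = nil;

@implementation SomeClass

+ (void)initialize
{
    // Guard: perform this logic only for SomeClass itself, not when a
    // subclass that doesn't implement +initialize forwards the message here.
    if (self == [SomeClass class]) {
        instanceRegistry = [[NSMutableArray alloc] init];
    }
}

@end
```

Without the guard, a subclass that inherits this method would cause the registry to be replaced a second time when the subclass receives its own initialize message.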
When a class object receives a message that it can’t respond to with a class method, the runtime system determines whether there’s a root instance method that can respond. The only instance methods that a class object can perform are those defined in the root class, and only if there’s no class method that can do the job. For more on this peculiar ability of class objects to perform root instance methods, see the NSObject class specification in the Foundation framework reference.

Class Names in Source Code

In source code, class names can be used in only two very different contexts. These contexts reflect the dual role of a class as a data type and as an object:

■ The class name can be used as a type name for a kind of object. For example:

    Rectangle *anObject;

Here anObject is statically typed to be a pointer to a Rectangle. The compiler expects it to have the data structure of a Rectangle instance and the instance methods defined and inherited by the Rectangle class. Static typing enables the compiler to do better type checking and makes source code more self-documenting. See “Enabling Static Behavior” (page 91) for details.

Only instances can be statically typed; class objects can’t be, since they aren’t members of a class, but rather belong to the Class data type.

■ As the receiver in a message expression, the class name refers to the class object. This usage was illustrated in several of the earlier examples. The class name can stand for the class object only as a message receiver. In any other context, you must ask the class object to reveal its id (by sending it a class message). The example below passes the Rectangle class as an argument in an isKindOfClass: message:

    if ( [anObject isKindOfClass:[Rectangle class]] )
        ...

It would have been illegal to simply use the name “Rectangle” as the argument. The class name can only be a receiver.

If you don’t know the class name at compile time but have it as a string at runtime, you can use NSClassFromString to return the class object:

    NSString *className;
    ...
    if ( [anObject isKindOfClass:NSClassFromString(className)] )
        ...

This function returns nil if the string it’s passed is not a valid class name.

Classnames exist in the same namespace as global variables and function names. A class and a global variable can’t have the same name. Classnames are about the only names with global visibility in Objective-C.
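Because NSClassFromString returns a class object, its result can be used anywhere a class receiver can. The following sketch instantiates a class chosen by name at runtime; the name @"Rectangle" stands in for a string that might really come from a file or preference:

```objc
NSString *className = @"Rectangle";         // hypothetical: might be read at runtime
Class theClass = NSClassFromString(className);
if (theClass != Nil) {
    id anObject = [[theClass alloc] init];  // works for any class with that name
    // ... use anObject ...
    [anObject release];
} else {
    NSLog(@"No class named %@ is linked into this program", className);
}
```

Note that the nil check matters: sending alloc to Nil would silently produce nil rather than an object.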
Testing Class Equality

You can test two class objects for equality using a direct pointer comparison. It is important, though, to get the correct class. There are several features in the Cocoa frameworks that dynamically and transparently subclass existing classes to extend their functionality (for example, key-value observing and Core Data—see Key-Value Observing Programming Guide and Core Data Programming Guide respectively). When this happens, the class method is typically overridden such that the dynamic subclass masquerades as the class it replaces. When testing for class equality, you should therefore compare the values returned by the class method rather than those returned by lower-level functions. Put in terms of API:

    [object class] != object_getClass(object) != *((Class*)object)

You should therefore test two classes for equality as follows:

    if ([objectA class] == [objectB class]) { //...
CHAPTER 2

Defining a Class

Much of object-oriented programming consists of writing the code for new objects—defining new classes. In Objective-C, classes are defined in two parts:

■ An interface that declares the methods and instance variables of the class and names its superclass
■ An implementation that actually defines the class (contains the code that implements its methods)

These are typically split between two files; sometimes, however, a class definition may span several files through the use of a feature called a “category.” Categories can compartmentalize a class definition or extend an existing one. Categories are described in “Categories and Extensions” (page 79).

Source Files

Although the compiler doesn’t require it, the interface and implementation are usually separated into two different files. The interface file must be made available to anyone who uses the class.

A single file can declare or implement more than one class. Nevertheless, it’s customary to have a separate interface file for each class, if not also a separate implementation file. Keeping class interfaces separate better reflects their status as independent entities.

Interface and implementation files typically are named after the class. The name of the implementation file has the .m extension, indicating that it contains Objective-C source code. Because it’s included in other source files, the name of the interface file usually has the .h extension typical of header files. (The interface file can be assigned any other extension.) For example, the Rectangle class would be declared in Rectangle.h and defined in Rectangle.m.

Separating an object’s interface from its implementation fits well with the design of object-oriented programs. An object is a self-contained entity that can be viewed from the outside almost as a “black box.” Once you’ve determined how an object interacts with other elements in your program—that is, once you’ve declared its interface—you can freely alter its implementation without affecting any other part of the application.

Class Interface

The declaration of a class interface begins with the compiler directive @interface and ends with the directive @end. (All Objective-C directives to the compiler begin with “@”.)

    @interface ClassName : ItsSuperclass
    {
        instance variable declarations
    }
    method declarations
    @end
The first line of the declaration presents the new class name and links it to its superclass. The superclass defines the position of the new class in the inheritance hierarchy, as discussed under “Inheritance” (page 23). If the colon and superclass name are omitted, the new class is declared as a root class, a rival to the NSObject class.

Following the first part of the class declaration, braces enclose declarations of instance variables, the data structures that are part of each instance of the class. Here’s a partial list of instance variables that might be declared in the Rectangle class:

    float width;
    float height;
    BOOL filled;
    NSColor *fillColor;

Methods for the class are declared next, after the braces enclosing instance variables and before the end of the class declaration. The names of methods that can be used by class objects, class methods, are preceded by a plus sign:

    + alloc;

The methods that instances of a class can use, instance methods, are marked with a minus sign:

    - (void)display;

Although it’s not a common practice, you can define a class method and an instance method with the same name. A method can also have the same name as an instance variable. This is more common, especially if the method returns the value in the variable. For example, Circle has a radius method that could match a radius instance variable.

Method return types are declared using the standard C syntax for casting one type to another:

    - (float)radius;

Argument types are declared in the same way:

    - (void)setRadius:(float)aRadius;

If a return or argument type isn’t explicitly declared, it’s assumed to be the default type for methods and messages—an id. The alloc method illustrated earlier returns id.

When there’s more than one argument, the arguments are declared within the method name after the colons. Arguments break the name apart in the declaration, just as in a message. For example:

    - (void)setWidth:(float)width height:(float)height;

Methods that take a variable number of arguments declare them using a comma and ellipsis points, just as a function would:

    - makeGroup:group, ...;
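Putting these pieces together, a complete interface might look like the following sketch. The superclass name Shape and the class method rectangle are hypothetical, chosen only to show each kind of declaration in one place:

```objc
@interface Rectangle : Shape      // Shape is an assumed superclass
{
    float width;
    float height;
    BOOL filled;
    NSColor *fillColor;
}
+ (id)rectangle;                                     // class method (hypothetical)
- (float)width;                                      // same name as an instance variable
- (void)setWidth:(float)width height:(float)height;  // two labeled arguments
- (void)setFilled:(BOOL)flag;
- makeGroup:group, ...;                              // variable number of arguments
@end
```

Note that the untyped declarations (+ (id)rectangle could equally be written + rectangle) default to id, as described above.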
Importing the Interface

The interface file must be included in any source module that depends on the class interface—that includes any module that creates an instance of the class, sends a message to invoke a method declared for the class, or mentions an instance variable declared in the class. The interface is usually included with the #import directive:

    #import "Rectangle.h"

This directive is identical to #include, except that it makes sure that the same file is never included more than once. It’s therefore preferred and is used in place of #include in code examples throughout Objective-C–based documentation.

To reflect the fact that a class definition builds on the definitions of inherited classes, an interface file begins by importing the interface for its superclass:

    #import "ItsSuperclass.h"

    @interface ClassName : ItsSuperclass
    {
        instance variable declarations
    }
    method declarations
    @end

This convention means that every interface file includes, indirectly, the interface files for all inherited classes. When a source module imports a class interface, it gets interfaces for the entire inheritance hierarchy that the class is built upon.

Note that if there is a precomp—a precompiled header—that supports the superclass, you may prefer to import the precomp instead.

Referring to Other Classes

An interface file declares a class and, by importing its superclass, implicitly contains declarations for all inherited classes, from NSObject on down through its superclass. If the interface mentions classes not in this hierarchy, it must import them explicitly or declare them with the @class directive:

    @class Rectangle, Circle;

This directive simply informs the compiler that “Rectangle” and “Circle” are class names. It doesn’t import their interface files.

An interface file mentions class names when it statically types instance variables, return values, and arguments. For example, this declaration mentions the NSColor class:

    - (void)setPrimaryColor:(NSColor *)aColor;

Since declarations like this simply use the class name as a type and don’t depend on any details of the class interface (its methods and instance variables), the @class directive gives the compiler sufficient forewarning of what to expect. However, where the interface to a class is actually used (instances created, messages sent), the class interface must be imported.
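The split between @class in the header and #import in the implementation can be sketched as follows. The file and superclass names here are illustrative:

```objc
// Rectangle.h — forward-declare NSColor instead of importing its interface.
#import "Shape.h"      // the superclass interface must still be imported

@class NSColor;        // sufficient: NSColor is used below only as a type name

@interface Rectangle : Shape
{
    NSColor *fillColor;
}
- (void)setPrimaryColor:(NSColor *)aColor;
@end
```

The implementation file, Rectangle.m, would then import the real NSColor interface (for example via the framework umbrella header) before creating NSColor instances or sending them messages.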
Typically, an interface file uses @class to declare classes, and the corresponding implementation file imports their interfaces (since it will need to create instances of those classes or send them messages). The @class directive minimizes the amount of code seen by the compiler and linker, and is therefore the simplest way to give a forward declaration of a class name. Being simple, it avoids potential problems that may come with importing files that import still other files. For example, if one class declares a statically typed instance variable of another class, and their two interface files import each other, neither class may compile correctly.

The Role of the Interface

The purpose of the interface file is to declare the new class to other source modules (and to other programmers). It contains all the information they need to work with the class (programmers might also appreciate a little documentation).

■ The interface file tells users how the class is connected into the inheritance hierarchy and what other classes—inherited or simply referred to somewhere in the class—are needed.

■ The interface file also lets the compiler know what instance variables an object contains, and tells programmers what variables subclasses inherit. Although instance variables are most naturally viewed as a matter of the implementation of a class rather than its interface, they must nevertheless be declared in the interface file. This is because the compiler must be aware of the structure of an object where it’s used, not just where it’s defined. As a programmer, however, you can generally ignore the instance variables of the classes you use, except when defining a subclass.

■ Finally, through its list of method declarations, the interface file lets other modules know what messages can be sent to the class object and instances of the class. Every method that can be used outside the class definition is declared in the interface file; methods that are internal to the class implementation can be omitted.

Class Implementation

The definition of a class is structured very much like its declaration. It begins with the @implementation directive and ends with the @end directive:

    @implementation ClassName : ItsSuperclass
    {
        instance variable declarations
    }
    method definitions
    @end

However, every implementation file must import its own interface. For example, Rectangle.m imports Rectangle.h. Because the implementation doesn’t need to repeat any of the declarations it imports, it can safely omit:

■ The name of the superclass
■ The declarations of instance variables
This simplifies the implementation and makes it mainly devoted to method definitions:

    #import "ClassName.h"

    @implementation ClassName
    method definitions
    @end

Methods for a class are defined, like C functions, within a pair of braces. Before the braces, they’re declared in the same manner as in the interface file, but without the semicolon. For example:

    + (id)alloc
    {
        ...
    }

    - (BOOL)isFilled
    {
        ...
    }

    - (void)setFilled:(BOOL)flag
    {
        ...
    }

Methods that take a variable number of arguments handle them just as a function would:

    #import <stdarg.h>
    ...
    - getGroup:group, ...
    {
        va_list ap;
        va_start(ap, group);
        ...
    }

Referring to Instance Variables

By default, the definition of an instance method has all the instance variables of the object within its scope. It can refer to them simply by name. Although the compiler creates the equivalent of C structures to store instance variables, the exact nature of the structure is hidden. You don’t need either of the structure operators (. or ->) to refer to an object’s data. For example, the following method definition refers to the receiver’s filled instance variable:

    - (void)setFilled:(BOOL)flag
    {
        filled = flag;
        ...
    }

Neither the receiving object nor its filled instance variable is declared as an argument to this method, yet the instance variable falls within its scope. This simplification of method syntax is a significant shorthand in the writing of Objective-C code.
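A fuller sketch of a variadic method shows how the remaining arguments are consumed with va_arg. This example assumes a nil-terminated argument list and a hypothetical addMember: method; neither convention is dictated by the language itself:

```objc
#import <stdarg.h>

// Hypothetical: gather a nil-terminated list of objects into a group.
- getGroup:group, ...
{
    va_list ap;
    va_start(ap, group);       // begin reading after the last named argument
    id member;
    // Walk the variable arguments until the (assumed) nil sentinel.
    while ((member = va_arg(ap, id)) != nil) {
        [group addMember:member];   // addMember: is a hypothetical method
    }
    va_end(ap);
    return group;
}
```

A caller would then write something like [self getGroup:group, obj1, obj2, nil], supplying the sentinel explicitly.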
When the instance variable belongs to an object that’s not the receiver, the object’s type must be made explicit to the compiler through static typing. In referring to the instance variable of a statically typed object, the structure pointer operator (->) is used.

Suppose, for example, that the Sibling class declares a statically typed object, twin, as an instance variable:

    @interface Sibling : NSObject
    {
        Sibling *twin;
        int gender;
        struct features *appearance;
    }

As long as the instance variables of the statically typed object are within the scope of the class (as they are here because twin is typed to the same class), a Sibling method can set them directly:

    - makeIdenticalTwin
    {
        if ( !twin ) {
            twin = [[Sibling alloc] init];
            twin->gender = gender;
            twin->appearance = appearance;
        }
        return twin;
    }

The Scope of Instance Variables

Although they’re declared in the class interface, instance variables are more a matter of the way a class is implemented than of the way it’s used. An object’s interface lies in its methods, not in its internal data structures.

Often there’s a one-to-one correspondence between a method and an instance variable, as in the following example:

    - (BOOL)isFilled
    {
        return filled;
    }

But this need not be the case. Some methods might return information not stored in instance variables, and some instance variables might store information that an object is unwilling to reveal. As a class is revised from time to time, the choice of instance variables may change, even though the methods it declares remain the same. As long as messages are the vehicle for interacting with instances of the class, these changes won’t really affect its interface.

To enforce the ability of an object to hide its data, the compiler limits the scope of instance variables—that is, limits their visibility within the program. But to provide flexibility, it also lets you explicitly set the scope at four levels. Each level is marked by a compiler directive:

Directive   Meaning
@private    The instance variable is accessible only within the class that declares it.
@protected  The instance variable is accessible within the class that declares it and within classes that inherit it.
@public     The instance variable is accessible everywhere.
@package    Using the modern runtime, an @package instance variable acts like @public inside the image that implements the class, but @private outside. This is analogous to private_extern for variables and functions. Any code outside the class implementation’s image that tries to use the instance variable will get a link error. This is most useful for instance variables in framework classes, where @private may be too restrictive but @protected or @public too permissive.

A directive applies to all the instance variables listed after it, up to the next directive or the end of the list. This is illustrated in Figure 2-1.

Figure 2-1  The scope of instance variables
[Figure: an @private instance variable is visible only to the class that declares it; @protected extends visibility to classes that inherit it; @public extends visibility to unrelated code.]

In the following example, the age and evaluation instance variables are private; name, job, and wage are protected; and boss is public.

    @interface Worker : NSObject
    {
        char *name;
    @private
        int age;
        char *evaluation;
    @protected
        id job;
        float wage;
    @public
        id boss;
    }
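Given the Worker interface above, the practical effect of the directives is what code outside the class implementation may compile. This sketch assumes Worker has been allocated normally:

```objc
Worker *ceo = [[Worker alloc] init];

ceo->boss = nil;       // legal: boss is @public, accessed with ->
// ceo->wage = 1.0;    // compile error: wage is @protected
// ceo->age = 36;      // compile error: age is @private
```

A subclass of Worker, by contrast, could refer to name, job, and wage directly in its own methods, but still not to age or evaluation.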
    - makeLastingPeace
    {
        [super negotiate];
        ...
    }

the messaging routine will find the version of negotiate defined in High. It ignores the receiving object’s class (Low) and skips to the superclass of Mid, since Mid is where makeLastingPeace is defined. Neither message finds Mid’s version of negotiate.

In sending the message to super, the author of Mid’s makeLastingPeace method intentionally skipped over Mid’s version of negotiate (and over any versions that might be defined in classes like Low that inherit from Mid) to perform the version defined in the High class. Mid’s designer wanted to use the High version of negotiate and no other.

Not being able to reach Mid’s version of negotiate may seem like a flaw, but, under the circumstances, it’s right to avoid it:

■ The author of the Low class intentionally overrode Mid’s version of negotiate so that instances of the Low class (and its subclasses) would invoke the redefined version of the method instead. The designer of Low didn’t want Low objects to perform the inherited method.

■ Mid’s version of negotiate could still be used, but it would take a direct message to a Mid instance to do it.

As this example illustrates, super provides a way to bypass a method that overrides another method. Here it enabled makeLastingPeace to avoid the Mid version of negotiate that redefined the original High version.

Using super

Messages to super allow method implementations to be distributed over more than one class. You can override an existing method to modify or add to it, and still incorporate the original method in the modification:

    - negotiate
    {
        ...
        return [super negotiate];
    }

For some tasks, each class in the inheritance hierarchy can implement a method that does part of the job and passes the message on to super for the rest. The init method, which initializes a newly allocated instance, is designed to work like this. Each init method has responsibility for initializing the instance variables defined in its class. But before doing so, it sends an init message to super to have the classes it inherits from initialize their instance variables. Each version of init follows this procedure, so classes initialize their instance variables in the order of inheritance:

    - (id)init
    {
        self = [super init];
        if (self) {
            ...
        }
    }
Initializer methods have some additional constraints, and are described in more detail in “Allocating and Initializing Objects” (page 47).

It’s also possible to concentrate core functionality in one method defined in a superclass, and have subclasses incorporate the method through messages to super. For example, every class method that creates an instance must allocate storage for the new object and initialize its isa variable to the class structure. This is typically left to the alloc and allocWithZone: methods defined in the NSObject class. If another class overrides these methods (a rare case), it can still get the basic functionality by sending a message to super.

Redefining self

super is simply a flag to the compiler telling it where to begin searching for the method to perform; it’s used only as the receiver of a message. But self is a variable name that can be used in any number of ways, even assigned a new value.

In an instance method, self and super both refer to the receiving object—the object that gets a message telling it to perform the method. Inside an instance method, self refers to the instance, but inside a class method, self refers to the class object.

Class methods are often concerned not with the class object, but with instances of the class. For example, many class methods combine allocation and initialization of an instance, often setting up instance variable values at the same time. In such a method, it might be tempting to send messages to the newly allocated instance and to call the instance self, just as in an instance method. But that would be an error. There’s a tendency to do just that in definitions of class methods. This is an example of what not to do:

    + (Rectangle *)rectangleOfColor:(NSColor *)color
    {
        self = [[Rectangle alloc] init]; // BAD
        [self setColor:color];
        return [self autorelease];
    }

To avoid confusion, it’s usually better to use a variable other than self to refer to an instance inside a class method:

    + (id)rectangleOfColor:(NSColor *)color
    {
        id newInstance = [[Rectangle alloc] init]; // GOOD
        [newInstance setColor:color];
        return [newInstance autorelease];
    }

In fact, rather than sending the alloc message to the class in a class method, it’s often better to send alloc to self. This way, if the class is subclassed, and the rectangleOfColor: message is received by a subclass, the instance returned will be the same type as the subclass (for example, the array method of NSArray is inherited by NSMutableArray):

    + (id)rectangleOfColor:(NSColor *)color
    {
        id newInstance = [[self alloc] init]; // EXCELLENT
        [newInstance setColor:color];
        return [newInstance autorelease];
    }

See “Allocating and Initializing Objects” (page 47) for more information about object allocation.
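The payoff of sending alloc to self is that subclasses inherit the factory method for free. In this sketch, BetterRectangle is a hypothetical subclass that adds no code of its own:

```objc
// Hypothetical subclass; it declares nothing new.
@interface BetterRectangle : Rectangle
@end

@implementation BetterRectangle
@end

// Elsewhere, with some NSColor *color in hand:
Rectangle *r = [Rectangle rectangleOfColor:color];        // a Rectangle instance
Rectangle *b = [BetterRectangle rectangleOfColor:color];  // a BetterRectangle
                                                          // instance, with no
                                                          // overriding required
```

Had the factory used [Rectangle alloc] instead of [self alloc], both messages would have produced plain Rectangle instances.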
Allocating and Initializing Objects

It takes two steps to create an object using Objective-C. You must:

■ Dynamically allocate memory for the new object
■ Initialize the newly allocated memory to appropriate values

An object isn’t fully functional until both steps have been completed. Each step is accomplished by a separate method but typically in a single line of code:

    id anObject = [[Rectangle alloc] init];

Separating allocation from initialization gives you individual control over each step so that each can be modified independently of the other. The following sections look first at allocation and then at initialization, and discuss how they are controlled and modified.

In Objective-C, memory for new objects is allocated using class methods defined in the NSObject class. NSObject defines two principal methods for this purpose, alloc and allocWithZone:. These methods allocate enough memory to hold all the instance variables for an object belonging to the receiving class. They don’t need to be overridden and modified in subclasses.

The alloc and allocWithZone: methods initialize a newly allocated object’s isa instance variable so that it points to the object’s class (the class object). All other instance variables are set to 0. Usually, an object needs to be more specifically initialized before it can be safely used. This initialization is the responsibility of class-specific instance methods that, by convention, begin with the abbreviation “init”. If the method takes no arguments, the method name is just those four letters, init. If it takes arguments, labels for the arguments follow the “init” prefix. For example, an NSView object can be initialized with an initWithFrame: method.

Every class that declares instance variables must provide an init... method to initialize them. The NSObject class declares the isa variable and defines an init method. However, since isa is initialized when memory for an object is allocated, all NSObject’s init method does is return self. NSObject declares the method mainly to establish the naming convention described earlier.

The Returned Object

An init... method normally initializes the instance variables of the receiver, then returns it. It’s the responsibility of the method to return an object that can be used without error.
However, in some cases, this responsibility can mean returning a different object than the receiver. For example, if a class keeps a list of named objects, it might provide an initWithName: method to initialize new instances. If there can be no more than one object per name, initWithName: might refuse to assign the same name to two objects. When asked to assign a new instance a name that’s already being used by another object, it might free the newly allocated instance and return the other object—thus ensuring the uniqueness of the name while at the same time providing what was asked for, an instance with the requested name.

In a few cases, it might be impossible for an init... method to do what it’s asked to do. For example, an initFromFile: method might get the data it needs from a file passed as an argument. If the file name it’s passed doesn’t correspond to an actual file, it won’t be able to complete the initialization. In such a case, the init... method could free the receiver and return nil, indicating that the requested object can’t be created.

Because an init... method might return an object other than the newly allocated receiver, or even return nil, it’s important that programs use the value returned by the initialization method, not just that returned by alloc or allocWithZone:. The following code is very dangerous, since it ignores the return of init:

    id anObject = [SomeClass alloc];
    [anObject init];
    [anObject someOtherMessage];

Instead, to safely initialize an object, you should combine allocation and initialization messages in one line of code:

    id anObject = [[SomeClass alloc] init];
    [anObject someOtherMessage];

If there’s a chance that the init... method might return nil (see “Handling Initialization Failure” (page 50)), then you should check the return value before proceeding:

    id anObject = [[SomeClass alloc] init];
    if ( anObject )
        [anObject someOtherMessage];
    else
        ...

Implementing an Initializer

When a new object is created, all bits of memory (except for isa)—and hence the values for all its instance variables—are set to 0. In some situations, this may be all you require when an object is initialized; in many others, you want to provide other default values for an object’s instance variables, or you want to pass values as arguments to the initializer. In these other cases, you need to write a custom initializer. In Objective-C, custom initializers are subject to more constraints and conventions than are most other methods.

Constraints and Conventions

There are several constraints and conventions that apply to initializer methods that do not apply to other methods:

■ By convention, the name of a custom initializer method begins with init.
Examples from the Foundation framework include initWithFormat:, initWithObjects:, and initWithObjectsAndKeys:.

■ The return type of an initializer method should be id. The reason for this is that id gives an indication that the class is purposefully not considered—that the class is unspecified and subject to change, depending on context of invocation. For example, NSString provides a method initWithFormat:. When sent to an instance of NSMutableString (a subclass of NSString), though, the message returns an instance of NSMutableString, not NSString.

■ In the implementation of a custom initializer, you must ultimately invoke a designated initializer. In brief, if you are implementing a new designated initializer, it must invoke the superclass’ designated initializer. If you are implementing any other initializer, it should invoke its own class’s designated initializer, or another of its own initializers that ultimately invokes the designated initializer. Designated initializers are described in “The Designated Initializer” (page 53). By default (such as with NSObject), the designated initializer is init.

■ You should assign self to the value returned by the initializer. This is because the initializer could return a different object than the original receiver. (See also the singleton example given in “Combining Allocation and Initialization” (page 55).)

■ At the end of the initializer, you must return self, unless the initializer fails in which case you return nil. Failed initializers are discussed in more detail in “Handling Initialization Failure” (page 50).

■ If you set the value of an instance variable, you typically do so using direct assignment rather than using an accessor method. This avoids the possibility of triggering unwanted side-effects in the accessors.

The following example illustrates the implementation of a custom initializer for a class that inherits from NSObject and has an instance variable, creationDate, that represents the time when the object was created:

    - (id)init
    {
        // Assign self to value returned by super's designated initializer
        // Designated initializer for NSObject is init
        self = [super init];
        if (self) {
            creationDate = [[NSDate alloc] init];
        }
        return self;
    }

(The reason for using the if (self) pattern is discussed in “Handling Initialization Failure” (page 50).)

An initializer doesn’t need to provide an argument for each variable. For example, if a class requires its instances to have a name and a data source, it might provide an initWithName:fromURL: method, but set nonessential instance variables to arbitrary values or allow them to have the null values set by default. It could then rely on methods like setEnabled:, setFriend:, and setDimensions: to modify default values after the initialization phase had been completed.
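An initializer of that kind might look like the following sketch. The class, its name instance variable, and the argument are hypothetical; only the essential value is taken as an argument, and the nonessential instance variables keep their zeroed defaults:

```objc
// Hypothetical: the class declares NSString *name as an instance variable.
- (id)initWithName:(NSString *)aName
{
    // Assign self to the value returned by super's designated initializer.
    self = [super init];
    if (self) {
        name = [aName copy];   // essential value, set by direct assignment
        // Nonessential instance variables (enabled, friend, dimensions, ...)
        // keep their default zero/nil values; callers can adjust them later
        // with setEnabled:, setFriend:, setDimensions:.
    }
    return self;
}
```

As the constraints above require, the method assigns self, invokes the designated initializer of the superclass, sets the instance variable directly rather than through an accessor, and returns self.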
} This example doesn’t show what to do if there are any problems during initialization. There are two main consequences of this policy: ■ Any object (whether your own class. this is discussed in the next section. if (self) { image = [anImage retain].CHAPTER 3 Allocating and Initializing Objects The next example illustrates the implementation of a custom initializer that takes a single argument. ■ Note: You should only call [self release] at the point of failure. It shows that you can do work before invoking the super class’s designated initializer. you should call [self release] and return nil. a subclass. You must make sure that dealloc methods are safe in presence of partially-initialized objects. 0. NSRect frame = NSMakeRect(0. In this case. if (self) { creationDate = [[NSDate alloc] init]. // Assign self to value returned by super's designated initializer // Designated initializer for NSView is initWithFrame: self = [super initWithFrame:frame]. size. this includes undoing any connections.(id)init { self = [super init]. the class inherits from NSView. If you get nil back from an invocation of the superclass’s initializer.0. All Rights Reserved. size. Handling Initialization Failure In general.height).0. } The following example builds on that shown in “Constraints and Conventions” (page 48) to show how to handle an inappropriate value passed as the parameter: .(id)initWithImage:(NSImage *)anImage { // Find the size for the new instance from the image NSSize size = anImage. or an external caller) that receives a nil from an initializer method should be able to deal with it. In the unlikely case where the caller has established any external references to the object before the call. if there is a problem during an initialization method. } return self. . This is typically handled by the pattern of performing initialization within a block dependent on a test of the return value of the superclass’s initializer—as seen in previous examples: .size.width. 
You should simply clean up any references you set up that are not dealt with in dealloc, and return nil. The following example builds on that shown in "Constraints and Conventions" (page 48) to show how to handle an inappropriate value passed as the parameter:

- (id)initWithImage:(NSImage *)anImage {
    if (anImage == nil) {
        [self release];
        return nil;
    }

    // Find the size for the new instance from the image.
    NSSize size = anImage.size;
    NSRect frame = NSMakeRect(0.0, 0.0, size.width, size.height);

    // Assign self to the value returned by super's designated initializer.
    // The designated initializer for NSView is initWithFrame:.
    self = [super initWithFrame:frame];
    if (self) {
        image = [anImage retain];
    }
    return self;
}

The next example illustrates best practice where, in the case of a problem, there is a possibility of returning meaningful information in the form of an NSError object returned by reference:

- (id)initWithURL:(NSURL *)aURL error:(NSError **)errorPtr {
    self = [super init];
    if (self) {
        NSData *data = [[NSData alloc] initWithContentsOfURL:aURL
                            options:NSUncachedRead error:errorPtr];
        if (data == nil) {
            // In this case the error object is created in the NSData initializer.
            [self release];
            return nil;
        }
        // implementation continues...
    }
    return self;
}

You should typically not use exceptions to signify errors of this sort; for more information, see Error Handling Programming Guide.

Coordinating Classes

The init... methods a class defines typically initialize only those variables declared in that class. Inherited instance variables are initialized by sending a message to super to perform an initialization method defined somewhere farther up the inheritance hierarchy:

- (id)initWithName:(NSString *)string {
    self = [super init];
    if (self) {
        name = [string copy];
    }
    return self;
}
The message to super chains together initialization methods in all inherited classes. Because it comes first, it ensures that superclass variables are initialized before those declared in subclasses. For example, a Rectangle object must be initialized as an NSObject, a Graphic, and a Shape before it's initialized as a Rectangle, as shown in Figure 3-1.

If class A defines an init method and its subclass B defines an initWithName: method, as shown earlier, B must also make sure that an init message successfully initializes B instances. The easiest way to do that is to replace the inherited init method with a version that invokes initWithName::

- init {
    return [self initWithName:"default"];
}

The initWithName: method would, in turn, invoke the inherited method. Figure 3-2 includes B's version of init.
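To make the chain concrete, the two methods can be sketched together in one class. This is a minimal illustration of the pattern described above, not framework code; the class name B, the char * name variable, and the "default" value follow the running example in the text:

```objc
// Sketch of class B covering the inherited init with initWithName:.
// All names here are illustrative, following the A/B example above.
@interface B : NSObject {
    char *name;
}
- (id)initWithName:(char *)string;
@end

@implementation B

// Designated initializer: chains to the superclass's designated
// initializer via super, then initializes B's own variable.
- (id)initWithName:(char *)string {
    self = [super init];
    if (self) {
        name = string;
    }
    return self;
}

// Cover the inherited init so that a plain init message also
// produces a fully initialized B instance.
- (id)init {
    return [self initWithName:"default"];
}

@end
```

With this cover in place, [[B alloc] init] and [[B alloc] initWithName:...] both funnel through the designated initializer.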
Figure 3-2: Covering an Inherited Initialization Method. (Class A declares init; Class B declares init and initWithName:.)

Covering inherited initialization methods makes the class you define more portable to other applications. If you leave an inherited method uncovered, someone else may use it to produce incorrectly initialized instances of your class.

The Designated Initializer

In the example given in "Coordinating Classes" (page 51), initWithName: would be the designated initializer for its class (class B). The designated initializer is the method in each class that guarantees inherited instance variables are initialized (by sending a message to super to perform an inherited method). It's also the method that does most of the work, and the one that other initialization methods in the same class invoke. It's a Cocoa convention that the designated initializer is always the method that allows the most freedom to determine the character of a new instance (usually this is the one with the most arguments, but not always).

It's important to know the designated initializer when defining a subclass. For example, suppose we define class C, a subclass of B, and implement an initWithName:fromFile: method. In addition to this method, we have to make sure that the inherited init and initWithName: methods also work for instances of C. This can be done just by covering B's initWithName: with a version that invokes initWithName:fromFile::

- initWithName:(char *)string {
    return [self initWithName:string fromFile:NULL];
}

For an instance of the C class, the inherited init method invokes this new version of initWithName:, which invokes initWithName:fromFile:. The relationship between these methods is shown in Figure 3-3.

Figure 3-3: Covering the Designated Initializer. (Class B declares init and initWithName:; Class C declares initWithName: and initWithName:fromFile:.)

This figure omits an important detail. The initWithName:fromFile: method, being the designated initializer for the C class, sends a message to super to invoke an inherited initialization method. But which of B's methods should it invoke, init or initWithName:? It can't invoke init, for two reasons:

■ Circularity would result (init invokes C's initWithName:, which invokes initWithName:fromFile:, which would invoke init again).

■ It won't be able to take advantage of the initialization code in B's version of initWithName:.

Therefore, initWithName:fromFile: must invoke initWithName::

- initWithName:(char *)string fromFile:(char *)pathname {
    self = [super initWithName:string];
    if (self) {
        ...
    }
    return self;
}

General principle: The designated initializer in a class must, through a message to super, invoke the designated initializer in a superclass. Designated initializers are chained to each other through messages to super, while other initialization methods are chained to designated initializers through messages to self.

Figure 3-4 shows how all the initialization methods in classes A, B, and C are linked. Messages to self are shown on the left and messages to super are shown on the right.
When the receiver is an instance of the B class, init invokes B's version of initWithName:; when the receiver is an instance of the C class, it invokes C's version.

Combining Allocation and Initialization

In Cocoa, some classes define creation methods that combine the two steps of allocating and initializing to return new, initialized instances of the class. These methods are often referred to as convenience constructors and typically take the form + className..., where className is the name of the class. For example, NSString has the following methods (among others):

+ (id)stringWithCString:(const char *)cString encoding:(NSStringEncoding)enc;
+ (id)stringWithFormat:(NSString *)format, ...;

Similarly, NSArray defines the following class methods that combine allocation and initialization:

+ (id)array;
+ (id)arrayWithObject:(id)anObject;
+ (id)arrayWithObjects:(id)firstObj, ...;

Important: It is important to understand the memory management implications of using these methods if you do not use garbage collection (see "Memory Management" (page 15)). You must read Memory Management Programming Guide to understand the policy that applies to these convenience constructors.

Notice that the return type of these methods is id. This is for the same reason as for initializer methods, as discussed in "Constraints and Conventions" (page 48).

Methods that combine allocation and initialization are particularly valuable if the allocation must somehow be informed by the initialization. For example, if the data for the initialization is taken from a file, and the file might contain enough data to initialize more than one object, it would be impossible to know how many objects to allocate until the file is opened. In this case, you might implement a listFromFile: method that takes the name of the file as an argument. It would open the file, see how many objects to allocate, and create a List object large enough to hold all the new objects. It would then allocate and initialize the objects from data in the file, put them in the List, and finally return the List.

It also makes sense to combine allocation and initialization in a single method if you want to avoid the step of blindly allocating memory for a new object that you might not use. As mentioned in "The Returned Object" (page 47), an init... method might sometimes substitute another object for the receiver. For example, when initWithName: is passed a name that's already taken, it might free the receiver and in its place return the object that was previously assigned the name. This means, of course, that an object is allocated and freed immediately without ever being used. If the code that determines whether the receiver should be initialized is placed inside the method that does the allocation instead of inside init..., you can avoid the step of allocating a new instance when one isn't needed.

In the following example, the soloist method ensures that there's no more than one instance of the Soloist class. It allocates and initializes a single shared instance:

+ (Soloist *)soloist {
    static Soloist *instance = nil;
    if ( instance == nil ) {
        instance = [[self alloc] init];
    }
    return instance;
}

Notice that in this case the return type is Soloist *. Since this method returns a singleton shared instance, strong typing is appropriate; there is no expectation that this method will be overridden.
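The listFromFile: idea above is described only in prose. The following is a hypothetical sketch of how such a convenience constructor might look; the WordList class, its initWithCapacity: and addItem: methods, and the use of a property-list file are all invented for illustration and are not part of the original example:

```objc
// Hypothetical convenience constructor whose allocation is informed
// by the initialization data. Every class and method name besides the
// Foundation calls is invented for this sketch.
+ (id)listFromFile:(NSString *)path {
    // Read the file first, so we know how many objects it describes.
    NSArray *records = [NSArray arrayWithContentsOfFile:path];
    if (records == nil) {
        // Nothing was allocated for a list that can't be built.
        return nil;
    }

    // Only now allocate the list, sized to the data that was found.
    WordList *list = [[[WordList alloc]
                          initWithCapacity:[records count]] autorelease];
    for (id record in records) {
        [list addItem:record];
    }
    return list;
}
```

The point of the shape is the ordering: the file is examined before any list storage is allocated, which is exactly what a plain alloc/init sequence cannot do.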
CHAPTER 4

Protocols

Protocols declare methods that can be implemented by any class. A protocol is simply a list of method declarations, unattached to a class definition. Protocols free method declarations from dependency on the class hierarchy, so they can be used in ways that classes and categories cannot.

Protocols list methods that are (or may be) implemented somewhere, but the identity of the class that implements them is not of interest. What is of interest is whether or not a particular class conforms to the protocol, that is, whether it has implementations of the methods the protocol declares. Thus objects can be grouped into types not just on the basis of similarities due to the fact that they inherit from the same class, but also on the basis of their similarity in conforming to the same protocol. Classes in unrelated branches of the inheritance hierarchy might be typed alike because they conform to the same protocol.

For example, these methods that report user actions on the mouse could be gathered into a protocol:

- (void)mouseDown:(NSEvent *)theEvent;
- (void)mouseDragged:(NSEvent *)theEvent;
- (void)mouseUp:(NSEvent *)theEvent;

Any class that wanted to respond to mouse events could adopt the protocol and implement its methods.

Unlike class definitions and message expressions, protocols are optional; an Objective-C program doesn't need to use them. Some Cocoa frameworks use them; some don't. It all depends on the task at hand. Protocols can play a significant role in object-oriented design, especially where a project is divided among many implementors or incorporates objects developed in other projects. Cocoa software also uses protocols heavily to support interprocess communication through Objective-C messages.

Methods for Others to Implement

If you know the class of an object, you can look at its interface declaration (and the interface declarations of the classes it inherits from) to find what messages it responds to. These declarations advertise the messages it can receive. Protocols provide a way for it to also advertise the messages it sends.

Communication works both ways; objects send messages as well as receive them. For example, an object might delegate responsibility for a certain operation to another object, or it may on occasion simply need to ask another object for information. In some cases, an object might be willing to notify other objects of its actions so that they can take whatever collateral measures might be required.

If you develop the class of the sender and the class of the receiver as part of the same project (or if someone else has supplied you with the receiver and its interface file), this communication is easily coordinated. The sender simply imports the interface file of the receiver. The imported file declares the method selectors the sender uses in the messages it sends.

However, if you develop an object that sends messages to objects that aren't yet defined, objects that you're leaving for others to implement, you won't have the receiver's interface file. You need another way to declare the methods you use in messages but don't implement. A protocol serves this purpose. It informs the compiler about methods the class uses, and also informs other implementors of the methods they need to define to have their objects work with yours.

Suppose, for example, that you develop an object that asks for the assistance of another object by sending it helpOut: and other messages. You provide an assistant instance variable to record the outlet for these messages and define a companion method to set the instance variable. This method lets other objects register themselves as potential recipients of your object's messages:

- setAssistant:anObject {
    assistant = anObject;
}

Then, whenever a message is to be sent to the assistant, a check is made to be sure that the receiver implements a method that can respond:

- (BOOL)doWork {
    ...
    if ( [assistant respondsToSelector:@selector(helpOut:)] ) {
        [assistant helpOut:self];
        return YES;
    }
    return NO;
}

Since, at the time you write this code, you can't know what kind of object might register itself as the assistant, you can only declare a protocol for the helpOut: method; you can't import the interface file of the class that implements it.
Declaring Interfaces for Anonymous Objects

A protocol can be used to declare the methods of an anonymous object, an object of unknown class. An anonymous object may represent a service or handle a limited set of functions, especially where only one object of its kind is needed. (Objects that play a fundamental role in defining an application's architecture, and objects that you must initialize before using, are not good candidates for anonymity.)

Protocols make anonymous objects possible. Without a protocol, there would be no way to declare an interface to an object without identifying its class.

Note: Even though the supplier of an anonymous object doesn't reveal its class, the object itself reveals it at runtime. A class message returns the anonymous object's class. However, there's usually little point in discovering this extra information; the information in the protocol is sufficient.

For example, consider the following situations:

■ Someone who supplies a framework or a suite of objects for others to use can include objects that are not identified by a class name or an interface file. Lacking the name and class interface, users have no way of creating instances of the class. Instead, the supplier must provide a ready-made instance. Typically, a method in another class returns a usable object:

    id formatter = [receiver formattingService];

The object returned by the method is an object without a class identity, at least not one the supplier is willing to reveal. For it to be of any use at all, the supplier must be willing to identify at least some of the messages that it can respond to. This is done by associating the object with a list of methods declared in a protocol. It doesn't have to disclose anything else about the object.

■ You can send Objective-C messages to remote objects, objects in other applications. Each application has its own structure, classes, and internal logic. But you don't need to know how another application works or what its components are to communicate with it. As an outsider, all you need to know is what messages you can send (the protocol) and where to send them (the receiver). An application that publishes one of its objects as a potential receiver of remote messages must also publish a protocol declaring the methods the object will use to respond to those messages. The sending application doesn't need to know the class of the object or use the class in its own design. All it needs is the protocol. (Remote Messaging (page 105) in the Objective-C Runtime Programming Guide discusses this possibility in more detail.)

Objects are not anonymous to their developers, of course, but they are anonymous when the developer supplies them to someone else.

Non-Hierarchical Similarities

If more than one class implements a set of methods, those classes are often grouped under an abstract class that declares the methods they have in common. Each subclass may reimplement the methods in its own way, but the inheritance hierarchy and the common declaration in the abstract class capture the essential similarity between the subclasses.
However, sometimes it's not possible to group common methods in an abstract class. Classes that are unrelated in most respects might nevertheless need to implement some similar methods. This limited similarity may not justify a hierarchical relationship. For example, an NSMatrix instance must communicate with the objects that represent its cells. The NSMatrix object could require objects representing cells to have methods that can respond to a particular set of messages (a type based on protocol). The NSMatrix object wouldn't care what class a cell object belonged to, just that it implemented the methods. These methods could be grouped into a protocol, and the similarity between implementing classes accounted for by noting that they all conform to the same protocol. Objects can be typed by this similarity (the protocols they conform to), rather than by their class.

Formal Protocols

The Objective-C language provides a way to formally declare a list of methods (including declared properties) as a protocol. Formal protocols are supported by the language and the runtime system. For example, the compiler can check for types based on protocols, and objects can introspect at runtime to report whether or not they conform to a protocol.

Declaring a Protocol

You declare formal protocols with the @protocol directive:

@protocol ProtocolName
method declarations
@end

For example, you might want to add support for creating XML representations of objects in your application and for initializing objects from an XML representation. You could declare an XML representation protocol like this:

@protocol MyXMLSupport
- initFromXMLRepresentation:(NSXMLElement *)XMLElement;
- (NSXMLElement *)XMLRepresentation;
@end

Unlike class names, protocol names don't have global visibility. They live in their own namespace.

Optional Protocol Methods

Protocol methods can be marked as optional using the @optional keyword. Corresponding to the @optional modal keyword, there is a @required keyword to formally denote the semantics of the default behavior. If you do not specify any keyword, the default is @required. For example:

@protocol MyProtocol
- (void)requiredMethod;
@optional
- (void)anOptionalMethod;
- (void)anotherOptionalMethod;
@required
- (void)anotherRequiredMethod;
@end

Note: In Mac OS X v10.5, protocols may not include optional declared properties. This constraint is removed in Mac OS X v10.6 and later.

Informal Protocols

In addition to formal protocols, you can also define an informal protocol by grouping the methods in a category declaration:

@interface NSObject ( MyXMLSupport )
- initFromXMLRepresentation:(NSXMLElement *)XMLElement;
- (NSXMLElement *)XMLRepresentation;
@end

Informal protocols are typically declared as categories of the NSObject class, since that broadly associates the method names with any class that inherits from NSObject. Because all classes inherit from the root class, the methods aren't restricted to any part of the inheritance hierarchy. (It would also be possible to declare an informal protocol as a category of another class to limit it to a certain branch of the inheritance hierarchy, but there is little reason to do so.)

When used to declare a protocol, a category interface doesn't have a corresponding implementation. Instead, classes that implement the protocol declare the methods again in their own interface files and define them along with other methods in their implementation files.

An informal protocol bends the rules of category declarations to list a group of methods but not associate them with any particular class or implementation. Being informal, protocols declared in categories don't receive much language support. There's no type checking at compile time, nor a check at runtime to see whether an object conforms to the protocol. An informal protocol may be useful when all the methods are optional, such as for a delegate, but (on Mac OS X v10.5 and later) it is typically better to use a formal protocol with optional methods.
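A class adopting a protocol with optional methods typically guards calls to those methods at runtime. This sketch assumes the MyProtocol declaration shown above; the Worker class and the calling code are illustrative:

```objc
// Worker adopts MyProtocol but implements only the required methods;
// the optional ones may be omitted without a compiler warning.
@interface Worker : NSObject <MyProtocol>
@end

@implementation Worker
- (void)requiredMethod        { /* ... */ }
- (void)anotherRequiredMethod { /* ... */ }
@end

// Inside some method, a caller can invoke a required method directly
// but should check for optional methods before sending them:
id <MyProtocol> object = [[[Worker alloc] init] autorelease];
[object requiredMethod];
if ([object respondsToSelector:@selector(anOptionalMethod)]) {
    [object anOptionalMethod];
}
```

The respondsToSelector: check is what makes @optional practical: the protocol promises nothing about the optional methods, so the caller must verify at runtime.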
To get these benefits, you must use a formal protocol.

Protocol Objects

Just as classes are represented at runtime by class objects, and methods by selector codes, formal protocols are represented by a special data type: instances of the Protocol class. Source code that deals with a protocol (other than to use it in a type specification) must refer to the Protocol object.

In many ways, protocols are similar to class definitions. They both declare methods, and at runtime they're both represented by objects: classes by class objects and protocols by Protocol objects. Like class objects, Protocol objects are created automatically from the definitions and declarations found in source code and are used by the runtime system. They're not allocated and initialized in program source code.

Source code can refer to a Protocol object using the @protocol() directive, the same directive that declares a protocol, except that here it has a set of trailing parentheses. The parentheses enclose the protocol name:

Protocol *myXMLSupportProtocol = @protocol(MyXMLSupport);

This is the only way that source code can conjure up a Protocol object. Unlike a class name, a protocol name doesn't designate the object, except inside @protocol().

The compiler creates a Protocol object for each protocol declaration it encounters, but only if the protocol is also:

■ Adopted by a class, or
■ Referred to somewhere in source code (using @protocol())

Protocols that are declared but not used (except for type checking, as described below) aren't represented by Protocol objects at runtime.

Adopting a Protocol

Adopting a protocol is similar in some ways to declaring a superclass. Both assign methods to the class. The superclass declaration assigns it inherited methods; the protocol assigns it methods declared in the protocol list. Names in the protocol list are separated by commas:

@interface Formatter : NSObject < Formatting, Prettifying >

A class or category that adopts a protocol must implement all the required methods the protocol declares; otherwise the compiler issues a warning. The Formatter class above would define all the required methods declared in the two protocols it adopts, in addition to any it might have declared itself.

A class or category that adopts a protocol must import the header file where the protocol is declared. The methods declared in the adopted protocol are not declared elsewhere in the class or category interface.
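As a concrete sketch, suppose the two protocols each declared one required method. The method names here are invented for illustration; the document does not define the contents of Formatting or Prettifying:

```objc
// Hypothetical contents for the two protocols named above.
@protocol Formatting
- (NSString *)formattedString;    // invented for illustration
@end

@protocol Prettifying
- (NSString *)prettifiedString;   // invented for illustration
@end

@interface Formatter : NSObject < Formatting, Prettifying >
@end

@implementation Formatter
// Both required methods must be defined here, or the compiler
// warns that Formatter does not fully implement its protocols.
- (NSString *)formattedString  { return @"formatted";  }
- (NSString *)prettifiedString { return @"prettified"; }
@end
```

Note that the two accessor-like methods appear only in the protocol declarations; the @interface of Formatter does not repeat them.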
It's possible for a class to simply adopt protocols and declare no other methods. For example, the following class declaration adopts the Formatting and Prettifying protocols, but declares no instance variables or methods of its own:

@interface Formatter : NSObject < Formatting, Prettifying >
@end

Conforming to a Protocol

A class is said to conform to a formal protocol if it adopts the protocol or inherits from another class that adopts it. An instance of a class is said to conform to the same set of protocols its class conforms to.

Since a class must implement all the required methods declared in the protocols it adopts, saying that a class or an instance conforms to a protocol is equivalent to saying that it has in its repertoire all the methods the protocol declares.

It's possible to check whether an object conforms to a protocol by sending it a conformsToProtocol: message:

if ( ! [receiver conformsToProtocol:@protocol(MyXMLSupport)] ) {
    // Object does not conform to the MyXMLSupport protocol.
    // If you are expecting receiver to implement methods declared in
    // the MyXMLSupport protocol, this is probably an error.
}

(Note that there is also a class method with the same name, conformsToProtocol:.)

The conformsToProtocol: test is like the respondsToSelector: test for a single method, except that it tests whether a protocol has been adopted (and presumably all the methods it declares implemented) rather than just whether one particular method has been implemented. Because it checks for all the methods in the protocol, conformsToProtocol: can be more efficient than respondsToSelector:.

The conformsToProtocol: test is also like the isKindOfClass: test, except that it tests for a type based on a protocol rather than a type based on the inheritance hierarchy.

Type Checking

Type declarations for objects can be extended to include formal protocols. Protocols thus offer the possibility of another level of type checking by the compiler, one that's more abstract since it's not tied to particular implementations.

In a type declaration, protocol names are listed between angle brackets after the type name:

- (id <Formatting>)formattingService;
id <MyXMLSupport> anObject;

Just as static typing permits the compiler to test for a type based on the class hierarchy, this syntax permits the compiler to test for a type based on conformance to a protocol.

For example, this declaration

id <Formatting> anObject;

groups all objects that conform to the Formatting protocol into a type, regardless of their positions in the class hierarchy. The compiler can make sure only objects that conform to the protocol are assigned to the type. In each case, the type groups similar objects, either because they share a common inheritance or because they converge on a common set of methods.

The two types can be combined in a single declaration:

Formatter <Formatting> *anObject;

Protocols can't be used to type class objects. Only instances can be statically typed to a protocol, just as only instances can be statically typed to a class. (However, at runtime, both classes and instances will respond to a conformsToProtocol: message.)

Protocols Within Protocols

One protocol can incorporate other protocols. For example, if the Paging protocol incorporates the Formatting protocol,

@protocol Paging < Formatting >

any object that conforms to the Paging protocol also conforms to Formatting. Type declarations

id <Paging> someObject;
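The compile-time and runtime checks complement each other. The following sketch assumes the MyXMLSupport protocol declared earlier in this chapter; the source variable and its use are illustrative:

```objc
// Compile time: the compiler warns if a value not known to conform
// to MyXMLSupport is assigned to this protocol-typed variable.
id <MyXMLSupport> source = someObject;   // someObject: illustrative

// Run time: the same conformance can be verified dynamically before
// relying on the protocol's methods.
if ([source conformsToProtocol:@protocol(MyXMLSupport)]) {
    NSXMLElement *xml = [source XMLRepresentation];
    // ... use xml ...
}
```

In practice the static type alone is often enough; the dynamic check matters when the object arrives as a plain id, for example from a collection or a remote connection.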
and conformsToProtocol: messages

if ( [anotherObject conformsToProtocol:@protocol(Paging)] ) ...

need to mention only the Paging protocol to test for conformance to Formatting as well.

When a class adopts a protocol, it must implement the required methods the protocol declares, as mentioned earlier. In addition, it must conform to any protocols the adopted protocol incorporates. If an incorporated protocol incorporates still other protocols, the class must also conform to them. A class can conform to an incorporated protocol by either:

■ Implementing the methods the protocol declares, or
■ Inheriting from a class that adopts the protocol and implements the methods.

Suppose, for example, that the Pager class adopts the Paging protocol. If Pager is a subclass of NSObject,

@interface Pager : NSObject < Paging >

it must implement all the Paging methods, including those declared in the incorporated Formatting protocol. On the other hand, if Pager is a subclass of Formatter (a class that independently adopts the Formatting protocol),

@interface Pager : Formatter < Paging >

it must implement all the methods declared in the Paging protocol proper, but not those declared in Formatting. Pager inherits conformance to the Formatting protocol from Formatter.

Note that a class can conform to a protocol without formally adopting it, simply by implementing the methods declared in the protocol.

Referring to Other Protocols

When working on complex applications, you occasionally find yourself writing code that looks like this:

#import "B.h"

@protocol A
- foo:(id <B>)anObject;
@end

where protocol B is declared like this:

#import "A.h"

@protocol B
- bar:(id <A>)anObject;
@end

In such a situation, circularity results and neither file will compile correctly. To break this recursive cycle, you must use the @protocol directive to make a forward reference to the needed protocol instead of importing the interface file where the protocol is defined. The following code excerpt illustrates how you would do this:

@protocol B;

@protocol A
- foo:(id <B>)anObject;
@end

Note that using the @protocol directive in this manner simply informs the compiler that "B" is a protocol to be defined later. It doesn't import the interface file where protocol B is defined.
CHAPTER 5 Declared Properties

The Objective-C "declared properties" feature provides a simple way to declare and implement an object's accessor methods.

Overview

You typically access an object's properties (in the sense of its attributes and relationships) through a pair of accessor (getter/setter) methods. By using accessor methods, you adhere to the principle of encapsulation (see "Mechanisms Of Abstraction" in Object-Oriented Programming with Objective-C > The Object Model). Although using accessor methods has significant advantages, writing accessor methods is nevertheless a tedious process—particularly if you have to write code to support both garbage collected and reference counted environments. Moreover, aspects of the property that may be important to consumers of the API are left obscured—such as whether the accessor methods are thread-safe or whether new values are copied when set.

Declared properties address these issues:

■ The property declaration provides a clear, explicit specification of how the accessor methods behave.
■ The compiler can synthesize accessor methods for you, according to the specification you provide in the declaration. This means you have less code to write and maintain.
■ Properties are represented syntactically as identifiers and are scoped, so the compiler can detect use of undeclared properties.

There are two aspects to this language feature: the syntactic elements you use to specify and optionally synthesize declared properties, and a related syntactic element that is described in "Dot Syntax" (page 19).

Property Declaration and Implementation

There are two parts to a declared property: its declaration and its implementation.

Property Declaration

A property declaration begins with the keyword @property. @property can appear anywhere in the method declaration list found in the @interface of a class. @property can also appear in the declaration of a protocol or category.
@property declares a property. Like any other Objective-C type, each property has a type specification and a name. Listing 5-1 illustrates the declaration of a simple property.

Listing 5-1 Declaring a simple property

    @interface MyClass : NSObject
    {
        float value;
    }
    @property float value;
    @end

You can think of a property declaration as being equivalent to declaring two accessor methods. Thus

    @property float value;

is equivalent to:

    - (float)value;
    - (void)setValue:(float)newValue;

The general form of a property declaration is:

    @property(attributes) type name;

An optional parenthesized set of attributes provides additional details about the storage semantics and other behaviors of the property—see "Property Declaration Attributes" (page 68) for possible values.

Property Declaration Attributes

You can decorate a property with attributes by using the form @property(attribute [, attribute2, ...]). Like methods, properties are scoped to their enclosing interface declaration. If you use the @synthesize directive to tell the compiler to create the accessor method(s), the code it generates matches the specification given by the keywords. If you implement the accessor method(s) yourself, you should ensure that it matches the specification (for example, if you specify copy you must make sure that you do copy the input value in the setter method). For property declarations that use a comma delimited list of variable names, the property attributes apply to all of the named properties.
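As a brief illustration of the comma-delimited form described above, a single attribute list applies to every property it names (the property names here are illustrative):

```objc
// The copy attribute applies to both firstName and lastName.
@property(copy) NSString *firstName, *lastName;
```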
Accessor Method Names

The default names for the getter and setter methods associated with a property are propertyName and setPropertyName: respectively—for example, given a property "foo", the accessors would be foo and setFoo:. The following attributes allow you to specify custom names instead. They are both optional and may appear with any other attribute (except for readonly in the case of setter=).

getter=getterName
Specifies the name of the get accessor for the property. The getter must return a type matching the property's type and take no arguments.

setter=setterName
Specifies the name of the set accessor for the property. The setter method must take a single argument of a type matching the property's type and must return void.

If you specify that a property is readonly and also specify a setter with setter=, you get a compiler warning.

Typically you should specify accessor method names that are key-value coding compliant (see Key-Value Coding Programming Guide)—a common reason for using the getter decorator is to adhere to the isPropertyName convention for Boolean values.

Writability

These attributes specify whether or not a property has an associated set accessor. They are mutually exclusive.

readwrite
Indicates that the property should be treated as read/write. This is the default. Both a getter and setter method will be required in the @implementation. If you use the @synthesize directive in the implementation block, the getter and setter methods are synthesized.

readonly
Indicates that the property is read-only. If you specify readonly, only a getter method is required in the @implementation. If you use the @synthesize directive in the implementation block, only the getter method is synthesized. Moreover, if you attempt to assign a value using the dot syntax, you get a compiler error.

Setter Semantics

These attributes specify the semantics of a set accessor. They are mutually exclusive.

assign
Specifies that the setter uses simple assignment. This is the default. You typically use this attribute for scalar types such as NSInteger and CGRect, or (in a reference-counted environment) for objects you don't own, such as delegates.

retain
Specifies that retain should be invoked on the object upon assignment. (The default is assign.) The previous value is sent a release message. This attribute is valid only for Objective-C object types, so you cannot specify retain for Core Foundation objects—see "Core Foundation" (page 74). On Mac OS X v10.6 and later, you can use the __attribute__ keyword to specify that a Core Foundation property should be treated like an Objective-C object for memory management, as illustrated in this example:

    @property(retain) __attribute__((NSObject)) CFDictionaryRef myDictionary;

copy
Specifies that a copy of the object should be used for assignment. (The default is assign.) The previous value is sent a release message. The copy is made by invoking the copy method. This attribute is valid only for object types, which must implement the NSCopying protocol. For further discussion, see "Copy" (page 73).

Note that retain and assign are effectively the same in a garbage-collected environment.
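As a sketch of the isPropertyName convention mentioned under the getter= attribute above, a Boolean property can keep its plain name while exposing an is-prefixed getter; the MyView class and hidden property are illustrative assumptions:

```objc
@interface MyView : NSObject
@property(nonatomic, getter=isHidden) BOOL hidden;
@end

// The getter is named isHidden; the setter remains setHidden:.
//   BOOL state = [aView isHidden];
//   [aView setHidden:YES];
```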
Different constraints apply depending on whether or not you use garbage collection:

■ If you do not use garbage collection, for object properties you must explicitly specify one of assign, retain, or copy—otherwise you will get a compiler warning. (This encourages you to think about what memory management behavior you want and type it explicitly.) To decide which you should choose, you need to understand Cocoa's memory management policy (see Memory Management Programming Guide).

■ If you use garbage collection, you don't get a warning if you use the default (that is, if you don't specify any of assign, retain, or copy) unless the property's type is a class that conforms to NSCopying. The default is usually what you want; if the property type can be copied, however, to preserve encapsulation you often want to make a private copy of the object.

Atomicity

You can use this attribute to specify that accessor methods are not atomic. (There is no keyword to denote atomic.)

nonatomic
Specifies that accessors are non-atomic. By default, accessors are atomic.

Properties are atomic by default so that synthesized accessors provide robust access to properties in a multi-threaded environment—that is, the value returned from the getter or set via the setter is always fully retrieved or set regardless of what other threads are executing concurrently.

If you do not specify nonatomic, then in a reference counted environment a synthesized get accessor for an object property uses a lock and retains and autoreleases the returned value—the implementation will be similar to the following:

    [_internal lock]; // lock using an object-level lock
    id result = [[value retain] autorelease];
    [_internal unlock];
    return result;

If you specify nonatomic, then a synthesized accessor for an object property simply returns the value directly.

For more details, see "Performance and Threading" (page 77).

Markup and Deprecation

Properties support the full range of C style decorators. Properties can be deprecated and support __attribute__ style markup, as illustrated in the following example:

    @property CGFloat x AVAILABLE_MAC_OS_X_VERSION_10_1_AND_LATER_BUT_DEPRECATED_IN_MAC_OS_X_VERSION_10_4;
    @property CGFloat y __attribute__((...));

If you want to specify that a property is an Interface Builder outlet, you can use the IBOutlet identifier:

    @property (nonatomic, retain) IBOutlet NSButton *myButton;

IBOutlet is not, though, a formal part of the list of attributes.

If you use garbage collection, you can use the storage modifiers __weak and __strong in a property's declaration:
    @property (nonatomic, retain) __weak Link *parent;

Again, though, these are not a formal part of the list of attributes.

Property Implementation Directives

You can use the @synthesize and @dynamic directives in @implementation blocks to trigger specific compiler actions. Note that neither is required for any given @property declaration.

Important: If you do not specify either @synthesize or @dynamic for a particular property, you must provide a getter and setter (or just a getter in the case of a readonly property) method implementation for that property. If you do not, the compiler will generate a warning.

@synthesize

You use the @synthesize directive to tell the compiler to synthesize the accessor method(s) for a property if you do not supply them in the @implementation block, as shown in Listing 5-2.

Listing 5-2 Using @synthesize

    @interface MyClass : NSObject
    {
        NSString *value;
    }
    @property(copy, readwrite) NSString *value;
    @end

    @implementation MyClass
    @synthesize value;
    @end

You can use the form property=ivar to indicate that a particular instance variable should be used for the property, for example:

    @synthesize firstName, lastName, age = yearsOld;

This specifies that the accessor methods for firstName, lastName, and age should be synthesized and that the property age is represented by the instance variable yearsOld. Other aspects of the synthesized methods are determined by the optional attributes (see "Property Declaration Attributes" (page 68)).

Whether or not you specify the name of the instance variable, @synthesize can only use an instance variable from the current class, not a superclass. There are differences in the behavior that depend on the runtime (see also "Runtime Difference" (page 78)):

■ For the legacy runtimes, instance variables must already be declared in the @interface block of the current class. If an instance variable of the same name and compatible type as the property exists, it is used—otherwise, you get a compiler error.

■ For the modern runtimes (see Runtime Versions and Platforms in Objective-C Runtime Programming Guide), instance variables are synthesized as needed. If an instance variable of the same name already exists, it is used.
@dynamic

You use the @dynamic directive to tell the compiler that the accessor methods for a property will be available at runtime even though it cannot find suitable implementations at compile time. It suppresses the warnings that the compiler would otherwise generate if it can't find suitable implementations. You should only use it if you know that the methods will be available at runtime.

The example shown in Listing 5-3 illustrates using @dynamic with a subclass of NSManagedObject.

Listing 5-3 Using @dynamic with NSManagedObject

    @interface MyClass : NSManagedObject
    {
    }
    @property(nonatomic, retain) NSString *value;
    @end

    @implementation MyClass
    @dynamic value;
    @end

NSManagedObject is provided by the Core Data framework. A managed object class has a corresponding schema that defines attributes and relationships for the class; at runtime, the Core Data framework generates accessor methods for these as necessary. You therefore typically declare properties for the attributes and relationships, but you don't have to implement the accessor methods yourself and shouldn't ask the compiler to do so. If you just declared the property without providing any implementation, however, the compiler would generate a warning. Using @dynamic suppresses the warning.

Using Properties

Supported Types

You can declare a property for any Objective-C class, Core Foundation data type, or "plain old data" (POD) type (see C++ Language Note: POD Types). For constraints on using Core Foundation types, see "Core Foundation" (page 74).

Property Re-declaration

You can re-declare a property in a subclass, but (with the exception of readonly vs. readwrite) you must repeat its attributes in whole in the subclasses. If you declare a property in one class as readonly, you can redeclare it as readwrite in a class extension (see "Extensions" (page 81)), in a protocol, or in a subclass—see "Subclassing with Properties" (page 76). In the case of a class extension redeclaration, the fact that the property was redeclared prior to any @synthesize statement will cause the setter to be synthesized. The same holds true for a property declared in a category or protocol—while the property may be redeclared in a category or protocol, the property's attributes must be repeated in whole.
The ability to redeclare a read-only property as read/write enables two common implementation patterns: a mutable subclass of an immutable class (NSString, NSArray, and NSDictionary are all examples) and a property that has a public API that is readonly but a private readwrite implementation internal to the class. The following example shows using a class extension to provide a property that is declared as read-only in the public header but is redeclared privately as read/write:

    // public header file
    @interface MyObject : NSObject {
        NSString *language;
    }
    @property (readonly, copy) NSString *language;
    @end

    // private implementation file
    @interface MyObject ()
    @property (readwrite, copy) NSString *language;
    @end

    @implementation MyObject
    @synthesize language;
    @end

Copy

If you use the copy declaration attribute, you specify that a value is copied during assignment. If you synthesize the corresponding accessor, the synthesized method uses the copy method. This is useful for attributes such as string objects where there is a possibility that the new value passed in a setter may be mutable (for example, an instance of NSMutableString) and you want to ensure that your object has its own private immutable copy. For example, if you declare a property as follows:

    @property (nonatomic, copy) NSString *string;

then the synthesized setter method is similar to the following:

    - (void)setString:(NSString *)newString {
        if (string != newString) {
            [string release];
            string = [newString copy];
        }
    }

Although this works well for strings, it may present a problem if the attribute is a collection such as an array or a set. Typically you want such collections to be mutable, but the copy method returns an immutable version of the collection. In this situation, you have to provide your own implementation of the setter method, as illustrated in the following example:

    @interface MyClass : NSObject {
        NSMutableArray *myArray;
    }
    @property (nonatomic, copy) NSMutableArray *myArray;
    @end

    @implementation MyClass
    @synthesize myArray;

    - (void)setMyArray:(NSMutableArray *)newArray {
        if (myArray != newArray) {
            [myArray release];
            myArray = [newArray mutableCopy];
        }
    }
    @end

dealloc

Declared properties fundamentally take the place of accessor method declarations; when you synthesize a property, the compiler only creates any absent accessor methods. There is no direct interaction with the dealloc method—properties are not automatically released for you. Declared properties do, however, provide a useful way to cross-check the implementation of your dealloc method: you can look for all the property declarations in your header file and make sure that object properties not marked assign are released, and those marked assign are not released.

Note: Typically in a dealloc method you should release object instance variables directly (rather than invoking a set accessor and passing nil as the parameter), as illustrated in this example:

    - (void)dealloc {
        [property release];
        [super dealloc];
    }

If you are using the modern runtime and synthesizing the instance variable, however, you cannot access the instance variable directly, so you must invoke the accessor method:

    - (void)dealloc {
        [self setProperty:nil];
        [super dealloc];
    }

Core Foundation

As noted in "Property Declaration Attributes" (page 68), prior to Mac OS X v10.6 you cannot specify the retain attribute for non-object types. If, therefore, you declare a property whose type is a CFType and synthesize the accessors as illustrated in the following example:

    @interface MyClass : NSObject {
        CGImageRef myImage;
    }
    @property(readwrite) CGImageRef myImage;
    @end

    @implementation MyClass
    @synthesize myImage;
    @end

then in a reference counted environment the generated set accessor will simply assign the new value to the instance variable (the new value is not retained and the old value is not released). This is typically incorrect, so you should not synthesize the methods; you should implement them yourself.
In a garbage collected environment, if the variable is declared __strong:

    __strong CGImageRef myImage;
    ...
    @property CGImageRef myImage;

then the accessors are synthesized appropriately—the image will not be CFRetain'd, but the setter will trigger a write barrier.

Example

The following example illustrates the use of properties in several different ways:

■ The Link protocol declares a property, next.
■ MyClass adopts the Link protocol, so implicitly also declares the property next. MyClass also declares several other properties.
■ creationTimestamp and next are synthesized but use existing instance variables with different names.
■ name is synthesized and uses instance variable synthesis (recall that instance variable synthesis is not supported using the legacy runtime—see "Property Implementation Directives" (page 71) and "Runtime Difference" (page 78)).
■ gratuitousFloat has a dynamic directive—it is supported using direct method implementations.
■ nameAndAge does not have a dynamic directive, but this is the default value; it is supported using a direct method implementation (since it is read-only, it only requires a getter) with a specified name (nameAndAgeAsString).

Listing 5-4 Declaring properties for a class

    @protocol Link
    @property id <Link> next;
    @end

    @interface MyClass : NSObject <Link> {
        NSTimeInterval intervalSinceReferenceDate;
        CGFloat gratuitousFloat;
        id <Link> nextLink;
    }
    @property(readonly) NSTimeInterval creationTimestamp;
    // Synthesizing 'name' is an error in legacy runtimes;
    // in modern runtimes, the instance variable is synthesized.
    @property(copy) NSString *name;
    @property CGFloat gratuitousFloat;
    @property(readonly, getter=nameAndAgeAsString) NSString *nameAndAge;
    @end

    @implementation MyClass

    @synthesize creationTimestamp = intervalSinceReferenceDate;
    @synthesize name;
    // This directive is not strictly necessary.
    @dynamic gratuitousFloat;
    // Uses instance variable "nextLink" for storage.
    @synthesize next = nextLink;

    - (CGFloat)gratuitousFloat {
        return gratuitousFloat;
    }

    - (void)setGratuitousFloat:(CGFloat)aValue {
        gratuitousFloat = aValue;
    }

    - (NSString *)nameAndAgeAsString {
        return [NSString stringWithFormat:@"%@ (%fs)", [self name],
            [NSDate timeIntervalSinceReferenceDate] - intervalSinceReferenceDate];
    }

    - (id)init {
        self = [super init];
        if (self) {
            intervalSinceReferenceDate = [NSDate timeIntervalSinceReferenceDate];
        }
        return self;
    }

    - (void)dealloc {
        [nextLink release];
        [name release];
        [super dealloc];
    }
    @end

Subclassing with Properties

You can override a readonly property to make it writable. For example, you could define a class MyInteger with a readonly property, value:

    @interface MyInteger : NSObject {
        NSInteger value;
    }
    @property(readonly) NSInteger value;
    @end

    @implementation MyInteger
    @synthesize value;
    @end

You could then implement a subclass, MyMutableInteger, which redefines the property to make it writable:

    @interface MyMutableInteger : MyInteger
    @property(readwrite) NSInteger value;
    @end

    @implementation MyMutableInteger
    @dynamic value;

    - (void)setValue:(NSInteger)newX {
        value = newX;
    }
    @end

Performance and Threading

If you supply your own method implementation, the fact that you declared a property has no effect on its efficiency or thread safety.

If you use synthesized properties, the method implementations generated by the compiler depend on the specification you supply. The declaration attributes that affect performance and threading are retain, assign, copy, and nonatomic. The first three of these affect only the implementation of the assignment part of the set method, as illustrated below (the implementation may not be exactly as shown):

    // assign
    property = newValue;

    // retain
    if (property != newValue) {
        [property release];
        property = [newValue retain];
    }

    // copy
    if (property != newValue) {
        [property release];
        property = [newValue copy];
    }

The effect of the nonatomic attribute depends on the environment. By default, the synthesized accessors are atomic. In a reference counted environment, guaranteeing atomic behavior requires the use of a lock; moreover a returned object is retained and autoreleased, as illustrated in "Atomicity" (page 70). If such accessors are invoked frequently, this may have a significant impact on performance. In a garbage collected environment, most synthesized methods are atomic without incurring this overhead.

It is important to understand that the goal of the atomic implementation is to provide robust accessors—it does not guarantee correctness of your code. Although "atomic" means that access to the property is thread-safe, simply making all the properties in your class atomic does not mean that your class, or more generally your object graph, is "thread safe"—thread safety cannot be expressed at the level of individual accessor methods. For more about multi-threading, see Threading Programming Guide.
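To make the limits of per-accessor atomicity concrete, consider an invariant that spans two properties. Each accessor below is atomic on its own, yet a reader on another thread can observe a new width paired with an old height; the Rect class and its properties are illustrative assumptions, not part of the example above.

```objc
// Hypothetical class; both properties are atomic by default.
@interface Rect : NSObject
@property CGFloat width;
@property CGFloat height;
@end

// Thread A:
//   aRect.width  = newWidth;   // atomic, but...
//   aRect.height = newHeight;  // ...the pair is not updated as a unit.
//
// Thread B may read (newWidth, oldHeight) in between. Preserving an
// invariant across both values requires an external lock, or a single
// method that updates both under one lock.
```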
Runtime Difference

In general the behavior of properties is identical on all runtimes (see Runtime Versions and Platforms in Objective-C Runtime Programming Guide). There is one key difference: the modern runtime supports instance variable synthesis whereas the legacy runtime does not.

For @synthesize to work in the legacy runtime, you must either provide an instance variable with the same name and compatible type as the property or specify another existing instance variable in the @synthesize statement. With the modern runtime, if you do not provide an instance variable, the compiler adds one for you. For example, given the following class declaration and implementation:

    @interface MyClass : NSObject {
        float sameName;
        float otherName;
    }
    @property float sameName;
    @property float differentName;
    @property float noDeclaredIvar;
    @end

    @implementation MyClass
    @synthesize sameName;
    @synthesize differentName = otherName;
    @synthesize noDeclaredIvar;
    @end

the compiler for the legacy runtime would generate an error at @synthesize noDeclaredIvar, whereas the compiler for the modern runtime would add an instance variable to represent noDeclaredIvar.

CHAPTER 7 Associative References

    objc_setAssociatedObject(array, &overviewKey, overview, OBJC_ASSOCIATION_RETAIN);
    [overview release];
    // (1) overview valid
    [array release];
    // (2) overview invalid

At point (1), the string overview is still valid because the OBJC_ASSOCIATION_RETAIN policy specifies that the array retains the associated object. When the array is deallocated, however (at point 2), overview is released and so in this case also deallocated.

Retrieving Associated Objects

You retrieve an associated object using the Objective-C runtime function objc_getAssociatedObject. Continuing the example shown in Listing 7-1 (page 83), you could retrieve the overview from the array using the following line of code:

    NSString *associatedObject =
        (NSString *)objc_getAssociatedObject(array, &overviewKey);

Breaking Associations

To break an association, you typically use objc_setAssociatedObject, passing nil as the value. Continuing the example shown in Listing 7-1 (page 83), you could break the association between the array and the string overview using the following line of code:

    objc_setAssociatedObject(array, &overviewKey, nil, OBJC_ASSOCIATION_ASSIGN);

(Given that the associated object is being set to nil, the policy isn't actually important.)

To break all associations for an object, you can use objc_removeAssociatedObjects. In general, however, you are discouraged from using this function since it breaks all associations for all clients. You only use this function if you need to restore an object to "pristine condition."

Complete Example

The following program combines the code samples from the preceding sections.

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    static char overviewKey;

    int main (int argc, const char * argv[]) {
        NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

        NSArray *array = [[NSArray alloc] initWithObjects:@"One", @"Two", @"Three", nil];
        // For the purposes of illustration, use initWithFormat: to ensure we get a
        // deallocatable string
        NSString *overview = [[NSString alloc] initWithFormat:@"%@", @"First three numbers"];
        objc_setAssociatedObject(array, &overviewKey, overview, OBJC_ASSOCIATION_RETAIN);
        [overview release];

        NSString *associatedObject =
            (NSString *)objc_getAssociatedObject(array, &overviewKey);
        NSLog(@"associatedObject: %@", associatedObject);

        objc_setAssociatedObject(array, &overviewKey, nil, OBJC_ASSOCIATION_ASSIGN);
        [array release];

        [pool drain];
        return 0;
    }
CHAPTER 8 Fast Enumeration

Fast enumeration is a language feature that allows you to efficiently and safely enumerate over the contents of a collection using a concise syntax.

The for…in Feature

The syntax is defined as follows:

    for ( Type newVariable in expression ) { statements }

or

    Type existingItem;
    for ( existingItem in expression ) { statements }

In both cases, expression yields an object that conforms to the NSFastEnumeration protocol (see "Adopting Fast Enumeration" (page 87)). The iterating variable is set to each item in the returned object in turn, and the code defined by statements is executed. The iterating variable is set to nil when the loop ends by exhausting the source pool of objects. If the loop is terminated early, the iterating variable is left pointing to the last iteration item.

There are several advantages to using fast enumeration:

■ The enumeration is considerably more efficient than, for example, using NSEnumerator directly.
■ The syntax is concise.
■ Enumeration is "safe"—the enumerator has a mutation guard so that if you attempt to modify the collection during enumeration, an exception is raised.

Since mutation of the object during iteration is forbidden, you can perform multiple enumerations concurrently.

Adopting Fast Enumeration

Any class whose instances provide access to a collection of other objects can adopt the NSFastEnumeration protocol. The Cocoa collection classes—NSArray, NSDictionary, and NSSet—adopt this protocol, as does NSEnumerator. It should be obvious that in the cases of NSArray and NSSet the enumeration is over their contents. For other classes, the corresponding documentation should make clear what property is iterated over—for example, NSDictionary and the Core Data class NSManagedObjectModel provide support for fast enumeration; NSDictionary enumerates its keys, and NSManagedObjectModel enumerates its entities.
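A class that stores its objects in an existing Cocoa collection can adopt NSFastEnumeration by forwarding the protocol's single method, countByEnumeratingWithState:objects:count:, to that collection. This is a common sketch; the WordList class and its internal words array are illustrative assumptions:

```objc
@interface WordList : NSObject <NSFastEnumeration> {
    NSArray *words;   // assumed internal backing store
}
@end

@implementation WordList
// Delegate fast enumeration to the backing array, so a WordList
// instance can appear directly in a for...in loop.
- (NSUInteger)countByEnumeratingWithState:(NSFastEnumerationState *)state
                                  objects:(id *)stackbuf
                                    count:(NSUInteger)len
{
    return [words countByEnumeratingWithState:state objects:stackbuf count:len];
}
@end
```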
Using Fast Enumeration

The following code example illustrates using fast enumeration with NSArray and NSDictionary objects.

    NSArray *array = [NSArray arrayWithObjects:
            @"One", @"Two", @"Three", @"Four", nil];

    for (NSString *element in array) {
        NSLog(@"element: %@", element);
    }

    NSDictionary *dictionary = [NSDictionary dictionaryWithObjectsAndKeys:
        @"quattuor", @"four", @"quinque", @"five", @"sex", @"six", nil];

    NSString *key;
    for (key in dictionary) {
        NSLog(@"English: %@, Latin: %@", key, [dictionary objectForKey:key]);
    }

You can also use NSEnumerator objects with fast enumeration, as illustrated in the following example:

    NSArray *array = [NSArray arrayWithObjects:
            @"One", @"Two", @"Three", @"Four", nil];

    NSEnumerator *enumerator = [array reverseObjectEnumerator];
    for (NSString *element in enumerator) {
        if ([element isEqualToString:@"Three"]) {
            break;
        }
    }

    NSString *next = [enumerator nextObject];
    // next = "Two"

For collections or enumerators that have a well-defined order—such as an NSArray or an NSEnumerator instance derived from an array—the enumeration proceeds in that order, so simply counting iterations will give you the proper index into the collection if you need it:

    NSArray *array = /* assume this exists */;
    NSUInteger index = 0;

    for (id element in array) {
        NSLog(@"Element at index %u is: %@", index, element);
        index++;
    }

In other respects, the feature behaves like a standard for loop. You can use break to interrupt the iteration, and if you want to skip elements you can use a nested conditional statement as shown in the following example:

    NSArray *array = /* assume this exists */;

    for (id element in array) {
        if (/* some test for element */) {
            // statements that apply only to elements passing test
        }
    }

If you want to skip the first element then process no more than five further elements, you could do so as shown in this example:

    NSArray *array = /* assume this exists */;
    NSUInteger index = 0;

    for (id element in array) {
        if (index != 0) {
            NSLog(@"Element at index %u is: %@", index, element);
        }
        if (++index >= 6) {
            break;
        }
    }
CHAPTER 9 Enabling Static Behavior

This chapter explains how static typing works and discusses some other features of Objective-C, including ways to temporarily overcome its inherent dynamism.

Default Dynamic Behavior

By design, Objective-C objects are dynamic entities. As many decisions about them as possible are pushed from compile time to runtime:

■ The memory for objects is dynamically allocated at runtime by class methods that create new instances.
■ Objects are dynamically typed. In source code (at compile time), any object variable can be of type id no matter what the object's class is. The exact class of an id variable (and therefore its particular methods and data structure) isn't determined until the program runs.
■ Messages and methods are dynamically bound, as described in "Dynamic Binding" (page 18). A runtime procedure matches the method selector in the message to a method implementation that "belongs to" the receiver.

These features give object-oriented programs a great deal of flexibility and power, but there's a price to pay. In particular, the compiler can't check the exact types (classes) of id variables. To permit better compile-time type checking, and to make code more self-documenting, Objective-C allows objects to be statically typed with a class name rather than generically typed as id. It also lets you turn some of its object-oriented features off in order to shift operations from runtime back to compile time.

Note: Messages are somewhat slower than function calls, typically incurring an insignificant amount of overhead compared to the actual work performed. The exceptionally rare case where bypassing Objective-C's dynamism might be warranted can be proven by use of analysis tools like Shark or Instruments.

Static Typing

If a pointer to a class name is used in place of id in an object declaration,

    Rectangle *thisObject;

the compiler restricts the value of the declared variable to be either an instance of the class named in the declaration or an instance of a class that inherits from the named class. In the example above, thisObject can only be a Rectangle of some kind.

Statically typed objects have the same internal data structures as objects declared to be ids. The type doesn't affect the object; it affects only the amount of information given to the compiler about the object and the amount of information available to those reading the source code.
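As a small sketch (not part of the original text), the contrast between the two declaration styles looks like this; both objects behave identically at runtime, and only the compile-time checking differs:

```objc
// Assumes Rectangle is a class implementing display, as in this chapter's examples.
id         anObject   = [[Rectangle alloc] init];  // class known only at runtime
Rectangle *thisObject = [[Rectangle alloc] init];  // compiler can type-check messages

[anObject display];    // bound at runtime; the compiler accepts any message to id
[thisObject display];  // bound at runtime too, but checked at compile time
```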
Static typing also doesn't affect how the object is treated at runtime. Statically typed objects are dynamically allocated by the same class methods that create instances of type id. Messages sent to statically typed objects are dynamically bound, just as messages to objects typed id are. The exact type of a statically typed receiver is still determined at runtime as part of the messaging process. If Square is a subclass of Rectangle, the following code would still produce an object with all the instance variables of a Square, not just those of a Rectangle:

    Rectangle *thisObject = [[Square alloc] init];

A display message sent to thisObject

    [thisObject display];

performs the version of the method defined in the Square class, not the one in its Rectangle superclass.

By giving the compiler more information about an object, static typing opens up possibilities that are absent for objects typed id:

■ In certain situations, it allows for compile-time type checking.
■ It can free objects from the restriction that identically named methods must have identical return and argument types.
■ It permits you to use the structure pointer operator to directly access an object's instance variables.

The first two topics are discussed in the sections that follow. The third is covered in "Defining a Class" (page 35).

Type Checking

With the additional information provided by static typing, the compiler can deliver better type-checking services in two situations:

■ When a message is sent to a statically typed receiver, the compiler can make sure the receiver can respond. A warning is issued if the receiver doesn't have access to the method named in the message.
■ When a statically typed object is assigned to a statically typed variable, the compiler makes sure the types are compatible. A warning is issued if they're not.

An assignment can be made without warning, provided the class of the object being assigned is identical to, or inherits from, the class of the variable receiving the assignment. The following example illustrates this:

    Shape *aShape;
    Rectangle *aRect;

    aRect = [[Rectangle alloc] init];
    aShape = aRect;

Here aRect can be assigned to aShape because a Rectangle is a kind of Shape; the Rectangle class inherits from Shape. However, if the roles of the two variables are reversed and aShape is assigned to aRect, the compiler generates a warning; not every Shape is a Rectangle. (For reference, see Figure 1-2 (page 25), which shows the class hierarchy including Shape and Rectangle.)
There's no check, however, when the expression on either side of the assignment operator is an id. A statically typed object can be freely assigned to an id, or an id to a statically typed object. Because methods like alloc and init return ids, the compiler doesn't ensure that a compatible object is returned to a statically typed variable. The following code is error-prone, but is allowed nonetheless:

    Rectangle *aRect;
    aRect = [[Shape alloc] init];

Return and Argument Types

In general, methods in different classes that have the same selector (the same name) must also share the same return and argument types. This constraint is imposed by the compiler to allow dynamic binding. Because the class of a message receiver (and therefore class-specific details about the method it's asked to perform) can't be known at compile time, the compiler must treat all methods with the same name alike. When it prepares information on method return and argument types for the runtime system, it creates just one method description for each method selector.

However, when a message is sent to a statically typed object, the class of the receiver is known by the compiler, and the compiler has access to class-specific information about its methods. Therefore, the message is freed from the restrictions on its return and argument types.

Static Typing to an Inherited Class

An instance can be statically typed to its own class or to any class that it inherits from. All instances, for example, can be statically typed as NSObject. However, the compiler understands the class of a statically typed object only from the class name in the type designation, and it does its type checking accordingly. Typing an instance to an inherited class can therefore result in discrepancies between what the compiler thinks would happen at runtime and what actually happens.

For example, if you statically type a Rectangle instance as a Shape,

    Shape *myRectangle = [[Rectangle alloc] init];

the compiler will treat it as a Shape. If you send the object a message to perform a Rectangle method,

    BOOL solid = [myRectangle isFilled];

the compiler will complain. The isFilled method is defined in the Rectangle class, not in Shape. However, if you send it a message to perform a method that the Shape class knows about,

    [myRectangle display];

the compiler won't complain, even though Rectangle overrides the method. At runtime, Rectangle's version of the method is performed.

Similarly, suppose that the Upper class declares a worry method that returns a double,

    - (double)worry;

and the Middle subclass of Upper overrides the method and declares a new return type:

    - (int)worry;

If an instance is statically typed to the Upper class, the compiler will think that its worry method returns a double, and if an instance is typed to the Middle class, the compiler will think that worry returns an int. Errors will obviously result if a Middle instance is typed to the Upper class: the compiler will inform the runtime system that a worry message sent to the object returns a double, but at runtime it actually returns an int and generates an error.

Static typing can free identically named methods from the restriction that they must have identical return and argument types, but it can do so reliably only if the methods are declared in different branches of the class hierarchy.
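Because the compiler checks a statically typed object only against its declared class, a common defensive pattern (a sketch, not part of the original text) is to verify the actual class of an object received as id before assigning it to a statically typed variable. isKindOfClass: is a real NSObject method; the anObject variable here is hypothetical:

```objc
// anObject arrived as id, so the compiler cannot check its class.
id anObject = [self nextShapeFromQueue];   // hypothetical method returning id

Rectangle *aRect = nil;
if ([anObject isKindOfClass:[Rectangle class]]) {
    aRect = anObject;   // safe: anObject is a Rectangle or a subclass of it
}
```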
CHAPTER 10 Selectors

In Objective-C, "selector" has two meanings. It can be used to refer simply to the name of a method when it's used in a source-code message to an object. It also, though, refers to the unique identifier that replaces the name when the source code is compiled. Compiled selectors are of type SEL, and all methods with the same name have the same selector. You can use a selector to invoke a method on an object; this provides the basis for the implementation of the target-action design pattern in Cocoa.

Methods and Selectors

For efficiency, full ASCII names are not used as method selectors in compiled code. Instead, the compiler writes each method name into a table, then pairs the name with a unique identifier that represents the method at runtime. The runtime system makes sure each identifier is unique: no two selectors are the same, and all methods with the same name have the same selector.

SEL and @selector

Compiled selectors are assigned to a special type, SEL, to distinguish them from other data. Valid selectors are never 0. You must let the system assign SEL identifiers to methods; it's futile to assign them arbitrarily.

The @selector() directive lets you refer to the compiled selector, rather than to the full method name. Here, the selector for setWidth:height: is assigned to the setWidthHeight variable:

    SEL setWidthHeight;
    setWidthHeight = @selector(setWidth:height:);

It's most efficient to assign values to SEL variables at compile time with the @selector() directive. However, in some cases, you may need to convert a character string to a selector at runtime. You can do this with the NSSelectorFromString function:

    setWidthHeight = NSSelectorFromString(aBuffer);

Conversion in the opposite direction is also possible. The NSStringFromSelector function returns a method name for a selector:

    NSString *method;
    method = NSStringFromSelector(setWidthHeight);
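Because the runtime guarantees that all methods with the same name share one unique selector, selectors can be compared directly. This short sketch (not from the original text) shows a round trip through NSSelectorFromString followed by such a comparison:

```objc
// Convert a string to a selector at runtime, then compare it
// against a compile-time selector for the same method name.
SEL aSelector = NSSelectorFromString(@"setWidth:height:");

if (aSelector == @selector(setWidth:height:)) {
    NSLog(@"Both expressions yield the same unique selector.");
}
```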
Methods and Selectors

Compiled selectors identify method names, not method implementations. The display method for one class, for example, has the same selector as display methods defined in other classes. This is essential for polymorphism and dynamic binding; it lets you send the same message to receivers belonging to different classes. If there were one selector per method implementation, a message would be no different than a function call.

A class method and an instance method with the same name are assigned the same selector. However, because of their separate domains, there's no confusion between the two. A class could define a display class method in addition to a display instance method. Although identically named class methods and instance methods are represented by the same selector, they can have different argument and return types.

Method Return and Argument Types

The messaging routine has access to method implementations only through selectors, so it treats all methods with the same selector alike. It discovers the return type of a method, and the data types of its arguments, from the selector. Therefore, except for messages sent to statically typed receivers, dynamic binding requires all implementations of identically named methods to have the same return type and the same argument types. (Statically typed receivers are an exception to this rule, since the compiler can learn about the method implementation from the class type.)

Varying the Message at Runtime

The performSelector:, performSelector:withObject:, and performSelector:withObject:withObject: methods, defined in the NSObject protocol, take SEL identifiers as their initial arguments. All three methods map directly into the messaging function. For example,

    [friend performSelector:@selector(gossipAbout:) withObject:aNeighbor];

is equivalent to:

    [friend gossipAbout:aNeighbor];

These methods make it possible to vary a message at runtime, just as it's possible to vary the object that receives the message. Variable names can be used in both halves of a message expression:

    id helper = getTheReceiver();
    SEL request = getTheSelector();
    [helper performSelector:request];

In this example, the receiver (helper) is chosen at runtime (by the fictitious getTheReceiver function), and the method the receiver is asked to perform (request) is also determined at runtime (by the equally fictitious getTheSelector function).
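One way this runtime flexibility is commonly put to use is a simple command dispatcher that maps names to selectors. The command strings, handler methods, and controller object below are invented for this sketch; only performSelector: and the selector-conversion functions are real API:

```objc
// Hypothetical table mapping command names to method names.
// openDocument and closeDocument are assumed to exist on controller.
NSDictionary *commands = [NSDictionary dictionaryWithObjectsAndKeys:
    @"openDocument",  @"open",
    @"closeDocument", @"close",
    nil];

NSString *methodName = [commands objectForKey:userCommand];  // userCommand: hypothetical
if (methodName != nil) {
    [controller performSelector:NSSelectorFromString(methodName)];
}
```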
Note: performSelector: and its companion methods return an id. If the method that's performed returns a different type, it should be cast to the proper type. (However, casting doesn't work for all types; the method should return a pointer or a type compatible with a pointer.)

The Target-Action Design Pattern

In its treatment of user-interface controls, the Application Kit makes good use of the ability to vary both the receiver and the message.

NSControl objects are graphical devices that can be used to give instructions to an application. Most resemble real-world control devices such as buttons, switches, knobs, text fields, dials, menu items, and the like. In software, these devices stand between the application and the user. They interpret events coming from hardware devices like the keyboard and mouse and translate them into application-specific instructions. For example, a button labeled "Find" would translate a mouse click into an instruction for the application to start searching for something.

The Application Kit defines a template for creating control devices and defines a few "off-the-shelf" devices of its own. For example, the NSButtonCell class defines an object that you can assign to an NSMatrix instance and initialize with a size, a label, a picture, a font, and a keyboard alternative. When the user clicks the button (or uses the keyboard alternative), the NSButtonCell object sends a message instructing the application to do something. To do this, an NSButtonCell object must be initialized not just with an image, a size, and a label, but with directions on what message to send and who to send it to. Accordingly, an NSButtonCell instance can be initialized for an action message, the method selector it should use in the message it sends, and a target, the object that should receive the message.

    [myButtonCell setAction:@selector(reapTheWind:)];
    [myButtonCell setTarget:anObject];

The button cell sends the message using NSObject's performSelector:withObject: method. All action messages take a single argument, the id of the control device sending the message.

If Objective-C didn't allow the message to be varied, all NSButtonCell objects would have to send the same message; the name of the method would be frozen in the NSButtonCell source code. Instead of simply implementing a mechanism for translating user actions into action messages, button cells and other controls would have to constrain the content of the message. This would make it difficult for any object to respond to more than one button cell. There would either have to be one target for each button, or the target object would have to discover which button the message came from and act accordingly. Each time you rearranged the user interface, you would also have to re-implement the method that responds to the action message. This would be an unnecessary complication that Objective-C happily avoids.
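The essence of the target-action pattern described above can be sketched outside the Application Kit. MyControl, its fire method, and the target object are invented for this example; NSButtonCell's real implementation is considerably more elaborate:

```objc
// A minimal, hypothetical control: it stores a target and an action
// selector, and sends the action message when "activated".
@interface MyControl : NSObject {
    id  target;   // object to receive the action message
    SEL action;   // selector for the message to send
}
- (void)setTarget:(id)anObject;
- (void)setAction:(SEL)aSelector;
- (void)fire;     // stands in for the user activating the control
@end

@implementation MyControl
- (void)setTarget:(id)anObject   { target = anObject; }
- (void)setAction:(SEL)aSelector { action = aSelector; }
- (void)fire {
    // Action messages take a single argument: the sender.
    [target performSelector:action withObject:self];
}
@end
```

Because both the receiver (target) and the message (action) are ordinary runtime values, the same control class can drive any method on any object without either being named in the control's source code.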
Avoiding Messaging Errors

If an object receives a message to perform a method that isn't in its repertoire, an error results. It's the same sort of error as calling a nonexistent function. But because messaging occurs at runtime, the error often isn't evident until the program executes.

It's relatively easy to avoid this error when the message selector is constant and the class of the receiving object is known. As you write your programs, you can make sure that the receiver is able to respond. If the receiver is statically typed, the compiler performs this test for you.

However, if the message selector or the class of the receiver varies, it may be necessary to postpone this test until runtime. The respondsToSelector: method, defined in the NSObject class, determines whether a receiver can respond to a message. It takes the method selector as an argument and returns whether the receiver has access to a method matching the selector:

    if ( [anObject respondsToSelector:@selector(setOrigin::)] )
        [anObject setOrigin:0.0 :0.0];
    else
        fprintf(stderr, "%s can't be placed\n",
            [NSStringFromClass([anObject class]) UTF8String]);

The respondsToSelector: test is especially important when sending messages to objects that you don't have control over at compile time. For example, if you write code that sends a message to an object represented by a variable that others can set, you should make sure the receiver implements a method that can respond to the message.

Note: An object can also arrange to have the messages it receives forwarded to other objects if it can't respond to them directly itself. In that case, it appears from the outside that the object can handle the message, even though it responds to the message indirectly by assigning it to another object. See Message Forwarding in Objective-C Runtime Programming Guide for more information.
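The respondsToSelector: test is the conventional guard when messaging a delegate or other externally supplied object. This sketch assumes a hypothetical delegate variable; windowShouldClose: is borrowed from Cocoa's window-delegate convention purely as an example of an optional method:

```objc
// Only send the message if the delegate actually implements it;
// fall back to a sensible default otherwise.
BOOL shouldClose;
if ([delegate respondsToSelector:@selector(windowShouldClose:)]) {
    shouldClose = [delegate windowShouldClose:self];
} else {
    shouldClose = YES;   // assumed default when the method is absent
}
```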
CHAPTER 11 Exception Handling

The Objective-C language has an exception-handling syntax similar to that of Java and C++. Coupled with the use of the NSException, NSError, or custom classes, you can add robust error-handling to your programs. This article provides a summary of exception syntax and handling; for more details, see Exception Programming Topics.

Enabling Exception-Handling

Objective-C provides language-level support for exception handling. To turn on support for these features, use the -fobjc-exceptions switch of the GNU Compiler Collection (GCC) version 3.3 and later. (Note that this renders the application runnable only in Mac OS X v10.3 and later because runtime support for exception handling and synchronization is not present in earlier versions of the software.)

Exception Handling

An exception is a special condition that interrupts the normal flow of program execution. There are a variety of reasons why an exception may be generated (exceptions are typically said to be raised or thrown), by hardware as well as software. Examples include arithmetical errors such as division by zero, underflow or overflow, calling undefined instructions (such as attempting to invoke an unimplemented method), and attempting to access a collection element out of bounds.

Objective-C's exception support revolves around four compiler directives: @try, @catch, @throw, and @finally:

■ Code that can potentially throw an exception is enclosed in a @try block.
■ A @catch() block contains exception-handling logic for exceptions thrown in a @try block. You can have multiple @catch() blocks to catch different types of exception. (This is illustrated in "Catching Different Types of Exception" (page 100).)
■ You use the @throw directive to throw an exception, which is essentially an Objective-C object. You typically use an NSException object, but are not required to.
■ A @finally block contains code that must be executed whether an exception is thrown or not.

The example below depicts a simple exception-handling algorithm:

    Cup *cup = [[Cup alloc] init];

    @try {
        [cup fill];
    }
    @catch (NSException *exception) {
        NSLog(@"main: Caught %@: %@", [exception name], [exception reason]);
    }
    @finally {
        [cup release];
    }

Catching Different Types of Exception

To catch an exception thrown in a @try block, use one or more @catch() blocks following the @try block, as shown in Listing 11-1. The @catch() blocks should be ordered from most-specific to least-specific. That way you can tailor the processing of exceptions as groups.

Listing 11-1 An exception handler

    @try {
        ...
    }
    @catch (CustomException *ce) {   // 1
        ...
    }
    @catch (NSException *ne) {       // 2
        // Perform processing necessary at this level.
        ...
    }
    @catch (id ue) {
        ...
    }
    @finally {                       // 3
        // Perform processing necessary whether an exception occurred or not.
        ...
    }

The following list describes the numbered code lines:

1. Catches the most specific exception type.
2. Catches a more general exception type.
3. Performs any clean-up processing that must always be performed, whether exceptions were thrown or not.

Throwing Exceptions

To throw an exception you must instantiate an object with the appropriate information, such as the exception name and the reason it was thrown:

    NSException *exception = [NSException exceptionWithName:@"HotTeaException"
                                                     reason:@"The tea is too hot"
                                                   userInfo:nil];
You then throw the exception:

    @throw exception;

Inside a @catch() block, you can re-throw the caught exception using the @throw directive without an argument. This can help make your code more readable.

You are not limited to throwing NSException objects; you can throw any Objective-C object as an exception object. The NSException class provides methods that help in exception processing, but you can implement your own if you so desire. You can also subclass NSException to implement specialized types of exceptions, such as file-system exceptions or communications exceptions.

Important: In many environments, use of exceptions is fairly commonplace. For example, you might throw an exception to signal that a routine could not execute normally, such as when a file is missing or data could not be parsed correctly. Exceptions are resource-intensive in Objective-C, however. You should not use exceptions for general flow-control, or simply to signify errors. Instead, use the return value of a method or function to indicate that an error has occurred, and provide information about the problem in an error object. For more information, see Error Handling Programming Guide.
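Re-throwing with a bare @throw, as described above, might look like the following sketch: the handler performs some local cleanup and then passes the same exception on to the next enclosing handler. The cup object is reused from the earlier example:

```objc
@try {
    [cup fill];
}
@catch (NSException *exception) {
    NSLog(@"Cleaning up at this level before propagating");
    @throw;   // re-throws the caught exception unchanged
}
```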
CHAPTER 12 Threading

Objective-C provides support for thread synchronization and exception handling, which are explained in this article and "Exception Handling" (page 99).

Note: Using either of these features in a program renders the application runnable only in Mac OS X v10.3 and later because runtime support for exception handling and synchronization is not present in earlier versions of the software. To turn on support for these features, use the -fobjc-exceptions switch of the GNU Compiler Collection (GCC) version 3.3 and later.

Synchronizing Thread Execution

Objective-C supports multithreading in applications. This means that two threads can try to modify the same object at the same time, a situation that can cause serious problems in a program. To protect sections of code from being executed by more than one thread at a time, Objective-C provides the @synchronized() directive.

The @synchronized() directive locks a section of code for use by a single thread. Other threads are blocked until the thread exits the protected code, that is, when execution continues past the last statement in the @synchronized() block.

The @synchronized() directive takes as its only argument any Objective-C object, including self. This object is known as a mutual exclusion semaphore or mutex. It allows a thread to lock a section of code to prevent its use by other threads. You should use separate semaphores to protect different critical sections of a program. It's safest to create all the mutual exclusion objects before the application becomes multithreaded, to avoid race conditions.

Listing 12-1 shows an example of code that uses self as the mutex to synchronize access to the instance methods of the current object. You can take a similar approach to synchronize the class methods of the associated class, using the Class object instead of self. In the latter case, of course, only one thread at a time is allowed to execute a class method because there is only one class object that is shared by all callers.

Listing 12-1 Locking a method using self

    - (void)criticalMethod
    {
        @synchronized(self) {
            // Critical code.
            ...
        }
    }

Listing 12-2 shows a general approach. Before executing a critical process, the code obtains a semaphore from the Account class and uses it to lock the critical section. The Account class could create the semaphore in its initialize method.

Listing 12-2 Locking a method using a custom semaphore

    Account *account = [Account accountFromString:[accountField stringValue]];

    // Get the semaphore.
    id accountSemaphore = [Account semaphore];

    @synchronized(accountSemaphore) {
        // Critical code.
        ...
    }

The Objective-C synchronization feature supports recursive and reentrant code. A thread can use a single semaphore several times in a recursive manner; other threads are blocked from using it until the thread releases all the locks obtained with it, that is, until every @synchronized() block is exited normally or through an exception.

When code in an @synchronized() block throws an exception, the Objective-C runtime catches the exception, releases the semaphore (so that the protected code can be executed by other threads), and re-throws the exception to the next exception handler.
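The recursive behavior described above can be sketched as follows: a thread that already holds the mutex may enter another @synchronized() block on the same object without deadlocking. The method names are invented for this example:

```objc
// Hypothetical methods on some class; self serves as the mutex for both.
- (void)outerMethod
{
    @synchronized(self) {
        // The same thread may re-enter a @synchronized(self) block,
        // for example by calling another synchronized method.
        [self innerMethod];
    }
}

- (void)innerMethod
{
    @synchronized(self) {   // no deadlock: the lock is recursive
        // Critical code.
    }
}
```

Other threads remain blocked until this thread has exited both blocks, releasing all the locks it obtained on self.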
CHAPTER 13 Remote Messaging

Like most other programming languages, Objective-C was initially designed for programs that are executed as a single process in a single address space.

Nevertheless, the object-oriented model, where communication takes place between relatively self-contained units through messages that are resolved at runtime, would seem well suited for interprocess communication as well. It's not hard to imagine Objective-C messages between objects that reside in different address spaces (that is, in different tasks) or in different threads of execution of the same task. For example, in a typical server-client interaction, the client task might send its requests to a designated object in the server, and the server might target specific client objects for the notifications and other information it sends.

Or imagine an interactive application that needs to do a good deal of computation to carry out a user command. It could simply display a dialog telling the user to wait while it was busy, or it could isolate the processing work in a subordinate task, leaving the main part of the application free to accept user input. Objects in the two tasks would communicate through Objective-C messages.

Distributed Objects

Remote messaging in Objective-C requires a runtime system that can establish connections between objects in different address spaces, recognize when a message is intended for an object in a remote address space, and transfer data from one address space to another. It must also mediate between the separate schedules of the two tasks; it has to hold messages until their remote receivers are free to respond to them.

Cocoa includes a distributed objects architecture that is essentially this kind of extension to the runtime system. Using distributed objects, you can send Objective-C messages to objects in other tasks or have messages executed in other threads of the same task. (When remote messages are sent between two threads of the same task, the threads are treated exactly like threads in different tasks.) Note that Cocoa's distributed objects system is built on top of the runtime system; it doesn't alter the fundamental behavior of your Cocoa objects.

To send a remote message, an application must first establish a connection with the remote receiver. Establishing the connection gives the application a proxy for the remote object in its own address space. It then communicates with the remote object through the proxy. The proxy assumes the identity of the remote object; it has no identity of its own. The application is able to regard the proxy as if it were the remote object; for most purposes, it is the remote object.

Remote messaging is illustrated in Figure 13-1, where object A communicates with object B through a proxy, and messages for B wait in a queue until B is ready to respond to them:
Figure 13-1 Remote Messages (object A sends messages to object B by way of a local "Proxy for B")

The sender and receiver are in different tasks and are scheduled independently of each other. So there's no guarantee that the receiver is free to accept a message when the sender is ready to send it. Because of this, arriving messages are placed in a queue and retrieved at the convenience of the receiving application.

A proxy isn't fully transparent, however. For instance, a proxy doesn't allow you to directly set and get an object's instance variables. It isn't a copy of the remote object, but a lightweight substitute for it. In a sense, it's transparent; it simply passes the messages it receives on to the remote receiver and manages the interprocess communication. A proxy doesn't act on behalf of the remote object or need access to its class. Its main function is to provide a local address for an object that wouldn't otherwise have one.

A remote receiver is typically anonymous; its class is hidden inside the remote application. The sending application doesn't need to know how that application is designed or what classes it uses. It doesn't need to use the same classes itself. All it needs to know is what messages the remote object responds to.

Therefore, an object that's designated to receive remote messages advertises its interface in a formal protocol. Both the sending and the receiving application declare the protocol; they both import the same protocol declaration. The receiving application declares it because the remote object must conform to the protocol. The sending application declares it to inform the compiler about the messages it sends and because it may use the conformsToProtocol: method and the @protocol() directive to test the remote receiver. The sending application doesn't have to implement any of the methods in the protocol; it declares the protocol only because it initiates messages to the remote receiver.

The distributed objects architecture, including the NSProxy and NSConnection classes, is documented in the Foundation framework reference and Distributed Objects Programming Topics.

Language Support

Remote messaging raises not only a number of intriguing possibilities for program design, it also raises some interesting issues for the Objective-C language. Most of the issues are related to the efficiency of remote messaging and the degree of separation that the two tasks should maintain while they're communicating with each other.

So that programmers can give explicit instructions about the intent of a remote message, Objective-C defines six type qualifiers that can be used when declaring methods inside a formal protocol:

    oneway
    in
    out
    inout
    bycopy
two-way (or “round trip”) remote procedure calls (RPCs) like this one. However. round-trip messages are often called synchronous. to indicate that a method is used only for asynchronous messages: . two underlying messages are required—one message to get the remote object to invoke the method. When invoked. its implementation of the protocol methods can use the same modifiers that are used to declare the methods. and send back an indication that it has finished. Pointer Arguments Next. Synchronous messages are the default. In the meantime. Synchronous and Asynchronous Messages Consider first a method with just a simple return value: . the method is invoked and the return value provided directly to the sender. such as oneway float or oneway id. Objective-C provides a return type modifier. All Rights Reserved. allowing the receiver to get to the task when it can. Waiting for the receiver to finish. Sometimes it’s sufficient simply to dispatch the remote message and return. 107 . the sender can go on to other things. The sending application waits for the receiving application to invoke the method. if a class or category adopts a protocol. The following sections explain how these modifiers are used. it’s not always necessary or a good idea to wait for a reply. complete its processing. An asynchronous message can’t have a valid return value. they can’t be used inside class and category declarations. the only such combination that makes any sense is oneway void.CHAPTER 13 Remote Messaging byref These modifiers are restricted to formal protocols.(oneway void)waltzAtWill. A pointer can be used to pass information to the receiver by reference. at bottom.” For this reason.(BOOL)canDance. This is illustrated in the figure below: Figure 13-2 Round-Trip Message A Proxy for B initial message B return information Most remote messages are. But when the receiver is in a remote application. consider methods that take pointer arguments. of keeping them both “in sync. 
the method looks at what’s stored in the address it’s passed. However. When a canDance message is sent to a receiver in the same application. Although oneway is a type qualifier (like const) and can be used in combination with a specific type name. and the other message to send back the result of the remote calculation. has the advantage of coordinating the two communicating applications. along with any return information requested. oneway. Language Support 2010-07-13 | © 2010 Apple Inc. even if no information is returned.
- (void)setTune:(struct tune *)aSong
{
    tune = *aSong;
}

The same sort of argument can also be used to return information by reference. The method uses the pointer to find where it should place the information requested in the message:

- (void)getTune:(struct tune *)theSong
{
    ...
    *theSong = tune;
}

The way the pointer is used makes a difference in how the remote message is carried out. In neither case can the pointer simply be passed to the remote object unchanged; it points to a memory location in the sender’s address space and would not be meaningful in the address space of the remote receiver. The runtime system for remote messaging must make some adjustments behind the scenes.

If the argument is used to pass information by reference, the runtime system must dereference the pointer, ship the value it points to over to the remote application, store the value in an address local to that application, and pass that address to the remote receiver. If, on the other hand, the pointer is used to return information by reference, the value it points to doesn’t have to be sent to the other application. Instead, a value from the other application must be sent back and written into the location indicated by the pointer.

In the first case, information is passed on the first leg of the round trip. In the second case, information is returned on the second leg of the round trip. Because these cases result in very different actions on the part of the runtime system for remote messaging, Objective-C provides type modifiers that can clarify the programmer’s intention:

■ The type modifier in indicates that information is being passed in a message:

- (void)setTune:(in struct tune *)aSong;

■ The modifier out indicates that an argument is being used to return information by reference:

- (void)getTune:(out struct tune *)theSong;

■ A third modifier, inout, indicates that an argument is used both to provide information and to get information back:

- (void)adjustTune:(inout struct tune *)aSong;

The Cocoa distributed objects system takes inout to be the default modifier for all pointer arguments except those declared const, for which in is the default. inout is the safest assumption, but also the most time-consuming since it requires passing information in both directions. The only modifier that makes sense for arguments passed by value (non-pointers) is in. While in can be used with any kind of argument, out and inout make sense only for pointers.

In C, pointers are sometimes used to represent composite values. For example, a string is represented as a character pointer (char *). Although in notation and implementation there’s a level of indirection here, in concept there’s not. Conceptually, a string is an entity in and of itself, not a pointer to something else.
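The three pointer qualifiers can be gathered into a single protocol declaration. In this sketch the tune struct’s field is invented for illustration; only the qualifier placement follows the document:

```objc
// Hypothetical sketch; the tune struct's layout is invented.
struct tune {
    int tempo;
};

@protocol TuneServer
- (void)setTune:(in struct tune *)aSong;        // value shipped to the receiver
- (void)getTune:(out struct tune *)theSong;     // value shipped back to the sender
- (void)adjustTune:(inout struct tune *)aSong;  // value shipped both ways
@end
```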
In cases like this, the distributed objects system automatically dereferences the pointer and passes whatever it points to as if by value. Therefore, the out and inout modifiers make no sense with simple character pointers. It takes an additional level of indirection in a remote message to pass or return a string by reference:

- (void)getTuneTitle:(out char **)theTitle;

The same is true of objects:

- (void)adjustRectangle:(inout Rectangle **)theRect;

These conventions are enforced at runtime, not by the compiler.

Proxies and Copies

Finally, consider a method that takes an object as an argument:

- (void)danceWith:(id)aPartner;

A danceWith: message passes an object id to the receiver. If the sender and the receiver are in the same application, they would both be able to refer to the same aPartner object.

This is true even if the receiver is in a remote application, except that the receiver needs to refer to the object through a proxy (since the object isn’t in its address space). The pointer that danceWith: delivers to a remote receiver is actually a pointer to the proxy. Messages sent to the proxy would be passed across the connection to the real object, and any return information would be passed back to the remote application.

There are times when proxies may be unnecessarily inefficient, when it’s better to send a copy of the object to the remote process so that it can interact with it directly in its own address space. To give programmers a way to indicate that this is intended, Objective-C provides a bycopy type modifier:

- (void)danceWith:(bycopy id)aClone;

bycopy can also be used for return values:

- (bycopy)dancer;

It can similarly be used with out to indicate that an object returned by reference should be copied rather than delivered in the form of a proxy:

- (void)getDancer:(bycopy out id *)theDancer;

Note: When a copy of an object is passed to another application, it cannot be anonymous. The application that receives the object must have the class of the object loaded in its address space.

bycopy makes so much sense for certain classes—classes that are intended to contain a collection of other objects, for instance—that often these classes are written so that a copy is sent to a remote receiver, instead of the usual reference. You can override this behavior with byref, thereby specifying that objects passed to a method or objects returned from a method should be passed or returned by reference. Since passing by reference is the default behavior for the vast majority of Objective-C objects, you will rarely, if ever, make use of the byref keyword.

The only type that it makes sense for bycopy or byref to modify is an object, whether dynamically typed id or statically typed by a class name.
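As a sketch, the delivery choices for object arguments and return values might be declared together in one protocol (the selector names are hypothetical, kept distinct so the declarations don’t collide):

```objc
// Hypothetical protocol sketch; selector names are illustrative only.
@protocol DanceFloor
- (void)danceWith:(id)aPartner;               // remote receiver gets a proxy
- (void)danceWithCopy:(bycopy id)aClone;      // remote receiver gets a copy
- (bycopy)dancer;                             // returned object is copied
- (void)getDancer:(bycopy out id *)theDancer; // copied and returned by reference
@end
```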
For instance, you could write a formal protocol foo as follows:

@protocol foo
- (bycopy)array;
@end

A class or category can then adopt your protocol foo.
CHAPTER 14 Using C++ With Objective-C

Apple’s Objective-C compiler allows you to freely mix C++ and Objective-C code in the same source file. This Objective-C/C++ language hybrid is called Objective-C++. With it you can make use of existing C++ libraries from your Objective-C applications.

Note: Xcode requires that file names have a “.mm” extension for the Objective-C++ extensions to be enabled by the compiler.

Mixing Objective-C and C++ Language Features

In Objective-C++, you can call methods from either language in C++ code and in Objective-C methods. Pointers to objects in either language are just pointers, and as such can be used anywhere. For example, you can include pointers to Objective-C objects as data members of C++ classes, and you can include pointers to C++ objects as instance variables of Objective-C classes. Listing 14-1 illustrates this.

Listing 14-1 Using C++ and Objective-C instances as instance variables

/* Hello.mm
 * Compile with: g++ -x objective-c++ -framework Foundation Hello.mm -o hello
 */
#import <Foundation/Foundation.h>

class Hello {
private:
    id greeting_text; // holds an NSString
public:
    Hello() {
        greeting_text = @"Hello, world!";
    }
    Hello(const char* initial_greeting_text) {
        greeting_text = [[NSString alloc] initWithUTF8String:initial_greeting_text];
    }
    void say_hello() {
        printf("%s\n", [greeting_text UTF8String]);
    }
};

@interface Greeting : NSObject {
@private
    Hello *hello;
}
- (id)init;
- (void)dealloc;
- (void)sayGreeting;
- (void)sayGreeting:(Hello*)greeting;
@end

@implementation Greeting
- (id)init {
    self = [super init];
    if (self) {
        hello = new Hello();
    }
    return self;
}
- (void)dealloc {
    delete hello;
    [super dealloc];
}
- (void)sayGreeting {
    hello->say_hello();
}
- (void)sayGreeting:(Hello*)greeting {
    greeting->say_hello();
}
@end

int main() {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    Greeting *greeting = [[Greeting alloc] init];
    [greeting sayGreeting];                     // > Hello, world!

    Hello *hello = new Hello("Bonjour, monde!");
    [greeting sayGreeting:hello];               // > Bonjour, monde!

    delete hello;
    [greeting release];
    [pool release];
    return 0;
}

As you can declare C structs in Objective-C interfaces, you can also declare C++ classes in Objective-C interfaces. As with C structs, C++ classes defined within an Objective-C interface are globally-scoped, not nested within the Objective-C class. (This is consistent with the way in which standard C—though not C++—promotes nested struct definitions to file scope.)

To allow you to conditionalize your code based on the language variant, the Objective-C++ compiler defines both the __cplusplus and the __OBJC__ preprocessor constants, as specified by (respectively) the C++ and Objective-C language standards.

As previously noted, Objective-C++ does not allow you to inherit C++ classes from Objective-C objects, nor does it allow you to inherit Objective-C classes from C++ objects:

class Base { /* ... */ };
@interface ObjCClass: Base ... @end    // ERROR!
class Derived: public ObjCClass ...    // ERROR!
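The __cplusplus and __OBJC__ constants mentioned above can be combined to select among the four language variants. A minimal sketch of a shared header (the comments note which compilers define which macro):

```objc
// In a header shared between C, C++, Objective-C, and Objective-C++ code:
#if defined(__OBJC__) && defined(__cplusplus)
    // Compiled as Objective-C++: both language extensions are available.
#elif defined(__OBJC__)
    // Compiled as plain Objective-C.
#elif defined(__cplusplus)
    // Compiled as plain C++.
#else
    // Compiled as plain C.
#endif
```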
Unlike Objective-C, objects in C++ are statically typed, with runtime polymorphism available as an exceptional case. The object models of the two languages are thus not directly compatible. More fundamentally, the layout of Objective-C and C++ objects in memory is mutually incompatible, meaning that it is generally impossible to create an object instance that would be valid from the perspective of both languages. Hence, the two type hierarchies cannot be intermixed.

You can declare a C++ class within an Objective-C class declaration. The compiler treats such classes as having been declared in the global namespace, as follows:

@interface Foo {
    class Bar { ... }; // OK
}
@end

Bar *barPtr; // OK

Objective-C allows C structures (whether declared inside of an Objective-C declaration or not) to be used as instance variables:

@interface Foo {
    struct CStruct { ... };
    struct CStruct bigIvar; // OK
}
@end

On Mac OS X 10.4 and later, if you set the fobjc-call-cxx-cdtors compiler flag, you can use instances of C++ classes containing virtual functions and nontrivial user-defined zero-argument constructors and destructors as instance variables. (The fobjc-call-cxx-cdtors compiler flag is set by default in gcc-4.2.) Constructors are invoked in the alloc method (specifically, inside class_createInstance), in declaration order immediately after the Objective-C object of which they are a member is allocated. The constructor used is the “public no-argument in-place constructor.” Destructors are invoked in the dealloc method (specifically, inside object_dispose), in reverse declaration order immediately before the Objective-C object of which they are a member is deallocated.
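Under the assumptions just stated (Mac OS X 10.4 or later with the fobjc-call-cxx-cdtors flag set), a C++ object with a nontrivial constructor can be embedded directly in an Objective-C object. The class name in this sketch is hypothetical:

```objc
// Hypothetical sketch: a C++ std::string lives inside an Objective-C object.
// Its constructor runs during alloc; its destructor runs during dealloc.
#import <Foundation/Foundation.h>
#include <string>

@interface Tracker : NSObject {
@private
    std::string label; // nontrivial C++ instance variable
}
@end

@implementation Tracker
@end
```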
Mac OS X v10.3 and earlier: The following cautions apply only to Mac OS X v10.3 and earlier.

Objective-C++ similarly strives to allow C++ class instances to serve as instance variables. This is possible as long as the C++ class in question (along with all of its superclasses) does not have any virtual member functions defined. If any virtual member functions are present, the C++ class may not serve as an Objective-C instance variable. Similarly, if a C++ class has any user-defined constructors or destructors, they are not called. The compiler emits a warning in such cases:

#import <Cocoa/Cocoa.h>

struct Class0 { void foo(); };
struct Class1 { virtual void foo(); };
struct Class2 { Class2(int i, int j); };

@interface Foo : NSObject {
    Class0 class0;  // OK
    Class1 class1;  // ERROR!
    Class1 *ptr;    // OK—call 'ptr = new Class1()' from Foo's init,
                    // 'delete ptr' from Foo's dealloc
    Class2 class2;  // WARNING - constructor not called!
}
@end

C++ requires each instance of a class containing virtual functions to contain a suitable virtual function table pointer. However, the Objective-C runtime cannot initialize the virtual function table pointer, because it is not familiar with the C++ object model. Similarly, the Objective-C runtime cannot dispatch calls to C++ constructors or destructors for those objects.

Objective-C does not have a notion of nested namespaces. You cannot declare Objective-C classes within C++ namespaces, nor can you declare namespaces within Objective-C classes.

Objective-C classes, protocols, and categories cannot be declared inside a C++ template, nor can a C++ template be declared inside the scope of an Objective-C interface, protocol, or category. However, Objective-C classes may serve as C++ template parameters, and C++ template parameters can also be used as receivers or parameters (though not as selectors) in Objective-C message expressions.

C++ Lexical Ambiguities and Conflicts

There are a few identifiers that are defined in the Objective-C header files that every Objective-C program must include. These identifiers are id, Class, SEL, IMP, and BOOL.

Inside an Objective-C method, the compiler pre-declares the identifiers self and super, similarly to the keyword this in C++. However, unlike the C++ this keyword, self and super are context-sensitive; they may be used as ordinary identifiers outside of Objective-C methods.

In the parameter list of methods within a protocol, there are five more context-sensitive keywords (oneway, in, out, inout, and bycopy). These are not keywords in any other contexts.

From an Objective-C programmer's point of view, C++ adds quite a few new keywords. You can still use C++ keywords as a part of an Objective-C selector, so the impact isn’t too severe, but you cannot use them for naming Objective-C classes or instance variables. For example, even though class is a C++ keyword, you can still use the NSObject method class:

[foo class]; // OK

However, you cannot use class as the name of a variable:

NSObject *class; // Error

In Objective-C, the names for classes and categories live in separate namespaces. That is, both @interface foo and @interface(foo) can exist in the same source code. In Objective-C++, you can also have a category whose name matches that of a C++ class or structure.

Protocol and template specifiers use the same syntax for different purposes:

id<someProtocolName> foo;
TemplateType<SomeTypeName> bar;

To avoid this ambiguity, the compiler doesn’t permit id to be used as a template name.

Finally, there is a lexical ambiguity in C++ when a label is followed by an expression that mentions a global name, as in:

label: ::global_name = 3;

The space after the first colon is required. Objective-C++ adds a similar case, which also requires a space:

receiver selector: ::global_c++_name;

Limitations

Objective-C++ does not add C++ features to Objective-C classes, nor does it add Objective-C features to C++ classes. For example, you cannot use Objective-C syntax to call a C++ object, you cannot add constructors or destructors to an Objective-C object, and you cannot use the keywords this and self interchangeably. The class hierarchies are separate; a C++ class cannot inherit from an Objective-C class, and an Objective-C class cannot inherit from a C++ class. In addition, multi-language exception handling is not supported. That is, an exception thrown in Objective-C code cannot be caught in C++ code and, conversely, an exception thrown in C++ code cannot be caught in Objective-C code. For more information on exceptions in Objective-C, see “Exception Handling” (page 99).
APPENDIX A Language Summary

Objective-C adds a small number of constructs to the C language and defines a handful of conventions for effectively interacting with the runtime system. This appendix lists all the additions to the language but doesn’t go into great detail. For more information, see the other chapters in this document.

Defined Types

The principal types used in Objective-C are defined in objc/objc.h. They are:

Type   Definition
id     An object (a pointer to its data structure).
Class  A class object (a pointer to the class data structure).
SEL    A selector, a compiler-assigned code that identifies a method name.
IMP    A pointer to a method implementation that returns an id.
BOOL   A Boolean value, either YES or NO.

id can be used to type any kind of object, class, or instance. In addition, class names can be used as type names to statically type instances of a class. A statically typed instance is declared to be a pointer to its class or to any class it inherits from. Note that the type of BOOL is char.
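A short sketch of the defined types in use (the function and variable names are illustrative; Foundation is assumed for the NSString and NSObject machinery):

```objc
#import <Foundation/Foundation.h>

void demonstrateDefinedTypes(void) {
    id anyObject = @"some text";              // id holds any kind of object
    Class itsClass = [anyObject class];       // Class points to a class object
    SEL selector = @selector(length);         // SEL identifies a method name
    BOOL responds = [anyObject respondsToSelector:selector]; // YES or NO
    if (responds) {
        NSLog(@"%@ responds to %@", itsClass, NSStringFromSelector(selector));
    }
}
```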
The objc.h header file also defines these useful terms:

Type  Definition
nil   A null object pointer, (id)0.
Nil   A null class pointer, (Class)0.
NO    A boolean false value, (BOOL)0.
YES   A boolean true value, (BOOL)1.

Preprocessor Directives

The preprocessor understands these special notations:

Notation  Definition
#import   Imports a header file. This directive is identical to #include, except that it doesn’t include the same file more than once.
//        Begins a comment that continues to the end of the line.

Compiler Directives

Directives to the compiler begin with “@”. The following directives are used to declare and define classes, categories, and protocols:

Directive        Definition
@interface       Begins the declaration of a class or category interface.
@implementation  Begins the definition of a class or category.
@protocol        Begins the declaration of a formal protocol.
@end             Ends the declaration/definition of a class, category, or protocol.

The following mutually exclusive directives specify the visibility of instance variables:

Directive   Definition
@private    Limits the scope of an instance variable to the class that declares it.
@protected  Limits instance variable scope to declaring and inheriting classes.
@public     Removes restrictions on the scope of instance variables.

The default is @protected.
The following directives support the declared properties feature (see “Declared Properties” (page 67)):

Directive    Definition
@property    Begins the declaration of a declared property.
@synthesize  Requests that, for the properties whose names follow, the compiler generate accessor methods for which there are no custom implementations.
@dynamic     Instructs the compiler not to generate a warning if it cannot find implementations of accessor methods associated with the properties whose names follow.

These directives support exception handling:

Directive  Definition
@try       Defines a block within which exceptions can be thrown.
@throw     Throws an exception object.
@catch()   Catches an exception thrown within the preceding @try block.
@finally   Defines a block of code that is executed whether exceptions were thrown or not in a preceding @try block.

In addition, there are directives for these particular purposes:

Directive                 Definition
@class                    Declares the names of classes defined elsewhere.
@selector(method_name)    Returns the compiled selector that identifies method_name.
@protocol(protocol_name)  Returns the protocol_name protocol (an instance of the Protocol class). (@protocol is also valid without (protocol_name) for forward declarations.)
@encode(type_spec)        Yields a character string that encodes the type structure of type_spec.
@"string"                 Defines a constant NSString object in the current module and initializes the object with the specified string. On Mac OS X v10.4 and earlier, the string must be 7-bit ASCII-encoded. On Mac OS X v10.5 and later (with Xcode 3.0 and later), you can also use UTF-16 encoded strings. (The runtime from Mac OS X v10.2 and later supports UTF-16 encoded strings, so if you use Mac OS X v10.5 to compile an application for Mac OS X v10.2 and later, you can use UTF-16 encoded strings.)
@"string1" @"string2" ... @"stringN"  Defines a constant NSString object in the current module. The string created is the result of concatenating the strings specified in the directives.
@synchronized()                       Defines a block of code that must be executed only by one thread at a time.

Classes

A new class is declared with the @interface directive. The interface file for its superclass must be imported:

#import "ItsSuperclass.h"

@interface ClassName : ItsSuperclass < protocol_list >
{
    instance variable declarations
}
method declarations
@end

Everything but the compiler directives and class name is optional. If the colon and superclass name are omitted, the class is declared to be a new root class. If any protocols are listed, the header files where they’re declared must also be imported.

A file containing a class definition imports its own interface:

#import "ClassName.h"

@implementation ClassName
method definitions
@end

Categories

A category is declared in much the same way as a class. The interface file that declares the class must be imported:

#import "ClassName.h"

@interface ClassName ( CategoryName ) < protocol list >
method declarations
@end

The protocol list and method declarations are optional. If any protocols are listed, the header files where they’re declared must also be imported.

Like a class definition, a file containing a category definition imports its own interface:
#import "CategoryName.h"

@implementation ClassName ( CategoryName )
method definitions
@end

Formal Protocols

Formal protocols are declared using the @protocol directive. The protocol must import the header files that declare any protocols it incorporates. In a class or category declaration, the @required directive specifies that following methods must be implemented by a class that adopts the protocol, and the @optional directive specifies that following methods are optional. The default is @required.

You can create a forward reference to a protocol using the @protocol directive in the following manner:

@protocol ProtocolName;

Protocol names listed within angle brackets (<...>) are used to do three different things:

■ In a protocol declaration, to incorporate other protocols (as shown earlier)
■ In a class or category declaration, to adopt the protocol (as shown in “Classes” (page 120) and “Categories” (page 120))
■ In a type specification, to limit the type to objects that conform to the protocol

Within source code, protocols are referred to using the similar @protocol() directive, where the parentheses enclose the protocol name.

Within protocol declarations, these type qualifiers support remote messaging:

Type Qualifier  Definition
oneway          The method is for asynchronous messages and has no valid return type.
in              The argument passes information to the remote receiver.
out             The argument gets information returned by reference.
inout           The argument both passes information and gets information.
bycopy          A copy of the object, not a proxy, should be passed or returned.
Type Qualifier  Definition
byref           A reference to the object, not a copy, should be passed or returned.

Method Declarations

The following conventions are used in method declarations:

■ A “+” precedes declarations of class methods.
■ A “-” precedes declarations of instance methods, for example:

  - (void)setWidth:(int)newWidth height:(int)newHeight

■ Argument and return types are declared using the C syntax for type casting.
■ Arguments are declared after colons (:). Typically, a label describing the argument precedes the colon—the following example is valid but is considered bad style:

  - (void)setWidthAndHeight:(int)newWidth :(int)newHeight

  Both labels and colons are considered part of the method name.
■ The default return and argument type for methods is id, not int as it is for functions. (However, the modifier unsigned when used without a following type always means unsigned int.) Methods with no other valid return typically return void.

Method Implementations

Each method implementation is passed two hidden arguments:

■ The receiving object (self)
■ The selector for the method (_cmd)

Within the implementation, both self and super refer to the receiving object. super replaces self as the receiver of a message to indicate that only methods inherited by the implementation should be performed in response to the message.

Deprecation Syntax

Syntax is provided to mark methods as deprecated:

@interface SomeClass
-method __attribute__((deprecated));
@end

or:

#include <AvailabilityMacros.h>
@interface SomeClass
-method DEPRECATED_ATTRIBUTE; // or some other deployment-target-specific macro
@end

This syntax is available only in Objective-C 2.0 and later.

Naming Conventions

The names of files that contain Objective-C source code have the .m extension. Files that declare class and category interfaces or that declare protocols have the .h extension typical of header files.

Class, category, and protocol names generally begin with an uppercase letter; the names of methods and instance variables typically begin with a lowercase letter. The names of variables that hold instances usually also begin with lowercase letters. Method names beginning with “_”, a single underscore character, are reserved for use by Apple.

In Objective-C, identical names that serve different purposes don’t clash. Within a class, names can be freely assigned:

■ A class can declare methods with the same names as methods in other classes.
■ A class can declare instance variables with the same names as variables in other classes.
■ An instance method can have the same name as a class method.
■ A method can have the same name as an instance variable.

Likewise, protocols and categories of the same class have protected name spaces:

■ A protocol can have the same name as a class, a category, or anything else.
■ A category of one class can have the same name as a category of another class.

However, class names are in the same name space as global variables and defined types. A program can’t have a global variable with the same name as a class.
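The method declaration and implementation conventions summarized in this appendix can be sketched together; the Rectangle class here is hypothetical:

```objc
#import <Foundation/Foundation.h>

@interface Rectangle : NSObject
+ (id)rectangle;                                       // class method ("+")
- (void)setWidth:(int)newWidth height:(int)newHeight;  // instance method ("-")
@end

@implementation Rectangle
+ (id)rectangle {
    // In a class method, self is the class object itself.
    return [[[self alloc] init] autorelease];
}
- (void)setWidth:(int)newWidth height:(int)newHeight {
    // Two hidden arguments: self (the receiver) and _cmd (the selector).
    NSLog(@"%@ was sent %@", self, NSStringFromSelector(_cmd));
}
@end
```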
REVISION HISTORY

Document Revision History

This table describes the changes to The Objective-C Programming Language.

Date        Notes
2010-07-13  Updated to show the revised initialization pattern.
2009-10-19  Made several minor bug fixes and clarifications.
2009-08-12  Updated article on Mixing Objective-C and C++. Corrected typographical errors.
2009-05-06  Corrected minor typographical errors.
2009-02-04  Corrected typographical errors.
2008-11-19  Clarified the discussion of sending messages to nil. Clarified the description of Code Listing 3-3. Corrected minor typographical errors.
2008-10-15  Corrected minor errors.
2008-07-08  Updated description of categories. Added discussion of associative references. Corrected minor errors.
2008-06-09  Significant reorganization, with several sections moved to a new Runtime Guide. Moved the discussion of memory management to "Memory Management Programming Guide for Cocoa." Extended the discussion of properties to include mutable objects. Clarified use of the static specifier for global variables used by a class.
2005-10-04  Clarified effect of sending messages to nil. Corrected definition of id. Corrected definition of the term conform in the glossary. Clarified example in Listing 14-1 (page 111). Noted use of ".mm" extension to signal Objective-C++ to compiler. Made technical corrections and minor editorial changes.
2005-04-08  Removed function and data structure reference. Moved function and data structure reference to Objective-C Runtime Reference.
2004-08-31  Renamed from Inside Mac OS X: The Objective-C Programming Language to The Objective-C Programming Language. Corrected typo in language grammar specification and modified a code example. Moved "Memory Management" before "Retaining Objects."
2004-02-02  Corrected definition of method_getArgumentInfo. Corrected the descriptions for the Ivar structure and the objc_ivar_list structure. Changed the font of function result in class_getInstanceMethod and class_getClassMethod. Corrected the grammar for the protocol-declaration-list declaration in "External Declarations." Replaced conformsTo: with conformsToProtocol: throughout document.
2003-09-16  Corrected typos in "An exception handler." Added exception and synchronization grammar to "Grammar."
2003-08-14  Documented the Objective-C exception and synchronization support available in Mac OS X version 10.3 and later in "Exception Handling and Thread Synchronization." Added examples of thread synchronization approaches to "Synchronizing Thread Execution." Clarified when the initialize method is called and provided a template for its implementation in "Initializing a Class Object." Added exception and synchronization grammar.
2003-01-01  Documented the language support for declaring constant strings. Fixed a bug in the Objective-C language grammar's description of instance variable declarations. Fixed several typographical errors. Added an index.
2002-05-01  Renamed from Object Oriented Programming and the Objective-C Language to Inside Mac OS X: The Objective-C Programming Language. Mac OS X 10.1 introduces a compiler for Objective-C++, which allows C++ constructs to be called from Objective-C classes, and vice versa. Added runtime library reference material. Restructured some sections to improve cohesiveness. Updated grammar and section names throughout the book to reduce ambiguities, passive voice, and archaic tone.
Glossary

abstract class  A class that’s defined solely so that other classes can inherit from it. Programs don’t use instances of an abstract class, only of its subclasses.

abstract superclass  Same as abstract class.

adopt  In the Objective-C language, a class is said to adopt a protocol if it declares that it implements all the methods in the protocol. Protocols are adopted by listing their names between angle brackets in a class or category declaration.

anonymous object  An object of unknown class. The interface to an anonymous object is published through a protocol declaration.

Application Kit  A Cocoa framework that implements an application's user interface. The Application Kit provides a basic program structure for applications that draw on the screen and respond to events.

archiving  The process of preserving a data structure, especially an object, for later use. An archived data structure is usually stored in a file, but it can also be written to memory, copied to the pasteboard, or sent to another application. In Cocoa, archiving involves writing data to an NSData object.

asynchronous message  A remote message that returns immediately, without waiting for the application that receives the message to respond. The sending application and the receiving application act independently, and are therefore not “in sync.” See also synchronous message.

category  In the Objective-C language, a set of method definitions that is segregated from the rest of the class definition. Categories can be used to split a class definition into parts or to add methods to an existing class.

class  In the Objective-C language, a prototype for a particular kind of object. A class definition declares instance variables and defines methods for all members of the class. Objects that have the same types of instance variables and have access to the same methods belong to the same class. See also class object.

class method  In the Objective-C language, a method that can operate on class objects rather than instances of the class.

class object  In the Objective-C language, an object that represents a class and knows how to create new instances of the class. Class objects are created by the compiler, lack instance variables, and can’t be statically typed, but otherwise behave like all other objects. As the receiver in a message expression, a class object is represented by the class name.

Cocoa  An advanced object-oriented development platform on Mac OS X. Cocoa is a set of frameworks with its primary programming interfaces in Objective-C.

compile time  The time when source code is compiled. Decisions made at compile time are constrained by the amount and kind of information encoded in source files.

conform  In the Objective-C language, a class is said to conform to a protocol if it (or a superclass) implements the methods declared in the protocol. An instance conforms to a protocol if its class does. Thus, an instance that conforms to a protocol can perform any of the instance methods declared in the protocol.

content view  In the Application Kit, the NSView object that’s associated with the content area of a window—all the area in the window excluding the title bar and border.

delegate  An object that acts on behalf of another object.

designated initializer  The init... method that has primary responsibility for initializing new instances of a class. Each class defines or inherits its own designated initializer. Other init... methods in the same class directly or indirectly invoke the designated initializer, and the designated initializer, through a message to super, invokes the designated initializer of its superclass.

distributed objects  Architecture that facilitates communication between objects in different address spaces.

dynamic allocation  Technique used in C-based languages where the operating system provides memory to a running application as it needs it.

dynamic binding  Binding a method to a message—that is, finding the method implementation to invoke in response to the message—at runtime, rather than at compile time.

encapsulation  Programming technique that hides an implementation behind an abstract interface. This allows the implementation to be updated or changed without impacting the users of the interface.

event  The direct or indirect report of external activity, especially user activity on the keyboard and mouse.

formal protocol  In the Objective-C language, a protocol that’s declared with the @protocol directive. Classes can adopt formal protocols.

framework  A way to package a logically-related set of classes, protocols and functions together with localized strings, on-line documentation, and other pertinent files. Frameworks are sometimes referred to as “kits.”

id  In the Objective-C language, the general type for any kind of object. id is defined as a pointer to an object data structure.

implementation  Part of an Objective-C class specification that defines its implementation.

inheritance hierarchy  In object-oriented programming, the hierarchy of classes that’s defined by the arrangement of superclasses and subclasses; any class may have an unlimited number of subclasses.

informal protocol  In the Objective-C language, a protocol declared as a category, usually as a category of the NSObject class.
dispatch table Objective-C runtime table that contains entries that associate method selectors with the class-specific addresses of the methods they identify. 130 2010-07-13 | © 2010 Apple Inc. Cocoa provides the Foundation framework and the Application Kit framework.. factory object Same as class object.” gdb The standard Mac OS X debugging tool. rather than at compile time. Instances are created at runtime according to the specification in the class definition. a protocol declared as a category. id In the Objective-C language. and other pertinent files. Through messages to self. and the designated initializer. inheritance In object-oriented programming. dynamic binding Binding a method to a message—that is. dynamic typing Discovering the class of an object at runtime rather than at compile time. Every class (except root classes such as NSObject) has a superclass. and instances can be typed by the formal protocols they conform to. the ability of a superclass to pass its characteristics (methods and instance variables) on to its subclasses.. instance In the Objective-C language. inheritance hierarchy In object-oriented programming. This section defines both public methods as well as private methods—methods that are not declared in the class’s interface. but not to informal ones. The language gives explicit support to formal protocols. objects can respond at runtime when asked if they conform to a formal protocol. each class inherits from those above it in the hierarchy. encapsulation Programming technique that hides the implementation of an operation from its users behind an abstract interface. the general type for any kind of object regardless of class. All other views in the window are arranged in a hierarchy beneath the content view. factory Same as class object. informal protocol In the Objective-C language. It can be used for both class objects and instances of a class. factory method Same as class method.. among others.
For example. an application gets one keyboard or mouse event after another from the Window Manager and responds to them. . It sets up the corresponding objects for you and makes it easy for you to establish connections between these objects and your own code where needed. any variable that’s part of the internal data structure of an instance. instance variable In the Objective-C language. In the Objective-C language. interface Part of an Objective-C class specification that declares its public interface. and the protocols it conforms to. the messages it can respond to. any method that can be used by an instance of a class rather than by the class object. the NSApplication object runs the main event loop. it would be “Esci. Used to synchronize thread execution. key window The window in the active application that receives keyboard events and is the focus of user activity. 131 2010-07-13 | © 2010 Apple Inc. In Italian. From the time it’s launched until the moment it’s terminated. link time The time when files compiled from different source modules are linked into a single program. Decisions made by the linker are constrained by the compiled code and ultimately by the information contained in source code. multiple inheritance In object-oriented programming. which include its superclass name. the ability of a class to have more than one superclass—to inherit from different sources and thus combine separately-defined behaviors in a single class. object A programming unit that groups together a data structure (instance variables) and the operations (methods) that can use or affect that data.” and in English “Quit. Symbols in one name space won’t conflict with identically named symbols in another name space. In the Application Kit. localize To adapt an application to work under various local conditions—especially to have it use a language selected by the user. an application localized in Spanish would display “Salir” in the application menu. and sounds). 
and public-method prototypes. the instance methods of each class are in a separate name space. Localization entails freeing application code from language-specific and culture-specific references and making it able to import localized resources (such as character strings. Objects are the principal building blocks of object-oriented programs. For example. instances variables. introspection The ability of an object to reveal information about itself as an object—such as its class and superclass. a procedure that can be executed by an object. images. Instance variables are declared in a class definition and become part of all objects that are members of or inherit from the class. message expression In object-oriented programming.GLOSSARY instance method In the Objective-C language. an expression that sends a message to an object. as are the class methods and instance variables. waiting between events if the next event isn’t ready. method In object-oriented programming. Interface Builder A tool that lets you graphically specify your application’s user interface. mutex Also known as mutual exclusion semaphore. Objective-C doesn’t support multiple inheritance. Only menus for the active application are visible on-screen. in Objective-C. an object id with a value of 0. nil In the Objective-C language.” main event loop The principal control loop for applications that are driven by events. message In object-oriented programming.” in German “Verlassen. menu A small window that displays a list of commands. the method selector (name) and accompanying arguments that tell the receiving object in a message expression what to do. All Rights Reserved. message expressions are enclosed within square brackets and consist of a receiver followed by a message (method selector and arguments). name space A logical subdivision of a program within which all names must be unique.
protocol In the Objective-C language. that organizes a program as a set of procedures that have definite beginnings and ends. All Rights Reserved. subclass In the Objective-C language. the ability of different objects to respond.” See also asynchronous message. 132 2010-07-13 | © 2010 Apple Inc. superclass In the Objective-C language. See also formal protocol and informal protocol. to the same message. When the object’s reference count reaches zero. remote message A message sent from one application to an object in another application. receiver In object-oriented programming. Occasionally used more generally to mean any class that inherits from another class. runtime The time after a program is launched and while it’s running. procedural programming language A language. reference counting Memory-management technique in which each entity that claims ownership of an object increments the object’s reference count and later decrements it. the name of a method when it’s used in a source-code message to an object. Because the application that sends the message waits for an acknowledgment or return information from the receiving application. surrogate An object that stands in for and forwards messages to another object. giving the compiler information about what kind of object an instance is. remote object An object in another application. by typing it as a pointer to a class.GLOSSARY outlet An instance variable that points to another object. one that’s a potential receiver for a remote message. polymorphism In object-oriented programming. any class that’s one step below another class in the inheritance hierarchy. the object is deallocated. Outlet instance variables are a way for an object to keep track of the other objects to which it may need to send messages. Decisions made at runtime can be influenced by choices the user makes. . selector In the Objective-C language. or the unique identifier that replaces the name when the source code is compiled. 
Compiled selectors are of type SEL. synchronous message A remote message that doesn’t return until the receiving application finishes responding to the message. the object that is sent a message. the class through which a subclass inherits methods and instance variables. and sometimes also used as a verb to mean the process of defining a subclass of another class. static typing In the Objective-C language. the declaration of a group of methods not associated with any particular class. each in its own way. This technique allows one instance of an object to be safely shared among several other objects. the two applications are kept “in sync. like C. a class that’s one step above another class in the inheritance hierarchy.
80 Class data type 28. All Rights Reserved. 59 action messages 97 adaptation 24 adopting a protocol 62.(minus sign) before method names 36 // marker comment 118 @"" directive (string declaration) 119. 81 and static typing 32 as receivers of messages 32 variables and 30 classes 23–32 root. 122 defined 28 of root class 32 using self 46 class object defined 23 initializing 31 class objects 27–32 and root class 32 and root instance methods 32. 117 @class directive 37.c extension 10 A abstract classes 26.Index Symbols + (plus sign) before method names 36 . See root classes 133 2010-07-13 | © 2010 Apple Inc. 121 alloc method 28. . 119 class methods and selectors 96 and static variables 30 declaration of 36.. 120 [“Dynamic Method Resolution”] 19 .
27 naming conventions 123 subclasses 23 superclass 23 uses of 32 comment marker (//) 118 compiler directives. 47 initialize method 31 initializing objects 45. 118 implementation files 35. See instance variables data types defined by Objective-C 117 designated initializer 53–55 development environment 9 directives. summary of 118–119 distributed objects 105 dynamic binding 18 dynamic typing 14 E @encode() directive 119 @end directive 35. summary of 118 conforming to protocols 57 conformsToProtocol: method 106 conventions of this book 10–11 customization with class objects 29–30 F @finally directive 99. 101 synchronization 104 system requirements 99. 119 formal protocols 60. All Rights Reserved. 123 hidden arguments 122 I id data type 117 D data members. 38 implementation of classes 38–42.h extension 35. 118 in type qualifier 121 #include directive 37 #include directive See #import directive informal protocols 61 See also protocols inheritance 23–26 of instance variables 51 of interface files 37 init method 28. 25 and instances 23 and namespaces 123 declaring 35–38. 121 See also protocols G GNU Compiler Collection 10 H . 91 as default method return type 36 of class objects 28 overview 14 IMP data type 117 @implementation directive 38. 120 of methods 39. . 122 #import directive 37.INDEX abstract 26 and inheritance 23. 38 instance methods 28 interfaces 35 introspection 14. 100. 123 defining 35–42. See instance variables data structures. 38. 120. 103 throwing 100 and method declarations 122 and static typing 26. 55 inout type qualifier 121 instance methods 28 and selectors 96 declaration of 122 declaring 36 naming conventions 123 syntax 36 134 2010-07-13 | © 2010 Apple Inc. 103 exception handler 100 NSException 99. 120 designated initializer of 53 identifying 14 implementation of 35. 118 exceptions 99–100 catching 100 clean-up processing 100 compiler switch 99.
42 naming conventions 123 of the receiver 17 public access to 118 referring to 39 scope of 13. 122 inheriting 25 instance methods 28 naming conventions 123 overriding 25 return types 93. 47. 24 NSSelectorFromString function 95 NSStringFromSelector function 95 M . 96 messaging avoiding errors 97 to remote objects 105–110 metaclass object 28 method implementations 39. 96 returning values 14.m extension 10. 101 NSObject 23. 117 sending 15. 96 O object 14 object identifiers 14 Objective-C 9 Objective-C++ 111 objects 23 allocating memory for 47 anonymous 59 creating 28 customizing 29 designated initializer 53 dynamic typing 14. 36. 55 message expressions 15. 117 message receivers 117 messages See also methods and selectors 15.mm extension 10 N name spaces 123 naming conventions 123 Nil constant 118 nil constant 14. 118 defined 13 encapsulation 40 inheriting 24–25. 27 isa instance variable 14. 16 selecting 18 specifying arguments 15 using instance variables 39 minus sign (-) before method names 36 . 118 NO constant 118 NSClassFromString function 32 NSException 99. 47–56 instances of the class See also objects @interface directive 35. 93 calling super 51 class methods 28 declaring 36. 17 synchronous 107 syntax 117 varying at runtime 19. 28 isKindOfClass: method 27. 118. 18 and static typing 92 asynchronous 107 binding 92 defined 15. 122 methods 13 See also behaviors See also messages adding with categories 79 and selectors 15.INDEX instance variables declaring 25. 35. All Rights Reserved. 120 interface files 37. 123 memory allocating 47. 40–42. 122 hidden arguments 122 implementing 39. 32 isMemberOfClass: method 27 and variable arguments 36 argument types 96 arguments 48. 118 instances of a class allocating 47 creating 28 defined 23 initializing 28. 45. . 91 initializing 28. 120 introspection 14. 55 initializing a class object 31 instance variables 17 introspection 27 135 2010-07-13 | © 2010 Apple Inc.
119. 118 procedures. 63. 117 @selector() directive 95. . 106. 121 incorporating other protocols 64–65. summary of 118 @private directive 40. 122 Smalltalk 9 specialization 24 static type checking 92 static typing 91–94 and instance variables 40 in interface files 38 introduced 26 to inherited classes 93 strings. 64 declaring 57.. 121 Protocol objects 62 protocols 57–65 adopting 64. 45. 92 protocol types 63 136 2010-07-13 | © 2010 Apple Inc. 119 type checking class types 91. 120 subclasses 23 super variable 43. 100. 46. See methods @protected directive 41. 121 formal 60 forward references to 65. 118 selectors 95 and messaging errors 98 defined 15 self 43. All Rights Reserved. 121 conforming to 57. declaring 119. 118. 122 superclasses See also abstract classes and inheritance 23 importing 37 synchronization 103–104 compiler switch 99. 119 @try directive 99. 121 informal 61 naming conventions 123 type checking 63 uses of 57–65 proxy objects 105 @public directive 41. 103 @synchronized() directive 103. 119 96 plus sign (+) before method names 36 polymorphism defined 18 precompiled headers 37 preprocessor directives. 118 @protocol directive 65. 103 exceptions 104 mutexes 103 system requirements 99.
. All Rights Reserved.INDEX type introspection 27 types defined by Objective-C 117 U unsigned int data type 122 V variable arguments 36 void data type 122 Y YES constant 118 137 2010-07-13 | © 2010 Apple Inc. | https://www.scribd.com/doc/51097669/ObjC | CC-MAIN-2017-51 | refinedweb | 37,673 | 52.46 |
Hi,
I've just started learning Python and for my first script I wanted to make something that would make life easier and that I would actually use, so starting small I've began working on a Torrent Search script that uses the ISOHunt API. However I've ran into a problem in the unfinished method that cleans the results from the search:
def sanitise(string): a = string.split('{"title":') # split page into parts, a is list for i in range(len(a)): a[i].lower() # lower every char on page index = a[i].find('"enclosure_url":') b = a[i].replace(a[:index], "") print a[i]
Here's an example search result:
<b>Inception</b>.FRENCH.DVDRip.XViD-DVDFR", "link":"http:\/\/isohunt.com\/torrent_details\/255340617\/inception?tab=summary", "guid":"255340617","enclosure_url": "........."
My script only needs 3 pieces of information, so basically what I'm trying to do is cut out the bit of string that is in front of "enclosure_url" but from my code above, I keep getting the error:
"TypeError: expected a character buffer object".
Could anyone help me fix this? | https://www.daniweb.com/programming/software-development/threads/436382/typeerror-expected-a-character-buffer-object | CC-MAIN-2018-26 | refinedweb | 181 | 53.1 |
Mar 07, 2008 12:10 PM
There's one command you need to enter the united world of Ruby and Java:
include Java .
Download the Free Adobe® Flex® Builder 3 Trial
Adobe® Rich Internet Application Project Portal
Usage Landscape: Enterprise Open Source Data Integration
Performance Management and Diagnostics in Distributed Java and .NET Applications
The Agile Business Analyst: Skills and Techniques needed for Agile
This will allow you to instantiate Java classes, call their methods and even derive from them as if they were just ordinary Ruby objects. There are a few subtle differences, this article will show you how to manage them so you can quickly devise new applications and deploy them to your users at lightning-speed.
This article is based on a sample application that implements a simple ObjectSpace browser using JRuby and Swing. Ruby's ObjectSpace feature provides a way to access all objects of the system. For example, all living strings can be printed like this:
ObjectSpace.each_object(String) do |string|This yields about 28000 strings when run from within my irb session. Using Swing and JRuby, we can now display the different classes, their instances and the available methods in a nice graphical interface. You can even invoke parameterless methods by clicking on them in the rightmost panel:
puts string
end
ObjectSpace support in JRuby is disabled by default because of the performance penalty it imposes on the runtime, but more on that later.
I'd like to point out a few interesting details of the implementation and give some hints on how to get started on using the Java integration features of JRuby.
Once you've included Java into your script, you can start subclassing existing Java classes. This can be done by simply specifying the fully qualified name of the Java class. In the example application, the main window extends JFrame. It also includes the javax.swing and java.awt packages into the class' scope so you don't have to specify the full name for every usage:
class MainWindow < javax.swing.JFrame
include_package 'javax.swing'
include_package 'java.awt' ...
Alternatively, you can also include specific classes using the
include_class function, which doesn't pollute the namespace with constants for classes you'll never use.
Calling super constructors works just as with plain Ruby code, which means that the frame title can be set by calling
super("JRuby Object Browser") on the first line of the
initialize-method.
With the whole javax.swing package included into the class, instantiating Java classes is straightforward:
list_panel = JPanel.new
list_panel.layout = GridLayout.new(0, 3)
If you take a closer look at the second line, one might think that we directly access the layout-field of the JPanel, but that's not case. JRuby adds some convenience methods to Java objects, so the above statement could also be written in the accustomed way:
list_panel.setLayout(GridLayout.new(0, 3))
Instead of using getters and setters, you can seemingly access fields directly. There is even more syntactic sugar JRuby adds to Java objects that makes the whole experience feel more Ruby-like: you can use the snake_case notation instead of the actually defined camel case names for any method calls. From this follows the third way to call the
setLayout method:
list_panel.set_layout(GridLayout.new(0, 3))
Be cautious when calling setter-methods from within their own class, Ruby might think you want to create a local variable, so you have to call it explicitely with self as the receiver:
self.default_close_operation = WindowConstants::EXIT_ON_CLOSE
Another difference between Java and Ruby is the accessing of constants, like
EXIT_ON_CLOSE in the previous snippet. Just remember to replace all
. accesses with
:: if you're translating some code from Java to Ruby.
Until now, most changes JRuby brings to Swing development don't seem to be very revolutionary, but we haven't covered one important aspect yet: listeners. In Java, hooking up a listener with an event often means implementing an interface in an anonymous class:
button.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
...
}
});
This really clutters your code if you need to attach a lot of listeners. With JRuby, this can be reduced to the following two lines of code (and you can even omit the
event variable if you don't use it):
button.add_action_listener do |event|
...
end
These are the basics you need to know to start using Swing with JRuby. Even though JRuby makes GUI development with Swing more comfortable, you still have to write a lot of code manually, especially when you need to use complex layouts. If you want to make the creation of Swing-UIs even simpler, take a look at Three approaches to JRuby GUI APIs.
Ruby applications and libraries are usually distributed with the help of RubyGems, but in order to take advantage of it, you already need to have Ruby and RubyGems installed, which likely isn't the case for the average end-user. This problem has already been solved for traditional (MRI / C-Ruby) programs in the form of RubyScript2Exe[1], which bundles your scripts and a Ruby interpreter into a nice package that can be run on multiple platforms. Users of JRuby don't have to feel left out in the rain, quite the contrary, they have an even mightier tool at hand to quickly deploy applications: Java Web Start.
Java Web Start is included in the Java Runtime Environment and should therefore be present on most systems. Making an application ready for Web Start is rather easy, all that is needed is a Jar containing all the files and a JNLP (Java Network Launching Protocol) description file. We demonstrate the process of creating a web-startable Ruby application on the basis of the ObjectSpace browser application.
The prerequisite for Web Start is a Jar that contains the application, so we'll take a look at that first. JRuby provides two different libraries: the "minimal" jruby.jar and jruby-complete.jar, which bundles the whole Ruby standard library. If you don't use anything from the standard library, you can use the smaller jruby.jar and decrease the download by roughly one megabyte.
The simplest way to get your script running is to add the
.rb files to jruby.jar. The following command adds our example,
rob.rb, to the zip.
jar uf jruby.jar rob.rb
To check if it worked, you can start your application with java, requiring our Ruby script. The application needs the ObjectSpace, which we need to enable by passing the
jruby.objectspace.enabled=true property to Java:
java -Djruby.objectspace.enabled=true -jar jruby.jar -r rob
The
-r option automatically requires the specified file and thus runs our script.
One of the exciting new features of JRuby 1.1 is the ahead of time (AOT) compilation support. Currently, just in time compilation in JRuby is restricted to 2048 methods, ahead of time compiling can help to mitigate that restriction.
jrubyc, the JRuby compiler, is still under development, so I'd advise to use the latest JRuby release available. Compiling a plain Ruby file to a classfile is as simple as invoking the compiler with the script(s) as arguments:
jrubyc rob.rb
This will create a
ruby directory containing a
rob.class file. Instead of including the
ruby directory into the jruby.jar as we did above, we're going to create a separate Jar to hold the application. After all, modifying the existing Jar doesn't really seem to be such a neat solution. Jars can be made by the tool of the same name:
jar -cfe rob.jar ruby/rob.class rubyThis will create a small Jar called rob.jar with our class inside, adding the ruby/rob.class as the main-class to be specified in the Manifest. This allows us to simplify the invocation, as we can now simply point to the class and don't have to speficy the require on the command line anymore. To execute it, we have to make sure that
rob.jaris on the classpath:
java -Djruby.objectspace.enabled=true -cp rob.jar:jruby.jar ruby.rob
Before we can continue to writing the JNLP-File, we need to sign the Jars. That's a unfortunate necessity because JRuby uses reflection and thus needs to have extended permissions, see the JRuby Wiki for more details. The easiest way is to create yourself a test certificate using the
keytool that comes with the JDK:
keytool -genkey -keystore myKeystore -alias myself
keytool -selfcert -alias myself -keystore myKeystore
From now on, every time you modify one of the Jars, you have to update the signature or you'll get a SecurityException when running it.
jarsigner -keystore myKeystore jruby.jar myself
jarsigner -keystore myKeystore rob.jar myself
Now that we have the Jars prepared, we can actually take a look at that JNLP file. The following presents a minimal configuration for the application. Some fields are required by the specification, like the
title,
vendor and
j2se tags as well as the
security section. The jar-tag denotes the final location of the Jars. It can also point to a local file, using a, which can be convenient during development. The ObjectSpace property needs to be set here too, this can be done with the property-tag.
<?xml version="1.0" encoding="utf-8"?>
<jnlp>
<information>
<title>Ruby Object Browser</title>
<vendor>Mirko Stocker</vendor>
</information>
<resources>
<jar href=""/>
<jar href=""/>
<j2se version="1.5+"/>
<property name="jruby.objectspace.enabled" value="true"/>
</resources>
<application-desc
<security>
<all-permissions/>
</security>
</jnlp>
If you skipped the AOT section or just want to keep the one-jar way, then you'll have to modify the jnlp file to also include the
-e arguments, so your
application-desc will look like:
[...]
<application-desc>
<argument>-r</argument>
<argument>rob</argument>
</application-desc>
[...]
The final step is to upload the Jars and the JNLP file to the specified location. You should now be able to open the link in your browser or using the
javaws tool directly from a shell.
In order for your browser to launch the application with Web Start, the JNLP file needs to be delivered with the
application/x-java-jnlp-file MIME-Type. So if your browser just displays the content of the JNLP file and javaws isn't launched automatically, you need to adapt the webserver configuration. For example, Apache needs the following directive in mime.types:
application/x-java-jnlp-file jnlp
The JRuby wiki is a great source for general information about JRuby.
Download the code for Ruby Object Browser..
1.) If you need to be convinced of the benefits of Webstart: 2.) To play with this sample app, download the provided code (be sure to update the jar paths in the jnlp file to fit your system). This article is not trying to "sell" Web Start - it's showing how to use it with JRuby. Selling Web Start is Sun's business.. | http://www.infoq.com/articles/jruby-deployment-with-webstart | crawl-002 | refinedweb | 1,828 | 55.34 |
Can someone tell me what "Common Data Platform (CDP)" is ? first time I've heard of it.
Damn. I don't know what CDP was, but I had high hopes for MBF and was already planning on using it. Procee Flow/Work Flow engines are really expensive and this was going to be a great extension.
FDB
Hmm. Is this a case of moving some things around? I love the idea of having a centralised business framework, but hey, as long as the API's are there, I don't much care what namespace they're in.
Thread Closed
This thread is kinda stale and has been closed but if you'd like to continue the conversation, please create a new thread in our Forums,
or Contact Us and let us know. | http://channel9.msdn.com/Forums/Coffeehouse/126017-Microsoft-Business-Framework-is-no-more | crawl-003 | refinedweb | 132 | 78.69 |
Copyright © 2004-2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply. This document is a revision of the 2004 Semantics specification for RDF [RDF-MT] and supersedes that document.
Notes in this style indicate changes from the 2004 RDF 1.0 semantics.
Notes in this style are technical asides on obscure or recondite matters.
This document defines a model-theoretic semantics for RDF graphs and the RDF and RDFS vocabularies, providing an exact formal specification of when truth is preserved by transformations of RDF or operations which derive RDF content from other RDF. This document, RDF 1.1 Semantics, is normative for RDF semantics and the validity of RDF inference processes. It is not normative for many aspects of RDF meaning which are not described or specified by this semantics, including social issues of how IRIs are assigned meanings in use and how the referents of IRIs are related to Web content expressed in other media such as natural language texts.
RDF is intended for use as a base notation for a variety of extended notations such as OWL [OWL2-OVERVIEW] and RIF [RIF-OVERVIEW], whose expressions can be encoded as RDF graphs which use a particular vocabulary with a specially defined meaning. Also, particular IRI vocabularies may be given meanings by other specifications or conventions. When such extra meanings are assumed, a given RDF graph may support more extensive entailments than are sanctioned by the basic RDF semantics. In general, the more assumptions that are made about the meanings of IRIs in an RDF graph, the more entailments follow from those assumptions. The resulting stronger notions of entailment are distinguished by names such as RDFS entailment, D-entailment, etc.
Semantic extensions MAY impose special syntactic conditions or restrictions upon RDF graphs, such as requiring certain triples to be present, or prohibiting particular combinations of IRIs in triples, and MAY consider RDF graphs which do not conform to these conditions to be errors. For example, RDF statements of the form
ex:a rdfs:subClassOf "Thing"^^xsd:string .
are prohibited in the OWL semantic extension based on description logics [OWL2-SYNTAX]. In such cases, basic RDF operations such as taking a subset of triples, or combining RDF graphs, may cause syntax errors in parsers which recognize the extension conditions. None of the semantic extensions normatively defined in this document impose such syntactic restrictions on RDF graphs.
All entailment regimes MUST be monotonic extensions of the simple entailment regime described in the document, in the sense that if A simply entails B then A also entails B under any extended notion of entailment, provided that any syntactic conditions of the extension are also satisfied. Put another way, a semantic extension cannot "cancel" an entailment made by a weaker entailment regime, although it can treat the result as a syntax error.
This document uses the following terminology for describing RDF graph syntax, all as defined in the companion RDF Concepts specification [RDF11-CONCEPTS]: IRI, RDF triple, RDF graph, subject, predicate, object, RDF source, node, blank node, literal, isomorphic, and RDF datasets. All the definitions in this document apply unchanged to generalized RDF triples, graphs, and datasets.
The words denotes and refers to are used interchangeably as synonyms for the relationship between an IRI or literal and what it refers to in a given interpretation, itself called the referent or denotation. IRI meanings may also be determined by other constraints external to the RDF semantics; when we wish to refer to such an externally defined naming relationship, we will use the word identify and its cognates. For example, the fact that the IRI is widely used as the name of a datatype described in the XML Schema document [XMLSCHEMA11-2] might be described by saying that the IRI identifies that datatype. If an IRI identifies something it may or may not refer to it in a given interpretation, depending on how the semantics is specified. For example, an IRI used as a graph name identifying a named graph in an RDF dataset may refer to something different from the graph it identifies.
Throughout this document, the equality sign = indicates strict identity. The statement "A = B" means that there is one entity to which both expressions "A" and "B" refer. Angle brackets < x, y > are used to indicate an ordered pair of x and y.
Throughout this document, RDF graphs and other fragments of RDF abstract syntax are written using the notational conventions of the Turtle syntax [TURTLE-CR]. The namespace prefixes
rdf:
rdfs: and
xsd: are used as in [RDF11-CONCEPTS], section 1.4. When the exact IRI does not matter, the prefix
ex: is used. When stating general rules or conditions we use three-character variables such as aaa, xxx, sss to indicate arbitrary IRIs, literals, or other components of RDF syntax. Some cases are illustrated by node-arc diagrams showing the graph structure directly.
A name is any IRI or literal. A typed literal contains two names: itself and its internal type IRI. A vocabulary is a set of names.
The empty graph is the empty set of triples. A graph is ground if it contains no blank nodes.
Suppose that M is a functional mapping from a set of blank nodes to some set of literals, blank nodes and IRIs. Any graph obtained from a graph G by replacing some or all of the blank nodes N in G by M(N) is an instance of G. Any graph is an instance of itself, an instance of an instance of G is an instance of G, and if H is an instance of G then every triple in H is an instance of at least one triple in G.
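As a non-normative illustration, the instance relation can be sketched in a few lines of Python. The representation assumed here (triples as 3-tuples of strings, blank nodes marked by a leading "_:") is purely illustrative and not part of the specification:

```python
# Illustrative sketch: triples as 3-tuples of strings, with blank nodes
# written with a leading "_:" (a representation assumed for this example).

def is_blank(node):
    return node.startswith("_:")

def apply_instance_mapping(graph, m):
    """Replace each blank node n in the graph by m[n] (unmapped nodes stay)."""
    def subst(n):
        return m.get(n, n) if is_blank(n) else n
    return {(subst(s), p, subst(o)) for (s, p, o) in graph}

g = {("ex:a", "ex:p", "_:x"), ("_:y", "ex:p", "_:x")}
# Mapping _:y to ex:a yields an instance of g; here two triples collapse
# into one, so the instance is a proper subgraph of g.
h = apply_instance_mapping(g, {"_:y": "ex:a"})
```

Note that an instance may have fewer triples than the original graph, as in this example, since distinct triples can become identical under the mapping.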
Two graphs are isomorphic when each maps into the other by a 1:1 mapping on blank nodes. Isomorphic graphs are mutual instances with an invertible instance mapping. As blank nodes have no particular identity beyond their location in a graph, we will often treat isomorphic graphs as identical.
An RDF graph is lean if it has no instance which is a proper subgraph of itself. Non-lean graphs have internal redundancy and express the same content as their lean subgraphs. For example, the graph
ex:a ex:p _:x .
_:y ex:p _:x .
is not lean, but
ex:a ex:p _:x .
_:x ex:p _:x .
is lean. Ground graphs are lean.
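The definition of leanness suggests a brute-force check, sketched below under the same illustrative tuple representation. Any instance that is a subgraph must map blank nodes onto terms already occurring in the graph, so it suffices to search those mappings (this is exponential in the number of blank nodes):

```python
from itertools import product

def is_blank(n):
    return n.startswith("_:")

def is_lean(graph):
    """A graph is lean iff no instance of it is a proper subgraph of itself.
    An instance that is a subgraph maps blank nodes onto terms of the graph,
    so brute force over those mappings suffices."""
    bnodes = sorted({n for (s, p, o) in graph for n in (s, o) if is_blank(n)})
    terms = sorted({n for (s, p, o) in graph for n in (s, o)})
    for choice in product(terms, repeat=len(bnodes)):
        m = dict(zip(bnodes, choice))
        inst = {(m.get(s, s), p, m.get(o, o)) for (s, p, o) in graph}
        if inst < graph:  # proper subgraph
            return False
    return True
```

Applied to the two example graphs above, the first is detected as non-lean (mapping _:y to ex:a collapses it) and the second as lean.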
This section defines the basic notions of interpretation and truth for RDF graphs. All semantic extensions of any vocabulary or higher-level notation encoded in RDF MUST conform to these minimal truth conditions. Other semantic extensions may extend and add to these, but they MUST NOT modify or negate them. For example, because interpretations are mappings which apply to IRIs, a semantic extension cannot interpret different occurrences of a single IRI differently.
The entire semantics applies to RDF graphs, not to RDF sources. An RDF source has a semantic meaning only through the graph that is its value at a given time, or in a given state. Graphs cannot change their semantics with time.
A simple interpretation I is a structure consisting of:

1. A non-empty set IR of resources, called the domain or universe of I.

2. A set IP, called the set of properties of I.

3. A mapping IEXT from IP into the powerset of IR x IR, i.e. the set of sets of pairs < x, y > with x and y in IR.

4. A mapping IS from IRIs into IR ∪ IP, called the interpretation of IRIs.

5. A partial mapping IL from literals into IR, called the interpretation of literals.
The 2004 RDF 1.0 semantics defined interpretations relative to a vocabulary.
In the 2004 RDF 1.0 semantics, IL was a total, rather than partial, mapping.
The 2004 RDF 1.0 specification divided literals into 'plain' literals with no type and optional language tags, and typed literals. Usage has shown that it is important that every literal have a type. RDF 1.1 replaces plain literals without language tags by literals typed with the XML Schema
string datatype, and introduces the special type
rdf:langString for language-tagged strings. The full semantics for typed literals is given in the next section.
Interpretations are required to interpret all names, and are therefore infinite. This simplifies the exposition. However, RDF can be interpreted using finite structures, supporting decidable algorithms. Details are given in Appendix B.
IEXT(x), called the extension of x, is a set of pairs which identify the arguments for which the property is true, that is, a binary relational extension.
The distinction between IR and IL will become significant below when the semantics of datatypes are defined. IL is allowed to be partial because some literals may fail to have a referent.
It is conventional to map a relation name to a relational extension directly. This however presumes that the vocabulary is segregated into relation names and individual names, and RDF makes no such assumption. Moreover, RDF allows an IRI to be used as a relation name applied to itself as an argument. Such self-application structures are used in RDFS, for example. The use of the IEXT mapping to distinguish the relation as an object from its relational extension accommodates both of these requirements. It also provides for a notion of RDFS 'class' which can be distinguished from its set-theoretic extension. A similar technique is used in the ISO/IEC Common Logic standard [ISO24707].
The denotation of a ground RDF graph in an interpretation I is then given by the following rules, where the interpretation is also treated as a function from expressions (names, triples and graphs) to elements of the universe and truth values:

if E is a literal then I(E) = IL(E)

if E is an IRI then I(E) = IS(E)

if E is a ground triple sss ppp ooo . then I(E) = true if I(ppp) is in IP and the pair < I(sss), I(ooo) > is in IEXT(I(ppp)), otherwise I(E) = false

if E is a ground RDF graph then I(E) = false if I(E') = false for some triple E' in E, otherwise I(E) = true
If IL(E) is undefined for some literal E then E has no semantic value, so any triple containing it will be false, so any graph containing that triple will also be false.
The final condition implies that the empty graph (the empty set of triples) is always true.
The sets IP and IR may overlap, indeed IP can be a subset of IR. Because of the domain conditions on IEXT, the denotation of the subject and object of any true triple will be in IR; so any IRI which occurs in a graph both as a predicate and as a subject or object will denote something in the intersection of IP and IR.
Semantic extensions may impose further constraints upon interpretation mappings by requiring some IRIs to refer in particular ways. For example, D-interpretations, described below, require some IRIs, understood as identifying and referring to datatypes, to have a fixed interpretation.
Blank nodes are treated as simply indicating the existence of a thing, without using an IRI to identify any particular thing. This is not the same as assuming that the blank node indicates an 'unknown' IRI. Suppose I is a simple interpretation and A is a mapping from a set of blank nodes to the universe IR of I. Define the mapping [I+A] to be I on names, and A on blank nodes: [I+A](x) = A(x) when x is a blank node. Then the semantic conditions for an RDF graph are: if E is an RDF graph then I(E) = true if [I+A](E) = true for some mapping A from the set of blank nodes in E to IR, otherwise I(E) = false.
This section is non-normative.
An RDF graph is true exactly when:
1. the IRIs and literals in subject or object position in the graph all refer to things,
2. there is some way to interpret all the blank nodes in the graph as referring to things,
3. the IRIs in property position refer to binary relationships,
4. and under these interpretations, each triple S P O in the graph asserts that the thing referred to as S, and the thing referred to as O, do in fact stand in the relationship referred to by P.
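The four truth conditions above can be sketched, non-normatively, as a naive model checker. The representation (IS and IEXT as Python dicts, IR as a set, blank nodes handled by an existential search over IR) is an illustrative assumption, not part of the specification:

```python
from itertools import product

def is_blank(n):
    return n.startswith("_:")

def satisfies(IS, IEXT, IR, graph):
    """Truth of a graph under conditions 1-4: true iff some assignment A of
    blank nodes to elements of IR makes every triple s p o satisfy
    < [I+A](s), [I+A](o) > in IEXT(IS(p))."""
    bnodes = sorted({n for (s, p, o) in graph for n in (s, o) if is_blank(n)})
    def denot(term, A):
        return A[term] if is_blank(term) else IS.get(term)
    for choice in product(sorted(IR, key=str), repeat=len(bnodes)):
        A = dict(zip(bnodes, choice))
        if all((denot(s, A), denot(o, A)) in IEXT.get(IS.get(p), set())
               for (s, p, o) in graph):
            return True
    return False

# A toy interpretation: universe {1, 2, "p"}, and one property ("p")
# whose extension holds the single pair < 1, 2 >.
IR = {1, 2, "p"}
IS = {"ex:a": 1, "ex:b": 2, "ex:p": "p"}
IEXT = {"p": {(1, 2)}}
```

Under this toy interpretation, `ex:a ex:p ex:b .` is true, `ex:a ex:p _:x .` is true (choosing 2 for the blank node), and `ex:b ex:p ex:a .` is false.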
Following standard terminology, we say that I satisfies E when I(E)=true, that E is satisfiable when an interpretation exists which satisfies it, (otherwise unsatisfiable), and that a graph G simply entails a graph E when every interpretation which satisfies G also satisfies E.
In later sections these notions will be adapted to other classes of interpretations, but throughout this section 'entailment' should be interpreted as meaning simple entailment.
We do not define a notion of entailment between sets of graphs. To determine whether a set of graphs entails a graph, the graphs in the set must first be combined into one graph, either by taking the union or the merge of the graphs. Unions preserve the common meaning of shared blank nodes, while merging effectively ignores any sharing of blank nodes. Merging the set of graphs produces the same definition of entailment by a set as was defined in the 2004 RDF 1.0 specification.
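The difference between union and merge can be sketched as follows, again under the illustrative tuple representation (merging renames blank nodes apart before combining):

```python
import itertools

_fresh = itertools.count()

def is_blank(n):
    return n.startswith("_:")

def standardize_apart(graph):
    """Rename the blank nodes of a graph so they are shared with no other graph."""
    ren = {}
    def sub(n):
        if is_blank(n):
            ren.setdefault(n, "_:m%d" % next(_fresh))
            return ren[n]
        return n
    return {(sub(s), p, sub(o)) for (s, p, o) in graph}

def union(*graphs):
    """Union: a shared blank node keeps one common meaning across the graphs."""
    return set().union(*graphs)

def merge(*graphs):
    """Merge: any sharing of blank nodes between the graphs is ignored."""
    return set().union(*(standardize_apart(g) for g in graphs))

g1 = {("ex:a", "ex:p", "_:x")}
g2 = {("_:x", "ex:q", "ex:b")}
```

The union of g1 and g2 says that one thing is both the object of ex:p and the subject of ex:q; their merge says only that each such thing exists, possibly distinct.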
Any process which constructs a graph E from some other graph S is (simply) valid if S simply entails E in every case, otherwise invalid.
The fact that an inference is valid should not be understood as meaning that any RDF application is obliged or required to make the inference. Similarly, the logical invalidity of some RDF transformation or process does not mean that the process is incorrect or prohibited. Nothing in this specification requires or prohibits any particular operations on RDF graphs or sources. Entailment and validity are concerned solely with establishing the conditions on such operations which guarantee the preservation of truth. While logically invalid processes, which do not follow valid entailments, are not prohibited, users should be aware that they may be at risk of introducing falsehoods into true RDF data. Nevertheless, particular uses of logically invalid processes may be justified and appropriate for data processing under circumstances where truth can be ensured by other means.
Entailment refers only to the truth of RDF graphs, not to their suitability for any other purpose. It is possible for an RDF graph to be fitted for a given purpose and yet validly entail another graph which is not appropriate for the same purpose. An example is the RDF test cases manifest [RDF-TESTCASES] which is provided as an RDF document for user convenience. This document lists examples of correct entailments by describing their antecedents and conclusions. Considered as an RDF graph, the manifest simply entails a subgraph which omits the antecedents, and would therefore be incorrect if used as a test case manifest. This is not a violation of the RDF semantic rules, but it shows that the property of "being a correct RDF test case manifest" is not preserved under RDF entailment, and therefore cannot be described as an RDF semantic extension. Such entailment-risky uses of RDF should be restricted to cases, as here, where it is obvious to all parties what the intended special restrictions on entailment are, in contrast with the more normal case of using RDF for the open publication of data on the Web.
This section is non-normative.
The properties described here apply only to simple entailment, not to extended notions of entailment introduced in later sections. Proofs are given in Appendix C.
Every graph is satisfiable.
This does not hold for extended notions of interpretation. For example, a graph containing an ill-typed literal is D-unsatisfiable.
The following interpolation lemma
G simply entails a graph E if and only if a subgraph of G is an instance of E.
completely characterizes simple entailment in syntactic terms. To detect whether one RDF graph simply entails another, check that there is some instance of the entailed graph which is a subset of the first graph.
This is clearly decidable, but it is also difficult to determine in general, since one can encode the NP-hard subgraph problem (detecting whether one mathematical graph is a subgraph of another) as detecting simple entailment between RDF graphs. This construction (due to Jeremy Carroll) uses graphs containing many blank nodes, which are unlikely to occur in practice. The complexity of checking simple entailment is reduced by having fewer blank nodes in the conclusion E. When E is a ground graph, it is simply a matter of checking the subset relationship on sets of triples.
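The interpolation lemma translates directly into a (worst-case exponential) decision procedure, sketched here non-normatively: search for a mapping of the blank nodes of E onto terms of G whose image is a subset of G.

```python
from itertools import product

def is_blank(n):
    return n.startswith("_:")

def simply_entails(g, e):
    """Interpolation lemma as an algorithm: G simply entails E iff some
    instance of E is a subgraph of G. Blank nodes of E need only be mapped
    onto terms occurring in G, so brute-force search suffices (exponential
    in the number of blank nodes of E)."""
    bnodes = sorted({n for (s, p, o) in e for n in (s, o) if is_blank(n)})
    targets = sorted({n for (s, p, o) in g for n in (s, o)})
    for choice in product(targets, repeat=len(bnodes)):
        m = dict(zip(bnodes, choice))
        if {(m.get(s, s), p, m.get(o, o)) for (s, p, o) in e} <= g:
            return True
    return False

g = {("ex:a", "ex:p", "ex:b"), ("ex:b", "ex:p", "ex:c")}
```

When E is ground this reduces to a subset check, matching the remark above; the empty graph is entailed trivially.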
Interpolation has a number of direct consequences, for example:
The empty graph is entailed by any graph, and does not entail any graph except itself.
A graph entails all its subgraphs.
A graph is entailed by any of its instances.
If E is a lean graph and E' is a proper instance of E, then E does not entail E'.
If S is a subgraph of S' and S entails E, then S' entails E.
If S entails a finite graph E, then some finite subset S' of S entails E.
The property just above is called compactness - RDF is compact. As RDF graphs can be infinite, this is sometimes important.
If E contains an IRI which does not occur anywhere in S, then S does not entail E.
This section is non-normative.
Skolemization is a transformation on RDF graphs which eliminates blank nodes by replacing them with "new" IRIs, which means IRIs which are coined for this purpose and are therefore guaranteed to not occur in any other RDF graph (at the time of creation). See Section 3.5 of [RDF11-CONCEPTS] for a fuller discussion.
Suppose G is a graph containing blank nodes and sk is a skolemization mapping from the blank nodes in G to the skolem IRIs which are substituted for them, so that sk(G) is a skolemization of G. Then the semantic relationship between them can be summarized as follows.
sk(G) simply entails G (since sk(G) is an instance of G.)
G does not entail sk(G) (since sk(G) contains IRIs not in G.)
For any graph H, if sk(G) entails H then there is a graph H' such that G entails H' and H=sk(H') .
For any graph H which does not contain any of the "new" IRIs introduced into sk(G), sk(G) simply entails H if and only if G simply entails H.
The second property means that a graph is not logically equivalent to its skolemization. Nevertheless, they are in a strong sense almost interchangeable, as shown by the next two properties. The third property means that even when conclusions are drawn from the skolemized graph which do contain the new vocabulary, these will exactly mirror what could have been derived from the original graph with the original blank nodes in place. The replacement of blank nodes by IRIs does not effectively alter what can be validly derived from the graph, other than by giving new names to what were formerly anonymous entities. The fourth property, which is a consequence of the third, clearly shows that in some sense a skolemization of G can "stand in for" G as far as entailments are concerned. Using sk(G) instead of G will not affect any entailments which do not involve the new skolem vocabulary.
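A skolemization mapping can be sketched as below. The use of a .well-known/genid IRI follows the suggestion in [RDF11-CONCEPTS]; the authority example.org is a placeholder assumed here, and any IRIs guaranteed fresh would serve the semantics equally well:

```python
import itertools

def is_blank(n):
    return n.startswith("_:")

def skolemize(graph, counter=itertools.count()):
    """Replace every blank node by a fresh skolem IRI, returning the
    skolemized graph sk(G) together with the mapping sk. The IRI form is
    an assumption for illustration (cf. well-known genid IRIs)."""
    sk = {}
    def sub(n):
        if is_blank(n):
            sk.setdefault(n, "http://example.org/.well-known/genid/g%d" % next(counter))
            return sk[n]
        return n
    return {(sub(s), p, sub(o)) for (s, p, o) in graph}, sk

g = {("ex:a", "ex:p", "_:x"), ("_:x", "ex:q", "_:y")}
skg, mapping = skolemize(g)
```

Since sk(G) is obtained by an instance mapping on blank nodes, sk(G) is an instance of G, which is exactly why the first property above holds.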
In the 2004 RDF 1.0 specification, datatype D-entailment was defined as a semantic extension of RDFS-entailment. Here it is defined as a direct extension to basic RDF. This is more in conformity with actual usage, where RDF with datatypes is widely used without the RDFS vocabulary. If there is a need to distinguish this from the 2004 RDF 1.0 terminology, the longer phrasing "simple D-entailment" or "simple datatype entailment" should be used rather than "D-entailment".
Datatypes are identified by IRIs. Interpretations will vary according to which IRIs they recognize as denoting datatypes. We describe this using a parameter D on interpretations, where D is the set of recognized datatype IRIs. We assume that a recognized IRI identifies a unique datatype wherever it occurs, and the semantics requires that it refers to this identified datatype. The exact mechanism by which an IRI identifies a datatype is considered to be external to the semantics. RDF processors which are not able to determine which datatype is identified by an IRI cannot recognize that IRI, and should treat any literals typed with that IRI as unknown names.
In the 2004 RDF 1.0 specification, the semantics of datatypes referred to datatype maps. The current treatment subsumes datatype maps into the interpretation mapping on recognized IRIs.
RDF literals and datatypes are fully described in Section 5. The function L2V maps datatypes to their lexical-to-value mapping. A literal with datatype d denotes the value obtained by applying this mapping to the character string sss: L2V(d)(sss). If the lexical-to-value mapping gives no value for the literal string, then the literal has no referent. The value space of a datatype is the range of the lexical-to-value mapping. Every literal with that type either refers to a value in the value space of the type, or fails to refer at all. An ill-typed literal is one whose datatype IRI is recognized, but whose character string is assigned no value by the lexical-to-value mapping for that datatype.
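The role of L2V can be sketched, non-normatively, by modelling a recognized datatype as a partial lexical-to-value map; returning None models "no value", i.e. an ill-typed literal. The lexical spaces below are simplified for illustration:

```python
import re

def l2v_integer(lexical):
    """Simplified xsd:integer lexical-to-value map: optional sign, digits."""
    return int(lexical) if re.fullmatch(r"[+-]?[0-9]+", lexical) else None

def l2v_boolean(lexical):
    """xsd:boolean lexical space is {true, false, 1, 0}."""
    return {"true": True, "1": True, "false": False, "0": False}.get(lexical)

L2V = {"xsd:integer": l2v_integer, "xsd:boolean": l2v_boolean}

def literal_value(lexical, dtype_iri, recognized=L2V):
    """Referent of a typed literal: the mapped value for a recognized type
    (None when ill-typed). A literal with an unrecognized type IRI is
    treated as an opaque name denoting some unknown thing."""
    if dtype_iri in recognized:
        return recognized[dtype_iri](lexical)
    return ("opaque", lexical, dtype_iri)
```

For example, "007"^^xsd:integer denotes the number 7, while "twelve"^^xsd:integer is ill-typed and has no referent.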
RDF processors are not REQUIRED to recognize any datatype IRIs other than
rdf:langString and
xsd:string, but when IRIs listed in Section 5 of [RDF11-CONCEPTS] are recognized, they MUST be interpreted as described there, and when the IRI
rdf:PlainLiteral is recognized, it MUST be interpreted to refer to the datatype defined in [RDF-PLAIN-LITERAL]. RDF processors MAY recognize other datatype IRIs, but when other datatype IRIs are recognized, the mapping between a recognized IRI and the datatype it refers to MUST be specified unambiguously, and MUST be fixed during all RDF transformations or manipulations.
Literals with
rdf:langString as their datatype are an exceptional case which are given a special treatment. The IRI
rdf:langString is classified as a datatype IRI, and interpreted to refer to a datatype, even though no L2V mapping is defined for it. The value space of
rdf:langString is the set of all pairs of a string with a language tag. The semantics of literals with this as their type are given below.
RDF literal syntax allows any IRI to be used in a typed literal, even when it is not recognized as referring to a datatype. Literals with such an "unknown" datatype IRI, which is not in the set of recognized datatypes, SHOULD NOT be treated as errors, although RDF applications MAY issue a warning. Such literals SHOULD be treated like IRIs and assumed to denote some thing in the universe IR. RDF processors which fail to recognize a datatype IRI will not be able to detect some entailments which are visible to one which does. For example, the fact that
ex:a ex:p "20.0000"^^xsd:decimal .
entails
ex:a ex:p "20.0"^^xsd:decimal .
will not be visible to a processor which does not recognize the datatype IRI
xsd:decimal.
Let D be a set of IRIs identifying datatypes. A (simple) D-interpretation is a simple interpretation which satisfies the following conditions:

For every IRI aaa in D, I(aaa) is the datatype identified by aaa.

For every literal "sss"^^aaa with aaa in D, IL("sss"^^aaa) = L2V(I(aaa))(sss) when sss is in the lexical space of I(aaa), and IL("sss"^^aaa) is undefined otherwise.
If the literal is ill-typed then the L2V(I(aaa)) mapping has no value, and so the literal cannot denote anything. In this case, any triple containing the literal must be false. Thus, any triple, and hence any graph, containing an ill-typed literal will be D-unsatisfiable, i.e. false in every D-interpretation. This applies only to literals typed with recognized datatype IRIs in D; literals with an unrecognized type IRI are not ill-typed and cannot give rise to a D-unsatisfiable graph.
The built-in RDF datatype
rdf:langString has no ill-typed literals. Any syntactically legal literal with this type will denote a value in every RDF interpretation. The only ill-typed literals of type
xsd:string are those containing a Unicode code point which does not match the Char production in [XML10]. Such strings cannot be written in an XML-compatible surface syntax.
In the 2004 RDF 1.0 specification, ill-typed literals were required to denote a value in IR, and D-inconsistency could only be recognized by using the RDFS semantics.
A graph is (simply) D-satisfiable or satisfiable recognizing D when it has the value true in some D-interpretation, and a graph S (simply) D-entails or entails recognizing D a graph G when every D-interpretation which satisfies S also D-satisfies G.
Unlike the case with simple interpretations, it is possible for a graph to have no satisfying D-interpretations, i.e. to be D-unsatisfiable. RDF processors MAY treat an unsatisfiable graph as signaling an error condition, but this is not required.
A D-unsatisfiable graph D-entails any graph.
The fact that an unsatisfiable statement entails any other statement has been known since antiquity. It is called the principle of ex falso quodlibet. It should not be interpreted to mean that it is necessary, or even permissible, to actually draw any conclusion from an inconsistency.
In all of this language, 'D' is being used as a parameter to represent some set of datatype IRIs, and different D sets will yield different notions of satisfiability and entailment. The more datatypes are recognized, the stronger is the entailment, so that if D ⊂ E and S E-entails G then S must D-entail G. Simple entailment is { }-entailment, i.e. D-entailment when D is the empty set, so if S D-entails G then S simply entails G.
This section is non-normative.
Unlike simple entailment, it is not possible to give a single syntactic criterion to detect all D-entailments, which
can hold because of particular properties of the lexical-to-value mappings of the recognized datatypes. For example, if D contains
xsd:decimal then
ex:a ex:p "25.0"^^xsd:decimal .
D-entails
ex:a ex:p "25"^^xsd:decimal .
In general, any triple containing a literal with a recognized datatype IRI D-entails another literal when the lexical strings of the literals map to the same value under the lexical-to-value map of the datatype. If two different datatypes in D map lexical strings to a common value, then a triple containing a literal typed with one datatype may D-entail another triple containing a literal typed with a different datatype. For example,
"25"^^xsd:integer and
"25.0"^^xsd:decimal have the same value, so the above also D-entails
ex:a ex:p "25"^^xsd:integer .
when D also contains
xsd:integer.
(There is a W3C Note [SWBP-XSCH-DATATYPES] containing a long discussion of literal values.)
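The value-equality criterion behind these D-entailments can be sketched as follows. Python's Decimal adequately models the shared value space of xsd:integer and xsd:decimal for this illustration (the lexical spaces are simplified; note that xsd:decimal admits no exponent notation):

```python
import re
from decimal import Decimal

def value(lexical, dtype):
    """Simplified lexical-to-value maps for xsd:integer and xsd:decimal;
    None models an ill-typed or unrecognized literal."""
    if dtype == "xsd:integer" and re.fullmatch(r"[+-]?[0-9]+", lexical):
        return Decimal(lexical)
    if dtype == "xsd:decimal" and re.fullmatch(r"[+-]?([0-9]+(\.[0-9]*)?|\.[0-9]+)", lexical):
        return Decimal(lexical)
    return None

def same_value(lit1, lit2):
    """Two typed literals have the same referent when their lexical forms
    map to the same value, possibly across datatypes."""
    v1, v2 = value(*lit1), value(*lit2)
    return v1 is not None and v1 == v2
```

On the examples above, "25.0"^^xsd:decimal and "25"^^xsd:integer have the same value, so a triple containing one D-entails the corresponding triple containing the other when both types are recognized.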
Ill-typed literals are the only way in which a graph can be simply D-unsatisfiable, but datatypes can give rise to a variety of other unsatisfiable graphs when combined with the RDFS vocabulary, defined later.
RDF interpretations impose extra semantic conditions on
xsd:string and part of the infinite
set of IRIs with the namespace prefix
rdf: .
An RDF interpretation recognizing D is a D-interpretation I where D includes rdf:langString and xsd:string, and which satisfies:

x is in IP if and only if < x, I(rdf:Property) > is in IEXT(I(rdf:type))

For every IRI aaa in D, < x, I(aaa) > is in IEXT(I(rdf:type)) if and only if x is in the value space of I(aaa)

and satisfies every triple in the following infinite set of RDF axiomatic triples:

rdf:type rdf:type rdf:Property .
rdf:subject rdf:type rdf:Property .
rdf:predicate rdf:type rdf:Property .
rdf:object rdf:type rdf:Property .
rdf:first rdf:type rdf:Property .
rdf:rest rdf:type rdf:Property .
rdf:value rdf:type rdf:Property .
rdf:nil rdf:type rdf:List .
rdf:_1 rdf:type rdf:Property .
rdf:_2 rdf:type rdf:Property .
...
RDF imposes no particular normative meanings on the rest of the RDF vocabulary. Appendix D describes the intended uses of some of this vocabulary.
The datatype IRIs
rdf:langString and
xsd:string MUST be recognized by all RDF interpretations.
Two other datatypes
rdf:XMLLiteral and
rdf:HTML are defined in [RDF11-CONCEPTS]. RDF-D interpretations MAY fail to recognize these datatypes.
S RDF entails E recognizing D when every RDF interpretation recognizing D which satisfies S also satisfies E. When D is {
rdf:langString,
xsd:string} then we simply say S RDF entails E.
The properties of simple entailment described earlier do not all apply to RDF entailment. For example, all the RDF axioms are true in every RDF interpretation, and so are RDF entailed by the empty graph, contradicting interpolation for RDF entailment.
This section is non-normative.
The last semantic condition in the above table gives the following entailment pattern for any recognized datatype IRI ddd: a triple xxx aaa "sss"^^ddd . entails xxx aaa _:nnn . and _:nnn rdf:type ddd .
Note, this is valid even when the literal is ill-typed, since an unsatisfiable graph entails any triple.
For example,
ex:a ex:p "123"^^xsd:integer .
RDF entails recognizing {
xsd:integer}
ex:a ex:p _:x .
_:x rdf:type xsd:integer .
In addition, the first RDF semantic condition justifies the following entailment pattern: any triple xxx aaa yyy . entails aaa rdf:type rdf:Property .
So that the above example also RDF entails
ex:p rdf:type rdf:Property .
recognizing {
xsd:integer}.
Some datatypes support idiosyncratic entailment patterns which do not hold for other datatypes. For example,
ex:a ex:p "true"^^xsd:boolean .
ex:a ex:p "false"^^xsd:boolean .
ex:v rdf:type xsd:boolean .
together RDF entail
ex:a ex:p ex:v .
recognizing {
xsd:boolean}.
In addition, the semantic conditions on value spaces may produce other unsatisfiable graphs. For example, when D contains
xsd:integer and
xsd:boolean, then the following is RDF unsatisfiable recognizing D:
_:x rdf:type xsd:boolean .
_:x rdf:type xsd:integer .
RDF Schema [RDF-SCHEMA] extends RDF to a larger vocabulary with more complex semantic conditions, stated below, which constrain the meanings of its terms.
It is convenient to state the RDFS semantics in terms of a new semantic construct, a class, i.e. a resource which represents a set of things in the universe which all have that class as the value of their rdf:type property. The class extension of a class y, written ICEXT(y), is the set { x : < x, y > is in IEXT(I(rdf:type)) }.
A class may have an
empty class extension. Two different classes can have the same class extension.
The class extension of
rdfs:Class contains the class
rdfs:Class.
An RDFS interpretation (recognizing D) is an RDF interpretation (recognizing D) I which satisfies the semantic conditions in the following table, and all the triples in the subsequent table of RDFS axiomatic triples.
In the 2004 RDF 1.0 semantics, LV was defined as part of a simple interpretation structure, and the definition given here was a constraint.
Since I is an RDF interpretation, the first condition implies that IP
= ICEXT(I(
rdf:Property)).
The semantic conditions on RDF interpretations, together with the RDFS conditions on ICEXT, mean that every recognized datatype can be treated as a class whose extension is the value space of the datatype, and every literal with that datatype either fails to refer, or refers to a value in that class.
When using RDFS semantics, the referents of all recognized datatype IRIs can be considered to be in the class
rdfs:Datatype.
The axioms and conditions listed above sanction many further triples as true in every RDFS interpretation. This is not a complete set.
RDFS does not partition the universe into disjoint categories of classes, properties and individuals. Anything in the universe can be used as a class or as a property, or both, while retaining its status as an individual which may be in classes and have properties. Thus, RDFS permits classes which contain other classes, classes of properties, properties of classes, etc. As the axiomatic triples above illustrate, it also permits classes which contain themselves and properties which apply to themselves. A property of a class is not necessarily a property of its members, nor vice versa.
This section is non-normative.
The class
rdfs:Literal is not the class of literals, but rather that of literal values, which may also be referred to by IRIs. For example, LV does not contain the literal
"foodle"^^xsd:string but it does contain the string "foodle".
A triple of the form
ex:a rdf:type rdfs:Literal .
is consistent even though its subject is an IRI rather
than a literal. It says that the IRI '
ex:a'
refers to a literal value, which is quite possible since literal values are things in the universe. Blank nodes may range over literal values, for the same reason.
S RDFS entails E recognizing D when every RDFS interpretation recognizing D which satisfies S also satisfies E.
Since every RDFS interpretation is an RDF interpretation, if S RDFS entails E then S also RDF entails E; but RDFS entailment is stronger than RDF entailment. Even the empty graph has a large number of RDFS entailments which are not RDF entailments, for example all triples of the form
aaa
rdf:type rdfs:Resource .
where aaa is an IRI, are true in all RDFS interpretations.
This section is non-normative.
RDFS entailment holds for all the following patterns, which correspond closely to the RDFS semantic conditions:
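A few of these standard patterns (rule names as in this specification: rdfs2, rdfs3, rdfs5, rdfs7, rdfs9, rdfs11) can be sketched non-normatively as a single round of premise matching over the illustrative tuple representation:

```python
def rdfs_step(g):
    """One round of rdfs2/3/5/7/9/11 over graph g; returns the new triples.
    Each clause matches two premise triples and adds the conclusion."""
    new = set()
    for (s, p, o) in g:
        for (s2, p2, o2) in g:
            if p == "rdfs:domain" and p2 == s:
                new.add((s2, "rdf:type", o))                      # rdfs2
            if p == "rdfs:range" and p2 == s:
                new.add((o2, "rdf:type", o))                      # rdfs3
            if p == "rdfs:subPropertyOf" and p2 == "rdfs:subPropertyOf" and s2 == o:
                new.add((s, "rdfs:subPropertyOf", o2))            # rdfs5
            if p == "rdfs:subPropertyOf" and p2 == s:
                new.add((s2, o, o2))                              # rdfs7
            if p == "rdfs:subClassOf" and p2 == "rdf:type" and o2 == s:
                new.add((s2, "rdf:type", o))                      # rdfs9
            if p == "rdfs:subClassOf" and p2 == "rdfs:subClassOf" and s2 == o:
                new.add((s, "rdfs:subClassOf", o2))               # rdfs11
    return new - g

g = {("ex:p", "rdfs:domain", "ex:C"),
     ("ex:C", "rdfs:subClassOf", "ex:D"),
     ("ex:a", "ex:p", "ex:b")}
```

One round derives ex:a rdf:type ex:C (by rdfs2); a second round then derives ex:a rdf:type ex:D (by rdfs9).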
RDFS provides for several new ways to be unsatisfiable recognizing D. For example, the following graph is RDFS unsatisfiable recognizing {
xsd:integer,
xsd:boolean}:
ex:p rdfs:domain xsd:boolean .
ex:a rdf:type xsd:integer .
ex:a ex:p ex:c .
An RDF dataset (see [RDF11-CONCEPTS]) is a finite set of RDF graphs each paired with an IRI or blank node called the graph name, plus a default graph, without a name. Graphs in a single dataset may share blank nodes. The association of graph name IRIs with graphs is used by SPARQL [RDF-SPARQL-QUERY] to allow queries to be directed against particular graphs.
Graph names in a dataset may refer to something other than the graph they are paired with. This allows IRIs referring to other kinds of entities, such as persons, to be used in a dataset to identify graphs of information relevant to the entity denoted by the graph name IRI.
When a graph name is used inside RDF triples in a dataset it may or may not refer to the graph it names. The semantics does not require, nor should RDF engines presume, without some external reason to do so, that graph names used in RDF triples refer to the graph they name.
RDF datasets MAY be used to express RDF content. When used in this way, a dataset SHOULD be understood to have at least the same content as its default graph. Note however that replacing the default graph of a dataset by a logically equivalent graph will not in general produce a structurally similar dataset, since it may for example disrupt co-occurrences of blank nodes between the default graph and other graphs in the dataset, which may be important for reasons other than the semantics of the graphs in the dataset.
Other semantic extensions and entailment regimes MAY place further semantic conditions and restrictions on RDF datasets, just as with RDF graphs. One such extension, for example, could set up a modal-like interpretation structure so that entailment between datasets would require RDF graph entailments between the graphs with the same name (adding in empty graphs as required).
This section is non-normative.
(This section is based on work described more fully in [HORST04], [HORST05], which should be consulted for technical details and proofs.)
The RDF and RDFS entailment patterns listed in the above tables can be viewed as left-to-right rules which add the entailed conclusion to a graph. These rule sets can be used to check RDF (or RDFS) entailment between graphs S and E, by the following sequence of operations:
1. Add to S all the RDF (or RDF and RDFS) axiomatic triples except those containing the container membership property IRIs
rdf:_1, rdf:_2, ....
2. For every container membership property IRI which occurs in E, add the RDF (or RDF and RDFS) axiomatic triples which contain that IRI.
3. Apply the RDF (or RDF and RDFS) inference patterns as rules, adding each conclusion to the graph, to exhaustion; that is, until they generate no new triples.
4. Determine if E has an instance which is a subset of the set, i.e. whether the enlarged set simply entails E.
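The rule-application loop of step 3 can be sketched as a fixed-point computation. Rules are modelled here as functions from a graph to newly derivable triples; the loop terminates because each round only adds triples built from a fixed finite vocabulary. A single toy rule (rdfs11, subClassOf transitivity) stands in for the full rule set:

```python
def closure(graph, rules):
    """Apply the given rules to exhaustion, i.e. until no new triples appear."""
    g = set(graph)
    while True:
        new = set().union(*(rule(g) for rule in rules)) - g
        if not new:
            return g
        g |= new

def rdfs11(g):
    """subClassOf transitivity: a sub B . B sub C . => a sub C ."""
    return {(a, "rdfs:subClassOf", c)
            for (a, p1, b) in g if p1 == "rdfs:subClassOf"
            for (b2, p2, c) in g if p2 == "rdfs:subClassOf" and b2 == b}

chain = {("ex:A", "rdfs:subClassOf", "ex:B"),
         ("ex:B", "rdfs:subClassOf", "ex:C"),
         ("ex:C", "rdfs:subClassOf", "ex:D")}
```

Step 4 then reduces to the simple entailment check described earlier: test whether E has an instance which is a subset of the computed closure.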
This process is clearly correct, in that if it gives a positive result then indeed S does RDF (RDFS) entail E. It is not, however, complete: there are cases of S entailing E which are not detectable by this process. Examples include:

ex:a ex:p "string"^^xsd:string .

which RDF entails

ex:a ex:p _:b .
_:b rdf:type xsd:string .

and

ex:a rdfs:subPropertyOf _:b .
_:b rdfs:domain ex:c .
ex:d ex:a ex:e .

which RDFS entails

ex:d rdf:type ex:c .

Both of these can be handled by allowing the rules to apply to a generalization of the RDF syntax in which literals may occur in subject position and blank nodes may occur in predicate position.
Consider generalized RDF triples, graphs, and datasets instead of RDF triples, graphs and datasets (extending the generalization used in [HORST04] and following exactly the terms used in [OWL2-PROFILES]). The semantics described in this document applies to the generalization without change, so that the notions of interpretation, satisfiability and entailment can be used freely. Then we can replace the first RDF entailment pattern with the simpler and more direct
which gives the entailments;
ex:a ex:p "string"^^xsd:string . by GrdfD1
ex:b ex:q "string"^^xsd:string .
"string"^^xsd:string rdf:type xsd:string .
which is an instance (in generalized RDF) of the desired conclusion, above.
The second example can be derived using the RDFS rules:
ex:a rdfs:subPropertyOf _:b . by rdfs7
_:b rdfs:domain ex:c .
ex:d ex:a ex:e .
ex:d _:b ex:c .
ex:d rdf:type ex:c . by rdfs2
Where the entailment patterns have been applied to generalized RDF syntax but yield a final conclusion which is legal RDF.
With the generalized syntax, these rules are complete for both RDF and RDFS entailment. Stated exactly:
Let S and E be RDF graphs. Define the generalized RDF (RDFS) closure of S towards E to be the set obtained by the following procedure.
1. Add to S all the RDF (and RDFS) axiomatic triples which do not contain any container membership property IRI.
2. For each container membership property IRI which occurs in E, add the RDF (and RDFS) axiomatic triples which contain that IRI.
3. If no triples were added in step 2., add the RDF (and RDFS) axiomatic triples which contain
rdf:_1.
4. Apply the rules GrdfD1 and rdfD2 (and the rules rdfs1 through rdfs13), with D={
rdf:langString,
xsd:string), to the set in all possible ways, to exhaustion.
Then we have the completeness result:
If S is RDF (RDFS) consistent, then S RDF entails (RDFS entails) E just when the generalized RDF (RDFS) closure of S towards E simply entails E.
The closures are finite. The generation process is decidable and of polynomial complexity. Detecting simple entailment is NP-complete in general, but of low polynomial order when E contains no blank nodes.
Every RDF(S) closure, even starting with the empty graph, will contain all RDF(S) tautologies which can be expressed using the vocabulary of the original graph plus the RDF and RDFS vocabularies. In practice there is little utility in re-deriving these, and a subset of the rules can be used to establish most entailments of practical interest.
If it is important to stay within legal RDF syntax, rule rdfD1 may be used instead of GrdfD1, and the introduced blank node can be used as a substitute for the literal in subsequent derivations. The resulting set of rules will not however be complete.
As noted earlier, detecting datatype entailment for larger sets of datatype IRIs requires attention to idiosyncratic properties of the particular datatypes.
This section is non-normative.
To keep the exposition simple, the RDF semantics has been phrased in a way which requires interpretations to be larger than absolutely necessary. For example, all interpretations are required to interpret the whole IRI vocabulary, and the universes of all D-interpretations must contain all possible strings and therefore be infinite. This appendix sketches, without proof, how to re-state the semantics using smaller semantic structures, without changing any entailments.
Basically, it is only necessary for an interpretation structure to interpret the names actually used in the graphs whose entailment is being considered, and to consider interpretations whose universes are at most as big as the number of names and blank nodes in the graphs. More formally, we can define a pre-interpretation over a vocabulary V to be a structure I similar to a simple interpretation but with a mapping only from V to its universe IR. Then when determining whether G entails E, consider only pre-interpretations over the finite vocabulary of names actually used in G union E. The universe of such a pre-interpretation can be restricted to the cardinality N+B+1, where N is the size of the vocabulary and B is the number of blank nodes in the graphs. Any such pre-interpretation may be extended to simple interpretations, all of which which will give the same truth values for any triples in G or E. Satisfiability, entailment and so on can then be defined with respect to these finite pre-interpretations, and shown to be identical to the ideas defined in the body of the specification.
When considering D-entailment, pre-interpretations may be kept finite by weakening the semantic conditions for datatyped literals so that IR need contain literal values only for literals which actually occur in G or E, and the size of the universe restricted to (N+B)×(D+1), where D is the number of recognized datatypes. (A tighter bound is possible.) For RDF entailment, only the finite part of the RDF vocabulary which includes those container membership properties which actually occur in the graphs need to be interpreted, and the second RDF semantic condition is weakened to apply only to values which are values of literals which actually occur in the vocabulary. For RDFS interpretations, again only that finite part of the infinite container membership property vocabulary which actually occurs in the graphs under consideration needs to be interpreted. In all these cases, a pre-interpretation of the vocabulary of a graph may be extended to a full interpretation of the appropriate type without changing the truth-values of any triples in the graphs.
The whole semantics could be stated in terms of pre-interpretations, yielding the same entailments, and allowing finite RDF graphs to be interpreted in finite structures, if the finite model property is considered important.
This section is non-normative.
The empty graph is entailed by any graph, and does not entail any graph except itself.
The empty graph is true in all interpretations, so is entailed by any graph. If G contains a triple <a b c>, then any interpretation I with IEXT(I(b))={ } makes G false; so the empty graph does not entail G. QED.
A graph entails all its subgraphs.
If I satisfies G then it satisfies every triple in G, hence every triple in any subset of G. QED.
A graph is entailed by any of its instances.
Suppose H is an instance of G with the instantiation mapping M, and that I satisfies H. For blank nodes n in G which are not in H define A(n)=I(M(n)); then I+A satisfies G, so I satisfies G. QED.
Every graph is satisfiable.
Consider the simple interpretation with universe {x}, IEXT(x)= <x,x > and I(aaa)=x for any IRI aaa. This interpretation satisfies every RDF graph. QED.
G simply entails a graph E if and only if a subgraph of G is an instance of E.
If a subgraph E' of G is an instance of E then G entails E' which entails E, so G entails E. NOw suppose G entails E, and consider the Herbrand interpretation I of G defined as follows. IR contains the names and blank nodes which occur in the graph, with I(n)=n for each name n; n is in IP and <a, b> in IEXT(n) just when the triple <a n b> is in the graph. (For IRIs which do not occur in the graph, assign them values in IR at random.) I satisfies every triple <s p o> in E; that is, for some mapping A from the blank nodes of E to the vocabulary of G, the triple <[I+A](s) I(p) [I+A](o)> occurs in G. But this is an instance of <s p o> under the instance mapping A; so an instance of E is a subgraph of G. QED.
if E is lean and E' is a proper instance of E, then E does not entail E'.
Suppose E entails E', then a subgraph of E is an instance of E', which is a proper instance of E; so a subgraph of E is a proper instance of E, so E is not lean. QED.
If E contains an IRI which does not occur in S, then S does not entail E.
IF S entails E then a subgraph of S is an instance of E, so every IRI in E must occur in that subgraph, so must occur in S. QED.
For any graph H, if sk(G) entails H there there is a graph H' such that G entails H' and H=sk(H').
The skolemization mapping sk substitutes a unique new IRI for each blank node, so it is 1:1, so has an inverse. Define ks to be the inverse mapping which replaces each skolem IRI by the blank node it replaced. Since sk(G) entails H, a subgraph of sk(G) is an instance of H, say A(H) for some instance mapping A on the blank nodes in H. Then ks(A(H)) is a subgraph of G; and ks(A(H))=A(ks(H)) since the domains of A and ks are disjoint. So ks(H) has an instance which is a subgraph of G, so is entailed by G; and H=sk(ks(H)). QED.
For any graph H which does not contain any of the "new" IRIs introduced into sk(G), sk(G) simply entails H if and only if G simply entails H.
Using the terminology in the previous proof: if H does not contain any skolem IRIs, then H=ks(H). So if sk(G) entails H then G entails ks(H)=H; and if G entails H then sk(G) entails G entails H, so sk(G) entails H. QED.
This section is non-normative.
The RDF semantic conditions do not place formal constraints on the meaning of much of the RDF vocabulary which is intended for use in describing containers and bounded collections, or the reification vocabulary intended to enable an RDF graph to describe RDF triples. This appendix briefly reviews the intended meanings of this vocabulary.
The omission of these conditions from the formal semantics is a design decision to accommodate variations in existing RDF usage and to make it easier to implement processes to check formal RDF entailment. For example, implementations may decide to use special procedural techniques to implement the RDF collection vocabulary.
This section is non-normative.
The intended meaning of this vocabulary is to allow an RDF graph to act as metadata describing other RDF triples.
Consider an example graph containing a single triple:
ex:a ex:b ex:c .
and suppose that IRI
ex:graph1 is used to identify this graph. Exactly how this identification is achieved is external to the RDF model, but it might be by the IRI resolving to a concrete syntax document describing the graph, or by the IRI being the associated name of a named graph in a dataset. Assuming that the IRI can be used to refer to the triple, then the reification vocabulary allows us to describe the first graph in another graph:
ex:graph1 rdf:type rdf:Statement .
ex:graph1 rdf:subject ex:a .
ex:graph1 rdf:predicate ex:b .
ex:graph1 rdf:object ex:c .
The second graph is called a reification of the triple in the first graph.
Reification is not a form of quotation. Rather, the reification describes the
relationship between a token of a triple and the resources that the triple refers
to. The value of the
rdf:subject property is not the
subject IRI itself but the thing it denotes, and similarly for
rdf:predicate and
rdf:object. For example, if the referent of
ex:a is Mount Everest, then the subject of the reified triple is also the mountain, not the IRI which refers to it.
Reifications can be written with a blank node as subject, or with an IRI subject which does not identify any concrete realization of a triple, in both of which cases they simply assert the existence of the described triple.
The subject of a reification is intended to refer to a concrete realization of an RDF triple, such as a document in a surface syntax, rather than a triple considered as an abstract object. This supports use cases where properties such as dates of composition or provenance information are applied to the reified triple, which are meaningful only when thought of as referring to a particular instance or token of a triple.
A reification of a triple does not entail the triple, and is not entailed by it. The reification only says that the triple token exists and what it is about, not that it is true, so it does not entail the triple. On the other hand, asserting a triple does not automatically imply adds a generic membership property which holds regardless of position, and classes containing all the containers and all the membership properties.
One should understand this vocabulary as describing containers, rather than as a tool for constructing them, as would typically be supplied by a programming language. cannot be formally sanctioned by the RDF formal semantics..
If the container is of an ordered type, then the ordering of items in the container is intended to be
indicated by the numerical ordering of the container membership
properties, which are assumed to be single-valued.
However, these informal interpretations are not reflected in any formal RDF
entailments.
The RDF semantics does not support any entailments which could arise from enumerating
the elements of an unordered
rdf:Bag in a different order. For example,
_:xxx rdf:type rdf:Bag .
_:xxx rdf:_1 ex:a .
_:xxx rdf:_2 ex:b .
does not entail
_:xxx rdf:_1 ex:b .
_:xxx rdf:_2 ex:a .
(If this conclusion were valid, then the result of adding it to the original graph would be entailed by the graph, and this
it is consistent to assert that something assert that a container under-specify.
The RDFS semantic conditions require that any
subject of the
rdf:first property, and any subject or object of
the
rdf:rest property, be of
rdf:type rdf:List.
This section is non-normative.
The basic idea of using an explicit extension mapping to allow self-application without violating the axiom of foundation was suggested by Christopher Menzel. The generalized RDF syntax used in Appendix A, and the example showing the need for it, were suggested by Herman ter Horst, who also proved completeness and complexity results for the rule sets. Jeremy Carroll first showed that simple entailment is NP-complete in general. Antoine Zimmerman suggested several simplifications and improvements to the proofs and presentation.
The RDF 1.1 editors acknowledge valuable contributions from Thomas Baker, Dan Brickley, Gavin Carothers, Jeremy Carroll, Pierre-Antoine Champin, Richard Cyganiak, Martin J. Dürst, Alex Hall, Steve Harris, Ivan Herman, Eric Prud'hommeaux, Andy Seaborne, David Wood and Antoine Zimmermann.
This specification is a product of extended deliberations by members of the RDF Working Group. It draws upon the earlier specification [RDF-MT], whose editor acknowledged valuable inputs from Jeremy Carroll, Dan Connolly, Jan Grant, R. V. Guha, Herman ter Horst, Graham Klyne, Ora Lassilla, Brian McBride, Sergey Melnick, Peter Patel-Schneider, Jos deRoo and Patrick Stickler.
This document was prepared using the ReSpec.js specification writing tool developed by Robin Berjon. | http://www.w3.org/TR/2013/WD-rdf11-mt-20130723/ | CC-MAIN-2015-14 | refinedweb | 8,396 | 52.9 |
This patch implements the demangling functionality as described in the
Vector Function ABI. This patch will be used to implement the
SearchVectorFunctionSystem(SVFS) as described in the RFC:
A fuzzer is added to test the demangling utility.
Minor Update. Cosmetic changes.
Hi all,
thank you for the reviews so far.
I am picking up this change-set to continue working on it. Thank you @aranisumedh for your work!
Question: what is the fuzzer supposed to do? Generate random strings to be parsed by VABI::getVectorName?
Francesco
@lebedev.ri, @jdoerfert I agree that the names ParameterKind and ParamType are too generic to be exposed in the llvm namespace, but I think that moving them in the VABI namespace might not be the right thing to do. These enum and structure are supposed to support other types of parameters of vector function that might not be OpenMP specific or directly related to a vector function ABI specification. This is one of the goals of the RFC submitted for the SVFS.
Therefore I suggest to move them back to the llvm namespace, with names that will make it clar that we are talking about function vectorization.
How about the following?
ParameterKind -> VectorFunctionParameterKind or VFParameterKind
ParamType -> VectorFunctionParameter or VFParameter (we might want get rid of the Type token too)
Does that make sense to you?
I have uploaded a new version with the fuzzer requested by @lebedev.ri .
Cosmetic changes to a comment.
How about the following?
ParameterKind -> VectorFunctionParameterKind or VFParameterKind
ParamType -> VectorFunctionParameter or VFParameter (we might want get rid of the Type > token too)
Does that make sense to you?
ParameterKind -> VectorFunctionParameterKind or VFParameterKind
ParamType -> VectorFunctionParameter or VFParameter (we might want get rid of the Type > token too)
That is fine with me (both the VectorFunction or the VF version). It would be nice and consistent to put VF (or VectorFunction) in front of all these names (ParameterKind, ISAKind, ParamType) as done for VectorFunctionShape.
Sorry, meant to reply, but lost track.
I think that would be good-enough.
files should end with newlines
Question: does getVectorName() run *all* of the parsing code in SearchVectorFunctionSystem.cpp?
Yes, this should be ok i suppose.
I have updated the code according to the last round of reviews. I have removed pieces of code that were not needed anymore for testing.
The parser now is split from the VFParameter class (the old ParamType class).
Hi @sdesmalen:
it would be better to parse the mangled string incrementally, rather than extracting each feature from the string individually.
I think the parser now does what you ask for the "variable length" part of the mangled string, which is the one that holds the <vlen> and the <parameters>.
Good catch. Now it does.
I have done the following updates:
Not a review, but some thoughts nonetheless.
have you considered std::tie() ? Not sure if it would work/help here.
Documentation commends have 3x /
VectorVariant
But, i think this function should return llvm::Optional<Some Struct>, or error_or maybe
This is unreachable, right? Else i think this might need better error propagation strategy.
same, and elsewhere.
I have changed the signature to use a single input parameter. The method is still returning a bool.
I have used optional in the static method that creates the VFShape instance.
I have replace the output type with Optional<VFParamKind> and moved the llvm_unreachable at the end of the function.
Done. The static method now returns an optional value.
In this new patch I have:
I have deferred the handling of the mask token "M" to the VFParameter struct, introducing a new VFPAramKind that handles global function predication. This should be more in line with the aim of this implementation, which is to make VFShape independent from the Vector Function ABI. In particular, there are use cases that @simoll cares about where predication can be attached to single parameters, and not to the whole function.
Tests to make sure the Global Predicate is added correct;ly has been added for all the targets supported in this patch (x86 and aarch64).
I'm fine with this (one small nit below).
@simoll, @lebedev.ri, any more comments or can this go in?
Why is this an Optional? It seems to always return a proper VFParamKind if it returns at all.
Looks ok.
(Note that i'm only looking at code, not correctness of demangling here.)
There's ongoing effort to migrate to llvm::Alignment (i don't recall the name)
I'm not sure why this should not also return Optional<ParsedData>.
One minor thing: VFABI parsing should succeed for any well-formed AVFBI vector function name (beyond what's listed in the VFISAKind enum). This would open up the functionality for external users.
How about renaming to ISA_Unknown ?
Why not carry on with parsing a well-formed AFABI String if the ABI is unknown?
This would enable support for VFABIs not listed here.
I have address all comments. Thank you for your feedback @jdoerfert @simoll @lebedev.ri !
Additionally, I have renames the enums values of VFISAKind from VFISAKind::ISA_* to VFISAKind::*.
yeah, sorry,Optional is an unnecessary complication here.
LGTM. Thanks!
Thanks for all the changes @fpetrogalli, this patch is really taking shape! Just added a few more drive-by comments.
Given that ScalarName and VectorName are not part of the equality-comparison, does that imply they should not be part of VFShape ?
nit: Can we rename this main function to something like tryDemangleForVFABI ?
Why is it named getISA which starts parsing at position4 rather than tryParseISA which starts parsing at position 0?
Can we make the dropping of these substrings part of getMask and getISA (on success) ?
nit: this comment seems unnecessary (the code says it all)
I see. How about I refactor this as follow?
struct VFShape {
unsigned VF; // Vectorization factor.
bool IsScalable; // True if the function is a scalable function.
VFISAKind ISA; // Instruction Set Architecture.
SmallVector<VFParameter, 8> Parameters; // List of parameter informations.
};
struct VFInfo {
VFShape Shape;
StringRef ScalarName; // Scalar Function Name.
StringRef VectorName; // Vector Function Name associated to this VFShape.
;
I think this makes more sense, I just want to double check with you before applying the changes.
HI @sdesmalen ,
I have updated the patch according to your feedback. In particular, I have modified all the parsing method to use the ParseRet enum instead of booleans. It looks better now, thanks.
Once outstanding work is the comparison operator of VFShape. I have proposed a solution in a comment, let me know what you think, I will apply it if you agree on the approach. BTW: good catch. The VFShape should carry information only about the shape, not about the names of the functions.).
Thank you,
In D66024#1668515, @fpetrogalli wrote:
I have updated the patch according to your feedback. In particular, I have modified all the parsing method to use the ParseRet enum instead of booleans. It looks better now, thanks.
Agreed, this looks good now, thanks!
Yes, that seems like a good suggestion.
That seems like the better choice given how you've shaped the interface that returns a Optional<VFShape>, but I don't really understand why this would break CI.
In D66024#1669011, @sdesmalen wrote:).
That seems like the better choice given how you've shaped the interface that returns a Optional<VFShape>, but I don't really understand why this would break CI.
The way I am building the VFShape is as follows:
const StringRef AttrString =
CI->getCalledFunction()->getFnAttribute("vector-function-abi-variant").getValueAsString();
SmallVector<StringRef, 8> ListOfStrings;
AttrString.split(ListOfStrings, ",");
SmallVector<VFShape, 8> Ret;
for (auto MangledName : ListOfStrings) {
auto Shape = VFShape::getFromVFABI(MangledName);
if (Shape.hasValue() && Shape.getValue().ScalarName == ScalarName)
Ret.push_back(Shape.getValue());
}
Most of the functions don't have the attribute, which result in list of string consisting of a single element, which is an empty string, therefore the parser fails and the exception is raised.
Maybe there is a better way to do it, but given the Optional<VFShape> return, it doesn't make sense to raise the exception on failure.
I have split the implementation of VFShape into VFShape and VFInfo, as agreed.
I have also found a couple of bugs thanks to the fuzzer (that was a great suggestion @lebedev.ri !). I have added the correspondent unit tests to prevent regressions. The bugs were around invalid mangled name strings and the parsing of the align token.
Regards,
I added the Unknown ISA handling in the test ISAIndependentMangling, and updated the comments in the fuzzer.
Can't tryDemangleForVFABI return a Optional<VFInfo> directly? (rather than construct it here from the ParsedData)
This also makes me question the need for ParsedData.
I could, but I think it is not the right thing to do. I expect the the`VFInfo` struct to grow beyond handling Vector Function ABI and OpenMP information (it is one of the requirements). When this happens, all we need to do is just to to change this line adding a proper ParsedData -> VFInfo constructor instead of having to adapt the parser to the new VFInfo class.
If VFInfo turns into a class that supports a superset of the VFABI, the parser could construct a VFInfo object that populates less fields through having multiple constructors for VFInfo, or through a constructor with default arguments for the fields that are not applicable.
In the end there needs to be code somewhere that fills in the VFInfo object. Having a layer in the middle that just passes the information around offers little value, unless that layer adds some new functionality. Without such functionality, I think it makes more sense to return VFInfo directly.
I have removed the ParsedData struct as requested by @sdesmalen , my reasoning for keeping it was wrong.
I have updated the tests and the fuzzer accordingly.
I agree with you, thank you.
Sorry, just spotting this now. Instead of having struct VFInfo; here, can you just move the definition of VFInfo here?
nit: this code is still commented out.
Restore the function name to tryParseForVFABI.
Thanks @fpetrogalli, LGTM! | https://reviews.llvm.org/D66024?id=214419 | CC-MAIN-2022-27 | refinedweb | 1,666 | 66.03 |
ZK/How-Tos/Concepts and Tricks
Concepts and Tricks
Concepts
Parameter and Event Listener
It is simple to access a parameter by use of Execution.getParameter() or ${param[key]} in EL as follows.
<button label="${param['what']}"/>
However, you cannot access it in an event listener. For example, the button's label becomes empty once it is pressed in the following example.
<button label="${param['what']}" onClick="doIt()"/>
<zscript>
void doIt() {
    self.label = Executions.getCurrent().getParameter("what");
}
</zscript>
Why?
The parameter map is available only while the page is being evaluated. When the user later clicks the button, a new request is made and the previous parameters are gone.
Solution:
<button label="${param['what']}" onClick="doIt()"/>
<zscript>
tmp = Executions.getCurrent().getParameter("what");
void doIt() {
    self.label = tmp;
}
</zscript>
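The same idea can be seen in plain Java terms: copy the short-lived value into a variable that the listener closes over. The sketch below is a self-contained analogy, not ZK API code; `Listener` is just a stand-in for a ZK event handler.

```java
import java.util.ArrayList;
import java.util.List;

public class CaptureDemo {
    interface Listener { void fire(); }

    static String run() {
        final List<String> log = new ArrayList<String>();
        // "Page evaluation": the parameter is only available now.
        final String what = "hello"; // stands in for Executions.getCurrent().getParameter("what")
        Listener onClick = new Listener() {
            public void fire() { log.add(what); } // uses the captured copy, not the request
        };
        // "Later request": the original parameter map is gone, but the copy survives.
        onClick.fire();
        return log.get(0);
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints "hello"
    }
}
```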
Grid versus Listbox
Both grid and listbox support multiple columns and rows, and they look similar. Which one should you use?
Grid
- Main use: table-like presentation of data.
- Each cell can be dramatically different from the others
- To implement selections, developers have to add a checkbox to each row. There is no easy way to implement single-selection (with radio button).
Listbox
- Main use: selectable data
- Selections are supported directly (by use of getSelectedItem and others)
- If a textbox is contained in a cell, users cannot select its content.
Why alert instead of an error box when accessing the value property in BeanShell
In the following code snippet, the error box is not shown. Rather, ZK considers it a programming error (by showing the exception with JavaScript's alert).
<zk>
    <textbox id="box" constraint="no empty"/>
    <button label="cause validation error" onClick="alert(box.value)"/>
</zk>
Why? BeanShell intercepts all exceptions and uses its own exception to represent them. Thus, ZK doesn't know it is actually caused by WrongValueException.
Solution: Use getValue() and setValue() instead of accessing the property name directly.
<zk>
    <textbox id="box" constraint="no empty"/>
    <button label="cause validation error" onClick="alert(box.getValue())"/>
</zk>
How to implement paging for listboxes with many items
If you have a listbox with many items (say, 1,000,000), you cannot use the internal paging mechanism of the listbox (in other words, you cannot set the 'paging' mold). The reason is that the internal paging mechanism will create as many listitems as there are items.
A listbox that is controlled by a paginal always displays the listitems from offset "pagesize * activePage" up to "pagesize * (activePage + 1)". Consequently, you must *not* associate a paginal with a listbox whose number of listitems does not match the actual number of elements. In other words, do not let the listbox be the controllee of the paging element. Instead, create a separate paging element like so:
<vbox>
    <listbox id="box"/>
    <paging id="paging"/>
</vbox>
Set the number of actual items as the "totalSize" of the paging controller:
paging.setTotalSize(1000000);
Now you have to intercept the "onPaging" event:
paging.addEventListener("onPaging", new EventListener() {
    public void onEvent(Event e) {
        PagingEvent pe = (PagingEvent) e;
        int desiredPage = pe.getActivePage();
        // now retrieve items desiredPage * pagesize .. (desiredPage+1) * pagesize
        // and assign their content to listitem 0 .. pagesize
    }
});
Consequently, there are always only as many listitems as are currently displayed.
Note that a drawback of this technique is that you have to place the paging controller outside the listbox. You cannot add this paging controller to the listbox, since the listbox checks that at most one child that is an instance of org.zkoss.zul.Paging is added, lest it conflict with the internal paging component. A second drawback is that you cannot use the "box.getPageSize()" methods, which rely on internal paging. You must use "paging.getPageSize()" instead. Also, make sure you do *not* call box.setPaginal(paging) if you use this technique.
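To make the fetch range in that listener concrete, here is the page-window arithmetic as a self-contained sketch (plain Java, no ZK classes; clamping the last page to the total size is an assumption you may or may not need, depending on your data source):

```java
public class PageWindow {
    // Rows to fetch for a given active page: the half-open range [from, to),
    // clamped to totalSize so the last (partial) page does not overrun.
    static int[] window(int activePage, int pageSize, int totalSize) {
        int from = activePage * pageSize;
        int to = Math.min(from + pageSize, totalSize);
        return new int[] { from, to };
    }

    public static void main(String[] args) {
        int[] w = window(3, 20, 1000000);
        System.out.println(w[0] + ".." + w[1]); // 60..80
        int[] last = window(50000, 20, 1000005); // last, partial page
        System.out.println(last[0] + ".." + last[1]); // 1000000..1000005
    }
}
```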
On Width and Percentages
Using the width= attribute with percentages in ZK requires careful attention because its meaning is different from other frameworks. On one hand, ZK strives to be a browser-independent framework. On the other hand, when rendering HTML, ZK often passes the width= attribute on to the HTML elements it uses, which leads to unexpected effects. The reason is that the HTML elements to which the width= attribute is being passed form a different hierarchy than the ZUL elements from which they originate.
To understand this, consider the following scenario:
<hbox width="100%">
    <div width="50%" style="background-color: blue;">
        Left
    </div>
    <div width="50%" style="background-color: green;">
        Right
    </div>
</hbox>
Here, the intention is for the parent hbox to occupy 100% of its parent (maybe a window), and for each 'div' child to occupy 50% of its parent. In CSS [W3C], percentage widths refer to the containing block.
However, ZK renders the above roughly as follows:
<table width="100%">
    <tr>
        <td>
            <div style="width:50%">
                Left
            </div>
        </td>
        <td>
            <div style="width:50%">
                Right
            </div>
        </td>
    </tr>
</table>
Users familiar with CSS will note that the width:50% attribute now no longer refers to the 'table' element that was used to render the 'hbox', but to the 'td' element ZK inserted! The default width of the td element is 50% of the table (since there are two 'td' elements), hence the effective width of the 'div' element is 25% of the table. It is apparent that in order to use width= properly, one must understand how ZK translates between ZUL and HTML (contrary to the intentions of ZK's developers.)
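The 25% figure follows from multiplying the nested percentages together. A tiny self-contained sketch of that arithmetic (illustration only, not ZK code):

```java
public class EffectiveWidth {
    // Effective width of nested percentage widths: the product of the fractions.
    static double effective(double... fractions) {
        double f = 1.0;
        for (double x : fractions) f *= x;
        return f;
    }

    public static void main(String[] args) {
        // A div at width:50% inside a td that defaults to 50% of the table:
        System.out.println(effective(0.5, 0.5)); // 0.25, i.e. 25% of the table
    }
}
```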
To address this problem, ZK provides a "widths" attribute for box elements. The 'widths' attribute allows the assignment of 'width:' attributes to the 'td' elements that make up a box. To achieve the desired effect, you would have to use:
<hbox widths="50%,50%" width="100%"> <div width="100%" style="background-color: blue;"> Left </div> <div width="100%" style="background-color: green;"> Right </div> </hbox>
which will be rendered to
<table width="100%"> <tr> <td style="width:50%"> <div style="width:100%"> Left </div> </td> <td style="width:50%"> <div style="width:100%"> Right </div> </td> </tr> </table>
Thus, the table takes up 100% of its parent, each 'td' elements takes up 50% of the table's width, and the 'div' elements occupying each 'td' take up the entire space (100%).
The 'widths' attribute provides a work-around for boxes, but, unfortunately, it does not provide a general solution. By passing on 'width=' attributes to HTML elements, ZK puts the rendering at the whim of the browser being used, therefore exposing the ZK user to all width-related rendering bugs. Moreover, these bugs are hard to debug because they require the ZK user to understand ZK's rendering process, before they can begin to address the browser peculiarities causing them.
For instance, specifying a width of 100% to a <textbox> will result in a style of "width:100%" applied to a <input type="text"> element, which triggers bugs in different browsers - for instance, IE ignores the width restriction when the value placed in the textbox is larger than the box. In this case, IE stretches the box. Workaround
Tricks[edit]
Change or customize the Toolbars in FCKeditor[edit]
Since March 13, 2007, the FCKeditor has supported the new method setCustomConfigurationsPath(String url) that you can specify the url of the custom configuration .js file. You do not need to modify the fckconfig.js inside the fckez.jar any more. Please see FCKeditor Configurations File for details on what can be customized.
So here is the new way to create a new toolbar set for the FCKeditor.
1. Create a JavaScript .js custom configuration file, say, myconfig.js under somewhere of your web path. For description convenience, this example would put it under root path; i.e. "/myconfig.js".
2. Create a new toolbar set "Simple" inside the file (myconfig.js). e.g.
FCKConfig.ToolbarSets["Simple"] = [ ['Bold','Italic','Underline','StrikeThrough','-','TextColor','BGColor'] ];
3. To use this /myconfig.js, you have to specify it in the <fckeditor> tag
<fckeditor customConfigurationsPath="/myconfig.js" toolbarSet="Simple"/>
or by Java methods
FCKeditor myFCKeditor = new FCKeditor(); myFCKeditor.setCustomConfigurationsPath("/myconfig.js"); myFCKeditor.setToolbarSet("Simple");
4. OK. You are done.
The following is the old way that needs you to modify the fckconfig.js inside the fckez.jar
To change a toolbar in FCKeditor or create a new one, the fckconfig.js file in the fckez.jar file will have to be modified.
1. make a backup copy of the fckez.jar file in case anything goes wrong.
2. rename the fckez.jar file to fckez.zip (jar files are just zip files, so this works)
3. unzip the fckez.zip file somewhere. A new directory is suggested.
4. find the fckconfig.js, and open it for editing. It should be at \web\js\ext\FCKeditor\fckconfig.js
5. find the toolbars section. The following is the Basic one.
FCKConfig.ToolbarSets["Basic"] = [ ['Bold','Italic','-','OrderedList','UnorderedList','-','Link','Unlink','-','About'] ] ;
6. To create a new ToolbarSet, copy everything between the FCKConfig.ToolbarSets... to the semicolon ;
7. Paste it in following the semicolon, and change the name.
FCKConfig.ToolbarSets["Simple"] = [ ['Bold','Italic','-','OrderedList','UnorderedList','-','Link','Unlink','-','About'] ] ;
8. Change the Toolset as you see fit. Buttons are defined in the single quotes. the '-' is a toolbar separator. Lets remove the link and unlink buttons.
FCKConfig.ToolbarSets["Simple"] = [ ['Bold','Italic','-','OrderedList','UnorderedList','-','About'] ] ;
9. Save the file and exit the editor program.
10. navigate up the dir tree till you get to web folder level, and place the whole tree into the zip file. (this will preserve the location of the modified file in the dir structure.)
11. rename the zip file back to fckez.jar, and put the jar file into the tomcat shared/lib directory with the other zk jar files.
12. change your code that calls the FCKeditor in you zk application to use the 'Simple' toolbarset.
FCKeditor myFCKeditor = new FCKeditor(); myFCKeditor.setToolbarSet("Simple");
13. start Tomcat, and use a browser to view your application. You should see your modified toolbar set in FCKeditor.
How to open treeitem at start[edit]
Please try this with the newest version as a work-round.
...
</treecols> </tree> <zscript> int[] path ={0}; tree.renderItemByPath(path); int[] path ={1}; tree.renderItemByPath(path); </zscript>
... And, please refer to javadoc- treemodel for detail on path and
please feel free to post further question to ZK forum on sourceforge
Due to some spec change, after 3.0.3, please refer the following: ...
</treecols> </tree> <zscript> int[] path ={0.0}; tree.renderItemByPath(path); int[] path ={1.0}; tree.renderItemByPath(path); </zscript>
... Here is a demo code to play around
<?xml version="1.0" encoding="UTF-8"?> <zk xmlns=""> <window title="Dynamically Change by Model"> <zscript><![CDATA[ class MySimpleTreeNode extends SimpleTreeNode { private String myData = null; public MySimpleTreeNode(String data, List children) { super(data, children); myData = data.toString(); } public String toString() { return "Node: " + myData; } public void append(String data) { myData = myData + data; } public Object getData() { return myData; } } List aChildren = new ArrayList(); List empty = new ArrayList(); List a2Children = new ArrayList(); MySimpleTreeNode a20 = new MySimpleTreeNode("A2-0", empty); MySimpleTreeNode a21 = new MySimpleTreeNode("A2-1", empty); MySimpleTreeNode a22 = new MySimpleTreeNode("A2-2", empty); a2Children.add(a20); a2Children.add(a21); a2Children.add(a22); MySimpleTreeNode a0 = new MySimpleTreeNode("A0", empty); MySimpleTreeNode a1 = new MySimpleTreeNode("A1", empty); MySimpleTreeNode a2 = new MySimpleTreeNode("A2", a2Children); aChildren.add(a0); aChildren.add(a1); aChildren.add(a2); List children = new ArrayList(); MySimpleTreeNode a = new MySimpleTreeNode("A", aChildren); children.add(a); List bbChildren = new ArrayList(); MySimpleTreeNode b00 = new MySimpleTreeNode("B0-0", empty); bbChildren.add(b00); List bChildren = new ArrayList(); MySimpleTreeNode b0 = new MySimpleTreeNode("B0", bbChildren); MySimpleTreeNode b1 = new MySimpleTreeNode("B1", empty); MySimpleTreeNode b2 = new MySimpleTreeNode("B2", empty); bChildren.add(b0); bChildren.add(b1); bChildren.add(b2); MySimpleTreeNode b = new MySimpleTreeNode("B", bChildren); children.add(b); List rList = new ArrayList(); rList.add(a); rList.add(b); MySimpleTreeNode r = new MySimpleTreeNode("Root", rList); List rootList = new ArrayList(); rootList.add(r); MySimpleTreeNode root = new MySimpleTreeNode("Root", rootList); SimpleTreeModel stm = new SimpleTreeModel(root); public void renderByPath(Object obj){ int[] result = stm.getPath(root,obj); for(int i =0; i < result.length;i++) { 
System.out.println(result[i]); } tree.renderItemByPath(result); } public void renderByPathMul(){ int l = tree.getTreechildren().getChildren().size(); System.out.println(l); for(int i=0; i<l; i++) { int[] path ={0,i}; tree.renderItemByPath(path); } } ]]></zscript> <vbox> <tree model="${stm}" id="tree" width="700PX"> </tree> <hbox> <button label='renderByPath A2' onClick='renderByPath(a2)' /> <button label='renderByPath B0-0' onClick='renderByPath(b00)' /> <button label='renderByPath A2-1' onClick='renderByPath(a21)' /> <button label='renderByPath Root' onClick='renderByPath(r)' /> <button label='renderByPath A' onClick='renderByPath(a)' /> </hbox> </vbox> </window> </zk>
How do I programmatically create a Button that has an onClick zscript?[edit]
This piece of code does not work with ZK 3.5.1!!!
Click the "Try Me!" button on the demo page to see the source panel. Cut-n-paste in the following example then lick "Try Me!" again to load the example. Then click on the button to see the code in action.
<window title="My First Window" border="normal" width="200px"> <vbox id="myVbox"> <button label="create new button"> <attribute name="onClick"> <![CDATA[ deleteButton = new Button("Click me!"); // Store an arbitary piece of data in my button (e.g. some data primary key of the some object to delete) deleteButton.setAttribute("myDeleteButtonAttribute", "mary had a little lamb"); cond = new org.zkoss.zk.ui.util.Condition(){ boolean isEffective(Component comp) {return true;} boolean isEffective(Page page){return true;} }; // here the script just does a serverside alert showing the silly text but you could call a // business delegate function to delete something from the database. zscript = new org.zkoss.zk.ui.metainfo.ZScript("java", "alert(self.getAttribute(\"myDeleteButtonAttribute\"));", cond); eventHandler = new org.zkoss.zk.ui.metainfo.EventHandler( zscript, cond); deleteButton.addEventHandler("onClick", eventHandler); myVbox.appendChild(deleteButton); ]]> </attribute> </button> </vbox> </window>
How do I use JavaScript in zscripts?[edit]
It is important to note that the event handlers and functions that you write with zscript run on the server not in the browser. There is a feature called "client side action" where you can have the browser run some JavaScript function in parallel to your server-side event handlers. Client-side actions are well suited for graphical effects in the browser. JavaScript client actions running in the browser do not have access to your server-side business objects, the whole Java JDK, and the server-side ZK Desktop containing rich XUL objects. Server-side zscript does have direct access to all of these things and you can use any of a number of languages including JavaScript.
This code shows how you can opt to use JavaScript rather than interpreted Java (BeanShell) in your server-side event zscripts handlers:
<?page zscript- <attribute name="onClick"> hello(); </attribute> </button> </window>
You can run that example on the demo page of the zkoss.org site. Simply press the "Try Me!" button to show the editor, paste in the example, then press "Try Me!" again to load the example and press the "Say hello world" button to see it work.
What you cannot expect to do is write something like
window.setTimeout("alert('hello');", 1000);
in that serverside JavaScript function. Why? Because 'window' is only an implicit object reference in the browser's JavaScript runtime. As the event handler is running on the server you do not have direct access to objects in the browser. There is no implicit object reference called 'window' in server-side ZK. We could have given the xul window tag and id of "window" and then we could script it. Cut and paste the following example into a "Try Me!" code editor of the zkoss.org demo then click "Try Me!" again to load it:
<?page zscript- <zscript> // this is a JavaScript function that manipulates the // desktop object that is assigned the ID "window" function changeWindowTitle(title){ window.title = title; } </zscript> <label value="New title:"/> <textbox id="new_title" value="edit me..."/> <button label="Change the window title."> <attribute name="onClick"> changeWindowTitle(new_title.value); </attribute> </button> </window>
There we gave the window object an ID of 'window' so that we can refer to it in a zscript by that name and change it's title. We could still not call "window.setTimeout()" as there is no method called setTimeout on a ZK Window Component. The JavaDoc on the zkoss site to see what methods Window and other Components have. We implicitly used setTitle in that last example when we set a new value to window.title in the zscript.
One browser method that people use all the time is:
document.getElementById('some_id');
That is not going to work as 'document' is an implicit object in the browser and it does not exist in server-side ZK. Do not dispare though! ZK has 86 different types of XHTML objects in addition to the 76 XUL object types. It will build up a mixed XHTML and XUL "DOM" on the server known as the Desktop. The following example manipulates some XHTML with a JavaScript zscript:
<?page zscript- <window id="window"> <zscript> function changeParagraph(){ // this script code changes the xhtml objects on the server myParagraph.children.get(1).value = 'goodbye world'; } </zscript> <!-- The following are XHTML elements. If you don't like the prefix 'h' see the DevGuide about how to set the default namespace to be html. --> <h:p <h:img</h:img> hello world <!-- "hello world" is in an implicit Text element --> </h:p> <!-- This is a XUL button that has an onClick that changes an XHTML element --> <button label="Change hello to goodbye"> <attribute name="onClick"> changeParagraph(); </attribute> </button> </window> </zk>
You cannot change the src attribute of the XHTML IMG element. Instead use the ZUL IMAGE element. Likewise if you want to set the source on an IFRAME use the ZUL element not an XHTML element. Why? Because then when you change their state on the server in zscripts after directly accessing your businss logic ZK will automatically update the corresponding DOM elements in the browser. The ZUL elements are documented here and the ZHTML elements documented here.
Pass JavaScript variable value to ZK Server[edit]
Sometimes we need to write some browser side JavaScript function to do something. We can use CSA (Client Side Action) to trigger the operation without problem. However, after the JavaScript execution completes, we might need the value from the JavaScript variable to be accessed from within a zscript function (the code running at server side). Here is a trick to do it. Basically, we prepare a "proxy" textbox. Copy the JavaScript value to textbox's value and fake an "onChange" event to really send the value back to Server. Following is the example codes:
<zk> <script type="text/JavaScript"> <![CDATA[ function test(tbxsss) { var <textbox id="sss" value="test" onChange="alert(self.getValue());" visible="false"/> <button id="btn" label="Invoke JS" action="onclick:test(#{sss})"/> </window> </zk>
When you click the button, the JavaScript function test() is executed. It then copys the new String "abc" into the textbox's value and fake onblur and onchange events. This will cause the ZK JavaScript engine to pass the new value "abc" back to server side, update the Textbox object's value property, and fire the "onChange" ZK event. And now you can access the value from this "proxy" textbox. And if you read the example code carefully, you should notice that the textbox is "invisible" on the browser (visible = "false") so it will not interfere your layout. Have fun!
Use 'self' In Event Handlers Within 'forEach'[edit]
It is very useful to use forEach to dynamically build a user interface. If you have an event handler within a forEach you can use 'self' to refer to the component that the event happened upon. You can also use 'self' to walk through the component scope starting at the component that the event happened upon.
<window> <zscript><![CDATA[ /** * in a real application we would use something like * List iterateOverMe = sessionScope.get("searchResults"); */ String[][] iterateOverMe = { {"Fred","Flintstone"} ,{"Wilma","Flintstone"} ,{"Barney","Rubble"} ,{"Betty","Rubble"} }; void doRespondToClick(){ String </vbox> </window>
IE Overflow Scrolling of Relative Positioned DIV[edit]
The overflow bug is documented well and exists in IE6 or IE7 as well. Thus, in this case we have to specify "position:relative" to the outer tag. For example,
<window title="tree demo" border="normal"> <div style="overflow:auto;position:relative" width="300px" height="100px"> <tree id="tree" width="90%"> </div> </window>
How to Keep The Current Focused Component[edit]
We need to catch the onFocus events to book keeping who is getting the focus right now. For example,
<zk> <zscript> Component current; void changeFocus(Component t){ current = t; } </zscript> <window title="My First Window" border="normal" width="300px"> <hbox>Window 1 : <textbox id="First" onFocus="changeFocus(self)"/></hbox> </window> <window title="My Second Window" border="normal" width="300px"> <hbox>Window 2 : <textbox id="Second" onFocus="changeFocus(self)"/></hbox> </window> <window title="My Third Window" border="normal" width="300px"> <hbox>Window 3 : <textbox id="Third" onFocus="changeFocus(self)"/></hbox> </window> <window title="My Fourth Window" border="normal" width="300px"> <hbox>Window 4 : <textbox id="Fourth" onFocus="changeFocus(self)"/></hbox> </window> <button label="Show" onClick="alert(current.id)"/> </zk>
How to Disable The Progress Bar of ZK[edit]
Add the following code into your zul page.
<script type="text/javascript"><![CDATA[ window.Boot_progressbox = function (){} ]]></script>
How to access static member field of a class in zul without zscript[edit]
The EL of ZUL support tag lib, so we can write a util tag lib function to access static member field from zul.(or any other static function)
Note: ZK 3.0 introduced the xel-method directive to declare a static method directly in a ZUL page (without TLD). <?xel-method prefix="c" name="forName" class="java.lang.Class" signature="java.lang.Class forName(java.lang.String)"?> Use TLD as described below if you want to manage the declarations in a single TLD file (rather than scattering around ZUL files), or you want to be compatible with ZK 2.x.
step1. write a tag lib , I place this field in src/metainfo/tld/myutil.tld
<?xml version="1.0" encoding="ISO-8859-1" ?> <taglib> <uri></uri> <description> </description> <function> <name>sf</name> <function-class>mytld.Util</function-class> <function-signature> java.lang.String getStaticField(java.lang.String name) </function-signature> <description> </description> </function> </taglib>
step2. write a tag lib configuration, this file must be placed in src/metainfo/tld/config.xml
<?xml version="1.0" encoding="UTF-8"?> <config> <version> <version-class>org.zkoss.zk.Version</version-class> <version-uid>3.0.1</version-uid> </version> <taglib> <taglib-uri></taglib-uri> <taglib-location>/metainfo/tld/myutil.tld</taglib-location> </taglib> </config>
step3. taglib implementation
public class Util { static public String getStaticField(String name){ try{ int i = name.lastIndexOf("."); String field = name.substring(i+1,name.length()); name = name.substring(0,i); Class clz = Class.forName(name); Object obj = clz.getField(field).get(null); if(obj!=null){ return obj.toString(); } return null; }catch(Exception x){ throw new RuntimeException(x); } } }
step4. access taglib in the zul.(sf is function name declared in step1
<?taglib uri="" prefix="t" ?> <window id="${t:sf('mytld.CompIds.ID1')}" title="title"> <button label="click" onClick="alert(self.parent.id)"/> </window>
How to detect Firebug from browser[edit]
Firebug is a accepted cause of reducing performance regarding Javascript. This is a way to detect whether user enables the Firebug. Here is a Javascript code.
if(window.console && window.console.firebug){ // Do something Javascript code. }
How to resolve the issue of CSS not loaded in IE6&7 while integrating ZK and JSP.[edit]
As so far as I know, the ZK CSS file fails to load in JSP on IE6&7, so you must add the following page definition to your JSP file.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> ... //Your JSP content ...
How to pass Arguments to a Macro Component.[edit]
If you have a macro component and you are using that macro in a window, the following shows the exact syntax on how to pass arguments (which is in the developers guide but there isn't an actual example) to the macro and use them inside the macro:
Here is the macro component :
<hbox valign="center"> <button id="pmminusButton"/> <progressmeter width="${arg.width}" value="${arg.initialValue}" id="dhProgressMeter" /> <button id="pmplusButton"/> </hbox>
And here is the include statement at the top of my window file where I specify the arguments to get passed :
<?component name="dhProgressMeter" inline="true" macro-uri="/macros/dhProgressMeter.zul" width="90px" initialValue="20" ?>
The important items here are that I am declaring 2 variables to be passed to the macro component, 'width' and 'initialValue'.
In the macro itself you can see that I obtain the value of these arguments using ${arg.width} and ${arg.initialValue}
This is straightforward and I guess could have gone into a how-to but I thought I would put it in the forum so if someone does a search for "how to pass arguments to a macro" it will come up. | http://en.wikibooks.org/wiki/ZK/How-Tos/Concepts_and_Tricks | CC-MAIN-2014-42 | refinedweb | 4,350 | 57.98 |
OK,
I'm trying to write a program that uses a circular linked list. I have been playing around with a linked list program all day, and kinda understand it. Here it is, and yes it does compile.
#include <iostream>
using namespace std;

struct node
{
    char name[20];   // Name of up to 20 letters
    int age;         // D.O.B. would be better
    float height;    // In metres
    node *nxt;       // Pointer to next node
};

node *start_ptr = NULL;
node *current;           // Used to move along the list
int option = 0;

void add_node_at_end()
{
    node *temp, *temp2;  // Temporary pointers

    // Reserve space for new node and fill it with data
    temp = new node;
    cout << "Please enter the name of the person: ";
    cin >> temp->name;
    cout << "Please enter the age of the person : ";
    cin >> temp->age;
    cout << "Please enter the height of the person : ";
    cin >> temp->height;
    temp->nxt = NULL;

    // Set up link to this node
    if (start_ptr == NULL)
    {
        start_ptr = temp;
    }
    else
    {
        temp2 = start_ptr;
        while (temp2->nxt != NULL)   // Walk to the last node
            temp2 = temp2->nxt;
        temp2->nxt = temp;           // <-- temp is now the last node
    }
}

int main()
{
    add_node_at_end();
    return 0;
}
My problem is that, I can't find any good resources on circular linked lists. I know that the last node in a circular linked list points
to the first node, but I can't figure out how to do this. Should I create another pointer in the struct??
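One way to picture it: no extra pointer in the struct is needed — the last node's nxt just has to point back to start_ptr instead of being NULL. Here's a minimal sketch of what I mean (simplified node with only a name field; function names are just placeholders):

```cpp
#include <cstring>

struct node
{
    char name[20];
    node *nxt;        // in a circular list, the last node's nxt points to start_ptr
};

node *start_ptr = NULL;

// Add a node at the end; the list stays circular after every insert.
void add_node_circular(const char *name)
{
    node *temp = new node;
    strcpy(temp->name, name);
    if (start_ptr == NULL)
    {
        start_ptr = temp;
        temp->nxt = temp;              // a single node points to itself
        return;
    }
    node *last = start_ptr;
    while (last->nxt != start_ptr)     // walk to the current last node
        last = last->nxt;
    last->nxt = temp;
    temp->nxt = start_ptr;             // close the circle back to the "home card"
}

// Traverse: visit every node exactly once, stopping when we
// get back to the node we started from.
int count_nodes()
{
    if (start_ptr == NULL) return 0;
    int n = 0;
    node *p = start_ptr;
    do { n++; p = p->nxt; } while (p != start_ptr);
    return n;
}
```

Note the do/while in count_nodes(): with a circular list you can't loop "until NULL", so traversing means walking nxt pointers until you arrive back at the node you started from.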
My program is supposed to build a linked list of cards, and perform operations such as insert, delete, traverse.
Also, what is the true definition of traverse???
My directions say that a "card" can be inserted anywhere into the stack, and be traversed back to the "home card".
If anybody can get me pointed in the right direction, it would be greatly appreciated.
Again, the code above is just a normal linked list. If I am traversing any thing in the program, let me know so I can understand what that means more clearly.
John | https://www.daniweb.com/programming/software-development/threads/227500/help-with-a-circular-linked-list | CC-MAIN-2017-09 | refinedweb | 307 | 73.71 |
Step 1: Preparation
MCU
Today core subject, ATtiny85.
ISP
Any ISP that can program ATtiny85.
Battery
In general expectation, a watch should run for over 1 year without charging or replacing the battery. Based on my simple power-usage measurements and the battery specifications, a CR1220 can only run for about half a year, while CR2016, CR2025 and CR2032 can each run for over 1 to 2 years.
Display
There are 3 different sizes; 64x32 is the smallest one. (The other sizes are 128x64 and 128x32.)
Other Parts
A battery holder, a small bread board, some bread board wires, two buttons and a power switch (optional).
Step 2: Assembly
Connect all the parts on the bread board:
ATtiny85
pin 1: not connected
pin 2: set button, other button pin connect to GND
pin 3: up button, other button pin connect to GND
pin 4: GND
pin 5: OLED SDA
pin 6: not connected
pin 7: OLED SCL
pin 8: VCC
Also connect battery and OLED board to VCC and GND.
Step 3: Power Down, WDT and Time
When the MCU and OLED are turned on, they consume about 6 mA.
In order to make the watch run for over 1 year, I will use the MCU's most power-saving sleep mode, SLEEP_MODE_PWR_DOWN, when the user is not using it. According to my cheap power meter, it draws 0.1 uA with all functions disabled. But the WDT still needs to be enabled for time keeping; with the WDT enabled, it shows 4 uA. Assuming the MCU and OLED auto-sleep after 5 seconds and the user views the watch 12 times a day on average, the watch will consume about 0.2 mAh per day ((0.004 mA * 24 hours) + (6 mA * (5 / 60 / 60) hours * 12)). So a 150 mAh CR2025 battery can run for 750 days.
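As a sanity check, the daily-consumption arithmetic above can be written out. (battery_days is just an illustrative helper for the estimate, not part of the watch firmware.)

```cpp
// Estimated battery life in days, using the figures measured above.
double battery_days(double capacity_mAh)
{
    const double sleep_mA   = 0.004;         // 4 uA: power-down sleep, WDT on
    const double active_mA  = 6.0;           // MCU + OLED awake
    const double wake_hours = 5.0 / 3600.0;  // auto-sleep after 5 seconds
    const int views_per_day = 12;            // average viewings per day

    double per_day = sleep_mA * 24.0 + active_mA * wake_hours * views_per_day;
    return capacity_mAh / per_day;           // per_day works out to about 0.196 mAh
}
```

For the 150 mAh CR2025 this gives roughly 765 days; rounding the daily usage up to 0.2 mAh gives the more conservative "750 days" quoted above.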
The time source code mainly comes from PaulStoffregen. However, the power-down sleep mode stops the normal timer, so using the millis() function for time keeping is not valid. Instead I keep another variable to replace the millis() function: on each WDT interrupt, it is incremented by a certain value. The increment value depends on the WDT interval settings and the chip's oscillator. When using a 1-second WDT interrupt, my chip's calibrated increment value is 998 (around 1000 milliseconds).
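The bookkeeping itself is tiny and can be modeled off-target. In this sketch fake_millis stands in for the variable that replaces millis(), and wdt_interrupt() stands in for the interrupt handler; on the real chip the counter would be a volatile global bumped inside ISR(WDT_vect):

```cpp
// Stand-in for the variable that replaces millis(); on the real chip this
// would be a volatile global incremented inside ISR(WDT_vect).
unsigned long fake_millis = 0;
const unsigned int wdt_millis_per_interrupt = 998; // calibrated: ~1 s per interrupt

void wdt_interrupt()   // models one WDT wake-up
{
    fake_millis += wdt_millis_per_interrupt;
}
```

After 3600 nominal one-second interrupts this counter reads 3,592,800 ms — about 7 seconds short of a true hour — which is exactly why the per-chip calibration (Step 8) matters.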
I have also added the readVcc() function for monitoring the battery status.
Ref.:
WDT and power related:...
time function for Arduino v1.4:
readVcc:...
Step 4: I2C Display
The code is referenced from DigisparkOLED, but since its example code compiles to over 6K, the complete example cannot be put on their ATtiny85 products, Digispark or Digithumb (it requires commenting out the bitmap code to run). The complete example can only run on their other product, Digispark Pro (it has around 14K of flash available).
Here is what I have revised or rewritten:
- Trim out a lot of unused data, including fonts and bitmaps
- Init SSD1306 setting according to the SSD1306 data sheet page 64
- Try to support all known resolution (64x32, 128x32, 128x64)
- Support custom font
- Function for turning OLED on and off for power saving purpose
It uses the TinyWireM library, but that library has a bug (reported); you need to revise one line of code in the write() function to work around it:
if (USI_BufIdx >= USI_BUF_SIZE - 1) return 0; // dont blow out the buffer
Ref.:
TinyWireM:
DigisparkOLED:...
SSD1306 data sheet:
Another instructables using ATtiny85 and SSD1306:-...
Step 5: Custom Font
8K of flash does not have enough room to store all characters at a large font size (such as a 24-pixel font height).
As a watch only requires the 10 digit characters, we can tailor-make a binary of the selected font characters to fit in the limited space.
I will use the ImageMagick command-line tools to show how to convert custom font characters to a C header file.
This program needs the 10 digits in 2 font sizes: one with a font height of 8 pixels to show the date digits, and one with a font height of 24 pixels to show the time digits:
convert -depth 1 -font Lucida-Sans-Unicode -pointsize 11 label:00123456789 -crop 70x8+7+4 -flip -rotate 90 watch_digit.xbm
convert -depth 1 -font Cooper-Black -pointsize 25 label:00123456789 -crop 150x24+14+4 -flip -rotate 90 watch_3x_digit.xbm
Lucida-Sans-Unicode and Cooper-Black are font types in Windows 7; you may use any font type available in your OS.
The crop, flip and rotate options help to adjust the binary data into the correct position and orientation. You may change the output format from xbm to png to preview the output bitmap.
After exporting the xbm files, we can copy the font binary code into the watchdigit.h source file:
#include <avr/pgmspace.h>
#define FONTWIDTH 7
#define FONT3XWIDTH 15
static const uint8_t watch_digit[] PROGMEM = {
//watch_digit.xbm binary code
};
static const uint8_t watch_3x_digit[] PROGMEM = {
//watch_3x_digit.xbm binary code
};
Ref.:
Step 6: Program
I am using a littlewire board as the ISP, and I have made a hacked-together ISP connector for easier plug and play.
Any ISP that can program ATtiny85 should be ok.
Reference:............
Step 7: User Input
A 2-button UI for adjusting the time; the operation method is just like any simple digital watch.
Set button: selects the field to adjust; the selected field is highlighted
Up button: increments the value of the selected field
Step 8: Calibration
Debug Screen
Pressing the up button when no field is selected enters the debug screen.
Time Calibration
The first number is the WDT interrupt count; this value is used to calibrate the value of wdt_millis_per_interrupt.
This value for your chip should be calculated as:
actual time passed (in milliseconds) / WDT interrupt count
e.g. if you turned it on at 2016/01/07 23:10 and it is now 2016/01/08 13:25, then 51300000 milliseconds have passed. If at the same time the first line of the debug screen shows 51454, then you should set wdt_millis_per_interrupt as:
51300000 / 51454 ~= 997
Voltage Value
The second number is the current battery voltage in millivolts. It requires calibrating the constant value in readVcc() against real readings from a multimeter.
This value for your chip should be calculated as:
actual millivolt / debug value * current voltage reference value
e.g. if the current voltage reference value is 1125300, the second line of the debug screen shows 2823 and the multimeter shows 2.81 volts, then you should alter the voltage reference value as:
2810 / 2823 * 1125300 ~= 1120118
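Both worked examples can be reproduced with two small helper functions (the function names are mine, for illustration; lround just picks the nearest integer):

```cpp
#include <cmath>

// Time calibration: actual elapsed milliseconds / WDT interrupt count.
long calibrate_wdt_millis(double elapsed_ms, double interrupt_count)
{
    return lround(elapsed_ms / interrupt_count);
}

// Voltage calibration: scale the reference by (multimeter mV / debug mV).
long calibrate_vref(double actual_mV, double debug_mV, double current_ref)
{
    return lround(actual_mV / debug_mV * current_ref);
}
```

Plugging in the numbers from this step gives 997 for wdt_millis_per_interrupt and 1120118 for the new voltage reference value.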
Step 9: What's Next?
- watch case, it may be a big ring on a finger (should use a smaller battery like CR1220)
- more precise time; try to tune the WDT millis value with the current voltage and temperature
- connect other I2C modules
- research sync time method, GPS, WiFi + internet, BLE + mobile phone and more
66 Discussions
5 months ago
I Made It!
Reply 5 months ago
great
6 months ago
Very good job to cram so much in just 8k memory.
Have you considered to upgrade to the Attiny167 ? 16k memory.
Who supplied the 8 pin programming clip ? Price ?
Question 10 months ago
Hi i need to make a pore man watch (not with OLED) but just flashing hour and minutes
have you any idea or sample code please ?
1 year ago
can it work with arduino pro mini???
Reply 1 year ago
Hi Hanzalaibnzabid, the code depends on tinyWireM library, and it is tailor made for ATtiny core.
1 year ago
Does the ATtiny85 not need a Crystal to work?
Reply 1 year ago
yes, attiny can use internal oscillator only
2 years ago
can we use this microcontroller instead
Reply 2 years ago
no idea, never use it yet. but it have a USB socket, seems similar to Digispark. if true, the boot loader eat up around 2 KB memory, then may not have enough room to fit this program.
2 years ago
How do you modify the code for a 64x48 display?
Reply 2 years ago
just guess it similar to 64x32, but have 2 more page.
2 years ago
You left one pin unused what a waste :-)
Seriously though: great project and looks good too
Reply 2 years ago
can you send me the download files i don't want to pay thank you email; klambie at enutil-energy.com
Reply 2 years ago
which download files? I have no idea what you are talking about
Reply 2 years ago
Not waste, I just reserve it for future development. Actually, in my latest code, I change to used only one pin to connect three or more buttons, and reserve remain more pin for further features, such as serial GPS or BLE.
Reply 2 years ago
good idea though the purist in me would then think....hmmm attiny10 ;-)
Great project, well done
2 years ago
wow, I think i'm going to base my Attiny85 pocket watch on this :)
3 years ago
Nice idea and 'ible! I wonder if this could be compacted even more, maybe an SMD attiny, SMD buttons, and like you mentioned, a smaller battery? I wonder just how tiny this could be made. Also, I wonder if there's green on black versions of this display? Then you could make the font look like an old terminal, so it'd look like an ancient computer. Anyways, neat idea and project, I might make it sometime.
Reply 2 years ago | https://www.instructables.com/id/ATtiny-Watch-Core/ | CC-MAIN-2019-04 | refinedweb | 1,514 | 69.21 |
Opened 3 years ago
Closed 2 years ago
#20401 closed Bug (fixed)
get_for_model queries the wrong database
Description
The django.contrib.contenttypes.models.ContentTypeManager.get_for_model method, if it doesn't find the requested content type in the cache, uses get_or_create to get it from the database. In multi-database environments, get_or_create always queries the db_for_write and therefore I think it is inappropriate for this case, where there is an overwhelming probability that the object requested already exists and therefore no write will occur.
Here is a copy of the method for reference:
def get_for_model(self, model, for_concrete_model=True):
    """
    Returns the ContentType object for a given model, creating the
    ContentType if necessary. Lookups are cached so that subsequent lookups
    for the same model don't hit the database.
    """
    opts = self._get_opts(model, for_concrete_model)
    try:
        ct = self._get_from_cache(opts)
    except KeyError:
        # Load or create the ContentType entry. The smart_text() is
        # needed around opts.verbose_name_raw because name_raw might be a
        # django.utils.functional.__proxy__ object.
        ct, created = self.get_or_create(
            app_label = opts.app_label,
            model = opts.model_name,
            defaults = {'name': smart_text(opts.verbose_name_raw)},
        )
        self._add_to_cache(self.db, ct)
    return ct
Change History (9)
comment:1 Changed 3 years ago by
comment:2 Changed 3 years ago by
comment:3 Changed 3 years ago by
Can you clarify where this is an issue? Under master/slave replication, it doesn't matter if you query the master or the slave; under sharding, your read and write database are the same. What's the use case where this manifests as a problem?
Closing needsinfo; please reopen if you can provide more details of why this is an issue.
comment:4 Changed 3 years ago by
ContentTypeManager.get_for_model() uses the master (the
db_for_write) for a read operation. The load on the master is next to negligible because of the caching. However, master/slave can also be used for a kind of high availability where, if the master goes down, the system can continue to be used in read-only mode using the slave. So this is an issue because it prevents my application from running in read-only mode when the master goes down.
comment:5 Changed 3 years ago by
comment:6 Changed 3 years ago by
I improved the patch with the help of bmispelon. However, it turns out there's (most probably) no use case after all, for the following reasons:
- If the system is put in read-only mode, it is likely that the Django processes will need to be reconfigured and restarted. Reconfiguration would probably alter the
DATABASESsetting so that the (former)
db_for_writewouldn't be there at all.
- If the system is put in read-only mode without reconfiguring and restarting Django (we can do that because our Django code checks for the existence of a file in order to determine if it's in read-only mode), it means that the
db_for_writeis still up (or at least that it can accept connections), otherwise the Django processes would try to connect and cause an error.
The patch could be useful if there is a chance that the
db_for_write somehow becomes unable to respond to queries, but still allows connections. It's really an edge case, if it exists at all.
I would certainly not do the work I did in this patch had I realized that, after all, it (most probably) doesn't affect us. However, now that the patch is ready, the question is whether it should be accepted. The argument for accepting it is that the
db_for_read, according to the documentation, "suggest[s] the database that should be used for read operations"; so I can argue that if the database router suggests something, we should be following the suggestion, regardless of the reason behind the suggestion. So I think that the patch makes the behaviour cleaner and more conformant.
comment:7 Changed 3 years ago by
comment:8 Changed 3 years ago by
I think this patch should be included. I can't see any problems with including it, and it does solve a small problem.
The following is some code taken from a ReadonlyDatabaseRouter (my particular case includes checking specific app-labels also) that we use during database migrations. The vast majority of calls to get_for_model should be able to fetch from the database without requiring a create (I would imagine, correct me if I'm wrong).
def db_for_write(self, model, **hints): is_read_only = getattr(settings, 'SITE_READ_ONLY', False) if is_read_only: raise DatabaseWriteDenied return None
This code will fail currently if get_for_model is called and there is an instance available to the database. Admittedly it'd be a pretty obscure edge case, but this read only router was adapted from something else I found online so there are definitely 2 users out there that this could possibly accommodate.
Created pull request | https://code.djangoproject.com/ticket/20401 | CC-MAIN-2016-44 | refinedweb | 797 | 52.19 |
This is not a question about C/C++ as such, but I couldn't think of where else to put it.
I would like to know how to call either gcc or g++ and specifying that all error messages that are to occur when compiling some source code are to be written to a file. for example..
// HelloWorld.cpp #include <iostream> using namespace std; int main() { cout<<"Hello World\n"; this_is_my_code_error; }
I have tried to call g++ like this to get the error..
g++ "C:\HelloWorld.cpp" > "C:\output.txt"
the file output.txt was created but it did not contain anything, i.e. the error the compiler found in the source code was not written to that file.
I'm running Windows XP.
Thanks. | https://www.daniweb.com/programming/software-development/threads/132266/making-gcc-g-output-errors-to-a-file | CC-MAIN-2019-04 | refinedweb | 125 | 80.51 |
Crossing Problem
Hi
I am using two cross indicators like this
but my qustion is why it find crossing with delay !
my code is this for Strategy
import backtrader as bt import numpy as np class BBANDS(bt.Strategy): params = (('fast', 9), ('slow', 26)) def log(self, txt, dt=None): dt = dt or self.datas[0].datetime.date(0) print('%s, %s' % (dt.isoformat(), txt)) def __init__(self): self.netprices = [] self.periods = [] self.period = None self.Log = False self.dataclose = self.datas[0].close self.order = None self.buyprice = None self.buycomm = None self.buytime = None self.bbands_mid = bt.indicators.BollingerBands(self.data).lines.mid self.bbands_bot = bt.indicators.BollingerBands(self.data).lines.bot self.bbands_cross_bot = bt.indicators.CrossOver(self.data, self.bbands_bot, plotname='Cross Bottom Line') self.bbands_cross_mid = bt.indicators.CrossOver(self.data, self.bbands_mid, plotname='Cross Mid Line') def next(self): if self.Log: self.log('Close, %.2f' % self.dataclose[0]) if not self.position: if self.bbands_cross_bot > 0: self.order = self.buy() else: if self.bbands_cross_mid > 0: self.order = self.sell()
it has one day delay for finding cross and one day delay for buying that !!!!
is there any solution for that ?
you use one day data, so the candle will be available the next day. the result is as expected. you would need to replay intraday data to watch for the cross to create orders.
But why buy signal create after indicator cross? is there any solution to do that!
if you want to look at a candle from yesterday, then it is finished at the end of the day. This means you can look at that candle when it is available, which is the next day. Then the cross is next day.
If you want to look for the cross intraday, you would need data with a smaller timeframe. a candle is available after the period is over (when the candle is fully formed).
so you could use intraday data and replay it to 1day timeframe | https://community.backtrader.com/topic/2583/crossing-problem | CC-MAIN-2020-50 | refinedweb | 331 | 62.75 |
If you import from it, you must mention it as a dependency in your setup.py. That’s sane advice.
Problem: when do you notice it? When do you grep your files for
import?
Some imports work just fine as another dependency depended on the thing you’re
importing. Still, it is safer to also include that dependency if you use it
directly: if the dependencies get refactored (for instance the large-scale
cleanup in zope’s libraries), the indirect dependency might not help you out
anymore.
Solution: z3c.dependencychecker. Install it and run “dependencychecker” inside your python project. It will report. (Since version 0.3).
It is not perfect yet.
from zope import interface is treated as a
requirement for
zope instead of the actual
zope.interface
package. (Note: support for ‘from’ added in version 0.4!) And before you
remove something, please grep around a bit. Doctests also aren’t checked yet.
(Note: support for doctests added in version 0.5!)
It is already helping me in cleaning up grok’s dependencies. Grok uses the
zope tool kit’s libraries and they’ve seen some major dependency refactoring.
Grok 1.1 is supposed to reap the fruits of that. But for that to happen,
grok’s own requirements need to be right, of course. So I’m now going through
grok and
grok): | https://reinout.vanrees.org/weblog/2009/12/10/z3c_dependencychecker.html | CC-MAIN-2022-21 | refinedweb | 225 | 69.38 |
Is there anyway to link matlab directly into unity? I have external sensors, that are then run through data collection and amplification, processed by matlab and output information that I would like to be received in real time by unity to control elements in the 3d environment. Is this possible?
i don't know exactly but i have an idea u can make your matlab to write your information in a txt file then make your c# script in unity read this txt file and make the script read the same file say every 10 sec (estimation of the time between receiving information -writing them in text file-reading them by unity). of-curse u should make your matlab code rewrite the update information in the same text file with the same name
You could try writing a simple c# dll to serve as glue. There's an article (requires matlab acct) on how to go about using it within a matlab project.
Unity might then reference the same dll, subscribing to events fired by the matlab side. Have not tried this personally.
What a strange mix :)
I would suggest trying to use UDP sockets. For example, your Unity program opens a UDP server socket in the port 55555. Matlab sends the measures as UDP frames to the port 55555. It's quite simple if you know about sockets. It's ok if both programs are in the same computer, you can use localhost as the destination address.
have you solved the issue? i am trying to handle with the same stuff currently.
Answer by cap_L
·
Oct 30, 2018 at 01:28 PM
Hello,
Yes I also want to know if we can use Unity with MATLAB. There are few APIs that only support MATLAB. Did you find any solution @srcarter camera position relative to a specific target.
1
Answer
Unity keeps reverting to visual studio as source editor
0
Answers
VSC Code "The type or namespace could not be found" - but it't not really true!
1
Answer
External Script Editor Argument Variable
1
Answer
About the Visual C# Express Editor limitation
6
Answers | https://answers.unity.com/questions/1035694/matlab-with-unity.html | CC-MAIN-2020-29 | refinedweb | 355 | 70.73 |
This is a simple illustration of a django view having a potential race condition:
# myapp/views.py from django.contrib.auth.models import User from my_libs import calculate_points def add_points(request): user = request.user user.points += calculate_points(user) user.save()
The race condition ought to be fairly apparent: A person could make this request two times, and also the application may potentially execute
user = request.user concurrently, leading to among the demands to override another.
Imagine that the function
calculate_points is comparatively complicated, and makes information according to a myriad of strange stuff that can't be placed in one
update and could be difficult to set up a saved procedure.
Here is my question: What type of securing systems are for sale to django, to cope with situations such as this?
By Django 1.1 you should use the ORM's F() expressions to resolve this unique problem. For additional particulars begin to see the documentation:
There are many methods to single-thread this type of factor.
One standard approach is Update First. You need to do an update that will seize a unique lock around the row then do your projects and lastly commit the modification. With this to operate, you have to bypass the ORM's caching.
Another standard approach is to possess a separate, single-threaded application server that isolates the net transactions in the complex calculation.
Your internet application can produce a queue of scoring demands, spawn another process, after which write the scoring demands for this queue. The spawn may be put in Django's
urls.pytherefore it happens on web-application startup. Or it may be put in separate
manage.pyadmin script. Or it is possible "when neededInch once the first scoring request is attempted.
You may also produce a separate WSGI-flavored web server using Werkzeug which accepts WS demands via urllib2. For those who have just one port number with this server, demands are queued by TCP/IP. In case your WSGI handler has one thread, then, you've accomplished serialized single-threads. This really is a little more scalable, because the scoring engine is really a WS request and may be run anywhere.
Another approach would be to possess some other resource that needs to be acquired and held to complete the calculation.
A Singleton object within the database. Just one row inside a unique table could be up-to-date having a session ID to get control update with session ID of
Noneto produce control. The fundamental update needs to incorporate a
WHERE SESSION_ID IS NONEfilter to make sure the update fails once the lock is held by another person. This really is interesting since it is naturally race-free -- it is a single update -- not really a Choose-UPDATE sequence.
An outdoor-variety semaphore may be used outdoors the database. Queues (generally) are simpler to utilize than the usual low-level semaphore.
You could utilize transactions to encapsulate your request. In the per-request level it appears such as this:
from django.db import transaction @transaction.autocommit def add_points(request): ...
This shoudl be adequate should you read increase the consumer data inside the request.
When the user may also edit other fields within the form after which save this data, you must do something similar to this:
Keep last modified time stamp within the request. Before saving the brand new data, determine if it's still exactly the same. Otherwise there's a race condition and you will display a note.
This might be oversimplifying your circumstances, but how about only a JavaScript link alternative? Quite simply once the user clicks the hyperlink or button wrap the request inside a JavaScript function which immediately hinders / "greys out" the hyperlink and replaces the written text with "Loading..." or "Posting request..." info or something like that similar. Would that meet your needs? | http://codeblow.com/questions/race-conditions-in-django/ | CC-MAIN-2017-51 | refinedweb | 642 | 57.47 |
I came upon a very interesting and cryptic snippet of code somewhere nameless, and I can’t decide if it is brilliant or completely insane. It is a very obscure way of accomplishing the required task, but it’s around four times faster than the alternatives I’ve tried, so I have to admit that it’s not completely without merit. Still, I cringe a bit at seeing it, since it packs around four unusual Python concepts in almost as many characters.
This is the snippet in question:
def GetContourPoints(self, array): """Parses an array of xyz points and returns a array of point dictionaries.""" return zip(*[iter(array)]*3)
As the docstring says, what this function does is that it parses an iterable of (x, y, z) points and returns an array of point dictionaries. Only, it doesn’t really. It takes an iterable of points like so:
(x1, y1, z1, x2, y2, z2, ...)
and returns an iterable of 3-tuples of groupped points:
((x1, y1, z1), (x2, y2, z2), ...)
So how does it do this? Let’s analyze it. I just started from the outermost part and progressed inwards, like a worm burrowing in a delicious chocolate cake, only less disturbing. Since the function returns an iterable of 3-tuples,
zip must accept three iterables in its command line, like so:
zip((x1, x2, x3, ...), (y1, y2, y3, ...), (z1, z2, z3, ...)) # So that must be what this is: >>> [iter(array)] * 3 (x1, x2, x3, ...), (y1, y2, y3, ...), (z1, z2, z3, ...)
The asterisk in the function call unpacks the iterable (the list, in this case), so we’re pretty much at the meat of this curious function. However,
array, our input, is just an iterable of
(x, y, z) points, so how can it be transformed to three iterables of one coordinate each?
Well, the magic is here:
[iter(array)] * 3
What does this produce? One’s first thought would be that it produces a list of three iterators, which, when evaluated, would return something like:
(x1, y1, z1, x2, ...), (x1, y1, z1, x2, ...), (x1, y1, z1, x2, ...)
i.e. the original sequence three times, which is nothing like what we need. The keen eye, however, will notice that this is not three iterators, but it is the same iterator, three times:
>>> print repr([iter(array)] * 3) [<listiterator object at 0x7fd2db258f90>, <listiterator object at 0x7fd2db258f90>, <listiterator object at 0x7fd2db258f90>]
As you can see, all the iterators have the same address, which means they are the same object. Thus, when
zip tries to iterate over one array each time, the iterator gets advanced and returns the next element in a row, so what actually gets returned is what we needed (i.e. the tuple of three tuples).
This is a fantastic abuse of the… well… everything, and I am very impressed at how someone could have come up with this. I don’t really like the fact that it relies in an implementation detail (the order the
zip function iterates over the arrays) to work, but it does, and it’s much faster than the usual alternatives I tried, so I think I like it. I definitely would have expected a comment on it, though, rather than leaving people to read the bones in an attempt to divine what it does.
Here are some timings, which my friend Marc graciously provided:
In [43]: %timeit [(arr[3*x], arr[3*x+1], arr[3*x+2]) for x in range(len(arr)/3)] 10000 loops, best of 3: 42.8 us per loop In [44]: %timeit numpy.reshape(arr, (-1, 3)) 10000 loops, best of 3: 59.2 us per loop In [45]: %timeit zip(*([iter(arr)]*3)) 100000 loops, best of 3: 11.2 us per loop
As you can see, this way of doing things is four to five times faster than the alternatives, for reasons unknown and unknowable (although I suspect that what takes numpy that long to do it is transferring the data in and out of it).
If you have any such tricks of your own, I’d appreciate if you could post them in the comments, as I’m always interested in reading and thinking about them. Thanks!
UPDATE: In JuanManuel Gimeno Illa’s comment below, he mentions that this is actually the officially sanctioned way to cluster an iterable into n-length groups by the Python documentation on zip(). That’s very interesting, and clarifies this whole method.
UPDATE 2: There is excellent discussion in this Hacker News thread. | https://www.stavros.io/posts/brilliant-or-insane-code/ | CC-MAIN-2020-29 | refinedweb | 755 | 69.92 |
SCDJWS Study Guide: SAAJ
Printer-friendly version |
Mail this to a friend
Using SOAP Faults
In this section, you will see how to use the API for creating and accessing a SOAP Fault element in an XML message.
Overview of SOAP Faults
If you send a message that was not successful for some reason, you may get back a response containing a SOAP Fault element that gives you status information, error information, or both. There can be only one SOAP Fault element in a message, and it must be an entry in the SOAP Body. Further, if there is a SOAP Fault element in the SOAP Body, there can be no other elements in the SOAP Body. This means that when you add a SOAP Fault element, you have effectively completed the construction of the SOAP Body. The SOAP 1.1 specification defines only one Body entry, which is the SOAP Fault element. Of course, the SOAP Body may contain other kinds of Body entries, but the SOAP Fault element is the only one that has been defined. the way previous section, an actor that cannot process a header that has a mustUnderstand attribute with a value of true must return a SOAP fault to the sender.
A SOAPFault object contains the following elements:
· A fault code -- always required
The fault code must be a fully qualified name, which means that it must contain a prefix followed by a local name. The SOAP 1.1 specification defines a set of fault code local name values in section 4.4.1, which a developer may extend to cover other problems. The default fault code local names defined in the specification relate to the SAAJ API as follows:
· VersionMismatch -- the namespace for a SOAPEnvelope object was invalid
· MustUnderstand -- an immediate child element of a SOAPHeader object had its mustUnderstand attribute set to true, and the processing party did not understand the element or did not obey it
· Client -- the SOAPMessage object was not formed correctly or did not contain the information needed to succeed
· Server -- the SOAPMessage object could not be processed because of a processing error, not because of a problem with the message itself
· A fault string -- always required
A human-readable explanation of the fault
· A section The Actor Attribute.
· A.
Creating and Populating a SOAPFault Object
You have already seen how to add content to a SOAPBody object; this section will walk both code, the method setFaultCode creates a faultcode element, adds it to fault, and adds a Text node object fault, created in the previous lines of code, indicates that the cause of the problem is an unavailable server and that the actor at is having the problem. If the message were being routed only to its ultimate destination, there would have been no need for setting a fault actor. Also note that fault does not have a Detail object because it does not relate to the SOAPBody object.
The following code fragment creates a SOAPFault object that includes a Detail object. Note that a SOAPFault object may have only one Detail object, which is simply a container for DetailEntry objects, but the Detail object may have multiple DetailEntry objects. The Detail object in the following lines of code has two DetailEntry objects");
Retrieving Fault Information if the SOAPBody object contains a SOAPFault object. If so,();
Name code = newFault.getFaultCodeAsName();
String string = newFault.getFaultString();
String actor = newFault.getFaultActor();
}
Next the code prints out the values it just retrieved. Not all messages are required to have a fault actor, so the code tests to see if there is one. Testing whether the variable actor is null works because the method getFaultActor returns null if a fault actor has not been set.
System.out.println("SOAP fault contains: ");
System.out.println(" Fault code = " + code.getQualifiedName()); of the DetailEntry objects in newDetail. Not all SOAPFault objects are required to have a Detail object, so the code tests to see whether newDetail is null. If it is not, the code prints out. | http://xyzws.com/scdjws/SGS31/6 | CC-MAIN-2018-39 | refinedweb | 671 | 58.42 |
LCM HCF of Two Numbers coderinme
LCM HCF of Two Numbers coderinme
The full form of LCM is Least Common Multiple. What is it? The least number which is exactly divisible by each one of the given numbers is called their L.C.M. The full form of HCF is Highest Common Factor. Its is also known as Greatest Common Measure (G.C.M.) or Greatest Common Divisor (G.C.D.). Now what is this term? The H.C.F. of two or more than two numbers is the greatest number that divides each of them exactly.
C program to find the LCM (lowest common divisor) and HCF (Highest Common Factor) of entered two numbers.
#include <stdio.h> int main() { int a,b, max, HCF,LCM; printf("Enter two positive integers: "); scanf("%d %d", &a,&b); max = ( a < b ) ? a : b; while(1) { if( max % a == 0 && max % b == 0 ) { LCM = max; printf("LCM: %d\n",LCM); HCF = ( a * b ) / LCM; printf("HCF: %d",HCF); break; } max++; } | https://coderinme.com/lcm-hcf-of-two-numbers-coderinme/ | CC-MAIN-2018-39 | refinedweb | 167 | 74.79 |
Test with Storybook
Set Test component
First of all, we need to set
test.tsx component that we want to apply to our component.
import Button from "./Button";
import { render, screen } from "@testing-library/react";test("renders learn react link", () => {
render(<Button />);
expect(screen.getByRole("button")).toHaveTextContent(/Hello /i);
});
So this is a normal test libray setting in React. But we want to use this with our Storybook setting. Its easy to apply first we can just add our Storybook component in test component.
import { RedButton } from "./Button.stories";
import { render, screen } from "@testing-library/react";test("renders learn react link", () => {render(<RedButton {...RedButton.args} />);
expect(screen.getByRole("button")).toHaveTextContent(/Red/i);
});
After we made this, we can check test library.
Test Command
Test command is also easy. Only thing you need to do is just type this.
yarn test
But, for better Test experience we need to add some lines in
package.json.
"jest": {
"collectCoverageFrom": [
"<rootDir>/src/components/**/*.{ts,tsx}",
"!**/node_modules/**",
"!**/*.stories.{ts,tsx}"
]
},
Just add upper code inside in your
package.json. All this code mean that ㅜot testing unnecessary elements like node modules, and stories.
And also i want to add some new script
"test:coverage": "react-scripts test --watchAll=false --coverage",
and after this you can run
yarn test:coverage . Then we can see like this.
But you can see that our test element is not fill 100%.
So let’s fix that.
Make test 100%
First we need to check
index.html that in Coverage folder. I will express the position in a tree structure.
react App
|_ coverage
|_ Icov-report
|_ index.html
in there you can see what we miss.
we can easily find that we miss
sm and
lg type button. So let’s add that in our
Button.test.tsx.
test("should render SmButton", () => {
render(<SmButton {...SmButton.args} />);
expect(screen.getByRole("button")).toHaveTextContent(/Small Button/i);
});test("should render LgButton", () => {
render(<LgButton {...LgButton.args} />);
expect(screen.getByRole("button")).toHaveTextContent(/Large Button/i);
});
If we test after this we can find that we pass all test element.
How to test custom styled component
If you want to test specific style customized then you can add other test option like below.
test("should render RedButton", () => {
render(<RedButton {...RedButton.args} />);
expect(screen.getByRole("button")).toHaveTextContent(/Red/i);
expect(screen.getByRole("button")).toHaveStyle("backgroundColor: red");
});
with
expectmethod inside of
test you can define what we want to test in the component. | https://materokatti.medium.com/test-with-storybook-1665508a3346?source=read_next_recirc---------2---------------------90f9fbb3_01a9_4748_abed_d275c19c7a0a------- | CC-MAIN-2022-40 | refinedweb | 407 | 52.76 |
On Tue, Dec 20, 2005 at 12:11:37AM -0500, Glenn Maynard wrote: > > Yeah; vi not behaving like vi by default seems like a showstopper. > "Can't make vim act like vi" might be a showstopper. "The default > configuration makes vim not act like vi" isn't a showstopper--it's > trivial to change. Geez, I hate arguments about defaults. If it's trivial to change, that's great; but until the defaults are changed it's still a showstopper. > I guess there are two competing goals here: acting like vi by default, > for the people in a time capsule, *sigh* > and acting like vim by default, to > show off vim's cool features. I wonder if there's a sensible way to > do both, eg. argv[0] for "vi" and "vim". The following patch lets you have a /usr/share/vim/virc (which should be a symlink to /etc, like /usr/share/vim/vimrc) to specify different behaviour when vim's invoked as vi instead of vim. --- vim-6.4.old/vim64/src/main.c 2005-02-15 23:09:15.000000000 +1000 +++ vim-6.4/vim64/src/main.c 2005-12-20 16:36:49.000000000 +1000 @@ -1363,6 +1363,10 @@ * Get system wide defaults, if the file name is defined. */ #ifdef SYS_VIMRC_FILE +# ifdef SYS_VIM_VIRC_FILE + if (STRCMP(initstr, "vi") != 0 || + do_source((char_u *)SYS_VIM_VIRC_FILE, FALSE, FALSE) == FAIL) +# endif (void)do_source((char_u *)SYS_VIMRC_FILE, FALSE, FALSE); #endif --- vim-6.4.old/vim64/src/os_unix.h 2003-11-10 19:53:44.000000000 +1000 +++ vim-6.4/vim64/src/os_unix.h 2005-12-20 16:14:07.000000000 +1000 @@ -233,6 +233,9 @@ #ifndef SYS_VIMRC_FILE # define SYS_VIMRC_FILE "$VIM/vimrc" #endif +#ifndef SYS_VIM_VIRC_FILE +# define SYS_VIM_VIRC_FILE "$VIM/virc" +#endif #ifndef SYS_GVIMRC_FILE # define SYS_GVIMRC_FILE "$VIM/gvimrc" #endif Cheers, aj
Attachment:
signature.asc
Description: Digital signature | https://lists.debian.org/debian-devel/2005/12/msg00955.html | CC-MAIN-2015-48 | refinedweb | 300 | 60.11 |
Welcome to part 2 of the web scraping with Beautiful Soup 4 tutorial mini-series. In this tutorial, we're going to talk about navigating source code to get just the slice of data we want.
We'll begin with the same starting code:
import bs4 as bs import urllib.request source = urllib.request.urlopen('').read() soup = bs.BeautifulSoup(source,'lxml')
Now, rather than working with the entire soup, we can specify a new Beautiful Soup object. An example might be:
nav = soup.nav
Next, we can grab the links from just the nav bar:
for url in nav.find_all('a'): print(url.get('href'))
In this case, we're grabbing the first nav tags that we can find (the navigation bar). You could also go for
soup.body to get the body section, then grab the
.text from there:
body = soup.body for paragraph in body.find_all('p'): print(paragraph.text)
Finally, sometimes there might be multiple tags with the same names, but different classes, and you might want to grab information from a specific tag with a specific class. For example, our page that we're working with has a
div tag with the class of
"body". We can work with this data like so:
for div in soup.find_all('div', class_='body'): print(div.text)
Note the
class_='body', which allows us to work with a specific class of tag.
In the next tutorial, we're going to cover working with tables and XML. | https://pythonprogramming.net/navigating-pages-scraping-parsing-beautiful-soup-tutorial/ | CC-MAIN-2019-26 | refinedweb | 247 | 68.36 |
Introduction
In this post, we will introduce how to use the micro:bit and the DFRobot BOSON starter kit for micro:bit to build a Lego smart house.
There are three small projects in this topic: the first is an LED light that automatically adjusts its brightness according to the ambient light; the second is a button-controlled ceiling fan; and the third is an earthquake-detecting alarm.
Preparation
Micro:bit and BOSON Starter Kit
In this article, we use the BBC Micro:bit as the controller of the smart house.
To handle the electronic components in our smart house, i.e. the sensors and actuators, we use the DFRobot BOSON starter kit and insert the micro:bit into its extension board so that we can easily connect all the BOSON modules of this project.
Note: if you don't have this starter kit, you can use compatible components and connect them to the micro:bit pins with the help of a breadboard and wires.
For more information about the BOSON starter kit, you may refer to this link.
And here is the list of all BOSON bricks we need:
1.Light Sensor Module
2.LED Module
3.Button Module
4.Fan Module
5.Tilt Sensor Module
6.Buzzer Module
Mu editor
Next, to write the MicroPython code for the micro:bit, please download the Mu Editor from here. This is its main interface.
We also recommend having a basic understanding of the MicroPython API for the micro:bit; there are many tutorials and samples for the micro:bit Python module.
Hardware
The following pictures show how we built our house with Lego bricks.
Besides the horizontally placed BOSON bricks, some hanging bricks require an additional fastening method rather than simply being attached to the Lego pieces.
We can use a screw to secure a BOSON base plate to the Lego pieces, and the bricks then adhere to the base magnetically.
Samples
Now let's launch the Mu Editor and write the MicroPython programs!
1. Night Light
In the first sample, we want to make an LED light that adjusts its brightness by itself; even more, we want it to respond properly to the ambient light level.
So we attach a light sensor to the P1 port of the micro:bit extension board so the micro:bit can read the light level, and the LED is connected to P2 of the extension board.
First, we use the following script to measure the maximum and minimum light sensor values.
To make the measurement more precise, we record light sensor readings over a short period and then compute their average.
night-light-measure.py
from microbit import *

light_sensor = 0
counter = 0
timer = running_time()

# Sample the light sensor for 3 seconds, then average the readings.
while (running_time() - timer) <= 3 * 1000:
    light_sensor += pin1.read_analog()
    counter += 1

light_sensor /= counter
print("mean light sensor value: ", light_sensor)
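The averaging arithmetic itself can be sanity-checked on a desktop Python interpreter before running it on the device; the readings below are made-up 10-bit ADC values standing in for `pin1.read_analog()`:

```python
# Simulated 10-bit ADC readings (0-1023), standing in for pin1.read_analog().
readings = [962, 970, 965, 968, 961, 972, 966]

total = 0
counter = 0
for value in readings:
    total += value
    counter += 1

mean = total / counter
print("mean light sensor value:", mean)
```

Averaging several samples like this smooths out the flicker you would see if you used a single raw reading.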
Then open a REPL in the Mu Editor and run this program both with and without ambient light.
You will obtain the mean value in these two cases, for example 966.4033 and 14.81614 in the picture below.
Now we can put these light and dark values into the following code, and the micro:bit will adjust its analog output proportionally between these upper and lower bounds.
night-light.py
from microbit import *

light = 966.4033
dark = 14.81614
while True:
    light_sensor = pin1.read_analog()
    LED = int((light - light_sensor) / (light - dark) * 1023)
    if LED > 1023:
        LED = 1023
    elif LED < 0:
        LED = 0
    print("LED lightness: ", LED)
    pin2.set_analog_period(1)
    pin2.write_analog(LED)
    sleep(500)  # micro:bit sleep() takes milliseconds
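The proportional mapping used above can be factored into a small pure function and checked on a desktop Python interpreter (the dark/light calibration numbers here are the ones measured earlier):

```python
def light_to_led(reading, dark=14.81614, light=966.4033):
    """Map an ambient light reading (0-1023) to an LED duty value.

    Bright ambient light gives a low LED output; darkness gives full
    output. The result is clamped to the valid analog range [0, 1023].
    """
    led = int((light - reading) / (light - dark) * 1023)
    return max(0, min(1023, led))

print(light_to_led(966.4033))  # full ambient light -> 0
print(light_to_led(14.81614))  # darkness -> 1023
```

On the device, the same expression drives pin2.write_analog() each loop iteration.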
Video:
2. Ceiling Fan
In the second demo, we will build a simple ceiling fan that can be switched on and off with a button.
Please connect a BOSON button module to P12 of the extension board and a BOSON DC fan module to P16.
We use a boolean variable, switch, to store the state of the switch; it toggles to the opposite state when the user presses and releases the button.
Note that it is better to wait for a short time (i.e. sleep(500)) while the user is switching the fan, so that we can ensure the button is pressed only once and fully released.
ceiling-fan.py
from microbit import *

switch = False
while True:
    if pin12.read_digital() == 1:
        while pin12.read_digital() == 1:  # wait for the button to be released
            sleep(500)
        switch = not switch
        if switch:
            pin16.write_digital(1)
            print("Turn ON")
        else:
            pin16.write_digital(0)
            print("Turn OFF")
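The press-and-release toggle logic can be simulated off-device with plain Python; here the button is replayed as a list of digital samples (1 = pressed, 0 = released):

```python
def replay_button(samples, initial=False):
    """Replay a sequence of button reads and return the final switch state.

    The switch toggles once per full press-and-release cycle, mirroring
    the inner loop on the micro:bit that waits for the button release.
    """
    switch = initial
    pressed = False
    for s in samples:
        if s == 1:
            pressed = True
        elif pressed:  # falling edge: the press has ended
            switch = not switch
            pressed = False
    return switch

print(replay_button([0, 1, 1, 0, 0]))     # one press  -> True
print(replay_button([0, 1, 0, 1, 1, 0]))  # two presses -> False
```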
Video:
3. Earthquake Alarm
The third demo is quite interesting: we are going to make an earthquake alarm for the house.
To tell whether the house is shaking, we use a tilt sensor and monitor changes in its reading at a very high frequency.
If the house is not shaking, the change will be identically zero, so we compute the average and check whether it is zero to detect an earthquake. If an earthquake is detected, the program plays a tune to notify people.
Please connect a BOSON tilt module to P8 (the pin read in the code below) and the buzzer's +/- pins to the micro:bit's P0 and GND pins. If you use the extension board, just plug an earphone or small speaker into its audio jack (already connected to P0).
Note that this demo uses the built-in music library of the micro:bit, which outputs audio on P0 by default.
You may also define custom notes and play them, as in the example in this document.
alarm.py
from microbit import *
import music

def detect_shake():
    old_tilt_status = pin8.read_digital()
    sleep(100)  # micro:bit sleep() takes milliseconds
    new_tilt_status = pin8.read_digital()
    return abs(new_tilt_status - old_tilt_status)

while True:
    counter = 1  # one sample is taken before the timing loop
    timer = running_time()
    shake = detect_shake()
    while (running_time() - timer) <= 500:
        shake += detect_shake()
        counter += 1
    status = shake / counter
    print(status)
    if status != 0:  # "is" compares identity, not value, so use != here
        print("Alarm!!!")
        music.play(music.DADADADUM)
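The averaging check itself is pure logic and can be tested on a desktop interpreter; here tilt_changes stands for the absolute tilt differences collected over the sampling window:

```python
def earthquake_detected(tilt_changes):
    """Return True if the mean of the sampled tilt changes is nonzero.

    With no shaking every sample is 0, so the mean is exactly 0; a single
    observed change makes the mean positive and triggers the alarm.
    """
    return sum(tilt_changes) / len(tilt_changes) != 0

print(earthquake_detected([0, 0, 0, 0]))  # still house      -> False
print(earthquake_detected([0, 1, 0, 0]))  # one tilt change  -> True
```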
Video: | https://microbit.hackster.io/73125/micro-bit-smart-home-light-fan-and-alarm-system-bd1e6f | CC-MAIN-2018-34 | refinedweb | 970 | 64.61 |
PnP libraries were created to help developers easily learn and implement logic through client-side programming.
Patterns and Practices (PnP)
Let’s directly jump into the implementation section. Please follow the below steps to get it done.
Step 1
Build your SharePoint Framework client-side web part and integrate it with Angular 4.
Step 2
Import the following package in your package.json file to use PnP graph and PnP SP in your webpart.
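The package list itself is shown only as a screenshot in the original post; as a sketch (the version numbers below are placeholders, not taken from the article), the relevant dependencies entries in package.json for PnPjs v1 look like:

```json
"dependencies": {
  "@pnp/common": "^1.2.1",
  "@pnp/graph": "^1.2.1",
  "@pnp/logging": "^1.2.1",
  "@pnp/odata": "^1.2.1",
  "@pnp/sp": "^1.2.1"
}
```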
Step 3
After step 2, go to the Node.js command prompt, make sure you are pointing at the project directory, and run the below command to install the packages.
npm install
Step 4
Import the following code to use PnP Graph in your webpart. import { graph } from "@pnp/graph";
Import the following code to use PnP SharePoint in your webpart.
import { sp } from "@pnp/sp";
Step 5
To set up the PnP Graph in your webpart, add the following code in your webpart.
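The setup code is not visible in the extracted article; with PnPjs v1 it is typically done in the web part's onInit method. The following is an untested sketch that assumes an SPFx project with the @pnp packages installed:

```typescript
import { graph } from "@pnp/graph";
import { sp } from "@pnp/sp";

// Inside your web part class (extends BaseClientSideWebPart):
protected onInit(): Promise<void> {
  return super.onInit().then(() => {
    // Hand the SPFx context to PnPjs so it can acquire tokens and resolve URLs
    graph.setup({ spfxContext: this.context });
    sp.setup({ spfxContext: this.context });
  });
}
```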
The next step is to mention the required permissions in the package-solution.json file.
webApiPermissionRequests is an array of web API permission request items. Every item in webApiPermissionRequests defines the resource and the scope of the permission request.
Resource: defines the resource for which the user wants to configure the permission request. The resource value to access Microsoft Graph is "Microsoft Graph".
Scope: defines the name of the permission that the user wants to access in the resource. The permission names and IDs are explained in the official Graph API documentation.
Please find the sample code snippet below,
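The snippet was an image in the original article; a hedged reconstruction of the package-solution.json section follows (the exact scopes requested in the article are not recoverable, so these are illustrative):

```json
"webApiPermissionRequests": [
  { "resource": "Microsoft Graph", "scope": "User.ReadBasic.All" },
  { "resource": "Microsoft Graph", "scope": "Group.Read.All" },
  { "resource": "Microsoft Graph", "scope": "Group.ReadWrite.All" }
]
```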
Go to the Node.js command prompt, make sure you are pointing at the project directory, and run the below command to verify that the solution builds correctly.
gulp build
Run the below command to bundle the solution; we are using --ship to make sure that all our node_modules dependencies are included in the bundle file.
gulp bundle --ship
Use the following command to package the solution.
gulp package-solution --ship
Go to the App Catalog in your SharePoint tenant and upload the .sppkg file generated in Step 9.
Once you upload the file to the app catalog, you will be prompted with a popup like the one below.
During installation of your app, you can find the highlighted content in the deploy window, which alerts you to approve the pending permissions for Microsoft Graph.
Step 11
After the installation process, go to the SharePoint admin center and select the "Try the preview" option.
In the new modern SharePoint admin preview page, there is a new menu called API management to manage app permissions in your tenant.
In step 6, we requested 5 permissions in package-solution.json; you can find those permission requests on the API management screen below. An administrator should approve the requested permission scopes on the resource to grant apps access to those resources.
Now we are all set to use Microsoft Graph in SPFx webparts. Below I have mentioned some code samples for real-time scenarios which will help you get started with Microsoft Graph in SPFx.
Note: no webApiPermissionRequests entries are needed to access SharePoint resources; webApiPermissionRequests is needed only to access Microsoft Graph.
Import the desired component code and HTML code in your webpart as given in the following examples.
The following examples are based on Microsoft Graph using PnP Graph.
Get groups in your organization
The below component and HTML code are used to get all the groups in your organization.
Component code
HTML code
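Since the component code is shown only as an image in the original, here is an untested sketch of what retrieving all groups with the PnP Graph v1 API can look like (method and variable names are illustrative):

```typescript
import { graph } from "@pnp/graph";

// Fetch all groups in the tenant and log their display names.
// Assumes graph.setup({ spfxContext: this.context }) ran in onInit.
public getGroups(): void {
  graph.groups.get().then(groups => {
    groups.forEach(g => console.log(g.displayName));
  }).catch(err => console.error("Failed to load groups", err));
}
```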
Response
Add groups in your organization
The below component is used to add a group in your organization.
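The original component is again an image; as an untested sketch of the PnP Graph v1 group-creation call (the group name and mail nickname below are made-up examples):

```typescript
import { graph, GroupType } from "@pnp/graph";

// Create a new Office 365 group.
public addGroup(): void {
  graph.groups.add("Demo Group", "demogroup", GroupType.Office365)
    .then(result => console.log("Created group", result.data.id))
    .catch(err => console.error("Failed to create group", err));
}
```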
The following examples are based on SharePoint using PnP.
Get SiteUsers in your organization
The below component is used to get all SiteUsers in your organization.
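The component itself was an image; an untested sketch of listing site users with the @pnp/sp v1 API follows:

```typescript
import { sp } from "@pnp/sp";

// List all users of the current site.
// Assumes sp.setup({ spfxContext: this.context }) ran in onInit.
public getSiteUsers(): void {
  sp.web.siteUsers.get().then(users => {
    users.forEach(u => console.log(u.Title, u.Email));
  }).catch(err => console.error("Failed to load site users", err));
}
```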
Add SiteUser in your organization
The below component is used to add a SiteUser in your organization.
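As with the other samples the code is only a screenshot in the original; one common way to add a user with @pnp/sp v1 is ensureUser, sketched below (the login name is a made-up example):

```typescript
import { sp } from "@pnp/sp";

// Ensure a user exists in the site (adds them if not already present).
public addSiteUser(): void {
  sp.web.ensureUser("i:0#.f|membership|user@contoso.onmicrosoft.com")
    .then(result => console.log("User id:", result.data.Id))
    .catch(err => console.error("Failed to add user", err));
}
```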
If you have any questions/issues about this article, please let me know in the comments. | https://www.c-sharpcorner.com/blogs/how-to-integrate-pnp-graph-and-pnp-sp-in-your-webpart2 | CC-MAIN-2019-18 | refinedweb | 670 | 54.93 |
Type: Posts; User: irona20
I have seen it a bit late, but anyway thank you very much!!!
It has been a very nice surprise!
Thank you! Thank you! Thank you very much! :blush:
Wow, I've read your Gridbaglayout tutorial and it's fantastic. It will help me a lot. Thank you very much for your help
I have a frame sized 640x480, which I want to divide horizontally into 2 parts, one taking 1/3 of the total width and the other the rest. The former section will contain some buttons, the latter some...
I have declared a varchar variable, and it contains a SQL sentence (which works); I want to execute this sentence inside a procedure.
If I use:
EXECUTE @sentence;
I get this error: "insert...
Thank you Andreas. With your code happens the same. When the text is changed in my edit box (with spin control), all text is selected.
I have a dialog with a CEdit and a CSpinButtonCtrl.
This is my code:
Where m_editLog is the edit and m_strLog is a CString variable associated with the edit. With this code, the text in CEdit...
More info:
We have tested in 3 XP machines:
1.- It works.
2.- ShellExecute doesn't return any error, but it doesn't open the file.
3.- ShellExecute returns an error, I know it because the...
Thank you for your answer!
Well, I don't know what the error is, because I don't have the machine, and on my machine it works :D
Moreover, if you do double-click in the lnk file from explorer, it...
Hi!
This is my code:
if (::ShellExecute(NULL, "open", sPath, NULL, NULL, SW_SHOWNORMAL) <= (HINSTANCE)32)
{
ShowError(IDS_ERROR_OPEN_FILE, sPath);
bRetcode = FALSE;
}
Hi Doctor :)
I don't know if I need to include some directive more.
The error happens in this line:
typedef ULONG_PTR HCRYPTPROV;
Thank you for your answers.
Hi! I have downloaded Microsoft SDK, and this is my code:
#include "stdafx.h"
#define _WIN32_WINNT 0x0400
#include <stdio.h>
#include <windows.h>
#include <wincrypt.h>
If I execute this code:
CString sInfo;
SDKVersion = ldap_version( &ver );
if ( ver.security_level != LDAP_SECURITY_NONE )
{
sInfo.Format( "Level of encryption: %d bits\n",...
Yes, I thought that... but I am searching for a function like GetCurLine for CArchive.
If there isn't other method, I will have to use a counter.
Thank you for your answer.
Hi!
I am reading a text file line by line, using CStdioFile and ReadString.
Is there any way to know what number of line I am reading?
Thank you in advance.
Thank you dimm_coder :D
nice to hear you too :)
I need a global variable, which will be used from several .exe.
My code:
Log_clnt.cpp
string text_messages [NUM_MAX_MESSAGES];
With other files, it will generated a logclnt.lib.
No, now it isn't owner-drawn. Thank you!. Now I can insert my string. But now I have two problems:
1.
m_wndComboTool.m_ImageListCombo.Create( GetSystemMetrics(SM_CXSMICON),...
Hi! Maybe a stupid error, but my code only inserts garbage:
m_cmbAddress.Create(WS_CHILD|WS_VISIBLE | CBS_AUTOHSCROLL | CBS_DROPDOWN | CBS_HASSTRINGS ,
...
If you use:
::ShellExecute(NULL,NULL,"your page.html",NULL,NULL,SW_SHOWNORMAL);
That will open your default browser with your page.
How do you change the background? Have you used WM_CTLCOLOR message? I do it with WM_CTLCOLOR and it works for me.
Thank you very much Elhadif!
But now I get the error LDAP_LOCAL_ERROR :confused:
My code:
LDAP* ld = ldap_init ("scorpions", 636);
int iRtn;
iRtn = ldap_set_option(ld, LDAP_OPT_SSL,...
Example from MSDN:
/* VA.C: The program below illustrates passing a variable
* number of arguments using the following macros:
* va_start va_arg va_end
* ...
Look this function in MSDN:
IShellLink::SetArguments
I don't remember, it was three years ago :) I only remember I had to split my string.
Well, I don't think that the SQL statements are so huge.
I had this problem once:
CString strAux = "SELECT DISTINCT \
GenInfInforme.sINFO_CODIGO, \
GenInfInformeDetalles.sIDET_VISTA,... | http://forums.codeguru.com/search.php?s=6f29854d87dc7e7419dd10b288f92787&searchid=2757577 | CC-MAIN-2014-15 | refinedweb | 648 | 68.77 |
Products.TemplateFields
- Warning
- This product has not had a release in over 1 year and may no longer be maintained.
Supplies an Archetypes field useful for editing and storing Zope Page Templates
Project Description
This product provides two Archetype fields that store and render templates. There's the DTMLField for DTML templates and the ZPTField for ZPT templates.
Usage
Install as usual in your Products directory or as an egg.
Add this line to your custom Archetype to import the fields:
from Products.TemplateFields import DTMLField, ZPTField
In your schema, add DTMLFields and ZPTFields like this:
BaseSchema + Schema(( ... DTMLField('oneField'), ZPTField('anotherField'), ... ))
Credits
Thanks to Sidnei da Silva for the TALESField product, which served as the base for this.
Further Information
Visit for documentation, bug-reports, etc.
© 2005-2007, BlueDynamics Alliance, Klein & Partner KEG, Austria
Installation
TemplateFields may be installed as either an egg or as a traditional Zope product.
Via Buildout
Just add Products.TemplateFields to the "eggs" list for the buildout or zope2instance parts:
eggs =
    ...
    Products.TemplateFields
    ...
Via easy_setup
Just use the copy of easy_setup for the Python that you're using to run Zope.
Traditional Zope Product Installation
Copy or symbolically link the Products.TemplateFields/Products/TemplateFields to be Products/TemplateFields in your Zope instance's Products directory.
Changelog
1.2.5 (2010-06-10)
- ZopePageTemplate's write method decodes the template text; make sure that we encode with UTF-8 when returning the text in getRaw. [davisagli]
1.2.4
- 1.2.3 release was somehow botched.
1.2.3
- Switch to Zope 3 interfaces; we're now Plone 4 compatible. [smcmahon]
1.2.2
- Fix typo in exception handler. [wichert]
1.2.1
- Fix type in error handling. [ivo]
1.2
- Add a configurable option to swallow errors generated while rendering a template field. These errors were problematic since they break catalog indexing of the object, resulting in site errors. [wichert]
1.1.3
- Make sure to use a page template with acquisition context when validating fields. Without this context variables such as context and here were not available. [wichert]
- Cleanup REST syntax in the documentation and add a changelog. [wichert]
Current Release
Products.TemplateFields 1.2.5
Released Jun 10, 2010 — tested with Plone 4, Plone 3
Fixes an issue with templates including non-ASCII characters.
More about this release…
Get Products.TemplateFields for all platforms
- Products.TemplateFields-1.2.5.zip
- If you are using Plone 3.2 or higher, you probably want to install this product with buildout. See our tutorial on installing add-on products with buildout for more information. | http://plone.org/products/templatefields | crawl-003 | refinedweb | 429 | 58.99 |
Jun 25, 2008 11:00 AM|TheEagle|LINK
Hi,
My website is working fine. I added a deployment project. When I build the website (which causes the deployment project to be built too), I get the error:
aspnet_merge.exe exited with code 1.
I don't know how to find out what caused the error so I can solve it. Could anyone help? My company wants the website deployed today, but I couldn't deploy because of this error. Please help me as soon as possible.
All-Star
16800 Points
Jun 25, 2008 12:13 PM|Jeev|LINK
see this post
In all probability it's usually because of naming collisions, e.g. similarly named pages in two different folders with both of them in the same namespace.
Jun 25, 2008 12:54 PM|hongping|LINK
You could try running the command manually and adding the "-errorstack" flag, which might yield more information on the error.
Jun 25, 2008 04:30 PM|hongping|LINK
You can take a look at the output window after you build the Web Deployment Project. It shows the commands run for aspnet_compiler and aspnet_merge.
------ Rebuild All started: Project: WebSite3_deploy, Configuration: Debug Any CPU ------
if exist ".\TempBuildDir\" rd /s /q ".\TempBuildDir\"
D:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe -v /WebSite3 -p e:\bugs\WebSite3 -u -f -c -d .\TempBuildDir\
Running aspnet_merge.exe.
D:\Program Files\Microsoft SDKs\Windows\v6.0A\bin\aspnet_merge.exe .\TempBuildDir -o WebSite3_deploy -a -debug -copyattrs
Successfully merged '.\TempBuildDir'.
You can try re-running those commands in a Visual Studio command prompt. For aspnet_merge, you could try adding the "errorstack" option.
5 replies
Last post Jun 25, 2008 04:30 PM by hongping | https://forums.asp.net/t/1280702.aspx?aspnet_merge+exe+exited+with+code+1 | CC-MAIN-2017-34 | refinedweb | 283 | 59.3 |
Hi! I'm trying to make my own string type for my library. Anyway, I'm not very experienced with structs/classes so I don't understand what's going on here. I will not post the full code because I don't think anyone would want all of it, so I will just post the important parts that play a role (though in any case feel free to ask for the full code).
Code:
import core.memory;
import core.stdc.stdio;
import core.stdc.string;
import core.stdc.stdlib;
struct str {
private:
char* _val;
uint* _count;
ulong _cap, _len;
public:
// Constructors
this(const char* val) {
printf("Debug: Called this(const char* val)\n");
this._val = cast(char*)val;
this._count = cast(uint*)pureMalloc(4);
*this._count = 0;
this._cap = 0;
this._len = strlen(val);
}
// Copy constructor
this(ref return scope str rhs) {
printf("Debug: Copy constructor called!!! (strig rhs)\n");
this._cap = rhs.length;
this._len = this._cap;
this._val = cast(char*)pureMalloc(this._len);
strcpy(this._val, rhs.ptr);
this._count = cast(uint*)pureMalloc(4);
*this._count = 1;
}
// Assignment constructors
str opAssign(str rhs) {
printf("Debug: Assignment constructor called!!! (str rhs)\n");
if (*this._count == 1) {
free(this._val);
} else if (*this._count > 1) {
(*this._count)--;
} else *this._count = 1;
this._val = cast(char*)pureMalloc(rhs.length);
if (!this._val) {
fprintf(stderr, "Could not allocate memory for the str object");
exit(1);
}
strcpy(this._val, rhs.ptr);
this._cap = rhs.length;
this._len = rhs.length;
return this;
}
@property char* ptr() { return _val; }
@property ulong length() { return _len; }
}
extern (C) int main() {
str name = "Mike";
str other_name = "Anna";
other_name = name;
return 0;
}
So, when I assign the value of the variable "name" to "other_name", first it calls the copy constructor, then it calls the assignment constructor, and then it calls the copy constructor again. Why is this happening? I was expecting only the assignment constructor to get called.
On Friday, 19 November 2021 at 14:05:40 UTC, rempas wrote:
When you pass a struct instance to a function by value, or return a struct instance from a function by value, a copy is made, and the copy constructor is called.
Your opAssign takes rhs by value, and returns a str by value:
// Returned by value
// |
// v
str opAssign(str rhs) {
// ^
// |
// Passed by value
So, every call to it will result in two calls to str's copy constructor.
If you want to avoid this, you can change opAssign to have the following signature:
// Returned by reference
// |
// v
ref str opAssign()(auto ref str rhs)
// ^
// |
// Passed by reference (if possible)
Since auto ref is only allowed for template functions, I have added an empty template argument list (the ()) to make opAssign into a function template.
You can read more about ref functions and auto ref parameters in the D language specification.
On Friday, 19 November 2021 at 14:22:07 UTC, Paul Backus wrote:
Your opAssign takes rhs by value, and returns a str by
[...]
Interesting! It's weird that it works like that and explicitly calls a constructor but it indeed works as expected now. Thanks a lot and have an amazing day! | http://forum.dlang.org/thread/buajicbayemxlrqqzjaf@forum.dlang.org#post-ujwtfmphekujsuqxqtqq:40forum.dlang.org | CC-MAIN-2021-49 | refinedweb | 536 | 66.33 |