Array class in C++. Operations on std::array:

1. at() : accesses an element of the array, with bounds checking.
2. get() : also accesses elements of the array. std::get is not a member of the array class but an overloaded function from the tuple interface, taking the index as a template argument.
3. operator[] : accesses elements just as with C-style arrays, without bounds checking.

Output:

    The array elements are (using at()) : 1 2 3 4 5 6
    The array elements are (using get()) : 1 2 3 4 5 6
    The array elements are (using operator[]) : 1 2 3 4 5 6

4. front() : returns the first element of the array.
5. back() : returns the last element of the array.

Output:

    First element of array is : 1
    Last element of array is : 6

6. size() : returns the number of elements in the array. This is a property that C-style arrays lack.
7. max_size() : returns the maximum number of elements the array can hold, i.e., the size with which the array was declared. For std::array, size() and max_size() return the same value.

Output:

    The number of array elements is : 6
    Maximum elements array can hold is : 6

8. swap() : swaps all elements of one array with those of another array of the same type and size.

Output:

    The first array elements before swapping are : 1 2 3 4 5 6
    The second array elements before swapping are : 7 8 9 10 11 12
    The first array elements after swapping are : 7 8 9 10 11 12
    The second array elements after swapping are : 1 2 3 4 5 6

9. empty() : returns true when the array size is zero, otherwise false.
10. fill() : fills the entire array with a particular value.

Output:

    Array empty
    Array after filling operation is : 0 0 0 0 0 0
https://www.geeksforgeeks.org/array-class-c/
On Wed, Jul 20, 2005 at 02:51:14PM -0700, Steve Langasek wrote:
> Yeah, this is another lib with a C++ implementation that only exports
> a C ABI in its headers. (other telltale signs to look for besides
> '::', btw are 'use', 'class', 'operator'; but that may obviously give
> false positives.) The C++ bits within the library are a whole lot of
> template implementations, and a few internal classes that are only
> exposed in the headers via C wrappers. If you're sure that nothing
> out there is using tsqllib internals inappropriately, then there's no
> need for a package name change.

Actually the proper way is to check the public headers and look if the interface is guarded with extern "C" { ... }. There _must_ be a check like:

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* ... */

    #ifdef __cplusplus
    }
    #endif

Just take the public header and pass it through the preprocessor:

    $ g++ -E /usr/include/GL/gl.h | grep -v ^#

and look for the bits outside the extern "C" linkage:

    typedef int ptrdiff_t;
    typedef unsigned int size_t;

That's harmless. Let's say you do find something like:

    extern void glEnableTraceMESA( GLbitfield mask );

_outside_ the extern "C" block... that is _not_ harmless. A small parser that looks for extern "C", the "{" right after it and the matching "}" should make things much easier.

-- Marcelo
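The "small parser" Marcelo suggests can be sketched in a few lines of Python. This is my own sketch, not anything from the thread: it tracks brace depth after each extern "C" { and flags extern function declarations seen outside any guard. It ignores comments and string literals, so it is a heuristic, not a real parser:

```python
import re

def find_unguarded_externs(preprocessed: str) -> list[str]:
    """Scan preprocessed header text for declarations outside extern "C" blocks."""
    guard_depth = 0   # how many extern "C" blocks we are currently inside
    unguarded = []
    for line in preprocessed.splitlines():
        stripped = line.strip()
        if re.match(r'extern\s+"C"\s*{', stripped):
            guard_depth += 1
            continue
        if stripped == "}" and guard_depth > 0:
            guard_depth -= 1
            continue
        # Function declarations outside any extern "C" block get C++ mangling.
        if guard_depth == 0 and re.match(r'extern\s+\w+.*\(.*\);', stripped):
            unguarded.append(stripped)
    return unguarded

sample = '''
typedef int ptrdiff_t;
extern "C" {
extern void glBegin( int mode );
}
extern void glEnableTraceMESA( unsigned int mask );
'''
print(find_unguarded_externs(sample))
# -> ['extern void glEnableTraceMESA( unsigned int mask );']
```

The typedef and the guarded glBegin declaration pass silently; only the declaration outside the extern "C" block is reported.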
https://lists.debian.org/debian-devel/2005/07/msg01356.html
in reply to Re^3: Perl6 Contest #2: P6 That Doesn't Look Like P5
in thread Perl6 Contest #2: P6 That Doesn't Look Like P5

I was looking for ways to pass out-of-band parameters to the loop subroutines (preferably without requiring the users to make messy changes), and this idea of using named parameters in combination with a splatted list occurred to me.

    #!/usr/bin/pugs
    use v6;
    #use Test;
    #plan 2;

    sub oob(+$x = $CALLER::_, *@lst) {
        return ($x,@lst);
    }

    sub runner() {
        $_ = "qqq";
        oob("a","b","c");
    }

    my ($x, @lst) = runner();
    say "x = *$x* lst = *",~@lst,"*";

    #is($x, 'qqq', '... default named parameter with $CALLER_ and a list', :todo<bug>);
    #is(~@lst, 'a b c', '... list after default named parameter with $CALLER_', :todo<bug>);

What I actually got was:

    x = *a* lst = *b c*

and not:

    x = *qqq* lst = *a b c*

This was unexpected after seeing the examples in E06, so it could be a bug (uncommenting the Test stuff turns this into a test for it). Or I may be completely misunderstanding positional parameters.

In Section Seekers of Perl Wisdom
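For readers who don't follow the Perl 6 signature syntax, the calling convention the post expected can be sketched in Python with a keyword-only parameter plus a slurpy positional list. This is a rough analogue of my own making, not Perl 6 semantics:

```python
# x is out-of-band: it keeps its default unless passed explicitly by
# keyword, while all positional arguments land in the slurpy lst.
def oob(*lst, x="qqq"):
    return x, lst

def runner():
    # The behaviour the post expected: "a", "b", "c" go to lst,
    # and x falls back to its default.
    return oob("a", "b", "c")

x, lst = runner()
print(x, lst)  # qqq ('a', 'b', 'c')
```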
https://www.perlmonks.org/?displaytype=print;node_id=464502
It seems I am having a linking problem. In my project I'm using a class that is defined in a header file, whose methods' definitions lie in a precompiled library. I am using CMake and the compile command includes the path to the precompiled library, set correctly via the -L flag. The linking problem relates to an overloaded typecast. In the header file, the overloaded typecast looks like this:

    namespace comm {
        struct InstanceID {
            ...
            __attribute__ ((visibility("default"))) operator ipl::string () const;
        };
    }

and as mentioned, the definition lies in a precompiled library. I peeked inside the precompiled library using the nm Linux tool and the symbol is indeed defined there. Note that ipl::string is a typedef of std::basic_string<char, traits, allocator>. The linking error that I get looks like this: [...] You can note that the definition in the precompiled library includes [abi:cxx11], whilst the undefined reference shown in the error does not. Maybe this is the error, that the reference is not exactly the same? How could I solve this? What else could it be? Thank you!

Source: Windows Questions C++
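A common cause of exactly this symptom is GCC's dual libstdc++ ABI: since GCC 5, std::string can be the new std::__cxx11::basic_string (symbols tagged [abi:cxx11]) or the old std::basic_string, selected by the _GLIBCXX_USE_CXX11_ABI macro. If the precompiled library and your objects were built with different settings, the mangled names differ and the linker reports an undefined reference even though nm shows the symbol. A sketch of how to check and match the ABIs, with placeholder file names:

```shell
# Demangle the library's symbols: new-ABI symbols mention
# std::__cxx11::basic_string, old-ABI ones plain std::basic_string.
nm -C libcomm.so | grep 'operator ipl::string'

# Rebuild your own objects with the matching dual-ABI switch:
g++ -D_GLIBCXX_USE_CXX11_ABI=1 -c main.cpp   # force the new [abi:cxx11] ABI
g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c main.cpp   # force the old pre-C++11 ABI
```

In CMake the define can be applied project-wide with add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0). If the ABIs already match, the next suspect would be a mismatch in the ipl::string typedef itself between the header you compile against and the one the library was built with.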
https://windowsquestions.com/2021/12/29/linker-error-for-overloaded-typecast-into-string-in-class/
Interesting Things, Largely Python and Twisted Related
Jean-Paul Calderone

Python 3 - Call for Porters

Hello Pythonistas,

Earlier this year a number of [...] Tahoe-LAFS. [...]

Foolscap, a dependency of Tahoe-LAFS, is also being ported. Foolscap is an object-capability-based RPC protocol with flexible serialization.

Some details of the porting effort are available in a milestone on the Tahoe-LAFS trac instance.

For this help, we are hoping to find a person/people with significant prior Python 3 porting experience and, preferably, some familiarity with Twisted, though in general the Tahoe-LAFS project welcomes contributors of all backgrounds and skill levels. [...]

Jean-Paul Calderone

[...] the txflashair Dockerfile

Some [...] FlashAir [...] a bit first but it doesn't do quite what I want out of the box and the code is a country mile from anything I'd like to hack on. It did serve as a decent resource for the HTTP interface to go alongside the official reference, which I didn't find until later.

Fast forward a bit and I've got txflashair.

This afternoon I took the Dockerfile I'd managed to cobble together in the last hack session:

    FROM python:2-alpine

    COPY . /src
    RUN apk add --no-cache python-dev
    RUN apk add --no-cache openssl-dev
    RUN apk add --no-cache libffi-dev
    RUN apk add --no-cache build-base

    RUN pip install /src

    VOLUME /data

    ENTRYPOINT ["txflashair-sync"]
    CMD ["--device-root", "/DCIM", "--local-root", "/data", "--include", "IMG_*.JPG"]

and turned it into something halfway to decent that produces something actually working to boot:

    FROM python:2-alpine

    RUN apk add --no-cache python-dev
    RUN apk add --no-cache openssl-dev
    RUN apk add --no-cache libffi-dev
    RUN apk add --no-cache build-base
    RUN apk add --no-cache py-virtualenv
    RUN apk add --no-cache linux-headers

    RUN virtualenv /app/env

    COPY requirements.txt /src/requirements.txt
    RUN /app/env/bin/pip install -r /src/requirements.txt

    COPY . /src

    RUN /app/env/bin/pip install /src

    FROM python:2-alpine

    RUN apk add --no-cache py-virtualenv

    COPY --from=0 /app/env /app/env

    VOLUME /data

    ENTRYPOINT ["/app/env/bin/txflashair-sync"]
    CMD ["--device-root", "/DCIM", "--local-root", "/data", "--include", "IMG_*.JPG"]

So, what have I done exactly? The change to make the thing work is basically just to install the missing py-virtualenv. It took a few minutes to track this down. netifaces has this as a build dependency. I couldn't find an apk equivalent to apt-get build-dep but I did finally track down its APKBUILD file and found that linux-headers was probably what I was missing. Et voila, it was.

Perhaps more interesting, though, are the changes to reduce the image size. I began using the new-ish Docker feature of multi-stage builds.
Everything from the beginning of the file down to the second FROM line defines a Docker image as usual. However, the second FROM line starts a new image which is allowed to copy some of the contents of the first image. I merely copy the entire virtualenv that was created in the first image into the second one, leaving all of the overhead of the build environment behind to be discarded. [...]

py-virtualenv is also copied to the second image because a virtualenv does not work without virtualenv itself being installed, strangely.

Like this kind of thing? Check out Supporting Open Source on the right.

Jean-Paul Calderone

[...] to EC2 (Refrain)

Recently Moshe wrote up a demonstration of the simple steps needed to retrieve an SSH public key from an EC2 instance to populate a known_hosts file. Moshe's example uses the highly capable boto3 library for its EC2 interactions. However, since his blog is syndicated on Planet Twisted, reading it left me compelled to present an implementation based on txAWS instead.

First, as in Moshe's example, we need argv and expanduser so that we can determine which instance the user is interested in (accepted as a command line argument to the tool) and find the user's known_hosts file (conventionally located in ~):

    from sys import argv
    from os.path import expanduser

Next, we'll get an abstraction for working with filesystem paths. This is commonly used in Twisted APIs because it saves us from many path manipulation mistakes committed when representing paths as simple strings:

    from filepath import FilePath

Now, get a couple of abstractions for working with SSH. Twisted Conch is Twisted's SSH library (client & server). KnownHostsFile knows how to read and write the known_hosts file format.
We'll use it to update the file with the new key. Key knows how to read and write SSH-format keys. We'll use it to interpret the bytes we find in the EC2 console output and serialize them to be written to the known_hosts file.

    from twisted.conch.client.knownhosts import KnownHostsFile
    from twisted.conch.ssh.keys import Key

And speaking of the EC2 console output, we'll use txAWS to retrieve it. AWSServiceRegion is the main entrypoint into the txAWS API. From it, we can get an EC2 client object to use to retrieve the console output.

    from txaws.service import AWSServiceRegion

And last among the imports, we'll write the example with inlineCallbacks and use react to drive the whole thing so we don't need to explicitly import, start, or stop the reactor.

    from twisted.internet.defer import inlineCallbacks
    from twisted.internet.task import react

With that sizable preamble out of the way, the example can begin in earnest. First, define the main function using inlineCallbacks and accepting the reactor (to be passed by react) and the EC2 instance identifier (taken from the command line later on):

    @inlineCallbacks
    def main(reactor, instance_id):

Now, get the EC2 client. This usage of the txAWS API will find AWS credentials in the usual way (looking at AWS_PROFILE and in ~/.aws for us):

        region = AWSServiceRegion()
        ec2 = region.get_ec2_client()

Then it's a simple matter to get an object representing the desired instance and that instance's console output. Notice these APIs return Deferred so we use yield to let inlineCallbacks suspend this function until the results are available.
        [instance] = yield ec2.describe_instances(instance_id)
        output = yield ec2.get_console_output(instance_id)

Some simple parsing logic, much like the code in Moshe's implementation (since this is exactly the same text now being operated on). We do take the extra step of deserializing the key into an object that we can use later with a KnownHostsFile object.

        keys = (
            Key.fromString(key)
            for key in extract_ssh_key(output.output)
        )

Then write the extracted keys to the known hosts file:

        known_hosts = KnownHostsFile.fromPath(
            FilePath(expanduser("~/.ssh/known_hosts")),
        )
        for key in keys:
            for name in [instance.dns_name, instance.ip_address]:
                known_hosts.addHostKey(name, key)
        known_hosts.save()

There's also the small matter of actually parsing the console output for the keys:

    def extract_ssh_key(output):
        return (
            line for line in output.splitlines()
            if line.startswith(u"ssh-rsa ")
        )

And then kicking off the whole process:

    react(main, argv[1:])

Putting it all together:

    from sys import argv
    from os.path import expanduser

    from filepath import FilePath

    from twisted.conch.client.knownhosts import KnownHostsFile
    from twisted.conch.ssh.keys import Key

    from txaws.service import AWSServiceRegion

    from twisted.internet.defer import inlineCallbacks
    from twisted.internet.task import react

    @inlineCallbacks
    def main(reactor, instance_id):
        region = AWSServiceRegion()
        ec2 = region.get_ec2_client()

        [instance] = yield ec2.describe_instances(instance_id)
        output = yield ec2.get_console_output(instance_id)

        keys = (
            Key.fromString(key)
            for key in extract_ssh_key(output.output)
        )

        known_hosts = KnownHostsFile.fromPath(
            FilePath(expanduser("~/.ssh/known_hosts")),
        )
        for key in keys:
            for name in [instance.dns_name, instance.ip_address]:
                known_hosts.addHostKey(name, key)
        known_hosts.save()

    def extract_ssh_key(output):
        return (
            line for line in output.splitlines()
            if line.startswith(u"ssh-rsa ")
        )

    react(main, argv[1:])

[...]

Also, I'd like to thank LeastAuthority (my current employer and operator of the Tahoe-LAFS-based S4 service which just so happens to lean heavily on txAWS) for originally implementing get_console_output for txAWS (which, minor caveat, will not be available until the next release of txAWS is out).

As always, if you like this sort of thing, check out the support links on the right.

Jean-Paul Calderone

Twisted Web in 60 Seconds: HTTP/2

Hello, hello. It's been a long time since the last entry in the "Twisted Web in 60 Seconds" series. If you're new to the series and you like this post, I recommend going back and reading the older posts as well.

In this entry, I'll show you how to enable HTTP/2 for your Twisted Web-based site. HTTP/2 is the latest entry in the HTTP family of protocols. It builds on work from Google and others to improve on performance (and other) shortcomings of the older HTTP/1.x protocols in widespread use today.

Twisted implements HTTP/2 support by building on the general-purpose H2 Python library. In fact, all you have to do to have HTTP/2 for your Twisted Web-based site (starting in Twisted 16.3.0) is install the dependencies:

    $ pip install twisted[http2]

Your TLS-based site is now available via HTTP/2! A future version of Twisted will likely extend this to non-TLS sites (which requires the Upgrade: h2c handshake) with no further effort on your part.
Jean-Paul Calderone

Object Initialization - Patterns and Antipatterns

I caught Toshio Kuratomi's post about asyncio initialization patterns (or anti-patterns) on Planet Python. This is something I've dealt with a lot over the years using Twisted (one of the sources of inspiration for the asyncio developers).

To recap, Toshio wondered about a pattern involving asynchronous initialization of an instance. He wondered whether it was a good idea to start this work in __init__ and then explicitly wait for it in other methods of the class before performing the distinctive operations required by those other methods. Using asyncio (and using Toshio's example with some omissions for simplicity) this looks something like:

    class Microblog:
        def __init__(self, ...):
            loop = asyncio.get_event_loop()
            self.init_future = loop.run_in_executor(None, self._reading_init)

        def _reading_init(self):
            # ... do some initialization work,
            # presumably expensive or otherwise long-running ...

        @asyncio.coroutine
        def sync_latest(self):
            # Don't do anything until initialization is done
            yield from self.init_future
            # ... do some work that depends on that initialization ...

It's quite possible to do something similar to this when using Twisted. It only looks a little bit different:

    class Microblog:
        def __init__(self, ...):
            self.init_deferred = deferToThread(self._reading_init)

        def _reading_init(self):
            # ... do some initialization work,
            # presumably expensive or otherwise long-running ...

        @inlineCallbacks
        def sync_latest(self):
            # Don't do anything until initialization is done
            yield self.init_deferred
            # ... do some work that depends on that initialization ...

Despite the differing names, these two pieces of code basically do the same thing:

- run _reading_init in a thread from a thread pool
- whenever sync_latest is called, first suspend its execution until the thread running _reading_init has finished running it

Maintenance costs

One thing this pattern gives you is an incompletely initialized object. If you write m = Microblog() then m refers to an object that's not actually ready to perform all of the operations it supposedly can perform. It's either up to the implementation or the caller to make sure to wait until it is ready. Toshio suggests that each method should do this implicitly (by starting with yield self.init_deferred or the equivalent). This is definitely better than forcing each call-site of a Microblog method to explicitly wait for this event before actually calling the method.

[...] The _reading_init method actually modifies attributes of self, which means there are potentially many more than just two possible cases. Even if you're not particularly interested in having full automated test coverage (... for some reason ...), you still have to remember to add this yield statement to the beginning of every method of Microblog.

Diminished flexibility

Another thing this pattern gives you is an object that does things as soon as you create it. Have you ever had a class with a __init__ [...] don't have).

Another related problem is that it removes one of your options for controlling the behavior of instances of that class.
It's great to be able to control everything a class does just by the values passed in to __init__, but most programmers have probably come across a case where behavior is controlled via an attribute instead. If __init__ starts an operation then instantiating code doesn't have a chance to change the values of any attributes first (except, perhaps, by resorting to setting them on the class - which has global consequences and is generally icky).

Loss of control

A third consequence of this pattern is that instances of classes which employ it are inevitably doing something. It may be that you don't always want the instance to do something. It's certainly fine for a Microblog instance to create a SQLite3 database and initialize a cache directory if the program I'm writing which uses it is actually intent on hosting a blog. It's most likely the case that other useful things can be done with a Microblog instance, though. Toshio's own example includes a post method which doesn't use the SQLite3 database or the cache directory. His code correctly doesn't wait for init_future at the beginning of his post method - but this should leave the reader wondering why we need to create a SQLite3 database if all we want to do is post new entries.

Using this pattern, the SQLite3 database is always created - whether we want to use it or not. There are other reasons you might want a Microblog instance that hasn't initialized a bunch of on-disk state too - one of the most common is unit testing (yes, I said "unit testing" twice in one post!). A very convenient thing for a lot of unit tests, both of Microblog itself and of code that uses Microblog, is to compare instances of the class. How do you know you got a Microblog instance that is configured to use the right cache directory or database type? You most likely want to make some comparisons against it. The ideal way to do this is to be able to instantiate a Microblog instance in your test suite and use its == implementation to compare it against an object given back by some API you've implemented. If creating a Microblog [...] __init__ [...] where you have to take them both or give up on using Microblog.

Alternatives

You might notice that these three observations I've made all sound a bit negative. You might conclude that I think this is an antipattern to be avoided. If so, feel free to give yourself a pat on the back at this point.

But if this is an antipattern, is there a pattern to use instead? I think so. I'll try to explain it.

The general idea behind the pattern I'm going to suggest comes in two parts. The first part is that your object should primarily be about representing state, and your __init__ method should be about accepting that state from the outside world and storing it away on the instance being initialized for later use. It should always represent complete, internally consistent state - not partial state as asynchronous initialization implies. This means your __init__ methods should mostly look like this:

    class Microblog(object):
        def __init__(self, cache_dir, database_connection):
            self.cache_dir = cache_dir
            self.database_connection = database_connection

If you think that looks boring - yes, it does. Boring is a good thing here. Anything exciting your __init__ method does is probably going to be the cause of someone's bad day sooner or later. If you think it looks tedious - yes, it does.
Consider using Hynek Schlawack's excellent attrs package (full disclosure - I contributed some ideas to attrs' design and Hynek occasionally says nice things about me (I don't know if he means them, I just know he says them)).

The second part of the idea is an acknowledgement that asynchronous initialization is a reality of programming with asynchronous tools. Fortunately, __init__ isn't the only place to put code. Asynchronous factory functions are a great way to wrap up the asynchronous work sometimes necessary before an object can be fully and consistently initialized. Put another way:

    class Microblog(object):
        # ... __init__ as above ...

        @classmethod
        @asyncio.coroutine
        def from_database(cls, cache_dir, database_path):
            # ... or make it a free function, not a classmethod, if you prefer
            loop = asyncio.get_event_loop()
            database_connection = yield from loop.run_in_executor(None, cls._reading_init)
            return cls(cache_dir, database_connection)

Notice that the setup work for a Microblog instance is still asynchronous but initialization of the Microblog instance is not. There is never a time when a Microblog instance is hanging around partially ready for action. There is setup work and then there is a complete, usable Microblog.

This addresses the three observations I made above:

- Methods of Microblog never need to concern themselves with worries about whether the instance has been completely initialized yet or not.
- Nothing happens in Microblog.__init__. If Microblog has some methods which depend on instance attributes, any of those attributes can be set after __init__ is done and before those other methods are called. If the from_database constructor proves insufficiently flexible, it's easy to introduce a new constructor that accounts for the new requirements (named constructors mean never having to overload __init__ for different competing purposes again).
- It's easy to treat a Microblog instance as an inert lump of state. Simply instantiating one (using Microblog(...)) has no side-effects. The special extra operations required if one wants the more convenient constructor are still available - but elsewhere, where they won't get in the way of unit tests and unplanned-for uses.

[...]

Jean-Paul Calderone

[...], Imaginary!

Still here, still hacking away at this vision/clothing issue [1][2][3][4]. I hope this isn't getting tedious. If it is, hang in there! There's some light up ahead.

[...] Seven other fixes landed too, though. This is the most activity Imaginary has seen in a handful of years now. Crufty, dead code has been deleted. Several interfaces have been improved. Useful functionality has been factored out so it can actually be re-used. Good stuff. ashfall gets the credit for doing the legwork of splitting out these good pieces from the larger branch where most work has been going on.

Something else came about last week which is worthy of note: the code in that branch now passes the full test suite. That is pretty big news. I think maybe it means we actually figured out how to fix the bug (I'm totally hedging here; Imaginary's test suite is pretty good). This came about after a relatively big change and a relatively little change.

The bigger change was an expansion of the interface for exits. "Wait," you're surely going to exclaim, "exits?" [...]

This generalization resulted in a bit more information being rendered than we really wanted.
Containers (boxes, chairs, rooms, submarines, etc.) almost all have implicit "in" and "out" exits (okay, exit might not be the best name for how you go in). [...]

[...] see the Idea instance representing your socks). The short-term fix was just to add an observer argument to an existing method, isOpaque, and hack the garment system to make clothing not [...].

And that fix we actually made a few weeks ago. What we did last week was remember to update the code that calls isOpaque to actually pass the correct observer object.

So, pending some refactoring, some documentation, probably some new automated tests, clothing and vision are now playing nicely together - in a way with fewer net hacks than we had before. Woot.

[...] Nevow (yes, that's a github link). There's more work to do on that front but perhaps mostly just getting a new release out. So, watch this space (but please don't hold your breath).

Jean-Paul Calderone

Graph Traversal Woes

So we worked on Imaginary some more. Unsurprisingly, the part that was supposed to be easier was surprisingly hard.

As part of the change Glyph and I are working on, we needed to rewrite the code in the presentation layer that shows a player what exits exist from a location. The implementation strategy we chose to fix "looking at stuff" involved passing a lot more information to this presentation code and teaching the presentation code how to (drumroll) present it.

Unfortunately, we ran into the tiny little snag that we haven't actually been passing the necessary information to the presentation layer! Recall that Imaginary represents the simulation as a graph. The graph is directed and most certainly cyclic. To avoid spending all eternity walking loops in the graph, Imaginary imposes a constraint that when using obtain, no path through the graph will be returned as part of the result if the target (a node) of the last link (a directed edge in the graph path) is the source (a node) of any other link in the path.

[...] obtain gets to the path Alice -> room A -> door -> room B -> door and then it doesn't take the next step (door -> room A) because the next step is the first path that qualifies as cyclic by Imaginary's definition: it includes room A in two places.

Having given this a few more days' thought, I'm curious to explore the idea that the definition of cyclic is flawed. Perhaps Alice -> room A -> door -> room B -> door -> room A should actually be considered a viable path to include in the result of obtain? It is cyclic, but it represents a useful piece of information that can't easily be discovered otherwise. Going any further than this in the graph would be pointless because obtain is already going to make sure to give back paths that include all of the other edges away from room A - we don't need duplicates of those edges tacked on after the path has gone to room B and come back again.

Though there are other, narrower solutions as well, such as just making the presentation layer smart enough to be able to represent Alice -> room A -> door -> room B -> door correctly even though it hasn't been directly told that the door from room B leads back to room A. This would have a smaller impact on Imaginary's behavior overall. I'm not yet sure if avoiding the big impact here makes the most sense - if obtain is omitting useful information then maybe fixing that is the best course of action, even if it is more disruptive in the short term.
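The pruning rule described above can be sketched as a small path enumerator. The function and graph names here are mine, not Imaginary's API; each room's door is modelled as its own node, matching the post's example where the path is allowed to reach room B's door but not step back to room A:

```python
def obtain_paths(graph, start):
    """Yield every path from start, pruning Imaginary-style cycles.

    graph maps node -> list of neighbours; a path is a list of
    (source, target) edges.  An extension is skipped when its target is
    already the source of some earlier link, so traversal never loops.
    """
    stack = [[(start, nbr)] for nbr in graph.get(start, ())]
    while stack:
        path = stack.pop()
        yield path
        sources = {src for src, _ in path}
        tail = path[-1][1]
        for nbr in graph.get(tail, ()):
            if nbr in sources or nbr == tail:
                continue  # this step would make the path cyclic
            stack.append(path + [(tail, nbr)])

world = {
    "Alice": ["room A"],
    "room A": ["door A"],
    "door A": ["room B"],
    "room B": ["door B"],
    "door B": ["room A"],   # leads back, but traversal stops before looping
}
longest = max(obtain_paths(world, "Alice"), key=len)
print([target for _, target in longest])
# ['room A', 'door A', 'room B', 'door B'] -- the step back to room A is pruned
```

Relaxing the rule as the post muses would amount to yielding the pruned extension once before refusing to extend it further.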
</p>Jean-Paul Calderone (Divmod Imaginary Update)<p>Over the weekend Glyph and I had another brief hack session on Imaginary. We continued our efforts towards fixing the visibility problem I've described over the last <a href="">couple</a> of <a href="">posts</a>. This really was a brief session but we took two excellent steps forward.</p> <p>First, we made a big chunk of progress towards fixing a weird hack in the garment system. If you recall, I previously mentioned that the way <code>obtain</code> works in Imaginary now, the system will find two links to a hat someone is wearing. In a previous session, Glyph and I removed the code responsible for creating the second link. This was a good thing because it was straight-up deletion of code. The first link still exists and that's enough to have a path between an observer and a hat being worn by someone nearby.</p> <p>The downside of this change is that the weird garment system hack was getting in the way of that remaining link being useful. The purpose of the hack was to prevent hats from showing up twice in the old system - once for each link to them. Now, with only one link, the hack would sometimes prevent the hat from even showing up once. The fix was <em>fairly</em> straightforward. To explain it, I have to explain annotations first.</p> <p>As I've mentioned before, Imaginary represents a simulation as a graph. One powerful tool that simulation developers have when using Imaginary is that edges in the graph can be given arbitrary annotations. The behavior of simulation code will be informed by the annotations on each link along the path through the graph the simulation code is concerned with. Clear as mud, right?</p> <p>Consider this example. Alice and Bob are standing in a room together. Alice is wearing a pair of white tennis shoes. Alice and Bob are standing in two parts of the room which are divided from each other by a piece of red tinted glass. 
A realistic vision simulation would have Bob observing Alice's shoes as red. Alice, being able to look directly at those same shoes without an intervening piece of tinted glass, perceives them as white. In Imaginary, for Bob to see the tennis shoes, the vision system uses <code>obtain</code> to find the path from Bob to those shoes. The resulting path necessarily traverses the <code>Idea</code> representing the glass - the path looks <strong>very</strong> <em>roughly</em> like <em>Bob</em> to <em>glass</em> to <em>Alice</em> to <em>shoes</em>. The glass, being concerned with the implementation of tinting for the vision system, <em>annotates</em> the link from itself to Alice with an object that participates in the vision simulation. That object takes care of representing the spectrum which is filtered out of any light which has to traverse that link. When the vision system shows the shoes to Bob, the light spectrum he uses to see them has been altered so that he now perceives them as red.</p> <p>Clearly I've hand-waved over a lot of details of how this works but I hope this at least conveys the very general idea of what's going on inside Imaginary when a simulation system is basing its behavior on the simulation graph. Incidentally, better documentation for how this stuff works is one of the things that's been added in the branch Glyph and I are working on.</p> <p>Now, back to hats. The hack in the garment system annotates links between clothing and the person wearing that clothing. The annotation made the clothing invisible. This produced a good effect - when you look around you, you're probably not very interested in seeing descriptions of your own clothing. Unfortunately, this annotation is on what is now the <strong>only</strong> link to your clothing. Therefore, this annotation makes it impossible for you to <strong>ever</strong> see your own clothing. You could put it on because when you're merely holding it, it doesn't receive this annotation. 
However, as soon as you put it on it vanishes. This poses some clear problems: for example, you can never take anything off.</p> <p>The fix for this was simple and satisfying. We changed the garment system so that instead of annotating clothing in a way that says "you can't see this" it annotates clothing in a way that merely says "you're wearing this". This is very satisfying because, as you'll note, the new annotation is a much more obviously correct piece of information. Part of implementing a good simulation system in Imaginary is having a good model for what's being simulated. "Worn clothing is worn clothing" sounds like a much better piece of information to have in the model than "Worn clothing can't be seen". There's still some polish to do in this area but we're clearly headed in the right direction.</p> <p>By comparison, the second thing we did is ridiculously simple. Before Imaginary had <code>obtain</code> it had <code>search</code>. Sounds similar (well, maybe...) and they were similar (well, sort of...). <code>search</code> was a much simpler, less featureful graph traversal API from before Imaginary was as explicitly graph-oriented as it is now. Suffice it to say most of what's cool about Imaginary now wasn't possible in the days of <code>search</code>. When <code>obtain</code> was introduced, <code>search</code> was re-implemented as a simple wrapper on top of <code>obtain</code>. This was neat - both because it maintained API compatibility and because it demonstrated that <code>obtain</code> was strictly <em>more</em> expressive than <code>search</code>. However, that was a while ago and <code>search</code> is mostly just technical baggage at this point. We noticed there were only three users of <code>search</code> left (outside of unit tests) and took the opportunity to delete it and update the users to use <code>obtain</code> directly instead. 
Hooray, more code deleted.</p> <p>Oh, and I think we're now far enough beneath the fold that I can mention that the project has now completed most of a migration from <a href="">Launchpad</a> to <a href="">github</a> (the remaining work is mostly taking things off of Launchpad). Thanks very much to <a href="">ashfall</a> for doing very nearly all the work on this.</p>Jean-Paul Calderone Things Right (Divmod Imaginary Update)<p><a href="">Earlier this month I wrote about how Glyph and I have been trying to fix a bug in Imaginary</a>. Since then we've worked on the problem a little more and made some excellent progress.</p> <p>If you recall, the problem involved being shown articles of clothing as though they were lying on the floor rather than being worn by nearby people. I mentioned that our strategy to fix this was to make the "look at <em>something</em>" action pay attention to more of the structure in the simulation graph.</p> <p>That's just what Glyph and I did when we got together to work on this some more last week.</p> <p>The version of the "look at <em>something</em>" action in trunk operates in two steps. 
First, it searches around the player in the simulation graph for an object satisfying a few simple criteria: </p> <ul><li>The object is something that can be seen at all - for example, a chair or a shirt, not the wind or your rising dread at the <em>scritch scritch scritch</em> noise that always seems to be coming from <strong>just</strong> out of your field of vision.</li><li>The object is something the player's senses <em>actually</em> allow them to see - for example, objects <strong>not</strong> draped in a cloak of invisibility, objects <strong>not</strong> sitting in a pitch black room.</li><li>The object answers to the name the player used in the action - "look at hat" will not consider the Sears tower or a passing dog.</li><li>The object is reasonably nearby (which I'll just hand-wave over for now).</li></ul> <p>Having found one and only one object satisfying all of these criteria (behavior for the case where zero or more than one result are found produce another outcome), the action proceeds to the second portion of its implementation. It invokes a method that all objects capable of being seen are contractually obligated to provide, a method called <code>visualize</code> which is responsible for representing that thing to the player doing the looking. The most common implementation of that method is a stack of special-cases:</p> <ul><li>is the thing a location? if so, include information about its exits.</li><li>does the thing have a special description? if so, include that.</li><li>is the thing in good condition or bad condition? include details about that.</li><li>is the thing wearing clothing? if so, include details about those.</li><li>is the thing a container and open? if so, include details about all the things inside it.</li></ul> <p>Much of this logic is implemented using a plugin system so, while somewhat gross, it at least holds with <em>some</em> of Imaginary's goals of generality. However, it has some problems beyond just being gross. 
One of those problems is that the full path from the player doing the looking to all of the things that appear in the <code>visualize</code> method's result is not known. This is because the path is broken in half: that from the player to the direct target (whatever <em>something</em> names) and that from the direct target to each of the (for lack of a better word) indirect targets (if Bob looks at Alice then Alice is a target of the action but so is the axe Alice is holding behind her back, Alice's hat, etc). And in most cases the problem is even worse because the second part - the path from the direct target to each of the indirect targets - is ignored. Breaking up the path or ignoring part of it like this has problematic consequences.</p> <p>If Alice is carrying that axe behind her back and Bob looks at her, unless you know the <strong>complete</strong> path through the simulation graph from Bob to the axe then you can't actually decide whether Bob can <em>see</em> the axe or not: is Bob standing in front of Alice or behind her?</p> <p>The idea that Glyph and I are pursuing to replace the <code>visualize</code> method solves this problem, cuts down significantly on the grossness involved (that is, on the large amount of special-case code), and shifts what remaining grossness there is to a more suitable location - the presentation layer (where, so far as I can tell, you probably do want a lot of special-case code because deciding exactly how best to present information to people is really just a lot of special cases).</p> <p>Perhaps by now you're wondering how this new idea works.</p> <p>As I've mentioned already, the core of the idea is to consider the full path between the player taking the action and the objects (direct and indirect) that end up being targets of the action.</p> <p>An important piece of the new implementation is that instead of just collecting the direct target and then acting on it, we now collect both the direct target and all of the indirect 
targets as well. We also save the path to all of these targets. So, whereas before <code>resolve_target</code> (the method responsible for turning <code>u"something"</code> into some kind of structured object, probably an object from the simulation graph) would just return the object representing Alice, it now returns the path from Bob to Alice, the path from Bob to Alice's hat, the path from Bob to Alice's axe, the path from Bob to the chair Alice is sitting in, and so forth.</p> <p>With all this extra information in hand, the presentation layer can spit out something both nicely formatted <em>and</em> correct like:</p> <blockquote>Alice is here, wearing a fedora, holding one hand behind her back. </blockquote> Or, should it be more appropriate to the state of the simulation: <blockquote>Alice is here, her back turned to you, wearing a fedora, holding an axe. </blockquote> <p>Of course, we haven't implemented the "holding something behind your back" feature of the "holding stuff" system yet so Imaginary can't actually reproduce this example exactly. And we still have some issues left to fix in the branch where we're working on fixing this bug. Still, we've made some more good progress - both on this specific issue, on documentation to explain how the core Imaginary simulation engine works, and on developing useful idioms for simulation code that has yet to be written.</p>Jean-Paul Calderone on Divmod Imaginary<p>Recently <a href="">Glyph</a> <a href="">Zork</a> or <a href="">CircleMUD</a> or a piece of <a href="">interactive fiction</a>. </p> <p>The last time I worked on Imaginary I was trying to understand <a href="">a bug that I thought was in the new "obtain" system</a>. I made some progress, mostly on <a href="">documenting how the "obtain" system works</a>,. </p> <p>So it is very nice to have Glyph participating again (<a href="">he did write the "obtain" system in the first place</a>, after all). 
</p> <p>"But wait," I hear you ask, "what is this obtain thing?" </p> <p. </p> <p. </p> <p>Put another way, the world would be described to Bob like this: </p> <blockquote> You are in a room. Alice is here. A fedora hat is here. </blockquote> <p>When a more correct description would be more like this: </p> <blockquote> You are in a room. Alice is here. She is wearing a fedora hat. </blockquote> <p. </p> <em>using</em> obtain rather than the implementation of obtain itself. This was great because it looks like the real culprit is <em>mostly</em> that application code. It's getting back lots of useful results and then misinterpreting them. </p> <p>The misinterpretation goes something like this. </p> <ul><li> To describe Bob's surroundings to him, the "look" action does an "obtain" call on Bob's location. It specifies that only results for visible things should be returned from the call. </li><li> ). </li><li>. </li></ul> <p. </p> <p>So far we haven't implemented the solution to this problem completely. <a href="">We have a pretty good start</a>, though, involving making the "look" action more explicitly operate on paths instead of only the objects at the *end* of each path. And actually understanding all the pieces of the bug helps a lot. </p>Jean-Paul Calderone, I Fixed It<p>Got a new hard drive for my Inspiron 1545 - an SSD, at last! Turns out it was probably just in time too, as I found an unrecoverable bad block on the WD it is replacing. </p> <img src="" title="SSD / WD side-by-side" /> <p... </p> <img src="" title="drive not found" /> <p>Guess it is a problem after all. Near as I can tell, the SATA port in the Inspiron requires the drive to completely fill the bay in order to force the all the contacts to ... contact. </p> <img src="" title="drive thick/thin comparison" /> <p>Good thing the problem is that it's too thin, not too thick. I can fix this. 
</p> <div><img src="" title="drive-with-scrap" style="width: 40%" /><img src="" title="drive-with-scrap-taped" style="width: 40%" /></div> <p>Problem solved. </p> <img src="" title="yea I installed debian on it" /> <p>Thanks for the well-engineered hardware, guys. </p>Jean-Paul Calderone to December Reading List <ul><li><u><a href="">Engineering Infinity</a></u>. Robert Reed, Gwyneth Jones, Charles Stross.</li><li><u><a href="">The Year's Best Science Fiction & Fantasy, 2012 Edition</a></u>. Karen Joy Fowler, Jonathan Carroll, .</li> <li><u><a href="">Orion in the Dying Time</a></u>. Ben Bova.</li><li><u><a href="">2312</a></u>. Kim Stanley Robinson.</li><li><u><a href="">Orion and the Conqueror</a></u>. Ben Bova.</li><li><u><a href="">Orion Among the Stars</a></u>. Ben Bova.</li><li><u><a href="">Polar City Red - A Novel</a></u>. Jim Laughter.</li><li><u><a href="">507 Mechanical Movements: Mechanisms and Devices</a></u>. Henry T. Brown.</li><li><u><a href="">Ubik</a></u>. Phillip K. Dick.</li><li><u><a href="">Dawn (Xenogenesis Trilogy)</a></u>. Octavia E. Butler.</li><li><u><a href="">Marooned in Realtime (Peace War)</a></u>. Vernor Vinge.</li><li><u><a href="">Blue Remembered Earth</a></u>. Alastair Reynolds.</li><li><u><a href="">Jayber Crow: A Novel (Port William)</a></u>. Wendell Berry.</li><li><u><a href="">Animal, Vegetable, Miracle</a></u>. Barbara Kingsolver.</li><li><u><a href="">Bringing It to the Table: On Farming and Food</a></u>. Wendell Berry.</li></ul>Jean-Paul Calderone Scheduled Events Unreliable in Twisted<p>Someone recently asked a question about whether <a href="">reactor.callLater</a> could be used to precisely schedule events in the very distant future. 
The person gave an example of scheduling now - December, 2012 - an event to run at a particular time in December 2014 - a date two years in the future.</p> <p.</p> <p>The asker knew this, though, and was only curious about whether there were any intrinsic scheduling limitations related to very distant times. The answer is that there is a limitation, and I've already alluded to it in the paragraph above.</p> <p>Twisted uses Python floats to represent time. The precision available to floating points declines as the values themselves get larger (this is why they're called "floating points"! The decimal point can move around).</p> <p>If you asked for an event to run at 900719925474099<b>3</b> seconds after the epoch, it would most likely run at 900719925474099<b>2</b> seconds after the epoch instead.</p> <p.</p>Jean-Paul Calderone to August Reading List<a href="">Down on the Farm</a>. Charles Stross.<br/><a href="">Children of the Sky</a>. Vernor Vinge.<br/><a href="">Toast</a>. Charles Stross.<br/><a href="">The Etched City</a>. K. J. Bishop.<br/><a href="">American Fascists</a>. Chris Hedges.<br/><a href="">Why We Get Fat</a>. Gary Taubes.<br/><a href="">The Forever War</a>. Joe Haldeman.<br/><a href="">The Accidental Time Machine</a>. Joe Haldeman.<br/><a href="">Cyborg Assault</a>. Vaughn Heppner.<br/><a href="">Planet Wrecker</a>. Vaughn Heppner.<br/><a href="">Star Fortress</a>. Vaughn Heppner.<br/><a href="">The Restoration Game</a>. Ken Macleod.<br/><a href="">The Night Sessions</a>. Ken Macleod.<br/><a href="">Redshirts</a>. John Scalzi.<br/><a href="">The Ghost Brigades</a>. John Scalzi.<br/><a href="">Old Man's War</a>. John Scalzi.<br/><a href="">The Last Colony</a>. John Scalzi.<br/><a href="">The Year of the Jackpot</a>. Robert Heinlein.<br/><a href="">The Vegetarian Myth</a>. Lierre Keith.<br/><a href="">Raising Pastured Pigs</a>. Samantha Biggers.<br/><a href="">The China Study</a>. T. Colin Campbell, Thomas M. Campbell II.<br/><a href="">Orion</a>. Ben Bova.<br/><a href="">Vengeance of Orion</a>.
Ben Bova.<br/><a href="">Heir to the Empire</a>. Timothy Zahn.<br/><a href="">Dark Force Rising</a>. Timothy Zahn.<br/><a href="">The Last Command</a>. Timothy Zahn.<br/><a href="">The Whole Soy Story</a>. Kaayla T. Daniel.<br/><a href="">Stop Alzheimer's Now!</a>. Bruce Fife.<br/><a href="">Pastured Poultry Profits</a>. Joel Salatin.<br/><a href="">Greener Pastures on Your Side of the Fence</a>. Bill Murphy.<br/>Jean-Paul Calderone Up, San FranciscoI will be visiting San Francisco in December and January. To my various acquaintances in the bay area, let's get together and do something fun. To anyone interested in Python or Twisted, my <a href="">recently formed company</a> would be happy to offer on-site training or other consulting services while I'm in the area. <a href="mailto:info@futurefoundries.com">Drop us a line</a>. Jean-Paul Calderone support contracts for Twisted<p>Last week I posted a survey to gauge interest in commercial Twisted support contracts to the Twisted mailing list:</p> <blockquote><a href=""></a></blockquote> <p>If you think this might be applicable to your interests and you didn't see the initial posting or haven't had a chance to respond yet, please take a minute or two to fill it out (it's very short, no essay questions at all). Thanks!</p>Jean-Paul Calderone Bug ReportsAlmost ten years after <a href="">jwz coined "CADT"</a>, they're still <a href="">at</a> <a href="">it</a>. Way to keep the dream alive, guys.Jean-Paul Calderone Project: Crop Planning Software<p> <a href="">Elsewhere</a>, I wrote about <a href="">the beginning of growing season</a> and some software I've written to help us out this year. <a href=""> The software</a>. </p> <p> What the software does at this point is this: </p> <ul> <li>. </li> <li>). </li> <li>). </li> <li> Generate a schedule of when to seed each variety, when to expect to transplant them outdoors, and when to harvest them. 
The schedule can be displayed as a list or it can be generated as <a href="">an iCalendar file</a> and loaded into something like Google Calendar or Apple's iCal. </li></ul> <p>. </p> <p>. </p> <p>. </p> <p> Everything is written in Python, of course. I used <a href="">vobject</a> to generate the iCalendar output, with pytz to help with the timezone math (oh, timezones, how I loathe you). A pleasantly small amount of code suffices for that. </p> <p> I used <a href="">matplotlib</a> and <a href="">dateutil</a>. </p> <p> For the <a href=""> highly tedious structure definition</a>, I used a class from <a href="">Epsilon</a>. <code>epsilon.structlike.record</code> is a lot like the Python standard library <code>collections.namedtuple</code>. Any time I used the latter, though, I remember how it is implemented and I feel bad. So I stick to the former. </p> <p> I also used Twisted and html5lib to write <a href=""> a simple web scraper</a> <em>organization</em>. I asked Johnny's if they could make this information available in any sort of structured format and they told me they couldn't. Maybe I should sell it back to them? </p> <p>. </p> <p> I don't expect this to be useful to a lot of people. In case this sort of tool does appeal to you, though, I'd love feedback (particularly from people more experienced with planning and executing these kinds of agricultural tasks) - but no feature requests, please :) </p>Jean-Paul Calderone Up Branch CheckoutsSince Twisted development typically involves at least one branch per ticket, a Twisted developer can end up with a lot of branches checked out. For example, this morning I had 177 Twisted branches checked out on my laptop. Many of these were branches that I contributed code to, and perhaps even merged into trunk myself when they were complete. I could probably have deleted them at that point, but I usually can't be bothered. Besides, I put everything I have into the branch itself, by the time I'm merging it I'm <i>done</i>. 
Other branches are ones I've done code reviews on for other developers. I don't keep track of when these get merged into trunk as closely, since typically someone else is going to do those merges.<br /><br />The incremental cost of another Twisted branch is pretty minimal. A few more megs used on my hard drive is barely noticeable. The <i>aggregate</i> cost can get pretty high though (seven GB for the 177 branches I had this morning). At some point this can cause problems.<br /><br /.<br /><br />So I use <a href="">cleanup-local.py</a>).<br /><br /.<br /><br />Here's a brief snippet from today's run:<br /><br /><blockquote class="tr_bq">Found password-comparison-4536-2 for ticket(s): 4536<br />Status of 4536 is assigned<br />Found pb-chat-example-4459 for ticket(s): 4459<br />Status of 4459 is closed<br />Removing closed: pb-chat-example-4459<br />Found plugin-cache-2409 for ticket(s): 2409<br />Status of 2409 is closed<br />Removing closed: plugin-cache-2409<br />Found poll-default-2234-2 for ticket(s): 2234<br />Status of 2234 is closed<br />Removing closed: poll-default-2234-2</blockquote><br />Jean-Paul Calderone About Twisted at PyCon 2012<p.</p> <p>I am a long time core Twisted developer with real world experience building maintainable, scalable systems with Twisted. I've also presented similar introductory Twisted tutorials several times in the past, letting me learn the common sticking points and teaching approaches to help overcome them.</p> <p>Check out <a href="">the tutorial's page on the PyCon 2012 website</a> for details about what will be covered. Come learn how to leverage Twisted and Twisted-based libraries to their fullest extent!</p>Jean-Paul Calderone - December Reading List<ul><li> <a href="">The History of the Peloponnesian War</a>. Thucydides. (Books 2 - 8) </li><li> <a href="">Root Cellaring: Natural Cold Storage of Fruits & Vegetables</a>. Mike and Nancy Bubel. </li><li> <a href="">The Fatal Shore: The Epic of Australia's Founding</a>.
Robert Hughes. </li><li> <a href="">Special Topics in Calamity Physics</a>. Marisha Pessl. </li><li> <a href="">The Dirty Life: A Memoir of Farming, Food, and Love</a>. Kristin Kimball. </li><li> <a href="">The Worst Hard Time: The Untold Story of Those Who Survived the Great American Dust Bowl</a>. Timothy Egan. </li><li> <a href="">The Children of the Sky (Zones of Thought)</a>. Vernor Vinge. </li><li> <a href="">The Gathering Storm (Wheel of Time)</a>. Robert Jordan and Brandon Sanderson. </li><li> <a href="">Oresteia: Agamemnon, The Libation Bearers, and The Eumenides</a>. Aeschylus. </li><li> <a href="">Towers of Midnight (The Wheel of Time)</a>. Robert Jordan and Brandon Sanderson. </li><li> <a href="">The Clouds</a>. Aristophanes. </li><li> <a href="">Saturn's Children</a>. Charles Stross. </li><li> <a href="">The Fuller Memorandum (A Laundry Files Novel)</a>. Charles Stross. </li><li> <a href="">Scratch Monkey</a>. Charles Stross. </li></ul>Jean-Paul Calderone Don't Use Buildbot EC2 FeaturesI just noticed that Buildbot spun up one EC2 instance 31 days ago and another one 14 days ago and left them both running.Jean-Paul Calderone Followup<p:</p> <a href=""><img src="" /></a> <p>You can see the oats we put in at the beginning of September in the garlic bed on the left side there. They didn't grow as much as I had hoped:</p> <a href=""><img src="" /></a> <p>Perhaps due to some nutrient or mineral deficiency. To rectify that (and based on a soil test), we spread a number of amendments, starting with greensand to provide potassium:</p> <a href=""><img src="" /></a> <p>We also spread rock phosphate (for phosphorus) and pelletized lime for calcium and to adjust the pH to be less acidic. And, importantly, compost - about 1 cubic yard over the entire bed (with which task my dad helped us out):</p> <a href=""><img src="" /></a> <p>As you can see, we just left the oats in place. They are not cold hardy and will die soon enough without any help.
With the bed thusly prepped, we began breaking up our "seed" garlic:</p> <a href=""><img src="" /></a> <p>Garlic is most often grown by sowing cloves in the autumn for harvest the following summer. The winter encourages the clove to split and grow into a new bulb. We planted four varieties of garlic, but mostly Inchelium, a softneck variety.</p> <a href=""><img src="" /></a> <p>These seed bulbs each had around a dozen cloves in them.</p> <a href=""><img src="" /></a> <p>We planted the largest undamaged cloves. We also planted three varieties of hardneck garlic. Compared to the inchelium, these all look pretty similar to each other. Here's some Siberian Red:</p> <a href=""><img src="" /></a> <p>The hardneck varieties have bulbs with fewer, larger cloves. After we broke up the cloves, we planted them! While one of us dropped cloves in pre-marked locations, the other followed behind and planted them.</p> <a href=""><img src="" /></a> <p>The cloves are planted right-side-up about one inch deep. Finally we mulched them with straw to even out temperature variations and retain moisture.</p> <a href=""><img src="" /></a> <p>Now the garlic sits tight until next year.</p>Jean-Paul Calderone Bed Prep<p>Over the long weekend, Jericho and I made a garden bed. We picked a plot a few minutes walk from the new orchard and started by mowing a 100' x 4' area.</p> <img src="" /> <p>That's Lucy, my mom's lab, down near the end. Next we cut sod with shovels.</p> <img src="" /> <p>I dug too, but Jericho doesn't take as many pictures as I do.</p> <p>After that, we flipped sod. 
First one row of it:</p> <img src="" /> <p>And then the next row:</p> <img src="" /> <p>After all the sod was out, we dug a little more.</p> <img src="" /> <p>Then we put the sod at the bottom of the hole, upside-down, where it will hopefully die and contribute organic material to the soil.</p> <img src="" /> <p>And then we shoveled that dirt off the tarp, back into the hole on top of the sod, and raked it flat.</p> <img src="" /> <p>And again.</p> <img src="" /> <p>Until all the dirt was back in the bed.</p> <img src="" /> <p>This will be a garlic bed. We'll plant the garlic in October. Until then, we put in a quick cover crop of oats.</p> <img src="" /> <p>Then I rubbed some dirt on my shirt to make it look like I helped too.</p> <img src="" /> <p>A few hours after we finished seeding, a nice thunderstorm rolled in and watered everything for us.</p> <img src="" /> <p>If all goes well, in about a month we'll have some nice young oats to mow down before planting garlic in the bed.</p>Jean-Paul Calderone Python Software is Tedious<p>I released <a href="">pyOpenSSL 0.13</a> a few days ago. Apart from making sure it actually worked on various platforms, updating the version number, regenerating the documentation, and sending out the release announcement, I also had to upload release files to the <a href="">Python Package Index</a>.</p> <p.</p> <p.</p>Jean-Paul Calderone
TypeScript Plays Well With Others

TypeScript's key benefit is that it's able to work with existing JavaScript, including both JavaScript that's part of your project and JavaScript from other libraries that your application depends on. Being able to do so fluidly means not rewriting your entire codebase to bring it into a TypeScript application. Instead, TypeScript allows you to describe objects that are visible to your app but may have been loaded outside your script. This includes JavaScript libraries like jQuery, Backbone, and AngularJS that provide utility functionality crucial to your application.

Let's take jQuery as an example. The jQuery library makes a $ symbol available at runtime that lets developers access much of the jQuery functionality. We could describe the full jQuery API to the compiler, but as a first step, all we need to do is tell the compiler that this symbol will be visible at runtime:

    declare var $: any;

This tells the compiler that the $ symbol is not being created by our application, but rather by an external script or library being run before our script is run. It also says that the type of this variable is any. With this, the compiler will allow you to access any member you wish on this variable without complaint. This enables you to get up and running quickly.

While this is effective to get started, it doesn't let the compiler give us the errors and auto-completion, since the compiler lacks any type information about the $ symbol. To get proper type-checking, we need to have the API of jQuery documented for the compiler. Luckily for us, volunteers have already been hard at work documenting the APIs of many JavaScript libraries, including jQuery. You can reach this repository on GitHub. To use these API documentation files, called .d.ts files, you include them with your project files or alongside the source files you pass to the compiler.
Here's an example of the .d.ts file for jQuery:

    // The jQuery instance members
    interface JQuery {
        // AJAX
        ajaxComplete(handler: any): JQuery;
        ajaxError(handler: (evt: any, xhr: any, opts: any) => any): JQuery;
        ajaxSend(handler: (evt: any, xhr: any, opts: any) => any): JQuery;
        ajaxStart(handler: () => any): JQuery;
        ajaxStop(handler: () => any): JQuery;
        // ...
    }

    declare var $: JQueryStatic;

One way to think of .d.ts files is as the equivalent to headers in a C-based language. They act to describe the API and are a companion to the library they're describing. Similarly, in TypeScript, you use the .d.ts file to inform your tooling and load the corresponding library at runtime.

Modularity in TypeScript

As applications grow larger, it becomes ever more important to have clean separation between components. Without this separation, components morph into a tangled mess of global definitions that become increasingly fragile and more difficult to maintain and extend. Modules and namespaces allow programmers to untangle the mess and create components that can be separately maintained, extended, and even replaced, fortified with the knowledge that such changes won't affect the rest of the system.

TypeScript has two kinds of modules. The first is an internal module. Internal modules help you organize your code behind an extensible namespace, moving it out of the global namespace. This example shows the earlier changeDirection example refactored to use an internal module:

    module RoadMap {
        export interface Direction {
            goLeft: boolean;
            goRight: boolean;
        }

        export function changeDirection(s: Direction) {
            if (Math.random() > 0.5) {
                s.goLeft = true;
            } else {
                s.goRight = true;
            }
            return s;
        }
    }

    var s = { goLeft: false, goRight: false };
    s = RoadMap.changeDirection(s);

The second kind of module is an external module. External modules let you treat entire files as modules. The added advantage of external modules is that they can be loaded using one of the popular JavaScript module loaders.
These module loaders do the additional service of removing the need to manually order your JavaScript files. Instead, module loaders handle ordering automatically by loading a module's dependencies first, before loading the module itself. The end result is a set of modules with clean declarations of their dependencies and a compiler-enforced separation between modules. Here is the previous example, this time refactored as two external modules:

```typescript
// RoadMap.ts
export interface Direction {
    goLeft: boolean;
    goRight: boolean;
}

export function changeDirection(s: Direction) {
    if (Math.random() > 0.5) {
        s.goLeft = true;
    } else {
        s.goRight = true;
    }
    return s;
}
```

```typescript
// Main.ts
import RoadMap = require("RoadMap");

var s = { goLeft: false, goRight: false };
s = RoadMap.changeDirection(s);
```

Notice that we now have two separate files, each importing or exporting directly from the file. The RoadMap.ts file has become a single external module denoted by the filename. In the Main.ts file, we load RoadMap.ts using an import call; this is how we describe the dependency between these two modules. Once imported, we can interact with the module just as before.

To compile external modules, we also have to tell the compiler what kind of module loader we will be using. Currently, the compiler supports two styles: AMD/RequireJS and Node/CommonJS. To compile for AMD, we pass AMD as the module type to the compiler:

```
> tsc RoadMap.ts Main.ts --module AMD
```

The resulting JavaScript files will then be specialized for AMD-style module loaders like RequireJS. For example, compiling the Main.ts above outputs:

```javascript
define(["require", "exports", "RoadMap"], function (require, exports, RoadMap) {
    var s = { goLeft: false, goRight: false };
    s = RoadMap.changeDirection(s);
});
```

You can see where the import call has become part of the list of dependencies being tracked by the module loader using the define call.
TypeScript allows us to manage our files as separate external modules, with all the benefits of using module loaders, while also getting all of the type-checking benefits we expect from working in TypeScript.

Type Inference in TypeScript

Another technique that TypeScript uses to focus types on usability is type inference. Type inference has moved from its functional programming roots to being part of most programming languages today. It is a powerful tool for keeping types useful rather than boilerplate. In TypeScript, type inference helps to infer types in some of the common JavaScript coding patterns.

The first of these patterns is to infer the type of a variable from its initializer during declaration, a technique common to many programming languages:

```typescript
var x = 3;  // x has type number
```

The next example infers types in the opposite direction as the previous code, by inferring the type left-to-right. If the variable has a declared type, we can infer information about the type of the initializing expression. In this example, the parameter x in the function on the right-hand side has its type inferred as number based on the function type provided:

```typescript
var f: (x: number) => string = function (x) {
    return x.toString();
}
```

The next example shows how the context of an expression can also help infer its type. Here, the type of the function expression is inferred because the call in which it is created can be resolved to the function declaration, allowing inference to use the type of the declared parameter:

```typescript
function f(g: (x: number) => string) {
    return g(3);
}

f(function (x) { return x.toString(); })
```

We can rewrite the previous example using lambdas to better understand how the contextual type helps maintain code readability by reducing code cruft.
```typescript
function f(g: (x: number) => string) {
    return g(3);
}

f(x => x.toString());
```

Because of the heavy use of patterns like callbacks in JavaScript, contextual type inference helps keep code simple without sacrificing the power of having the type information available.

Conclusion

TypeScript offers a lightweight, flexible way of working with standards-based JavaScript while enjoying the power that static type information provides. TypeScript's type system focuses on compatibility with existing JavaScript and is designed to require less effort to use than many statically typed languages. If you'd like to learn more about TypeScript, read up on it at the TypeScript homepage.

Jonathan Turner is the Program Manager for the TypeScript team at Microsoft. Amanda Silver is the Principal Director Program Manager for Client Platform Tools at Microsoft.
http://www.drdobbs.com/tools/introduction-to-typescript/240168688?pgno=2
On 5 May 2015 at 16:57, Sergio Fernández <wikier@apache.org> wrote:
> Hi,
>
> On Tue, May 5, 2015 at 4:39 PM, sebb <sebbaz@gmail.com> wrote:
>>
>> > One question, sebb, how is the site development organized? Do you use
>> > jira or something as any other project does? Just to do the things
>> > properly according to your guidelines.
>>
>> It's not a regular project.
>> I don't know who "owns" the code - possibly Infra or maybe ComDev.
>>
>> I have just been making the occasional fix as I notice problems.
>>
>> The site-dev and dev@community mailing lists are probably the place to
>> discuss changes.
>
> OK, then I'll stay in this thread for discussion about this.
>
> I didn't have much time today, but what I already did was implement the
> basics of how the DOAP processing could look. For the moment it is at
> until I'll get something more functional, then I'll commit it to the asf
> repo.
>
> Basically, what that simple code currently does is get all DOAP/PMC files
> and report some basics (size). You can run it by yourself executing:
>
> $ python doap.py
>
> What I can already say is that I do not understand what
> aim to represent.

This is the default location for the PMC data [1] files which provide data about the PMC. A single such file may be referenced by multiple DOAPs. E.g. all the Commons components refer to the same PMC data file.

The contents and locations of the various files are documented on the site.

[1]

> Because asfext:pmc is defined as a property in the namespace (as we
> discussed a couple of days ago), I missed the subject it refers to
> (normally it should be used <> asfext:pmc <...>). According to that usage
> of the term, I guess they actually wanted to define a class.
>
> But please, let me evolve the code a bit more to give you some basic
> tools, and then I can discuss such aspects further.
>
> Cheers.
>
> --
> Sergio Fernández
> Partner Technology Manager
> Redlink GmbH
> m: +43 6602747925
> e: sergio.fernandez@redlink.co
> w:
http://mail-archives.apache.org/mod_mbox/www-site-dev/201505.mbox/%3CCAOGo0VY98h6mpyY0DWXBBwVXvCqYKU6DhjtV5zOpMteSJih=5g@mail.gmail.com%3E
I was recently called onto a project to add features to existing applications. To me, this is one of the most challenging aspects of being a developer, because the existing application strips away much of your control. My project encompasses three applications that are similar in many ways. I quickly noticed that much of the code was redundant, since the applications shared many functions. A lot of the duplicate code was in classes, so my first step was to create a class library to reduce maintenance headaches and ease the current task at hand: adding functionality.

Class libraries

You use class libraries when you're developing any type of .NET application. The .NET Framework includes the .NET Framework Class Library, an extensive collection of thousands of types (i.e., classes, interfaces, structures, delegates, and enumerations) that aim to encapsulate the functionality of core system and application services in order to make application programming easier and faster. There are classes that you can use to manipulate the file system, access databases, serialize objects, and launch and synchronize multiple threads of execution, to name a few.

To make working with these classes easy, classes with similar functionality are grouped together in namespaces. When developing applications utilizing XML, the System.XML namespace is a necessity; it's also a class library. The .NET Framework compiles class libraries into DLLs, so the System.XML class library exists within the System.XML.dll file. If you're using Visual Studio .NET, you may include a namespace (the DLL file) in the project's references section. Once you add a reference to an assembly, you can access any of the types in its namespaces by providing a fully qualified reference to the type. Typing the fully qualified name of each .NET type quickly becomes rather tiresome, particularly for types that are nested deep within a hierarchical namespace.
You can, however, use the Imports (VB.NET) and using (C#) directives to utilize a particular namespace. This allows the compiler to resolve a particular type reference, thus eliminating the need to provide a fully qualified path.

Develop your own class libraries

In addition to the vast number of class libraries included with the .NET Framework, you can create your own. This allows you to create a collection of classes that you may use in multiple applications, and easily make them available to other developers. Additionally, it provides a central location for class maintenance, and it reduces the need to include code in multiple projects with multiple maintenance access points.

To create a class library in Visual Studio .NET, select File | New | Project | Visual C# Projects | Class Library. Select your project name and the appropriate directory using the Browse button and click OK. Visual Studio .NET adds two classes to your project: AssemblyInfo and Class1. The AssemblyInfo class file contains details of the project (assembly information) such as name, copyright, version information, and so on. Class1 is the default name given to a class, with subsequent classes incrementing the numeric suffix. You can easily rename this class (and namespace) to suit your needs. The following code listing shows the default class added to a C# class library project, minus the default comment lines:

```csharp
using System;

namespace ClassLibrary
{
    public class Class1
    {
        public Class1()
        {
        }
    }
}
```

Here's the VB.NET equivalent:

```vbnet
Public Class Class1

End Class
```

Or, you may decide to create your code using a simple text editor. Saving a file with the appropriate extension (.cs for C# and .vb for VB.NET) makes it appear as a source code file. You can use the command-line compiler for your language to create the resulting DLL file.

Example

Now you're ready to create your own class library. I'll use code from a previous .NET newsletter that demonstrated extending the System.Web.UI.Page class.
We'll create this class within its own class library. The class performs these tasks:

- Extends the System.Web.UI.Page class
- Disables caching
- Creates a hidden field called statusFlag with a value of zero
- Adds a JavaScript function to the head portion of the page
- Executes JavaScript code upon page startup/load

Listing A contains the class library code. Notice that the code is created within the BuilderClassLibrary namespace. Once you create and compile this class library, a DLL file, BuilderClassLibrary.dll, is available for use within other applications. You may use the library in other applications by adding a reference to it. You can achieve this in the reference list within Visual Studio .NET, and with the reference (/r) switch when using a command-line compiler. I'll demonstrate this momentarily.

The next step is using your class library in another application. The code in Listing B shows a basic ASP.NET Web form (the code-behind file) that takes advantage of the class library. The Web form's class is derived from the BaseClass in the class library. Since the code needs to be compiled, I use the C# command-line compiler. It uses the /out switch to tell the system where to place the output, as well as in what file. The reference switch (/r) is used to include the class library, the Web form's source code (WebForm1.aspx.cs), and the application's global file (Global.asax.cs) in the resulting DLL. View Listing C. For consistency, view the Web form's .aspx file in Listing D.

VB.NET equivalent

Up to this point, I've used C# as the language of choice, but VB.NET (or any other .NET language) could have been used. Listing E features the VB.NET equivalent for the class library.
Here's the command-line option when using the VB.NET compiler:

```
vbc /target:library /out:bin\BuilderExtendPageClass.dll /r:BuilderClassLibrary.dll Global.asax.vb WebForm1.aspx.vb
```

And here's the Web form's .aspx file when VB.NET is used in the code-behind file:

```html
<%@ Page Language="vb" AutoEventWireup="false" Codebehind="WebForm1.aspx.vb" Inherits="BuilderExtendsPageClassVBNet.WebForm1"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
    <title>WebForm1</title>
</head>
<body MS_POSITIONING="GridLayout">
    <form id="Form1" method="post" runat="server"></form>
</body>
</html>
```

The code that utilizes the VB.NET class is the same as the C# listing. That is a great aspect of using class libraries: you may build a library in VB.NET, but you can easily use it in your C# applications (and vice versa).

Simplify development

The use of class libraries allows you to better organize code to foster code reuse and ease the maintenance task. In addition, any future code changes are easier to implement when the code is centrally located.
http://www.techrepublic.com/article/simplify-net-coding-and-maintenance-with-class-libraries/
using the logging module from pythonista

I have been taken to task for the indiscriminate use of try: except: blocks to handle "edge conditions" in my code. Mea culpa. So, I tried to incorporate logging. How do I force the log file to not be the console? Here's what I did. What am I missing?

```python
import sys
import traceback, logging

logging.basicConfig(filename='log')
exception_logger = logging.getLogger('log.exception')

def log_traceback(ex, ex_traceback):
    tb_lines = traceback.format_exception(ex.__class__, ex, ex_traceback)
    tb_text = ''.join(tb_lines)
    print tb_text
    exception_logger.log(0, tb_text)

# ...

try:
    self.items[self.currentRow]['accessory_type'] = 'none'  # un-flags current selected row
except Exception as ex:  # needed for very first selection
    _, _, ex_traceback = sys.exc_info()
    log_traceback(ex, ex_traceback)
```

@polymerchm, please accept my apologies. I was not trying to denigrate your code. We have all been impressed by your substantial contributions to this forum. We are all attempting to learn together. try: except: is very Pythonic, as you say, so I do not want to discourage its use. As I said in my post, the author's ideas were a bit overboard for non-production code, but his point about a bare except: pass is an interesting one. In your example, would except TypeError: pass be sufficient for catching the error that you expect while continuing to raise all unexpected errors? I doubt that resorting to logging is necessary in your example. That being said, I can no longer get logging to work in Pythonista, and a simple logging example that used to work for me no longer does.

@ccc No offense taken. Always learning. Logging broken. Hmmm. Good thing I tested it on something simpler. @OMZ. Logging would be nice.

Interesting to know, I'll look into it... I think it might have to do with the whole exception redirection that's going on to show the error markers in the editor, but I haven't checked yet.

I was able to get a logging .out file using this example I found on Google.
```python
import logging

LOG_FILENAME = 'logging_example.out'
logging.basicConfig(filename=LOG_FILENAME,
                    level=logging.DEBUG,
                    )

logging.debug('This message should go to the log file')

f = open(LOG_FILENAME, 'rt')
try:
    body = f.read()
finally:
    f.close()

print 'FILE:'
print body
```

Pythonista will say it cannot open the file, but StaSH will open it using cat. It will also show the logs in the console area. Hope this helps. The above also works within a try: except: block and runs a statement after the except clause as well. It creates and appends to the log file.

@blmacbeth: make the extension .txt and the editor reads it just fine.

```python
import logging, sys, traceback

LOG_FILENAME = 'logging_example.txt'
logging.basicConfig(filename=LOG_FILENAME,
                    level=logging.DEBUG,
                    )

def log_traceback(ex, ex_traceback):
    tb_lines = traceback.format_exception(ex.__class__, ex, ex_traceback)
    tb_text = ''.join(tb_lines)
    logging.debug(tb_text)

logging.debug('This message should go to the log file')

list = "this is a test".split()
try:
    print list[7]
except Exception as ex:
    _, _, ex_traceback = sys.exc_info()
    log_traceback(ex, ex_traceback)

logging.debug("got past the exception")
```

Here is the log file output:

```
DEBUG:root:This message should go to the log file
DEBUG:root:Traceback (most recent call last):
  File "/var/mobile/Containers/Data/Application/3469D264-D1AC-451E-9E4A-B3E38AD33B7F/Documents/chordcalc/test/Untitled.py", line 18, in <module>
    print list[7]
IndexError: list index out of range

DEBUG:root:got past the exception
```

Two things I learned (I'm attracted to the idea of logging, but have never taken the leap) from the comments section of ccc's link:

- you could use logging.exception directly, which will include the traceback along with whatever message you include.
- if you did want to log the exception at a lower severity level, you could use the exc_info=True argument to logging.debug, etc., which will automatically log the traceback. (This could avoid the need for your own traceback logger function.)
https://forum.omz-software.com/topic/1529/using-the-logging-module-from-pythonista/1
React 16 added waves of new features, improving the way we build web applications. The most impactful update is the new Hooks feature in version 16.8. Hooks allow us to write functional React components that manage state and side effects, making our code cleaner and providing the ability to easily share functionality. React is not removing class components, but they cause many problems and are a detriment to upcoming code optimizations. The vision for Hooks is that all new components will be written using the API, resulting in more scalable web applications with better code. This tutorial will walk you through Hooks step-by-step and teach the core hook functionality by building a counter app.

An overview of hooks

Hooks provide the ability to manage state and side effects in functional components while also providing a simple interface to control the component lifecycle. The 4 built-in hooks provided by React are useState, useEffect, useReducer, and useContext.

- useState replaces the need for this.state used in class components
- useEffect manages side effects of the app by controlling the componentDidMount, componentDidUpdate, and componentWillUnmount lifecycle methods
- useContext allows us to subscribe to the React context
- useReducer is similar to useState but allows for more complex state updates

The two main hook functions that you will use are useState and useEffect, which manage the standard React state and lifecycle. useReducer is used to manage more complex state, and useContext is a hook to pass values from the global React context to a component. With the core specification updating frequently, it's essential to find tutorials to learn React.

You can also build your own custom hooks, which can contain the primitive hooks exposed by React. You are able to extract component state into reusable functions that can be accessed by any component.
Higher-order components and render props have traditionally been the way to share functionality, but these methods can lead to a bloated component tree with a confusing glob of nested React elements. Hooks offer a straightforward way to DRY out your code by simply importing the custom hook function into your component.

Building counter with hooks

To build our counter, we will use Create React App to bootstrap the application. You can install the package globally or use npx from the command line:

```
npx create-react-app react-hooks-counter
cd react-hooks-counter
```

React Hooks is a brand new feature, so ensure you have v16.8.x installed. Inside your package.json, the version of react and react-dom should look similar to the code snippet below. If not, update them and reinstall using the yarn command.

The foundation of hooks is that they are utilized inside functional components. To start, let's convert the boilerplate file inside src/App.js to a functional component and remove the content. At the top of the file, we can import useState and useEffect from React:

```javascript
import React, { useState, useEffect } from 'react';
```

The most straightforward hook is useState, since its purpose is to maintain a single value, so let's begin there. The function takes an initial value and returns an array of arguments, with the item at index 0 containing the state value, and the item at index 1 containing a function to update the value. We will initialize our count to 0 and name the return variables count and setCount:

```javascript
const [count, setCount] = useState(0);
```

NOTE: The returned value of useState is an array. To simplify the syntax, we use array destructuring to extract the elements at index 0 and index 1.

Inside our rendered React component, we will display the count and provide a button to increment the count by 1 using setCount. With a single function, we have eliminated the need to have a class component along with this.state and this.setState to manage our data.
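To build intuition for the value/setter pair that useState returns, here is a tiny toy model in plain JavaScript. This is emphatically not React's real implementation (React stores state per component instance and schedules re-renders); it only illustrates the array contract described above, and every name in it is our own invention:

```javascript
// Toy model of useState's contract: returns [value, setter].
function makeUseState() {
  const slots = [];   // persisted state across "renders"
  let cursor = 0;     // which hook call we are on in this render
  function useState(initial) {
    const i = cursor++;
    if (!(i in slots)) slots[i] = initial;          // first render only
    const setState = (next) => { slots[i] = next; }; // updates the slot
    return [slots[i], setState];
  }
  useState.reset = () => { cursor = 0; };  // simulate a re-render
  return useState;
}

const useState = makeUseState();

// First "render": the initial value is used
let [count, setCount] = useState(0);
console.log(count); // 0
setCount(count + 5);

// Second "render": the hook now returns the updated value
useState.reset();
[count, setCount] = useState(0);
console.log(count); // 5
```

The key point: the component function reruns on every render, but useState hands back the persisted value rather than the initial one.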
Every time you click the increment button, the count will increase by 1. Since we are using a hook, React recognizes this change in state and will re-render the DOM with the updated value.

To demonstrate the extensibility of the state updates, we will add buttons to increment the count by 2, 5, and 10 as well. We will also DRY out our code by storing these values in an array. We iterate over this array using the .map() function, which returns an array of React components; React will treat these as sibling elements in the DOM. You are now able to increment the count by different values.

Now we will integrate the useEffect hook. This hook enables you to manage side effects and handle asynchronous events. The most notable and frequently used side effect is an API call. We will mimic the async nature of an API call using a setTimeout function. We will make a fake API request on the component's mount that will initialize our count to a random integer between 1 and 10 after waiting 1 second. We will also have an additional useEffect that updates the document title (a side effect) with the current count, to show how it responds to a change in state.

The useEffect hook takes a function as an argument. useEffect replaces the componentDidMount, componentDidUpdate, and componentWillUnmount class methods. When the component mounts or its state updates, React will execute the callback function. If your callback function returns a function itself, React will execute this during componentWillUnmount.

First, let's create our effect to update the document title. Inside the body of our function, we declare useEffect with a callback that sets document.title = 'Count = ' + count. When the state count updates, you should see your tab title updating simultaneously.

For the final step, we will create a mock API call that returns an integer to update the state count.
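Pulling the steps together, here is a sketch of what the finished component could look like. The article's actual final listing was embedded and is not present in this text, so this is our reconstruction under stated assumptions: the names count, setCount, hasFetched, setFetch, and mockApi come from the tutorial, while the component name and the JSX layout are our guesses. Being JSX, it needs a build step (e.g. Babel, as set up by Create React App) to run:

```jsx
import React, { useState, useEffect } from 'react';

// Fake API: resolves to a random integer between 1 and 10 after 1 second
const mockApi = () =>
  new Promise(resolve =>
    setTimeout(() => resolve(Math.ceil(Math.random() * 10)), 1000)
  );

function App() {
  const [count, setCount] = useState(0);
  const [hasFetched, setFetch] = useState(false);

  // Side effect: keep the document title in sync with the count
  useEffect(() => {
    document.title = 'Count = ' + count;
  });

  // "componentDidMount"-style fetch, guarded so it runs only once
  useEffect(() => {
    const fetchCount = async () => {
      if (!hasFetched) {
        const initial = await mockApi();
        setCount(initial);
        setFetch(true);
      }
    };
    fetchCount();
  });

  return (
    <div>
      <h1>{count}</h1>
      {hasFetched
        ? [1, 2, 5, 10].map(v => (
            <button key={v} onClick={() => setCount(count + v)}>
              +{v}
            </button>
          ))
        : 'Loading...'}
    </div>
  );
}

export default App;
```

The mock API and the loading indicator are explained in more detail in the paragraphs that follow.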
We use a setTimeout and a function that returns a Promise, because this simulates the time required to wait for an API request to return, and the promise lets us handle the response asynchronously. To mock an API, we create a mockApi function above our component. It returns a promise that resolves to a random integer between 1 and 10.

A common pattern is to make fetch requests in componentDidMount. To reproduce this in our functional component, we add another useState to manage a hasFetched variable: const [hasFetched, setFetch] = useState(false). This is used to prevent the mockApi from being executed on subsequent updates. Our fetch hook will be an async function, so we use async/await to handle the result. Inside our useEffect function, we first check whether hasFetched has been set. If it has not, we call mockApi, call setCount with the result to initialize our value, and then flip our hasFetched flag to true.

Visual indicators are essential for UX and give your users feedback on the application's status. Since we are waiting for an initial count value, we hide our buttons and display "Loading…" text on the screen while hasFetched is false.

This results in the following behavior:

The final code

Wrapping Up

This article introduced hooks and showed how to implement useState and useEffect to simplify your class components into simple functional components. While this is a big win for React developers, the power of hooks is fully realized with the ability to combine them to create custom hooks. This allows you to extract logic and build modular functionality that can seamlessly be shared among React components without the overhead of HOCs or render props. You simply import your custom hook function, and any component can implement it. The only caveat is that all hook functions must follow the rules of hooks.

Author Bio

Trey Huffine, a JavaScript fanatic.
He is a software engineer in Silicon Valley building products using React, Node, and Go, passionate about making the world a better place through code.
https://hub.packtpub.com/getting-started-with-react-hooks-by-building-a-counter-with-usestate-and-useeffect/
Help calculate date

I have the following fields:

- "rec": this field is the automatic system date
- "toma": a date that is entered
- "edad": an integer that is entered
- "falla": a date that should be loaded automatically, fulfilling the following condition: if "falla" < "rec" then "falla" = "rec", otherwise "falla" = "toma" + "edad"

You can use a compute field for 'rec' and 'falla', and use datetime to get the current date. Write it like this:

```python
import datetime

@api.one
def _get_current_date(self):
    self.rec = datetime.datetime.now()

rec = fields.Datetime(compute='_get_current_date')
toma = fields.Datetime()
edad = fields.Integer()  # edad is an integer per the question

@api.one
def _get_falla(self):
    # Assign to falla using your condition
    pass

falla = fields.Datetime(compute='_get_falla')
```

Note, I didn't test the above code.

"toma" + "edad": you are trying to add an integer to a date. Do you want to add the years to the date?
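In case it helps, the condition itself can be sketched in plain Python, independent of the Odoo fields machinery. The function name is our own, and since the question never says which unit "edad" represents (the last comment suspects years), days are assumed here purely for illustration:

```python
from datetime import date, timedelta

def compute_falla(rec, toma, edad, falla=None):
    """Sketch of the requested rule, outside the Odoo ORM.

    rec   -- the automatic system date
    toma  -- the entered date
    edad  -- the entered integer; treated here as a number of days,
             since the question leaves the unit ambiguous
    falla -- the previous value of falla, if any
    """
    if falla is not None and falla < rec:
        return rec
    return toma + timedelta(days=edad)

print(compute_falla(date(2015, 5, 10), date(2015, 5, 1), 4))  # 2015-05-05
```

Inside an Odoo compute method, the same logic would assign the result to self.falla instead of returning it.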
https://www.odoo.com/forum/help-1/question/help-calculate-date-108661
Back To The Future – Build your Flux capacitor

Here we present a modern take on the equipment that enabled time travel in Back to the Future. Those with a few "decades" on their shoulders will know Back to the Future well (and so will younger readers, who can discover it thanks to streaming services such as YouTube): the famous movie, directed by Robert Zemeckis with no less than Steven Spielberg as executive producer, that grew into a three-part saga. Its great appeal lies in the bizarre, futuristic adventure of a teenager, Marty McFly (played by Michael J. Fox), who manages – with the help of an eccentric-looking scientist, a certain Emmett L. "Doc" Brown (played by Christopher Lloyd) – to travel into the future (and the past as well) by means of a time machine.

The vehicle on which Marty travelled, created by the Doc, became a cult object: the DeLorean DMC-12, chosen by the professor, who was sure that "if you're going to build a time machine into a car, why not do it with some style?" This new time machine had to receive 1.21 gigawatts of electric power in order to make the jump: an enormous amount, fed to the "flux capacitor" placed behind the DeLorean's seats. Initially, the flux capacitor was powered by plutonium, while in the follow-up movies it was enough to toss some garbage into the Mr. Fusion conversion device to start the nuclear fusion and develop the power needed. In the movie, after the time circuits were activated and the destination date and time were set, McFly had to start the engine and accelerate to 88 miles per hour (141.6 km/h) so that the flux capacitor could activate. Once the time jump was performed, the DeLorean reappeared in the same physical position on Earth it had occupied at departure.
"Back to the Future" fans will remember that the panel showing the flux capacitor's state consisted of a three-pointed star whose points were lamps, connected by three cables with the typical rubber-tube insulation. They will also remember that in the first installment the date chosen for the experimental journey into the future – decisive for the fate of the two main characters – was October 21, 2015: yes, that's right, a day of this current month. Back to the Future fed the curiosity and imagination of many fans, so for this date many commemorative events have been organized, some of them drawing suggestive comparisons with what the movie proposed for Marty and Doc's destination scenario. Examples include video communication (which today can be rendered with Skype) and flying cars (also speculated about in movies such as Minority Report) – which, unfortunately for the screenwriters and fortunately for us, still do not exist… But after all, during the 1980s (and even before), progress led the most optimistic directors and screenwriters to dream of an evolution far faster than the one the laws of physics could and did impose. Back to the Future was not the only futuristic diversion: the conquest of the Moon (and of space more generally) led cinema and television to propose series such as Space: 1999 (in which the Moon, imagined as a depot for radioactive waste and home to the 300 inhabitants of the futuristic Moonbase Alpha, is blasted out of Earth orbit into an endless journey through space), Star Trek, and Star Wars.
The fact that the first time jump in Back to the Future has October 21, 2015 as its destination made us surrender to the enthusiasm currently overwhelming fans of the trilogy, and it stimulated the Maker in us – a part everyone has inside. From passion to idea is a short step, and turning the idea into practice was just as fast: a few hours in the workshop and the project proposed in these pages was born. It is a modern take on the flux capacitor, and with it we also want to pay homage to the great dream Robert Zemeckis shared with "Back to the Future" fans, and to the passion raised by a time-machine version more realistic than the frankly unlikely ones proposed in other movies. This was surely because, in those years, electronics made us believe what to some extent actually happened: that technology would create what had once been impossible. In a certain way the appeal of electronics lies exactly here; unlike other disciplines, being something that cannot be seen or immediately explained, it invites us humans to entrust our dreams and hopes to it (unlike mechanics, for example, which is more immediate and tangible, and therefore lends itself less to imagining space-age creations).

Our project

To simulate the flux capacitor device that once cast the DeLorean through time, we use the now ubiquitous Arduino Uno which, with a dedicated sketch loaded (and, this time, with no need for any shield), drives three strips of 8 NeoPixel LEDs each. The strips are managed in parallel on a single Arduino line, which in our case can easily be changed by specifying it in the sketch; the communication is one-directional and manages a group of LEDs – 24 in our case. The connections of the set are illustrated in these pages in the wiring diagram.
Before continuing, it is worth spending a few words on the NeoPixel technology, since it provides “smart” RGB LEDs with an onboard controller. They can be easily integrated in the Arduino environment, thanks to the libraries that Adafruit has made freely available. A distinctive trait of NeoPixel LEDs is that they can be connected in cascade, the data line passing from one to the next. The price to pay, however, is that beyond a certain number of LEDs the management speed must be considerably reduced; because of that, if you need matrices showing fast graphics, you must use many lines with few LEDs on each. This kind of limitation does not concern our project. Each RGB LED can be individually managed by means of a dedicated command included in the serial string and can produce 256 levels for each base colour, for a total of 16,777,216 colour combinations. In practice, NeoPixel is a solution that integrates a driver and its RGB LED in a single SMD package, allowing direct, LED-by-LED control. The data channel used for the communication with the NeoPixel LEDs, and thus with the strips, is similar to a 1-Wire one. NeoPixel LEDs are powered at 5 volts; the communication takes place at a maximum of 800 kbps. For each LED strip the refresh frequency can be set at leisure, in order to make certain light effects flicker-free. In our case, the LED scan frequency is 400 Hz per strip. Further strips may be connected in cascade or in parallel to create various effects, but such a configuration does not concern us here. Keep in mind, however, that the more strips are connected to a single data channel, the more the refresh frequency will be limited (given the maximum allowed data rate).
Briefly, the refresh frequency, and thus the on/off switching speed of the single LEDs, is inversely proportional to the number of LEDs to manage. The NeoPixel command protocol sends three bytes in a 24-bit string, each of them containing the lighting level of one base colour (the eight bits of green first, then those of red, and finally those of blue). Let's analyze, therefore, the strip's circuit diagram. The extreme simplicity of the design is obvious: each smart LED is connected in cascade, the data line entering terminal DI and leaving from DO, which repeats the data. The power source is 5 volts (the strip's voltage) and can be drawn from Arduino's 5V contact, given that the current absorption of each strip does not reach 200 mA, and that the three colours of the NeoPixel LEDs are lighted alternately. The ground reference for power and data (there is only one, on the strip's G contact) is always Arduino's, and goes to the GND of the board. The many capacitors placed on the power rail are needed to filter the pulses created on the tracks by the LEDs' absorption when they light up. This is necessary because the diodes' supply is pulsed at high frequency, and otherwise the noise (in the end, small voltage drops concurrent with the lighting of the single LEDs) could interfere with the proper operation of the Arduino. Back to the Arduino now: beyond the board and the three strips in parallel, we connected a button, needed to choose among the light effects provided by the sketch. The button is normally open and is connected between Arduino's pin 6 and ground (the pull-up resistor of the corresponding ATmega pin is enabled in software, saving us an external resistor and simplifying the wiring).
Everything is powered via USB, thus from a PC, but it is also possible to power the Arduino through its dedicated jack; in this case a power supply with an output voltage no greater than 7.5 V is advised, so as not to stress the Arduino's internal regulator too much. Once powered, the Arduino loads the sketch and periodically checks the button's state; at the same time it starts the default light effect, with the LEDs lit white and moving from the periphery to the center, all three strips converging in sync. Pressing the button once makes all the LEDs play the same pattern in red, another press does the same in green, and a further press repeats the game in blue. Pressing the button again produces more light effects, for a maximum of 10 in total. Among these you will find an effect we created that faithfully reproduces the movie's flux capacitor. Once the tenth has been reached, it restarts from the default one.

Practical Realization

Since we wanted to replicate the panel of the movie's flux capacitor as faithfully as possible, our project had to have, in addition to the electronics, mechanical parts looking as much as possible like the original ones. The switchboard in the movie was the typical metal one with a glass window and rubber gasket, so in its place we made a “fake” one from a cardboard box painted grey. We then applied a thick acetate sheet with a fake rubber gasket, 3D-printed in black PLA on our 3Drag printer. You could use a plastic box instead, but you would still have to paint it grey.
The strips have been arranged in the shape of a three-pointed star and applied to a false bottom made of corrugated cardboard painted black, while the Arduino has been mounted behind it (between the false bottom and the bottom of the box), secured with spacers hot-glued to the bottom. To make the emitted light more uniform, and more similar to the tubular discharge lamps used in Back to the Future, we inserted each strip into a transparent plastic sheath obtained from a piece of clear pipe (of the kind used for watering), with a diameter slightly greater than the strip's width. You could use a clear, smooth pipe, a translucent one, or one with a machined surface. The strips' connection wires come out of a hole in the center of the star (at least the real ones, that is, the three that go to the Arduino). They can be made from pieces of three-conductor ribbon wire, connected in parallel and ending in three pins (or Arduino jumpers) inserted into the Arduino's expansion connectors, in the positions indicated by the wiring diagram. The wires applied to the other end of each strip are purely for scenery (they are completely fictitious) and simulate the wires at the ends of the discharge lamps found on the panel seen in the movie. Each of them can be secured by means of (red) pipe insulators, taken from the spark plug boots of a petrol engine, fitted on metal screws tightened onto round 40 mm insulators, 3D-printed and glued to the black cardboard. The fake wires, which on the original machine in the movie would carry the high voltage, can be made by painting transparent rubber pipes (6 to 8 mm in diameter) yellow, taking care to feed them through the cardboard.
The sketch

To obtain the light effects, the sketch that we make available on our website (along with the other project files) has to be loaded onto the Arduino by means of the IDE, over a USB connection to the computer. The sketch uses Adafruit's NeoPixel library, included at the beginning (before pins and variables are defined) with the following line:

#include <Adafruit_NeoPixel.h>

Right after that, a pin is assigned for the button used to select the light effects (pin 6, in this case), with the following instruction:

#define BUTTON_PIN 6

The communication with the LED strips is assigned to digital I/O pin 5 (D5) via the following instruction:

#define PIXEL_PIN 5

Thus, if you wish to change the Arduino line (for example because you mean to use D5 for other purposes), edit this line in the Arduino IDE, writing the desired pin number in place of the 5, then save the sketch and load it into the Arduino again. Finally, the sketch defines the number of LEDs to be driven on each strip, which amounts to 8:

#define PIXEL_COUNT 8

At this point the firmware can start: it manages both the reading of the button in the loop and the display on the LED strips (obtained by means of a switch/case structure and the Adafruit library).

From the celluloid to the reality

As it often happens when one dares to forecast the world of tomorrow in literature or film, in Back to the Future too the director and the screenwriter created scenes proposing their vision of the time to come, and of the innovative things that would come with it.
But, just as in Space: 1999 (the British series, which theorized that by 1996 we would already have a lunar base inhabited by humans, while today we barely have the ISS orbiting the Earth), in Back to the Future too we saw things that amazed us and that in reality are “yet to come”. It is evocative to list what came true and what didn't. There certainly are things today that the movie forecasted and that we find in the real world. The first is video communication, enabled by video chat services such as Skype; the second is security systems based on biometric parameters, that is, identification technologies based on fingerprint recognition, face shape and iris recognition, and so on. The third is flat screens (LCD, OLED, etc.) and multivision or, if you prefer, PIP (Picture in Picture) technology, the same multiple-view technology used in video surveillance. The fourth invention is flexible displays, like the panoramic ones of the most modern curved-screen TVs and the OLED screens of smartphones such as the Samsung Galaxy. And how could we forget tablets, a fifth prediction from Back to the Future? To them we may add video glasses, an almost dreamlike vision of Google Glass. 3D holograms, proposed by other movies as well (for example Total Recall, starring Arnold Schwarzenegger), are now a possibility thanks to holographic lasers. The eighth innovation is video games that, instead of a joystick, use a gesture recognition system: think of the Microsoft Kinect, the Wii, and other systems with wearable sensors. A ninth prediction that came true is Slamball, a team sport inspired by basketball.
It is distinguished by the four trampolines on a Slamball court, placed under each basket, which let the players amplify their jumps and make slam dunks; in other words, a sort of acrobatic basketball. In tenth place we find high-tech clothing, of which a forerunner is the outfit worn by McFly: special fibers, sensors and actuators that let it adapt to the body and report the condition of the wearer, in the perspective of wearable electronics and the IoT. The robotic dustbins that chase the Doc in one of the funniest scenes in the movie can be compared to the street-cleaning robots introduced a while ago in various research centres (for example, by the Sant'Anna School in Pisa). A twelfth prediction that came true concerns camera drones: in Back to the Future, small aircraft chase the news, film events and broadcast them (in the movie they also film a trial at the court). Nowadays drones are very much in use, and amateur multicopters fitted with a video camera for aerial photography are very popular. The disappearance of the LaserDisc (a forerunner, albeit bigger, of the DVD) was announced to Marty McFly: LaserDiscs were used in video jukeboxes and remained on sale until 1998. In 2015, Marty and Jennifer McFly had a house in which everything was connected and could be commanded; this is something that came true, thanks to the ever-growing appeal of home automation, smart technologies and the IoT. Something similar can be said of the HomeChat system presented by LG at the last CES in Las Vegas, which lets you exchange messages with household appliances as if they were people. Finally, the hoverboard: the flying skateboard of Back to the Future, akin to a hovercraft, could really go on sale by the end of this year. So it was promised by the company Haltek Industries.
Most of the new Windows Mobile devices include a GPS receiver as part of the standard configuration. One problem, however, is that of the repeated "cold start." Presumably to save battery life, the GPS receiver is turned off when it is not being used. Unlike standalone GPS devices, mobile GPS chipsets do not save data when they are powered off, requiring a "cold start" each time they are used. This means up to 10 minutes of keeping the phone motionless until it has locked on to the satellites. Windows Mobile 5 and 6 Standard / Smartphone editions do not provide user-accessible configuration options to change this. However, if the GPS remains turned on, even after losing its fix (e.g. by going inside) it will be able to re-acquire its location within seconds of being placed in an area that has a signal. Also, once locked on to a signal, the receiver is able to hang onto it even when moving into areas where it would not be able to lock on from a cold start. I originally dealt with this annoyance by leaving Google Maps running all the time in the background. This solution was imperfect, since it used a lot of memory and CPU, as well as downloading map data from the Internet, which is quite expensive on many mobile data plans. I instead designed this utility to run in the background, keep the GPS open, and poll its status at a user-defined interval. This program is also useful if you want to quickly test your GPS to make sure it is configured correctly and/or has a signal. The library used in this app is an open source sample provided for free with the Windows Mobile 6 Standard SDK. It encapsulates the API hooks, allowing quick and easy access to the phone's GPS using C# managed code. I have included the necessary source for the library; if you have the Windows Mobile 6 Standard SDK, those files can also be found at "\program files\Windows Mobile 6 SDK\Samples\Smartphone\CS\GPS".
Add the whole folder to your project (minus the demo app), and add:

using Microsoft.WindowsMobile.Samples.Location;

(or the VB equivalent) to the classes that require GPS access. One of the reasons that unnecessarily complex solutions have been posted here and elsewhere is: THE WINDOWS MOBILE 6 SDK LIBRARY DOES NOT WORK PROPERLY IN THE WINDOWS MOBILE 6 EMULATOR. HOWEVER, IT WORKS PERFECTLY ON AN ACTUAL PHONE. A bunch of NMEA files are included with the SDK to simulate navigation in the emulator; when used with the "FakeGPS" driver (for testing GPS apps in the emulator), the latitude and longitude are alternately invalid or ridiculous (near the South Pole). There is an MSDN blog somewhere apologizing for this goof and giving a possible fix (an equation to convert decimal degrees to standard latitude-longitude, which does absolutely nothing to solve the problem). That said, use the library. It makes getting GPS data easier than opening a text file, as you will see in the code below. The problem is with the emulator. My advice would be: in the emulator, test with simulated latitude and longitude fed in however you see fit (i.e. an array of values, a text file, etc.), then wire your position-getter methods up to the library and deploy to an actual GPS-enabled device. It will function as expected. The project that accompanies this article, while small and simple, demonstrates the use of the core methods that you will actually use when building GPS-enabled applications.

Start the GPS:

Gps g = new Gps();
g.Open();

Determine if the GPS is ready and knows its location:

if (g.GetPosition().LatitudeValid)
{
    // Has position: do something with the data
}

Get latitude / longitude:

double latitude = g.GetPosition().Latitude;
double longitude = g.GetPosition().Longitude;

Stop the GPS:

g.Close();

Knowing all of this, one can easily build a decent application, ready for beta testing by normal users, in under 4 hours.
The code for this app is contained in "form1.cs" and its associated files. The other source files bundled with the project are the GPS objects provided by Microsoft. The app is so simple as to be trivial: it runs entirely in the code behind form1, the main window. When the program starts, the user is given an option to turn on the GPS and start polling it at regular intervals. Here is how the device is started and stopped:

public bool isTurnedOn = false;  // tracks state
public int pollInterval = 5;     // keep-alive interval, in minutes
public Gps gps;                  // the phone's internal GPS

public Form1()
{
    InitializeComponent();
}

private void Form1_Load(object sender, EventArgs e)
{
    gps = new Gps(); // Create the handle, but don't turn it on yet.
}

private void mnuTurnOn_Click(object sender, EventArgs e)
{
    if (!isTurnedOn) // Turn on GPS
    {
        try
        {
            isTurnedOn = true;
            mnuTurnOn.Text = "Turn Off";
            gps.Open();
            timer1.Interval = pollInterval * 60 * 1000;
            UpdateStatus();
            timer1.Enabled = true;
        }
        catch (Exception ex)
        {
            MessageBox.Show("Error: could not find GPS device");
        }
    }
    else // Turn off GPS
    {
        isTurnedOn = false;
        gps.Close();
        UpdateStatus();
        mnuTurnOn.Text = "Turn On";
        timer1.Enabled = false;
    }
}

Accessing the GPS location data is almost too easy. Here is the UpdateStatus() method, the heart of the program, which checks whether the GPS is locked on to a sufficient number of satellites and then gets its latitude and longitude:
private void UpdateStatus()
{
    if (!isTurnedOn)
    {
        lblState.Text = "GPS Turned Off";
        label1.Visible = label2.Visible = lblStatus.Visible =
            lblLastFix.Visible = lblLastUpdate.Visible = false;
    }
    else
    {
        lblState.Text = "GPS is turned on.";
        label1.Visible = label2.Visible = lblStatus.Visible =
            lblLastFix.Visible = lblLastUpdate.Visible = true;
        lblLastUpdate.Text = DateTime.Now.ToString();
        if (gps.GetPosition().LatitudeValid)
            lblLastFix.Text = "Locked on to satellites: " +
                gps.GetPosition().Latitude.ToString() + " - " +
                gps.GetPosition().Longitude.ToString();
        else
            lblLastFix.Text = "No signal";
    }
}

Seeing as the "FakeGPS" emulation does not work, on-device testing should be started early for any application built using these libraries. Building the application that I have posted for download results in an EXE and a DLL. Place both in the same folder on your mobile device, and you should be good to go. Unfortunately, most devices are shipped without the newer versions of the .NET Compact Framework. If you have the Windows Mobile SDK, there will be a CAB for each different processor. Don't worry about breaking your phone by choosing the wrong one: it will simply refuse to install. You can also download the framework by itself from Microsoft. The project is currently set to a Windows Mobile 6 Standard build target (smartphones such as the Motorola Q9H or Samsung Jack II). It will also build for Windows Mobile 6 Professional (touchscreen PDAs / Pocket PCs). It has been tested in the real world on a Motorola Q9H with standard configuration, but should work on any of the above-mentioned device categories, provided that a GPS chipset is present and configured properly. Windows Mobile 5 devices should also accept this and other applications using the intermediate GPS driver. Switch the build target in Visual Studio and look for warnings on compile: if you are using components (mainly UI ones) not allowed on 5 (i.e.
an embedded web browser control that can be manipulated by the host application), you will be notified at build time and can make the appropriate (minor) changes. Obviously, to make a production GPS application using .NET CF, you will want to create a Setup and Deployment project and explicitly specify that the .NET Compact Framework 2.0 or 3.5 be included with it, rather than making the user download it separately. The current build configuration is for 3.5, but it builds perfectly against a 2.0 target as well, without modifications. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
James found a love letter that his friend has written for his girlfriend. Being the prankster that James is, he decides to meddle with it. He changes all the words in the letter into palindromes. While modifying the letters of a word, he follows 2 rules: (a) he always reduces the value of a letter, e.g. he changes 'd' to 'c', but he does not change 'c' to 'd'; (b) he counts the minimum number of operations he carries out to convert a given string into a palindrome.

Input Format
The first line contains an integer T, i.e., the number of test cases. The next T lines will contain a string each.

Output Format
A single line containing the number of minimum operations corresponding to each test case.

Constraints
1 ≤ T ≤ 10
1 ≤ length of string ≤ 10^4
All characters are lower-cased English letters.

Sample Input
3
abc
abcba
abcd

Sample Output
2
0
4

C Solution

#include <stdio.h>
#include <string.h>

int solve(char buffer[]) {
    int count = 0;
    int i;
    int len = strlen(buffer);
    int dif;
    for (i = 0; i < len / 2; i++) {
        if (buffer[i] != buffer[len - 1 - i]) {
            dif = buffer[i] - buffer[len - 1 - i];
            if (dif < 0)
                dif *= -1;
            count += dif;
        }
    }
    return count;
}

int main() {
    int cases, k;
    char buffer[10001];
    scanf("%d", &cases);
    for (k = 0; k < cases; k++) {
        scanf("%s", buffer);
        if (k > 0)
            printf("\n");
        printf("%d", solve(buffer));
    }
    return 0;
}

Lisp Solution

(defun cost (line position)
  (if (eql (char line position) (char line (- (length line) 1 position)))
      0
      (abs (- (char-code (char line position))
              (char-code (char line (- (length line) 1 position)))))))

(defun calculate (line position)
  (if (= position (truncate (/ (length line) 2)))
      0
      (+ (cost line position) (calculate line (+ position 1)))))

(defun main (iteration)
  (if (> iteration 0)
      (progn
        (setq line (read-line))
        (format t "~d" (calculate line 0))
        (if (> iteration 1) (format t "~%"))
        (main (- iteration 1)))))

(setq line (read-line))
(setq tests (parse-integer line))
(main tests)
In the 4.0 release, the Berkeley DB C++ API has been changed to use the ISO standard C++ API in preference to the older, less portable interfaces, where available. This means the Berkeley DB methods that used to take an ostream object as a parameter now expect a std::ostream. Specifically, the following methods have changed:

DbEnv::set_error_stream
Db::set_error_stream
Db::verify

On many platforms, the old and the new C++ styles are interchangeable; on some platforms (notably Windows systems), they are incompatible. If your code uses these methods and you have trouble with the 4.0 release, you should update code that looks like this:

#include <iostream.h>
#include <db_cxx.h>

void foo(Db db)
{
    db.set_error_stream(&cerr);
}

to look like this:

#include <iostream>
#include <db_cxx.h>

using std::cerr;

void foo(Db db)
{
    db.set_error_stream(&cerr);
}
03-09-2011 01:07 AM - last edited on 03-09-2011 01:09 AM

Hello, I wrote all my image-processing code (the images come from a web service) in a thread. I made a Search page where the user enters a value and clicks a Search button, and the resulting images are shown on another page. On that page I added a Home button. Everything works properly, but my problem is that when I press the Home button it returns to the Search page, yet the thread is not closed; it keeps working until it completes. What should I write in the Home button handler so that the thread is closed and can run again for the next search?

Code:

urlclass u = new urlclass(); // my thread object

home1 = new ButtonField("Home page", ButtonField.CONSUME_CLICK);
home1.setChangeListener(new FieldChangeListener() {
    public void fieldChanged(Field field, int content) {
        if (field == home1) {
            u.stop();
            UiApplication.getUiApplication().pushScreen(new main());
        }
    }
});
add(home1);

Note: in the code above, main is my Search page.

03-09-2011 04:11 AM

We can't kill a thread; a thread dies only when every statement in its run() method has executed. Maybe you can do this:

public class UrlClass extends Thread {
    private boolean isStop = false;

    public void run() {
        while (!isStop) {
            // image processing
        }
    }

    public void stop() {
        isStop = true;
    }
}

Just create the class and start it; to stop the thread, just invoke stop(). Hope this could help.

03-09-2011 06:56 PM

Don't know whether it helps much, but I also do a <thread>.interrupt() as part of the close processing. I also have the thread in a loop testing a 'stop' condition.
Created on 2011-03-15 04:46 by eltoder, last changed 2017-03-15 06:45 by mbdevpl. As pointed out by Raymond, constant folding should be done on AST rather than on generated bytecode. Here's a patch to do that. It's rather long, so overview first. The patch replaces existing peephole pass with a folding pass on AST and a few changes in compiler. Feature-wise it should be on par with old peepholer, applying optimizations more consistently and thoroughly, but maybe missing some others. It passes all peepholer tests (though they seem not very extensive) and 'make test', but more testing is, of course, needed. I've split it in 5 pieces for easier reviewing, but these are not 5 separate patches, they should all be applied together. I can upload it somewhere for review or split it in other ways, let me know. Also, patches are versus 1e00b161f5f5, I will redo against head. TOC: 1. Changes to AST 2. Folding pass 3. Changes to compiler 4. Generated files (since they're checked in) 5. Tests In detail: 1. I needed to make some changes to AST to enable constant folding. These are. For example: def foo(): "doc" + "string" Without optimizations foo doesn't have a docstring. After folding, however, the first statement in foo is a string literal. This means that docstring depends on the level of optimizations. Making it an attribute avoids the. 2. Constant folding (and a couple of other tweaks) is performed by a visitor. The visitor is auto-generated from ASDL and a C template. C template (Python/ast_opt.ct) provides code for optimizations and rules on how to call it. Parser/asdl_ct.py takes this and ASDL and generates a visitor, that visits only nodes which have associated rules (but visits them via all paths). The code for optimizations itself is pretty straight-forward. The generator can probably be used for symtable.c too, removing ~200 tedious lines of code. 3. Changes to compiler are in 3 categories a) Updates for AST changes. 
b) Changes to generate better code and not need any optimizations. This includes the tuple unpacking optimization and if/while conditions.
c) A simple peephole pass on the compiler's internal structures. This is a better form for doing this than bytecode. The pass only deals with jumps to jumps/returns and trivial dead code. I've also made 'raise' recognized as a terminator, so that 'return None' is not inserted after it.

4, 5. No big surprises here.

I'm confused. Why aren't there review links?

Because I don't know how to make them. Any pointers?

> Because I don't know how to make them. Any pointers?

Martin is hacking on the tool these days... so it's no surprise it doesn't work perfectly yet ;) If you have a Google account you can upload these patches to appspot, though.

Thanks. Review link:

The review links didn't come up automatically because 336137a359ae isn't a hg.python.org/cpython revision ID.

I see. Should I attach diffs vs. some revision from hg.python.org?

No need, since you manually created a review on appspot. The local Rietveld links are just a convenience that can avoid the need to manually create a review instance.

Any comments on the code so far or suggestions on how we should move forward?

I've been focusing on softer targets during the sprints - I'll dig into this once I'm back home and using my desktop machine again.

I've updated the patches on Rietveld with some small changes. This includes better code generation for boolops used outside of conditions and a cleaned-up optimize_jumps(). This is probably the last change before I get some feedback. Also, I forgot to mention yesterday: the patches on Rietveld are vs. ab45c4d0b6ef.

Just for fun I've run pystones. Without my changes it averages about 70k, with my changes about 72.5k.

A couple of somewhat related issues:
#10399 AST Optimization: inlining of function calls
#1346238 A constant folding optimization pass for the AST
Obviously, AST optimizers should work together and not duplicate.
AFAICT my patch has everything that #1346238 has, except BoolOps, which can be easily added (there's a TODO). I don't want to add any new code, though, until the current patch gets reviewed -- adding code will only make reviewing harder. #10399 looks interesting, I will take a look.

Is anyone looking or planning to look at the patch?

I suspect someone will sometime. There is a bit of a backlog of older issues.

Finally got around to reviewing this (just a visual scan at this stage) - thanks for the effort. These are mostly "big picture" type comments, so I'm keeping them here rather than burying them amongst all the details in the code review tool. The effect that collapsing Num/Str/Bytes into a single Lit node type has on ast.literal_eval bothered me initially, but looking more closely, I think those changes will actually improve the function (string concatenation will now work, and errors like "'hello' - 'world'" should give a more informative TypeError). (Bikeshed: We use Attribute rather than Attr for that node type, perhaps the full "Literal" name would be better, too.) Lib/test/disutil.py should really be made a feature of the dis module itself, by creating an inner disassembly function that returns a string, then making the existing "dis" and "disassembly" functions print that string (i.e. similar to what I already did in making dis.show_code() a thin wrapper around the new dis.code_info() function in 3.2). In the absence of a better name, "dis_to_str" would do. Since the disassembly is interpreter specific, the new disassembly tests really shouldn't go directly in test_compile.py. A separate "test_ast_optimiser" file would be easier for alternate implementations to skip over. A less fragile testing strategy may also be to use the ast.PyCF_ONLY_AST flag and check the generated AST rather than the generated bytecode. I'd like to see a written explanation for the first few changes in test_peepholer.py. Are those cases no longer optimised?
Are they optimised differently? Why did these test cases have to change? (The later changes in that file look OK, since they seem to just be updating tests to handle the more comprehensive optimisation)

When you get around to rebasing the patch on 3.3 trunk, don't forget to drop any unneeded "from __future__" imports.

The generated code for the Lit node type looks wrong: it sets v to Py_None, then immediately checks to see if v is NULL again.

Don't use "string" as a C type - use "char *" (and "char **" instead of "string *").

There should be a new compiler flag to skip the AST optimisation step.

A bunch of the compiler internal changes seem to make the basic flow of the generated assembly not match the incoming source code. I think the biggest thing to take out of my review is that I strongly encourage deferring the changes for 5(b) and 5(c).

I like the basic idea of using a template-based approach to try to get rid of a lot of the boilerplate code currently needed for AST visitors. Providing a hook for optimisation in Python (as Dave Malcolm's approach does) is valuable as well, but I don't think the two ideas need to be mutually exclusive.

As a more general policy question... where do we stand in regards to backwards compatibility of the AST? The ast module docs don't have any caveats to say that it may change between versions, but it obviously *can* change due to new language constructs (if nothing else).

>? I would provide this via another compile flag a la PyCF_ONLY_AST. If you give only this flag, you get the original AST. If you give (e.g.) PyCF_OPTIMIZED_AST, you get the resulting AST after the optimization stage (or the same, if optimization has been disabled).

Thanks.

> string concatenation will now work, and errors like "'hello' - 'world'"
> should give a more informative TypeError

Yes, 'x'*5 works too.
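The "dis_to_str" helper Nick proposes can be approximated today without touching the dis module at all: since Python 3.4, dis.dis accepts a file argument, so capturing disassembly as a string is a matter of passing a StringIO (the helper name is hypothetical, shown only to illustrate the idea):

```python
import dis
import io

def dis_to_str(obj):
    # Redirect the disassembly into an in-memory buffer instead of stdout.
    buf = io.StringIO()
    dis.dis(obj, file=buf)
    return buf.getvalue()

def sample(x):
    return x + 1

text = dis_to_str(sample)
print("LOAD_FAST" in text)
```

The exact opcode names in the output vary between CPython versions, which is exactly why the review suggests keeping such tests out of the interpreter-neutral test files.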
> Bikeshed: We use Attribute rather than Attr for that node type, > perhaps the full "Literal" name would be better Lit seemed more in line with Num, Str, BinOp etc. No reason it can't be changed, of course. > Lib/test/disutil.py should really be made a feature of the dis module > itself Agreed, but I didn't want to widen the scope of the patch. If this is something that can be reviewed quickly, I can make a change to dis. I'd add a keyword-only arg to dis and disassembly -- an output stream defaulting to stdout. dis_to_str then passes StringIO and returns the string. Sounds OK? > Since the disassembly is interpreter specific, the new disassembly > tests really shouldn't go directly in test_compile.py. A separate > "test_ast_optimiser" file would be easier for alternate > implementations to skip over. New tests in test_compiler are not for the AST pass, but for changes to compile.c. I can split them out, how about test_compiler_opts? > I'd like to see a written explanation for the first few changes in > test_peepholer.py Sure. 1) not x == 2 can be theoretically optimized to x != 2, while this test is for another optimization. not x is more robust. 2) Expression statement which is just a literal doesn't produce any code at all. This is now true for None/True/False as well. To preserve constants in the output I've put them in tuples. > When you get around to rebasing the patch on 3.3 trunk, don't forget > to drop any unneeded "from __future__" imports. If you're referring to asdl_ct.py, that's actually an interesting question. asdl_ct.py is run by system installed python2, for obvious reasons. What is the policy here -- what is the minimum version of system python that should be sufficient to build python3? I tested my code on 2.6 and 3.1, and with __future__ it should work on 2.5 as well. Is this OK or should I drop 'with' so it runs on 2.4? 
> The generated code for the Lit node type looks wrong: it sets v to
> Py_None, then immediately checks to see if v is NULL again.

Right, the comment in asdl_c.py says:

# XXX: special hack for Lit. Lit value can be None and it
# should be stored as Py_None, not as NULL.

If there's a general agreement on Lit I can certainly clean this up.

> Don't use "string" as a C type - use "char *" (and "char **" instead
> of "string *").

string is a typedef for PyObject that ASDL uses. I don't think I have a choice to not use it. Can you point to a specific place where char* could be used?

> There should be a new compiler flag to skip the AST optimisation step.

There's already an 'optimizations level' flag. Maybe we should make it more meaningful rather than multiplying the number of flags?

> A bunch of the compiler internal changes seem to make the basic flow
> of the generated assembly not match the incoming source code.

Can you give an example of what you mean? The changes are basically 1) standard way of handling conditions in simple compilers 2) peephole.

>.

The reason why I think it makes sense to have this in a single change is testing. This allows reusing all existing peephole tests. If I leave the old peephole enabled there's no way to tell from disassembly if my pass did something. I can port tests to AST, but that seemed like more work than matching the old peepholer optimizations.

Is there any opposition to doing simple optimizations on compiler structures? They seem a good fit for the job. In fact, if not for stack representation, I'd say that they are better IR for the optimizer than the AST. Also, can I get your opinion on making None/True/False into literals early on and getting rid of forbidden_name?

Antoine, Georg -- I think Nick's question is not about the AST changing after optimizations (this can indeed be a separate flag), but the structure of the AST changing. E.g. collapsing of Num/Str/Bytes into Lit.
Btw, if this is acceptable I'd make a couple more changes to make scope structure obvious from AST. This will allow auto-generating much of the symtable pass.

> and with __future__ it should work on 2.5 as well.

Actually, seems that at least str.format is not in 2.5 as well. Still the question is should I make it run on 2.5 or 2.4 or is 2.6 OK (then __future__ can be removed).

> not x == 2 can be theoretically optimized to x != 2, ...

I don't think it can:

>>> class X:
...     def __eq__(self, other):
...         return True
...     def __ne__(self, other):
...         return True
...
>>> x = X()
>>> not x == 2
False
>>> x != 2
True

> I don't think it can:

That already doesn't work in dict and set (eq not consistent with hash), I don't think it's a big problem if that stops working in some other cases. Anyway, I said "theoretically" -- maybe after some conservative type inference. Also, to avoid any confusion -- currently my patch only runs AST optimizations before code generation, so compile() with ast.PyCF_ONLY_AST returns non-optimized AST.

While I would not be happy to use class X above, the 3.2 manual explicitly says "There are no implied relationships among the comparison operators. The truth of x==y does not imply that x!=y is false."

OK, I missed the fact that the new optimisation pass isn't run under PyCF_ONLY_AST.
However, the reason I bring up new constructs is the fact that new constructs may break AST manipulation passes, even if the old structures are left intact - the AST visitor may miss (or misinterpret) things because it doesn't understand the meaning of the new nodes. We may need to take this one back to python-dev (and get input from the other implementations as well). It's a fairly fundamental question when it comes to the structure of any changes.

If we have to preserve backward compatibility of the Python AST API, we can do this relatively easily (at the expense of some code complexity):

* Add a 'version' argument to compile() and ast.parse() with a default value of 1 (old AST). Value 2 will correspond to the new AST.
* Do not remove the Num/Str/Bytes/Ellipsis Python classes. Make PyAST_obj2mod and PyAST_mod2obj do appropriate conversions when version is 1.
* Possibly emit a PendingDeprecationWarning when version 1 is used, with the goal of removing it in 3.5

An alternative implementation is to leave the Num/Str/etc classes in C as well, and write visitors (similar to the folding one) to convert the AST between old and new forms.

Does this sound reasonable? Should this be posted to python-dev? Should I write a PEP (I'd need some help with this)? Are there any other big issues preventing this from being merged?

Eugene, I think you're doing great work here and would like to see you succeed. In the near term, I don't have time to participate, but don't let that stop you.

Is there any tool to see how it works step-by-step? The whole stuff is extremely interesting, but I can't fit all the details of AST processing in my head.

Eugene: I suggest raising the question on python-dev. The answer potentially affects the PEP 380 patch as well (which adds a new attribute to the "Yield" node).

Anatoly: If you just want to get a feel for the kind of AST various pieces of code produce, then the "ast.dump" function (along with using the ast.PyCF_ONLY_AST flag in compile) may be enough.
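The ast.dump suggestion in practice looks like this (the exact output format varies slightly between Python versions):

```python
import ast

# ast.parse is equivalent to compile(source, filename, "exec", ast.PyCF_ONLY_AST);
# ast.dump renders the resulting tree as a string for inspection.
tree = ast.parse("x = 1 + 2")
print(ast.dump(tree))
```

The dump shows an Assign node whose value is a BinOp -- handy for getting a feel for what nodes a given piece of source produces before writing a visitor.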
You may also want to take a look at the AST -> dot file conversion code in Dave Malcolm's patches on #10399.

Eugene raised the question of AST changes on python-dev [1] and the verdict was that so long as ast.__version__ is updated, AST clients will be able to cope with changes. Benjamin Peterson made some subsequent changes to the AST (bringing the AST for try and with statements more in line with the concrete syntax, allowing source-to-source transforms to retain the original code structure). This patch will probably need to be updated to be based on the latest version of the AST - I would be surprised if it applied cleanly to the current tip. [1]

Updated the title to reflect that the peephole optimizer will likely continue to exist but in a much simpler form. Some complex peephole optimizations such as constant folding can be handled more easily and more robustly at the AST level. Other minor peephole optimizations such as jump-to-jump simplification remain bytecode-level optimizations (ones that improve the quality of the generated code without visibility into higher level semantics).

Nick, if there's an interest in reviewing the patch I can update it. I doubt it needs a lot of changes, given that the visitor is auto-generated.

Raymond, the patch contains a rewrite of low-level optimizations to work before bytecode generation, which simplifies them a great deal.

As Raymond noted though, some of the block stack fiddling doesn't make sense until after the bytecode has already been generated. It's OK to have multiple optimisers at different layers, each taking care of the elements that are best suited to that level. And yes, an updated patch against the current tip would be good. Of my earlier review comments, the ones I'd still like to see addressed are:

- finish clearing out the now redundant special casing of None/True/False
- separating out the non-AST related compiler tweaks (i.e.
3b and 3c and the associated test changes) into their own patch (including moving the relevant tests into a separate @cpython_only test case)

I'm still not 100% convinced on that latter set of changes, but I don't want my further pondering on those to hold up the rest of the patch. (they probably make sense, it's just that the AST level changes are much easier to review than the ones right down at the bytecode generation level - reviewing the latter means getting back up to speed on precisely how the block stack works and it will be a while before I get around to doing that. It's just one of those things where the details matter, but diving that deep into the compiler is such a rare occurrence that I have to give myself a refresher course each time it happens).

Marking the PEP 380 implementation as a dependency, as I expect it to be easier to update this patch to cope with those changes than it would be the other way around.

Bumping the target version to 3.4. This is still a good long term idea, but it's a substantial enough change that we really want to land it early in a development cycle so we have plenty of time to hammer out any issues.

Good call, Nick.

In msg132312 Nick asked "where do we stand in regards to backwards compatibility of the AST?" The current ast module chapter, second sentence, says "The abstract syntax itself might change with each Python release; this module helps to find out programmatically what the current grammar looks like." where 'current grammar' is copied in 30.2.2. Abstract Grammar. I do not know when that was written, but it clearly implies that the grammar, which defines node classes, is x.y version specific. I think this is the correct policy just so we can make changes, hopefully improvements, such as the one proposed here.

I'm working on an AST optimizer for Python 2.6-3.3: It is implemented in Python and is able to optimize many more cases than the current bytecode peepholer.
All of the optimisations that assume globals haven't been shadowed or rebound are invalid in the general case. E.g. print(1.5) and print("1.5") are valid for *our* print function, but we technically have no idea if they're equivalent in user code. In short, if it involves a name lookup and that name isn't reserved to the compiler (e.g. __debug__) then no, you're not allowed to optimise it at compile time if you wish to remain compliant with the language spec. Method calls on literals are always fair game, though (e.g. you could optimise "a b c".split()) Any stdlib AST optimiser would need to be substantially more conservative by default.

> All of the optimisations that assume globals haven't been shadowed
> or rebound are invalid in the general case.

My main idea is that the developer of the application should be able to annotate functions and constants to declare them as "optimizable" (constant). I chose to expect builtins as not being overridden, but if it breaks applications, it can be converted to an option disabled by default.

There is a known issue: test_math fails because pow() is an alias to math.pow() in doctests. The problem is that "from math import *" is called and the result is stored in a namespace, and then "pow(2,4)" is called in the namespace. astoptimizer doesn't detect that pow=math.pow because locals are only set when the code is executed (and not at compilation) with something like: exec(code, namespace). It is a limitation of the optimizer. A workaround is to disable optimizations when running tests.

It is possible to detect that builtins are shadowed (ex: print=myprint). astoptimizer has experimental support of assignments, but it only works on trivial examples yet (like "x=1; print(x)") and is disabled by default (because it is buggy). I also plan to disable some optimizations if globals(), vars() or dir() is called.

> Any stdlib AST optimiser would need to be substantially more conservative by default.
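The name-lookup hazard under discussion is easy to demonstrate: constant-folding a call such as len("abc") down to 3 at compile time would silently break code like the following (an illustration of the problem, not astoptimizer's actual behaviour):

```python
import builtins

def measure():
    # len is looked up at call time, so it observes any shadowing.
    return len("abc")

print(measure())  # 3 with the normal builtin

original = builtins.len
builtins.len = lambda s: 42  # monkeypatch the builtin
try:
    print(measure())  # 42 -- folding measure() down to a constant 3 would be wrong
finally:
    builtins.len = original
```

This is why a spec-compliant optimiser can fold literals and method calls on literals, but not anything reached through a name lookup.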
FYI The test suite of Python 2.7 and 3.3 passes with astoptimizer... except some "minor" (?) failures:

* test_math fails for the reason explained above
* test_pdb: it looks to be an issue with line numbers (debuggers don't like optimizers :-))
* test_xml_etree and test_xml_etree_c: reference count of the None singleton

The test suite helped me to find bugs in my optimizer :-) I also had to add some hacks (hasattr) for test_ast (test_ast generates invalid AST trees). The configuration should also be adapted for test_peepholer, because the CPython peepholer uses a limit of 20 items, whereas astoptimizer uses a limit of 4096 bytes/characters for strings by default. All these minor nits are now handled in a specific "cpython_tests" config.

No, you're assuming global program analysis and *that is not a valid assumption*. One of the key features of Python is that *monkeypatching works*. It's not encouraged, but it works. You simply cannot play games with name lookups like this without creating something that is no longer Python. You also have to be very careful of the interface to tracing functions, such as profilers and coverage analysis tools.

> Method calls on literals are always fair game, though (e.g. you could optimise "a b c".split())

What about optimizations that do not change behavior, except for different error messages? E.g. we can change y = [1,2][x] to y = (1,2)[x] where the tuple is constant and is stored in co_consts. This will, however, produce a different text in the exception when x is not 0 or 1. The type of exception is going to be the same.

The peephole optimiser already makes optimisations like that in a couple of places (e.g. set -> frozenset):

>>> def f(x):
...     if x in {1, 2}: pass
...
>>> f.__code__.co_consts
(None, 1, 2, frozenset({1, 2}))

It's name lookup semantics that are the real minefield.
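The set-to-frozenset rewrite shown in that session can also be checked programmatically (this is CPython-specific behaviour; the language spec does not require it):

```python
def f(x):
    return x in {1, 2}

# CPython stores the literal set as an immutable frozenset constant in the
# code object, so the membership test needs no set construction at runtime.
has_frozenset = any(isinstance(c, frozenset) for c in f.__code__.co_consts)
print(has_frozenset)
```

The rewrite is valid precisely because the set literal can never be mutated or observed by user code -- the same reasoning that makes method calls on literals "fair game".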
It's one of the reasons PyPy's JIT can be so much more effective than a static optimiser - because it's monitoring real execution and inserting the appropriate guards, it's not relying on invalid assumptions about name bindings.

If I'm not missing something, changing x in [1,2] to x in (1,2) and x in {1,2} to x in frozenset([1,2]) does not change any error messages. Agreed that without dynamic compilation we can pretty much only track literals (including functions and lambdas) assigned to local variables.

might also play into this if it happens to go in.

Just noting for the record (since it appears it was never brought back to the comments): it is expected that programs that manipulate the AST may require updates before they will work on a new version of Python. Preserving AST backwards compatibility is too limiting to the evolution of the language, so only source compatibility is promised. (That was the outcome of the suggested AST discussions on python-dev that were mentioned earlier)

Regenerated for review.

"issue11549.patch: serhiy.storchaka, 2016-05-11 08:22: Regenerated for review"

diff -r 1e00b161f5f5 PC/os2emx/python33.def
--- a/PC/os2emx/python33.def	Wed Mar 09 12:53:30 2011 +0100
+++ b/PC/os2emx/python33.def	Wed May 11 11:21:24 2016 +0300

The revision 1e00b161f5f5 is 4 years old. The patch looks very outdated :-/

Fairly sure it's 5 years old.

Yes, the patch is outdated, conflicts with current code (and would conflict even more after pushing the wordcode patch) and contains bugs. But it moved in the right direction. I think your _PyCode_ConstantKey() could help to fix bugs. I'm going to revive this issue.

Serhiy: Nice! Yes, _PyCode_ConstantKey solved the problem. But #16619 went in the opposite direction of this patch, and introduced a new type of literal node instead of unifying the existing ones.
Kind of a shame, since *this* patch, I believe, both fixes that bug and removes the unreachable code in the example :) I also see that Victor has been doing some of the same work, e.g. #26146.

> I also see that Victor has been doing some of the same work, e.g. #26146.

The ast.Constant idea directly comes from your work. The implementation may be different. It's a first step for AST optimizers.

@haypo, what do you think about ast.Lit and ast.Constant? Is this patch updated to use ast.Constant? Or should ast.Constant be used only for some transforms like constant folding?

> @hay. Hugo Geoffroy added the comment: >`.

Since the Python compiler doesn't produce ast.Constant, there is no change in practice in ast.literal_eval(). If you found a bug, please open a new issue.

> At least [this library]() would have a serious risk of remote DoS :

I tried hard to implement a sandbox in Python and I failed: I don't think that literal_eval() is safe *by design*.

Good point Hugo. Yes, this should be taken into account when moving constant folding to the AST level. Thank you for the reminder.

> Since the Python compiler doesn't produce ast.Constant, there is no change in practice in ast.literal_eval(). If you found a bug, please open a new issue.

Currently there is not a bug in ast.literal_eval() because the '**' operator is not accepted.

>>> ast.literal_eval("2**2**32")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/serhiy/py/cpython/Lib/ast.py", line 85, in literal_eval
    return _convert(node_or_string)
  File "/home/serhiy/py/cpython/Lib/ast.py", line 84, in _convert
    raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.BinOp object at 0xb6f2fa4c>

But if we move the optimization to the AST level, this can add a vulnerability to DoS attacks. The optimizer should do additional checks before executing operators that can return a too large value or take too much CPU time.
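That traceback is easy to reproduce on current CPython -- literal_eval walks only a small whitelist of node types, so the Pow node is rejected before anything is evaluated (the exact error message differs between versions):

```python
import ast

try:
    ast.literal_eval("2**2**32")
except ValueError as exc:
    # BinOp with a Pow operator is not in literal_eval's whitelist,
    # so the huge exponentiation is never computed.
    print("rejected:", exc)
```

This is the safety property that eager folding at parse time would have destroyed.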
Currently this vulnerability exists in the peephole optimizer.

> Currently there is not a bug in ast.literal_eval() because the '**' operator is not accepted.

The doc says "This can be used for safely evaluating strings containing Python values from untrusted sources without the need to parse the values oneself. It is not capable of evaluating arbitrarily complex expressions, for example involving operators or indexing." I don't think that it's a bug, but a deliberate design choice. a**b is an obvious trick to DoS a server (high CPU and memory usage).

Hugo, Serhiy, and Victor: I think you're all agreeing with each other, but to make sure I'm understanding the point correctly:

1. ast.literal_eval() is currently safe from malicious code like "100000 ** 100000" or "1073741824 * 'a'" because it only traverses addition and subtraction nodes, so any such operations will just throw ValueError (As a point of interest: unary plus and minus are required to support positive and negative numeric literals, while binary addition and subtraction are required to support complex number literals. So the status quo isn't precisely the result of a conscious security decision, it's just a minimalist implementation of exactly what's necessary to support all of the builtin types, which also provides some highly desirable security properties when evaluating untrusted code)

2. an eager constant folding optimisation in the AST tier would happen *before* literal_eval filtered out the multiplication and exponentiation nodes, and hence would make literal_eval vulnerable to remote DOS attacks in cases where it is expected to be safe

However, that's not exactly how this patch works: if you pass "PyCF_ONLY_AST" as ast.parse does, it *doesn't* run the constant-folding step. Instead, the constant folding is run as an AST-to-AST transform during the AST-to-bytecode compilation step, *not* the initial source-to-AST step.
(see ) This has a few nice properties:

- ast.literal_eval() remains safe
- source -> AST -> source transformation pipelines continue to preserve the original code structure
- externally generated AST structures still benefit from the AST optimisation pass
- we don't need a new flag to turn this optimisation pass off when generating the AST for a given piece of source code

> 1. Changes to AST

I'm working on updating this part. There are some failing tests remaining. But I doubt this stage is worth it for now.

>. We have already Constant and NameConstant. So it seems there is no need for None, Bool, TupleConst, SetConst nodes. I think converting Num, Str, Bytes, Ellipsis into Constant in the folding stage is easier than fixing all tests.

>. Take docstring before constant folding isn't enough? (I'm sorry if I'm wrong. I haven't tried it.

They are all NameConstant already.

> We have already Constant and NameConstant. So it seems there is no need for
> None, Bool, TupleConst, SetConst nodes.

Yes, Constant is Victor's version of Lit.

> I think converting Num, Str, Bytes, Ellipsis into Constant in the folding stage
> is easier than fixing all tests.

Fixing tests was fairly easy the last time. I think the question is what changes to the public API of AST are acceptable.

>. > They are all NameConstant already.

Keep in mind this patch is 6 years old :)

>> We have already Constant and NameConstant. So it seems there is no need for
>> None, Bool, TupleConst, SetConst nodes.
> Yes, Constant is Victor's version of Lit.

Then, may I remove ast.Lit, and use Constant and NameConstant?

>> I think converting Num, Str, Bytes, Ellipsis into Constant in the folding stage
>> is easier than fixing all tests.
> Fixing tests was fairly easy the last time. I think the question is what changes to the public API of AST are acceptable.

I think backward compatibility is not guaranteed. But there is some usage of ast. ( ) So I think we should make changes as small as possible.

>>. OK.
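Nick's point that the folding runs during AST-to-bytecode compilation rather than during parsing is directly observable on a current CPython: ast.parse keeps 1 + 2 as a BinOp, while the compiled code object carries the folded constant.

```python
import ast

# Parsing alone (PyCF_ONLY_AST, which ast.parse uses) performs no folding:
tree = ast.parse("1 + 2", mode="eval")
print(type(tree.body).__name__)  # BinOp -- still unfolded

# Full compilation folds the expression into a constant:
code = compile("1 + 2", "<demo>", "eval")
print(3 in code.co_consts)
```

This split is what keeps literal_eval safe while still letting compiled code benefit from folding.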
>> They are all NameConstant already.
> Keep in mind this patch is 6 years old :)

I know. I want to move this patch forward, but I'm not a frontend (parser, AST, and compiler) expert. I can't make design decisions without an expert's advice. Thanks for your reply.

Then, may I update the patch in the following direction?

* Remove ast.Lit.
* Keep the docstring change.

If you would like to implement constant folding at the AST level, I suggest you look at my fatoptimizer project: The tricky part is to avoid operations when we know that it will raise an exception or create an object too big according to our constraints.

I would prefer to implement an AST optimizer in Python, but converting C structures to Python objects and then back to C structures has a cost. I'm not sure that my optimizer implemented in Python is fast enough. By the way, an idea would be to skip all optimizations in some cases, like for script.py when running python3 script.py.

Before trying advanced optimizations, I want to move suspended obvious optimizations forward. For example, removing unused constants is suspended because constant folding should be moved from the peephole optimizer to the AST. This is why I found this issue. After that, I'm thinking about shrinking the stack size. frame_dealloc (scans the whole stack) is one of the hot functions.

Dropping ast.Lit is fine. As for the docstring part, I'm torn. Yes it's nice as that will show up semantically in the Python code, but it's also easy to check for by just looking if the first statement is a Str (or Constant if that's what we make all strings). So I'll say I'm +0 on the docstring part.

At the AST level, you have a wide range of possible optimizations. See the optimizations that I implemented in fatoptimizer (FAT Python) to have an idea: FAT Python adds guards checked at runtime, something not possible (not wanted) here. But if you start with constant folding, why not implement constant propagation as well? What about loop unrolling? Where is the limit?
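For readers wondering how this part of the thread resolved: from Python 3.8 the parser emits ast.Constant for every literal, with the old Num/Str/Bytes/NameConstant/Ellipsis classes kept only as deprecated aliases. A quick check on a modern interpreter:

```python
import ast

node = ast.parse("42").body[0].value
# Since Python 3.8, every literal parses to an ast.Constant node.
print(type(node).__name__)  # Constant
print(node.value)           # 42
```

In other words, the unification that Lit proposed here eventually happened under the Constant name.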
If you implement the AST optimizer in C, the limit will probably be your C skills and your motivation :-) In Python, the limit is more the Python semantics which is... hum... not well defined. For example, does it break the Python semantics to replace [i for i in (1, 2, 3)] with [1, 2, 3]? What if you use a debugger? Do you expect a list comprehension or a literal list?

FYI I suspended my work on FAT Python because almost no other core developer was interested. I didn't get any support, whereas I need support to push core FAT Python features like function specialization and runtime checks (PEP 510, see also PEP 511). Moreover, I failed to show any significant speedup on non-trivial functions. I abandoned before investigating function inlining, even if FAT Python already has basic support for function inlining.

This issue has been open since 2011. The question is always the same: is it worth it? An alternative is to experiment with an AST optimizer outside CPython and come back later with more data to drive the design of such an optimizer. With FAT Python, I chose to add hooks in the Python compiler, but different people told me that it's possible to do that without such hooks using importlib (importlib hooks). What do you think Naoki?

Yes, doing optimizations on the AST in CPython is unlikely to give any sizable speed improvements in real world programs. Python as a language is not suited for static optimization, and even if you manage to inline a function, there's still CPython's interpreter overhead and boxed types that dwarf the effect of the optimization. The goal of this patch was never to significantly improve the speed. It was to replace the existing bytecode peephole pass with cleaner and simpler code, which also happens to produce slightly better results.

My motivation is to improve speed, reduce memory usage, and get quicker startup times for real world applications. If some optimization in the FAT optimizer has a significant speedup, I want to try it.
But this time, my motivation is that I felt "everyone thinks constant folding should go to the AST from the peephole optimizer; there is a patch about it, but unfortunately it was suspended (because of lack of reviewers, maybe)." As I read #28813, I think there is consensus that constant folding should go to the AST.

INADA Naoki added the comment:

> My motivation is improve speed,

Ah, if the motivation is performance, I would like to see benchmark results :-) I understand that an AST optimizer would help to produce more efficient bytecode, right?

> reduce memory usage,

I noticed an issue with the peephole optimizer: the constant folding step keeps the original constants. Moving constant folding to the AST stage fixes this issue by design.

> and quicker startup time for real world applications.

You mean faster import time on precompiled .pyc files, right? It's related to the hypothetical faster bytecode.

> If some optimization in FAT optimizer has significant speedup, I want to try it.

See FYI it took me something like 2 months to build the FAT Python "infrastructure": fix CPython bugs, design guards, design the AST optimizer, write unit tests, etc. I didn't spend much time on efficient optimizations. But my first rule was to not break the CPython test suite! Not break the Python semantics, otherwise it would be impossible to enable the optimizer by default in CPython, which is my long term goal.

I've tried to update ast_opt.c[t] without changing the AST. But I can't find a clear way to solve the "foo" + "bar" docstring problem. This patch adds only the docstring to the AST.

Naoki: Can you please open a new issue for your ast-docstring.patch change? I like it, but this issue became too big, and I'm not sure that everyone in the nosy list is interested in this specific change.

I submitted new issues:

* #29463 for AST change (change docstring from first statement to attribute).
* #29469 for constant folding

Note that this issue contains more peephole -> AST optimization changes.
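The "foo" + "bar" docstring problem mentioned here is this: adjacent string literals are concatenated by the parser into a single literal, which therefore counts as a docstring, whereas an explicit + expression does not -- so a folder that turns the latter into a constant risks inventing a docstring. A small illustration (current CPython guards against exactly this):

```python
def implicit():
    "foo" "bar"  # adjacent literals: one string node, hence a docstring

def explicit():
    "foo" + "bar"  # a BinOp expression: not a docstring

print(implicit.__doc__)  # foobar
print(explicit.__doc__)  # None -- folding must not turn this into a docstring
```

Keeping these two cases distinct is why constant folding and docstring handling have to agree on the order in which they run.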
But I want to start with these two patches to ease review and discussion.

I created the issue #29471: "AST: add an attribute to FunctionDef to distinguish functions from generators and coroutines".
https://bugs.python.org/issue11549
Tollfree in the US: 877-421-0030
Alternate: 770-615-1247
Access code: 173098#
Full list of phone numbers
Call Time: 8:00 AM Pacific; 11:00 AM Eastern; 1500 UTC

Good writeup in Mylyn wiki, about "being a contributor"

We didn't discuss much on this call, but it was jokingly asked if we need a PR firm? Are we perceived as having closed meetings? (Even though not, lots of notes, public number, etc.). Are we perceived as not being innovative, when we see ourselves excelling in stability? We know the balance between innovation and stability is a hard balance to achieve ... but what leads to one perception over another? Stability is hard to "see"? Only miss it when it's gone?

Change is especially hard when so many committers are very busy, overbooked, overworked, working on their own things, so changing, even testing, even opening bugs for small breaks can seem like a lot of extra work (unless they understand the importance, reasons, need, etc.). Case in point ... Eclipse 4.1 :)

We should tentatively plan on supporting/running on both. One set of plugins, hopefully, running in compatibility mode. No current plans to exploit e4-only functionality. We need some experience and builds with it, to know if it is feasible. Action: dw to send links on info, schedule, downloads, etc. for 4.0

We discussed if the latest proposal to the wtp-pmc list was "legal" or not ... and we left it that we expect Wayne will clarify if branching/moving code in cvs is really not a move, since it is a branch (sounds like a move, but ... it is EPL code?), and if another project can release its own version of WTP namespace bundles/features? That would seem to break a lot of co-existence installs. If everything released as "product" it technically would be possible, but then PHP could not be installed into WTP (as an example, if PHP adopted new Technology code, and WTP did not). And it would be bad/hard to release everything as a "product".
Whether legal or not, we could not really see the purpose (other than POC), or how it would work in practice, and it seemed like a hard path to go down. So we expect more discussion. Please send any additions or corrections to David Williams.
http://www.eclipse.org/webtools/development/pmc_call_notes/pmcMeeting.php?meetingDate=2010-06-29
react-native-line-sdk, the react-native wrapper for LINE

A few days ago we released our very first React Native framework to the open source community. react-native-line provides an easy-to-use interface for you to use Line's mobile SDK seamlessly in your app, without having to worry about Android or iOS differences.

How to use it?

To start working with react-native-line-sdk, you need to add it to your react-native project using your package manager of preference. For example:

npm install react-native-line-sdk

Then, you need to link the native implementations to your project by running:

react-native link react-native-line-sdk

After that, you need to follow the Android and iOS guides available here.

Example usage

As an example, let's make a login flow that uses Line's SDK. First, you need to require the LineLogin module in your js file:

import LineLogin from 'react-native-line-sdk'

Then, on your call to action (for example, a TouchableOpacity) you need to call the login function. This will open Line's own UI (the app, or the browser if the app is not installed on the device) and it will resolve the promise when the user finishes that flow successfully.

LineLogin.login()
  .then((user) => {
    /* Here, send the user information to your own API or external service
       for authentication. The user object has the following information:
       {
         profile: {
           displayName: String,
           userID: String,
           statusMessage: String,
           pictureURL: String?,
         }
         accessToken: {
           accessToken: String,
           expirationDate: String,
         }
       }
    */
  })
  .catch((err) => {
    // The promise will be rejected if something goes wrong; check the error message for more information.
  });

At this point, you should use the promise callbacks to handle the information returned by Line and continue your authentication flow as needed.

Where to go from here

We hope our article works as a good introduction to this open source library. On GitHub you'll find everything you need to get started. If you want to collaborate, feel free to contribute to this library.
If you need help to develop your project, drop us a line!
https://blog.xmartlabs.com/2017/11/27/React-native-line/
Hey Flash Developers, welcome to the second part of my Tower Defense Game tutorial. In the first part, we developed the basic mechanism of creating turrets and making them shoot towards the point of mouse click. But that's not what turrets are for! In this part we'll extend the game to include enemies, basic artificial intelligence (AI) in turrets, and some more game elements. Are you ready?

Final Result Preview

This is the game we are going to create in this tutorial: Click the orange circles to place turrets. The red circles are enemies, and the number on each represents its hit points.

Step 1: Recap

In the previous tutorial we developed a game which had placeholders for the turrets. We could deploy turrets by clicking those placeholders, and the turrets aimed at the mouse pointer and shot bullets towards the point where the user clicked. We finished with a Main class which had the game loop and game logic. Apart from that we had the Turret class, which had nothing much except the update function that made the turret rotate.

Step 2: A Separate Bullet Class

We previously created the bullets in the Main class and attached an ENTER_FRAME listener to move them. The bullet did not have enough properties earlier to justify a separate class. But in such a game bullets can have many varieties of speed, damage, and so on, so it is a good idea to pull out the bullet code and encapsulate it in a separate Bullet class. Let's do it. Create a new class called Bullet, extending the Sprite class. The basic code for this class should be: package { import flash.display.Sprite; public class Bullet extends Sprite { public function Bullet() { } } }
As we did with the Turret class, we create a function called draw in the Bullet class: private function draw():void { var g:Graphics = this.graphics; g.beginFill(0xEEEEEE); g.drawCircle(0, 0, 5); g.endFill(); } And we call this function from the Bullet constructor: public function Bullet() { draw(); } Now we add some properties to the bullet. Add four variables: speed, speed_x, speed_y and damage, before the Bullet constructor: private var speed:Number; private var speed_x:Number; private var speed_y:Number; public var damage:int; What are these variables for?

speed: This variable stores the speed of the bullet.

speed_x and speed_y: These store the x and y components of the speed, respectively, so that the calculation of breaking the speed into its components does not have to be done again and again.

damage: This is the amount of damage the bullet can do to an enemy. We keep this variable public as we will require it in our game loop in the Main class.

We initialize these variables in the constructor. Update your Bullet constructor: public function Bullet(angle:Number) { speed = 5; damage = 1; speed_x = Math.cos(angle * Math.PI / 180) * speed; speed_y = Math.sin(angle * Math.PI / 180) * speed; draw(); } Notice the angle variable we receive in the constructor. This is the direction (in degrees) in which the bullet will move. We just break the speed into its x and y components and cache them for future use. The last thing that remains in the Bullet class is to have an update function that will be called from the game loop to update (move) the bullet. Add the following function at the end of the Bullet class: public function update():void { x += speed_x; y += speed_y; } Bingo! We are done with our Bullet class.
Also remove the bullet_speed variable. Next, go to the shoot function and update it with the following code: private function shoot(e:MouseEvent):void { for each(var turret:Turret in turrets) { var new_bullet:Bullet = new Bullet(turret.rotation); new_bullet.x = turret.x + Math.cos(turret.rotation * Math.PI / 180) * 25; new_bullet.y = turret.y + Math.sin(turret.rotation * Math.PI / 180) * 25; addChild(new_bullet); } } We no longer use the createBullet function to create a bullet; instead we use the Bullet constructor and pass it the turret's rotation, which is the direction of the bullet's movement, so we don't need to store it in the bullet's rotation property as we did earlier. Also, we don't attach any listener to the bullet, as the bullet will be updated from within the game loop next.

Step 4: Saving the Bullet References

Now that we need to update the bullets from the game loop, we need their references to be stored somewhere. The solution is the same as for the turrets: create a new Array named bullets and push the bullets onto it as they are created. First declare an array just below the turrets array declaration: private var ghost_turret:Turret; private var turrets:Array = []; private var bullets:Array = []; Now to populate this array. We do so whenever we create a new bullet - so, in the shoot function. Add the following just before adding the bullet to the stage: var new_bullet:Bullet = new Bullet(turret.rotation); new_bullet.x = turret.x + Math.cos(turret.rotation * Math.PI / 180) * 25; new_bullet.y = turret.y + Math.sin(turret.rotation * Math.PI / 180) * 25; bullets.push(new_bullet); addChild(new_bullet);

Step 5: Update the Bullets

Just as we update the turrets in the game loop, we will update the bullets, too. But this time, instead of using a for...each loop, we'll use a basic for loop.
Before this, we must add two variables to the top of the game loop, so that we know which variables are used within the game loop and can set them free for garbage collection: var turret:Turret; var bullet:Bullet; Go ahead and add the following code at the end of the game loop: for (var i:int = bullets.length - 1; i >= 0; i--) { bullet = bullets[i]; if (!bullet) continue; bullet.update(); } Here we traverse all the bullets on the stage every frame and call their update function, which makes them move. Note here that we iterate the bullets array in reverse. Why? We'll see this ahead. Now that we have a turret variable declared outside already, we don't need to declare it again inside the for...each loop of turrets. Modify it to: for each(turret in turrets) { turret.update(); } Finally we add the boundary check condition; this was previously in the bullet's ENTER_FRAME handler but now we check it in the game loop: if (bullet.x < 0 || bullet.x > stage.stageWidth || bullet.y < 0 || bullet.y > stage.stageHeight) { bullets.splice(i, 1); bullet.parent.removeChild(bullet); continue; } We check whether the bullet is out of the stage's boundary, and if so we first remove its reference from the bullets array using the splice function, and then remove the bullet from the stage and continue with the next iteration. This is how your game loop should look: private function gameLoop(e:Event):void { var turret:Turret; var bullet:Bullet; for each(turret in turrets) { turret.update(); } for (var i:int = bullets.length - 1; i >= 0; i--) { bullet = bullets[i]; if (!bullet) continue; bullet.update(); } } If you now run the game, you should have the same functionality as in Part 1, with code that is much cleaner and better organized.

Step 6: Presenting the Enemy

Now we add one of the most important elements of the game: the Enemy.
First thing is to create a new class named Enemy extending the Sprite class: package { import flash.display.Sprite; public class Enemy extends Sprite { public function Enemy() { } } } Now we add some properties to the class. Add them before your Enemy constructor: private var speed_x:Number; private var speed_y:Number; We initialize these variables in the Enemy constructor: public function Enemy() { speed_x = -1.5; speed_y = 0; } Next we create the draw and update functions for the Enemy class. These are very similar to the ones from Bullet. Add the following code: private function draw():void { var g:Graphics = this.graphics; g.beginFill(0xff3333); g.drawCircle(0, 0, 15); g.endFill(); } public function update():void { x += speed_x; y += speed_y; } Step 7: Timing the Game Events In our game we need to have many events that take place at certain times or repeatedly at certain intervals. Such timing can be achieved using a time counter. The counter is just a variable that gets incremented as the time passes in the game. The important thing here is when and by how much amount to increment the counter. There are two ways in which timing is generally done in games: Time based and Frame based. The difference is that the unit of step in time based game is based on real time (i.e. number of milliseconds passed), but in a frame based game, the unit of step is based on frame units (i.e. the number of frames passed). For our game we are going to use a frame based counter. We'll have a counter which we'll increment by one in the game loop, which runs each frame, and so will basically give us the number of frames which have passed since the game started. 
Go ahead and declare a variable after the other variable declarations in the Main class: private var ghost_turret:Turret; private var turrets:Array = []; private var bullets:Array = []; private var global_time:Number = 0; We increment this variable at the top of the game loop: global_time++; Now based on this counter we can do stuff like creating enemies, which we'll do next.

Step 8: Let's Create Some Enemies

What we want to do now is create enemies on the field every three seconds. But we are dealing with frames here, remember? So after how many frames should we create enemies? Well, our game is running at 30 FPS, thus incrementing the global_time counter 30 times each second. A simple calculation tells us that 3 seconds = 90 frames. At the end of the game loop add the following if block: if (global_time % 90 == 0) { } What is that condition about? We use the modulo (%) operator, which gives the remainder of a division - so global_time % 90 gives us the remainder when global_time is divided by 90. We check whether the remainder is 0, as this will only be the case when global_time is a multiple of 90 - that is, the condition returns true when global_time equals 0, 90, 180 and so on... This way, we achieve a trigger every 90 frames, or 3 seconds. Before we create the enemy, declare another array called enemies just below the turrets and bullets arrays. This will be used to store references to enemies on the stage.
private var ghost_turret:Turret; private var turrets:Array = []; private var bullets:Array = []; private var enemies:Array = []; private var global_time:Number = 0; Also declare an enemy variable at the top of the game loop: global_time++; var turret:Turret; var bullet:Bullet; var enemy:Enemy; Finally add the following code inside the if block we created earlier: enemy = new Enemy(); enemy.x = 410; enemy.y = 30 + Math.random() * 370; enemies.push(enemy); addChild(enemy); Here we create a new enemy, position it randomly at the right of the stage, push it into the enemies array and add it to the stage.

Step 9: Updating the Enemies

Just like we update the bullets in the game loop, we update the enemies. Put the following code below the turret for...each loop: for (var j:int = enemies.length - 1; j >= 0; j--) { enemy = enemies[j]; enemy.update(); if (enemy.x < 0) { enemies.splice(j, 1); enemy.parent.removeChild(enemy); continue; } } Just like we did a boundary check for bullets, we check for enemies too. But for enemies we just check whether they went out of the left side of the stage, as they only move right-to-left. You should see enemies coming from the right if you run the game now.

Step 10: Give the Enemies Some Health

Every enemy has some life/health and so will ours. We will also show the remaining health on the enemies. Let's declare some variables in the Enemy class for the health stuff: private var health_txt:TextField; private var health:int; private var speed_x:Number; private var speed_y:Number; We initialize the health variable in the constructor next. Add the following to the Enemy constructor: health = 2; Now we initialize the health text variable to show at the center of the enemy.
We do so in the draw function: health_txt = new TextField(); health_txt.height = 20; health_txt.width = 15; health_txt.textColor = 0xffffff; health_txt.x = -5; health_txt.y = -8; health_txt.text = health + ""; addChild(health_txt); All we do is create a new TextField, set its color, position it and set its text to the current value of health. Finally we add a function to update the enemy's health: public function updateHealth(amount:int):int { health += amount; health_txt.text = health + ""; return health; } The function accepts an integer to add to the health, updates the health text, and returns the final health. We'll call this function from our game loop to update each enemy's health and detect whether it's still alive.

Step 11: Shooting the Enemies

First let's modify our shoot function a bit. Replace the existing shoot function with the following: private function shoot(turret:Turret, enemy:Enemy):void { var angle:Number = Math.atan2(enemy.y - turret.y, enemy.x - turret.x) / Math.PI * 180; turret.rotation = angle; var new_bullet:Bullet = new Bullet(angle); new_bullet.x = turret.x + Math.cos(turret.rotation * Math.PI / 180) * 25; new_bullet.y = turret.y + Math.sin(turret.rotation * Math.PI / 180) * 25; bullets.push(new_bullet); addChild(new_bullet); } The shoot function now accepts two parameters. The first is a reference to a turret which will do the shooting; the second is a reference to an enemy towards which it will shoot. The new code here is similar to that in the Turret class's update function, but instead of the mouse's position we now use the enemy's coordinates. So now you can remove all the code from the update function of the Turret class. Now how to make the turrets shoot at enemies? Well, the logic is simple for our game: we make all the turrets shoot the first enemy in the enemies array. What? Let's put in some code and then try to understand.
Add the following lines at the end of the for...each loop used to update the turrets: for each(turret in turrets) { turret.update(); for each(enemy in enemies) { shoot(turret, enemy); break; } } For every turret we now update it, then iterate the enemies array, shoot the first enemy in the array and break from the loop. So essentially each turret shoots at the earliest created enemy, as it is always at the beginning of the array. Try running the game and you should see turrets shooting the enemies. But wait, what's that bullet stream flowing? Looks like they are shooting too fast. Let's see why.

Step 12: Turrets Are Shooting Too Fast

As we know, the game loop runs every frame, i.e. 30 times a second in our case, so the shooting statement we added in the previous step gets called at the speed of our game loop, and hence we see a stream of bullets flowing. Looks like we need a timing mechanism inside the turrets too. Switch over to the Turret class and add the following code: private var local_time:Number = 0; private var reload_time:int;

local_time: Our counter is called local_time in contrast to the global_time in the Main class. This is for two reasons: first, because this variable is local to the Turret class; second, because it doesn't always go forward like our global_time variable - it will reset many times during the course of the game.

reload_time: This is the time required by the turret to reload after shooting a bullet. Basically it's the time between two shots fired by a turret. Remember, all time units in our game are in terms of frames.
Increment the local_time variable in the update function and initialize the reload_time in the constructor: public function update():void { local_time++; } public function Turret() { reload_time = 30; draw(); } Next add the following two functions at the end of the Turret class: public function isReady():Boolean { return local_time > reload_time; } public function reset():void { local_time = 0; } isReady returns true only when the current local_time is greater than the reload_time, i.e. when the turret has reloaded. And the reset function simply resets the local_time variable, to start it reloading again. Now back in the Main class, modify the shoot code in the game loop we added in the previous step to the following: for each(turret in turrets) { turret.update(); if (!turret.isReady()) continue; for each(enemy in enemies) { shoot(turret, enemy); turret.reset(); break; } } So now, if the turret isn't ready (isReady() returns false), we continue with the next iteration of the turret loop. You will see that the turrets fire at an interval of 30 frames, or 1 second, now. Cool!

Step 13: Limit the Turret Range

Still, something's not right. The turrets shoot at enemies irrespective of the distance between them. What's missing here is the range of a turret. Each turret should have its own range inside which it can shoot an enemy.
Add another variable to the Turret class called range and set it to 120 inside the constructor: private var reload_time:int; private var local_time:Number = 0; private var range:int; public function Turret() { reload_time = 30; range = 120; draw(); } Also add a function called canShoot at the end of the class: public function canShoot(enemy:Enemy):Boolean { var dx:Number = enemy.x - x; var dy:Number = enemy.y - y; if (Math.sqrt(dx * dx + dy * dy) <= range) return true; else return false; } Every turret can shoot an enemy only when it meets certain criteria - for example, you could let the turret shoot only red enemies with less than half their life and not more than 30px away. All such logic to determine whether the turret is able to shoot an enemy or not will go in the canShoot function, which returns true or false according to the logic. Our logic is simple: if the enemy is within the range, return true; otherwise return false. So when the distance between the turret and enemy (Math.sqrt(dx * dx + dy * dy)) is less than or equal to range, it returns true. A little more modification in the shoot section of the game loop: for each(turret in turrets) { turret.update(); if (!turret.isReady()) continue; for each(enemy in enemies) { if (turret.canShoot(enemy)) { shoot(turret, enemy); turret.reset(); break; } } } Now the turret will shoot only if the enemy is within its range.

Step 14: Collision Detection

A very important part of every game is the collision detection. In our game the collision check is done between bullets and enemies. We will be adding the collision detection code inside the for loop which updates the bullets in the game loop. The logic is simple: for every bullet we traverse the enemies array and check if there's a collision between them. If so, we remove the bullet, update the enemy's health and break out of the enemy loop, since that bullet is gone.
Let's add some code: for (i = bullets.length - 1; i >= 0; i--) { bullet = bullets[i]; // if the bullet isn't defined, continue with the next iteration if (!bullet) continue; bullet.update(); if (bullet.x < 0 || bullet.x > stage.stageWidth || bullet.y < 0 || bullet.y > stage.stageHeight) { bullets.splice(i, 1); bullet.parent.removeChild(bullet); continue; } for (var k:int = enemies.length - 1; k >= 0; k--) { enemy = enemies[k]; if (bullet.hitTestObject(enemy)) { bullets.splice(i, 1); bullet.parent.removeChild(bullet); if (enemy.updateHealth(-1) == 0) { enemies.splice(k, 1); enemy.parent.removeChild(enemy); } break; } } } We use ActionScript's hitTestObject function to check for a collision between the bullet and enemy. If a collision occurs, the bullet is removed in the same way as when it leaves the stage. The enemy's health is then updated using the updateHealth method, to which we pass -1 (the bullet's damage as a negative amount). If the updateHealth function returns an integer less than or equal to 0, this means the enemy is dead, and so we remove it in the same way as the bullet. And our collision detection is done!

Step 15: Why Reverse the "For" Loops?

Remember that we traverse the enemies and bullets in reverse in our game loop. Let's understand why. Let's suppose we used an ascending for loop. We are on index i=3 and we remove a bullet from the array. On removal of the item at position 3, its space is filled by the item then at position 4. So now the item previously at position 4 is at 3. After the iteration, i increments by 1 and becomes 4, and so the item at position 4 is checked. Oops, you see what happened just now? We just missed the item now at position 3, which shifted back as the result of splicing. And so we use a reverse for loop, which removes this problem.

Step 16: Displaying the Turret's Range

Let's add some extra stuff to make the game look good. We'll add functionality to display a turret's range when the mouse is hovered over it.
Switch over to the Turret class and add some variables to it: private var range:int; private var reload_time:int; private var local_time:Number = 0; private var body:Sprite; private var range_circle:Sprite; Next update the draw function to the following: private function draw():void { range_circle = new Sprite(); var g:Graphics = range_circle.graphics; g.beginFill(0x00D700); g.drawCircle(0, 0, range); g.endFill(); range_circle.alpha = 0.2; range_circle.visible = false; addChild(range_circle); body = new Sprite(); g = body.graphics; g.beginFill(0xD7D700); g.drawCircle(0, 0, 20); g.beginFill(0x800000); g.drawRect(0, -5, 25, 10); g.endFill(); addChild(body); } We break the graphics of the turret into two parts: the body and the range graphic. We do this so as to give an ordering to the different parts of the turret. Here we require the range_circle to be behind the turret's body, and so we add it to the stage first. Finally, we add two mouse listeners to toggle the range graphic: private function onMouseOver(e:MouseEvent):void { range_circle.visible = true; } private function onMouseOut(e:MouseEvent):void { range_circle.visible = false; } Now attach the listeners to the respective events at the end of the constructor: body.addEventListener(MouseEvent.MOUSE_OVER, onMouseOver); body.addEventListener(MouseEvent.MOUSE_OUT, onMouseOut); If you run the game and try to deploy a turret, you will see a flickering when hovering over the placeholders. Why is that?

Step 17: Removing the Flicker

Remember we set the mouseEnabled property of the ghost turret to false? We did that because the ghost turret was capturing mouse events by coming in between the mouse and the placeholder. The same situation has arisen again, as the turret itself has two children now - its body and the range sprite - which are capturing the mouse events in between. The solution is the same. We can set their individual mouseEnabled properties to false.
But a better solution is to set the ghost turret's mouseChildren property to false. What this does is restrict all the children of the ghost turret from receiving mouse events. Neat, huh? Go ahead and set it to false in the Main constructor: ghost_turret = new Turret(); ghost_turret.alpha = 0.5; ghost_turret.mouseEnabled = false; ghost_turret.mouseChildren = false; ghost_turret.visible = false; addChild(ghost_turret); Problem solved.

Step 18: What Next?

We could extend this demo to include much more advanced features and turn it into a playable game. Some possibilities:

- Better AI logic for selecting and shooting enemies.
- Different types of turrets, bullets and enemies in the game.
- Complex enemy paths instead of straight lines.

Let's see what you can come up with from this basic demo. I'll be glad to hear about your tower defense games, and your comments or suggestions for the series.
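A footnote on Step 15: the splice-while-iterating pitfall is not specific to ActionScript. Here is a quick Python sketch (illustrative only, not part of the tutorial's AS3 code) showing how a forward scan skips elements after a removal, while a reverse scan does not:

```python
def remove_hits_forward(items):
    """Buggy: forward iteration while removing skips shifted items."""
    items = list(items)
    for i in range(len(items)):
        # Like splice(i, 1) in the tutorial's game loop.
        if i < len(items) and items[i] == "dead":
            del items[i]
    return items

def remove_hits_reverse(items):
    """Safe: reverse iteration never revisits shifted indices."""
    items = list(items)
    for i in range(len(items) - 1, -1, -1):
        if items[i] == "dead":
            del items[i]
    return items

data = ["dead", "dead", "live", "dead", "dead"]
print(remove_hits_forward(data))  # ['dead', 'live', 'dead'], items skipped
print(remove_hits_reverse(data))  # ['live']
```

Iterating in reverse means each removal only shifts elements the loop has already visited, which is exactly why the game loop splices safely.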
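A footnote on Step 13: the range test is just a point-in-circle check, and it is easy to sanity-check outside Flash. A Python sketch (illustrative; the coordinates below are made-up values):

```python
import math

def can_shoot(turret_x, turret_y, enemy_x, enemy_y, rng=120):
    # Mirrors the tutorial's canShoot: shoot only when the enemy is
    # inside the circle of radius `rng` centred on the turret.
    return math.hypot(enemy_x - turret_x, enemy_y - turret_y) <= rng

print(can_shoot(0, 0, 60, 80))    # distance 100 <= 120, so True
print(can_shoot(0, 0, 120, 120))  # distance ~169.7 > 120, so False
```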
http://code.tutsplus.com/tutorials/make-a-tower-defense-game-in-as3-enemies-and-basic-ai--active-10935
1. A water engine. 2. A water wheel; especially, a small water wheel driven by water from a street main.

Hydraulic machinery are machines and tools which use fluid power to do work. Heavy equipment is a common example. In this type of machine, high-pressure liquid (called hydraulic fluid) is transmitted throughout the machine to various hydraulic motors and hydraulic cylinders. Pneumatics, on the other hand, is based on the use of a gas as the medium for power transmission, generation and control. A fundamental feature of hydraulic systems is the ability to apply force or torque multiplication in an easy way, without the need for mechanical gears or levers, either by altering the effective areas in two connected cylinders or the effective displacement between a pump and motor. Examples: (1) two hydraulic cylinders interconnected; (2) [1]. This type of circuit can use inexpensive, constant displacement pumps. The closed center circuits exist in two basic configurations, normally related to the regulator for the variable pump that supplies the oil: Constant pressure systems (CP-system), standard. Pump pressure always equals the pressure setting for the pump regulator. Component lifetime is prolonged. The system generates a constant power loss related to the regulating pressure drop for the pump regulator: power loss = ΔpLS x sum of flow. The average ΔpLS is around 2 MPa (290 psi). If the pump flow is high the extra loss can be considerable. The power loss also increases if the load pressures vary a lot. The cylinder areas, motor displacements and mechanical torque arms must be designed to match the load pressure in order to bring down the power losses. Pump pressure always equals the maximum load pressure when several functions are run simultaneously, and the power input to the pump equals the (max. load pressure + ΔpLS) x sum of flow.
(1) Load sensing without compensators in the directional valves. Hydraulically controlled LS-pump. (2) Load sensing with up-stream compensator for each connected directional valve. Hydraulically controlled LS-pump. (3) Load sensing with down-stream compensator for each connected directional valve. Hydraulically controlled LS-pump. (4) Load sensing with a combination of up-stream and down-stream compensators. Hydraulically controlled LS-pump.

Open-loop: Pump-inlet and motor-return (via the directional valve) are connected to the hydraulic tank. The term loop applies to feedback; the more correct term is open versus closed "circuit". In general, valves, cylinders and pumps have female threaded bosses for the fluid connection, and hoses have female ends with captive nuts. A male-male fitting is chosen to connect the two. Many standardized systems are in use.

Hydraulic Power System Analysis, A. Akers, M. Gassman, & R. Smith, Taylor & Francis, New York, 2006, ISBN 0-8247-9956-9.

This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors.
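The force-multiplication claim above (two connected cylinders with different piston areas) can be illustrated numerically. The sketch below is not from the source; the force and areas are made-up example values, and the only physics used is Pascal's principle (pressure is equal throughout the connected fluid):

```python
def output_force(input_force, input_area, output_area):
    # Pascal's principle: p = F/A is the same in both cylinders,
    # so F_out = p * A_out = F_in * (A_out / A_in).
    pressure = input_force / input_area
    return pressure * output_area

# 100 N applied to a 0.001 m^2 piston, driving a 0.01 m^2 piston:
print(output_force(100.0, 0.001, 0.01))  # 1000.0, a tenfold multiplication
```

This is why altering the ratio of effective areas multiplies force without any gears or levers; the trade-off is that the larger piston moves proportionally less.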
http://www.answers.com/topic/water-motor
Created on 2020-03-26 01:25 by vstinner, last changed 2020-04-01 15:09 by vstinner. This issue is now closed.

$ ./python -m test -R 3:3 test__xxsubinterpreters -m test_ids_global
0:00:00 load avg: 0.80 Run tests sequentially
0:00:00 load avg: 0.80 [1/1] test__xxsubinterpreters
beginning 6 repetitions
123456
......
test__xxsubinterpreters leaked [1, 1, 1] references, sum=3
test__xxsubinterpreters leaked [1, 1, 1] memory blocks, sum=3
test__xxsubinterpreters failed
== Tests result: FAILURE ==
1 test failed: test__xxsubinterpreters
Total duration: 819 ms
Tests result: FAILURE

It started to leak since: commit 7dd549eb08939e1927fba818116f5202e76f8d73 Author: Paulo Henrique Silva <ph.silva@carta.com> Date: Tue Mar 24 23:19:58 2020 -0300 bpo-1635741: Port _functools module to multiphase initialization (PEP 489) (GH-19151)

The following test is enough to reproduce the leak:

def test_ids_global(self):
    interp1 = interpreters.create()
    script, rpipe = _captured_script("pass")
    interpreters.run_string(interp1, script)
    rpipe.close()
    interp2 = interpreters.create()
    script, rpipe = _captured_script("pass")
    interpreters.run_string(interp2, script)
    rpipe.close()

I've got it, will investigate asap. The module still uses static state. Fixed the leak and will convert it to use per-module state in a separate issue.

New changeset b09ae3ff43111a336c0b706ea32fa07f88c992d9 by Paulo Henrique Silva in branch 'master': bpo-40071: Fix refleak in _functools module (GH19172)
New changeset eacc07439591c97f69ab4a3d17391b009cd78ae2 by Paulo Henrique Silva in branch 'master': bpo-40071: Fix potential crash in _functoolsmodule.c (GH-19273) I closed the issue, the leak is now fixed and _functools has been fixed. I created bpo-40137: TODO list when PEP 573 "Module State Access from C Extension Methods" will be implemented.
https://bugs.python.org/issue40071
UI launching a scene
- chriswilson

Hi all, I have made a game (which I have posted about before) using the scene module. I am trying to get a pyui file to launch the scene. After some trial and error I managed it like this:

    import ui
    from black_white import *
    from time import sleep
    from scene import run

    @ui.in_background
    def start(sender):
        run(Game(), show_fps=False)

    v = ui.load_view()
    v.present('sheet', hide_title_bar=True)

black_white is the name of the script running my scene Game. I could only get it to work by running the scene 'inside' the UI. I had to import time.sleep and scene.run as my script needs them, which seems quite messy. When I try to close the UI view v at the end of the start function, everything closes! Does anyone know of a better way to do this?

You might be able to use a SceneView. Maybe it's just me, but I couldn't understand your question clearly. So this might not be the answer. But oh well. You can create a SceneView:

    sv = ui.SceneView()

Add your scene as a parameter of sv, then use main_view.add_subview(sv) in your function. Your button calls the function; then just clear the main_view and call the line above.

Here you can find an example: PhotoTextV2. It's a ui.View with two scrollviews, one for buttons and another one for the scene.SceneView.

edit: Have you tried ui.delay instead of time.sleep?
- chriswilson
https://forum.omz-software.com/topic/3067/ui-launching-a-scene
Python Fix Imports

Automatically split and sort your import statements in your Python scripts.

Details
Installs
- Total 12K
- Win 5K
- OS X 3K
- Linux 4K

Readme
- Source
- raw.githubusercontent.com

Python Fix Imports

Python Fix Imports is a Sublime Text 3 plugin that can automatically reorganize the import statements of your Python scripts. Please read the "Rationale" section for more information.

This plugin comes from a script that was written for the Buildbot project, in order to help developers ensure they properly organize the import statements in their Python files.

Rationale

The beginning of each Python script is the part of the code that is likely to evolve the most over the lifetime of the file. Import statements get added, removed, and reorganized all the time. Thanks to distributed version control systems such as Git, several people can easily work at the same time on the same file, and the management of the import statements is likely to cause conflicts when each developer adds their modifications.

We really started needing an automatic reorganization script when we set up an automatic merge of several branches all together. Most of the time, the conflicts were found to be on the import lines.

Here are the rules this fiximports script enforces:

Rule 1

Each import statement only imports one method, class or module.

Yes:

    from abc import dce
    from abc import fgh

No:

    from abc import dce, fgh
    from abc import (dce, fgh)
    from abc import dce, \
                    fgh

fiximports automatically splits import statements that use a comma. \ and parenthesis are not supported.

Bonus: let's say you want to find where and how an object "object_name" is imported. This rule ensures you will always find the import occurrences with the following search pattern: ``import object_name``. No need for regexes, only ``import `` + what you are looking for.

Rule 2

Import statements are organized in blocks, separated by an empty line. Each block is alphabetically sorted.
This removes any ambiguity in the placement of an import line in a given block. When two developers on two different branches want to add the same import in the same file, the location of this line will be the same, and so the merge, if any, will be obvious.

Yes:

    from abc import aaaa
    from abc import bbbb
    from abc import cccc

No:

    from abc import bbbb
    from abc import aaaa
    from abc import cccc

Sorting only occurs within a given block; if for any reason an import statement needs to be placed after another one, just add an empty line. fiximports can sort all import statements at once (preserving the 'group' splitting).

In some projects, I tend to enforce the ordering of the groups themselves. First the standard library imports:

    import json
    import login
    import os

Standard library imports in the form from ... import:

    from textwrap import dedent
    from twisted.internet import defer

Project modules with their complete name (always use from __future__ import absolute_import):

    from myproject.the.module.name import ClassName
    from myproject.the.other.module.name import TheOtherClassName

Example

Let's look at the following code:

    import datetime
    import collections
    from io import BytesIO, UnsupportedOperation
    from .hooks import default_hooks
    from .structures import CaseInsensitiveDict
    from .auth import HTTPBasicAuth
    from .cookies import cookiejar_from_dict, get_cookie_header
    from .packages.urllib3.fields import RequestField
    from .packages.urllib3.filepost import encode_multipart_formdata
    from .packages.urllib3.util import parse_url
    from .packages.urllib3.exceptions import DecodeError, ReadTimeoutError, ProtocolError, LocationParseError
    from .exceptions import HTTPError, MissingSchema, InvalidURL, ChunkedEncodingError, ContentDecodingError, ConnectionError, StreamConsumedError
    from .utils import guess_filename, get_auth_from_url, requote_uri, stream_decode_response_unicode, to_key_val_list, parse_header_links, iter_slices, guess_json_utf, super_len, to_native_string
    from .compat import cookielib, urlunparse, urlsplit, urlencode, str, bytes, StringIO, is_py2, chardet, json, builtin_str, basestring
    from .status_codes import codes

This automatically becomes, with this plugin:

    import collections
    import datetime
    from .hooks import default_hooks
    from .structures import CaseInsensitiveDict
    from io import BytesIO
    from io import UnsupportedOperation
    from .auth import HTTPBasicAuth
    from .compat import StringIO
    from .compat import basestring
    from .compat import builtin_str
    from .compat import bytes
    from .compat import chardet
    from .compat import cookielib
    from .compat import is_py2
    from .compat import json
    from .compat import str
    from .compat import urlencode
    from .compat import urlsplit
    from .compat import urlunparse
    from .cookies import cookiejar_from_dict
    from .cookies import get_cookie_header
    from .exceptions import ChunkedEncodingError
    from .exceptions import ConnectionError
    from .exceptions import ContentDecodingError
    from .exceptions import HTTPError
    from .exceptions import InvalidURL
    from .exceptions import MissingSchema
    from .exceptions import StreamConsumedError
    from .packages.urllib3.exceptions import DecodeError
    from .packages.urllib3.exceptions import LocationParseError
    from .packages.urllib3.exceptions import ProtocolError
    from .packages.urllib3.exceptions import ReadTimeoutError
    from .packages.urllib3.fields import RequestField
    from .packages.urllib3.filepost import encode_multipart_formdata
    from .packages.urllib3.util import parse_url
    from .status_codes import codes
    from .utils import get_auth_from_url
    from .utils import guess_filename
    from .utils import guess_json_utf
    from .utils import iter_slices
    from .utils import parse_header_links
    from .utils import requote_uri
    from .utils import stream_decode_response_unicode
    from .utils import super_len
    from .utils import to_key_val_list
    from .utils import to_native_string

Indeed, the beginning of the file is much more verbose, but merges will be easier: since we switched to this paradigm, we have had almost no conflicts on these lines.

Installation

To avoid dependencies, all necessary modules are included within the package.

Using Sublime Package Control:

- Use the cmd+shift+P shortcut, then Package Control: Install Package
- Look for Python Fix Imports and install it.

Using the Git repository on GitHub:

Open a terminal and move to the Packages directory (the folder that opens when you use the Preferences > Browse Packages... menu). Then type in the terminal:

    git clone python_fiximports

Settings

Global Settings

You'll find the settings in the Preferences menu (Preferences -> Package Settings -> Python Fix Imports).

    {
        // Automatically fix the imports on save
        "autofix_on_save": false,
        // Enable or disable splitting every import onto its own line (one object import per line)
        "split_import_statements": true,
        // Enable or disable sorting of imports within their own group
        "sort_import_statements": true,
    }

By editing the User settings, your personal preferences will be kept safe across plugin upgrades.

Per-project settings

    {
        "settings": {
            "python_fiximports": {
                "autofix_on_save": true
            }
        }
    }

Usage

Formatting is applied to the whole document.

Using the keyboard:

- GNU/Linux: ctrl+alt+shift+i
- Windows: ctrl+alt+shift+i
- OSX: ctrl+command+shift+i

SideBar

Right click on the file(s) or folder(s).

On Save

Imports are reorganized automatically on save if the autofix_on_save setting is set.

Command Palette

Bring up the Command Palette and select one of the following options:

Python Fix Imports: Execute Fix imports in the current file immediately.

Enable Python Fix Imports (until restart): Toggle the general setting autofix_on_save to Enabled until Sublime restarts (overrides the project and global settings).

Disable Python Fix Imports (until restart): Toggle the general setting autofix_on_save to Disabled until Sublime restarts (overrides the project and global settings).
Disable Python Fix Imports for this file (until restart): Disable the automatic fixing of the import statements in the current file, independently of the global setting autofix_on_save.

Enable Python Fix Imports for this file (until restart): Enable the automatic fixing of the import statements in the current file, independently of the global setting autofix_on_save.

Hint: open the Command Palette (ctrl+shift+P) and type Fix... to highlight the full list of commands.
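The two rules above amount to a small, purely textual transformation. As an illustration (a hypothetical helper written for this description, not the plugin's actual implementation; it only handles simple comma-separated `from ... import` lines, not parentheses or backslash continuations), a minimal pure-Python sketch could look like this:

```python
def fix_imports(source: str) -> str:
    """Split comma-separated 'from X import a, b' lines (Rule 1),
    then alphabetically sort each blank-line-separated block (Rule 2)."""
    lines = source.splitlines()

    # Rule 1: one imported name per statement.
    split_lines = []
    for line in lines:
        if line.startswith("from ") and " import " in line:
            module, names = line.split(" import ", 1)
            for name in names.split(","):
                split_lines.append(f"{module} import {name.strip()}")
        else:
            split_lines.append(line)

    # Rule 2: sort each block, where blocks are separated by empty lines.
    blocks, current = [], []
    for line in split_lines:
        if line.strip() == "":
            blocks.append(sorted(current))
            current = []
        else:
            current.append(line)
    blocks.append(sorted(current))

    return "\n\n".join("\n".join(block) for block in blocks)
```

For instance, `fix_imports("from abc import dce, fgh")` yields the two split lines from the Rule 1 example, and separate blocks remain separately sorted.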
https://packagecontrol.io/packages/Python%20Fix%20Imports
I am trying to generate a 2 kHz square wave via an MCP4725 on an RPi3. I need to vary the voltage somewhere between 0 to 5Vpp, so I cannot use the digital GPIO pins, and I got this MCP4725 from Adafruit. I connected the MCP4725 and I can see it on the I2C bus. I copied the Adafruit example and modified it a little bit; however, when I run the following Python code it does not produce the 2 kHz square wave. The square wave produced is only about 800 Hz. If I reduce the sleep to 0.00001, it gives about 2 kHz but it is not stable and oscillates from 1 kHz to 2 kHz. This is unacceptable for my application.

    import time
    # Import the MCP4725 module.
    import Adafruit_MCP4725

    # Create a DAC instance.
    dac = Adafruit_MCP4725.MCP4725()

    # Loop forever alternating through different voltage outputs.
    print('Press Ctrl-C to quit...')
    while True:
        dac.set_voltage(0)
        time.sleep(0.00025)
        dac.set_voltage(4095)
        time.sleep(0.00025)

I have taken a video of this, and I wonder if it is a software issue? I am aware of the following possibilities:

1) The default I2C speed on the RPi3 is too low; I should change it to 3400000 (max 3.4 Mbps according to the MCP4725)
2) A bad cable, but my cable is short, only about 10 cm
3) The Adafruit Python lib is slow, so I should change to the pigpio Python lib instead. But I have no idea how to use it with the MCP4725.

Thank you!
Rolly
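For context on why each update is comparatively slow: every `set_voltage()` call is a multi-byte I2C transaction. The MCP4725 "fast mode" write packs a 12-bit value into just two data bytes; here is a small illustrative sketch of that packing (the helper name is mine, and it is independent of any I2C library):

```python
def mcp4725_fast_mode_frame(value: int) -> bytes:
    """Pack a 12-bit DAC value into the MCP4725 two-byte 'fast mode' frame:
    upper byte = 00 (normal power-down mode) + data bits D11..D8,
    lower byte = data bits D7..D0."""
    if not 0 <= value <= 0x0FFF:
        raise ValueError("DAC value must fit in 12 bits (0-4095)")
    return bytes([(value >> 8) & 0x0F, value & 0xFF])
```

Even so, each update costs roughly 30 bit times on the bus (address byte, two data bytes, ACKs), so at the default 100 kHz I2C clock the transaction time alone limits how fast and how consistently the output can toggle; raising the bus speed, as suggested in point 1, is the usual first fix.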
https://www.raspberrypi.org/forums/viewtopic.php?f=44&t=190134&p=1195068
Transpile TypeScript modules for Dynamics 365 simply using rollup.js

Summary

Today we're going to learn how to bring Dynamics form customisations into the modern era. No more files that are thousands of lines long, no more syntax errors, fewer bugs, and lower time to deploy.

The community is filling with useful ideas on how to utilise TypeScript for Dynamics customisations. You may have seen how Scott Durow recently detailed getting started with TypeScript, or Max Ewing's post on how to create packages for Dynamics in TypeScript. This post focuses on similar core concepts, but uniquely expands on the focus of modules and build processes to modernise Dynamics customisations.

Some assumptions

This post will begin with some assumptions (uh oh).

- You're a Dynamics developer
- You're using (or thinking of using) TypeScript for form customisations
- You're using (or thinking of using) a modular design structure in your form scripts

If either of points 2 or 3 is false, I'd like to briefly try to convince you to consider them. (If both are true, you may want to skip the next two sections.)

Giving modular TypeScript a go

Dynamics developers are increasingly turning to TypeScript to write their form customisations in Dynamics. The reasons behind this are TypeScript's benefits which, in a Dynamics context, can be summarised as:

- Providing all the features of JavaScript, with the addition of static type checking
- Transpiling (think: compiling) down to plain old JavaScript

In practice, this means intellisense against the Dynamics Client API; no more typos causing syntax errors at runtime! All that's required is a simple import of @types/Xrm into your code (which I explain later).

Modular code design

As projects evolve and grow, Dynamics form scripts can become large, unstructured and difficult to maintain. This is especially prominent when following the pattern of providing just one script per form. As an alternative, I'd like to suggest modules.
Good authors divide their books into chapters and sections; good programmers divide their programs into modules.

Like a book chapter, modules are just clusters of words (or code, as the case may be). Good modules, however, are self-contained with distinct functionality, allowing them to be shuffled, removed, or added as necessary, without disrupting the system as a whole.

An example of modules in Dynamics

In this example, I demonstrate a requirement that gets the name of a lookup field value and, based on that value, toggles another field's value. The code to get a lookup's name could be used elsewhere across different forms, so I've separated it into its own module called Common.

Common is then imported into the Contact script and used in an onLoad function. It's important to note that Contact is not a module, but a namespace: its onLoad function doesn't require an instantiated object and can be called directly from a Dynamics event handler.

If you're following along, make sure to change your field schema names to fields that exist in your testing environment (mine are "dc_country" and "dc_umbrellarequired"), where the first is a lookup and the second is a boolean.

Rollup our example

Import and export syntax works while we develop: we can tell because intellisense is provided for imported modules. To a browser, these words currently don't mean anything, and the relative paths of the files certainly don't. We can't serve our TypeScript files directly to the browser, so they must be transpiled to JavaScript first.

That's where rollup.js comes in. It's going to give us the tools to create a build pipeline that:

- Transpiles our code
- Recursively looks through our code for dependencies from a given entry point
- Optionally runs plugins such as babel
- Bundles our code into one Dynamics- and browser-ready output file

And it's really simple. Here's how.
Step 1: Install rollup

In the code directory from the code sample above, run the following from the command line:

    npm install --global rollup
    npm install --save-dev rollup-plugin-typescript

Step 2: Create a rollup config file

Rollup can be run manually through the CLI each time, but config files are too powerful and convenient to pass up! So, create rollup.config.js in your code's root. Ensure the input relative path is set to your contact script's directory as necessary.

Step 3: Run your rollup

From the command line, run the following command to build your code and output it to the /build/ directory:

    rollup -c --experimentalCodeSplitting

And that's it! Upload the output .js file (it should be in your /build/ directory) to Dynamics, add it to a Dynamics form, and register the onLoad message as Contact.onLoad.

Summary

This post has detailed how we can structure our modular code on small to enterprise Dynamics 365 projects to increase our scripts':

- Maintainability
- Readability
- Re-usability

We've learnt how to bundle our individual TypeScript-written modules into single JavaScript files that are browser-ready and usable to enhance forms in Dynamics with custom business logic.

Extras

- There are many bundlers out there in the wild. This walkthrough uses rollup.js. Experiment with Webpack and Gulp to achieve the same results.
- Test your TypeScript files with Xrm tests. Here's a recent post of mine detailing how Web API calls can be tested using xrm-mock and sinon.js.
- Add plugins to rollup.js. How about eslint to lint your code when it's built, or uglify to minify your build output for production?
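The original post embeds its rollup.config.js inline, and that embed has not survived here. As a rough sketch only (the entry path, output directory, and format are my assumptions, not the author's original config), a configuration consistent with the steps above might look like:

```javascript
// rollup.config.js (illustrative sketch; paths and options are assumptions)
import typescript from 'rollup-plugin-typescript';

export default {
  // Entry point: point this at your Contact form script
  input: 'src/Contact.ts',
  output: {
    // With --experimentalCodeSplitting, output goes to a directory
    dir: 'build',
    format: 'es',
  },
  plugins: [
    // Transpile the TypeScript modules during the bundle step
    typescript(),
  ],
};
```

The key design point is that rollup starts from one entry file and walks the import graph itself, so only modules you actually reference end up in the bundle.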
https://medium.com/capgemini-dynamics-365-team/transpile-typescript-modules-for-dynamics-365-simply-using-rollup-js-9fb0f4f3eebd
No SwiftUI app, besides a Hello World, has just one view. When you want to add more than one view, you need to add them to a stack. There are 3 kinds of stacks:

- HStack aligns items on the X axis
- VStack aligns items on the Y axis
- ZStack aligns items on the Z axis

Let's go back to the Hello World app:

    import SwiftUI

    struct ContentView: View {
        var body: some View {
            Text("Hello World")
        }
    }

To add a second Text view we can't do this:

    struct ContentView: View {
        var body: some View {
            Text("Hello World")
            Text("Hello again!")
        }
    }

but we have to embed those views into a stack. Let's try with VStack:

    struct ContentView: View {
        var body: some View {
            VStack {
                Text("Hello World")
                Text("Hello again!")
            }
        }
    }

See? The views are aligned vertically, one after the other.

Here's HStack:

And here's ZStack, which puts items one in front of the other, and in this case generates a mess:

ZStack is useful, for example, to put a background image with some text over it. That's the simplest use case you can think of.

In SwiftUI we organize all our UI using those 3 stacks. We also use Group, a view that, similarly to stacks, can be used to group together multiple views, but contrary to stack views, it does not affect layout.

    VStack {
        Group {
            Text("Hello World")
            Text("Hello again!")
        }
    }

One use case that might come in handy for groups, besides applying modifiers to child views as we'll see next, is that views can only have 10 children. So you can use Group to group together up to 10 views into one Group.

The stack views are views too, and so they have modifiers. Sometimes modifiers affect the view they are applied to, like in this case:

    Text("Hello World")
        .font(.largeTitle)

Sometimes, however, they are used to apply the same property to multiple views at the same time. Like this:

    VStack {
        Text("Hello World")
        Text("Hello again!")
    }
    .font(.largeTitle)

See? By applying the font() modifier to the VStack, the .largeTitle font was applied to both Text views.

This is valid for modifiers that we call environment modifiers. Not every modifier can work this way, but some do, like in the above example.
https://flaviocopes.com/swiftui-stacks/
Spidermonkey is the JavaScript interpreter from the Mozilla project.

WWW:

No installation instructions: this port has been deleted. The package name of this deleted port was: spidermonkey

PKGNAME: spidermonkey

NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.

    ===> The following configuration options are available for spidermonkey-1.7.0_1:
         UTF8=Off (default) "Enable UTF8 support"
    ===> Use 'make config' to modify these settings

Number of commits found: 33

Rename lang/spidermonkey to lang/spidermonkey17 in preparation for the import of lang/spidermonkey18.
Approved by: Dan Rench <citric@cubicone.tmetic.com> (maintainer)
Approved by: eadler (mentor)

Mark as broken on sparc64-9: fails to link.
Hat: portmgr

Remove more tags from pkg-descr files of the form:
- Name em@i.l or variations thereof.
While I'm here, also fix some whitespace and other formatting errors, including moving WWW: to the last line in the file.

install some missing includes
PR: ports/160649
Submitted by: Stephen Hurd <shurd@sasktel.net>
Approved by: maintainer

- remove MD5

Add option for UTF8 support
PR: 140124
Submitted by: Mirko Zinn <mail@derzinn.de>
Approved by: maintainer

- Fix pkg-plist (install all header files)
PR: ports/134770
Submitted by: Dan Rench <citric@cubicone.tmetic.com>
Approved by: maintainer

- Remove duplicates from MAKE_ENV after inclusion of CC and CXX in default MAKE_ENV

Updates the port to Javascript 1.7. Much thanks to Bernhard Fröhlich for doing the heavy lifting.
PR: 125191
Submitted by: maintainer

Remove always-false/true conditions based on OSVERSION 500000

- Fix build with gcc 4.2
PR: 113094
Submitted by: Anish Mistry <amistry@am-productions.biz>
Approved by: maintainer

- Makefile cleanup
- Update MASTER_SITES
PR: 112144
Submitted by: Dan Rench <citric@cubicone.tmetic.com> (maintainer)

Make SpiderMonkey build with thread-support regardless of whether or not the post-build self-testing is enabled. The self-test was on by default until March, which hid the problem... Take pointy-hat.
Noticed by: Anish Mistry
Approved by: portmgr (erwin)

- Bump portrevision for the previous update
- INSTALLS_SHLIB -> USE_LDCONFIG
- Beautify master sites
- Install a versioned lib
Approved by: citric@cubicone.tmetic.com (maintainer timeout, 16 days)

- Expose jsstr.h
- Move plist to Makefile
PR: ports/96549
Submitted by: sat
Approved by: krion (mentor), maintainer

The testsuite breaks at certain times of day depending on TZ, so disable the testsuite by default.
PR: 94765
Submitted by: Dan Rench <citric@cubicone.tmetic.com>

- Fix on 64-bit arches
PR: ports/92396
Submitted by: Dan Rench <citric@cubicone.tmetic.com> (maintainer)

BROKEN on !i386 and on 4.x: Does not compile

Remove / from the DISTFILES, properly use grouped master-sites.
Noticed by: Ion-Mihai Tetcu <itetcu@people.tecnik93.com>

Spidermonkey version update to 1.5 with fixes for ia64/amd64. Update source to spidermonkey 1.5, patched (much thanks to Anish Mistry) to fix compilation problems under amd64 (and presumably ia64 too, but untested) and to make the build thread-safe.
PR: ports/91522
Submitted by: Dan Rench <citric@cubicone.tmetic.com>

Add another patch to fix tests, which fail if the timezone is set to UTC. Thanks to Boris Samorodov for assistance in debugging this.
Detected by: pointyhat
Approved by: portmgr (krion)

Unbreak for all platforms (tested on amd64 and i386) -- use -fPIC on sparc64 and -fpic elsewhere. While here, make the following improvements:

- ignore the vendor's fdlibm and use our own -lm. fdlibm is derived from the same msun as ours, but spidermonkey was mysteriously linking with _both_. All mozilla-ports seem to have the same problem right now;
- use our -lreadline instead of compiling the vendor's own libeditline;
- fix all warnings (clean build with -Wall -Werror);
- link the installed executable (js) against the shared library libjs.so instead of against the individual objects;
- unless WITHOUT_TEST is set, download and run the vendor's own tests in post-build (this triggers USE_PERL_BUILD). Some tests had to be patched from Mozilla's CVS, because the released tarball of them had not been updated since 2002.

Bump PORTREVISION.
Approved by: portmgr (marcus)
Approved by: maintainer timeout

Change PORTNAME to spidermonkey to correspond with dirname.
PR: 82320
Submitted by: Alex Kapranoff <kappa (at) rambler-co.ru>
Approved by: maintainer

- Avoid using command execution to fill variables; they would be executed for all targets, which is not needed
Suggested by: 'the eagle eye' kris

- Update to 1.5-rc6
PR: ports/66208
Submitted by: Dan Rench <citric@cubicone.tmetic.com> (maintainer)

SIZEify (maintainer timeout)

BROKEN on amd64 and ia64: Does not compile (missing -fPIC)

Use PLIST_FILES (bento-tested, marcus-reviewed).

Bump PORTREVISION on all ports that depend on gettext to aid with upgrading. (Part 2)

Add CONFLICT for lang/njs (njs-*)
PR: ports/57972 (initial)
Submitted by: Thierry Thomas <thierry@pompo.net>

Update port: lang/spidermonkey updated to latest source, new contact address. Spidermonkey is the JavaScript interpreter from the Mozilla project. This revision updates the port to the newest version. I've also updated my contact address.
PR: ports/56593
Submitted by: Dan Rench <citric@cubicone.tmetic.com>

Add spidermonkey 1.5.p5, a standalone JavaScript interpreter from the Mozilla project.
PR: 51325
Submitted by: Dan Rench (drench@xnet.com)
http://www.freshports.org/lang/spidermonkey/
I heard that a private constructor prevents object creation from the outside world. When I have code like this:

    public class Product
    {
        public string Name { get; set; }
        public double Price { get; set; }

        Product()
        {
        }

        public Product(string _name, double _price)
        {
        }
    }
https://codedump.io/share/Ek1Smt1gf4RD/1/private-constructor-and-public-parameter-constructor
Soil NPK Sensor Arduino, Description:

In today's episode you will learn how to measure soil nutrient content (Nitrogen, Phosphorus, and Potassium) using the accurate, fast, and stable Soil NPK Sensor, an Arduino Nano, an I2C-supported Oled display module, an HC-05 or HC-06 Bluetooth module, and an Android cell phone application designed in Android Studio.

As you may know, all growing plants need 17 essential elements to grow to their full genetic potential. Of these 17 elements, 14 are absorbed by plants through soil, while the remaining three come from air and water. Nitrogen, Phosphorus, and Potassium, or NPK for short, are the "Big 3" primary nutrients in commercial fertilizers, and each of these fundamental nutrients plays a key role in plant nutrition: Nitrogen is used by plants for lots of leaf growth and good green color; Phosphorus is used by plants to help form new roots and make seeds, fruits, and flowers; while Potassium helps plants make strong stems and keep growing fast.

A certain level of soil nutrients like Nitrogen, Phosphorus, and Potassium should be maintained in the soil, which is only possible if you know how to measure these three elements. Take the corn plant as an example: its leaves show characteristic symptoms when the nitrogen, phosphorus, and potassium nutrients are deficient in the soil. On the other hand, if you add too much nitrogen, your plants may look lush and green, but their ability to fruit and flower will be greatly reduced. If you add too much phosphorus, the plant's ability to take up required micronutrients, particularly iron and zinc, is reduced, causing the plants to grow poorly and even die. The same thing happens when you add too much potassium to the soil: it disrupts the uptake of other important nutrients, such as calcium, nitrogen, and magnesium. So, both the excess and the deficiency of these three elements are bad for the plants.
So, before you plan to add fertilizer, first take a few samples and check the Nitrogen, Phosphorus, and Potassium levels using the Soil NPK Sensor.

For the initial tests, we made temporary connections on a breadboard. First, we started with the Oled display module and displayed the Nitrogen, Phosphorus, and Potassium values on it. After performing the initial tests, and once satisfied with the values, we moved on to the HC-05 Bluetooth module and displayed the NPK values on the Android cell phone application. You can also keep the Oled display module connected; this way you can monitor the NPK values both on the Oled display module and on the Android cell phone application. So now you know exactly what you are going to learn after reading this article. Without any further delay, let's get started!

Soil NPK Sensor:

This is the Soil NPK Sensor: N for Nitrogen, P for Phosphorus, and K for Potassium. It is basically a soil Nitrogen, Phosphorus, and Potassium 3-in-1 fertility sensor, used for detecting the content of nitrogen, phosphorus, and potassium in the soil. This Soil NPK Sensor offers high precision (accuracy up to ±2%), fast measurement, and good stability. Its resolution is up to 1mg/kg (1mg/l). It is easy to carry and can even be used by non-professionals: all you need to do is insert the stainless steel rods into the soil and read the soil content. The Soil NPK Sensor thus gives the user an accurate picture of the soil fertility status; the user can measure the soil condition at any time and, according to that condition, balance the soil fertility to achieve a suitable growth environment for the plants.
Soil NPK Sensor Features: This Soil NPK Sensor is provided with high-quality stainless steel probes which are completely rust-resistant, electrolytic resistant, salt, and alkali corrosion resistant. Therefore this Soil NPK Sensor is suitable for all kinds of soil. Another feature that I really like is its ability to detect alkaline soil, acid soil, substrate soil, seedling bed soil, and coconut bran soil. Moreover, this Soil NPK Sensor is IP68 grade waterproof and dustproof, to ensure the normal operation of components for a long time. Soil NPK Sensor Specifications: NPK Sensor Pinout: The Soil NPK Sensor has a total of 4 wires. The brown wire is the VCC wire and it should be connected with 9V-24Vdc Power Supply. The Black wire is the GND wire and it should be connected with the Arduino’s GND. The remaining two wires which are the Blue and Yellow wires these are the B and A wires and these two wires should be connected with the B and A pins of the Max485 Modbus module which I will explain in a minute. So, You will need 9 to 24Vdc to power up this Soil NPK Sensor. The NPK Sensor supports 2400, 4800, and 9600 baud rates, due to which it can be used with different microcontroller boards like 8051 family of microcontrollers, PIC microcontrollers, Arduino boards, and so on. In this tutorial, I will use the Soil NPK Sensor with the Arduino board. The Soil NPK Sensor is provided with the Modbus communication port RS485 due to which it can be easily interfaced with the Arduino board using the Modbus module like MAX485/RS485 module. The working temperature is from 5 to 45 Celsius. The Nitrogen, phosphorus, and Potassium resolution is 1mg/kg or 1mg/liter. The measuring range of the Soil NPK Sensor is 0 to 1999mg/kg, and the working humidity is from 5 to 95%. The maximum power consumption is ≤ 0.15W. 
- Voltage: 9V-24V DC
- Maximum Power Consumption: ≤ 0.15W
- Baud Rate: 2400/4800/9600
- Working Temperature: 5 to 45 °C
- Resolution: 1mg/kg (mg/l)
- Measuring Range: 0-1999mg/kg
- Working Humidity: 5 to 95% (relative humidity), no condensation
- Measurement Accuracy: ±2% F.S.
- Communication Port: RS485
- Protection Class: IP68

MAX485 TTL to RS-485 Module:

This is the MAX485 TTL to RS-485 interface module, which is used to connect the Soil NPK Sensor to the Arduino, as this interface module can easily be powered from the Arduino's 5 volts. The MAX485 interface module is ideal for serial communications over long distances of up to 1200 meters or in electrically noisy environments, which is why it is commonly used in industrial settings. It supports data rates of up to 2.5 Mbit/sec, but as the distance increases, the maximum data rate that can be supported comes down. RS-485 has the ability to communicate with multiple devices (up to 32) on the same bus/cable when used in a master and slave configuration. I have already written a detailed article on how to use the MAX485 interface module with Arduino and communicate with multiple controllers, so I highly recommend reading that article.

KEY FEATURES OF MAX485 TTL TO RS-485 INTERFACE MODULE:

- Uses the MAX485 interface chip
- Uses differential signaling for noise immunity
- Distances up to 1200 meters
- Speeds up to 2.5 Mbit/sec
- Multi-drop: supports up to 32 devices on the same bus
- Red power LED
- 5V operation

MAX485 Pinout:

We have 4 male headers on the data side:

- RO is the Receiver Output and it should be connected with the RX pin of the Arduino.
- RE is the Receiver Enable. This is active low. This pin should be connected with an Arduino digital output pin: drive it LOW to enable the receiver, HIGH to enable the driver.
- DE is the Driver Enable pin. This is active high and is typically jumpered to the RE pin.
- DI is the Driver Input and it should be connected with the TX pin of the Arduino.
Similarly, we have 4 male headers on the output side. The VCC pin should be connected to the Arduino's 5 volts. The B and A pins should be connected to the B and A pins on the far-end module; in our case, we will connect them to the B and A wires of the Soil NPK Sensor. The GND pin should be connected to the Arduino's ground.

1X2 Screw Terminal Block (Output Side)
- B = Data 'B' Inverted Line. Connects to B on the far-end module
- A = Data 'A' Non-Inverted Line. Connects to A on the far-end module

OLED display module and HC05 or HC06 Bluetooth Module:

If you have never used the I2C-supported OLED display module or the HC05/HC06 Bluetooth module, I highly recommend reading my getting-started tutorials, in which I have explained all the basics, including technical specifications, interfacing, and Arduino programming.

Soil NPK Sensor Interfacing with Arduino, Circuit Diagram:

Let's start with the Soil NPK Sensor. Since this sensor accepts a wide range of input voltages, we decided to use a 12V power supply; this way a single 12V supply can power both the NPK sensor and the Arduino board. The blue and yellow wires of the NPK sensor are connected to the B and A pins of the RS485 TTL converter, while the converter's VCC and GND pins are connected to the 5V and GND pins of the Arduino. The RO and DI pins are connected to the D2 and D3 pins of the Arduino, and the RE and DE pins are connected to the D8 and D7 pins respectively. The HC-05 Bluetooth module's RX and TX pins are connected to the Arduino's TX and RX pins, and its power supply pins are connected to the Arduino's 5 volts and GND. The SSD1306 I2C OLED display module's SDA and SCL pins are connected to the A4 and A5 pins, while its VCC and GND pins are connected to the 5V and GND pins of the Arduino Nano board. Since we are planning to power the Arduino board from the 12V supply, we need to step this voltage down to 5 volts; using the 7805 voltage regulator we get a regulated 5 volts.
You can also see two decoupling capacitors connected at the input and output sides of the voltage regulator. Now, to power up the Arduino Nano, all you need to do is connect the output pin of the voltage regulator to the VIN pin of the Arduino Nano. Next, we interfaced all the components as per the circuit diagram explained above.

Android Cell Phone Application for the Soil NPK Sensor:

The Android cell phone application used for monitoring the Soil NPK Sensor was designed in Android Studio. This is the same application I designed in my previous tutorial, so I highly recommend reading that tutorial if you want to design your own Android cell phone application for monitoring different types of sensors; otherwise, you can simply download the apk file.

Before you start programming, first make sure you download all the necessary libraries. The purpose of the following program is to read the nitrogen, phosphorus, and potassium values from the Soil NPK sensor and then display those values on the OLED display module and on the Android cell phone application.

Modbus Command for the NPK Sensor

The information I am about to share with you is really important; let me say it one more time: it's really important. Once you understand the frame structures, the programming is just a piece of cake. As you know by now, the NPK Sensor supports Modbus communication, which is why the Modbus-RTU communication protocol is adopted. Let's take a look at its format:

- Initial structure: ≥ 4 bytes of time
- Address code: 1 byte. The address code is the transmitter address and is unique on the entire communication network; the factory default value is 0x01.
- Function code: 1 byte
- Data area: N bytes. This is the specific communication data.
- Error check: 16-bit CRC
- Ending structure: ≥ 4 bytes of time

Below are the host inquiry and slave response frame structures.
It's simple: to read data from the NPK sensor, we send the host inquiry frame, and the NPK sensor sends back a slave response containing the desired data. As discussed earlier, multiple devices can be connected on a single bus, and this is how the master communicates with multiple slave devices. To avoid any confusion, the inquiry and response frames include an address code: we simply use the address of the device we want to communicate with, and the frame has no effect on the other devices. So the transmitter sends data to that specific NPK sensor and then receives its reply.

Since the NPK Sensor measures nitrogen, phosphorus, and potassium, we will read these three different values from it. For each of "N, P, K" we send an inquiry frame with a different starting address.

Let's start with nitrogen. To read the nitrogen value from the NPK Sensor, you need to send the following inquiry frame, and the sensor then replies with a response frame:

Nitrogen = 0x01, 0x03, 0x00, 0x1E, 0x00, 0x01, 0xE4, 0x0C

So our inquiry frame should contain all of the above values. In the program we can simply make an array holding these values, which I will explain in the code given below.

For phosphorus:

Phosphorus = 0x01, 0x03, 0x00, 0x1F, 0x00, 0x01, 0xB5, 0xCC

For potassium:

Potassium = 0x01, 0x03, 0x00, 0x20, 0x00, 0x01, 0x85, 0xC0

So, to read the nitrogen, phosphorus, and potassium contents of the soil, we will send these commands one by one from the Arduino. This is what we are going to do next.
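The last two bytes of each inquiry frame are a Modbus-RTU CRC-16 computed over the preceding six bytes and sent low byte first. As a sanity check, a small host-side Python sketch (not part of the Arduino code itself) can recompute the checksums for all three frames:

```python
def crc16_modbus(data: bytes) -> bytes:
    # Standard Modbus RTU CRC-16: init 0xFFFF, reflected polynomial
    # 0xA001; appended to the frame low byte first.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return bytes([crc & 0xFF, crc >> 8])  # low byte, then high byte

# The three inquiry frames from the article, without their CRC bytes:
# address 0x01, function 0x03 (read holding registers),
# start register 0x001E/0x001F/0x0020, register count 0x0001.
frames = {
    "nitrogen":   bytes([0x01, 0x03, 0x00, 0x1E, 0x00, 0x01]),
    "phosphorus": bytes([0x01, 0x03, 0x00, 0x1F, 0x00, 0x01]),
    "potassium":  bytes([0x01, 0x03, 0x00, 0x20, 0x00, 0x01]),
}

for name, frame in frames.items():
    full = frame + crc16_modbus(frame)
    print(name, " ".join("%02x" % b for b in full))
```

Running this reproduces the 0xE4 0x0C, 0xB5 0xCC, and 0x85 0xC0 trailers listed above, which confirms the frames are consistent.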
Nitrogen = 0x01, 0x03, 0x00, 0x1E, 0x00, 0x01, 0xE4, 0x0C
Phosphorus = 0x01, 0x03, 0x00, 0x1F, 0x00, 0x01, 0xB5, 0xCC
Potassium = 0x01, 0x03, 0x00, 0x20, 0x00, 0x01, 0x85, 0xC0

Soil NPK Sensor Arduino Programming:

Soil NPK Sensor Code Explanation:

As you know, on the Arduino Uno and Arduino Nano we have only one hardware serial port, while for this project we need two serial ports. To create another serial port I am going to use the SoftwareSerial library, which is why I added it. I added the Wire library for I2C communication, and the remaining two libraries are used with the OLED display module.

#include <SoftwareSerial.h>
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

There is not much else to explain, because I have already explained most of this code in my other projects, and moreover I have added enough comments to explain how the code works. Anyhow, I uploaded the code and successfully displayed the values on the OLED display module and on the Android cell phone application. For the practical demonstration and a step-by-step explanation, watch the video tutorial given below.

Watch Video Tutorial:

5 Comments

I do not know what I have done wrong, I only get: nitrogenValue 250 mg/kg, PhosphoroValue 250 mg/kg, PotassiumValue: 250 mg/kg

Almost the same here, I get a 255 value on each N, P & K. No idea what is wrong. I have read somewhere else it could be due to a data 8-bit restriction in this code and we should read 16 bits … but IDK what to do.

Same problems here, any solutions to fix this?

How to power on the Soil NPK Sensor? Serial.begin(9600); modbus.begin(9600); Serial.begin(4800); modbus.begin(9600);
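Regarding the 250/255 readings mentioned in the comments: in a Modbus read-holding-registers response, the measurement arrives as a 16-bit big-endian register, so code that keeps only a single byte caps every value at 255. The parsing can be sketched in Python as follows (the example frame below is hypothetical, and its two trailing CRC bytes are placeholders that are not verified here):

```python
def parse_read_response(frame: bytes) -> int:
    # Modbus "read holding registers" (0x03) response layout:
    # [address][0x03][byte count][data hi][data lo]...[crc lo][crc hi]
    assert frame[1] == 0x03, "not a read-holding-registers response"
    count = frame[2]
    data = frame[3:3 + count]
    # Each register is a 16-bit big-endian word; keeping only the
    # low byte is what truncates large readings to the 0-255 range.
    return int.from_bytes(data[:2], "big")

# Hypothetical response frame carrying the value 0x0120 (288 mg/kg);
# the last two bytes stand in for the CRC.
example = bytes([0x01, 0x03, 0x02, 0x01, 0x20, 0x00, 0x00])
print(parse_read_response(example))  # 288
```

The same combination of high and low data bytes has to happen on the Arduino side before the value is printed or sent to the phone application.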
Repro code:
> using System.Collections.Generic;
>
> /// <summary>The Test</summary>
> public class Test
> {
>     /// <summary>The Foo</summary>
>     protected Dictionary<string, object> Foo { get; set; } = new Dictionary<string, object>();
>
>     /// <summary>Tests the Foo</summary>
>     protected bool TestFoo { get; set; }
> }

Steps:
> mcs -t:library -doc:test.xml test.cs

Result:
> test.cs(9,3): warning CS1587: XML comment is not placed on a valid language element
> test.cs(10,19): warning CS1591: Missing XML comment for publicly visible type or member `Test.TestFoo'
> Compilation succeeded - 2 warning(s)

The line in the first warning message doesn't make sense; the compiler seems to be confused by the C# 6 auto-property initializer above. This breaks builds when warnings-as-errors is turned on (I found this in xunit). Reproduces in 4.0, 4.2 and master.

Fixed in master
I have this code. It should work perfectly. It's a circle calculator; I'm doing it as an exercise. I want the user to have the option to return to the 'main menu.' I made a yes/no prompt using char* e; but it's uninitialized. How can I initialize it?

#include <iostream>
using namespace std;

class Circlecalc {
public:
    double const pi = 3.1415962543;
    double diameter;
    double radius;
    double circumference;
};

int _welcome() {
    Circlecalc calc;
    cout << endl;
    int i = 0;
    char* e;
    cin >> i;
    while (i != 5) {
        switch (i) {
        case(1):
            cout << "Enter your radius." << endl;
            cin >> calc.radius;
            cout << endl;
            cout << (calc.radius * 2) * calc.pi << endl;
            cout << "Exit? [Y/N]" << endl;
            cin >> e;
            if (e == "Y") { _welcome(); }
            else if (e == "N") { }
            else { cerr << "Unsupported function" << endl; }
        case(2):
            cout << "Enter your diameter" << endl;
            cin >> calc.diameter;
            cout << endl;
            cout << (calc.diameter * 2) * calc.pi << endl;
            cout << "Exit? [Y/N]" << endl;
            cin >> e;
            if (e == "Y") { _welcome(); }
            else if (e == "N") { }
            else { cerr << "Unsupported function" << endl; }
            break;
        case(3):
            cout << "Enter the circumference" << endl;
            cin >> calc.circumference;
            cout << endl;
            cout << (calc.circumference / 2) / calc.pi;
            cout << "Exit? [Y/N]" << endl;
            cin >> e;
            if (

Instead of:

char* e;

use:

std::string e;

The reason you get "Uninitialized local variable 'e' used" is that e is not set when passed to the operator>> used by cin. To initialize it, you could assign an array to it, i.e.:

char arr[128] = {0};
char* e = arr;

operator>> for the cin stream expects that you have provided a memory buffer where the read string can be stored; char* e; is not bound to any such buffer, and using it would end in undefined behaviour (and possibly a crash). With std::string, you do not need to do any of that.
If you only want a single-letter input from the user, just use a char, like:

char response;

Then you would compare it against a character literal instead of a string literal, like:

if (response == 'N' || response == 'n')

If you want to compare against a string like "no" or "No", then I suggest you use a std::string and not worry about having to allocate memory for the string.
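To tie the answers together, here is a minimal sketch (the function name handle_answer is my own, not from the question) showing why std::string fixes both problems at once: it owns its buffer, so cin >> e is safe, and operator== compares contents rather than pointers, which is what e == "Y" on a char* would do:

```cpp
#include <string>

// Decide what to do with a Y/N answer. std::string manages its own
// storage, so reading into it with std::cin >> e is well-defined,
// and == compares the characters, not pointer addresses.
std::string handle_answer(const std::string& e) {
    if (e == "Y" || e == "y") return "menu";
    if (e == "N" || e == "n") return "quit";
    return "unsupported";
}
```

In the calculator, the prompt then becomes std::string e; std::cin >> e; followed by a check of handle_answer(e), with no manual buffer management at all.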
Reversing and Exploiting with Free Tools: Part 3

In part two of this series, we learned to solve the exercise stack1 using x64dbg, a debugging tool that allows us to analyze a program by running it, tracing it, and setting breakpoints, among other things. With those tools we are not only running the program; we can also reach the function to analyze and execute it.

But even when a tool like this is easy to use, there are many cases where it's not necessary to run the program. A static analysis would be enough, reaching conclusions without running the application, or running it as little as possible. Static analysis is typically used in malware analysis, analysis of functions in programs that don't run, vulnerability research, code reconstruction, and more.

In the case of exploit writers, when they analyze a program patch that fixes a vulnerability, they usually do something called binary diffing, or a diff. A diff is when we use a tool to compare the vulnerable version with the patched one to figure out if and how the patch solved the issue. This pinpoints the exact location of the vulnerability, which is where exploit development starts.

The problem with this approach is that there could be hundreds of changed functions, and not all of them are patches. Most of them are little fixes, new functionalities, or minor changes only. Examining all the changes individually to uncover which one is responsible for the fix requires so much complex analysis and debugging that it's simply unfeasible. We don't even know how to reach some program functions, which could require testing thousands of combinations just to access the function, making the work far too time consuming.

Antonio Rodriguez of Incibe explains binary diffing as follows:

As part of the training, there will be a few exercises of binary diffing to find patches.
There are some disassembler programs that are interactive, so they not only show the functions and instructions but also let us work out the functionality of each one (something reversing experts do routinely); working this way is what we'll call static reversing. In general, static reversing is a powerful technique when mastered: it helps us find the correct path to the function we're interested in, and it can complement dynamic reversing. We have to master all of these techniques so that later we can combine them as best as possible to meet our goals.

Static reversing also depends on whether the program has symbols. When you installed Windbg, you should have configured a folder for symbols, where symbols are downloaded automatically. This happens for most of the system binary files. If we compile a program ourselves, we can save its symbols in a file with the pdb extension. At the moment your symbols folder is most likely empty; as you start working with Windbg and IDA, symbols will be downloaded and saved there.

Having symbols makes static analysis easier, so we will start the stack1 analysis with symbols available. Later you will find cases where symbols are not available, which requires additional steps and skills in static reversing; for example, symbols will not be downloaded for third-party programs that are not part of the operating system.

In the exercise folder, there are three files that correspond to stack1: the executable binary with the EXE extension, the source code (CPP), and the symbols file (PDB). If you can't see the extensions, go to Folder Options (or File Explorer Options in Windows 10) and uncheck "Hide extensions for known file types."

Static Reversing Exercise Stack1

1-IDA FREE

Now that we can see the file extensions, we can begin by opening the executable with IDA FREE. Drag the file onto the IDA icon, or open IDA, which will prompt us to open a file; search for the .exe file and open it.
Select "NEW" to work with a new analysis file. Search for the stack1 executable; IDA will detect that it is a PE exe file. Since IDA FREE does not come in two separate versions (one for 32 bits and one for 64 bits), it will report that it is loading the binary with its 64-bit engine, which also works fine here. If it says that it can't find the pdb because it's not in the symbols folder, click "YES" and point it to the symbols file manually.

As IDA loads the symbols, it detects the main function and displays it directly. IDA FREE does not have a decompiler (only the PRO version does), so pressing F5, the PRO shortcut for decompiling a function, will just display a message saying so.

In the image, we can see the printf and the gets, and the comparison of the cookie with the value 0x41424344. Depending on the result, execution follows either the green arrow or the red arrow:

GREEN ARROW = the conditional jump result is true.
RED ARROW = the conditional jump result is false.

In this particular case, the jump is JNZ (jump if not zero), also known as JNE (jump if not equal), so the program follows the green arrow if the values are not the same and the red arrow if they are the same.

It's worth keeping in mind what we saw in the x64dbg debugger, which we can compare with what we see in IDA. Remember that after the PROLOGUE of the function, EBP was set as the frame pointer and the function was EBP BASED, so the EBP value remains constant until the EPILOGUE. All the variables and parameters that remained constant and were referenced using EBP sat around what we called the HORIZON. The HORIZON line can be seen in the stack: below the HORIZON were the STORED EBP, the RETURN ADDRESS, and the function parameters. Above the HORIZON was the space reserved for the variables, created by the instruction SUB ESP, 0x54, which leaves ESP above EBP while EBP keeps its fixed value.
But as we saw previously, while ESP moves at different moments of the function, EBP remains constant. This means the map shows the distribution of the variables and parameters of the function, and it does not change, because EBP doesn't change. We built this map by tracing in the x64dbg debugger, but we can also see it in IDA without running the program.

IDA shows the list of variables and parameters under the function declaration. However, that is not the complete map. To access it, double click on any variable or parameter.

This displays the STATIC REPRESENTATION OF THE STACK, which is the same map as the one from x64dbg: a picture of the entire stack with the variables, the parameters, the STORED EBP, the RETURN ADDRESS, the HORIZON, etc.

Note the following definitions in IDA:

db: BYTE = 1 byte long
dw: WORD = 2 bytes long
dd: DWORD = 4 bytes long

In the above example, the variable buf (type db) is an 80-byte-long byte array. The dup(?) notation means duplication: the type repeats as many times as the count indicates. Since there is a question mark in the parentheses, the contents are unknown to the static analysis. The next variable in the example is cookie (type dd), four bytes long. As with buf, the question mark means an unknown value; unlike buf, there is no dup, so nothing repeats.

After the cookie comes s, which sits below the HORIZON in the STORED EBP position and is 4 bytes long. Though IDA prints it as a db of 4 bytes, it really is a dd (DWORD); this is just a quirk of the IDA representation. After s, r is the RETURN ADDRESS and is similarly 4 bytes long. Next, both argc and argv are properly detected as DWORDs, making them each 4 bytes long.
While the stack representation displays the length of each variable, right clicking any of them also shows the definition of the type according to the C language. IDA can detect names and exact values because it uses the symbols. But what happens when it doesn't have symbols?

Returning to the map: in the first column we see the same distance values, using the horizon as the reference, that we saw in the x64dbg stack with the horizon set to zero. Above the horizon, variables are represented as EBP-XXX, while below it the parameters are represented as EBP+XXX. For example, buf is EBP-0x54. Additionally, cookie is EBP-4, while argc is EBP+8 and argv is EBP+C. The ebp-0x4 and ebp-0x54 representations of cookie and buf are the same as in x64dbg. In IDA, if I right click on a variable like ebp+cookie, it will display the alternative format ebp-4, because cookie is at position -4:

ebp + cookie = ebp + (-4) = ebp - 4

We can see the position with respect to EBP below. So far, everything we know has been obtained through static analysis in IDA. The last piece we need is the distance we have to fill to overflow the buffer and modify the cookie, which can also be read from the static stack representation: we have to fill the 80 bytes of buf and then the 4 bytes of the cookie, which is compared against 0x41424344.

In the image we can see all the places where cookie is accessed. The LEA instruction is similar to the AMPERSAND operator: it takes the address of a variable instead of its value. This happens when printf prints the address in hexadecimal via %08x. We can also see that gets receives the address of buf as its parameter, so it will copy there whatever we type on the keyboard. For instance, if we type:

80 'A's + "DCBA"

Since DCBA is 44 43 42 41 in memory, reading it as little endian gives exactly the value it is compared to: 0x41424344.
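Both conclusions, the 80-byte distance and the byte-order trick, can be sanity-checked with a few lines of Python, using the offsets read from IDA's static stack view:

```python
import struct

# Offsets taken from IDA's static stack representation, relative to EBP:
buf_offset = -0x54     # buf
cookie_offset = -0x4   # cookie
padding = cookie_offset - buf_offset
print(padding)  # 80 filler bytes are needed to reach cookie

# On little-endian x86, the 4 bytes "DCBA" (44 43 42 41 in memory)
# are read back as the dword 0x41424344.
assert struct.pack("<I", 0x41424344) == b"DCBA"

payload = b"A" * padding + b"DCBA"
print(len(payload), hex(struct.unpack("<I", payload[-4:])[0]))
```

This is the same payload the exploit script sends through STDIN; the struct module simply makes the little-endian reasoning explicit.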
This means the program will follow the red arrow, since the JNE condition is not true when the values are equal, completing the sequence. The script is similar to the one we saw in the last part: if we use Popen and redirect STDIN, we can send the data from the script with p1.communicate(payload) instead of typing it.

Payload: the part of an exploit that completes the malicious portion of the intrusion. This remote code can be executed on the attacked machine, performing a sequence of malicious activities. In this case the payload is 80 'A's + "DCBA" ("\x44\x43\x42\x41" is the same as "DCBA").

Running the script, built purely from what we deduced through static reversing in IDA without running the exercise, we successfully get the YOU WIN without problems.

2-RADARE

First, we should check the binary's information, which can be done with an executable called rabin2, located in the same folder where Radare was installed:

rabin2 -l name

This returns information about the binary; the available options can be listed with the argument -h. For example, the argument -i is used to see the imports used by the executable. Below are some of the different options to get additional information.

To start using Radare, write the following in a command prompt:

radare2 STACK1_VS_2017.exe

This loads the binary for analysis. Then execute the command aaa to analyze the loaded binary, and the command afl to list all the functions. From there, we can find the main function, which is at address 0x401040. With the command pdf we can disassemble a function.

With the instruction eco we can list the themes; for example, I used "bright" because it looks clearer. Radare has both a command-based console mode and a visual mode, which is entered by typing the key v. Exit visual mode by typing q; if you want to see and use the cursor, type c; help is accessed by typing h. For this exercise, we'll stay in console mode. Later, we'll use the visual mode, as well as Radare's GUI, Cutter.
Currently, the cursor is located at 0x401054. To move to main, write s main. This simplifies things, because Radare takes the address the cursor points to as the reference; in this case, 0x401040 is main's address. Next, let's change the function's name with afn. Now we can disassemble using the new name "my_main". We can then rename the variables with afvn new_name old_name. As we can see, all the names in the function have been changed to the new ones.

With the command agf we can see an ASCII visual representation of the function. While we can see the green arrow (true) and the red arrow (false), and the comparison with 0x41424344, we can't see the gets. Let's include the pdb symbol information with the command idp:

idp STACK1_VS_2017.pdb

If we analyze again with aaa, now with the added symbols, the gets and printf appear, though we have to rename the symbols again. Remember to load the symbols at the beginning, before analyzing with aaa; otherwise, we lose all the work done previously.

Now let's see how this appears in Cutter, Radare's GUI. First we need to download it, uncompress it, and run it. Choose the file to disassemble and the pdb with the symbols. In the quick filter, write main. We can see the function; click it and press the space bar to enter graph mode. Right-clicking a variable, or pressing the shortcut Y, lets us rename it; for example, we can change the names to buf and cookie. Cutter also has a decompiler window. Pressing X on a function shows its different references. We can see that one of the decompiler options is GHIDRA, which we'll be using later on.

Using the console for Radare commands, we can write afvb* to list the variables relative to ebp, and afvd to see the values of the variables when debugging. We can see that buf is at -84 and cookie at -4, so the difference between them is 80. We have to fill buf, and the next 4 bytes will be the famous "DCBA".
In the end, we reach the same conclusions that we came to using IDA: the destination of gets is buf, so the data we type is copied there; buf is 80 bytes long, and right below buf is cookie. On overflow we overwrite cookie with DCBA, which is compared with 0x41424344, and the program concludes successfully with YOU WIN if nothing differs.

When writing this training I talked with Pancake, Radare's author. I requested that he add a command similar to afvb*, not just for listing variables but, like IDA, for listing the whole static representation of the function, to make it easier to see the distances. He has begun work on it, so look out for it in future trainings.

3-GHIDRA

Since part 2, a new version of GHIDRA has been released. Be sure to update to 9.1 before continuing. To begin, go to File -> New Project -> Non-Shared Project and then Next>>. Create a folder for the project and give it a name. Now drag and drop the stack1 executable onto the active project screen. Once it's dropped into the window, it will begin to load. When a screen appears with information about the file we loaded, press OK. Double click on the name of our file to begin the analysis; if any dialog appears, select YES. While running the analysis, we'll come across a problem loading the symbols, causing an error:

ERROR: Unable to locate the DIA SDK. It is required to load PDB files. * See docs/README_PDB.html for DLL registration instructions.
ghidra.app.util.bin.format.pdb.PdbException: ERROR: Unable to locate the DIA SDK. It is required to load PDB files. * See docs/README_PDB.html for DLL registration instructions.
We can see what this file is: we will have to find this library and install it: Microsoft Visual C++ Redistributable for Visual Studio 2017. If you still have a problem, you can also try downloading this: after uncompressing it, run the .bat from an admin command prompt, which runs the following script:

xcopy msdia140.dll %systemroot%\system32
regsvr32 %systemroot%\system32\msdia140.dll

It's also possible to skip the bat file by running these commands manually. If you used the .bat solution, run Ghidra again and repeat the process; it will load the symbols directly. Otherwise, you can go to FILE -> LOAD PDB FILE to load them.

In WINDOW -> FUNCTIONS we can write main and search for the function. There we can see the main function with the gets call, so we know we have properly loaded the symbols. In WINDOW -> FUNCTION GRAPH we can see the graph with the function blocks. Some details are not shown by default, but the graph is interactive, and hovering the mouse over a block shows what is inside it. We can paint the blocks, rename them, and make other changes.

Find the references of a variable (the places where it is used) by right clicking -> REFERENCES. If the function was not selected, mark it and right click -> MAKE SELECTION. We can then see the static representation of the stack. The stored ebp and the return address didn't appear as DWORDs, so press 'B' to cycle the variable type until we get the DWORD option.

The stack appears quite differently from how it did in IDA. We can see in hexadecimal the distance from buf to cookie: 0x58 - 0x8 = 0x50. We can change the display to decimal from the right-click menu. Now that it's in decimal, it is clearer that we have to write 80 'A's: 88 is buf's offset, and subtracting cookie's offset of 8 leaves 80. We fill buf, then write 4 more bytes for "DCBA", using the same script that we wrote with the previous static disassemblers. There are some differences from the IDA static representation worth noting.
Instead of taking ebp as the reference, Ghidra takes the return address. So instead of buf being at 0x54, it is at 0x58, because the STORED EBP is above 0 (the RETURN ADDRESS), while in IDA it was below 0, since the reference there was EBP. This is a little confusing, since until now we have used EBP as the reference while Ghidra uses the return address; we'll keep it in mind for more complex analysis.

There's also an interactive decompilation window in the WINDOW menu. Each line marked there is highlighted in the disassembler.

CALL GRAPH

For the next exercise, we'll work with stack2.

Exercise Stack2

IDA Free

While the buf and cookie sizes didn't change, the cookie is now compared with 0x01020305. The buf size is 80, and cookie sits right underneath. Since buf is the parameter of gets, whatever we type will be saved there; filling buf with 80 bytes, the next 4 bytes modify cookie, as before. This would make the script read as:

import sys
from subprocess import Popen, PIPE

payload = b"A" * 80 + b"\x05\x03\x02\x01"

p1 = Popen(r"C:\Users\<user>\xxxxx\abos y stack nuevos\STACK2_VS_2017.exe", stdin=PIPE)
print("PID: %s" % hex(p1.pid))
print("Enter to continue")
p1.communicate(payload)
p1.wait()
input()

Because of the little endian ordering, 05 03 02 01 is what's saved in memory, and it becomes 0x01020305 in the comparison.
Buf is 80 and with 4 bytes more we can modify cookie with “\x05\x03\x02\x01” If we open it with Cutter, we can see the graph option: Rename the variables. We can see the new names in the decompiler, with everything else staying the same. We can see the sizes with the same command of radare2 afbv* payload = b"A" * 80 + b"\x05\x03\x02\x01" And we can see that the script works: Ghidra Drop this new file in the same project as the first exercise: PDB may have a little error—it has worked in previous disassemblers. If there is an error, we will do it without the symbols by looking at the strings. Double click on the area where the pink arrow is pointing. To search the references of the string,right click REFERENCES -> SHOW REFERENCES TO ADDRESS. Rename it to main by right clicking -> EDIT FUNCTION. In the function list, search main and right click -> MAKE SELECTION. Select the function GRAPH in the WINDOW. Rename by right clicking -> EDIT LABEL We can see that it compares cookie with 0x01020305. We can also see the variables but since we don’t have the symbols we can’t see the gets. However, we can rename the function 0x403c5b manually. Now it looks better: Now, let’s take a look at the variables: As there aren’t symbols, we don’t know the buf length, so we need to create an array: Length can vary from 1 to 80. Since we know that cookie is right below, we should write the maximum 80. We can change types with letter B, but even with correct size, it will show as unknown. If we modify DataType manually, we can write the type. For example, we can write char[80], instead of unknown. We can do the same happen dwords, changing it manually to a known type from a list. We know that buf length is 0x50, because is 0x58 - 0x8 = 0x50 or 80 in decimal, and as before are 80 ‘A’s and then b”\x05\x03\x02\x01”. As we go into more complex exercises, we will discover new possibilities about these tools, finishing the left stacks and following with more complex exercises.
GETDIRENTRIES(2)           BSD Programmer's Manual           GETDIRENTRIES(2)

NAME
     getdirentries - get directory entries in a filesystem independent format

SYNOPSIS
     #include <dirent.h>

     int getdirentries(int fd, char *buf, int nbytes, long *basep);

DESCRIPTION
     getdirentries() reads directory entries from the directory referenced by
     the file descriptor fd into the buffer pointed to by buf, in a
     filesystem independent format. Up to nbytes of data will be
     transferred. nbytes must be greater than or equal to the block size
     associated with the file; it is not wise to call getdirentries() with
     buffers smaller than this size.

     The data in the buffer is a series of dirent structures. The d_type
     entry may be one of DT_UNKNOWN, DT_FIFO, DT_CHR, DT_DIR, DT_BLK,
     DT_REG, DT_LNK, and DT_SOCK. The d_namlen entry specifies the length of
     the file name excluding the NUL byte. Thus the actual size of d_name
     may vary from 1 to MAXNAMLEN + 1. The d_name entry contains a
     NUL-terminated file name.

     Entries may be separated by extra space. The d_reclen entry may be used
     as an offset from the start of a dirent structure to the next
     structure, if any. Invalid entries with d_fileno set to 0 may be
     returned among regular entries.

     The actual number of bytes transferred is returned. The current
     position pointer associated with fd is set to point to the next block
     of entries. The pointer may not advance by the number of bytes returned
     by getdirentries(). The current position pointer should only be set to
     a value previously returned by lseek(2), a value previously returned in
     the location pointed to by basep by getdirentries(), or zero.

RETURN VALUES
     If successful, the number of bytes actually transferred is returned. A
     value of zero is returned when the end of the directory has been
     reached. Otherwise, -1 is returned and the global variable errno is set
     to indicate the error.
EXAMPLES
     The following code may be used to iterate on all entries in a
     directory:

           char *buf, *ebuf, *cp;
           long base;
           size_t bufsize;
           int fd, nbytes;
           char *path;
           struct stat sb;
           struct dirent *dp;

           if ((fd = open(path, O_RDONLY)) < 0)
                   err(2, "cannot open %s", path);
           if (fstat(fd, &sb) < 0)
                   err(2, "fstat");
           bufsize = sb.st_size;
           if (bufsize < sb.st_blksize)
                   bufsize = sb.st_blksize;
           if ((buf = malloc(bufsize)) == NULL)
                   err(2, "cannot malloc %lu bytes",
                       (unsigned long)bufsize);
           while ((nbytes = getdirentries(fd, buf, bufsize, &base)) > 0) {
                   ebuf = buf + nbytes;
                   cp = buf;
                   while (cp < ebuf) {
                           dp = (struct dirent *)cp;
                           printf("%s\n", dp->d_name);
                           cp += dp->d_reclen;
                   }
           }
           if (nbytes < 0)
                   err(2, "getdirentries");
           free(buf);

ERRORS
     getdirentries() will fail if:

     [EBADF]    fd is not a valid file descriptor open for reading.

     [EFAULT]   Either buf or basep points outside the allocated address
                space.

     [EINVAL]   The file referenced by fd is not a directory, or nbytes is
                too small for returning a directory entry or block of
                entries, or the current position pointer is invalid.

     [EIO]      An I/O error occurred while reading from or writing to the
                file system.

SEE ALSO
     lseek(2), open(2), opendir(3), dirent(5)

HISTORY
     The getdirentries() function first appeared in 4.4BSD.

MirOS BSD #10-current                                              June 9,
https://www.mirbsd.org/htman/sparc/man2/getdirentries.htm
PEP 3155 -- Qualified name for classes and functions Contents Rationale Python's introspection facilities have long had poor support for nested classes. Given a class object, it is impossible to know whether it was defined inside another class or at module top-level; and, if the former, it is also impossible to know in which class it was defined. While use of nested classes is often considered poor style, the only reason for them to have second class introspection support is a lousy pun. Python 3 adds insult to injury by dropping what was formerly known as unbound methods. In Python 2, given the following definition: class C: def f(): pass you can then walk up from the C.f object to its defining class: >>> C.f.im_class <class '__main__.C'> This possibility is gone in Python 3: >>> C.f.im_class Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'function' object has no attribute 'im_class' >>> dir(C.f) ['__annotations__', '__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__globals__', '__gt__', '__hash__', '__init__', '__kwdefaults__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__'] This limits again the introspection capabilities available to the user. It can produce actual issues when porting software to Python 3, for example Twisted Core where the issue of introspecting method objects came up several times. It also limits pickling support [1]. Proposal This PEP proposes the addition of a __qualname__ attribute to functions and classes. For top-level functions and classes, the __qualname__ attribute is equal to the __name__ attribute. 
For nested classes, methods, and nested functions, the __qualname__ attribute contains a dotted path leading to the object from the module top-level. A function's local namespace is represented in that dotted path by a component named <locals>. The repr() and str() of functions and classes is modified to use __qualname__ rather than __name__. Example with nested classes >>> class C: ... def f(): pass ... class D: ... def g(): pass ... >>> C.__qualname__ 'C' >>> C.f.__qualname__ 'C.f' >>> C.D.__qualname__ 'C.D' >>> C.D.g.__qualname__ 'C.D.g' Example with nested functions >>> def f(): ... def g(): pass ... return g ... >>> f.__qualname__ 'f' >>> f().__qualname__ 'f.<locals>.g' Limitations With nested functions (and classes defined inside functions), the dotted path will not be walkable programmatically as a function's namespace is not available from the outside. It will still be more helpful to the human reader than the bare __name__. As the __name__ attribute, the __qualname__ attribute is computed statically and it will not automatically follow rebinding. Discussion Excluding the module name As __name__, __qualname__ doesn't include the module name. This makes it independent of module aliasing and rebinding, and also allows to compute it at compile time. Reviving unbound methods Reviving unbound methods would only solve a fraction of the problems this PEP solves, at a higher price (an additional object type and an additional indirection, rather than an additional attribute). Naming choice "Qualified name" is the best approximation, as a short phrase, of what the additional attribute is about. It is not a "full name" or "fully qualified name" since it (deliberately) does not include the module name. Calling it a "path" would risk confusion with filesystem paths and the __file__ attribute. The first proposal for the attribute name was to call it __qname__ but many people (who are not aware of previous use of such jargon in e.g. 
the XML specification [2]) found it obscure and non-obvious, which is why the slightly less short and more explicit __qualname__ was finally chosen.
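With the PEP implemented (as in Python 3.3 and later), the examples above can be checked as a small script:

```python
# The nested-class and nested-function examples from this PEP,
# runnable on Python 3.3+ where __qualname__ exists.
class C:
    def f():
        pass

    class D:
        def g():
            pass

def f():
    def g():
        pass
    return g

# For top-level objects, __qualname__ equals __name__; for nested
# ones it carries the dotted path, with <locals> marking a function's
# local namespace.
print(C.__qualname__)      # C
print(C.f.__qualname__)    # C.f
print(C.D.g.__qualname__)  # C.D.g
print(f().__qualname__)    # f.<locals>.g
```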
https://www.python.org/dev/peps/pep-3155/
Josh Elser commented on ACCUMULO-3943:
--------------------------------------

Yes, it's not helpful. The value of fs.defaultFS doesn't help in wrangling hostnames/IP addrs, which is what I meant when I said "that's insufficient info" earlier.

> volume definition agreement with default settings
> -------------------------------------------------
>
>                 Key: ACCUMULO-3943
>                 URL:
>             Project: Accumulo
>          Issue Type: Bug
>          Components: gc, master, tserver
>            Reporter: Eric Newton
>            Priority: Minor
>             Fix For: 1.8.0
>
> I was helping a new user trying to use Accumulo. They managed to set up HDFS, running on hdfs://localhost:8020. But they didn't set it up with specific settings, and just used the default port. Accumulo worked initially, but would not allow a bulk import.
> During the bulk import process, the servers need to move the files into the accumulo volumes, but keeping the volume the same. This makes the move efficient, since nothing is copied between namespaces. In this case it refused the import because it could not find the correct volume.
> Accumulo needs to be more nuanced when comparing hdfs://localhost:8020 and hdfs://localhost.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
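The "more nuanced" comparison the issue asks for amounts to normalizing the default port before comparing URIs. A rough illustration (this is not Accumulo's actual code, and treating 8020 as the default namenode port is an assumption for the example):

```java
import java.net.URI;

public class VolumeCompare {
    // Assumed default HDFS namenode port, for illustration only.
    static final int DEFAULT_HDFS_PORT = 8020;

    // Two volume URIs refer to the same volume if scheme and host
    // match and their ports match after filling in the default.
    static boolean sameVolume(String a, String b) {
        URI ua = URI.create(a), ub = URI.create(b);
        return eq(ua.getScheme(), ub.getScheme())
            && eq(ua.getHost(), ub.getHost())
            && port(ua) == port(ub);
    }

    // URI.getPort() returns -1 when no port is given.
    static int port(URI u) {
        return u.getPort() == -1 ? DEFAULT_HDFS_PORT : u.getPort();
    }

    static boolean eq(Object a, Object b) {
        return a == null ? b == null : a.equals(b);
    }

    public static void main(String[] args) {
        System.out.println(
            sameVolume("hdfs://localhost:8020", "hdfs://localhost"));
    }
}
```

With this normalization, hdfs://localhost:8020 and hdfs://localhost compare equal, which is the behavior the reporter expected.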
http://mail-archives.apache.org/mod_mbox/accumulo-notifications/201507.mbox/%3CJIRA.12848941.1437757903000.300302.1438026784779@Atlassian.JIRA%3E
Florian Weimer wrote:
| <>

Some valid criticism there. You missed out the main one that I found when I first started using arch, which was that the tla command line syntax is hideous.

> Arch does not implement a distributed system. For example, its archive
> replication does not transparently handle write operations.

Aaron Bentley's branch claims to support this (though I haven't got around to trying it yet).

> The idea to automatically subject files to revision control, based on
> regular expressions, is very hard to deal with for users. While being
> an interesting experiment, it does not lead to increased usability.

I presume you're referring to the "names" tagging method? I agree that it has problems, but it also has its uses and it is just one of the three tagging methods supported by arch -- not even the default one.

> GNU arch does not support a centralized development model which lacks
> a single, designated committer.

Yes, it does. I use it that way with no problems. (The initial set-up required being careful with umasks though, and it'd be really great if arch would support the concept of a "per-archive umask" that it would set before writing to an archive. Perhaps it would be better to see this implemented at a filesystem level but I don't think it'll happen -- although ACLs may come close.)

> Branch creation is not versioned. Branches cannot be deleted. This
> means that branches stay around forever, even after development on
> them has finished. (This could be worked around in the implementation
> by hiding branches, but it doesn't seem to be the right thing to do.)

This is kludged around by archive cycling and "sealing" branches (which hides them from the abrowse command). I don't think that /deleting/ a branch would ever be a good idea (something about changing history and having a consistent global namespace).
> In practice, tla requires four inodes per file in a checked-out
> project tree: one for the file, one for the file ID, and a pristine
> copy of both. This gracious use of inodes can cause problems.

*shrug* It's only twice as many inodes as svn uses. It could be improved, though, but I don't see "lack of inodes" as being a serious problem. My /home partition is a bog-standard ext3 FS and it is 70% full space-wise and only 18% full inode-wise. I suppose it'd be more of a problem if a filesystem was used only to store arch trees, but who does that? :-P

> Redesign the changeset format, probably based on VCDIFF (RFC 3284).

Ewww!

> Do not expose the archive format, but use a changeset server which
> implements access control (and pipelining, to cut down effects of
> network latency).

This would be nice, as an optional feature; but being able to run without having to set up anything on the server is one of the things that I like about arch.

Cameron.
http://lists.gnu.org/archive/html/gnu-arch-users/2004-06/msg00190.html
Overview¶

Parsing Overview¶

Parsing is the process of transforming XML elements and attributes in an instance document into java objects. During parsing an XML schema is used to assist in the transformation. The parser uses the schema to determine which types of objects various elements and attributes should be transformed into.

Schema Resolution

The first step in parsing an instance document is figuring out the schema for it. Typically in a document which conforms to a particular schema all the information about the schema is present on the root element of the document. For example:

<po:purchaseOrder xmlns: ...

The key attribute is xsi:schemaLocation, which contains a namespace-schema location mapping. In the above example the mapping tells us that the schema for the namespace "" can be found in a file named po.xsd. The parser uses this mapping to locate the schema for the document. This is known as Schema Resolution. Once the schema has been "resolved", it is parsed and processing proceeds to the next phase.

Element and Attribute Binding

Once the schema for the document has been resolved, the rest of the document is parsed. As elements and attributes are processed, the schema is used to look up information to assist in parsing the document. The term parsing used here really refers to the act of transforming an element or attribute into a java object. This transformation is performed by a binding. For each element and attribute that is parsed, a binding is located for it. To locate a binding for an element or attribute, the declaration for it is located in the schema. The rules that dictate how element and attribute declarations are resolved are detailed here. Once a declaration has been found, a set of "bindings" for the declaration are derived.
For a single element or attribute, the following bindings may be derived:

- A binding for the element or attribute itself
- A binding for the type of the element or attribute
- A binding for each base type

As an example, consider processing the purchaseOrder element shown above. The following bindings would be derived:

- The purchaseOrder global element declaration
- The PurchaseOrderType type definition (the declared type of the purchaseOrder element)
- The anyType type definition (the base type of all complex type definitions)

Once a set of bindings has been located, they are executed in a defined order, and the element or attribute is transformed into an object. Binding derivation and execution is explained in greater detail here.

Document Processing

As an instance document is parsed, elements and attributes are transformed into objects. The parser can be thought of as a stack computer in which transformed objects are pushed on a stack to later be consumed by other objects. The following diagram pictorially represents the various states of the stack while an instance document is parsed.

The stack is empty as the parser begins to process the instance document.

The leading edge of the purchaseOrder element is reached. On the leading edge of an element, all of its attributes are parsed. In this case the orderDate attribute is parsed into a java.util.Date object, and placed on the stack.

The leading edge of the shipTo element is reached, and its attributes are parsed.

The leading and trailing edges of the street element are reached. For elements themselves, transformation occurs on the trailing edge. In this case, the street element is transformed to a java.lang.String, and placed on the stack.

Similar to State 3, elements are transformed and placed on the stack.

Similar to State 4, elements are transformed and placed on the stack.

The trailing edge of the shipTo element is reached. At this state, all the child elements have been processed and exist on the stack.
In processing the shipTo element, all the values which correspond to child elements and attributes are popped off the stack and used to compose the resulting object for the shipTo element, an instance of Address. The transformed object is then placed on the stack.

The trailing edge of the purchaseOrder element. Similar to State 6, the objects created for child elements and attributes are used to compose the resulting purchaseOrder object, an instance of PurchaseOrder.

The instance document has been processed. The stack contains the single object which corresponds to the root element of the document, in this case purchaseOrder.

Encoding Overview¶

Encoding is the process of serializing a hierarchy of objects as XML. During encoding an XML schema is used to determine how various objects should be encoded as elements / attributes, and to navigate through the hierarchy of objects.

Element and Attribute Binding

As objects are encoded, the XML schema is used to locate bindings to perform the encoding process. During encoding, bindings serve two roles:

- Serialization of objects as elements and attributes
- Navigation among objects by determining which objects correspond to child elements and attributes of a particular element

Binding derivation for encoding is identical to that for parsing, explained here.

Object Processing

As an object tree is encoded, individual objects are serialized as elements and attributes. The following diagram pictorially represents how the encoding process works.

The first step is to encode the root element of the document, the purchaseOrder element, which corresponds to the top object in the tree.

Next the element's type, PurchaseOrderType, is used to move the process forward and infer the next object to encode. The type yields the attribute orderDate.

Continuing through the contents of PurchaseOrderType is the shipTo element. Since the shipTo element is complex, the encoding process recurses into its type, USAddress, and continues on.
The type yields the country attribute.

Continuing through the contents of USAddress is the street element. And the state element. And the zip element.

All the contents of the USAddress type have been completed; the shipTo element is closed and recursion pops back to the surrounding type.

All the contents of the PurchaseOrderType have been completed; the purchaseOrder element is closed. Being the root element of the document, there is no containing type and the encoding process is stopped.

There are some situations where the encoder will try to encode complex features against a complex type that is not completely valid. This happens for example when mapping nested entities against a complex type that doesn't respect the GML object-property model. The Java configuration property encoder.relaxed can be set to false to disable this behavior.
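The push-on-the-leading-edge, pop-and-compose-on-the-trailing-edge model described under Document Processing can be illustrated with a plain SAX handler. This is a generic sketch, not GeoTools code; the element names and the Address composition follow the purchaseOrder example:

```java
import java.io.StringReader;
import java.util.ArrayDeque;
import java.util.Deque;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class StackParser extends DefaultHandler {
    private final Deque<Object> stack = new ArrayDeque<>();
    private final StringBuilder text = new StringBuilder();

    @Override
    public void startElement(String uri, String local, String qName,
                             Attributes atts) {
        // Leading edge: start collecting this element's character data.
        text.setLength(0);
    }

    @Override
    public void characters(char[] ch, int start, int len) {
        text.append(ch, start, len);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        if (qName.equals("street") || qName.equals("city")) {
            // Trailing edge of a simple element: push its value.
            stack.push(text.toString().trim());
        } else if (qName.equals("shipTo")) {
            // Trailing edge of a complex element: pop the children
            // off the stack and compose the resulting object.
            String city = (String) stack.pop();
            String street = (String) stack.pop();
            stack.push("Address[" + street + ", " + city + "]");
        }
    }

    public Object result() { return stack.peek(); }

    public static StackParser parse(String xml) throws Exception {
        StackParser handler = new StackParser();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new InputSource(new StringReader(xml)), handler);
        return handler;
    }

    public static void main(String[] args) throws Exception {
        StackParser p = parse(
            "<shipTo><street>1 Oak Ave</street>"
          + "<city>Mill Valley</city></shipTo>");
        System.out.println(p.result()); // Address[1 Oak Ave, Mill Valley]
    }
}
```

After the trailing edge of the root element, the stack holds exactly one object, mirroring the final state in the walkthrough above.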
http://docs.geotools.org/latest/userguide/library/xml/internal/overview.html
Before we jump into specifics, I want to explain some important concepts that will help you understand how XSLT works. An XSLT processor (I'll call it an XSLT engine) takes two things as input: an XSLT stylesheet to govern the transformation process and an input document called the source tree. The output is called the result tree.

The XSLT stylesheet controls the transformation process. While it is usually called a stylesheet, it is not necessarily used to apply style. This is just a term inherited from the original intention of using XSLT to construct XSL-FO trees. Since XSLT is used for many other purposes, it may be better to call it an XSLT script or transformation document, but I will stick with the convention to avoid confusion.

The XSLT processor is a state engine. That is, at any point in time, it has a state, and there are rules to drive processing forward based on the state. The state consists of defined variables plus a set of context nodes, the nodes that are next in line for processing. The process is recursive, meaning that for each node processed, there may be children that also need processing. In that case, the current context node set is temporarily shelved until the recursion has completed.

The XSLT engine begins by reading in the XSLT stylesheet and caching it as a look-up table. For each node it processes, it will look in the table for the best matching rule to apply. The rule specifies what to output to build its part of the result tree, and also how to continue processing. Starting from the root node, the XSLT engine finds rules, executes them, and continues until there are no more nodes in its context node set to work with. At that point, processing is complete and the XSLT engine outputs the result document.

Let us now look at an example. Consider the document in Example 7-1.
<manual type="assembly" id="model-rocket">
  <parts-list>
    <part label="A" count="1">fuselage, left half</part>
    <part label="B" count="1">fuselage, right half</part>
    <part label="F" count="4">steering fin</part>
    <part label="N" count="3">rocket nozzle</part>
    <part label="C" count="1">crew capsule</part>
  </parts-list>
  <instructions>
    <step>
      Glue <part ref="A"/> and <part ref="B"/> together to form the
      fuselage.
    </step>
    <step>
      For each <part ref="F"/>, apply glue and insert it into slots in
      the fuselage.
    </step>
    <step>
      Affix <part ref="N"/> to the fuselage bottom with a small amount
      of glue.
    </step>
    <step>
      Connect <part ref="C"/> to the top of the fuselage. Do not use
      any glue, as it is spring-loaded to detach from the fuselage.
    </step>
  </instructions>
</manual>

Suppose you want to format this document in HTML with an XSLT transformation. The following plain English rules describe the process:

1. Starting with the manual element, set up the "shell" of the document, in this case the html element, title, and metadata.
2. For the parts-list element, create a list of items.
3. For each part with a label attribute, create a li element in the parts list.
4. For each part with a ref attribute, output some text only: the label and name of the part.
5. The instructions element is a numbered list, so output the container element for that.
6. For each step element, output an item for the instructions list.

The stylesheet in Example 7-2 follows the same structure as these English rules, with a template for each.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0">
  <xsl:output method="html"/>

  <!-- Handle the document element: set up the HTML page -->
  <xsl:template match="manual">
    <html>
      <head><title>Instructions Guide</title></head>
      <body>
        <h1>Instructions Guide</h1>
        <xsl:apply-templates/>
      </body>
    </html>
  </xsl:template>

  <!-- Create a parts list -->
  <xsl:template match="parts-list">
    <h2>Parts</h2>
    <dl>
      <xsl:apply-templates/>
    </dl>
  </xsl:template>

  <!-- One use of the <part> element: item in a list -->
  <xsl:template match="part[@label]">
    <dt>
      <xsl:value-of select="@label"/>
    </dt>
    <dd>
      <xsl:apply-templates/>
    </dd>
  </xsl:template>

  <!-- another use of the <part> element: generate part name -->
  <xsl:template match="part[@ref]">
    <xsl:variable name="label" select="@ref"/>
    <xsl:value-of select="//part[@label = $label]"/>
    <xsl:text> (Part </xsl:text>
    <xsl:value-of select="$label"/>
    <xsl:text>)</xsl:text>
  </xsl:template>

  <!-- Set up the instructions list -->
  <xsl:template match="instructions">
    <h2>Steps</h2>
    <ol>
      <xsl:apply-templates/>
    </ol>
  </xsl:template>

  <!-- Handle each item (a <step>) in the instructions list -->
  <xsl:template match="step">
    <li>
      <xsl:apply-templates/>
    </li>
  </xsl:template>
</xsl:stylesheet>

You will notice that each rule in the verbal description has a corresponding template element that contains a balanced (well-formed) piece of XML. Namespaces help the processor tell the difference between what is an XSLT instruction and what is markup to output in the result tree. In this case, XSLT instructions are elements that have the namespace prefix xsl. The match attribute in each template element assigns it to a piece of the source tree using an XSLT pattern, which is based on XPath. A template is a mixture of markup, text content, and XSLT instructions. The instructions may be conditional statements (if these conditions are true, output this), content formatting functions, or instructions to redirect processing to other nodes. The element apply-templates, for example, tells the XSLT engine to move processing to a new set of context nodes, the children of the current node.
The result of running a transformation with the above document and XSLT stylesheet is a formatted HTML page (whitespace may vary):

<html>
  <head><title>Instructions Guide</title></head>
  <body>
    <h1>Instructions Guide</h1>
    <h2>Parts</h2>
    <dl>
      <dt>A</dt>
      <dd>fuselage, left half</dd>
      <dt>B</dt>
      <dd>fuselage, right half</dd>
      <dt>F</dt>
      <dd>steering fin</dd>
      <dt>N</dt>
      <dd>rocket nozzle</dd>
      <dt>C</dt>
      <dd>crew capsule</dd>
    </dl>
    <h2>Steps</h2>
    <ol>
      <li>
        Glue fuselage, left half (Part A) and fuselage, right half
        (Part B) together to form the fuselage.
      </li>
      <li>
        For each steering fin (Part F), apply glue and insert it into
        slots in the fuselage.
      </li>
      <li>
        Affix rocket nozzle (Part N) to the fuselage bottom with a
        small amount of glue.
      </li>
      <li>
        Connect crew capsule (Part C) to the top of the fuselage. Do
        not use any glue, as it is spring-loaded to detach from the
        fuselage.
      </li>
    </ol>
  </body>
</html>

As you see here, the elements in the source tree have been mapped to different elements in the result tree. We have successfully converted a document in one format to another. That is one example of XSLT in action.
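To actually run such a transformation, any XSLT 1.0 engine will do. For example, with the JDK's built-in TrAX API; the tiny stylesheet below is illustrative (not the chapter's Example 7-2) and just shows the wiring: stylesheet in, source tree in, result tree out:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class RunTransform {
    // A minimal stylesheet in the spirit of the part[@label] rule:
    // it outputs a part's name followed by its label.
    static final String XSL =
        "<xsl:stylesheet version='1.0'"
      + "  xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "  <xsl:output method='text'/>"
      + "  <xsl:template match='part'>"
      + "<xsl:value-of select='.'/> (Part <xsl:value-of select='@label'/>)"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    public static String transform(String xml) throws Exception {
        // The Transformer is compiled from the stylesheet once and
        // can then be applied to any source tree.
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                    new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(
            transform("<part label=\"N\">rocket nozzle</part>"));
    }
}
```

The same three-argument pattern (stylesheet source, document source, result sink) works with files or streams in place of the in-memory strings used here.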
http://etutorials.org/Programming/Learning+xml/Chapter+7.+Transformation+with+XSLT/7.2+Concepts/
Show how a foreign language function can be called from the language. As an example, consider calling functions defined in the C language.

- Create a string containing "Hello World!" of the string type typical to the language.
- Pass the string content to C's strdup. The content can be copied if necessary.
- Get the result from strdup and print it using language means.
- Do not forget to free the result of strdup (allocated in the heap).

Notes:

- It is not mandated whether the C run-time library is to be loaded statically or dynamically. You are free to use either way.
- C++ and C solutions can take some other language to communicate with.
- It is not mandatory to use strdup, especially if the foreign function interface being demonstrated makes that uninformative.

While calling C functions from C++ is generally almost trivial, strdup illustrates some fine points in communicating with C libraries. However, to illustrate how to generally use C functions, a C function strdup1 is used, which is assumed to have the same interface and behaviour as strdup, but cannot be found in a standard header. In addition, this code demonstrates a call to a FORTRAN function defined as

    FUNCTION MULTIPLY(X, Y)
    DOUBLE PRECISION MULTIPLY, X, Y

Note that the calling convention of FORTRAN depends on the system and the used FORTRAN compiler, and sometimes even on the command line options used for the compiler; here, GNU Fortran with no options is assumed.

    #include <cstdlib>  // for C memory management
    #include <string>   // for C++ strings
    #include <iostream> // for output

    // C functions must be defined extern "C"
    extern "C" char* strdup1(char const*);

    // Fortran functions must also be defined extern "C" to prevent name
    // mangling; in addition, all fortran names are converted to lowercase
    // and get an underscore appended. Fortran takes all arguments by
    // reference, which translates to pointers in C and C++ (C++
    // references generally work, too, but that may depend on the C++
    // compiler)
    extern "C" double multiply_(double* x, double* y);

    // to simplify the use and reduce the probability of errors, a simple
    // inline forwarder like this can be used:
    inline double multiply(double x, double y)
    {
      return multiply_(&x, &y);
    }

    int main()
    {
      std::string msg = "The product of 3 and 5 is ";

      // call to C function (note that this should not be assigned
      // directly to a C++ string, because strdup1 allocates memory, and
      // we would leak the memory if we wouldn't save the pointer itself)
      char* msg2 = strdup1(msg.c_str());

      // C strings can be directly output to std::cout, so we don't need
      // to put it back into a string to output it.
      std::cout << msg2;

      // call the FORTRAN function (through the wrapper):
      std::cout << multiply(3, 5) << std::endl;

      // since strdup1 allocates with malloc, it must be deallocated with
      // free, not delete, nor delete[], nor operator delete
      std::free(msg2);
    }

Content is available under GNU Free Documentation License 1.2.
https://tfetimes.com/c-call-a-foreign-language-function/
Book Review: Test-Driven JavaScript Development

First off, the audience for this book is JavaScript developers interested in TDD. More specifically, I would identify the audience as the poor developers that have slaved over JavaScript for endless hours only to find out that there are 'discrepancies' in how their JavaScript functions in one browser versus another (or even across versions of the same browser). If you've ever come into work one day to learn that the latest version of Internet Explorer or Mozilla Firefox now throws errors from the deep recesses of your code and you have absolutely no idea where to start, then this book may be an item of interest to you. After all, wouldn't it be great to pull up the new browser and simply watch all your tests complete code coverage with glaring red results listing specific problematic locations?

Secondly, I'd like to establish that I'm writing this review with two key assumptions. The first assumption is that JavaScript is not in and of itself evil. You might hate JavaScript (as did I at one time) but it's a very flexible and enjoyable language when you're not battling some crazy 'feature' that a particular JavaScript engine exhibits or some issue with the dreaded Document Object Model (DOM). The second assumption is that TDD is a net positive when done correctly. To some, it may be a hard sell, and the author of the book is no blind preacher. TDD has its pitfalls and the book adequately notes these, claiming that TDD can actually work against you if used improperly. Feel free to wage wars in the comments debating whether or not the average JavaScript monkey is capable of avoiding pitfalls and learning to write good unit tests; I'm not getting sidetracked in this review on those topics.

This book is divided into four parts. The first part of the book gives you a slight taste of testing right off the bat in chapter one (Automated Testing).
Johansen starts by showing a strftime function written in JavaScript and demonstrates briefly the very clumsy standard method of testing the method in a browser. From there he introduces Assertions, Setup, Teardown and Integration Tests. What I particularly enjoyed about this book is that these key components are not forgotten after introducing them, Johansen constantly nods to the reader when duplicate code could be moved to Setup or Teardown. Chapter two is devoted to 'turning development upside-down.' This chapter analyzes the mentality of writing a test, running the test, watching it fail, making the test pass and then refactoring to remove duplication (if necessary). Johansen stresses and restresses throughout the book that the simplest solution should be added to pass the test. Fight the urge to keep coding when you are sure what comes next and just make sure you have unit tests for that new code. The third chapter runs through many test frameworks in JavaScript and settles in on JsTestDriver weighing the pros and cons of each option. Lastly, it is demonstrated how to use JsTestDriver both inside Eclipse and from the command line (something I deeply appreciated). Chapter Four expands on this by proposing learning tests which are tests that you keep around to try out on new browsers to investigate what you depend on. I'm not entirely sold on this practice but this chapter is definitely worth the look at performance testing it provides in a few of the more complete aforementioned frameworks. The next 145 pages are devoted to the JavaScript language itself. The reader will find out in later chapters why this was necessary but this second part felt too long and left me starving for TDD. There's a ton of great knowledge in these chapters and Johansen demonstrates an impressive display in his understanding of ECMAScript standards (all versions thereof) and all the JavaScript engines that implement them. 
In the following four chapters, the reader is shown the ins and outs of scope, functions, this, closures, anonymous functions, bindings, currying, namespaces, memoization, prototypical inheritance, tons of tricks with properties, mixins, strict mode and even the neat features of tddjs and JSON. What I was most impressed with in this chapter was how much care Johansen took with noting performance pitfalls in all of the above. Example: "closures in loops are generally a performance issue waiting to happen" and on for-in arrays he says "the problem illustrated above can be worked around, as we will see shortly, but not without trading off performance." Johansen seems tireless in enumerating the multitude of ways to accomplish something in JavaScript only to dissect each method critically. If you skip these sections, at least look at 6.1.3, as the bind() implementation developed there becomes critical throughout much of the book's code.

Chapter nine provides yet more dos and don'ts in JavaScript with a tabbed panel example that demonstrates precisely what obtrusive JavaScript is and why it is labeled as such. Chapter ten is definitely not to be skipped over as it provides feature detection methods (specifically with regard to functions and properties) that are seen in later code snippets.

Part two is devoid of any TDD yet rich in demonstrating the power of JavaScript. This is where the book loses a point for me, as this part seemed too long and a lot of these lessons (though informative) really seemed like they belonged in another book on the JavaScript language itself. I constantly wondered when I would start to see TDD, but to a less experienced developer, these chapters are quite enlightening.

In the third part, we finally get to some TDD in which an Observer Pattern (pub/sub) is designed using tests with incremental improvements in true TDD fashion. Most importantly to the audience, we encounter our first browser inconsistencies that are tackled using TDD.
This chapter illustrates how to make your first tdd.js project using the book's code and build your first tests, followed up with the isolation of the code into setup and teardown functions. Rinse, wash, repeat for adding observers, checking for observers and notifying observers (all key functionality in the common observer paradigm). This is a great pragmatic example for TDD, and the chapter wraps up with error checking and a new way to build a constructor. As we do this, we have to make changes to the tests, and Johansen illustrates another critical part of TDD: fixing the tests after you've improved your code.

The twelfth chapter takes our Ajax friend the XMLHttpRequest object and gives it the same treatment as above. Of course, you might know it as the Msxml2.XMLHTTP.6.0 object or a variety of other names, so this is where our browser differences are exposed. On top of that, we're exposed to stubbing in order to test such an object. The author explores three different ways of stubbing it while building tests for GET requests. After building helpers to successfully stub this, we move on to POST, finally send data in a test and then pay attention to the testing of headers. Personally, these two chapters were some of the best in the book and illustrated well a common method of utilizing TDD and stubbing to build up functional JavaScript.

Chapter thirteen builds on the previous chapter by examining polling data in JavaScript and how we might keep open a constant stream of data. Before jumping to the solution, the author investigates strategies like polling intervals and long polling, which have their downfalls. We eventually come to the Comet client (which uses JSON objects) and build up the test cases that support the development of our new streaming data client. One important aspect brought up is the trick of using the Clock object to fake time.
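The fake-Clock idea, advancing a virtual "now" with tick() instead of waiting on real timers, can be sketched like this (an illustration of the concept, not the book's actual API):

```javascript
// Minimal fake clock: timers are recorded rather than scheduled,
// and tick(ms) advances a virtual "now", firing callbacks that
// have come due. Tests can then assert on timing instantly.
function createClock() {
  let now = 0;
  let timers = [];
  return {
    setTimeout(fn, delay) {
      timers.push({ at: now + delay, fn });
    },
    tick(ms) {
      now += ms;
      const due = timers.filter(t => t.at <= now);
      timers = timers.filter(t => t.at > now);
      due.forEach(t => t.fn());
    }
  };
}

// A test can now verify a 1000 ms timeout without waiting a second:
const clock = createClock();
let fired = false;
clock.setTimeout(() => { fired = true; }, 1000);
clock.tick(999);    // not yet due
console.log(fired); // false
clock.tick(1);      // now due
console.log(fired); // true
```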
This was completely new to me and very interesting in simulating time with tick() to quickly fake and test expected lengths of time. Chapter fourteen was definitely outside of my comfort zone. JavaScript on the server side? Blasphemy! Johansen begins to bring together the prior elements to form a fully functional chat server, all in JavaScript, through TDD. In this chapter the reader is introduced to node.js and a custom version of Nodeunit the author modified to make it a little more like JsTestDriver. The controller emerges through the TDD cycles. Responses to POST, adding messages, the domain model and even storage of data are given test cases to ensure we are testing feature after tiny feature. Toward the end of the chapter, an interesting problem arises with our asynchronous interface. In testing it, how do we know what will result from a nested callback? Johansen introduces the concept of a Promise, which is a placeholder that eventually provides a value. Instead of accepting a callback, the asynchronous method returns a promise object which is eventually fulfilled. We can now test adding messages in an asynchronous manner to our chat room. The chapter builds on the chat server to passable functionality — all through TDD. Chapter fifteen concentrates on building the chat client to the above server and in doing so provides the reader with TDD with regard to DOM manipulation and event handling. This chapter finally covers some of the more common problematic aspects of client-side JavaScript. Again, this chapter yielded many tricks that were new to me in TDD. JsTestDriver actually includes two ways to include HTML in a test and Johansen shows how to manipulate the user form on a page in order to test it automatically. The client is developed through TDD and node-paperboy is called in to serve up static files over HTTP with Node.js.
The message list displayed in the client is developed through TDD and then the same process used on the user form is applied to the message form submission. The author brings in some basic CSS, Juicer and YUI Compressor to reduce all our work down into a 14kB js file containing an entire chat client. With gzip enabled it downloads at about 5kB. Potent stuff. I was sad that more pages weren't spent on the final section. Chapter sixteen further expounds upon mocking, spies and stubbing. It lists different strategies and how to inject trouble into your code by creating stubs that blow up on purpose during testing. And we get a sort of abbreviated dose of Sinon, a mocking and stubbing library for JavaScript. The author repeats a few test cases from chapter eleven and moves on to mocking. Mocking is mentioned throughout the book but is passed over due to the amount of work required to manually mock something. The chapter ends with the author saying 'it depends' on whether you should use stubs or mocks, but it's pretty clear the author prefers stubbing as he enumerates the pros and cons of each. Chapter seventeen provides some pretty universal rules of thumb to employ when using TDD. From revealing intent through clear naming to strategies for isolating behavior, it's got good advice for succeeding with TDD. This advice aims to improve readability, generate true unit tests that stay at the unit level and avoid buggy tests. It's worth repeating that he gives a list of 'attacks' for finding deficiencies in tests: "Flip the value of the boolean expressions, remove return values, misspell or null variables and function arguments, introduce off-by-one errors in loops, mutate the value of internal variables." Introduce one deficiency and run the tests. Make sure they break when and where you would expect them to, or your testing isn't as hardened as you might expect. Lastly the author recommends using JsLint (like lint for C).
There's a lot of information in this book but I think that the final examples were actually too interesting for my tastes. Often I grapple with the mundane and annoying parts of client-side DOM — nothing on the server side. While this might change at some point in the future, I couldn't help but feel that the book would have been better with additional examples of more common problems than a chat client in JavaScript. I was certainly impressed with this example, though, and it will hold readers' attention much better than the mundane examples I would have preferred, so I feel comfortable recommending this book with a 9/10 to anyone suffering from browser inconsistencies or looking to do TDD in JavaScript. You can purchase Test-Driven JavaScript Development from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page. Letting it all out (Score:5, Interesting) I know I'm out of the web dev loop, but as I recall, in most cases the web mantra seems to be to crank it out as quickly as possible before it's obsolete. I do like broad, interface-level unit testing (if you can call it that) to test that inputs into the system as a whole result in appropriate output, as these kinds of requirements rarely change, but individually testing classes just adds extra work, extra bulk when making changes, and never seems to find anything! I have run into this so many times, where someone comes up with a really great way to refactor some code, only to find out that while the code change would be trivial, updating 50 unit tests wouldn't be, so it doesn't happen. And I can come up with much better and more productive ways to document behaviour and coordinate between developers (written documents, java style interface classes/etc) than a bunch of cumbersome unit tests. Re:Letting it all out (Score:4, Interesting) I kind of lump TDD in with traditional older-style thinking. I admit I tend to have a kind of waterfall mentality.
You spend months documenting _exactly_ what is needed, months designing the thing down to the tiny component parts, then proceed to basically paraphrase the design into code form. Slow, inflexible, inefficient ... and everything is moving away from it. TDD seems to be a relic of this approach. It would assume you have that first part, a very detailed list of everything you need, up front, before implementation. I've never understood how this is moving forward. To me we should be moving towards approaches that let us change requirements on the fly without it being a massive undertaking. Agile types may feel free to educate me as to why I'm an idiot here! To me TDD binds hands just as effectively as the miles of design and requirement docs did, and this seems like a bad thing. As for the bulk of your comment, I think we need a middle ground. Less hectic and "seat of your pants" ish, but not process overkill. The old "good, cheap, fast - pick 2" argument... but we need a little bit more "good". A Few Responses (Score:5, Interesting) I know I'm out of the web dev loop, but as I recall, in most cases the web mantra seems to be to crank it out as quickly as possible before it's obsolete. I think ECMAScript standards are here to stay for quite a while. I wouldn't worry about too much changing between 5 and 6. This book actually considers all versions of ECMAScript, and ES5 was standardized in 2009 [wikipedia.org], 10 years after the last standardized version. While it takes a while for JavaScript engines to catch up on implementing those standards, I wouldn't spin this sort of stuff as "crank it out quick." I believe this is a false dichotomy that may have been true years ago. Once you see the maturity level of some JavaScript projects that companies like Google are working on, JavaScript doesn't have to be the slapdash crap you are speaking of. It just becomes clear that the language produces products equivalent to how much time is put into it.
Here's one of many good counterexamples to your rule [google.com]. I am interested though: how do you define "solid coding?" That sounds like an ambiguous phrase designed to pick and choose language features as the user personally desires. The recent removal of OO programming from CMU's curriculum might point out how yesterday's mentality goes out of style and comes back two decades later. Ima let you finish. But I just want to say that that is a totally respectable position. Me, personally, I recognize that I have a huge diverse toolbox full of all sorts of tools. Some better for jobs inside the browser, some better for servers and some better for embedded systems. But I would ask you to consider that it's going to be a while before any of these tools can be your silver bullet. There are certainly problems with rendering and layouts being difficult to automatically test. But there are a lot of tools out there (like firebug) that help you pin down what is going wrong in certain browsers. While the book doesn't delve deeply into it, it's something that is easier to deal with than, say, a weir Re:Letting it all out (Score:4, Interesting) TDD seems to be a relic of this approach. It would assume you have that first part, a very detailed list of everything you need, up front, before implementation. You seem to have a misapprehension of TDD. TDD is very agile; you write tests as a way to explore the problem space. Tests can be amended, and even deleted as you go along. It's a tight loop consisting of: - while not finished { - Invent a test, write it -- it is likely to not compile, as it will reference types and methods that don't exist - Write just enough code for the test to compile - skeleton classes with methods that return null - Code until all your tests pass - refactor to remove any duplicate code, re-running the test suite between each refactoring - } It is *not* a matter of writing all your tests upfront.
It's more like "Hmm, my game is going to need a scoreboard class, and when I construct it, the score should be zero, so I'll write a test for that now."
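The tight loop quoted above can be made concrete with the commenter's own scoreboard example; a minimal sketch in Python with unittest (my illustration of the loop, not code from the thread):

```python
import unittest

# Step 1: invent the test first. It references a Scoreboard class
# that does not exist yet, so at this point the test would fail.
class ScoreboardTest(unittest.TestCase):
    def test_new_scoreboard_starts_at_zero(self):
        self.assertEqual(Scoreboard().score, 0)

# Steps 2-3: write just enough code for the test to pass.
class Scoreboard:
    def __init__(self):
        self.score = 0

# Step 4 would be refactoring, re-running the suite after each change.
```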
http://developers.slashdot.org/story/11/03/28/1344224/Book-Review-Test-Driven-JavaScript-Development/interesting-comments
#include <QtGui>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QMainWindow w;

    QMenu *m1 = w.menuBar()->addMenu("&File");
    QMenu *m2 = w.menuBar()->addMenu("&Edit");
    QMenu *m3 = w.menuBar()->addMenu("&Action");

    m1->addAction("Foo");
    m1->addAction("Doo");

    QAction *a1 = m2->addAction("Bar");
    a1->setEnabled(false);
    QAction *a2 = m2->addAction("Boo");
    a2->setEnabled(false);

    m3->addAction("Foobar");
    m3->addAction("Barfoo");

    w.show();
    return app.exec();
}

Save this in its own folder, and run with:

    qmake -project && qmake && make

If you run qtconfig and select Oxygen, the bug appears: you cannot press right to scroll through the menus. It works with other styles, however.
http://techbase.kde.org/index.php?title=Projects/Oxygen/StyleWinDec&diff=17331&oldid=17159
Remove the blanket except. Your script is not freezing, but any error you get is being ignored in an endless loop. Because you use a blanket except: you catch all exceptions, including the keyboard interrupt exception KeyboardInterrupt. At the very least log the exception, and catch only Exception:

    except Exception:
        import logging
        logging.exception('Oops: error occurred')

KeyboardInterrupt is a subclass of BaseException, not Exception, and won't be caught by this except handler. Take a look at the shutil module for copying files; you're doing way too much work:

    import time
    import shutil
    import os.path

    paths = ('28-000004d2ca5e', '28-000004d2fb20', '28-000004d30568')
    while True:
        for i, name in enumerate(paths, 1):
            src = os.path.join('/sys/bus/w1/devices', name, 'w1_

You're using the wrong command: freeze.py in the pip/commands directory is for the pip freeze command. Use the cxfreeze program:

    cxfreeze /home/frost/Desktop/dd.py

See the cxfreeze script for usage details.

    self.__vector = [self.__vector + vector.__vector for self.__vector, vector.__vector in zip(self.__vector, vector.__vector)]

See that? You're assigning values to self.__vector, vector.__vector in the loop for self.__vector, vector.__vector in zip(self.__vector, vector.__vector).

You are looking at the repr() representation of a string. This is normal. A string representation uses escape codes for non-printable characters or anything that requires escaping. Python containers show their contents, when printed, as string representations for debugging purposes. The resulting string representation is re-usable as a string literal; you can paste that right back into Python and it'll produce the same value.
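The repr() behaviour described in that answer is easy to demonstrate (my example, in Python 3 syntax, unlike the Python 2 snippets above):

```python
# repr() gives the debugging form of a string, escape codes included.
s = "line1\nline2"

shown = repr(s)
# shown is the quoted, escaped literal form: a string you could
# paste straight back into Python to get the same value.
assert eval(shown) == s

print(s)      # prints two lines, unescaped
print(shown)  # prints the single-line literal form
```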
Print individual values if you want to see the output unescaped:

    print temp["key"]

and if you feel so inclined, compare that with the repr() result of the string:

    print repr(temp["key"])

cmath does not support numpy arrays:

    BER2=(3/8)*erfc(sqrt(ebno*(2/5)))+(1/4)*erfc(3*sqrt(2/5*ebno))

You seem to be importing a lot of functions with from foo import *; this can really trip you up. Also, you are using ints (for example 2/5) instead of floats, so the equation above just returns an array of all zeros:

    >>> 2/5
    0
    >>> 2./5
    0.4

I suggest:

    >>> import numpy as np
    >>> import scipy.special as sp
    >>> EbbyNo=np.arange(0.,16.,1)
    >>> ebno=10**(EbbyNo/10)
    >>> BER2=(3./8)*sp.erfc(np.sqrt(ebno*(2./5)))+(1./4)*sp.erfc(3*np.sqrt(2./5*ebno))
    >>> BER2
    array([ 1.40982603e-01, 1.18997473e-01, 9.77418560e-02, 7.74530603e-02, 5.86237373e-02, 4.18927600e-02, 2.78713278e-02, 1.69667344e-02, 9.247

You're mixing different format functions. The old-style % formatting uses % codes for formatting:

    'It will cost $%d dollars.' % 95

The new-style {} formatting uses {} codes and the .format method:

    'It will cost ${0} dollars.'.format(95)

Note that with old-style formatting, you have to specify multiple arguments using a tuple:

    '%d days and %d nights' % (40, 40)

In your case, since you're using {} format specifiers, use .format:

    "'{0}' is longer than '{1}'".format(name1, name2)

Can you play back the caf file? If you only want to record a sound from the microphone to an aac file, you can use Audio Queue Services (I can post some code). Edit: it's an implementation from an Apple dev tutorial, there might be some errors since I modified it to fit your question

    //AudioQ.mm
    @implementation AudioQ
    static const int nBuffer = 3;
    struct AQRecorderState{
        AudioStreamBasicDescription mDataFormat;
        AudioQueueRef mQueue;
        AudioQueueBufferRef mBuffers[nBuffer];
        AudioFileID mAudioFile;
        UInt32 bufferByteSize;
        SInt64 mCurrentPacket;
        bool mIsRunning;
    };
    AQRecorderState aqData;
    CFURLRef url;
    static OSStatus BufferFilledHandler(.
Why aren't you using the except keyword?

    try:
        newbutton['roundcornerradius'] = buttondata['roundcornerradius']
        buttons.append(newbutton)
    except:
        pass

This will try the first part and, if an error is thrown, it will do the except part. You can also catch a specific error like this:

    except AttributeError:

and you can get the caught error by doing this:

    except Exception,e:
        print str(e)

Use a temporary variable to save your string, like:

    temp = "INSERT INTO users(" + ",".join(rows_names) + ") VALUES(" + test2 + ")"
    temp = temp % (name, val, 'SomeNickname', 'password', '13-09-11', '2', '@gmail.com','1')

and then interpolate data into it, or add additional parentheses around ("INSERT INTO users(" + ",".join(rows_names) + ") VALUES(" + test2 + ")") so it will look like

    temp = ("INSERT INTO users(" + ",".join(rows_names) + ") VALUES(" + test2 + ")") % (name, val, 'SomeNickname', 'password', '13-09-11', '2', '@gmail.com','1')

The problem is that you didn't have the complete string, and wanted to insert data into ")".

I'm pretty sure this error was due to DeadlineExceededError, which I did not run into locally. My scrape() script now does its thing on fewer companies and articles, and does not run into the exceeded deadline.

This works for me:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    import gtk

    def win_with_image():
        pixbuf = gtk.gdk.pixbuf_new_from_file("photo.png")
        print pixbuf
        win = gtk.Window(gtk.WINDOW_TOPLEVEL)
        image = gtk.Image()
        image.set_from_pixbuf(pixbuf)
        win.add(image)
        win.connect("destroy", gtk.main_quit)
        win.show_all()

    if __name__ == '__main__':
        win_with_image()
        gtk.main()

If this doesn't work for you, try to:

- start google and type your error and choose the second link (), in general this helps almost always.
- reinstall libglib
- install gtk (maybe some graphical libs - libpng, libjpeg, f.e.)
- reinstall python/gtk package
- fix broken package repository
- change file permissions

Make it an independent function:

    def run_main():
        ....

    if __name__ == "__main__":
        run_main()

And you can call run_main() from another file.

The problem will be that the result of uid = User.by_id_name(linkid) won't be a uid but a User object. Its str method will mean that when you log it, it looks like a uid, but it isn't. This means the comparisons will fail. You should be comparing str(uid) == linkid for your code to work. To prove this, try logging repr(uid) rather than str(uid).

The README.md file () says "Command also requires python-2.7 to use." From the traceback I see you're using Python version 2.6. This could be the problem.

If you really have '.config' in the string, that would be the problem. That's a string literal using c as one of its characters. Even if you have '.\config' or r'.config', both of which specify a literal backslash, that would still be wrong:

    $ cat eleme.py
    import xml.etree.ElementTree as ET
    root = ET.fromstring("""
    <root>
    <config> source </config>
    <config> source </config>
    </root>""")
    print r'using .config', root.findall('.config')
    print r'using .\config', root.findall('.\config')
    print 'using ./config', root.findall('./config')

    $ python2.7 eleme.py
    using .config []
    using .\config []
    using ./config [<Element 'config' at 0x8017a8610>, <Element 'config' at 0x8017a8650>]

A Python import statement is executed just like any other Python code. You can wrap your module import in a try...except block, like so:

    import somemodule
    try:
        from someothermodule import Temperature
    except ImportError,e:
        Temperature = 20

Make all of them lists and then iterate over the list, executing each in turn:

    for actionVal, actionDesc, actionFunctions in validActions:
        if ctx["newAction"] == actionVal:
            for actionFunction in actionFunctions:
                actionFunction()

Use try/finally to make sure the connection is closed:

    import MySQLdb
    con = MySQLdb.connect(...)
    cursor = con.cursor()
    try:
        # do stuff with your DB
    finally:
        con.close()

The finally clause is executed on success as well as on error (exception). If you hit Ctrl-C, you get a KeyboardInterrupt exception.
The same way you protect resources elsewhere: try-except.

    def setUpClass(cls):
        # ... acquire resources
        try:
            # ... some call that may fail
        except SomeError, e:
            # cleanup here

Cleanup could be as simple as calling cls.tearDownClass() in your except block. Then you can call assert(False) or whatever method you prefer to exit the test early.

I also ran into the same issues when trying to install requests; all the options on did not work. I went to and then clicked on "Download Zip" and I got requests-master.zip.

It is not (no longer) recommended you create a subclass; the json.dump() and json.dumps() functions take a default function:

    def decimal_default(obj):
        if isinstance(obj, decimal.Decimal):
            return float(obj)
        raise TypeError

    json.dumps({'x': decimal.Decimal('5.5')}, default=decimal_default)

Demo:

    >>> def decimal_default(obj):
    ...     if isinstance(obj, decimal.Decimal):
    ...         return float(obj)
    ...     raise TypeError
    ...
    >>> json.dumps({'x': decimal.Decimal('5.5')}, default=decimal_default)
    '{"x": 5.5}'

The code you found only worked on Python 2.6 and overrides a private method that is no longer called in later versions.

Although you can encode these characters, they're still at best "frowned upon". See for a list of "bad" characters. Then, see this 1.1 spec as well, which adds some back as allowed in some cases, as "restricted" characters. If the text legitimately should be able to include these characters, it's wise to encode it first, e.g., with base64 encoding. The receiver thus gets well-formed XML (for XML 1.1, it's not always required but that will make it compatible with 1.0). I had to deal with externally-supplied invalid XML myself once before, where I had no control over the sender. It's pretty messy.
In my case I could rely on certain patterns, and hence use regular expressions to "patch away" improprieties, but this is a hack: a workaround done out of d

Try this:

    asciidoc_call = ["asciidoc", "-b", "docbook45", asciidoc_file_name]

The other call would invoke asciidoc with "-b docbook45" as one single option, which won't work.
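The base64 approach recommended in the XML answer above can be sketched briefly (my illustration, not code from the original answers):

```python
import base64
import xml.etree.ElementTree as ET

# Payload containing a NUL byte, which is illegal even in XML 1.1.
raw = "status\x00report"

# Sender: encode first, so the document is always well-formed.
elem = ET.Element("message", attrib={"enc": "base64"})
elem.text = base64.b64encode(raw.encode("utf-8")).decode("ascii")
doc = ET.tostring(elem)

# Receiver: parse the well-formed XML, then decode the payload.
roundtrip = base64.b64decode(ET.fromstring(doc).text).decode("utf-8")
assert roundtrip == raw
```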
http://www.w3hello.com/questions/Python-file-py-converted-to-exe-fails-to-execute-Python-3-4-cx-Freeze-
An nsIRDFInferDataSource is implemented by an infer engine. More... import "nsIRDFInferDataSource.idl"; Definition at line 49 of file nsIRDFInferDataSource.idl. Add an observer to this data source. If the datasource supports observers, the datasource should hold a strong reference to the observer. Get a cursor to iterate over all the arcs that point into a node. Get a cursor to iterate over all the arcs that originate in a resource. Add an assertion to the graph. Notify observers that the datasource is about to send several notifications at once. This must be followed by calling endUpdateBatch(), otherwise viewers will get out of sync. Change an assertion from [aSource]--[aProperty]-->[aOldTarget] to [aSource]--[aProperty]-->[aNewTarget] Perform the specified command on a set of sources. Notify observers that the datasource has completed issuing a notification group. Returns the set of all commands defined for a given source. Retrieve all of the resources that the data source currently refers to. Find an RDF resource that points to a given node over the specified arc & truth value. Find all RDF resources that point to a given node over the specified arc & truth value. Find a child of that is related to the source by the given arc and truth value. Find all children of that are related to the source by the given arc and truth value. Returns true if the specified node is pointed to by the specified arc. Equivalent to enumerating ArcLabelsIn and comparing for the specified arc. Returns true if the specified node has the specified outward arc. Equivalent to enumerating ArcLabelsOut and comparing for the specified arc. Query whether an assertion exists in this graph. Returns whether a given command is enabled for a set of sources. 'Move' an assertion from [aOldSource]--[aProperty]-->[aTarget] to [aNewSource]--[aProperty]-->[aTarget] Remove an observer from this data source. Remove an assertion from the graph. The wrapped datasource.
The InferDataSource contains all arcs from the wrapped datasource plus those inferred by the vocabulary implemented by the InferDataSource. Definition at line 58 of file nsIRDFInferDataSource.idl. The "URI" of the data source. This is used by the RDF service's |GetDataSource()| method to cache datasources. Definition at line 56 of file nsIRDFDataSource.idl.
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_r_d_f_infer_data_source.html
12 October 2009 16:38 [Source: ICIS news] LONDON (ICIS news)--The North East Process Industry Cluster (NEPIC) is conducting a study into the needs of businesses at the Wilton, UK, chemical complex, said Bob Coxon, who is to oversee the writing of the report, on Monday. Coxon, current chairman of the North East England Science and Industry Council, said the purpose of the report was to assess the needs of businesses at the Wilton complex. The study, which would be presented to the government, was expected to be completed within weeks, Coxon said. Both Stan Higgins, CEO of the North East Process Industry Cluster, and a Department for Business, Innovation and Skills spokeswoman denied rumours last week that Dow Chemical was in talks with US-based Third Coast Chemicals over the sale of the ethylene oxide plant. Dow is going ahead with plans to mothball the plant.
http://www.icis.com/Articles/2009/10/12/9254603/industry-body-to-report-on-wilton-uk-business-needs.html
I’m building an API with the Flask library in Python. In my API, I try to call a function that may take a while to finish executing. Sometimes it finishes in less than a second, and sometimes it takes a few minutes. How do I kill it if it takes more than 12 seconds, without actually sleeping for 12 seconds in my program?

    from flask import Flask
    from multiprocessing import Process

    app = Flask(__name__)

    def evaluate(to_eval):
        # ... (might take a while)
        evaluated = <evaluated>
        return evaluated

    @app.route('/api/')
    def api():
        # ...
        to_eval = <some_value>
        process = Process(target=evaluate, args=(to_eval,))
        evaluated = None
        status = 'success'
        process.start()
        process.join(12)
        if process.is_alive():
            process.terminate()
            process.join()
            status = 'timeout'
        return {'evaluated': evaluated, 'status': status}

I tried the above, but it sleeps for 12 seconds even if the function finishes executing in less than a second. I want the response to be returned if the function finishes executing by 12 seconds. If it doesn’t, the code should return a response with an error message saying “Timeout” or something like that. Can someone please help?
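For reference, Process.join(timeout) returns as soon as the child exits, so the full 12-second wait should not be inherent to this pattern. A standalone sketch outside Flask (my own code, with a result queue added, since a Process target cannot return a value directly):

```python
import time
from multiprocessing import Process, Queue

def evaluate(to_eval, out):
    # Stand-in for the slow function: here it finishes immediately.
    out.put(to_eval * 2)

def run_with_timeout(to_eval, timeout=12.0):
    out = Queue()
    proc = Process(target=evaluate, args=(to_eval, out))
    start = time.monotonic()
    proc.start()
    proc.join(timeout)            # returns early if the child finishes first
    elapsed = time.monotonic() - start
    if proc.is_alive():           # still running after the timeout
        proc.terminate()
        proc.join()
        return None, "timeout", elapsed
    return out.get(), "success", elapsed
```

With a fast child, run_with_timeout comes back in well under 12 seconds; only a child that outlives the timeout is terminated.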
https://proxies-free.com/tag/flask/
Position a CSS background like a block element Percentages in the CSS background-position values “refer to the size of the background positioning area minus size of background image” (source). Chris Coyier illustrated it wonderfully on css-tricks.com and as he said, it's a really clever and intuitive way of doing it. Sometimes though, you may want to position the background as though it was an element with position:absolute. It's trivial if the element has a fixed dimension. You just use pixels instead of percentages. But there's a small arithmetic problem to solve if your layout is fluid. I had to figure out that problem recently in a project which involved elements with a gradient progress bar as a background in a variable-width design. I didn't want to just scale the background because I wanted to preserve the aspect of the gradient. Example Consider the following: …where: 100% is the width of the element, w is the relative width of the progress bar's gradient (30% in this example), p is the percentage of progress so far (75% in this example). You want to position the gradient 1 - p from the right. At right 100% (p = 0%) you end up with the left end of the gradient aligned with the left border of the element. 0% (wrong) Formula Now since the percentage position is calculated in relation to 1 - w, you just need to bring that value back to 1 with a multiplicative inverse and calculate p′ = (1 ÷ (1 - w)) × (1 - p). For example, if the progress is 75% and you want to position the right edge of the gradient at exactly 25% from the right border, the background-position will be: (1 ÷ (1 - w)) × (1 - p) = (1 ÷ (1 - 0.3)) × (1 - 0.75) = (1 ÷ 0.7) × 0.25 = 1.43 × 0.25 = 0.36, or 36%. 75% (positioned at 36% from the right) 0% (positioned at 143% from the right to get it right) Warning Because of the way background image percentage positions are multiplied by 1 - w, the multiplier is zero when the background is stretched to 100% of the width of the element.
This is one of the reasons my gradient is scaled down to 30% with background-size: 30%. For Django templates Here is a Django filter that will convert a progress percentage into the required right-positioning percentage. It takes one argument, the relative width of the background as a floating point number.

    from django import template

    register = template.Library()

    @register.filter
    def relative_bg_pos(value, arg):
        """Calculates the % offset of the background like an absolutely
        positioned block rather than the way browsers calculate the
        background-position. See also:
        """
        from_right = 1 - value
        bg_width = float(arg)
        size_ratio = 1 / (1 - bg_width)
        return "{:.2%}".format(size_ratio * from_right)

Use it in a template like so. Suppose that the context variable p contains the progress as a floating point value between 0 and 1. Notice how the background-size is 30% and I passed 0.3 as a parameter to the relative_bg_pos filter.

    <style>
    .progressbar {
        /* Permalink - use to edit and share this gradient: */
        background: -moz-linear-gradient(left, rgba(86,88,137,0) 0%, rgba(86,88,137,0.24) 100%); /* FF3.6+ */
        background: -webkit-gradient(linear, left top, right top, color-stop(0%,rgba(86,88,137,0)), color-stop(100%,rgba(86,88,137,0.24))); /* Chrome,Safari4+ */
        background: -webkit-linear-gradient(left, rgba(86,88,137,0) 0%,rgba(86,88,137,0.24) 100%); /* Chrome10+,Safari5.1+ */
        background: -o-linear-gradient(left, rgba(86,88,137,0) 0%,rgba(86,88,137,0.24) 100%); /* Opera 11.10+ */
        background: -ms-linear-gradient(left, rgba(86,88,137,0) 0%,rgba(86,88,137,0.24) 100%); /* IE10+ */
        background: linear-gradient(to right, rgba(86,88,137,0) 0%,rgba(86,88,137,0.24) 100%); /* W3C */
        filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#00565889', endColorstr='#3d565889',GradientType=1 ); /* IE6-9 */
        background-size: 30%;
        background-repeat: no-repeat;
    }
    </style>
    <div class="progressbar" style="background-position: {{ p|relative_bg_pos:"0.3" }} top"></div>
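Since the filter is plain arithmetic, the worked example can be checked in standalone Python (my version of the same formula, outside Django):

```python
def relative_bg_pos(progress, bg_width):
    # background-position offset measured from the right, scaled by
    # 1 / (1 - w) so the background behaves like an absolutely
    # positioned block.
    from_right = 1.0 - progress
    return (1.0 / (1.0 - bg_width)) * from_right

# Worked example from the post: 75% progress, 30%-wide gradient.
pos = relative_bg_pos(0.75, 0.3)   # about 0.357, i.e. roughly 36%
# At 0% progress the offset is 1 / 0.7, i.e. about 143% from the right.
```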
https://alexandre.deverteuil.net/post/position-css-background-block-element/
On 02/22/2011 12:34 PM, Markus Gothe wrote:
> Please see attached patch for ioctl() on FreeBSD and Darwin. Their definition
> differ from (int, int, ...) and the gnulib variant doesn't work well on
> 64-bit Darwin with this proto.

Can you please convince your mailer to send patches with MIME type text/plain, rather than encoded as application/octet-stream?

> --- lib/ioctl.c.old	2011-02-22 20:21:11.000000000 +0100
> +++ lib/ioctl.c	2011-02-22 20:24:38.000000000 +0100
> @@ -28,7 +28,11 @@
>  /* Provide a wrapper with the POSIX prototype. */
>  # undef ioctl
>  int
> +#if __FreeBSD__ || __Darwin__
> +rpl_ioctl (int fd, unsigned long request, ... /* {void *,char *} arg */)
> +#else
>  rpl_ioctl (int fd, int request, ... /* {void *,char *} arg */)
> +#endif

This part is wrong - the replacement should ALWAYS match the POSIX signature, and the type munging take place within the replacement, rather than declaring the replacement with the wrong type.

> +++ m4/ioctl.m4	2011-02-22 20:26:10.000000000 +0100
> @@ -24,7 +24,13 @@
>  [AC_COMPILE_IFELSE(
>  [AC_LANG_PROGRAM(
>  [[#include <sys/ioctl.h>]],
> -[[extern int ioctl (int, int, ...);]])
> +[[
> +#if __FreeBSD__ || __Darwin__
> +extern int ioctl (int, unsigned long, ...);
> +#else
> +extern int ioctl (int, int, ...);
> +#endif
> +]])
> ],
> [gl_cv_func_ioctl_posix_signature=yes],
> [gl_cv_func_ioctl_posix_signature=no])

This is wrong as well - the whole point of this test is to reject the FreeBSD/Darwin ioctl signature as non-compliant, so that the rest of the code will provide a correct signature wrapper in the form of rpl_ioctl. What is the exact failure you are seeing, and on which project?

--
Eric Blake   address@hidden   +1-801-349-2682
Libvirt virtualization library
http://lists.gnu.org/archive/html/bug-gnulib/2011-02/msg00269.html
Important: Please read the Qt Code of Conduct - Help with QtGStreamer undefined reference to QGST::init I am having some trouble with implementing QtGStreamer with Qt. I have downloaded the source code for QtGStreamer and put it in "C:/qt-gstreamer-1.2.0". I am trying to run one of the examples that was provided, the player example. Here is my code:

    TEMPLATE = app
    TARGET = player
    CONFIG += silent
    CONFIG += pkgconfig

    contains(QT_VERSION, ^4\\..*) {
        PKGCONFIG += QtGStreamer-1.0 QtGStreamerUi-1.0 QtGlib-2.0 QtGStreamerUtils-1.0
        QT += widgets
    }
    contains(QT_VERSION, ^5\\..*) {
        PKGCONFIG += Qt5GStreamer-1.0 Qt5GStreamerUi-1.0 Qt5Glib-2.0 Qt5GStreamerUtils-1.0 Qt5GStreamerQuick-1.0
        QT += widgets \
            enginio
    }

    QMAKE_CXXFLAGS += -std=c++0x
    DEFINES += QT_NO_KEYWORDS

    HEADERS += mediaapp.h player.h
    SOURCES += main.cpp mediaapp.cpp player.cpp

    INCLUDEPATH += C:\qt-gstreamer-1.2.0\src \
        C:\boost_1_58_0

main.cpp

    #include "mediaapp.h"
    #include <QApplication>
    #include <QGst/Init>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        QGst::init(&argc, &argv);

        MediaApp media;
        media.show();
        if (argc == 2) {
            media.openFile(argv[1]);
        }

        return app.exec();
    }

There are other cpp files in the example, but they are all giving the same type of error. When I try to build the project, I get the error "undefined reference to 'QGst::init'". I cannot figure out what I am doing wrong. I think there needs to be something changed in the .pro file, but I couldn't find anything telling me what to do to fix it. I am on Windows, using the MinGW compiler. Hi and welcome to devnet, Might be a silly question but: do you have pkg-config running on Windows? Yes, I do have it running, however now I am getting an error "QtCore/QtGlobal: No such file or directory", so I added the include path "C:\Qt\Qt5.4.1\5.4\mingw491_32\include" but I am still getting the same error. Then there's something wrong with either your project or your setup.
You shouldn't need to add anything to find Qt's headers.

Do you have any ideas that could help?

Recheck your .pro file: do you have any line like QT = or INCLUDEPATH =? Note the missing +.

No, this is my code:

# This is a qmake project file, provided as an example on how to use qmake with QtGStreamer.
TEMPLATE = app
TARGET = player

# produce nice compilation output
CONFIG += silent

# Tell qmake to use pkg-config to find QtGStreamer.
CONFIG += pkgconfig

# Now tell qmake to link to QtGStreamer and also use its include path and Cflags.
contains(QT_VERSION, ^4\\..*) {
    PKGCONFIG += QtGStreamer-1.0 QtGStreamerUi-1.0
}
contains(QT_VERSION, ^5\\..*) {
    PKGCONFIG += Qt5GStreamer-1.0 Qt5GStreamerUi-1.0
}

QT += core quick widgets
CONFIG += qt console bootstrap

# Recommended if you are using g++ 4.5 or later. Must be removed for other compilers.
#QMAKE_CXXFLAGS += -std=c++0x

# Recommended, to avoid possible issues with the "emit" keyword
# You can otherwise also define QT_NO_EMIT, but notice that this is not a documented Qt macro.
DEFINES += QT_NO_KEYWORDS

# Input
HEADERS += mediaapp.h player.h
SOURCES += main.cpp mediaapp.cpp player.cpp

INCLUDEPATH += C:\qt-gstreamer-1.2.0\src

Lines 3 and 4 (TEMPLATE and TARGET) made no difference when I added a '+' before the '='.

TEMPLATE and TARGET should be used with =; they contain only one value. Do you still have that error if you don't use pkgconfig?

Then create a default widget project and see if you can build it.

@SGaist Hi, I built a blank widget project, and I also built one of the examples from Qt, and both worked fine.

Then keep that one and introduce the elements of your other project one by one until it either fails to build or builds successfully.

For every element in the pro file, it does not fail. But if I add '#include <QGst/Init>' I get the error 'QtCore/QtGlobal: No such file or directory'. Then if I comment it back out, run qmake, and build again, I get no error.
If I comment it out I get the error 'cannot find boost/config.hpp' until I run qmake again. I added 'C:\boost_1_58_0' to the include path, commented out '#include <QGst/Init>', ran qmake, and then uncommented it, and it built with no error. Then I ran qmake again, and it gave me the 'QtCore/QtGlobal' error again.

Ok… Then the silly question: do you have a QtGlobal file? If so, where is it?

Yes, I have 5 copies. They are in:

C:\Qt\Qt5.4.1\5.4\android_armv5\include\QtCore\QtGlobal
C:\Qt\Qt5.4.1\5.4\android_armv7\include\QtCore\QtGlobal
C:\Qt\Qt5.4.1\5.4\android_x86\include\QtCore\QtGlobal
C:\Qt\Qt5.4.1\5.4\mingw491_32\include\QtCore\QtGlobal
C:\Qt\Qt5.4.1\5.4\Src\qtbase\include\QtCore\QtGlobal

Should I have a different one somewhere else? I am building with MinGW.

Did you modify your kits?

No. I even just got done uninstalling and reinstalling Qt 5.4.1 and Qt Creator 3.3.2, and I got the same QtCore error. I then added back the pkgconfig and config lines; then, instead of '#include <QGst/Init>', I used '#include <QGst/init.h>' and I got back to the error of undefined reference to 'QGst::init'.

Check the output of pkg-config for QtGStreamer and compare it to the build output of your application. Check the -I lines to see if something is currently modifying them in the wrong way.
https://forum.qt.io/topic/54839/help-with-qtgstreamer-undefined-reference-to-qgst-init/17
This HTML version of Think DSP is provided for convenience, but it is not the best format for the book. In particular, some of the symbols are not rendered correctly. You might prefer to read the PDF version. You can buy this book at Amazon.

We've been using the Discrete Fourier Transform (DFT) since Chapter 1, but I haven't explained how it works. Now is the time. If you understand the Discrete Cosine Transform (DCT), you will understand the DFT. The only difference is that instead of using the cosine function, we'll use the complex exponential function. I'll start by explaining complex exponentials, then I'll follow the same progression as in Chapter 6.

The code for this chapter is in chap07.ipynb, which is in the repository for this book (see Section 0.2). You can also view it at.

One of the more interesting moves in mathematics is the generalization of an operation from one type to another. For example, factorial is a function that operates on integers; the natural definition for factorial of n is the product of all integers from 1 to n. If you are of a certain inclination, you might wonder how to compute the factorial of a non-integer like 3.5. Since the natural definition doesn't apply, you might look for other ways to compute the factorial function, ways that would work with non-integers. In 1730, Leonhard Euler found one, a generalization of the factorial function that we know as the gamma function.

Euler also found one of the most useful generalizations in applied mathematics, the complex exponential function. The natural definition of exponentiation is repeated multiplication. For example, φ^3 = φ · φ · φ. But this definition doesn't apply to non-integer exponents. However, exponentiation can also be expressed as a power series:

exp(x) = 1 + x + x^2/2! + x^3/3! + ...

This definition works with real numbers, imaginary numbers and, by a simple extension, with complex numbers.
Applying this definition to a pure imaginary number, iφ, we get

exp(iφ) = 1 + iφ − φ^2/2! − iφ^3/3! + φ^4/4! + ...

By rearranging terms, we can show that this is equivalent to Euler's formula:

exp(iφ) = cos φ + i sin φ

You can see the derivation in the Wikipedia article on Euler's formula. This formula implies that exp(iφ) is a unit complex number with magnitude 1 and angle φ; any complex number can be written in the form A exp(iφ), where A is a real number that indicates amplitude and exp(iφ) is a unit complex number that indicates angle.

NumPy provides a version of exp that works with complex numbers:

>>> phi = 1.5
>>> z = np.exp(1j * phi)
>>> z
(0.0707+0.997j)

Python uses j to represent the imaginary unit, rather than i. A number ending in j is considered imaginary, so 1j is just i.

When the argument to np.exp is imaginary or complex, the result is a complex number; specifically, an np.complex128, which is represented by two 64-bit floating-point numbers. In this example, the result is 0.0707+0.997j.

Complex numbers have attributes real and imag:

>>> z.real
0.0707
>>> z.imag
0.997

To get the magnitude, you can use the built-in function abs or np.absolute:

>>> abs(z)
1.0
>>> np.absolute(z)
1.0

To get the angle, you can use np.angle:

>>> np.angle(z)
1.5

This example confirms that exp(iφ) is a complex number with magnitude 1 and angle φ radians.

If φ(t) is a function of time, exp(iφ(t)) is also a function of time. Specifically,

exp(iφ(t)) = cos φ(t) + i sin φ(t)

This function describes a quantity that varies in time, so it is a signal. Specifically, it is a complex exponential signal.

In the special case where the frequency of the signal is constant, φ(t) is 2πft and the result is a complex sinusoid:

exp(i 2πft) = cos 2πft + i sin 2πft

Or more generally, the signal might start at a phase offset φ0, yielding

exp(i (2πft + φ0))

thinkdsp provides an implementation of this signal, ComplexSinusoid:

class ComplexSinusoid(Sinusoid):
    def evaluate(self, ts):
        phases = PI2 * self.freq * ts + self.offset
        ys = self.amp * np.exp(1j * phases)
        return ys

ComplexSinusoid inherits __init__ from Sinusoid. It provides a version of evaluate that is almost identical to Sinusoid.evaluate; the only difference is that it uses np.exp instead of np.sin.
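As a quick numerical check of Euler's formula, the two sides can be compared directly. This snippet is my addition, not from the book, and it uses only the standard library's cmath module rather than NumPy:

```python
import cmath

phi = 1.5
z = cmath.exp(1j * phi)                       # exp(i*phi)
w = complex(cmath.cos(phi), cmath.sin(phi))   # cos(phi) + i*sin(phi)

# The two sides of Euler's formula agree to floating-point precision.
assert abs(z - w) < 1e-12

# exp(i*phi) is a unit complex number: magnitude 1, angle phi.
assert abs(abs(z) - 1.0) < 1e-12
assert abs(cmath.phase(z) - phi) < 1e-12

print(z)
```

The same check works for any real phi, since exp(iφ) always lands on the unit circle.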
The result is a NumPy array of complex numbers:

>>> signal = thinkdsp.ComplexSinusoid(freq=1, amp=0.6, offset=1)
>>> wave = signal.make_wave(duration=1, framerate=4)
>>> wave.ys
[ 0.324+0.505j -0.505+0.324j -0.324-0.505j  0.505-0.324j]

The frequency of this signal is 1 cycle per second; the amplitude is 0.6 (in unspecified units); and the phase offset is 1 radian. This example evaluates the signal at 4 places equally spaced between 0 and 1 second. The resulting samples are complex numbers.

Just as we did with real sinusoids, we can create compound signals by adding up complex sinusoids with different frequencies. And that brings us to the complex version of the synthesis problem: given the frequency and amplitude of each complex component, how do we evaluate the signal?

The simplest solution is to create ComplexSinusoid objects and add them up.

def synthesize1(amps, fs, ts):
    components = [thinkdsp.ComplexSinusoid(freq, amp)
                  for amp, freq in zip(amps, fs)]
    signal = thinkdsp.SumSignal(*components)
    ys = signal.evaluate(ts)
    return ys

This function is almost identical to synthesize1 in Section 6.1; the only difference is that I replaced CosSignal with ComplexSinusoid. Here's an example:

amps = np.array([0.6, 0.25, 0.1, 0.05])
fs = [100, 200, 300, 400]
framerate = 11025
ts = np.linspace(0, 1, framerate)
ys = synthesize1(amps, fs, ts)

The result is a complex wave array; a segment of it is plotted in Figure 7.1.

At the lowest level, a complex signal is a sequence of complex numbers. But how should we interpret it? We have some intuition for real signals: they represent quantities that vary in time; for example, a sound signal represents changes in air pressure. But nothing we measure in the world yields complex numbers.

Figure 7.1: Real and imaginary parts of a mixture of complex sinusoids.

So what is a complex signal? I don't have a satisfying answer to this question.
The best I can offer is two unsatisfying answers:

1. A complex signal is a mathematical abstraction; it is useful for computation and analysis, but it does not correspond directly to anything we can measure in the real world.

2. If you like, you can think of a complex signal as a pair of real signals: its real part and its imaginary part.

Taking the second point of view, we can split the previous signal into its real and imaginary parts:

n = 500
thinkplot.plot(ts[:n], ys[:n].real, label='real')
thinkplot.plot(ts[:n], ys[:n].imag, label='imag')

Figure 7.1 shows a segment of the result. The real part is a sum of cosines; the imaginary part is a sum of sines. Although the waveforms look different, they contain the same frequency components in the same proportions. To our ears, they sound the same (in general, we don't hear phase offsets).

As we saw in Section 6.2, we can also express the synthesis problem in terms of matrix multiplication:

PI2 = np.pi * 2

def synthesize2(amps, fs, ts):
    args = np.outer(ts, fs)
    M = np.exp(1j * PI2 * args)
    ys = np.dot(M, amps)
    return ys

Here's the example from the previous section again:

>>> ys = synthesize2(amps, fs, ts)

The result is the same. In this example the amplitudes are real, but they could also be complex. What effect does a complex amplitude have on the result?

Remember that we can think of a complex number in two ways: either the sum of a real and imaginary part, x + iy, or the product of a real amplitude and a complex exponential, A exp(iφ0). Using the second interpretation, we can see what happens when we multiply a complex amplitude by a complex sinusoid. For each frequency, f, we have:

A exp(iφ0) · exp(i 2πft) = A exp(i (2πft + φ0))

Multiplying by A exp(iφ0) multiplies the amplitude by A and adds the phase offset φ0.

Figure 7.2: Real part of two complex signals that differ by a phase offset.

We can test that claim by running the previous example with complex amplitudes:

phi = 1.5
amps2 = amps * np.exp(1j * phi)
ys2 = synthesize2(amps2, fs, ts)

thinkplot.plot(ts[:n], ys.real[:n])
thinkplot.plot(ts[:n], ys2.real[:n])

Since amps is an array of reals, multiplying by np.exp(1j * phi) yields an array of complex numbers with phase offset phi radians, and the same magnitudes as amps. Figure 7.2 shows waveforms with different phase offsets.
With φ0 = 1.5, each frequency component gets shifted by about a quarter of a cycle. But components with different frequencies have different periods; as a result, each component is shifted by a different amount in time. When we add up the components, the resulting waveforms look different.

Now that we have the more general solution to the synthesis problem – one that handles complex amplitudes – we are ready for the analysis problem. The analysis problem is the inverse of the synthesis problem: given a sequence of samples, y, and knowing the frequencies that make up the signal, can we compute the complex amplitudes of the components, a?

As we saw in Section 6.3, we can solve this problem by forming the synthesis matrix, M, and solving the system of linear equations, M a = y, for a.

def analyze1(ys, fs, ts):
    args = np.outer(ts, fs)
    M = np.exp(1j * PI2 * args)
    amps = np.linalg.solve(M, ys)
    return amps

analyze1 takes a (possibly complex) wave array, ys, a sequence of real frequencies, fs, and a sequence of real times, ts. It returns a sequence of complex amplitudes, amps.

Continuing the previous example, we can confirm that analyze1 recovers the amplitudes we started with. For the linear system solver to work, M has to be square, so we need ys, fs and ts to have the same length. I'll ensure that by slicing ys and ts down to the length of fs:

>>> n = len(fs)
>>> amps2 = analyze1(ys[:n], fs, ts[:n])
>>> amps2
[ 0.60+0.j  0.25-0.j  0.10+0.j  0.05-0.j]

These are approximately the amplitudes we started with, although each component has a small imaginary part due to floating-point errors.

Solving a linear system is relatively slow. We can do better if the synthesis matrix is unitary, because then its inverse is just its conjugate transpose. To check, let's construct M for a small example:

N = 4
ts = np.arange(N) / N
fs = np.arange(N)
args = np.outer(ts, fs)
M = np.exp(1j * PI2 * args)

If M is unitary, M*M = I, where M* is the conjugate transpose of M, and I is the identity matrix.
We can test whether M is unitary like this:

MstarM = M.conj().transpose().dot(M)

The result, within the tolerance of floating-point error, is 4I, so M is unitary except for an extra factor of N, similar to the extra factor of 2 we found with the DCT.

We can use this result to write a faster version of analyze1:

def analyze2(ys, fs, ts):
    args = np.outer(ts, fs)
    M = np.exp(1j * PI2 * args)
    amps = M.conj().transpose().dot(ys) / N
    return amps

And test it with appropriate values of fs and ts:

N = 4
amps = np.array([0.6, 0.25, 0.1, 0.05])
fs = np.arange(N)
ts = np.arange(N) / N
ys = synthesize2(amps, fs, ts)
amps3 = analyze2(ys, fs, ts)

Again, the result is correct within the tolerance of floating-point arithmetic:

[ 0.60+0.j  0.25+0.j  0.10-0.j  0.05-0.j]

In preparation for the DFT, it is useful to package the construction of the synthesis matrix into a function:

def synthesis_matrix(N):
    ts = np.arange(N) / N
    fs = np.arange(N)
    args = np.outer(ts, fs)
    M = np.exp(1j * PI2 * args)
    return M

Then I'll write the function that takes ys and returns amps:

def analyze3(ys):
    N = len(ys)
    M = synthesis_matrix(N)
    amps = M.conj().transpose().dot(ys) / N
    return amps

We are almost done; analyze3 computes something very close to the DFT, with one difference. The conventional definition of DFT does not divide by N:

def dft(ys):
    N = len(ys)
    M = synthesis_matrix(N)
    amps = M.conj().transpose().dot(ys)
    return amps

Now we can confirm that my version yields the same result as np.fft.fft:

>>> dft(ys)
[ 2.4+0.j  1.0+0.j  0.4-0.j  0.2-0.j]

The result is close to amps * N. And here's the version in np.fft:

>>> np.fft.fft(ys)
[ 2.4+0.j  1.0+0.j  0.4-0.j  0.2-0.j]

They are the same, within floating point error.

The inverse DFT is almost the same, except we don't have to transpose and conjugate M, and now we have to divide through by N:

def idft(ys):
    N = len(ys)
    M = synthesis_matrix(N)
    amps = M.dot(ys) / N
    return amps

Finally, we can confirm that dft(idft(amps)) yields amps:

>>> ys = idft(amps)
>>> dft(ys)
[ 0.60+0.j  0.25+0.j  0.10-0.j  0.05-0.j]

If I could go back in time, I might change the definition of the DFT so that it divides through by √N. Then the DFT and inverse DFT would be more symmetric.
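The unitarity claim is easy to verify even without NumPy. The following sketch is my addition, not from the book: it builds the N = 4 synthesis matrix with plain lists and checks that M*M equals N times the identity:

```python
import cmath

N = 4
PI2 = 2 * cmath.pi

# Synthesis matrix: M[t][f] = exp(2*pi*i * (t/N) * f)
M = [[cmath.exp(1j * PI2 * (t / N) * f) for f in range(N)]
     for t in range(N)]

# Compute M* M, where M* is the conjugate transpose of M.
MstarM = [[sum(M[k][i].conjugate() * M[k][j] for k in range(N))
           for j in range(N)]
          for i in range(N)]

# Each diagonal entry should be N; each off-diagonal entry should be 0.
for i in range(N):
    for j in range(N):
        expected = N if i == j else 0
        assert abs(MstarM[i][j] - expected) < 1e-9

print("M* M = N I, as claimed")
```

The off-diagonal entries vanish because each one is a sum of the N-th roots of unity, which cancel; the diagonal entries are sums of N ones.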
But I can't go back in time (yet!), so we're stuck with a slightly weird convention. For practical purposes it doesn't really matter.

In this chapter I presented the DFT in the form of matrix multiplication. We compute the synthesis matrix, M, and the analysis matrix, M*. When we multiply M* by the wave array, y, each element of the result is the product of a row from M* and y, which we can write in the form of a summation:

DFT(y)[k] = Σn y[n] exp(−2πi n k / N)

where k is an index of frequency from 0 to N−1 and n is an index of time from 0 to N−1. So DFT(y)[k] is the kth element of the DFT of y.

Normally we evaluate this summation for N values of k, running from 0 to N−1. We could evaluate it for other values of k, but there is no point, because they start to repeat. That is, the value at k is the same as the value at k+N or k+2N or k−N, etc. We can see that mathematically by plugging k+N into the summation:

DFT(y)[k+N] = Σn y[n] exp(−2πi n (k+N) / N)

Since there is a sum in the exponent, we can break it into two parts:

DFT(y)[k+N] = Σn y[n] exp(−2πi n k / N) exp(−2πi n)

In the second term, the exponent is always an integer multiple of 2π, so the result is always 1, and we can drop it:

DFT(y)[k+N] = Σn y[n] exp(−2πi n k / N)

And we can see that this summation is equivalent to DFT(y)[k]. So the DFT is periodic, with period N. You will need this result for one of the exercises below, which asks you to implement the Fast Fourier Transform (FFT).

As an aside, writing the DFT in the form of a summation provides an insight into how it works. If you review the diagram in Section 6.2, you'll see that each column of the synthesis matrix is a signal evaluated at a sequence of times. The analysis matrix is the (conjugate) transpose of the synthesis matrix, so each row is a signal evaluated at a sequence of times.

Therefore, each summation is the correlation of y with one of the signals in the array (see Section 5.5). That is, each element of the DFT is a correlation that quantifies the similarity of the wave array, y, and a complex exponential at a particular frequency.

Figure 7.3: DFT of a 500 Hz sawtooth signal sampled at 10 kHz.
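To make the summation concrete, here is a direct O(N²) implementation of it, along with a check of the periodicity result. This snippet is mine, not from the book, and it deliberately avoids NumPy:

```python
import cmath

def dft_sum(ys, k):
    """Evaluate DFT(y)[k] directly from the summation."""
    N = len(ys)
    return sum(ys[n] * cmath.exp(-2j * cmath.pi * n * k / N)
               for n in range(N))

ys = [0.6, 0.25, 0.1, 0.05]
N = len(ys)

# The DFT is periodic with period N: DFT(y)[k] == DFT(y)[k + N].
for k in range(N):
    assert abs(dft_sum(ys, k) - dft_sum(ys, k + N)) < 1e-12

# The k = 0 component is just the sum of the samples.
assert abs(dft_sum(ys, 0) - sum(ys)) < 1e-12

print([dft_sum(ys, k) for k in range(N)])
```

Each call to dft_sum is exactly the correlation described above: the samples multiplied point-by-point by a complex exponential at frequency k, then summed.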
The Spectrum class in thinkdsp is based on np.fft.rfft, which computes the "real DFT"; that is, it works with real signals. But the DFT as presented in this chapter is more general than that; it works with complex signals.

So what happens when we apply the "full DFT" to a real signal? Let's look at an example:

signal = thinkdsp.SawtoothSignal(freq=500)
wave = signal.make_wave(duration=0.1, framerate=10000)
hs = dft(wave.ys)
amps = np.absolute(hs)

This code makes a sawtooth wave with frequency 500 Hz, sampled at frame rate 10 kHz. hs contains the complex DFT of the wave; amps contains the amplitude at each frequency. But what frequency do these amplitudes correspond to? If we look at the body of dft, we see:

fs = np.arange(N)

It's tempting to think that these values are the right frequencies. The problem is that dft doesn't know the sampling rate. The DFT assumes that the duration of the wave is 1 time unit, so it thinks the sampling rate is N per time unit. In order to interpret the frequencies, we have to convert from these arbitrary time units back to seconds, like this:

fs = np.arange(N) * framerate / N

With this change, the range of frequencies is from 0 to the actual frame rate, 10 kHz. Now we can plot the spectrum:

thinkplot.plot(fs, amps)
thinkplot.config(xlabel='frequency (Hz)', ylabel='amplitude')

Figure 7.3 shows the amplitude of the signal for each frequency component from 0 to 10 kHz. The left half of the figure is what we should expect: the dominant frequency is at 500 Hz, with harmonics dropping off like 1/f.

But the right half of the figure is a surprise. Past 5000 Hz, the amplitude of the harmonics starts growing again, peaking at 9500 Hz. What's going on?

The answer: aliasing. Remember that with frame rate 10000 Hz, the folding frequency is 5000 Hz. As we saw in Section 2.3, a component at 5500 Hz is indistinguishable from a component at 4500 Hz. When we evaluate the DFT at 5500 Hz, we get the same value as at 4500 Hz.
Similarly, the value at 6000 Hz is the same as the one at 4000 Hz, and so on. The DFT of a real signal is symmetric around the folding frequency. Since there is no additional information past this point, we can save time by evaluating only the first half of the DFT, and that's exactly what np.fft.rfft does.

Solutions to these exercises are in chap07soln.ipynb.

The key to the FFT is the Danielson-Lanczos lemma:

DFT(y)[n] = DFT(e)[n] + exp(−2πi n / N) DFT(o)[n]

where DFT(y)[n] is the nth element of the DFT of y; e is a wave array containing the even elements of y, and o contains the odd elements of y.

This lemma suggests a recursive algorithm for the DFT:

1. Given a wave array, y, split it into its even elements, e, and its odd elements, o.
2. Compute the DFT of e and o by making recursive calls.
3. Compute DFT(y) for each value of n using the Danielson-Lanczos lemma.

For the base case of this recursion, you could wait until the length of y is 1. In that case, DFT(y) = y. Or if the length of y is sufficiently small, you could compute its DFT by matrix multiplication, possibly using a precomputed matrix.

Hint: I suggest you implement this algorithm incrementally by starting with a version that is not truly recursive. In Step 2, instead of making a recursive call, use dft, as defined in Section 7.7, or np.fft.fft. Get Step 3 working, and confirm that the results are consistent with the other implementations. Then add a base case and confirm that it works. Finally, replace Step 2 with recursive calls.

One more hint: Remember that the DFT is periodic; you might find np.tile useful.

You can read more about the FFT at.
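Here is one possible sketch of the recursive algorithm described above, in plain Python with cmath. This is my own illustration, not the solution from chap07soln.ipynb:

```python
import cmath

def dft_slow(ys):
    """Direct O(N^2) DFT, used here as a reference implementation."""
    N = len(ys)
    return [sum(ys[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N))
            for k in range(N)]

def fft(ys):
    """Recursive FFT via the Danielson-Lanczos lemma.
    len(ys) must be a power of 2."""
    N = len(ys)
    if N == 1:
        return list(ys)        # base case: DFT(y) = y
    e = fft(ys[0::2])          # DFT of the even elements
    o = fft(ys[1::2])          # DFT of the odd elements
    result = []
    for n in range(N):
        w = cmath.exp(-2j * cmath.pi * n / N)
        # e and o are periodic with period N/2, so index them mod N/2.
        result.append(e[n % (N // 2)] + w * o[n % (N // 2)])
    return result

ys = [0.6, 0.25, 0.1, 0.05]
assert all(abs(a - b) < 1e-12 for a, b in zip(fft(ys), dft_slow(ys)))
print(fft(ys))
```

The mod-N/2 indexing is where the periodicity result from the previous section earns its keep: the half-length DFTs only have N/2 distinct values, but the lemma needs them for all N values of n.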
http://greenteapress.com/thinkdsp/html/thinkdsp008.html
potential null pointer dereference in js/jsd/jsd_scpt.c
RESOLVED FIXED in mozilla9

People (Reporter: david.volgyes, Assigned: atulagrwl)
Attachments (1 attachment)

User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:5.0) Gecko/20100101 Firefox/5.0
Build ID: 20110622232440

Steps to reproduce:
cppcheck 1.49 found plenty of potential null pointer dereferences. This is one of them.

Actual results:
There is a check in line #533 (if LIVEWIRE is defined) for jsdscript != 0, so jsdscript definitely can be NULL. But a few lines below (line #543) there is a jsdscript->script dereference, where the jsdscript != 0 condition is not guaranteed.

#ifdef LIVEWIRE
    if( jsdscript && jsdscript->lwscript )
    {
        uintN newline;
        jsdlw_RawToProcessedLineNumber(jsdc, jsdscript, line, &newline);
        if( line != newline )
            line = newline;
    }
#endif

    call = JS_EnterCrossCompartmentCallScript(jsdc->dumbContext, jsdscript->script);

Expected results:
You should check the jsdscript pointer and handle it somehow if it is null.

Assignee: nobody → general
Component: General → JavaScript Engine
Product: Firefox → Core
QA Contact: general → general

A simple fix which null checks jsdscript and returns 0 in case it is null. timeless, please pass the review to another person if I have wrongly assigned it to you. I found your name by looking at the logs of the jsd_scpt.c file.

Status: UNCONFIRMED → NEW
Ever confirmed: true

Comment on attachment 556392 [details] [diff] [review]
v1 patch to null check jsdscript

I'll poach this review. nit: the spacing should match the (horrible) spacing in the surrounding context:

if( !jsdscript )

Either that, or you can reformat the whole file with reasonable spacing ("if (!jsdscript) {"), but that should be done in a separate bug or at least a separate patch. (So don't bother, unless you're feeling especially motivated, but be aware that this code will hopefully die soon anyway.)
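The patch under discussion adds an early-return null check before the dereference. A minimal stand-alone illustration of that guard pattern follows; the struct and function here are simplified stand-ins of my own, not the actual SpiderMonkey code:

```c
#include <stddef.h>

/* Simplified stand-in for the JSD script record. */
typedef struct {
    int script;  /* stands in for the real JSScript* member */
} JSDScript;

/* Return the script field, or 0 if the record pointer is null,
 * instead of dereferencing a possibly-null pointer. */
int jsd_get_script(JSDScript *jsdscript)
{
    if (!jsdscript)
        return 0;  /* guard: bail out before touching jsdscript->script */
    return jsdscript->script;
}
```

The important property is that the guard runs before every path that dereferences the pointer, which is exactly what the line #533 / line #543 mismatch in the bug report violated.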
Attachment #556392 - Flags: review+

Do I need to upload the updated patch? I would not attempt to reformat the complete file if this code is going to die soon. If this code is going to live, I can attempt to reformat it; that way I will learn the coding style too. I guess the problem we need to fix in this file is "if(condition1 || condition2) {".

Oh, you're going to check in? Anyway. OK, I fixed the nit and landed the patch. Thanks!

@Steve Thanks for the check-in. This is my first patch in Mozilla :).

Congratulations on the first patch making the product, then!

Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla9

Thanks a lot mak. This is just the start. I have already submitted 10 more patches and am waiting for them to get into mozilla-central :).

Assignee: general → atulagrwl
https://bugzilla.mozilla.org/show_bug.cgi?id=678988
From: Douglas Paul Gregor (gregod_at_[hidden])
Date: 2004-02-18 14:44:47

On Wed, 18 Feb 2004, Peter Dimov wrote:
> >> I would turn the question around and ask what's wrong with boost::fs
> >> (and when I see boost I think std). I've never understood the
> >> rationale behind long namespace names. Yes, I can alias filesystem
> >> to fs myself. But when all of your users alias filesystem to fs, and
> >> you find yourself doing the same in documentation, examples, tests,
> >> and in your own code, then perhaps it should have been named fs in
> >> the first place.
> >
> > I do all sorts of things in non-header source files that I would not dare
> > do in headers, and creating a name like "fs" is one of them :)

:)

Doug

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/02/61327.php
Handle Password and Email Changes in Your Rails API

This is part two of our series about authentication from scratch using JWT. The first part of the series can be found here.

In the previous tutorial, we saw a quick overview of JWT, different authentication mechanisms, and a set of basic authentication APIs, like registration, confirmation, and login. In this part, we will see the next set of APIs, such as password (reset and change) and email update.

This series is more than a JWT tutorial. Its main goal is to show how to build your own custom authentication solution from scratch, and JWT is just the method we opted to use. We will continue building on the sample application that we developed in the first part, which can be found here. If you wish to follow along, you can check out the part-i branch on the linked repository.

The API we cover here is a forgot password sequence. The flow generates a password reset token, along with an endpoint for the user to validate the token. That endpoint is called when a user clicks the password reset link sent to them via email. The last endpoint is for finally changing the password.

The forgot password endpoint generates a password reset token, saves it in the database, and sends an email to the user. This is similar to the confirmation instructions module we saw in the first part.

Let's begin by adding the columns necessary for the password reset functionality. Run:

rails g migration AddPasswordResetColumnsToUser

And in the generated migration file, add the following:

add_column :users, :reset_password_token, :string
add_column :users, :reset_password_sent_at, :datetime

These two columns are sufficient for the purpose. reset_password_token will store the token that we generate, and reset_password_sent_at tracks the time the token was sent, for expiry purposes.

Let's add the endpoints now.
Start by generating the password controller:

rails g controller passwords

Add the following routes to your config/routes.rb file:

post 'password/forgot', to: 'password#forgot'
post 'password/reset', to: 'password#reset'

Now, let's add the corresponding actions for the above-mentioned routes to controllers/password_controller.rb:

def forgot
  if params[:email].blank?
    return render json: {error: 'Email not present'}
  end

  user = User.find_by(email: params[:email].downcase)
  if user.present? && user.confirmed_at?
    user.generate_password_token!
    # SEND EMAIL HERE
    render json: {status: 'ok'}, status: :ok
  else
    render json: {error: ['Email address not found. Please check and try again.']}, status: :not_found
  end
end

def reset
  token = params[:token].to_s

  if token.blank?
    return render json: {error: 'Token not present'}
  end

  user = User.find_by(reset_password_token: token)
  if user.present? && user.password_token_valid?
    if user.reset_password!(params[:password])
      render json: {status: 'ok'}, status: :ok
    else
      render json: {error: user.errors.full_messages}, status: :unprocessable_entity
    end
  else
    render json: {error: ['Link not valid or expired. Try generating a new link.']}, status: :not_found
  end
end

Let's go over this quickly. In the forgot action, we get the email from the POST request and fetch the user. If the user is found and confirmed, we call generate_password_token! on the user model and send the email. The email sending part is skipped here, but make sure to include the user's reset_password_token in the email.

In the reset action, we get the token sent in the request, validate it via password_token_valid?, and reset the password via reset_password!. These methods are yet to be added to the user model, so let's do that now. We need three methods in models/user.rb: generate_password_token!, password_token_valid?, and reset_password!.
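The original code listing for these methods did not survive extraction, so the following is my reconstruction from the descriptions below. It is plain Ruby rather than a Rails model: generate_token, the 4-hour constant, and the attr_accessors are stand-ins for the article's ActiveRecord code, where these would call save and use Rails' 4.hours:

```ruby
require 'securerandom'
require 'time'

# Plain-Ruby stand-in for the User model methods described below.
class User
  attr_accessor :reset_password_token, :reset_password_sent_at, :password

  RESET_EXPIRY_SECONDS = 4 * 60 * 60  # stand-in for Rails' 4.hours

  def generate_token
    SecureRandom.hex(10)
  end

  # Store a fresh token and remember when it was issued.
  def generate_password_token!
    self.reset_password_token = generate_token
    self.reset_password_sent_at = Time.now.utc
  end

  # The token is valid for 4 hours after it was sent.
  def password_token_valid?
    return false if reset_password_sent_at.nil?
    (reset_password_sent_at + RESET_EXPIRY_SECONDS) > Time.now.utc
  end

  # Set the new password and invalidate the token.
  def reset_password!(password)
    self.reset_password_token = nil
    self.password = password
    true
  end
end

user = User.new
user.generate_password_token!
puts user.password_token_valid?      # prints true: token was just issued
user.reset_password!('new-secret')
puts user.reset_password_token.nil?  # prints true: token is cleared
```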
In the password_token_valid? method, verify the token is sent within the last 4 hours which is the reset password expiry. You are free to change it however you see fit. the reset_password! method updates new password of the user and nullifies the reset token. Reset password set is done. You can test it by sending a post request /passwords/forgot with the email in the body and /passwords/reset with a new password and token in the body. Let’s add the password update link now. Update Password To add the update password, add the route to your routes file: put 'password/update', to: 'password#update' Here is the corresponding action in PasswordsController: def update if !params[:password].present? render json: {error: 'Password not present'}, status: :unprocessable_entity return end if current_user.reset_password(params[:password]) render json: {status: 'ok'}, status: :ok else render json: {errors: current_user.errors.full_messages}, status: :unprocessable_entity end end The password update action is quite straightforward. Get the password from the parameter and save it to DB using the reset_password method that we declared before in the user model. You can now test the password update URL by sending a PUT request to /password/update with the new password in the body. Let’s move on to the next big functionality, Email Update. Email update allows a user to update their primary email on their account. Upon request, we should check if the email is already being used by any other user. If the email is OK, store it and send a verification mail to the new email. Upon confirmation, we’ll replace the primary email with the new email and clear out the token. So, there are two APIs in total: One to make an email update request, one to actually update the email. Let’s get started. Begin by doing a migration to add the necessary column to support this module. 
Generate a migration:

rails g migration AddUnconfirmedEmailToUser

Add the following content to it and run rake db:migrate:

add_column :users, :unconfirmed_email, :string

Update

Now, let's update the routes for these two endpoints. Add these lines to config/routes.rb:

resources :users, only: [:create, :update] do
  collection do
    post 'email_update'
  end
end

Add the corresponding actions to UsersController:

def update
  if current_user.update_new_email!(@new_email)
    # SEND EMAIL HERE
    render json: { status: 'Email Confirmation has been sent to your new Email.' }, status: :ok
  else
    render json: { errors: current_user.errors.values.flatten.compact }, status: :bad_request
  end
end

Also add a before_action to do the validations on the new email. Add this at the top of the UsersController class, with the method marked private:

class UsersController < ApplicationController
  before_action :validate_email_update, only: :update

  ...

  private

  def validate_email_update
    @new_email = params[:email].to_s.downcase

    if @new_email.blank?
      return render json: { status: 'Email cannot be blank' }, status: :bad_request
    end

    if @new_email == current_user.email
      return render json: { status: 'Current Email and New email cannot be the same' }, status: :bad_request
    end

    if User.email_used?(@new_email)
      return render json: { error: 'Email is already in use.' }, status: :unprocessable_entity
    end
  end
end

Here we check whether the requested email is already in use, and whether the email is the same as what the account already has. If everything is fine, we call update_new_email! and send the email. Note that the email has to be sent to the user's unconfirmed_email instead of their primary one.

We have used a couple of new model methods here, so let's go define them. In models/user.rb, add the below functions:

def update_new_email!(email)
  self.unconfirmed_email = email
  self.generate_confirmation_instructions
  save
end

def self.email_used?(email)
  existing_user = find_by("email = ?", email)
  if existing_user.present?
    return true
  else
    waiting_for_confirmation = find_by("unconfirmed_email = ?", email)
    return waiting_for_confirmation.present? && waiting_for_confirmation.confirmation_token_valid?
  end
end

Here, in email_used?, apart from checking whether the email is used as the primary email on any account, we also check whether it is pending confirmation as an updated email. This part can be removed depending on your needs. The confirmation_token_valid? method was added in the first part of this tutorial.

You can now test this route by sending a POST request to /users/update with the new email in the body.

Now, let's add the action for the email update endpoint. Add this code to UsersController:

def email_update
  token = params[:token].to_s

  user = User.find_by(confirmation_token: token)
  if !user || !user.confirmation_token_valid?
    render json: {error: 'The email link seems to be invalid / expired. Try requesting for a new one.'}, status: :not_found
  else
    user.update_email!
    render json: {status: 'Email updated successfully'}, status: :ok
  end
end

This action is quite straightforward. We fetch the user by the token and see whether the token is valid. If so, we update the email and respond.

Let's add the update_email! method to the user model (it needs a different name from the update_new_email!(email) method we defined earlier, which only stores the pending address):

def update_email!
  self.email = self.unconfirmed_email
  self.unconfirmed_email = nil
  self.mark_as_confirmed!
end

Here we replace the primary email with the updated email and set the unconfirmed_email field to nil. We also call mark_as_confirmed!, which we added in the previous part of the series; it nullifies the confirmation token and sets the confirmed_at value. The email update endpoint is now up as well. Try sending a POST request to /users/email_update with the email token we generated in the previous section in the request body.

Conclusion

With that, we have arrived at the conclusion of our two-part tutorial on authentication from scratch for a Rails API. To recap, we have implemented the equivalents of Devise's authentication, confirmation, password, and reconfirmation modules. Not too shabby.
The code used in this tutorial is available here. The code this tutorial starts from is available in the part-i branch. I hope this tutorial helped you understand authentication and roll out your own authentication system. Thanks for reading.
https://www.sitepoint.com/handle-password-and-email-changes-in-your-rails-api/?utm_source=sitepoint&utm_medium=relatedsidebar&utm_term=ruby
Introduction to Pandas Read File

Pandas is an amazing and adaptable Python package that lets you work with labeled and time-series data, and it also helps with plotting data and computing statistics over it. There are two common ways of reading and writing tabular files: as a CSV file (Comma Separated Values) and as an Excel file. We can use them to save the data and labels from Pandas objects to a file and load them back later as Pandas Series or DataFrame instances. The greater part of the datasets you work with are DataFrames. A DataFrame is a 2-dimensional labeled data structure with an index for rows and columns, where every cell can store a value of any type. Fundamentally, DataFrames are dictionary-like structures built on top of NumPy arrays.

Syntax:

The syntax for reading a file in Pandas uses a function called read_csv(). This function lets a program read data that was previously created and saved, and load it to produce output. The pandas library is one of the open-source Python libraries that provides high-performance, convenient data structures and data analysis tools and methods for Python programming.

How to Read File Using Various Methods in Pandas?

Now we see various examples of how to save and read files by executing programs in Python Pandas. We first have to create and save a CSV file in Excel in order to import the data into a Python script using Pandas. Pandas is an open-source library built on top of the NumPy library. It allows the user to perform quick analysis, data cleaning, and data preparation efficiently.

Example #1

Saving the dataframe as a CSV file in the excel sheet and reading it back in a shell.
Code:

```python
import pandas as pd

company = ["Google", "Microsoft", "Apple", "Tata"]
ceo = ["Sundar Pichai", "Satya Nadella", "Tim Cook", "Ratan Tata"]
score = [80, 60, 70, 90]
# map column names to the lists (the original snippet built a set here,
# which would lose the data; DataFrame expects a dict of columns)
dictionary = {'Company': company, 'CEO': ceo, 'Score': score}
df = pd.DataFrame(dictionary)
df.to_csv(r'C:\Users\Admin\Desktop\file1.csv', index=False)
```

Output:

In the above program, we first import pandas, create a dataframe, and build a dictionary of lists holding what has to be written to the new file. This program executes and creates an excel sheet as file1.csv, and our dataframe will be visible in the system's Excel. Now we need to read the data in file1.csv and produce the output in our Python shell.

```python
import csv

with open('file1.csv', mode='r') as file:
    csvFile = csv.reader(file)
    for data in csvFile:
        print(data)
```

Output:

Hence, here we see that the open() function opens the file; we import csv in the shell, run the code, and print the data. First, the CSV file is opened using the open() method in 'r' mode (which specifies read mode while opening a file) and returns a file object; it is then read using the reader() function of the csv module, which returns a reader object that iterates through the rows of the given CSV file.

Example #2

Implementing a CSV file with the DictReader class.

Code:

```python
import csv

with open('file1.csv', mode='r') as file:
    csvFile = csv.DictReader(file)
    for data in csvFile:
        print(data)
```

Output:

Here, we first open the CSV file in the Python shell and then read the CSV created in the previous example. As in the previous method, the CSV file is first opened using the open() method, then read using the DictReader class of the csv module, which works like a regular reader but maps the data in the CSV file into dictionaries. The very first line of the file contains the dictionary keys.

Example #3

Implementing a CSV read file as a proper dataframe using the pandas read_csv() function.
Code:

```python
import pandas

csvfile = pandas.read_csv('file1.csv')
print(csvfile)
```

Output:

It is very simple to read a CSV file using pandas library functions. Here the read_csv() method of the pandas library is used to read data from CSV files. In the above program, the read_csv() method reads the file1.csv file and maps its data into a 2D table (a DataFrame).

Conclusion

We have now figured out how to save the data and labels from Pandas DataFrame objects to different kinds of files, and likewise how to load the data from files and create DataFrame objects. We have used the Pandas read_csv() and .to_csv() methods to read and write CSV files, and similar methods exist for Excel files. These functions are very helpful and widely used: they let you save or load your data in a single function or method call. Hence, Pandas plays a significant role in reading files in Python, and it is important to understand the concepts of the Pandas library, install the packages in a shell or conda environment, and work with the values as CSV and Excel files.

Recommended Articles

This is a guide to Pandas Read File. Here we also discuss the introduction and how to read files using various methods in pandas, along with different examples and their code implementation. You may also have a look at the following articles to learn more –
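The same csv-module patterns can be exercised without touching the filesystem by reading from an in-memory buffer — a self-contained sketch using made-up rows that mirror the file1.csv built in Example #1:

```python
import csv
import io

# Hypothetical data mirroring the file1.csv example above
csv_text = "Company,CEO,Score\nGoogle,Sundar Pichai,80\nMicrosoft,Satya Nadella,60\n"

# DictReader treats the first line as the dictionary keys
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(len(rows))          # 2
print(rows[0]["CEO"])     # Sundar Pichai
print(rows[1]["Score"])   # 60 (note: DictReader returns strings, not ints)
```

This is handy for quick experiments and tests, since the behavior is identical to reading from a file opened with open().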
https://www.educba.com/pandas-read-file/?source=leftnav
Opened 5 years ago Closed 5 years ago Last modified 5 years ago #13072 closed enhancement (fixed) Implementation of PartitionTuple + some minor fixes to partition.py

Description (last modified by )

This patch implements the following classes:

- PartitionTuple - returns a tuple of partitions
- PartitionTuples - factory class for all tuples of partitions
- !PartitionTuples_level - class of all tuples of partitions of a fixed level
- !PartitionTuples_size - class of all tuples of partitions of a fixed size
- !PartitionTuples_level_size - class of all tuples of partitions of a fixed level and size.

The first three of these are infinite enumerated classes whereas the last is finite. They all have iterators. The idea is to implement a fully functional class for PartitionTuples as I currently need this together with a class for tuples of (standard) tableaux (coming soon). PartitionTuples of level 1 are in natural bijection with Partitions, so when given a 1-tuple of partitions, or a partition, PartitionTuples() returns the corresponding Partition. This works almost seamlessly, making it possible to almost ignore the distinction between Partitions() and PartitionTuples(). One exception is that the expected behaviour of for component in mu: do X is different for partitions and partition tuples (in the first case, you expect to loop over the parts of the partition and in the second over the components of the tuple). To get around this both classes now have a components() method so that you can uniformly write for nu in mu.components(): do X

Improvements welcome!

In terms of implementation, for my use of these objects the level is more intrinsic than the size so I have set the syntax for the PartitionTuples classes as PartitionTuples(level=l, n=n) where level and n are both optional named arguments BUT level is specified first. Previously, n was given first and level second.
I think that it makes much more sense this way around, but if people feel really strongly about this I will change it back. Previously, level was just called k, which is a fairly random variable name, whereas level makes sense in terms of categorification and higher level Fock spaces. (Replacing n with size would also be sensible but I didn't go there.)

Deprecations of old functions: Finally, in addition to these new classes I have removed a bunch of functions which were deprecated years ago and deprecated some more functions, as discussed on sage-combinat. I also reinstated the beta_numbers() methods which were removed in the last patch to partition.py (this patch said that beta_numbers and frobenius_coordinates are identical objects, but they are actually different). For discussion about the functions being deprecated please see the following two discussions on sage-combinat:

Below is a summary of the above, listed in order of what I think is decreasing controversy.

- A=sage.combinat.partition.number_of_partitions() is marked for deprecation in favour of B=sage.combinat.partitions.number_of_partitions(), which is what function A() calls most of the time. As agreed above, number_of_partitions() will stay in the global name space, but this made the deprecation somewhat fiddly as I did not want to deprecate number_of_partitions() for "normal use" because from the user perspective this function will not change. Instead, I have deprecated the individual options of number_of_partitions() so deprecation warnings are only generated when A() does NOT call B(). In the global namespace, number_of_partitions still points to A(). When the functions which are marked for deprecation below are removed, number_of_partitions() should be changed to point to B() and A() should be changed into a deprecated_function_alias to B(). See the patch for more details.
- For use in Partitions().random_element() the function number_of_partitions() was cached.
This cached function was almost never used so, assuming that caching this function is a good idea, I decided to always use a cached version of number_of_partitions() inside partition.py. As shown in the comments below, this leads to a dramatic speed-up. This probably should be revisited when Fredrik Johansson's patch #13199, which uses FLINT to implement a faster version of number_of_partitions, is merged into sage.

- The two functions cyclic_permutations_of_partition and cyclic_permutations_of_partition_iterator are deprecated in sage.combinat.partition; they have been moved to sage.combinat.set_partition and renamed ...._of_set_partition... As far as I can tell these functions are never used but, in any case, they are methods on set partitions rather than partitions. Nonetheless, they need to be deprecated from the global name space.
- The following functions were marked for deprecation several years ago so they have been removed from sage.combinat.partition.py: partitions_list, number_of_partitions_list, partitions_restricted, number_of_partitions_restricted.
- For the reasons given in #5478, RestrictedPartitions was also slated for removal but it was decided not to deprecate this class until Partitions() is able to process the appropriate combinations of keyword arguments. See #12278 and the comment by John Palmieri below for more details. Nicolas has suggested that one way of addressing this may be to refactor the partitions code so that it uses Florent's enumerated sets factories #10194.
- The following functions now give deprecation warnings and they are marked for removal from the global name space: partitions_set, number_of_partitions_set, ordered_partitions, number_of_ordered_partitions, partitions, ferrers_diagram, partitions_greatest, partitions_greatest_eq, partitions_tuples, number_of_partitions_tuples, partition_power.

In all cases, these functions are deprecated in favour of (methods in) parent classes.
Apply: trac_13072-tuples-of-partitions_am.patch Attachments (1) Change History (56) comment:1 Changed 5 years ago by comment:2 Changed 5 years ago by - Work issues Some category tests currently fails. deleted comment:3 Changed 5 years ago by comment:4 Changed 5 years ago by comment:5 Changed 5 years ago by Apply trac_13072-tuples-of-partitions_am.patch comment:6 Changed 5 years ago by comment:7 Changed 5 years ago by New version of patch which creates a new file partition_tuple.py which contains all of the partition_tuple code. comment:8 Changed 5 years ago by For the patchbot: Apply trac_13072-tuples-of-partitions_am.patch comment:9 Changed 5 years ago by - Reviewers set to Travis Scrimshaw comment:10 Changed 5 years ago by comment:11 Changed 5 years ago by For the patchbot: Apply trac_13072-tuples-of-partitions_am.patch comment:12 Changed 5 years ago by comment:13 Changed 5 years ago by comment:14 Changed 5 years ago by The following timings show that there is a dramatic speed-up when number_of_partitions is cached: With caching: sage: %timeit [Partitions(n).random_element() for n in range(100)] 25 loops, best of 3: 25 ms per loop sage: %timeit [Partitions(n).random_element() for n in range(100)] 25 loops, best of 3: 24.6 ms per loop sage: %timeit [Partitions(n).random_element() for n in range(100)] 25 loops, best of 3: 25.4 ms per loop Without caching: sage: %timeit [Partitions(n).random_element() for n in range(100)] 5 loops, best of 3: 1.23 s per loop sage: %timeit [Partitions(n).random_element() for n in range(100)] 5 loops, best of 3: 1.23 s per loop sage: %timeit [Partitions(n).random_element() for n in range(100)] 5 loops, best of 3: 1.26 s per loop comment:15 Changed 5 years ago by comment:16 follow-up: ↓ 24 Changed 5 years ago by. 
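The speed-up in comment 14 comes purely from memoization: the expensive count is computed once per n and then served from a cache. The effect can be sketched in plain Python with functools.lru_cache (illustrative only — Sage's cached_function plays this role in the patch, and the recurrence below is a naive partition counter, not Sage's implementation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def number_of_partitions(n):
    # Naive count of integer partitions of n via the recurrence
    # p(n, k) = p(n - k, k) + p(n, k - 1), parts bounded above by k.
    @lru_cache(maxsize=None)
    def p(n, k):
        if n == 0:
            return 1
        if n < 0 or k == 0:
            return 0
        return p(n - k, k) + p(n, k - 1)
    return p(n, n)

print(number_of_partitions(10))   # 42
print(number_of_partitions(100))  # 190569292
```

Repeated calls with the same n, as in the random_element() timing loop, hit the cache and return immediately, which is exactly the difference between the "with caching" and "without caching" timings above.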
comment:17 Changed 5 years ago by

comment:18 Changed 5 years ago by

comment:19 Changed 5 years ago by For the patchbot: Apply trac_13072-tuples-of-partitions_am.patch

comment:20 Changed 5 years ago by

comment:21 Changed 5 years ago by

comment:22 Changed 5 years ago by

comment:23 Changed 5 years ago by For the patchbot: Apply trac_13072-tuples-of-partitions_am.patch

comment:24 in reply to: ↑ 16 Changed 5 years ago by Replying to jhpalmieri: This was discussed in detail on sage-combinat, eventually leading to point #5 in the blurb for this ticket.

comment:25 Changed 5 years ago by - Dependencies changed from #9265 to #9265, #11446 - Status changed from needs_review to positive_review Looks good. I've added #11446 as a dependency since this is rebased with respect to #11446 and its dependency #11442 (both slightly modify partition.py as well).

comment:26 follow-up: ↓ 27 Changed 5 years ago by - Status changed from positive_review to needs_work Hi Andrew and Travis, Thanks both for your work. I hate to switch back to needs work, but looking at the compiled doc I see various small problems which should be fixed. Here are some of them:

- Don't indent bulleted lists. It adds an extra unneeded indentation (see e.g. REFERENCE vs AUTHORS in the module class);
- There is a proper markup for references (see the developer guide);
- In the doc of the class PartitionTuple, there is a missing indentation between INPUT and EXAMPLES;
- In the doc of Garnir_tableau please write ``self``, ``cell``, ``FALSE``... (verbatim set-up) but don't forget single back-quotes for `(k,a+1,c)` (latex set-up) and similar. The hyperlinks in SEE ALSO are missing;
- There is a proper markup for linking to a trac ticket, e.g. :trac:`13123`;
- There is a typo in "The Garnir tableau are the "first" non-standard tableaux which arise when you at by simple transpositions."

Sorry for being picky about the doc. If it wasn't so late here in France, I would have written a review patch.
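Florent's markup points translate into a docstring skeleton like the following (a hypothetical method, not code from the patch, showing the non-indented bullet list, the ``verbatim`` vs. `latex` markup split, and the :trac: role):

```python
def hook_length(k, r, c):
    r"""
    Return the hook length of the cell `(k,r,c)` of ``self``.

    INPUT:

    - ``k`` -- the component of the partition tuple
    - ``r`` -- the row index of the cell
    - ``c`` -- the column index of the cell

    EXAMPLES::

        sage: PartitionTuple([[2,1],[1]]).hook_length(0,0,0)
        3

    .. SEEALSO::

        :meth:`cells` and :trac:`13072`
    """
    raise NotImplementedError  # illustration of the markup only
```

Double back-quotes render as verbatim code, single back-quotes render as LaTeX math, and the bullet list under INPUT sits flush with the surrounding text rather than indented.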
For Travis: please check the compiled doc when you are reviewing a patch. You can refer to. Cheers, Florent

comment:27 in reply to: ↑ 26 ; follow-up: ↓ 29 Changed 5 years ago by Hi Florent, Thanks for the specific comments about what needs to be fixed. I'll go through and fix these. Actually, Travis has already spent a large amount of time fixing my doc strings, so there would be many more problems without the large amount of time that he has already spent on this. Andrew

comment:28 Changed 5 years ago by

comment:29 in reply to: ↑ 27 ; follow-up: ↓ 30 Changed 5 years ago by Hi Andrew, Replying to andrew.mathas: Actually, Travis has already spent a large amount of time fixing my doc strings, so there would be many more problems without the large amount of time that he has already spent on this. Sorry if you feel I'm being rude. It wasn't my intent. However, I had no chance to know that, since there is no record of his work on this ticket. Even when the review is done offline, it is good to keep a (not necessarily detailed) trace of what has been done on the ticket, if only to give proper credit. Cheers, Florent PS: Unfortunately, I've no time to answer your comment on #13074 right now (running to catch a train + teaching).

comment:30 in reply to: ↑ 29 Changed 5 years ago by Hi Florent, No, no, that's OK. I just wanted to acknowledge that Travis has done a lot of work on the documentation and that any errors that remained were mine and mine alone. I appreciate your looking at the patch and trying to improve it. I have mostly sorted out the doc string problems in these two patches but there are still one or two that remain. If there is any trick to working out where the errors appear in the code from the ReST error messages, please let me know when you have the time, as currently I am doing pseudo-random searches.
Thanks again, Andrew

comment:31 Changed 5 years ago by I think that all of the doc string issues really are fixed now.

comment:32 Changed 5 years ago by Florent, Thank you for catching these docstring problems. I appreciate you being picky about the docs. I didn't know about the indentation of the bullet lists, and I did miss that indentation of the EXAMPLES:: block in the compiled doc. Andrew, A few more minor issues.

- I believe this line (line 263 in partition_tuple.py) has a comma misplaced: When these algebras are not semisimple partition, tuples index...
- In partition_tuple.hook_length() it should be coordinates of the form `(k,r,c)`
- In partition.random_element(), the inputs uniform and Plancherel should be indented one more since they are the types of inputs for measure.

comment:33 Changed 5 years ago by Hi Travis, I have uploaded a new version of the patch which fixes the errant comma and the indentation in random_element, but I think that the ``(k,r,c)`` in the hook_length function is correct because k, r and c are arguments to this function. Of course, it would be more correct to write (``k``,``r``,``c``) but ReST complains about this. I guess that (``k``, ``r``, ``c``) might be OK, although I suspect that it will want a few more spaces like ( ``k`` , ``r`` , ``c`` ) to be legal syntax. I prefer using ``(k,r,c)`` as these three integers together comprise a cell which is really a "single" variable... Let me know if you disagree. Cheers, Andrew

comment:34 Changed 5 years ago by Hey Andrew, Yes, it should be ``(k,r,c)`` but right now it is ``(r,c)``. Although on further thought, it might be better to move the note about 0-based indexing (and adding ``k``) and the python *-operator to the header. That way it covers all of the similar functions. Thanks, Travis

comment:35 Changed 5 years ago by Thanks Travis, you are right of course. I've updated the patch.
Andrew -- For the patchbot: Apply trac_13072-tuples-of-partitions_am.patch comment:36 Changed 5 years ago by - Status changed from needs_work to positive_review Thanks Andrew! Looks good. I like the notes. comment:37 Changed 5 years ago by - Milestone changed from sage-5.4 to sage-5.5 comment:38 Changed 5 years ago by comment:39 follow-up: ↓ 41 Changed 5 years ago by - Status changed from positive_review to needs_work sage -t -force_lib devel/sage/sage/structure/sage_object.pyx ********************************************************************** File "/release/merger/sage-5.5.beta0/devel/sage-main/sage/structure/sage_object.pyx", line 1114: sage: sage.structure.sage_object.unpickle_all() # (4s on sage.math, 2011) Expected: doctest:... DeprecationWarning: This class is replaced by Matrix_modn_dense_float/Matrix_modn_dense_double. See for details. Successfully unpickled ... objects. Failed to unpickle 0 objects. Got: * unpickle failure: load('/release/merger/sage-5.5.beta0/home/.sage/tmp/sage.math.washington.edu/5737/dir_i3wI40//pickle_jar/_class__sage_combinat_partition_PartitionTuples_nk__.sobj') doctest:1172: DeprecationWarning: This class is replaced by Matrix_modn_dense_float/Matrix_modn_dense_double. See for details. Failed: _class__sage_combinat_partition_PartitionTuples_nk__.sobj Successfully unpickled 593 objects. Failed to unpickle 1 objects. 
********************************************************************** comment:40 Changed 5 years ago by sage -t --long -force_lib devel/sage/sage/combinat/partition.py ********************************************************************** File "/release/merger/sage-5.5.beta0/devel/sage-main/sage/combinat/partition.py", line 434: sage: all(test2(core,tuple(mus)) # long time (5s on sage.math, 2011) for k in range(Integer(1),Integer(10)) for n_core in range(Integer(10)-k) for core in Partitions(n_core) if core.core(k) == core for n_mus in range(Integer(10)-k) for mus in PartitionTuples(n_mus,k)) Exception raised: Traceback (most recent call last): File "/release/merger/sage-5.5.beta0/local/bin/ncadoctest.py", line 1231, in run_one_test self.run_one_example(test, example, filename, compileflags) File "/release/merger/sage-5.5.beta0/local/bin/sagedoctest.py", line 38, in run_one_example OrigDocTestRunner.run_one_example(self, test, example, filename, compileflags) File "/release/merger/sage-5.5.beta0/local/bin/ncadoctest.py", line 1172, in run_one_example compileflags, 1) in test.globs File "<doctest __main__.example_5[6]>", line 2, in <module> for k in range(Integer(1),Integer(10)) File "<doctest __main__.example_5[6]>", line 7, in <genexpr> for mus in PartitionTuples(n_mus,k)) File "classcall_metaclass.pyx", line 279, in sage.misc.classcall_metaclass.ClasscallMetaclass.__call__ (sage/misc/classcall_metaclas s.c:946) File "/release/merger/sage-5.5.beta0/local/lib/python/site-packages/sage/combinat/partition_tuple.py", line 1059, in __classcall_pri vate__ raise ValueError, 'the level must be a positive integer' ValueError: the level must be a positive integer ********************************************************************** comment:41 in reply to: ↑ 39 ; follow-up: ↓ 42 Changed 5 years ago by Hi Jeroen, Perhaps I am confused, but most of these pickles shouldn't exist any more as they should have been removed by the new pickle jar attached to #9265. 
Specifically, the pickles

_class__sage_combinat_skew_tableau_SemistandardSkewTableaux_n__.sobj
_class__sage_combinat_skew_tableau_SemistandardSkewTableaux_nmu__.sobj
_class__sage_combinat_skew_tableau_SemistandardSkewTableaux_p__.sobj
_class__sage_combinat_skew_tableau_SemistandardSkewTableaux_pmu__.sobj
_class__sage_combinat_skew_tableau_StandardSkewTableaux_n__.sobj
_class__sage_combinat_tableau_SemistandardTableaux_n__.sobj
_class__sage_combinat_tableau_SemistandardTableaux_nmu__.sobj
_class__sage_combinat_tableau_SemistandardTableaux_p__.sobj
_class__sage_combinat_tableau_SemistandardTableaux_pmu__.sobj
_class__sage_combinat_tableau_StandardTableaux_n__.sobj
_class__sage_combinat_tableau_StandardTableaux_partition__.sobj
_class__sage_combinat_tableau_Tableau_class__.sobj
_class__sage_combinat_tableau_Tableaux_n__.sobj

shouldn't be there, having been replaced with new improved pickles with slightly more informative names (for example, _n_ --> _size_, _p_ --> _shape_ etc.). The pickle _class__sage_combinat_partition_PartitionTuples_nk__.sobj I agree is mine to fix, but I am also not entirely convinced that the first three pickles are caused by this patch, that is the following pickles:

_class__sage_combinat_crystals_affine_AffineCrystalFromClassicalAndPromotion_with_category_element_class__.sobj
_class__sage_combinat_crystals_tensor_product_CrystalOfTableaux_with_category_element_class__.sobj
_class__sage_combinat_crystals_tensor_product_TensorProductOfCrystalsWithGenerators_with_category__.sobj

as I think that I might have also created new pickles for these in #9265. I am, of course, happy to rebuild them just in case. Can you please confirm that the pickle_jar was updated as per the attachment for #9265.
The mistake is quite probably mine as I assumed that the whole pickle jar would be replaced, whereas if you just added in the new pickles then you would not have been aware that some of the old pickles needed to be deleted (although presumably it would not have been possible to unpickle them...??). If there is any documentation on updating the pickle jar please let me know. Please advise. Cheers, Andrew

comment:42 in reply to: ↑ 41 Changed 5 years ago by Replying to andrew.mathas: Can you please confirm that the pickle_jar was updated as per the attachment for #9265. Yes, certainly it was updated.

comment:43 follow-up: ↓ 44 Changed 5 years ago by I don't quite understand what you're saying, since my failed test is only complaining about one pickle, namely _class__sage_combinat_partition_PartitionTuples_nk__.sobj

comment:44 in reply to: ↑ 43 Changed 5 years ago by Sorry, my mistake: I was testing on 5.3. A.

comment:45 follow-up: ↓ 46 Changed 5 years ago by OK, I have fixed the long-test problem. With the pickle, it seems to me that the only way to fix the problem is to replace the bad PartitionTuples_nk pickle in the pickle jar with a good one (and leave the other pickles untouched), as the underlying class has changed too much (vbraun posted a comment on #9265 suggesting that I should use register_unpickle_override instead, but I tried experimenting with this and it doesn't seem to work). Jeroen, can you please confirm that you are happy for me to do this.

comment:46 in reply to: ↑ 45 ; follow-up: ↓ 47.

comment:47 in reply to: ↑ 46 ; follow-up: ↓ 48. Just my 2 cents: at this point, partition tuples are a rather peripheral feature. If anyone has saved some pickle containing one, most likely he is in the Sage-Combinat group. Well, most likely it's Andrew actually. So I vote for not wasting Andrew's time and simply dropping backward compatibility in that particular situation.
I am not taking much risk by volunteering to help whoever might have trouble with such an old pickle :-) Of course, the official procedure would be to run a poll; if you insist, we can do that on Sage-Combinat devel.

comment:48 in reply to: ↑ 47 Changed 5 years ago by Nicolas, I started a thread on sage-devel about the pickle jar.

comment:49 Changed 5 years ago by - Status changed from needs_work to needs_review Unpickling works now.

comment:50 Changed 5 years ago by - Status changed from needs_review to positive_review Everything looks good to me (double checked the pickle jar and sage_object.pyx).

comment:51 follow-up: ↓ 52 Changed 5 years ago by - Status changed from positive_review to needs_work Removing all whitespace everywhere is a bad idea, don't do it as it will lead to merge conflicts (unless it was approved by the sage-combinat group, then I take back my words).

comment:52 in reply to: ↑ 51 Changed 5 years ago by Removing all whitespace everywhere is a bad idea, don't do it as it will lead to merge conflicts (unless it was approved by the sage-combinat group, then I take back my words). Jeroen, I removed whitespace added by the patch and then checked that the sage-combinat queue applied cleanly before pushing the patch both to the sage-combinat queue and back to trac. I did not remove all whitespace from the source files affected by this patch as, I agree, this would probably cause havoc with the queue. As all of the patches in the sage-combinat queue still apply cleanly, I am putting this back to a positive review.

comment:53 Changed 5 years ago by - Status changed from needs_work to positive_review

comment:54 Changed 5 years ago by - Merged in set to sage-5.5.beta1 - Resolution set to fixed - Status changed from positive_review to closed

comment:55 Changed 5 years ago by The new patch needs a proper commit message.

Changed 5 years ago by Adding proper commit message. Should now apply cleanly to sage 5.2
https://trac.sagemath.org/ticket/13072
Support Vector Machines have become one of the state-of-the-art machine learning models for many tasks, with excellent results in many practical applications. One of the greatest advantages of Support Vector Machines is that they remain effective even in very high-dimensional spaces. Support Vector Machines (SVM) are supervised learning methods that try to obtain separating hyperplanes in an optimal way, by selecting the ones that pass through the widest possible gaps between instances of different classes. New instances will be classified as belonging to a certain category based on which side of the surfaces they fall on. To mention some disadvantages, SVM models can be very computation-intensive to train, and they do not return a numerical indicator of how confident they are about a prediction.

We will apply SVM to image recognition, a classic problem with a very high-dimensional space. Let us start by importing the data and printing its description:

```python
import sklearn as sk
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import fetch_olivetti_faces

faces = fetch_olivetti_faces()
print(faces.DESCR)
```

downloading Olivetti faces from to C:\Users\piush\scikit_learn_data.

```python
print(faces.keys())
print(faces.images.shape)
print(faces.data.shape)
print(faces.target.shape)
```

dict_keys(['data', 'images', 'target', 'DESCR'])
(400, 64, 64)
(400, 4096)
(400,)

The dataset contains 400 images of 40 different persons. The photos were taken with different light conditions and facial expressions (including open/closed eyes, smiling/not smiling, and with glasses/no glasses). Looking at the content of the faces object, we get the following properties: images, data, and target. images contains the 400 images represented as 64 x 64 pixel matrices. data contains the same 400 images but as arrays of 4096 pixels. target is, as expected, an array with the target classes, ranging from 0 to 39.

Do we need to normalize? Evenly scaled data is important for SVM to obtain good results.
print (np.max(faces.data)) print (np.min(faces.data)) print (np.mean(faces.data)) 1.0 0.0 0.547043 Therefore, we do not have to normalize the data. Plot the first 20 images.We can see faces from two persons. We have 40 individuals with 10 different images each. def print_faces(images, target, top_n): # set up the figure size in inches fig = plt.figure(figsize=(12, 12)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05) for i in range(top_n): # plot the images in a matrix of 20x20 p = fig.add_subplot(20, 20, i + 1, xticks=[], yticks=[]) p.imshow(images[i], cmap=plt.cm.bone) # label the image with the target value p.text(0, 14, str(target[i])) p.text(0, 60, str(i)) print_faces(faces.images, faces.target, 20) In [11]: from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split( faces.data, faces.target, test_size=0.25, random_state=0) A function to evaluate K-fold cross-validation. In [20]: from sklearn.cross_validation import cross_val_score, KFold from scipy.stats import sem def evaluate_cross_validation(clf, X, y, K): # create a k-fold croos validation iterator))) Cross-validation with five folds In [21]: In [22]:)) In [23]:. In [24]: #): In [25]: def create_target(segments): # create a new y array of target size initialized with zeros y = np.zeros(faces.target.shape[0]) # put 1 in the specified segments for (start, end) in segments: y[start:end + 1] = 1 return y In [29]: target_glasses = create_target(glasses) Perform the training/testing split In [30]: X_train, X_test, y_train, y_test = train_test_split( faces.data, target_glasses, test_size=0.25, random_state=0) In [31]: #a new SVC classifier svc_2 = SVC(kernel='linear') In [32]: #check the performance with cross-validation evaluate_cross_validation(svc_2, X_train, y_train, 5) [ 1. 
0.95 0.98333333 0.98333333 0.93333333]
Mean score: 0.970 (+/-0.012)

In [33]:

In [34]:
X_test = faces.data[30:40]
y_test = target_glasses[30:40]
print(y_test.shape[0])
select = np.ones(target_glasses.shape[0])
select[30:40] = 0
X_train = faces.data[select == 1]
y_train = target_glasses[select == 1]
print(y_train.shape[0])

10
390

In [35]:
svc_3 = SVC(kernel='linear')

In [36]:

In [38]:
y_pred = svc_3.predict(X_test)
eval_faces = [np.reshape(a, (64, 64)) for a in X_test]
print_faces(eval_faces, y_pred, 10)
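Plotting the predicted labels lets us eyeball the result; to summarise it numerically we can compare y_pred against the true labels. A minimal NumPy sketch - the label arrays below are illustrative stand-ins, not the article's actual outputs (scikit-learn's metrics module provides accuracy_score and confusion_matrix for the same job):

```python
import numpy as np

# Illustrative stand-ins: in the article, y_test and y_pred come from
# svc_3 evaluated on the 10 held-out faces.
y_test = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 1])

# Accuracy: fraction of matching labels
accuracy = (y_test == y_pred).mean()
print("accuracy:", accuracy)

# 2x2 confusion matrix: rows are true classes, columns are predictions
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_test, y_pred):
    cm[t, p] += 1
print(cm)
```

With these stand-in labels, 9 of the 10 predictions match, and the confusion matrix shows one "glasses" face misclassified as "no glasses".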
https://adataanalyst.com/scikit-learn/support-vector-machine-scikit-learn-part-1/
Ticket #15319 (closed defect: fixed)

Guest OS failed to initial graphical desktop with vbox 5.0.18 update -> fixed in release 5.0.22 and higher, comment 14 fixed in releases higher than 5.1.10

Description

Guest is Debian 7.10 Wheezy, with kernel 3.16 (from official backports). Desktop environment: gdm3 + gnome 3.4 (fallback/classic).

Today, after the vbox update to 5.0.18, this Linux guest boots correctly but fails to start the X graphical desktop (attached error log). I have another guest OS (Debian 8) that is not affected by vbox 5.0.18. Funny thing is, I tried installing VBoxGuestAdditions_5.0.16.iso and bypassed this problem. The desktop works normally with the 5.0.16 Guest Additions, although it notified me about an "outdated guest additions version."

Attachments

Change History

Changed 3 years ago by NoNoNo
- attachment Xorg.0.log added

comment:1 follow-up: ↓ 2 Changed 3 years ago by frank
How much video memory do you have configured for your guest?

comment:2 in reply to: ↑ 1 Changed 3 years ago by NoNoNo

comment:3 Changed 3 years ago by Dragon9k
Hi, same problem for me using guest Ubuntu 14.04.4 LTS after updating to the 5.0.18 guest additions. Had to revert back to the 5.0.16 Guest Additions. Guest with 32MB for video. My host is Windows 10 with 8GB RAM.

comment:5 Changed 3 years ago by raynebc
Similar problem for me. Host OS Windows 7 Pro x64 with 16GB of memory. After upgrading VirtualBox and the guest additions to version 5.0.18, the guest OS (Ubuntu 15.10 x64) would not boot the X environment and would just display "started light display manager" in its place. The full graphical environment would boot if I ran the guest additions uninstall script from a virtual terminal and rebooted. The OS appears to be working normally after having installed version 5.0.16 of the guest additions. The guest is configured with 256MB of video memory.

comment:6 Changed 3 years ago by osHH
I want to confirm the same behavior for my Ubuntu Server running as guest on a Mac OSX host. I had guest additions 4.3.30 installed.
After updating VirtualBox to 5.0.20, I would still get my graphical login, but the guest additions were not running, because they were out of date. After updating to guest additions 5.0.20 I could no longer run the graphical system on my Linux guest. I went back to a previous state and this time did the guest additions update from 4.3.30 to 5.0.16 and everything works fine now. So it seems after 5.0.16 something was changed that can break the graphical system on Linux guests.

comment:7 follow-up: ↓ 10 Changed 3 years ago by michael
I don't know if everyone is seeing the same issue here, but NoNoNo's issue is that we currently do not support that kernel and X server combination: we expect that with such a recent kernel an X server with the modesetting driver available will be present. You could try to see if there is a package for it on your guest, though I have not tested very old versions of the driver.

comment:8 follow-up: ↓ 9 Changed 3 years ago by michael
Summary of currently known issues:
- some 32-bit guests do not work due to an OS bug. I have committed a work-around which should be available soon.
- we do not support guests with kernel 3.11 and later and X server 1.16 and later, though those may work if you manually install the modesetting X.Org driver.
If anyone is seeing other issues, please try to reproduce them with step-by-step reproduction instructions (starting from installing the guest OS and the exact type) and put the instructions on this ticket.

comment:9 in reply to: ↑ 8 Changed 3 years ago by frank
- some 32-bit guests do not work due to an OS bug. I have committed a work-around which should be available soon.
That one is hopefully fixed with the last 5.0 Guest Additions test build (build > 107107).

comment:10 in reply to: ↑ 7 Changed 3 years ago by NoNoNo

comment:11 Changed 3 years ago by michael
The problem is that our X server graphics driver can only work if the kernel graphics driver is not loaded.
X.Org Server 1.17 and later can use the kernel driver directly, but older ones cannot, so if it is present it prevents them from working at all. I have after all found a way to prevent the kernel driver from loading in that case, so the problem should now be fixed. If you would like to test this, the Additions build on the test build page<1> should contain the fix.
<1> Testbuilds

comment:12 Changed 3 years ago by michael
- Summary changed from Guest OS failed to initial graphical desktop with vbox 5.0.18 update to Guest OS failed to initial graphical desktop with vbox 5.0.18 update -> believed fixed in releases greater than 5.0.20

comment:13 Changed 3 years ago by frank
- Status changed from new to closed
- Resolution set to fixed
Fixed in 5.0.22. Please reopen if necessary.

comment:14 Changed 3 years ago by luis.antolin
- Status changed from closed to reopened
- Resolution fixed deleted
I still have this problem. No 3D acceleration, so Gnome goes to "fallback". Easy to reproduce. Guest: 64 bits, 2GB RAM, 128MB video RAM, 3D enabled. Install a clean Debian 7.11 using the Debian net-install ISO, all default options. Selected packages: just Desktop. Kernel 3.2.0-4-amd64, gnome 3.4.2, xorg 1.12. With additions 5.0.16, all is OK, standard gnome experience. With additions 5.1.10, no 3D acceleration, gnome goes to "fallback". I tried all the workarounds that I could find in 2 hours of "googling", nothing helped. Any suggestion is welcome and I can easily try it. Thanks for your support.

comment:15 Changed 3 years ago by michael
It looks to me like the version of libGL.so in Debian 7.11 is affected by this issue:

comment:16 Changed 3 years ago by luis.antolin
It took me a while, but yes, I think that I have verified that the commit at the last message of that link did not make it to mesa 8.0.5 (debian7 old-stable). In the file src/mapi/glapi/gen/gl_x86-64_asm.py the lines

print '#if defined(GLX_USE_TLS) && defined(__linux__)'
[...]
print '#endif /* GLX_USE_TLS */'

are still in the source package and they should not be. Unfortunately I need to continue working in debian7. I can source-compile and patch some packages, but upgrading to Debian 8 or using 20+ unofficial packages is not an option. Is there any fix or workaround that could make Guest Additions newer than 5.0.16 work in Debian7? Thanks a lot for your time and support.

comment:17 Changed 3 years ago by michael
I will think about this, but perhaps the simplest solution would be to ask Debian to apply the fix?

comment:18 Changed 3 years ago by luis.antolin
I will try to contact the maintainer(s). In the meantime I will also try to rebuild from source myself and apply the fix that you mention at to see if it helps.

comment:19 Changed 3 years ago by luis.antolin
I have compiled the source package, adding the fix mentioned above, and installed the resulting .deb packages on top of my clean Debian7, but unfortunately it did not help. I can provide any needed data or traces. I am also open to any suggestion. Now that I have the compilation environment all set up, implementing ideas should be easy. Thanks.

comment:20 Changed 3 years ago by michael
I tried removing the section without rebuilding the library, using "strip -R .note.ABI-tag /usr/lib/x86_64-linux-gnu/libGL.so.1". After running ldconfig, glxinfo reported that 3D pass-through was enabled, and GNOME-Shell started after a reboot of the guest.

comment:21 Changed 3 years ago by luis.antolin
That worked perfectly. Additions 5.1.10 running OK, gnome-shell works OK. When I patched and recompiled the source package I did it following tutorials, so most probably I did something wrong and the patch did not make it into the generated binary. Knowing that it works, I will redo it more carefully. The maintainers of the packages told me that it is very unlikely that this patch makes it to Debian7.
Debian7 wheezy (old-stable) is maintained by the LTS team and they mostly care about fixing security issues. So, problem solved for me (and maybe for others in a similar situation). Thanks again for your time and effort!

Changed 3 years ago by luis.antolin
- attachment 15-no-abi-tag.diff added
patch for Debian7 wheezy mesa 8.0.5-4 that corrects the 3D problem

comment:22 Changed 3 years ago by luis.antolin
Update. For some reason applying the patch detailed at was not enough. I had to remove .note.ABI-tag also from a couple of .h files. I have attached the patch file that I used in case it could help anyone. I could also provide .deb packages for the x86-64 architecture.

comment:23 Changed 3 years ago by michael
Update: technically this was a different issue, not the original one reported, although the symptoms were the same. To avoid the risk of introducing new problems I decided not to fix this, but instead to detect the problem in the Additions installer and give the user instructions how to fix it themselves. You will find the change in the timeline<1> in a day or so if you search for a changeset message containing the tag bugref:8679 (our internal reference number for this issue).

comment:24 Changed 3 years ago by michael
- Status changed from reopened to closed
- Resolution set to fixed

comment:25 Changed 3 years ago by michael
- Summary changed from Guest OS failed to initial graphical desktop with vbox 5.0.18 update -> believed fixed in releases greater than 5.0.20 to Guest OS failed to initial graphical desktop with vbox 5.0.18 update -> fixed in 5.0.20, comment 14 fixed in releases higher than 5.1.10

comment:26 Changed 3 years ago by michael
- Summary changed from Guest OS failed to initial graphical desktop with vbox 5.0.18 update -> fixed in 5.0.20, comment 14 fixed in releases higher than 5.1.10 to Guest OS failed to initial graphical desktop with vbox 5.0.18 update -> fixed in release 5.0.22 and higher, comment 14 fixed in releases higher than 5.1.10
https://www.virtualbox.org/ticket/15319
Hi all, I would like to know if it is necessary to create an installation package for my plugin, or if it can just be copied to another computer. I have a prefix id registered on the Adobe site; on my Mac everything is OK, but when I try to copy the release build to another Mac with InDesign it just gives me an error message. Thanks for the answers.

No, it is not necessary to create an installer for the plugin, unless it has a dependency on something that needs to be copied onto the deployment machine. What kind of error are you getting? It might be because you are using a third party framework which you are not copying onto the new machine.
Manan Joshi - Efficient InDesign Solutions - MetaDesign Solutions

It says: Adobe Indesign does not recognise eTag creator.InDesignPlugin as a valid plug-in. Please reinstal ...

Does the plugin have a dependency on any third party framework? Does this release plugin load on your development machine?
Manan Joshi

If I run this plugin on the development machine everything is OK. If I run it without launching from Xcode, it works fine. I use only C++ libraries and namespaces, e.g. std, some streams ... The only thing that comes to my mind: I have used a singleton created from my template, but that shouldn't be a problem ... Could the problem be that I would like to rename my project and plugin?

Try loading the release build of any sample plugin that ships with the InDesign SDK on the deployment machine. If that loads on that machine, then either there is a problem in your project's settings or there is some dependency for the plugin to load that you are missing. The last thing you could do is install Xcode on the machine and try to debug the problem.
Manan Joshi - Efficient InDesign Solutions - MetaDesign Solutions

I tried to run WriteFishPrice on the target machine and the result was the same as with my plugin -> same error message. On the target machine there is only a 30-day trial of InDesign, but I think that should not make any difference.
I have Xcode on the target machine, but I don't have an InDesign that supports debugging ...

Make sure you are using the correct version of InDesign to load your plugin; for example, CS5 plugins are not compatible with CS5.5 and vice versa. Secondly, you can debug plugins using the release version of InDesign too, you just need to create the debugging symbols for it. There is restricted debugging support on the release build; in this case you just need to look into the gdb console for any messages.
Manan Joshi - Efficient InDesign Solutions - MetaDesign Solutions

I have checked the version on both machines and they are the same: 7.5.2, so there shouldn't be a problem. I will try to launch the project from Xcode and hope to see something in gdb ... Could you give me a clue how to create the debugging symbols for that, please?

Show/inspect package contents on both the destination and source plugin. Probably the internal links are broken. If so, forget about SMB shares, SFTP or whatever else you used. ZIP the plugin using Finder's "Create Archive" ...
Dirk

Hi Dirk, it looks like the internal links are broken ... I didn't use the default location for the plugin when I was creating it ... My project's base location is in a folder that contains the whole InDesign package, so I am not able to create a zip package, because there are a lot of other files that are not needed in the plugin package ... This is how it looks:

The build / output location is irrelevant. I build straight into a subfolder of the plug-ins folder. Btw, this is just a matter of taste, but I would not dump my sources into the SDK folder - it is no fun if you have to support multiple versions of InDesign. File system links are broken during transport - having a look at the package of the source plugin that works on the development machine should be the proof. If you are not scared by a command line, you can also use terminal:

ls -l /drag/your/plugin/into/terminal/window/to/produce/its/path...

If you need to transfer a whole folder, use ZIP on that.
I forgot to mention an alternative: the Disk Utility can be used to create DMG files from folders. Xcode also includes a tool, "PackageMaker", that creates whole installer packages.
Dirk

I know that the output location is irrelevant; I have the same setting as you. Thank you for all the tips, I'm going to try PackageMaker and see.

I looked at the package on the development machine, and the content of SelectionTest.InDesignPlugin was the following: And then I did the same on the machine where I would like to run my plugin, and there the plugin package contained only one folder: Versions. The other folders weren't included, so I assume the internal links are broken ... I have no clue how to repair this ...

I assume my repeated suggestion to use ZIP at least as proof has also failed. Is the target machine a plain OSX installation that you took out of the box yourself, or could you be fighting some antivirus software?
Dirk

On the target machine there isn't any antivirus or similar software. I will try to create a zip file tomorrow morning ... Thank you for the info.
Ondrej

Hi all, the solution with the ZIP archive worked; from my point of view it is a very elegant solution. Thank you all for helping me again.
http://forums.adobe.com/message/4217724
This is the first part in a series of articles on how debuggers work. I'm still not sure how many articles the series will contain and what topics it will cover, but I'm going to start with the basics.

In this part I'm going to present the main building block of a debugger's implementation on Linux - the ptrace system call. All the code in this article is developed on a 32-bit Ubuntu machine. Note that the code is very much platform specific, although porting it to other platforms shouldn't be too difficult.

Motivation

To understand where we're going, try to imagine what it takes for a debugger to do its work. A debugger can start some process and debug it, or attach itself to an existing process. It can single-step through the code, set breakpoints and run to them, examine variable values and stack traces. Many debuggers have advanced features such as executing expressions and calling functions in the debugged process's address space, and even changing the process's code on-the-fly and watching the effects.

Although modern debuggers are complex beasts [1], the foundation on which they are built is surprisingly simple. Debuggers start with only a few basic services provided by the operating system and the compiler/linker; all the rest is just a simple matter of programming.

Linux debugging - ptrace

The Swiss army knife of Linux debuggers is the ptrace system call [2]. It's a versatile and rather complex tool that allows one process to control the execution of another and to peek and poke at its innards [3]. ptrace can take a mid-sized book to explain fully, which is why I'm just going to focus on some of its practical uses in examples. Let's dive right in.

Stepping through the code of a process

I'm now going to develop an example of running a process in "traced" mode, in which we're going to single-step through its code - the machine code (assembly instructions) that's executed by the CPU.
I'll show the example code in parts, explaining each, and in the end of the article you will find a link to download a complete C file that you can compile, execute and play with.

The high-level plan is to write code that splits into a child process that will execute a user-supplied command, and a parent process that traces it. First, the main function:

int main(int argc, char** argv)
{
    pid_t child_pid;

    if (argc < 2) {
        fprintf(stderr, "Expected a program name as argument\n");
        return -1;
    }

    child_pid = fork();
    if (child_pid == 0)
        run_target(argv[1]);
    else if (child_pid > 0)
        run_debugger(child_pid);
    else {
        perror("fork");
        return -1;
    }

    return 0;
}

Pretty simple: we start a new child process with fork [4]. The if branch of the subsequent condition runs the child process (called "target" here), and the else if branch runs the parent process (called "debugger" here). Here's the target process:

void run_target(const char* programname)
{
    procmsg("target started. will run '%s'\n", programname);

    /* Allow tracing of this process */
    if (ptrace(PTRACE_TRACEME, 0, 0, 0) < 0) {
        perror("ptrace");
        return;
    }

    /* Replace this process's image with the given program */
    execl(programname, programname, 0);
}

The most interesting line here is the ptrace call. ptrace is declared thus (in sys/ptrace.h):

long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data);

The first argument is a request, which may be one of many predefined PTRACE_* constants. The second argument specifies a process ID for some requests. The third and fourth arguments are address and data pointers, for memory manipulation. The ptrace call in the code snippet above makes the PTRACE_TRACEME request, which means that this child process asks the OS kernel to let its parent trace it. The request description from the man-page is quite clear:

Indicates that this process is to be traced by its parent.
Any signal (except SIGKILL) delivered to this process will cause it to stop and its parent to be notified via wait(). Also, all subsequent calls to exec() by this process will cause a SIGTRAP to be sent to it, giving the parent a chance to gain control before the new program begins execution.

I've highlighted the part that interests us in this example. Note that the very next thing run_target does after ptrace is invoke the program given to it as an argument with execl. This, as the highlighted part explains, causes the OS kernel to stop the process just before it begins executing the program in execl and send a signal to the parent. Thus, the time is ripe to see what the parent does:

void run_debugger(pid_t child_pid)
{
    int wait_status;
    unsigned icounter = 0;
    procmsg("debugger started\n");

    /* Wait for child to stop on its first instruction */
    wait(&wait_status);

    while (WIFSTOPPED(wait_status)) {
        icounter++;

        /* Make the child execute another instruction */
        if (ptrace(PTRACE_SINGLESTEP, child_pid, 0, 0) < 0) {
            perror("ptrace");
            return;
        }

        /* Wait for child to stop on its next instruction */
        wait(&wait_status);
    }

    procmsg("the child executed %u instructions\n", icounter);
}

Recall from above that once the child starts executing the exec call, it will stop and be sent the SIGTRAP signal. The parent here waits for this to happen with the first wait call. wait will return once something interesting happens, and the parent checks that it was because the child was stopped (WIFSTOPPED returns true if the child process was stopped by delivery of a signal).

What the parent does next is the most interesting part of this article. It invokes ptrace with the PTRACE_SINGLESTEP request, giving it the child process ID. What this does is tell the OS - please restart the child process, but stop it after it executes the next instruction. Again, the parent waits for the child to stop and the loop continues. The loop will terminate when the signal that came out of the wait call wasn't about the child stopping.
During a normal run of the tracer, this will be the signal that tells the parent that the child process exited (WIFEXITED would return true for it). Note that icounter counts the number of instructions executed by the child process. So our simple example actually does something useful - given a program name on the command line, it executes the program and reports the number of CPU instructions it took to run from start to finish. Let's see it in action.

A test run

I compiled the following simple program and ran it under the tracer:

#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
    return 0;
}

To my surprise, the tracer took quite long to run and reported that there were more than 100,000 instructions executed. For a simple printf call? What gives? The answer is very interesting [5]. By default, gcc on Linux links programs to the C runtime libraries dynamically. What this means is that one of the first things that runs when any program is executed is the dynamic library loader that looks for the required shared libraries. This is quite a lot of code - and remember that our basic tracer here looks at each and every instruction, not just of the main function, but of the whole process.

So, when I linked the test program with the -static flag (and verified that the executable gained some 500KB in weight, as is logical for a static link of the C runtime), the tracing reported only 7,000 instructions or so. This is still a lot, but makes perfect sense if you recall that libc initialization still has to run before main, and cleanup has to run after main. Besides, printf is a complex function.

Still not satisfied, I wanted to see something testable - i.e. a whole run in which I could account for every instruction executed. This, of course, can be done with assembly code. So I took this version of "Hello, world!"
and assembled it:

section .text
    global _start

_start:
    mov edx, len
    mov ecx, msg
    mov ebx, 1
    mov eax, 4          ; Execute the sys_write system call
    int 0x80

    mov eax, 1          ; Execute sys_exit
    int 0x80

section .data
msg db 'Hello, world!', 0xa
len equ $ - msg

Sure enough. Now the tracer reported that 7 instructions were executed, which is something I can easily verify.

Deep into the instruction stream

The assembly-written program allows me to introduce you to another powerful use of ptrace - closely examining the state of the traced process. Here's another version of the run_debugger function:

void run_debugger(pid_t child_pid)
{
    int wait_status;
    unsigned icounter = 0;
    procmsg("debugger started\n");

    /* Wait for child to stop on its first instruction */
    wait(&wait_status);

    while (WIFSTOPPED(wait_status)) {
        icounter++;
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child_pid, 0, &regs);
        unsigned instr = ptrace(PTRACE_PEEKTEXT, child_pid, regs.eip, 0);

        procmsg("icounter = %u.  EIP = 0x%08x.  instr = 0x%08x\n",
                icounter, regs.eip, instr);

        /* Make the child execute another instruction */
        if (ptrace(PTRACE_SINGLESTEP, child_pid, 0, 0) < 0) {
            perror("ptrace");
            return;
        }

        /* Wait for child to stop on its next instruction */
        wait(&wait_status);
    }

    procmsg("the child executed %u instructions\n", icounter);
}

The only difference is in the first few lines of the while loop. There are two new ptrace calls. The first one reads the value of the process's registers into a structure. user_regs_struct is defined in sys/user.h. Now here's the fun part - if you look at this header file, a comment close to the top says:

/* The whole purpose of this file is for GDB and GDB only.
   Don't read too much into it. Don't use it for anything other
   than GDB unless you know what you are doing. */

Now, I don't know about you, but it makes me feel we're on the right track :-) Anyway, back to the example.
Once we have all the registers in regs, we can peek at the current instruction of the process by calling ptrace with PTRACE_PEEKTEXT, passing it regs.eip (the extended instruction pointer on x86) as the address. What we get back is the instruction [6]. Let's see this new tracer run on our assembly-coded snippet:

$ simple_tracer traced_helloworld
[5700] debugger started
[5701] target started. will run 'traced_helloworld'
[5700] icounter = 1.  EIP = 0x08048080.  instr = 0x00000eba
[5700] icounter = 2.  EIP = 0x08048085.  instr = 0x0490a0b9
[5700] icounter = 3.  EIP = 0x0804808a.  instr = 0x000001bb
[5700] icounter = 4.  EIP = 0x0804808f.  instr = 0x000004b8
[5700] icounter = 5.  EIP = 0x08048094.  instr = 0x01b880cd
Hello, world!
[5700] icounter = 6.  EIP = 0x08048096.  instr = 0x000001b8
[5700] icounter = 7.  EIP = 0x0804809b.  instr = 0x000080cd
[5700] the child executed 7 instructions

OK, so now in addition to icounter we also see the instruction pointer and the instruction it points to at each step. How can we verify this is correct? By using objdump -d on the executable:

$ objdump -d traced_helloworld

traced_helloworld:     file format elf32-i386

Disassembly of section .text:

08048080 <.text>:
 8048080:  ba 0e 00 00 00    mov    $0xe,%edx
 8048085:  b9 a0 90 04 08    mov    $0x80490a0,%ecx
 804808a:  bb 01 00 00 00    mov    $0x1,%ebx
 804808f:  b8 04 00 00 00    mov    $0x4,%eax
 8048094:  cd 80             int    $0x80
 8048096:  b8 01 00 00 00    mov    $0x1,%eax
 804809b:  cd 80             int    $0x80

The correspondence between this and our tracing output is easily observed.

Attaching to a running process

As you know, debuggers can also attach to an already-running process. By now you won't be surprised to find out that this is also done with ptrace, which can get the PTRACE_ATTACH request. I won't show a code sample here since it should be very easy to implement given the code we've already gone through. For educational purposes, the approach taken here is more convenient (since we can stop the child process right at its start).
The code

The complete C source code of the simple tracer presented in this article (the more advanced, instruction-printing version) is available here. It compiles cleanly with -Wall -pedantic --std=c99 on version 4.4 of gcc.

Conclusion and next steps

Admittedly, this part didn't cover much - we're still far from having a real debugger in our hands. However, I hope it has already made the process of debugging at least a little less mysterious. ptrace is truly a versatile system call with many abilities, of which we've sampled only a few so far.

Single-stepping through the code is useful, but only to a certain degree. Take the C "Hello, world!" sample I demonstrated above. To get to main it would probably take a couple of thousand instructions of C runtime initialization code to step through. This isn't very convenient. What we'd ideally want to have is the ability to place a breakpoint at the entry to main and step from there. Fair enough, and in the next part of the series I intend to show how breakpoints are implemented.

References

I've found the following resources and articles useful in the preparation of this article:
http://eli.thegreenplace.net/2011/01/23/how-debuggers-work-part-1/
Hi, all. I'm holding a devfs and Hot-Plug BOF at OLS 2000. The BOF is scheduled for Friday, 21-Jul at 10am. I've appended the abstract.

In addition, I'll be holding a half-day workshop on devfs and Hot-Plug issues. This will be a brainstorming and design session, where developers can state what they need and what they'd like, and we can evaluate solutions. There may even be code written :-) The workshop will probably be on Sunday, 23-Jul.

Devfs and Hot-Plug
==================

Devfs is the Device Filesystem for Linux. It provides an API for device drivers to create device nodes for each device attached to the system. The devfs namespace is designed to show the topology of devices, making it far easier to "navigate" your hardware.

The recent support for hot-plug buses such as USB and FireWire requires more sophisticated device management to fully utilise these buses. Devfs and its companion, devfsd (the device management daemon), provide an excellent solution to the problems of hot-plug support.

In this BOF I will give a brief overview of the history and development of devfs, and summarise other work by USB developers who have used devfs+devfsd in their solutions. Following this, there will be an open discussion for developers who want to learn how devfs works, how to use it and the reasoning behind design choices.

Regards,
Richard....
Permanent: rgooch@xxxxxxxxxxxxx
Current: rgooch@xxxxxxxxxxxxxxx
http://oss.sgi.com/archives/devfs/2000-07/msg00036.html
In C++, you cannot assign the address of a variable of one type to a pointer of another type. Consider this example:

int *ptr;
double d = 9;
ptr = &d;    // Error: can't assign double* to int*

However, there is an exception to this rule. In C++, there is a general purpose pointer that can point to any type. This general purpose pointer is the pointer to void.

void *ptr;   // pointer to void

Example 1: C++ Pointer to Void

#include <iostream>
using namespace std;

int main() {
    void* ptr;
    float f = 2.3;
    ptr = &f;    // float* converts to void*
    cout << &f << endl;
    cout << ptr;
    return 0;
}

Output

0xffd117ac
0xffd117ac

Here, the pointer ptr is given the value &f. The output shows that the void pointer ptr stores the address of the float variable f.
https://www.programiz.com/cpp-programming/pointer-void
Gridded Datasets ¶ import xarray as xr import numpy as np import holoviews as hv hv.extension('matplotlib') %opts Scatter3D [size_index=None color_index=3] (cmap= interfacing with grid-based datasets directly. Grid-based datasets have two types of dimensions: - they have coordinate or key dimensions, which describe the sampling of each dimension in the value arrays - they have value dimensions which describe the quantity of the multi-dimensional value([[ 0.39038447, 0.5809186 , 0.12968233, 0.98518853, 0.82353369, 0.00904825, 0.92951999, 0.06997991, 0.09565921, 0.39314974], [ 0.19149875, 0.30422852, 0.51478074, 0.76948423, 0.0908213 , 0.47405966, 0.57984901, 0.59721032, 0.40688238, 0.92245316], [ 0.20083408, 0.78531743, 0.41305037, 0.13770441, 0.5807749 , 0.04929245, 0.75421141, 0.91537635, 0.40221771, 0.82849946], [ 0.29525556, 0.31930462, 0.6573146 , 0.02311893, 0.4155926 , 0.78252929, 0.83330492, 0.22257102, 0.50052556, 0.01615106], [ 0.36901954, 0.36214848, 0.30293863, 0.6354043 , 0.00470442, 0.2823036 , 0.88763943, 0.92972773, 0.98962421, 0.4832394 ]])} However HoloViews also ships with interfaces for xarray and iris , two common libraries for working with multi-dimensional datasets: xr_img = img.clone(datatype=['xarray']) arr_img = img.clone(datatype=['image']) iris_img = img.clone(datatype=['cube']) print(type(xr_img.data)) print(type(iris_img.data)) print(type(arr_img.data)) <class 'xarray.core.dataset.Dataset'> <class 'iris.cube.Cube'> <type 'numpy.ndarray'>: <type )), kdims=['x', 'y', 'z'], vdims=['Value']) dataset3d :Dataset [x,y,z] (Value) This is because even a 3D multi-dimensional array represents volumetric data which we can only easily display if it only contains a() heatmap = hv.HeatMap((['A', 'B', 'C'], ['a', 'b', 'c', 'd', 'e'], np.random.rand(5, 3))) heatmap + heatmap.table() heatmap.dimension_values('x') array(['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'C', 'C', 'C', 'C', 'C'], dtype='|S1') To access just the unique coordinates along a 
dimension simply supply the expanded=False keyword:

heatmap.dimension_values('x', expanded=False)

array(['A', 'B', 'C'], dtype='|S1')

Finally, we can also get a non-flattened, expanded coordinate array, returning a coordinate array of the same shape as the value arrays:

heatmap.dimension_values('x', flat=False)

array([['A', 'A', 'A', 'A', 'A'],
       ['B', 'B', 'B', 'B', 'B'],
       ['C', 'C', 'C', 'C', 'C']], dtype='|S1')

When accessing a value dimension the method will also return a flat view of the data:

heatmap.dimension_values('z')

array([ 0.7228966 ,  0.31783408,  0.93571534,  0.4231442 ,  0.49566044,
        0.28835859,  0.53386977,  0.61899398,  0.08347936,  0.73628744,
        0.49255667,  0.43946026,  0.22211106,  0.29402531,  0.88105038])

We can pass the flat=False argument to access the multi-dimensional array:

heatmap.dimension_values('z', flat=False)

array([[ 0.7228966 ,  0.28835859,  0.49255667],
       [ 0.31783408,  0.53386977,  0.43946026],
       [ 0.93571534,  0.61899398,  0.22211106],
       [ 0.4231442 ,  0.08347936,  0.29402531],
       [ 0.49566044,  0.73628744,  0.88105038]])
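The expanded/flat access patterns just shown can be sketched with plain NumPy. This is a hypothetical re-implementation of the semantics, not HoloViews' actual code; the function name and signature are illustrative only:

```python
import numpy as np

# Mimic dimension_values for the 'x' key dimension of the HeatMap
# example above: 3 unique x coordinates, 5 y-samples each.
x = np.array(['A', 'B', 'C'])   # unique x coordinates
values = np.random.rand(5, 3)   # one column of values per x coordinate

def dimension_values_x(x, values, expanded=True, flat=True):
    """Return x coordinates matching the value array's sampling."""
    if not expanded:
        return x                       # just the unique coordinates
    # Repeat each coordinate once per y sample, mirroring the
    # 'A A A A A B B B B B C C C C C' output shown above.
    full = np.repeat(x, values.shape[0]).reshape(len(x), values.shape[0])
    return full.ravel() if flat else full

print(dimension_values_x(x, values, expanded=False))    # unique coords
print(len(dimension_values_x(x, values)))               # 15 flat entries
print(dimension_values_x(x, values, flat=False).shape)  # (3, 5)
```

The design point is that the unique coordinates are stored once and expanded on demand, so the flat view lines up element-for-element with the flattened value array.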
http://holoviews.org/user_guide/Gridded_Datasets.html
CC-MAIN-2017-30
refinedweb
473
53
E-reader Newspaper 08/11/09 - 09/11/09

Clinton pushes for cooperation on confronting extremism
By David Alexander (Front Row Washington)
Submitted at 11/8/2009 5:23:22 PM

Secretary of State Hillary Clinton used an awards ceremony Sunday in Berlin to push European allies for greater cooperation in confronting extremism, nuclear proliferation and other challenges of the 21st century. Her remarks came as thousands of people crowded into the city on the eve of the 20th anniversary of the collapse of the Berlin Wall.

"We should look to the examples of the generations who brought us successfully through the 20th century and once again together chart a clear and common course to safeguard our people and our planet, defeat violent extremists and prevent nuclear proliferation," Clinton said. "We need to form an even stronger partnership to bring down the walls of the 21st century and to confront those who hide behind them," Clinton said, like suicide bombers and those who attack girls for trying to go to school. "In place of these new walls, we must renew the trans-Atlantic alliance as a cornerstone of a global architecture of cooperation," she said.

Clinton's remarks come as President Barack Obama is facing a difficult decision on whether to deploy additional troops to Afghanistan. The administration has had difficulty convincing European allies to shoulder a bigger role in the conflict, and analysts said Clinton's call for renewed commitment was not likely to change that. "Facing difficult pressures on Afghanistan, the Obama administration marked the 20th anniversary of the fall of the Berlin Wall by revving up a rhetorical trope that President Bush favored -- drawing a parallel between the Cold War and the fight against radical Islamist terrorism," said Tom (freeword in Berlin)

The First Draft: US media's Fort Hood coverage turns to militancy question
By David Morgan (Front Row Washington)
Submitted at 11/9/2009 6:27:02 AM
year (Lieberman)

Kate Connolly reports from Berlin on the celebrations 20 years on from the fall of the wall
By Kate Connolly (World news and comment from the Guardian | guardian.co.uk)
Twenty years after the Berlin Wall came down, visitors to the German capital's celebrations tell Kate Connolly what it meant to them. Kate Connolly

Video: Holiday clips show three US hikers having fun (World news and comment from the Guardian | guardian.co.uk)
Submitted at 11/9/2009 7:43:37 AM
Shane Bauer, Sarah Shourd and Josh Fattal were arrested in July as they walked through mountainous terrain between the Iraqi and Iranian Kurdish areas

Avatar: James Cameron's $500 Million Folly, In Three Dimensions [Avatar]
By matt buchanan (Gizmodo)
Submitted at 11/9/2009 6:50:46 AM
Five. Hundred. Million. Dollars. This. According to NY Times. The bad thing is, I've seen the Avatar trailer in 3D. Big blue crap in three dimensions is just big blue crap that feels like it's right in front of you. In order to be profitable, it needs to generate ticket sales of over $250 million -- only Star Trek level. It's not a huge deal, you know. It's just the fate of 3D movies hanging in the balance.
[ NYT via io9]

General Casey: diversity shouldn't be casualty of Fort Hood
By Tabassum Zakaria (Front Row Washington)
Submitted at 11/8/2009 9:07:47 AM

General George Casey, the Army's top officer, is concerned that diversity will become a casualty of the Fort Hood tragedy. The religious beliefs of suspect Major Nidal Malik Hasan, a Muslim Army psychiatrist, have led to speculation about motive in the shooting rampage that killed 13 people.

"I'm concerned that this increased speculation could cause a backlash against some of our Muslim soldiers. And I've asked our Army leaders to be on the lookout for that," Casey told CNN's "State of the Union." Asked on NBC's "Meet the Press" whether Muslim soldiers are conflicted in fighting wars in Muslim countries like Afghanistan and Iraq, Casey said: "I think that's something that we have to look at on an individual basis. But I think we as an Army have to be broad enough to bring in people from all walks of life."

The bottom line is the military benefits from diversity, he said. "Our diversity, not only in our Army, but in our country, is a strength. And as horrific as this tragedy was, if our diversity becomes a casualty, I think that's worse," Casey said.

President Barack Obama also mentioned military diversity in his Saturday radio address which was focused on Fort Hood. Veterans Day is a chance to honor Americans who served in battlefields all over the world, Obama said. "They are Christians and Muslims, Jews and Hindus and nonbelievers. They reflect the diversity that makes this America. But what they share is a patriotism like no other," Obama said.

Photo credit: Reuters/Jessica Rinaldi (Casey at Fort Hood after shooting), Reuters/Jim Young (Obama leaving podium after remarks about Fort Hood shooting)

Iran charges US citizens with spying
By Adam Gabbatt (World news and comment from the Guardian | guardian.co.uk)
Submitted at 11/9/2009 8:03:30 AM

Three Americans detained after crossing border from Iraq into Iran earlier this year. Three US citizens who were detained in Iran after crossing the border from Iraq have been charged with espionage, the official Irna news agency has reported. Shane Bauer, 27, Sarah Shourd, 31, and Josh Fattal, 27, were arrested in July as they walked through mountainous terrain between the Iraqi and Iranian Kurdish areas where the border is not clearly marked.

"The three are charged with espionage," the Tehran general prosecutor, Abbas Jafari Dolatabadi, was quoted as saying. "Investigations continue into the three detained Americans in Iran."

The US secretary of state, Hillary Clinton, said Iran had no reason to hold the three prisoner and called for them to be released. "We believe strongly that there is no evidence to support any charge whatsoever," she said during a visit to Berlin. "We would renew our request on behalf of these three young people and their families that the Iranian government exercise compassion and release them so they can return home, and we will continue to make that case."

In September the Iranian president, Mahmoud Ahmadinejad, suggested the release of the Americans could be linked to the release of Iranian diplomats he said were being held by US troops in Iraq. Bauer is a freelance journalist and photographer based in the Middle East who has reported from Iraq, Syria, the Darfur region of Sudan and Yemen, according to his website. Free the hikers, a website set up to raise the plight of the detainees, said Fattal had been visiting Bauer and Shourd when the three embarked on their trip. The site reported on their 100th day in captivity yesterday.

• Iran • Iraq • United States • Hillary Clinton
Adam Gabbatt
guardian.co.uk © Guardian News & Media Limited 2009 | Use of this content is subject to our Terms & Conditions | More Feeds

After 5 years, Firefox faces new challenges
By Stephen Shankland (Webware.com)
Submitted at 11/9/2009 4:00:00 AM
Mozilla helped reshape the Web since releasing Firefox 1.0 five years ago. Now it's got a reawakened Microsoft and Google Chrome to reckon with. Originally posted at Deep Tech

Healthcare vote: Obama says courageous, Palin says mess
By Tabassum Zakaria (Front Row Washington)
Submitted at 11/8/2009 11:25:15 AM

The House passage of healthcare legislation means different things to different folks. For President Barack Obama it was a "courageous vote" by members of Congress. Sarah Palin, meanwhile, says the "mess will be disastrous for our economy, our small businesses, and our personal liberty." For Congressman Anh "Joseph" Cao, the only Republican to vote for the House bill, it was "the right decision for my district, even though it was not the popular decision for my party." Watch CNN's interview of the first-term congressman from Louisiana below: Embedded video from CNN Video. Who do you agree with? Click here for more Reuters political coverage.

Photo credit: Reuters/Yuri Gripas (Obama on way to making statement on healthcare)

More on Hassan | Michael Tomasky
By Michael Tomasky (World news and comment from the Guardian | guardian.co.uk)
Submitted at 11/9/2009 7:43:55 AM

Okay, it's certainly starting to look like Nidal Hassan held some extreme views and had some dubious connections. The Times reports this morning that he grew more and more opposed to the US wars overseas, that he tried to get out of the Army but couldn't (you can; he was wrong about this, or got bad advice) and experienced some racist or religionist taunting. The Wash Post is exploring a link between Hassan and a Virginia imam who was a "leading promoter" of al-Qaida and who crossed paths at one point with two of the 9-11 hijackers. Federal investigative sources still tell both papers that the operating theory right now is that he acted alone.

Fair enough. If them's the facts, them's the facts. My position last Friday -- that his roots and background may or may not turn out to be relevant, and that in the meantime we should not rush to conclusions -- was not only entirely reasonable but was a position taken more out of distrust of the media than any kind of Palestinian sympathy. The initial media hysteria in these instances is usually wrong. Never forget poor Richard Jewell.

So if Hassan was indeed an American-hating extremist, what are we to make of it? Yes, I'm well aware that some of you think we should make of it that Barack Obama is behind it all and that Hassan's actions are phase one of Obama's plot to destroy the country. But I mean back here on planet Earth. We make of it that the Army needs more rigorous screening and more thoroughgoing reviews of soldiers' states of mind. Anything else?

• Fort Hood shootings • US military • United States
Michael Tomasky
guardian.co.uk © Guardian News & Media Limited 2009 | Use of this content is subject to our Terms & Conditions | More Feeds

Even God Runs Windows XP [Image Cache]
By matt buchanan (Gizmodo)
Apparently, the fog near a plaza in Ukraine was so utterly intense, advertisements were reflected in the sky. And this is what it looked like after a reboot: Yes, that's a Windows XP error, floating in the sky. No, it's not a Photoshop.
This one, for a church or
Submitted at 11/9/2009 7:00:00 AM

Fall of the Berlin Wall: 20 years on leaders gather
By Matthew Weaver (World news and comment from the Guardian | guardian.co.uk)
Submitted at 11/9/2009 8:02:41 AM

World leaders are gathering in Berlin today, two decades after the fall of the wall, to celebrate and reflect on the event. Find out about some of the key events here.

Today's events to mark the 20th anniversary of the fall of the Berlin Wall will range from solemn reflection to high kitsch celebration. Memorials are planned for the 136 people who died when they tried to cross the border while -- in an event reminiscent of International It's a Knockout -- 1,000 foam dominoes placed along the wall's route will be tipped over. Dancers dressed as angels will descend from prominent buildings.

At around 2pm, Angela Merkel, the first German leader to grow up in the communist east, will cross the Bornholmer Street bridge, where the first border post opened on the evening of 9 November 1989. She will be accompanied by the former Soviet president Michael Gorbachev and Poland's former opposition leader and ex-president Lech Walesa. At around 6pm, Daniel Barenboim, who was in Berlin to witness the events of 1989, will conduct his Staats Kapelle orchestra on an outdoor stage at the Brandenburg Gate. From 6.30pm, world leaders including Merkel, Gordon Brown, the French president, Nicolas Sarkozy, and the Russian president, Dmitry Medvedev, will give speeches. Afterwards, the dominoes will be toppled and there will be fireworks at the Brandenburg Gate at 8pm.

To mark the anniversary, the Guardian has put together a special Berlin Wall package including a series of videos, audio from those whose lives were affected and interactive guides.
• The historian and columnist Timothy Garton Ash remembers the mood in the German capital after the wall fell. "As a symbol, it lives on, above all, as an image of peaceful liberation," he writes.
• Take a historical and geographical journey of the Berlin Wall through five videos.
• A gallery of images shows the wall from its construction to the commemoration of its demise.
• "Without the Leipzig demos and the will of the people, it would never have happened." Author Anna Funder reflects on life since the fall of the wall in this audio.
• Our interactive timeline guides you through the dates and events that shaped the Berlin Wall and finally brought about its downfall.
• Our Berlin correspondent, Kate Connolly, reports on today's celebrations and the mood of anticipation in the city.

The Berlin Twitter Wall provides live updates and thoughts from across the world. The subject is also trending on Twitter at #fotw. For a historical perspective, the writer Gunter Grass has just published his diaries for 1990. And writer Lisa Selvidge describes her experiences and how they inspired her to write her new novel, The Last Dance over the Berlin Wall. You can see how the Guardian covered the events at the time on our digital archive.

Update 3pm: Under drizzly skies Merkel crossed the Bonhomer Bridge flanked by Walesa and Gorbachev. She paid tribute to the courage of both men and to the bravery of the people of East Germany. She said: "This is not just a day of celebration for Germany, (but) a day of celebration for the whole of Europe."

Update 3.30pm: Today's best Zelig moment comes from the French president Nicolas Sarkozy who used his Facebook page to suggest he was there 20 years ago. Sarkozy, or a minion on his behalf, posted a picture of the young Nicolas chipping away at the wall, with a caption that reads: "Memories of the fall of the Berlin wall, November 9, 1989". The French media have pointed out that archives showed he was there a week later. Meanwhile, back in Berlin "the atmosphere is fantastic".
Visitors to the city today tell Kate Connolly what the fall of the wall meant to them.
• Berlin Wall • Germany
Matthew Weaver
guardian.co.uk © Guardian News & Media Limited 2009 | Use of this content is subject to our Terms & Conditions | More Feeds

Review redux: Flixster movie app for BlackBerry
By Jessica Dolcourt (Webware.com)
Taking a closer look at Flixster's updated movie preview and showtime app for BlackBerry. Originally posted at Crave

Kraft launches hostile takeover of Cadbury
By Mark Fightmaster (BloggingStocks)
Submitted at 11/9/2009 9:50:00 AM
Filed under: Deals, Kraft Foods 'A' (KFT)
Ahead of the pre-determined deadline, Kraft (KFT) decided to launch its formal offer for U.K.-based chocolate maker Cadbury (CBY). KFT announced that the cash-and-stock bid is worth $16.46 billion (9.8 billion pounds) or 717 pence per U.K.-listed CBY share. Continue reading Kraft launches hostile takeover of Cadbury. Kraft launches hostile takeover of Cadbury originally appeared on BloggingStocks on Mon, 09 Nov 2009 09:50:00 EST. Please see our terms for use of feeds. Permalink| Email this| Comments

Burma claims it will release Aung San Suu Kyi
By Mark Tran (World news and comment from the Guardian | guardian.co.uk)
Submitted at 11/9/2009 8:02:10 AM

Diplomat says jailed opposition leader will be allowed to organise her party for elections next year.

Burma's opposition leader, Aung San Suu Kyi, may soon be released so she can play a role in next year's election, a senior Burmese diplomat has said. "There is a plan to release her soon ... so she can organise her party," Min Lwin, a director-general in the foreign ministry, told the Associated Press. He gave no details and it was unclear whether Aung San Suu Kyi would be allowed to campaign or stand for election.

Despite the conciliatory remarks, the country's constitution includes provisions that bar her from holding office and ensure the primacy of the military in the government. The Nobel peace prize winner has spent 14 of the last 20 years under house arrest. In August a court sentenced her to an additional 18 months after an American, John Yettaw, swam across a lake to her villa in Rangoon and stayed overnight. Burma's junta in the past has raised expectations of Aung San Suu Kyi's imminent release only to dash the hopes of her supporters at home and abroad.

Pro-democracy campaigners cautioned against reading too much into the latest hints on Suu Kyi's release. "They've been saying these sorts of things for a long time but they have never delivered on them," said Anna Roberts, the director of the Burma Campaign UK. "The regime's main concern is to get economic sanctions lifted and get approval for the sham elections next year."

Tantalising hints of a possible release for the political prisoner came as Min Lwin was in Manila for a meeting of the Association of Southeast Asian Nations (Asean) and the US. In a break with George Bush's policy of isolating the Burmese regime, Barack Obama has decided on a policy of engagement with the junta. Last week the US assistant secretary of state for east Asia, Kurt Campbell, and his deputy, Scott Marciel, became the most senior American officials to visit Burma since 1995, when Madeleine Albright went as Bill Clinton's ambassador to the UN.

• Aung San Suu Kyi
Mark Tran
guardian.co.uk © Guardian News & Media Limited 2009 | Use of this content is subject to our Terms & Conditions | More Feeds

Sunday Morning Suction Feeding (Little Green Footballs)
An interesting little high-speed video of a red bay snook, having a snack. [Video]

Cramer on BloggingStocks: Pelosi can't kill the health care sector
By Jim Cramer (BloggingStocks)
Submitted at 11/9/2009 10:10:00 AM
Filed under: Market matters, Abbott Laboratories (ABT), Aetna Inc (AET), CIGNA Corp (CI), Gilead Sciences (GILD), Stocks to Buy, Cramer on BloggingStocks
From TheStreet.com Network
• Ariad Upgraded, Xenoport Drug Delayed: BioBuzz
• Holiday Deals Heat Up at Big Retailers

TheStreet.com's Jim Cramer says the Senate is filled with more-savvy politicians, and the upside for beaten-down names is huge. Nancy Pelosi has now said her piece. The most unpopular Speaker of the House in the history of Wall Street has gotten her precious health care legislation through the House after ramming through a stimulus package that had far too little infrastructure and far too much pay raise for municipal and state workers, the most powerful interest group in the country. But this time the Senate sees through it, and the politicians -- despite Pelosi's insistence that Tuesday's election went her way -- know better. There are pages after pages after pages in this bill that look threatening. But here's the rub: This bill's public option, the one that is supposed to be a killer to everything health care, should affect no more than 6 million people over a 10-year period, according to the Congressional Budget Office. In order to get 60 votes in the Senate, even that may prove to be too powerful an option. Continue reading Cramer on BloggingStocks: Pelosi can't kill the health care sector. Cramer on BloggingStocks: Pelosi can't kill the health care sector originally appeared on BloggingStocks on Mon, 09 Nov 2009 10:10:00 EST. Please see our terms for use of feeds.
Permalink| Email this| Comments

The healthcare vote | Michael Tomasky
By Michael Tomasky (World news and comment from the Guardian | guardian.co.uk)
Submitted at 11/9/2009 7:30:10 AM

Well, it passed. A win is a win is a win, I guess. If Chelsea beat Bolton by one goal in extra time, it'd show up as a win. And if Notre Dame barely beat Navy - oops, bad example! - you know what I mean. As long as it goes in the W column, it's all right. But count me among those who believe that a 220-215 vote is a little underwhelming. After picking up two House seats in last week's elections - the much-discussed one in upstate New York, and the less-noticed victory of John Garamendi in what we call the East Bay area of San Fran/Oakland - the House Democrats have 40 votes to spare on any piece of legislation. They needed every one of them, as 39 Democrats opposed.

One can interpret this as masterful nose-counting by Nancy Pelosi and her team. Or one can say that they barely scraped by and maybe needed a little luck to do so. The narrow margin surprised me a bit, especially after the abortion-funding vote that came earlier Saturday evening on the Stupak amendment. After 64 Democrats voted for Stupak, I'd have thought that many of those 64 would go ahead and vote for the final bill. And many did. But 23 Democrats voted for the Stupak amendment and then went on to vote against the final passage of the bill. What on earth would make these 23 happy? Nothing short of the whole thing going away, I guess.

We'll get more into the substance of the abortion thing as the week goes on. I think it was a hideous amendment, but maybe it won't have a terribly dramatic practical effect, as a piece in today's NY Times suggests. But I think the vote shows that neither Pelosi nor the president has much purchase over the centrist Democrats. Obama went to the Hill on Saturday morning to rally the troops. He specifically argued to centrists that they should vote yea because the GOP was going to come after them either way. Undoubtedly true. But it obviously didn't persuade all that many people (although for the record I should note that a slight majority of the 52-member Blue Dog coalition voted for final passage, by 28-24). It would have been nice if Pelosi could have ginned the yea votes up to 230 or so. It would have had a slight psychological effect on the Senate, I think. Now, nervous centrist senators are still going to be … nervous centrist senators.

On the other hand, at least they didn't lose the vote, then extend the time limit in contravention of House rules, and then threaten people with familial ruination unless they changed their votes. Just imagine what the tea partiers would have done if Pelosi had done that. Somehow I doubt they complained in 2003 when Tom DeLay did it.

• US Congress • US healthcare • United States
Michael Tomasky
guardian.co.uk © Guardian News & Media Limited 2009 | Use of this content is subject to our Terms & Conditions | More Feeds

Barnes and Noble's Nook already makes a splash
By Tom Johansmeyer (BloggingStocks)
Submitted at 11/9/2009 8:40:00 AM
Filed under: Competitive strategy, Google (GOOG), Amazon.com (AMZN), Media World, Technology

If Amazon (AMZN) was comfortable with its spot atop the e-reader market, it just got a wakeup call from Barnes & Noble (BKS). This product is on fire, and it still isn't even on shelves yet. Mary Ellen Keating, a spokeswoman for Barnes & Noble, wouldn't reveal how many of these devices have been pre-ordered, but she did say, "Demand for the product in our stores and online has surpassed our expectations." She also noted, "We are working hard to meet demand for the holidays." Continue reading Barnes and Noble's Nook already makes a splash. Barnes and Noble's Nook already makes a splash originally appeared on BloggingStocks on Mon, 09 Nov 2009 08:40:00 EST. Please see our terms for use of feeds. Read| Permalink| Email this| Comments

Singularity University, Day Two: Peter Diamandis Thinks Big
By Ted Greenwald (Wired Top Stories)
Submitted at 11/9/2009 5:42:00 AM
After dinner on day two of Singularity University, Peter Diamandis gives a fantastic presentation about the X-Prize and what it means. This is a guy who radiates energy, seriousness, and goodwill. He would have made a first-class motivational speaker, but he's focused on substantial issues and favors leather jackets over sharkskin suits.

Irony: U2's 'Free' Concert At The Berlin Wall, Blocked By A Big Wall
By Mike Masnick (Techdirt)
Dementia writes in to point out the rather ironic situation of a "free" concert put on by the band U2, at the remains of the Berlin Wall in order to celebrate the demise of the wall... but MTV decided to put up a big temporary barrier around the event so those who didn't have free tickets could not even see the event. Yes, they erected a special "wall" to block out a free concert about The Wall. As Dementia noted with the submission, "you're doing it wrong..." Permalink| Comments| Email This Story

Disney's 'A Christmas Carol': Investors not in a merry mood?
By Steven Mallas (BloggingStocks)
Submitted at 11/9/2009 9:30:00 AM
Filed under: General Electric (GE), Walt Disney (DIS), Sony Corp ADR (SNE), Film

Disney (DIS) had high hopes for A Christmas Carol. It was supposed to be an unqualified blockbuster. Unfortunately, the film's first weekend at the box office was nothing short of a disaster. Too strong? Hardly. According to early estimates at Box Office Mojo, Carol took in little more than $30 million at domestic screenings. It wasn't supposed to be like this. Carol was supposed to be light-years ahead of the competition. Sony's (SNE) Michael Jackson's This Is It came in second. The Men Who Stare at Goats, distributed by Liberty Capital Group's (LCAPA) Overture Films, was third. And The Fourth Kind, from General Electric's (GE) Universal, is currently ranked, aptly enough, in fourth place. Each of the latter three pictures had a gross of somewhere between $12 million and $14 million. To me, Carol's take didn't seem as disproportionate as it should have been. Continue reading Disney's 'A Christmas Carol': Investors not in a merry mood? Disney's 'A Christmas Carol': Investors not in a merry mood? originally appeared on BloggingStocks on Mon, 09 Nov 2009 09:30:00 EST. Please see our terms for use of feeds. Read| Permalink| Email this| Comments

News Corp's MySpace mistakes pile up
By Tom Johansmeyer (BloggingStocks)
Submitted at 11/9/2009 10:30:00 AM
Filed under: Bad news, Internet, Google (GOOG), News Corp 'B' (NWS), Media World, Technology

For News Corp. (NWS), MySpace is the mistake that keeps on costing. It's bad enough that Murdoch's empire paid $500 million for the social networking platform shortly before Facebook knocked it from the premier spot in the social media beauty pageant, but now we also know that News Corp. has committed $350 million to office space for MySpace that will never be used. News Corp is shelling out more than $1 million a month for 420,000 square feet in Playa Vista, near Los Angeles International Airport. The deal was signed in August 2008 by Peter Levinsohn, former president of the Fox Interactive Media Unit. At the time, he issued a chest-puffing memo claiming it was "the single biggest real-estate transaction in Los Angeles in the last 25 years." Fortunately, he didn't mix the word "genius" in there at all. Continue reading News Corp's MySpace mistakes pile up. News Corp's MySpace mistakes pile up originally appeared on BloggingStocks on Mon, 09 Nov 2009 10:30:00 EST. Please see our terms for use of feeds. Read| Permalink| Email this| Comments

Danish Anti-Piracy Group Withdraws All Its Lawsuits Against Individuals (After Losing Most Anyway)
By Mike Masnick (Techdirt)
Submitted at 11/9/2009 6:02:00 AM

While the RIAA has backed down (but not stopped) lawsuits against those accused of file sharing in the US, it looks like the Danish anti-piracy bureau has decided to drop all of its lawsuits after it became clear that individuals were basically winning them all (Google translation of the original, found via brokep). Basically, the courts acquitted most of the individuals accused of private file sharing, with the one exception being the case where the guy confessed. And, the nature of the rulings in the acquittals made it clear that it was virtually impossible to win a lawsuit against individuals for file sharing. Of course, we have no doubt that the industry will continue to use other means, such as via regulatory capture, to continue to look for ways not to give consumers what they want. Permalink| Comments| Email This Story

iTab Mania: Wired.com Readers Envision Apple's Tablet
By Brian X. Chen (Wired Top Stories)
Submitted at 11/8/2009 9:00:00 PM
Why wait for Apple to deliver a touchscreen tablet if you can do it first? Wired.com readers submitted illustrations of an Apple tablet as part of an iTablet mock-up contest.

Goldman Sachs CEO Lloyd Blankfein pulls a Jeff Skilling in an interview
By Zac Bissonnette (BloggingStocks)
Submitted at 11/9/2009 10:50:00 AM
Filed under: Management, Goldman Sachs Group (GS)

Goldman Sachs's (GS) normally reclusive CEO and noted theologian Lloyd Blankfein has been conducting an unprecedented number of interviews of late to try to bolster the company's image. Maybe they'd be better off if he crawled back into his shell. In an interview with London's Sunday Times, Mr. Blankfein explained that Goldman Sachs is "doing God's work." I never thought of God as a mortgage backed securities trader necessarily, but that's OK. Blankfein added that "We help companies to grow by helping them to raise capital. Companies that grow create wealth. This, in turn, allows people to have jobs that create more growth and more wealth. We have a social purpose." Continue reading Goldman Sachs CEO Lloyd Blankfein pulls a Jeff Skilling in an interview. Goldman Sachs CEO Lloyd Blankfein pulls a Jeff Skilling in an interview originally appeared on BloggingStocks on Mon, 09 Nov 2009 10:50:00 EST. Please see our terms for use of feeds. Read| Permalink| Email this| Comments

Priceline.com earnings preview: A sweet deal in Q3?
By Trey Thoelcke (BloggingStocks)
Submitted at 11/9/2009 9:00:00 AM
Filed under: Earnings reports, Forecasts

Priceline.com Inc. (PCLN), which was recently added to the S&P 500, is scheduled to discuss its third-quarter 2009 financial results in a conference call Monday, November 9, at 4:30 PM ET. You can catch the live webcast of the call on the company's website. During the three months that ended in September, Priceline announced a partnership with Ticketmaster (TKTM) and launched a rewards Visa card. Analysts surveyed by Thomson Reuters expect this leading online travel services provider to report that earnings for that period jumped 18.2% from a year ago to $2.92 per share. And revenue for the quarter is expected to be 23.6% higher to $693.9 million. Continue reading Priceline.com earnings preview: A sweet deal in Q3? Priceline.com earnings preview: A sweet deal in Q3? originally appeared on BloggingStocks on Mon, 09 Nov 2009 09:00:00 EST. Please see our terms for use of feeds. Permalink| Email this| Comments

Rock Band Voice Engine Tricked By Something Called a "Musical Instrument" [Gaming]
By John Herrman (Gizmodo)
Submitted at 11/9/2009 7:21:29 AM

On one hand, what's happening here is very simple: Rock Band's singing feature just senses pitch, not words, so it's perfectly reasonable that a flute—or indeed almost any instrument—could do the trick. On the other? This is art. The musical cosmos have been tilted out of balance for quite a while now, violently thrown askew sometime between when the first fake guitar rolled off an assembly line in China and the first time a child recognized the Beatles on the radio as "that song from my Xbox!" Today, as we watch a young lady with a flute totally pass for a Very Serious Yelling Man with a facial tattoo, it feels like, in some small way, order has been restored. [ Neatorama via Kotaku]

Haglöfs Laptop Drybags Have a Design Almost as Awesome as Their Name [Laptops]
By Mark Wilson (Gizmodo)
Submitted at 11/9/2009 7:40:00 AM
Maybe I'm just a sucker for umlauts and radioactive thresholds of orange, but these 15 and 17-inch Haglöfs Laptop Drybags have me sold on both their padding and Ziplock-style watertight compartment. They run about $30. [ Haglöfs via Stilsucht via OhGizmo!]

Kohl's Black Friday ad
By Doug Aamoth (CrunchGear)
Submitted at 11/9/2009 7:00:00 AM
* LCD 8X Digital Zoom – $59.99
* Electronics: 10-50% Off Entire Stock Of The Sharper Image Products
* GPS Systems: 3.5 GPS Navigation System – $69.99
* GPS Dashboard Grip Mat – $9.99
* MP3 Players: 2GB MP3 Player w/Video – $24.99
* Televisions 19
More Black Friday deals…

Happy 5th Birthday, Firefox!
By John Biggs (CrunchGear)
Submitted at 11/9/2009 6:33:00 AM

Come back with me to the turn of the century, circa 1996. Your humble narrator was working for campus police at Carnegie-Mellon University in Pittsburgh, creating FileMaker databases for their police reports. It wasn't uncommon then to see DOS machines sitting beside Windows 95 machines, and the web was a primitive and strange thing. There were only two browsers of note, Netscape and Internet Explorer, and firing either up was neither particularly comfortable nor interesting. But, hidden deep behind Netscape's bland carapace, was Mozilla. When you typed "about:mozilla" in the Netscape address bar, for example, you got:

And the beast shall come forth surrounded by a roiling cloud of vengeance. The house of the unbelievers shall be razed and they shall be scorched to the earth. Their tags shall blink until the end of days. -- from The Book of Mozilla, 12:10

Pretty badass stuff, especially when most websites were dedicated to kittens and burgeoning corporate identity. I was hooked instantly. This was the browser for me and it slowly became the browser for everyone with self-respect and a brain.

Fast forward to 2004: Mozilla and Netscape were on the rocks and it looked like the browser wars had been won. IE was the victor. In order to combat bloat and "feature creep," however, a ragtag team of coders led by Dave Hyatt and Blake Ross built something they called "Phoenix," then "Firebird," then, on November 9, 2004, Firefox 1.0 was born. This turned into the Mozilla suite -- Firefox and Thunderbird.

On this, the fifth anniversary of that momentous occasion, let's all tip out a little Jolt for Netscape and toast to the future of Firefox, the best browser in the world. Best of all, the book of Mozilla is still being written and any time you type 'about:mozilla' into Firefox you get a red screen and a potent reminder of the early days of the Internet. Happy birthday, Firefox.

Sam's Club (rumored) Black Friday ad
By Doug Aamoth (CrunchGear)
Submitted at 11/9/2009 8:00:00 AM
A list of rumored items for the Sam's Club Black Friday ad has been percolating around the web lately.
* HP G71 17 LED Notebook w/Blu-ray – $499.00
* Digital Cameras: Olympus FE-4000 12 MegaPixel Camera – $98.00
* Digital Media Cards: Toshiba 16GB SDHC Digital
There’s no ad scan to Media Card – $24.00 confirm any of this yet, but I’ll DVD Players update this post once more JVC 1080p Blu-ray Player – information becomes available. $129.00 For now, though, here’s a list of Phillips Dual Screen Portable the rumored electronics items: DVD Player – $99.00 Blank Media Electronics Blu-ray 2-Packs – $17.00 Samsung Compact SD Computers Camcorder w/Bag – $149.00 Acer Aspire One 10.1 Netbook – GPS Systems $197.00 Garmin Nuvi 255w GPS Navigation System – $119.00 Home Theater Samsung 5.1 Blu-ray Home Theater – $398.00 Printers HP AIO Printer Bundle – $69.00 Televisions Hitachi 42 1080p LCD HDTV – $598.00 Phillips 52 1080p LCD HDTV – $1198.00 Vizio 47 1080p 240Hz LCD HDTV – $997.00 Video Games PS3 120GB Bundle – $399.00 Wii Active Life Bundle w/Mat – $69.00 Wii Family Bundle – $349.00 No word on possible doorbuster items or when the store will open on Black Friday. I’ll update this post when more information becomes available. Sam’s Club Black Friday Ad[BlackFriday.info] More Black Friday deals… Casio plans to enter the OLED game By Serkan Toto (CrunchGear) will start operations from April 2010, with both companies Submitted at 11/9/2009 8:00:34 AM involved saying they’ll focus on OLED can still pretty much be manufacturing OLED panels considered a thing of the future, sized ten inches and smaller first but we’re getting closer to use the (like the one you see in the technology in our homes every picture). month. Today, Casio Computer Those OLED screens are announced[JP] it has teamed up supposed to be used in digital with Tokyo-based technology cameras and cell phones by 2015. company Toppan Printing to But Casio and Toppan also said develop and produce OLED they will conduct R&D to panels. The new joint venture eventually develop bigger sized OLEDs, for example for TVs, electroluminescent compounds, venture (total capitalization: $4.5 t o o . T h e O L E D s w i l l b e whereas OLED production today million). 
m a n u f a c t u r e d u s i n g h i g h - is mainly based on low-polymer p o l y m e r - t y p e o r g a n i c organic compounds. According Modern Warfare 2 Shows How To Piss Off Fans By Mike Masnick (Techdirt) Submitted at 11/9/2009 4:02:00 AM william was the first of a few of you to send in this story about how Infinity Ward seems to have decided to piss off a bunch of fans of the upcoming Modern Warfare 2 by not allowing dedicated game servers, limiting the number of players for PCbased multiplayer games and other limiting features. In one telling quote, one of the game's designers was asked about whether or not a certain feature would be enabled to allow players to change their field of view, and was told: We would to play it at all. like you to play the game the way Permalink| Comments| Email we designed and balanced it. This Story Now, that's fair enough, but if those fans don't want to play the game that way, they're not going 12 Gadgets/ Politics/ E-reader Newspaper HTC 'carefully looking' into netbook category, wants to add 'unique value' By Darren Murph (Engadget) Submitted at 11/9/2009 10:41:00 AM Oh, HTC-- never one to dodge the chance to keep us on edge, are you? Half a year after we heard that the self-proclaimed " quietly brilliant" company was working on an Android netbook with T-Mobile, HTC's own CEO Peter Chou confessed during a recent interview that those very wheels were still turning. During the frenzy that was the HD2 launch, he quipped that his company was still "carefully looking into [the netbook] category and how it can be part By Danny Allen (Gizmodo) lights that make Pac-Man chomp of that," noting that nothing was in the dark. [ Roomba Pac-Man official yet due to its desire to Submitted at 11/9/2009 7:08:00 AM really add "unique value" rather via Engadget] It was only a matter of time, Built using our spare time, than punching out another "meright? Check out this setup where Roomba Pac-Man is designed to too" machine. 
'Course, if Intel a laptop player controls "Pac- s h o w c a s e t h e e x t e n s i v e really does revamp its Atom Man" while being chased by robo U n m a n n e d A e r i a l S y s t e m lineup at CES, we'd say this is -vacuum ghosts. And get this: it's software suite that we have actually a demo of their developed to support our unmanned aerial software that personal research. It was also a guides airborne vehicles. great opportunity to use some of That's why the red tape marking o u r s k i l l s f o r o u r o w n (Little Green Footballs) the maze is really only there for entertainment. As a disclaimer, the video. The player sees a our research center, RECUV, is Submitted at 11/8/2009 6:53:42 PM virtual representation on screen, not affiliated with the project, Here’s the kind of brain food and the ghost roombas use and the work done here, while you don’t usually get from US i n t e r n a l o d o m e t r y w i t h a utilizing some software we were media; the BBC’s The positioning system to find their paid to develop at CU, is the sole Intelligence Squared Debate, in a way around, and avoid each creation of those listed at the five-part YouTube playlist. other. bottom of the page. The motion: “Is the Catholic Now they just need those LED- Video: Hacked Roombas Used to Play Pac-Man, Finally! [Roomba] Nook reader turns out to be popular, shipments get pushed back By Matt Burns (CrunchGear) Submitted at 11/9/2009 6:25:14 AM just about the perfect time for the company to come out swinging -after all, you know you still find yourself dreaming about the Shift from time to time. [Via jkOnTheRun] Filed under: Laptops HTC 'carefully looking' into netbook category, wants to add 'unique value' originally appeared on Engadget on Mon, 09 Nov 2009 10:41:00 EST. Please see our terms for use of feeds. 
Read| Permalink| Email this| Comments Video: The Intelligence Squared Debate church a force for good in the world?” Speaking for the motion, Archbishop John Onaiyekan and Anne Widdencombe MP. Speaking against the motion, Christopher Hitchens and Stephen Fry.[Video not shown]. Gadgets/ Tech/ E-reader Newspaper Epson concocts world's first Gigabyte fixes iPhone 4K HTPS panel, 4K 3LCD sync issue with BIOS projectors closer to reality update By Darren Murph (Engadget) Microsoft to launch Forefront Protection 2010 (CNET News.com) Submitted at 11/9/2009 6:30:00 AM By Vladislav Savov (Engadget) Submitted at 11/9/2009 9:09:00 AM Oh,pound 13 as good as ours as to when this stuff will actually hit the market in a functioning product, but yesterday is as good a day as any to start saving up. [Via Akihabara News] Filed under: Displays, Home Entertainment Epson concocts world's first 4K HTPS panel, 4K 3LCD projectors closer to reality originally appeared on Engadget on Mon, 09 Nov 2009 09:09:00 EST. Please see our terms for use of feeds. Read| Permalink| Email this| Comments Report offensive content: If you believe this comment is Submitted at 11/9/2009 8:47:00 AM offensive or violates the CNET's Site Terms of Use, you can report The Intel P55 Express chipset it below (this will not snafu that caused iPhones to lose automatically remove the their syncing minds has now comment). Once reported, our been remedied -- at least by one staff will be notified and the motherboard maker. Gigabyte comment will be reviewed. has issued a BIOS update making Select type of offense: things all hunky-dory between Offensive: Sexually explicit or the phone and the mobo, putting offensive language your troubles to an end. The P55 Read- Gigabyte Beta BIOS Spam: Advertisements or is Intel's latest midrange chipset download page and orchestrates things for newer Read- Update fixes iPhone sync c o m m e r c i a l l i n k s Disruptive posting: Flaming or Core i5 / i7 machines. 
The other problem offending other users two P55 purveyors, ASUS and Filed under: Cellphones, Illegal activities: Promote MSI, were also caught by the D e s k t o p s cracked software, or other illegal bug, and there are anecdotal Gigabyte fixes iPhone sync issue reports of success with an ASUS with BIOS update originally content BIOS update, but not official appeared on Engadget on Mon, Comments(optional): Report fixes as of yet. Given the 09 Nov 2009 08:47:00 EST. Cancel competitive nature of this market, Please see our terms for use of This content has passed through though, we'd be surprised if those feeds. Permalink| Email this| fivefilters.org. two companies didn't quickly Comments follow suit. All's well that ends well, right? CrunchDeals: Refurbished Logitech Harmony 890 remote for $100 By Doug Aamoth (CrunchGear) Submitted at 11/9/2009 5:47:00 AM only. The remote carries a 90-day compatible devices is up over warranty direct from Logitech. 175,000 currently. There’s a built Logitech Harmony 890 Remote -in color LCD screen and lithium Control – Refurbished[Amazon] -ion rechargeable battery as well. Again, this deal is good today 14 Gadgets/ Tech News/ E-reader Newspaper Intel purportedly fast-tracking Pine Trail platform, forgetting all about N270 / N280 at CES By Darren Murph (Engadget) Submitted at 11/9/2009 9:56:00 AM Say it with us now: "freaking finally!" The world at large seems perfectly fine with using Atom N270 and N280 CPUs for the rest of eternity (judging by the latest netbook sales figures, anyway), but techies like us are sick and tired of dabbling with the same underpowered chips and the same lackluster capabilities. At long last, we're hearing that Intel will supposedly officially announce the Pine Trail platform in late December, with a raft of netbooks based around the new Pineview chips hitting the CES show floor in January. 
The 1.66GHz Atom N450, dual-core 1.66GHz Atom D510 and Atom D410 are expected to be all the rage at the show, with the Google May Be Making Their User Interfaces Look Halfway Decent [Google] existing N270 and N280 making an expedited trip to the grave. Good riddance, we say. [Via Register Hardware] Filed under: Laptops Intel purportedly fast-tracking Pine Trail platform, forgetting all about N270 / N280 at CES originally appeared on Engadget on Mon, 09 Nov 2009 09:56:00 EST. Please see our terms for use of feeds. Read| Permalink| Email this| Comments By Tim Stevens (Engadget) By Ted Greenwald (Wired Top Stories) Submitted at 11/9/2009 5:56:00 AM 'If you rearrange the atoms in coal, you get diamond. If you rearrange the atoms in sand, you get silicon. How atoms are arranged is fundamental to all material aspects of life,' says Ralph Merkle, senior research chair at the Institute for Molecular Manufacturing. Those words kick off day 2 at the Singularity University Executive Program. Submitted at 11/9/2009 7:12:00 AM Eng Aerial System suite, designed for only not to make my eyes hurt guidance of airborne vehicles, every time I have to open Gmail. Submitted at 11/9/2009 8:13:00 AM but we're too busy geeking out to The actual user interface won't be We've seen mixtures of Roomba care about potential real-world much better, but at least it will and Pac-Man before, but nothing applications of this tech. Video feel a little bit clearer and like this. A team of developers below. organized. Bonus points: Google have hacked five floor-cleaning Continue reading Autonomous Wave may have found some real bots to create a sort of OCD Roombas do Pac-Man right use, at last. [ Engadget] version of the game, with the Pac (video) -Man bot sucking up little white Filed under: Robots rectangles whilst being chased by Autonomous Roombas do Pacrobot incarnations of Inky, Pinky, Man right (video) originally Blinky, and Clyde. 
But, when the appeared on Engadget on Mon, Pac-Man vacuum finds a power 09 Nov 2009 08:13:00 EST. pellet those ghostly rovers turn Please see our terms for use of blue and start fleeing. The tech is feeds. Permalink| Email this| supposed to be a demonstration Comments of the developers' Unmanned Autonomous Roombas do Pac-Man right (video) Singularity University: Rearranging Atoms With Ralph Merkle By Jesus Diaz (Gizmodo) Gadgets/ Politics/ E-reader Newspaper 15 Tech Note: Code Frenzy (Little Green Footballs) Submitted at 11/8/2009 1:06:07 PM Here’s a tech note, also known as an open thread, as I delve into the wealth of new goodies in PHP 5. At some point after installing our new server last year, our gracious, kind, and allpowerful web hosting company upgraded our version of PHP from version 4 to version 5 — but somehow I missed the memo. Imagine my shock to check out a phpinfo() command recently and By Nicholas Deleon stores, particularly in the discover that LGF is running the (CrunchGear) northeast, had already broken the latest and greatest, with all those date. Can’t have li’l ol’ mom-and Submitted at 11/9/2009 7:30:54 AM nice new functions for JSON and -pop have all the fun, now can XML parsing, better object You most certainly already we? orientation, etc., instead of know this by now, but Modern What does this mean for you? lumbering along with PHP 4 as I Warfare 2 is probably already You can try to call your local drive that's just begging to be had thought. a v a i l a b l e a t y o u r l o c a l GameStop, and see if it’s selling By Darren Murph (Engadget) broken. Hit the read link for a So I’m refactoring all our code GameStop. The release date the game early. If so, hooray. 
If Submitted at 11/9/2009 10:18:00 AM look at 90 grueling hours of that reads and parses JSON or (tomorrow, actually) was broken not, you’ll have to wait one more Oh sure, we've seen a few " work, or just jump past the break XML to use the built-in (and last week by various so-called agonizing day to play the game. portable" Gamecube systems for a celebratory video. much faster) PHP 5 functions mom-and-pop video game stores, I’m still undecided if I’m going instead of using outdated external so Activision went ahead and to get it, seeing as though World over the years, but we've yet to [Thanks, Jonathan] started letting select GameStops o f W a r c r a f t t a k e s u p a set our eyes on anything as Continue reading The NCube: classes and libraries written for glorious as this. Not surprisingly, probably the best portable PHP 4. sell the game. supermajority of my gaming So like I was sayin’, here’s yer GameStop, with Activision’s time. Not that any of you care, the NCube's creator is yet another Gamecube of all time Ben Heck apprentice, with the Filed under: Gaming, Handhelds o p e n t h r e a d f o r a S u n d a y eventual approval, made the which I fully recognize. case being a heavily modded The NCube: probably the best afternoon as I geek out into the decision to break the street date, Datamax Kid's Delight and the portable Gamecube of all time void. as its known, because other display an unmodded Zenith originally appeared on Engadget PSone. There's a 2-way switch on Mon, 09 Nov 2009 10:18:00 for running off of batteries or the EST. Please see our terms for AC outlet, a relocated memory use of feeds. 
Read| Permalink| card slot and a rear-mounted disc Email this| Comments GameStop given permission to break Modern Warfare 2 street date The NCube: probably the best portable Gamecube of all time 16 Gadgets/ Tech/ Tech News/ E-reader Newspaper LG's 15-inch OLED TV GE, Comcast reportedly value now blowing minds in NBCU at $30 billion South Korea (CNET News.com) Further, the two companies have discussed an option whereby GE would sell off all or most of its By Darren Murph (Engadget) One major obstacle seems to ownership of the new company Submitted at 11/9/2009 9:33:00 AM have been settled in Comcast's to Comcast over the next seven quest to buy NBC Universal from years, according to sources cited Call Daegu home? Just over in General Electric--how much to previously. Recent reports say South Korea to visit and / or pay for it. that GE and Comcast have now infiltrate the DMZ? Regardless of Both companies have reportedly decided how to price the new why you're there, you're probably agreed on a price of $30 billion entity after the deal goes into interested in picking up LG's for GE's movie and TV unit, effect so that GE faces no latest, which has been tempting according to sources cited problems selling off its remaining our retinas since IFA. Just as Monday by Reuters and The stake. we'd heard back in late August, Wall Street Journal(subscription The valuation of NBC Universal the aforesaid firm's 15-inch required for full story). was seen as a major challenge in OLED TV is reportedly now on The agreement on the worth of advancing the deal, according to sale in South Korea, and it's NBC Universal (NBCU) is a sources. Comcast naturally was packing a price tag of around 3 major step toward paving the intent on maximizing the value of million ($2,598). 
By our count, way to create a new, privately its own networks and minimizing this is just the second major, Filed under: Displays, HDTV, h e l d c o m p a n y t h a t w o u l d the value of NBCU to limit the mass-produced OLED TV to hit Home Entertainment combine NBC's TV stations and amount of up-front cash it would store shelves anywhere in the LG's 15-inch OLED TV now Universal Studios with Comcast's need to invest in the new firm. world, but we're hoping to see a blowing minds in South Korea TV and cable stations. NBCU's Latest reports say that Comcast lot more action in this space originally appeared on Engadget Web properties include iVillage would inject anywhere from $4 come CES. You TV makers are on Mon, 09 Nov 2009 09:33:00 and the online video site Hulu, in billion to $6 billion into the new l i s t e n i n g t o o u r r e q u e s t s EST. Please see our terms for which it is a co-owner along with entity. use of feeds. Read| Permalink| News Corp. and Walt Disney Co. demands, right? However, both companies have Email this| Comments [Via OLED-Display] Under the terms of the proposed r e p o r t e d l y a g r e e d t o b a s e deal, Comcast would own a Comcast's final cash payment on majority 51 percent slice of the NBCU's financial performance new entity, with GE owning the before any finalized deal closes. remaining 49 percent. If its performance tanks, Comcast Submitted at 11/9/2009 6:48:00 AM Cartoon: Flag for Moderation By Rob Cottingham (ReadWriteWeb) Submitted at 11/8/2009 11:40:35 AM Those of us who manage online communities have learned to crowdsource a big chunk of our work: identifying user contributions that deserve a higher profile - and those that deserve to be dropped in a deep, dark hole. But there has to be something more nuanced than just thumbsup and thumbs-down buttons. And so... Sponsor More Noise to Signal. Discuss could end up paying less.. This content has passed through fivefilters.org. 
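The kind of refactor described in "Tech Note: Code Frenzy" above can be sketched roughly as follows. This is an illustrative example, not LGF's actual code: the sample data and variable names are invented, and the commented-out "before" snippet assumes a PEAR-style external parser such as Services_JSON, which was a common choice in the PHP 4 era.

```php
<?php
// Before (PHP 4 era): JSON parsing required bundling a third-party
// class and instantiating it by hand, e.g.:
//   require_once 'JSON.php';
//   $json = new Services_JSON(SERVICES_JSON_LOOSE_TYPE);
//   $data = $json->decode($raw);

// After (PHP 5.2+): json_decode()/json_encode() are built in,
// implemented in C, and considerably faster than userland parsers.
$raw  = '{"title":"Tech Note","comments":[1,2,3]}';
$data = json_decode($raw, true);   // true => associative arrays
echo $data['title'], "\n";

// XML gets the same treatment via the built-in SimpleXML extension:
$xml = simplexml_load_string('<post><id>42</id></post>');
echo (string) $xml->id, "\n";
```

The swap is usually mechanical: delete the require/instantiate boilerplate and replace each `$parser->decode(...)` call with a single built-in function call.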
Gadgets/ Tech/ Tech News/ E-reader Newspaper 17 Sam's Club Black Friday Ad Leaked, Looks Heartbreakingly Lame [Rumor] Nov. 9, 1963: Dual Disasters Stun Japan By Mark Wilson (Gizmodo) By Daniel Dumas (Wired Top Stories) Submitted at 11/9/2009 7:38:27 AM We haven't seen a full ad scan on Sam's Club's Black Friday sale yet, but this potentially leaked list making the rounds now isn't much to write home about. Below are only the parts of the list that we could even merit pasting. The bold items look like the best deals. Acer Aspire One 10.1" Netbook - $197.00 HP G71 17" LED Notebook w/Blu-ray - $499.00 Olympus FE-4000 12 MegaPixel Camera - $98.00 ($30 savings) JVC 1080p Blu-ray Player - $129.00 Garmin Nuvi 255w GPS Navigation System - $119.00 ($10 savings) Hitachi 42" 1080p LCD HDTV $598.00 Phillips 52" 1080p LCD HDTV - $1198.00 ($50 savings) Vizio 47" 1080p 240Hz LCD HDTV - $997.00 (...really?) [ Black Friday] Submitted at 11/8/2009 9:00:00 PM A mining explosion and a train crash combine to kill more than 600 people. Balloons! Sending out a mystery message on your iPhone By David Winograd (The Unofficial Apple Weblog (TUAW)) Submitted at 11/8/2009 8:30:00 PM Filed under: iPhone, App Review With over 100,000 apps in the app store, it's getting harder and harder to find something new; most apps seem to be 'me too' versions of something else. Balloons! US $2.99 [ iTunes Link] for iPhones running OS 3.0 or better, is something I haven't seen before, and it's really very clever. TUAW first got a look at an early development version of Balloons! back at WWDC, including a video interview with the developer. Balloon mail has been used, along with the more common phrase message in a bottle, to describe sending a message into the wind or sea and hoping that someone finds it and contacts you. It's sort of non-directional social networking with a hint of mystery built in. In this app, you start making a balloon by choosing from a variety of balloon styles. 
Next you create a message that the balloon will convey. Tap in the middle of the screen and the camera activates to take a picture of what's going on in your life at the moment. Then add a bit of text and send the balloon out into the world. Other users, over 900 in the first 3 days of sales, are doing the same thing. Over 3500 balloons have been sent up from the US, Europe, and Japan already. Next, you'll want to catch a balloon. When you do, you'll see the message from the person who made the balloon along with a separate flippable page from everyone who caught the balloon, York City, you can't immediately grab the balloon in London; it needs time to travel. If you want to see what has happened to your balloon, there is a balloon tracking option that tells you how long your balloon has been flying and if has been caught or not. Tap on one of your caught balloons and you'll see all the notes added by those that have seen your balloon. I found this to be a lot of fun. There is a free, advertisingsupported version of the app [ iTunes Link] that doesn't include the tracking option. I liked the idea of giving out a free appetizer, since you can get a added something to it, and let it great idea of how Balloons! fly again. As more people catch, works and quickly realize that the add to, and release balloons, each best part of the app is the balloon takes on a history and tracking option. often has a story to tell. The graphics suit the app nicely. The balloons don't travel Screens are very cartoonish using randomly. If launched in New bright colors and animated clouds. I was taken by the whimsy of this app, and can see it being great for kids as a nudge toward becoming interested in geography. It's also fun, tinged with a bit of longing for faraway places, for everyone. Take a look at the video in the 2nd half of this post to see it in action. Continue reading Balloons! Sending out a mystery message on your iPhone TUAW Balloons! 
Sending out a mystery message on your iPhone originally appeared on The Unofficial Apple Weblog (TUAW) on Sun, 08 Nov 2009 20:30:00 EST. Please see our terms for use of feeds. Read| Permalink| Email this| Comments 18 Tech/ E-reader Newspaper 7 Ways to Get More Out of LinkedIn By Sharlyn Lauby (Mashable!) flies in the face of conventional wisdom when your goal is to build relationships and Sharlyn Lauby is the president c o m m u n i t y . of Internal T a l e n t Eric B. Meyer, an associate in M a n a g e m e n t ( I T M ) w h i c h the labor and employment group specializes in employee training o f D i l w o r t h P a x s o n L L P , and human resources consulting. reminds us that when using a S h e a u t h o r s a b l o g a t professional networking site such hrbartender.com. as LinkedIn, “don’t give a L i n k e d I n , w h i c h r e c e n t l y potential employer an easy reached the 50 million user excuse to remove you from m i l e s t o n e , h a s l o n g b e e n consideration. Use a professional considered the social networking headshot and scrap the picture of site for professionals. If you’re you doing a keg-stand.” in business, it is basically He adds that “an employer may expected that you have a profile not discriminate when selecting there. one job applicant over another. But with the more mainstream For example, an employer may platforms like Twitter and not base a hiring decision on Facebook being used for business such things as race, religion, purposes, some professionals are gender, and national origin. n e g l e c t i n g t h e i r L i n k e d I n Although actually proving an profiles. While LinkedIn is employer made a discriminatory certainly not as dynamic as other hiring decision may be difficult.” social media sites, it still provides Businesses who engage in hiring a lot of value — if you use it discrimination are the exception, correctly. So whether you’re not the rule. 
Just remember, by new to LinkedIn or a veteran, using an avatar, you will be here are some of the things you providing information about should consider incorporating yourself a prospective employer into your LinkedIn strategy. 1. may not have otherwise obtained Include a Photo Avatar on its own. 2. Build Your Some media reports claim that Network of Connections because organizations can use While we might be inclined to any criteria they want to make say quality is better than quantity, hiring decisions, photo avatars it could be possible that the p r o v i d e c o m p a n i e s w i t h number of connections you have information they may not have says something about you. Greg otherwise known about you K o u t s i s , c o r p o r a t e a n d based on a resume alone and international channel recruiter for could actually hurt you more than Aplicor LLC, says, “if someone help. But, not including a photo has 20-50+ connections then I with a social networking profile know they probably check Submitted at 11/9/2009 7:57:28 AM make regular updates in LinkedIn. The one space where you can keep your connections informed is the status updates section. Lori Burke, director of human resources at Neighborhood America, explains that updates are not only an interesting read, LinkedIn at least once a week. If but very valuable. “I’ve found someone has 1-19 then I realize new networking groups I may not they probably either haven’t have thought about [via status begun to pop the hood and look updates]. Additionally, it allows inside or gotten past the initial me to learn what others are threshold of their friends, family involved with or in, who they and past colleagues. They might may be connected to, etc. In be a great prospect for me to total, it widens the scope of reach out to but this might not be knowledge for me.” 4. Seek the best use of my time. 
This Meaningful Recommendations combined with the profile they A terrific feature of LinkedIn is have listed lets me realize t h e a b i l i t y t o p r o v i d e quickly if I am wasting my time recommendations. This is a with someone who has no place for your connections to interest or trust in LinkedIn.” comment about your work. So you might say to yourself, if R e c o m m e n d a t i o n s c a n b e small numbers in the connection thought of as beefed up thank department signal you’re a you cards. Instead of telling one novice, do large numbers mean person how you feel, you’re you’ll connect with just about telling the world that person does a n y o n e ? K o u t s i s s a y s n o t good work. necessarily. “I do not believe It’s important to get good solid there’s a maximum number of recommendations and Meyer connections that makes someone offers some thoughts on how to look like they will just connect do that. First, “think about who with anyone. LinkedIn only knows you best. It could be a coshows 500 then adds the + sign worker or manager. It could also after the 500 so you never really be a client or customer for whom do know how many more than you just did an incredible job on 500 connections someone has a huge project. If you seek a until you connect with them.” 3. recommendation from a client or Use Status Updates to Your c u s t o m e r , b e p o l i t e a n d Advantage remember to thank the person Once you complete your profile, w h o g i v e s y o u t h e there aren’t a lot of places to r e c o m m e n d a t i o n . ” Then, “If you are going to seek a recommendation from a coworker.” 5. Optimize Your Profile.” When filling out your profile, you should think about your WAYS page 20 Tech/ Tech Tips/ Tech News/ E-reader Newspaper 19 Rupert Murdoch Plans To Hide His Sites From Google, The World Yawns By Stan Schroeder (Mashable!) talking right there. 
Submitted at 11/9/2009 6:30:15 AM

The media today is widely reporting an excerpt from Rupert Murdoch's interview with Sky News Australia, in which he says he plans to make News Corp sites invisible to Google's search engine. While Murdoch has been on the verge of saying that for quite a while now, this is the first time he actually uttered it. But one has to listen to the entire interview to understand that Murdoch is not quite clear about what, exactly, he plans to do, and even if he is, it doesn't make much sense.

You can see the interview below, but I'll highlight several interesting points which show just how imprecise and confusing Murdoch's plans sound. Early in the interview he says the following:

"There are no websites, news websites or blog sites anywhere in the world today making any serious money. Some maybe break even, or make a couple of million."

This is one rich media mogul talking right there. I don't have to explain it to you, folks: breaking even or making a couple of million is not the same. It's an important sentence because it shows that Murdoch is not only interested in making money; he's interested in making obnoxious amounts of money. And we're supposed to pay for it.

He then makes the argument that it's better to have a quality audience, willing to pay for their news, than having just everyone coming to their sites, by which he refers to people finding articles on one of his sites via search engines such as Google. "We'd rather have fewer people coming to our websites, but paying."

Fair enough. Let us know how that went in a couple of years. It proves that Murdoch is sticking with the old model of how news and information is disseminated, and doesn't plan to change it. The problem is, things don't work the way they used to any more. Sometimes, a visitor will come to a news site or a blog and won't even know where he is; he might think he's still on Facebook or MySpace. And he won't be interested in anything on the site except that tiny bit of information that made him click on the link. Sometimes, the conversation will develop around your article, but not on your site; it may develop on Twitter or Digg. As a site owner, you have to adapt to this. If you plan to just ditch all these visitors, claiming they're all worthless, you might end up with an empty auditorium.

Here's the most important bit, in which Murdoch replies to whether he plans to block sites from being seen by search engines:

"I think we will. But that's when we'll start charging. We do it already with the Wall Street Journal. We have a wall, but it's not right to the ceiling. You can get the first paragraph of any story but if you're not a paying subscriber to WSJ.com, you get a paragraph and a subscription form."

I honestly can't understand what his plan is here. If he plans to charge for websites, why hide them from the search engines? If you can't actually read the content without paying, then making the content at least partly accessible to Google and other search engines can't hurt? In fact, the WSJ that he mentions as an example isn't hidden from Google's indexes; you can easily find Wall Street Journal articles via Google.

This is just one part of the quite lengthy interview, but it all boils down to this: Mr. Murdoch is not ready to accept any of the changes brought forth by the Internet and the social media movement. Moreover, he doesn't seem to understand how some parts of it work. He's got the manpower to announce a war, but I'm afraid his army will be fighting windmills.

Reviews: Digg, Facebook, Google, MySpace, Twitter
Tags: News Corp, rupert murdoch

Submitted at 11/8/2009 8:07:00 AM

A security guard checks my driver's license as I drive into the entrance to Moffett Field, a disused naval airbase that hosts the nascent Singularity University.
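On the mechanics behind the Murdoch story above: making a site "invisible to Google's search engine" is ordinarily done declaratively by the publisher, in a robots.txt file at the site root. The sketch below is purely illustrative and assumes a hypothetical site; it is not News Corp's actual configuration:

```
# Hypothetical robots.txt served at http://www.example-news-site.com/robots.txt
# Ask Google's crawler to stay out of the entire site:
User-agent: Googlebot
Disallow: /

# Leave all other crawlers unrestricted:
User-agent: *
Disallow:
```

A per-page alternative is a `<meta name="robots" content="noindex">` tag, which asks compliant search engines to drop that page from their index while still allowing it to be crawled.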
Singularity University, Day One: Infinite, In All Directions
By Ted Greenwald (Wired Top Stories)

Night has fallen, but it still feels like entering a top-secret installation out of a James Bond movie, crowned by strange domed buildings and adorned by sculptures of airships.

GetDeb.net Repository Makes Newer Ubuntu Apps Easily Available [Linux]
By Kevin Purdy (Lifehacker)
Submitted at 11/9/2009 4:30:00

… can still grab packages at GetDeb's legacy website. GetDeb.net V2 Beta [via I'm Just an Avatar]

WAYS continued from page 18:
…
7. Consider Whether to Link Your Profiles.
More business resources from Mashable:
- 5 Advanced Social Media Marketing Strategies for Small Business
- Top 5 Business Blogging Mistakes and How to Avoid Them
- 10 of the Best Social Media Tools for Entrepreneurs
- 6 Must-Follow Steps for Selling in Any Economy
- 5 Easy Social Media Wins for Your Small Business
Tags: business, linkedin, Lists

What, Exactly, Is a 'Cop-Killer' Gun?
By Nathan Hodge (Wired Top Stories)
Submitted at 11/9/2009 6:26:00 AM

News reports on the Fort Hood rampage say that the alleged shooter, Maj. Nidal Hasan, used an FN Herstal Five-Seven pistol — described in some reports as a 'cop killer' gun. What, exactly, makes the Five-Seven different from other handguns?

Is the Magic Mouse a dog?
By Mel Martin (The Unofficial Apple Weblog (TUAW))
Submitted at 11/8/2009 1:30:00 PM
Filed under: Analysis / Opinion, Hardware, Peripherals, Bad Apple

For some Magic Mouse users, the streamlined human interface device is not only a dog, but a dog that pees on the carpet, smells bad, and barks continuously. Apple support boards are beginning to fill up with complaints about tracking issues and Bluetooth disconnects. There are also complaints about the lack of a third mouse button, and some all-too-early hardware failures.
I liked the Magic Mouse when I saw it at my local Apple Store, so I took one home for my Mac Pro. It seemed to work for a while, but now it is very erratic at tracking and speed, even when MouseZoom is installed. Its Bluetooth connection has dropped several times, and it either comes back after a long wait or simply fails to connect again. When I moved back to my wired Apple mouse, I found that I actually preferred the form factor of the Magic Mouse, and I missed the button-less scroll wheel.

The Magic Mouse seems to be working fine for many users, but there are some hints that the little rodents are having trouble with some older hardware. My 2006 Intel-based Mac Pro may be one of the computers at issue. Apple will hopefully issue a software update, if that is the problem. In my case, the only magic I'm going to see from the Magic Mouse is when it disappears from my desktop. How is it going for you?

TUAW Is the Magic Mouse a dog? originally appeared on The Unofficial Apple Weblog (TUAW) on Sun, 08 Nov 2009 13:30:00 EST.

Steve Jobs, the moral high ground, and the return to Apple
By Steven Sande (The Unofficial Apple Weblog (TUAW))
Submitted at 11/9/2009 9:00:00 AM
Filed under: Steve Jobs, Apple History

Jesus Diaz over at Gizmodo had a fascinating exposé in a post late last week that provided a look into some of the thinking of Steve Jobs back in 1997. As Diaz relates, … Diaz went on to conjecture that it was more than decision-making that went into Steve's refusal to push his way back into power; it was love. As Diaz notes, "Steve wanted to be wanted. He knew he was loved by the public and the press. After all, everyone likes the story of a legend coming back, to see him succeed or, better yet for Hollywood drama, fail. More importantly, the company was his company. He didn't have to buy it! That was absolutely preposterous, he probably thought at the time. He knew he was going to return as King once again, acclaimed by his troops and his people, so why spend any money?"

Since his return to Apple, Steve Jobs has, of course, brought the company from the brink of extinction into profitability and recognition. Whether or not he would have been equally successful as a result of a hostile takeover is a great plot for an alternative-universe sci-fi novel, but it adds a lot to the legend of Steve Jobs to know that he was able to regain control of the company through a combination of connections, persuasion, and his love for his company. The rest is history. As Ellison stated in an interview in Fortune, … [via Digg]

TUAW Steve Jobs, the moral high ground, and the return to Apple originally appeared on The Unofficial Apple Weblog (TUAW) on Mon, 09 Nov 2009 09:00:00 EST.

Microsoft releases Exchange 2010, acquires Teamprise (CNET News.com)
Submitted at 11/9/2009 7:45:07 AM

Microsoft made two enterprise moves on Monday, one expected and the other a bit of a surprise. As promised, the company used its TechEd event in Berlin to release Exchange 2010, the latest version of its e-mail and calendar server software. Microsoft finalized the code for the product last month and had said it would launch at TechEd. …

Happy Birthday, Firefox
By Stan Schroeder (Mashable!)
Submitted at 11/9/2009 1:49:51 AM

Originally an experimental branch of the Mozilla project, a new web browser was launched on November 9, 2004: Firefox 1.0. Its aim was to reduce Mozilla's bloat (if you remember those early days, the Mozilla Suite consisted of a web browser, mail client, news reader, irc client; it even had a web page creator called Mozilla Composer), and it was an instant hit among users and developers alike.

Five years later, Firefox holds a quarter of the browser market, and while technically not being the most popular (Microsoft's Internet Explorer still clings to that honor), it's definitely the most prominent browser, with thousands of plugins (add-ons, they're called), a busy developer community, and over 330 million users.

As for the future, Christopher Blizzard over at hacks.mozilla.org has some idea about that; it can be summed up in three words: privacy, video, and mobile. From the blog post: "Over the next five years everyone can expect that the browser should take part in a few new areas – to act as the user agent it should be. Issues around data, privacy and identity loom large. You will see the values of Mozilla's public benefit mission reflected in our product choices in these areas to make users safer and help them understand what it means to share data with web sites. Expect to see big changes in the video space. HTML5-based video and open video codecs are starting to appear on the web as web developers make individual choices to support a standards-based, royalty-free approach. Expect to see changes in the expectations around the licensing of codecs. And over the next five years mobile will play an increasingly important role in our lives, and in the future of the web. The decisions of users, carriers, governments and the people who build phones will have far-reaching effects on this new extension to the Internet and how people will access information for decades to come."

Mozilla plans to celebrate this milestone by throwing parties (oh no, not that again) around the world. The campaign is called "Light the World with Firefox", and it will include shining the Firefox logo in cities such as Paris, Tokyo, Rome and San Francisco. Find out more at …n-US/.

Reviews: Firefox
Tags: birthday, Firefox

archive.org's S3-alike service?
(Scripting News)
Submitted at 11/8/2009 9:04:55 AM
…

I've Got Nothing: Crowdsourced Song Created by YouTubers [VIDEO]
By Stan Schroeder (Mashable!)
Submitted at 11/9/2009 4:21:20 AM
…
Reviews: YouTube
Tags: viral video, youtube

Twitter and Penguins: How the San Francisco Zoo Uses Twitter [VIDEO]
By Ben Parr (Mashable!)
Submitted at 11/8/2009 10:59:53 PM

We know there are a lot of interesting and unique uses for Twitter. We've seen Twitter used for customer service, tweets to monitor power usage, and even 140 character marriage proposals, but we never thought about it being used to quickly respond to incidents such as a kid being bit by an otter.

Earlier today, a group of Twitter enthusiasts (including me) gathered at the San Francisco Zoo for a zoo tweetup. While in most respects it was your standard gathering of Twitter nerds with phones tweeting and Twitpics flying, the tweetup was unique because of the involvement of the zoo via Twitter. After seeing initial tweets about the upcoming event, the zoo provided anybody who came to the tweetup with a discount, a penguin encounter (videos below), and even access to the normally off-limits Avian Conservation Center.

While this is a great example of using Twitter to reach out to and please customers, it isn't the only way that the San Francisco Zoo (@SFZoo) utilizes Twitter. In the 3 minute clip embedded below, animal keeper Anthony Brown discusses some of the unique stories of how Twitter has helped improve the zoo, including how it has helped find lost phones and even how the zoo responded to a kid being bit by one of the river otters after finding out about the incident via Twitter. Oh, and don't forget about the two penguins huddling around him.

Social media, once a phenomenon localized to early adopters, has quickly spread into nearly every channel. The fact that a zoo is using Twitter to interact with its visitors and to know about what's happening on its grounds in real-time is just another amazing example of the power that social tools provide us. Here's Anthony's explanation of how the zoo uses Twitter. Enjoy!

BONUS: The Penguins of San Francisco Zoo. We did visit a zoo. It would be a shame if we didn't embed at least one clip exclusively about cute animals:

Reviews: Twitter
Tags: penguin, San Francisco Zoo, San Francisco-San Jose, twitter, zoo

Gallery: 'The Insider' in Times Square (ETonline - Breaking News)
Submitted at 11/9/2009 6:17:00 AM

New pics! TV show "The Insider" takes over Times Square with a line-up of celebrity guests for the latest entertainment news with opposing views! See exclusive behind-the-scenes pics right here, then tune in to "The Insider" on TV all this week to see the fireworks in Manhattan!

TUAW Review and giveaway: Blur Tripod and app for iPhone
By Steven Sande (The Unofficial Apple Weblog (TUAW))
Submitted at 11/9/2009 8:00:00 AM
Filed under: Accessories, …

… taking a photo after a delay or taking several photos in quick succession. …

Continue reading TUAW Review and giveaway: Blur Tripod and app for iPhone

TUAW Review and giveaway: Blur Tripod and app for iPhone originally appeared on The Unofficial Apple Weblog (TUAW) on Mon, 09 Nov 2009 08:00:00 EST.

BREAKING: EA Acquires Facebook Game Maker Playfish For Up to $400 Million
By Ben Parr (Mashable!)
Submitted at 11/9/2009 7:14:27 AM

Google may lose WSJ, News Corp sites (CNET News.com)
Submitted at 11/9/2009 7:43:00 AM

Rupert Murdoch is threatening to pull his content from Google.
Is this a bluff? (Credit: Dan Farber/CNET)

Rupert Murdoch, the media tycoon who has long accused Google of ripping off content from his newspapers, says his sites may soon disappear from the search engine's listings. Murdoch, who is chairman of News Corp., the newspaper, TV and Internet empire that includes The Wall Street Journal, Hulu, The New York Post, and 20th Century Fox, made his comments during an interview with Sky News Australia. … much action. Is News Corp trying to scare Google into making more concessions? Or is News Corp just afraid to pull the trigger?

… Reviews: Facebook, MySpace, pet society, video
Tags: acquisition, EA, electronic arts, facebook, gaming, playfish, social gaming, Zynga

4 people found shot to death in rural Texas home (AP) (Yahoo! News: U.S. News)
Submitted at 11/9/2009 7:38:22 AM

Anger Against Red Light And Speed Cameras Going Mainstream
By Mike Masnick (Techdirt)
Submitted at 11/9/2009 7:31:00 AM

A bunch of folks have submitted this recent Washington Post article about the growing anger and resentment towards red light and speed cameras. We've posted similar articles in the past, but this is one of the first times I've seen the topic discussed in a major mainstream paper. The discussion basically hits on all the high points, showing that people really hate the devices and that the reason they're so popular is not safety, but revenue. It also looks at the stats, talking about a few different studies. It does mention one study claiming that the cameras have decreased accidents and fatalities, but then notes numerous other studies that disagree, and digs into the details of the original study to find that it does not account for multiple other factors. At best, the studies seem to indicate that red light and speed cameras do not decrease accident rates (in one damning study, a town that got rid of its cameras saw a bigger decrease in accidents than a neighboring town that installed them). In the end, it's quite clear that the cameras are entirely about money, and have nothing to do with safety -- and it's nice to see more people recognizing this issue.

Worm Hits Jailbroken iPhones; Apple Plans Jailbreak Crackdown [Security]
By Kevin Purdy (Lifehacker)
Submitted at 11/9/2009 6:00:00 AM

A team … working on boot-up, cryptography, partitioning, security threats, and other areas that seem to have a fairly strong anti-jailbreaking theme in common. [Sophos via Gizmodo]

Get a Quirky Beamer for your iPhone (hint: it's not a car)
By Steven Sande (The Unofficial Apple Weblog (TUAW))
Submitted at 11/9/2009 10:00:00 AM
Filed under: Accessories, iPhone

While looking at the headline for this post, you might think that we're talking about an oddly-painted … product. … it is put into production. … If you're looking for a way to flash your friends and not get arrested, the Quirky Beamer might just be the answer. [Thanks for the tip, Chris T.]

TUAW Get a Quirky Beamer for your iPhone (hint: it's not a car) originally appeared on The Unofficial Apple Weblog (TUAW) on Mon, 09 Nov 2009 10:00:00 EST.

Brenthaven: The best computer backpack I've ever seen
By David Winograd (The Unofficial Apple Weblog (TUAW))
Submitted at 11/8/2009 5:00:00 PM
Filed under: Accessories, Reviews

The TUAW gang have been searching for great holiday gift ideas and I think I've found one that's been right under my nose for five years. When I bought my brand new PowerBook G4 17" in 2004, I splurged and bought a Brenthaven backpack for it. Back then it cost a good deal more than the usual backpack -- around $75 -- but I thought it would be worth it since I lugged around my PowerBook nearly every day and it looked like the Brenthaven provided better padding than the competition.

Since then, the backpack has housed a succession of three 17" PowerBooks and MacBook Pros under very heavy use. The amazing part is that outside of being a bit dusty, it's in just as good shape as the day I bought it. No frayed stitching, no stuck zippers, no torn dividers. There is no sign of wear and/or tear whatsoever. And if there was, or ever will be, all Brenthaven bags come with a lifetime warranty.

Continue reading Brenthaven: The best computer backpack I've ever seen

TUAW Brenthaven: The best computer backpack I've ever seen originally appeared on The Unofficial Apple Weblog (TUAW) on Sun, 08 Nov 2009 17:00:00 EST.

'The Insider' Invades Times Square in New York (ETonline - Breaking News)
Submitted at 11/9/2009 5:18:00 AM

TV show "The Insider" hopped coasts this week, traveling east from their home studio in Hollywood. Host Lara Spencer and correspondent Chris Jacobs are taking over the New York landmark of Times Square. Guest panelists, such as the headline-making Jon Gosselin and Levi Johnston, along with Niecy Nash and Star Jones, join "The Insider" in the Big Apple. Tune in to "The Insider" all this week to get your latest entertainment news with opposing views from the heart of Manhattan.

YouTube with all of the sizzle but none of the Flash
By TJ Luoma (The Unofficial Apple Weblog (TUAW))
Submitted at 11/8/2009 9:15:00 PM
Filed under: Multimedia

[Our regular Sunday night Talkcast is cancelled due to a sick host. Sorry, and we'll see you next week. -Ed.]

Let's face it: Flash on the Mac is a dog. Actually, that's an insult to dogs, which are known for running fast. Flash for Mac is such an unoptimized beast that you can expect it will suck up as much CPU as possible, even for the simplest of videos.

My first line of defense is ClickToFlash (which I've mentioned before), but the folks over at NeoSmart have another solution, at least for YouTube: HTML5. By using the newest version of HTML, they have devised a system to send YouTube videos directly to any MP4 decoder on your computer. Simply go to their custom web page and paste the YouTube URL into the field. In a moment you will be presented with a clean window showing you the video, as well as a download link for the MP4 version.

They also have a Greasemonkey/UserScript available which will add a link to all YouTube pages. That's nice, but what I was really looking for was a bookmarklet I could keep in my Bookmarks Bar and just click on when I was on a YouTube page. I didn't find one, so I made one. Drag (don't click!) this link to your Bookmarks Bar: FlashFree YouTube and you can easily access the NeoSmart/HTML5 version.

How does it work? Superbly well. I tested it using Safari, and watching a YouTube video through NeoSmart had no noticeable impact on my CPU at all. I've nearly given up hope for a version of Flash for Mac that doesn't stink. Until then, ClickToFlash and NeoSmart's HTML5 YouTube are a great combination to make your web surfing more enjoyable.

TUAW YouTube with all of the sizzle but none of the Flash originally appeared on The Unofficial Apple Weblog (TUAW) on Sun, 08 Nov 2009 21:15:00 EST.

Gigabyte Fixes Windows 7/iPhone Sync Issues
By Christina Warren (Mashable!)
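Returning to the TUAW "YouTube with all of the sizzle but none of the Flash" item above: a bookmarklet of the kind the author describes is just a `javascript:` URL kept in the bookmarks bar. The sketch below shows only the mechanics; the viewer endpoint (`example.com/html5video/`) is a hypothetical stand-in, since the post does not spell out NeoSmart's actual URL.

```javascript
// Sketch of a "Flash-free YouTube" bookmarklet. The hand-off endpoint below
// is HYPOTHETICAL (example.com); substitute the real HTML5 viewer's URL.

// Extract the YouTube video id from a watch-page URL, or null if absent.
function videoId(pageUrl) {
  var m = pageUrl.match(/[?&]v=([\w-]+)/);
  return m ? m[1] : null;
}

// Build the hand-off URL for the (assumed) HTML5 viewer; fall back to the
// original page when no video id is found.
function viewerUrl(pageUrl) {
  var id = videoId(pageUrl);
  return id ? "https://example.com/html5video/?v=" + id : pageUrl;
}

// The bookmarklet itself: a javascript: URL that redirects the current
// YouTube watch page to the viewer when clicked from the bookmarks bar.
var bookmarklet =
  "javascript:(function(){var m=location.href.match(/[?&]v=([\\w-]+)/);" +
  "if(m){location.href='https://example.com/html5video/?v='+m[1];}})();";
```

Dragging a link whose `href` is the `bookmarklet` string to the bookmarks bar gives one-click hand-off from any YouTube watch page, which matches the "drag, don't click" instruction in the post.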
Submitted at 11/9/2009 7:01:05 AM
Reviews: Windows 7
Tags: gigabyte, iphone, itunes, p55, Windows 7

Verizon's iPhone insults have only just begun (CNET News.com)

It seems as if Verizon Droid's avowedly male positioning will now include finger-pointing, high-pitched taunts, and echoes of "na-na-nana-na". After revealing that Verizon has placed the iPhone on the Island of Misfit Toys, Ad Age is reporting that in the next Droid ad, the iPhone will be the subject of another touching description. Apparently, the ad says the Droid "swaps semi-functional, giggling-brat-vanity for a bare knuckle bucket of does." Oh, yes, the Droid is flexing its youthful muscles. (Credit: CC Oakley Originals/Flickr)

One can never have enough buckets of does in this complex life. And it is refreshing to see someone spending $100 million in an attempt to take on the prom queen of cell phones. However, these ads heap pressure on the Droid to perform as a phone and, indeed, as an item to be seen with. Functionality can only take one so far. Somehow, I recall General Motors being the brand of supposed functionality. And that didn't quite, well, function for the company as things turned out.

Yast Tracks and Logs Time Spent on Projects and Tasks [Time Tracking]
By Jason Fitzpatrick (Lifehacker)
Submitted at 11/9/2009 6:29:33 AM
…

Disney Channel Orders Second Season of Jonas
By Bill Gorman (TVbytheNumbers)
Submitted at 11/8/2009 10:48:08 PM

The Jonas Brothers are keepers at Disney Channel. The cable channel has ordered a second season of the pop stars' comedy series "Jonas" and has tapped new executive producers: showrunner Lester Lewis and director Paul Hoen. [...] Production on Season 2 of "Jonas" is slated to begin in February for a premiere in the middle of next year. via THR.com.

And Now, Tulipa 'Ayaan' (Little Green Footballs)
Submitted at 11/8/2009 3:23:42 PM

If you were wondering what Ayaan Hirsi Ali's been up to lately, the Dutch flower bulb sector just named a tulip after her: Tulip named after activist Ayaan Hirsi Ali.

NEW YORK, November 6, 2009: In a ceremony on November 4 at the Metropolitan Museum of Art the Dutch flower bulb sector named a tulip after Ayaan Hirsi Ali 'Ayaan' in recognition of Ms. Hirsi Ali's defense of freedom and human rights for Muslim women. Ms. Hirsi Ali, a feminist, author, activist and former member of the Dutch parliament completed the traditional naming ceremony by drizzling champagne over the bulbs and declaring, "From now on this Tulip has the name 'Ayaan'." Mr. Thijs Leenders, president of the North American Flower Bulb Wholesalers Association represented the Dutch bulb sector in the event. Also in attendance was Mr. Wim Pijbes, director of Amsterdam's Rijksmuseum. …

Tulipa 'Ayaan' is a dark maroon/brown/purple Triumph tulip hybridized by Ms. Lydia Boots, of Lybo Hybridizing, Hem, NL. Ms Boots is one of few women working in the male dominated world of tulip hybridizing. The hybrid is a crossing of Tulipa 'Gavota' x a sport of Tulipa 'Gander's Rhapsody'. The new deep purple tulip joins a group of tulips known in Holland as "black" tulips. The legendary quest of Dutch hybridizers to create a truly black tulip goes back centuries. Many experts consider Tulipa 'Ayaan' one of the finest and most successful results of this quest. The selection of the flower to be named in her honor was made by Ms. Hirsi Ali herself. Upon receipt of the signed christening certificate, the tulip will be entered into the Classified List and International Register of Tulip Names, maintained by the Royal Dutch Bulb Growers Association (KAVB) in Hillegom, the Netherlands.

Dancing in the Street (ETonline - Breaking News)
Submitted at 11/9/2009 4:00:00 AM

"Dancing with the Stars" pro dancers Cheryl Burke and Tony Dovolani were dancing down Main St. USA in Disneyland. The duo was joined by Anika Noni Rose, who voices the character Tiana in Disney's upcoming animated film 'The Princess and the Frog.' The performers were filming a New Orleans themed musical routine on Sunday. While the dancers showed off their fancy footwork, Stevie Wonder danced his fingers across the piano, playing the holiday classic "That's What Christmas Means to Me." Daughter Aisha Morris and Minnie Mouse accompanied him for the carol. Both performances will be featured in the 2009 Christmas Day Parade, which will be broadcast on ABC on December 25. Selena Gomez also got in the holiday spirit this weekend.

YouTube XL Supersizes YouTube for Your TV [YouTube]
By Jason Fitzpatrick (Lifehacker)
Submitted at 11/9/2009 6:30:00 AM

Many … 20 feet across your living room. You can quickly navigate to spotlighted, top rated, and most viewed categories. Clicking on "More..." also gives you quick access to rising videos, most favorites, most discussed, and … You can also set the next video in your search results or viewing category to start playing when … on from the comfort of your living room? Let's hear about it in the comments. YouTube XL [via AskTheAdmin]

Selling Homes and Scrapbooking: A Compact and Organized Office [Featured Workspace]
By Jason Fitzpatrick (Lifehacker)
Submitted at 11/8/2009 4:00:00 PM

What … photo. Selling Homes and Scrapbooking: A Compact and Organized Office [Lifehacker Workspace Show and Tell Pool]

Create a Simple DIY E-Commerce Site [Selling]
By Kevin Purdy (Lifehacker)
Submitted at 11/9/2009 5:30:00 AM

… friendly, … cachet than eBay or Craigslist. The security and credit card processing is handled by PayPal, while you're in charge of explaining why someone would buy from you. Hit the link for the very helpful run-through. Got an equally easy system for an e-commerce site? If your job doesn't involve sending out endless amounts of email about it, tell us in the comments. How to set up an e-commerce site using PayPal to process … [Ars Technica]

Review: Bored to Death - Take a Dive (season finale)
By Jonathan Toomey (TV Squad)
Submitted at 11/8/2009 10:01:00 PM

… Continue reading Review: Bored to Death - Take a Dive (season finale). Filed under: Other Comedy Shows, OpEd, Episode Reviews, Reality-Free

Life360 Protects Your Family & Property Via Web, Mobile, & More
By Jolie O'Dell (ReadWriteWeb)
Submitted at 11/8/2009 9:18:19 PM

Life360 is often described as an "OnStar for life," providing its users with tools to track and protect people and things through a variety of interfaces. The company offers IRL services such as child identification paraphernalia, medical IDs, and credit and identity protection; but they also have a cool suite of features that revolve around Internet and mobile tracking of people, objects, and even pets. Their Android application for tracking and securely messaging people even netted them a seed round from Google.

The concept for the company, which was founded in the wake of Hurricane Katrina, revolves around disaster preparedness and emergency messaging. Currently, the available features include an emergency messenger that uses email, web, SMS, and phone to get messages through to emergency contacts; a thorough, web-enabled ID service that gives first responders instant access to critical information; a service for cataloging and tracking valuable items via coded tags; and identity protection services.

The mobile tracking feature - which got the company a $300,000 investment from Google - allows users to locate family members using the web interface or the mobile application. Custom privacy settings allow users to find loved ones in an emergency, check their locations, see their statuses, and retrace their previous locations. While the company states this will not make family members feel stalked, we see this app as Cheaters fodder as well as a great way to keep track of the ones you care about most during times of crisis.

The Android app allows users to access all their Life360 services from their phones. Right now, Android devices are supported, with a BlackBerry app coming soon and an iPhone app stuck in App Store purgatory.

Another "coming soon" service we thought was cool - and also excellent Cheaters fodder - is a GPS-enabled tracking dongle that can be thrown in a bag, duct-taped to the underside of a car, tossed onto a pet's collar, stapled to a child - you name it. Life360 founder Chris Hulls told us in an email that he hopes to roll out the hardware within the next six months. "There will be an additional fee, probably in the neighborhood of $100 for the device and $10 per month for each tracked person," he said.

Some other GPS- and mobile-enabled features Hulls plans to release within the next year are a Curfew 2.0 app, a check-in system for "distributed" families to touch base, and customized alerts for emergency notifications in a user's specific location.

Discuss

Photoshop.com Mobile Fixes Photos on Smartphones [Downloads]
By Kevin Purdy (Lifehacker)
Submitted at 11/9/2009 7:00:00 AM

iPhone/Android/Windows Mobile: Photoshop.com, the online home of Adobe's market-leading image editor, has released a native photo editor and photo uploader for Android phones, and it's a fairly versatile solution for fixing or offloading images while you're out and about.

On Android phones, photos can be cropped, rotated, resized, and adjusted for saturation, exposure, and tint, as well as have a soft focus or black & white effect applied. Multiple effects can be undone and re-done, and if you're concerned about making a bad choice, you can upload any pic to a Photoshop.com account first. The iPhone version has a few effects and pre-set color changes—along with multi-touch functionality, of course—and the Android version features a non-multi-touch straightening tool, but the two are basically the same app in different shells. All versions are fairly robust for mobile editing apps, and do a good job of correcting the most notable failings of mobile lenses. Photoshop.com Mobile is a free download for iPhones, Android, or Windows Mobile devices. Photoshop.com Mobile [via DownloadSquad]

Radical imam praises alleged Fort Hood shooter (AP) (Yahoo! News: U.S. News)
Submitted at 11/9/2009 7:37:55 AM

Check Out the Companies That Make ReadWriteWeb Possible
By Admin (ReadWriteWeb)
Submitted at 11/8/2009 11:14:31 AM

Our mission at ReadWriteWeb is to explore the latest Web technology products and trends. We're fortunate to have a great group of sponsors who support this goal. So, once a week, we write a post about them; about who they are, what they do, and what they've been up to lately. Pay them a visit or tweet them a "Thank you" (see link below each sponsor) to show your appreciation for their sponsorship of this site. You can also start following some or all of our sponsors on Twitter with a few clicks on this TweepML page.

Interested in being a ReadWriteWeb sponsor? ReadWriteWeb is one of the most popular blogs in the world and is read by a sophisticated audience of thought leaders and decision-makers. We have several innovative new features in our sponsor packages that we'd love to tell you about. Email our COO Bernard Lunn for all the details.

Ready to learn more about the smart companies that support this site you love to read? Read on... Skip to info about: Mashery: API management services | Rackspace: cloud computing experts | Aplus.net: Web hosting | Crowd Science: demographic data | Hakia: semantic search | Domain.ME: .me domain registrar | Codero: Managed hosting | Groupsite: Social collaboration | NaviSite: Managed hosting | Search Engine Strategies: Conference | MyDomain.com: Domain registrar | Backupify: Online backup | LeapFish: Personalized home page | Media Temple and SixApart: our hosts and blogging software

Crowd Science
Crowd Science gives online publishers reports on the demographics and attitudes of their audience. We at ReadWriteWeb have signed up to this new service, because demographic data is something we've struggled to get in the past. It's important for any online business to know their audience, so Crowd Science is a welcome addition to the stats armory that most of us in the Internet biz use. Sign up to get demographic data from Crowd Science. Thank Crowd Science on Twitter for making ReadWriteWeb possible.

Mashery
Mashery is a platform for Web services, allowing companies to manage their APIs using Mashery's expertise. At the "Business of APIs" conference, Mashery CEO Oren Michels explained to the audience that while APIs are a technology, their use is a business decision. He went on to say that Mashery has helped customers such as …

… level. Thank Hakia on Twitter for making ReadWriteWeb possible.

CHECK continued on page 34

New iPhone Worm: How Worried Should We Be?
By Sarah Perez (ReadWriteWeb)
Submitted at 11/9/2009 6:24:32 AM
Numerous reports have surfaced over the weekend regarding the first iPhone worm spotted in the wild. The worm, known as iKee, only affects modified handsets, also known as "jailbroken" devices. These devices have been hacked by their owners to allow for the installation of unapproved, third-party programs that aren't allowed in the iTunes App Store.
Currently, the worm doesn't appear to be all that malicious - it simply changes the phone's background image to a photo of singer Rick Astley, the man whose song "Never Gonna Give You Up" has become a well-known internet meme called "rickrolling," a joke where users are tricked into clicking links that redirect them to Astley's YouTube video.
Despite the relatively innocuous nature of this particular attack, it may be the precursor to future attacks of a more malicious nature. But how dangerous will these attacks be to the iPhone-owning population as a whole? Is there really a need for concern?
About the iKee Worm
According to the hacker, 21-year-old Ashley Towns, a student living in New South Wales, Australia, iKee was created to highlight the iPhone's poor security. Apparently unrepentant about his creation, Towns has made no attempt to hide his identity, posting on internet forums and on his Twitter page about his hack. He even cheekily tweets a response to a post on security firm Sophos' blog where the writer had sought out the hacker's identity via Google searches: "You know man if you wanted my number you could have asked." And he wasn't kidding - Towns has been happily responding to media requests via his Twitter account. For example, he told ABC News that he had personally infected 100 iPhones with the worm. From those phones, he explained, the worm will then try to spread to other devices.
Perhaps the reason for his transparency has to do with the relatively harmless nature of the attack. The worm just changes the iPhone wallpaper on the affected devices. However, as the Sophos post points out, "accessing someone else's computing device and changing their data without permission is an offence in many countries." While that may be true, it's clear that Towns feels as if he's almost doing a public service by exposing a security vulnerability that many jailbroken iPhones face.
More Hacks Expected?
While this particular worm appears to be localized to Australia, it could have spread to other countries and eventually, worldwide. It also comes directly on the heels of another similar attack on jailbroken devices. Only last week, a Dutch hacker broke into jailbroken iPhones and then displayed a message on the compromised devices demanding a ransom of 5 Euros. This attack was also made possible through the same vulnerability that the iKee worm uses.
Graham Cluley of Sophos predicts that other hackers will be tempted to write their own code now that they've seen what's possible. In addition, some hackers may be more malicious with their creations than what we've seen so far.
But Who is Really Being Affected?
However, even if the attacks escalate, the fact of the matter is that the potential victims are a minor subset of Apple iPhone users. To begin with, they're relatively tech-savvy to have managed to jailbreak their phones in the first place - a process which involves using downloadable software tools that unlock Apple's control mechanisms on the device. While not overly complex, most mainstream iPhone users won't bother to take this action, content with the iTunes App Store and its 100,000 or so available applications.
And then there is the fact that the attacks don't even affect all jailbroken iPhone owners - they only affect those who have also installed a program called SSH on their devices. The program allows users to access the iPhone's filesystem with the username of "root" and password of "alpine." Since few SSH users had bothered to change this root password, that left their phones open to attack.
Still, how many people are we talking about here? And what sort of iPhone user are they? Although exact numbers of jailbreakers are unknown, mobile analytics firm Pinch Media recently revealed data showing there are at least 4 million of these jailbroken devices in the iPhone ecosystem. It's not known how many of these users have also installed SSH.
For the most part, it's likely that those who have done so are knowledgeable enough to prevent future attacks on their devices even if they had become a victim of one of these recent hacks. At the very least, they're now aware of the issue and can follow the straightforward instructions available on the web that explain how to change the root password so it's no longer the default.
More Dangerous than the iPhone Worm: Dishonest Developers
Despite all the media hoopla over this "first iPhone worm," it's not something that most iPhone owners will have to worry about. What's more concerning are the claims that a supposedly legitimate iPhone development firm has been collecting personally identifiable information from the users of its App Store-approved iPhone games, which have been installed over 20 million times. According to a suit filed in the U.S. District Court in Northern California, the firm, Storm8, has been using a backdoor method which allowed them to collect the phone numbers of anyone who had installed their applications. This wouldn't be the first time that an iPhone developer has done this, either. Apple actually provides an easy way for developers to tap into this information, if they so desire. If anything, this is the real threat that the media should be focused on, not the iPhone worm.
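The fix alluded to above - changing the well-known default password on a jailbroken, SSH-equipped iPhone - amounts to a short terminal session. A minimal sketch, assuming a hypothetical device address; the exact steps vary by jailbreak:

```shell
# Sketch only, not an official procedure. On a jailbroken iPhone with
# OpenSSH installed, both built-in accounts ship with the default
# password "alpine". 192.168.1.20 is a made-up device IP address.
ssh root@192.168.1.20   # log in with the default password "alpine"
passwd                  # set a new password for root when prompted
passwd mobile           # the "mobile" user shares the default; change it too
exit
```

With the defaults changed, the credential-guessing vector that both iKee and the Dutch ransom hack relied on is closed.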
5 Years On: ReadWriteWeb's 2004 Interview With Tim O'Reilly
By Richard MacManus (ReadWriteWeb)
Submitted at 11/9/2009 1:53:59 AM
Five years ago I interviewed tech publisher Tim O'Reilly about a new term that his company had just coined: Web 2.0. The first Web 2.0 conference had been held the previous month, October 2004, and O'Reilly had graciously agreed to give an interview to yours truly - "an unknown blogger from New Zealand," as I put it back then. The interview ran in a 3-part series (see also part 2 and part 3) and covered Web 2.0, new business models, social software and eBooks.
I've always been a big believer in learning from history as we look to the future. So let's re-visit this interview from five years ago and see how prescient the father of Web 2.0 was.
Microsoft and Web 2.0
In 2004 the leading Web 2.0 companies were Google, Yahoo! and Amazon. But what of the dominant software company of the previous generation, Microsoft? I asked Tim O'Reilly back in November 2004 whether Microsoft's core strategy of software lock-in would survive in web 2.0.
O'Reilly argued that Microsoft would have to change: "I think that the business of Microsoft, the company of Microsoft, is going to continue to succeed. But I think the business model of Microsoft is going to have to change."
This has turned out to be the case. Over the past 5 years, Microsoft has slowly rolled out a "software plus services" strategy under the catch-all phrase 'Live.' While the Windows OS and desktop software such as Office continue to be Microsoft's mainstay products, some of the functionality gradually moved into the cloud - e.g. syncing over devices. Vista, the current generation of Windows, began that transition. In 2009, Microsoft is even taking steps to put Office online.
With the benefit of hindsight, I think O'Reilly nailed it in 2004 with this statement: "Microsoft will continue to dominate on the PC, but the PC is going to be a smaller and smaller part of the entire business."
The Mobile Web, for one, has taken attention away from Microsoft. Which is where Apple comes in...
Apple and Web 2.0
At the inaugural 2004 Web 2.0 Conference, Apple was a no-show. In talking about Apple's position in the Web industry back then, O'Reilly said that "Apple is in a position they've been in a lot of times before. They're like Moses showing the way to the promised land, but they don't actually go there."
Although Apple never did open up, as O'Reilly foresaw, nevertheless they went on to create the most successful new gadget of the past decade: the iPhone. Apple also created a thriving iPhone app ecosystem. So in the case of the Mobile Web, Moses (a.k.a. Steve Jobs) actually did lead us to the promised land!
Facebook and Data Lock-in
Remember that Facebook had just launched in February 2004 and was confined to some selected American Universities (Harvard to Stanford, Columbia and Yale). It had yet to reach the 1 million users mark. While O'Reilly couldn't have known that Facebook would turn into the juggernaut it now is, he did accurately predict that data lock-in would become a major issue.
In 2004 I noted that "a lot of what Web 2.0 is about is users producing content and not just consuming it." I pointed to O'Reilly's own example at the time: Amazon compared to the Barnes & Noble website. However, I said that "the other side of that coin [...] is the 'data lock-in' of users, where users may not necessarily have control over their content." I asked O'Reilly if that was something for users to be concerned about. He replied, in November 2004, that "there are companies that are trying to use data lock-in as a competitive tool - and there will eventually be a recognition that this is a problem."
This has indeed happened - and data lock-in is nowhere more of a problem than on the world's most popular social network circa 2009, Facebook. Over the past few years we at ReadWriteWeb have written many articles about Facebook's "walled garden" approach to user data. Users can't take their personal data elsewhere. What's more, there have been bungled attempts to use that data for commercial means.
Conclusion
It is remarkable how much can change in the Web industry in five years. Back in 2004, Facebook was a baby and Twitter wasn't even a glint in the milkman's eye. Among the big companies of that time, Apple hadn't yet given birth to the revolutionary iPhone and Microsoft was entering its midlife crisis.
On reflection, Tim O'Reilly did extremely well in his 2004 predictions - considering how fast the Internet evolves. And I'm still grateful to him for giving an interview to an unknown New Zealand blogger. How times change...
Image credits: Niall Kennedy; Shht!; Alex Eckford

We are the People!
By Kay Kremerskothen (Flickr Blog)
About Flickr: Flickr is a revolution in photo storage, sharing and organization, making photo management an easy, natural and collaborative process. Get comments, notes, and tags on your photos, post to any blog, share and more!

CHECK continued from page 31
... centric features. Sign up and create a free Groupsite in minutes.
NaviSite
• Vast custom application development capabilities, including SOA solutions, eCommerce, and Web 2.0 applications.
• Full stack of enterprise hosting services for mid-market companies, including shared, dedicated, and complex hosting, SaaS enablement, and colocation.
• Best in class managed hosting, such as virtualization and utility computing.
Thank NaviSite on Twitter for making ReadWriteWeb possible.
Backupify
Backupify provides reliable online backup services for a range of products, including Twitter, WordPress, Facebook, Delicious, Basecamp, Google Docs, Gmail, Zoho, Flickr and Photobucket. Backups are secure, automatic and easy to set up.
Thank Backupify on Twitter for making ReadWriteWeb possible.
MyDomain.com
MyDomain is a leading ICANN-accredited provider of domain name registration and online business solutions. For over 10 years, MyDomain has offered low-cost domain names and free domain services including complete DNS management. Today, sub-$10 domains without the constant upsells you'll find at some competitors are the norm at MyDomain. MyDomain's complete range of solutions include Web hosting and VPS hosting, email, SSL Certificates and more.
Search Engine Strategies
From social media to local search to video SEO, Search Engine Strategies Chicago puts you in front of the experts who will help you sort which technologies and channels will take you to the next level.
LeapFish
The Web has evolved. It used to be a place where people came to just search for simple information. Now it's a place where people come to also share information: information that is multi-media, complex, real time and social; recommended by people who know, and people you know. LeapFish calls this new place The Living Web, and it has designed an evolved engine to help you get the most from it and a service to help you live the new Web.
Thank LeapFish on Twitter for making ReadWriteWeb possible.
Thank Media Temple and SixApart for making ReadWriteWeb possible.

Review: The Amazing Race - This Is the Worst Thing I've Ever Done in My Life
By Jackie Schnoop (TV Squad)
Submitted at 11/9/2009 1:47:00 AM
Continue reading Review: The Amazing Race - This Is the Worst Thing I've Ever Done in My Life
Filed under: OpEd, The Amazing Race, Episode Reviews

Did Google Steal Sidewiki From a Startup?
By Jolie O'Dell (ReadWriteWeb)
Submitted at 11/9/2009 12:09:18 AM
Web annotation is a sexy and increasingly crowded space in the market. As in any such pool, the amount of elbow-rubbing between individuals and similarity between products can lead to suspicion of theft.
Annotation startup Reframe It, a 14-person team, claims that Google's hot new product Sidewiki crosses the line between competitive innovation and IP infringement. And with a few Googlers caught with their hands in Reframe It's cookie jar, there might be some validity to this claim.
We first came across Reframe It about a year ago when it first launched. The company's product allowed users to "basically write comments into the margins of the Internet" and was in heavy competition with services such as Diigo and SocialBrowse. When Reframe It added Twitter and Facebook integration and received an official nod from Mozilla this past spring, Diigo remained as a serious competitor, but Reframe It had the further advantage of a stellar advisory board.
Fast-forwarding to this fall, Google launched Sidewiki in September, almost a full year after the debut of Reframe It. Looking at these demo videos back-to-back, the similarities are obvious.
For an in-depth side-by-side comparison of both apps, see Google Watch's post on the subject. The basic conclusion is that the products look similar enough that Google's source code had better be drastically different from Reframe It's if they are to avoid a major lawsuit.
But if we had a nickel for every time we spotted disgraceful similarities between web products, we'd be... Well, never mind what we'd be doing with that stack o' nickels. Here's the interesting part: Reframe It CEO Bobby Fishkin, who claims his company has neither the time nor the resources to take on tech behemoth and pop culture darling Google, told eWEEK that there were several attempts to learn and assimilate his startup's technology and interface, right ... drama. Run-of-the-mill, workaday, tech IP drama. And we look forward to following up on these reports accordingly.

Street Chic: New York
By ELLE.com (ELLE News Blog)
Submitted at 11/9/2009 4:00:00 AM
Get wrapped up in a cozy knit coat. Photo: Lee Satkowski
Think you are Street Chic? Email us your photo and you could appear in ELLE.com's Street Chic Daily. Follow ELLE on Twitter. Become our Facebook fan!

Legend of the Seeker fans, sorry, but it will be a while before the ratings are available
By Robert Seidman (TVbytheNumbers)
Submitted at 11/8/2009 10:48:01

Noticings: Geotagging Photo Game Powered by Flickr API
By Jolie O'Dell (ReadWriteWeb)
Submitted at 11/8/2009 3:07:38 PM
We recently told you about the Flickr App Garden and gave a list of five interesting apps we found using this new section of the site. One app we didn't find - and one that brilliantly appropriates the Flickr API in a delightful, infectious user experience - is Noticings. Part game, part geotagging app, part photoblog, Noticings asks users to upload geotagged photos of interesting artifacts to Flickr. Users tag the photos "noticings;" those photos are then imported, analyzed, and scored, with extra points being awarded for those who post every day in a given week, who post photos of lost objects, or who post the first pic from a certain neighborhood. It is, as the site states, "a game of noticing the world around you."
"Many of us are moving so fast through the urban landscape we don't take in the things around us," the site reads. "Noticings is a game you play by going a bit slower and having a look around you. It doesn't require you change your behavior significantly or interrupt your routine. You just take photographs of things that you think are interesting or things you see. You'll get points for just noticing things, and you might get bonuses for interesting coincidences."
We find the concept charming, a less boozy version of Foursquare, a more friendly-competitive version of Flickr or Twitpic. So, with all the other photo-sharing services out there, why choose Flickr to build a game around? It's a question of scale, according to the site. "We know other photo-sharing services are available, but we're on Flickr, so are our friends, and it really does have the best location API for the sort of thing we want to do."
At the moment, the game seems to have a small user base and a largely international one - which means this game is wide open for early-adopting Yankees to go Team America all over the place! Also, anything that gets geeks outside gets our vote. What do our readers think? Let us know in the comments, and be sure to include a link to your Noticings profile if you're playing already.

Video: 'Twilight''s Taylor Lautner Muscles Up!
(ETonline - Breaking News)
Submitted at 11/9/2009 12:09:00 AM
Werewolf hunk Taylor Lautner shows off his beefed-up physique on the December cover of Men's Health and ET's got a behind-the-scenes peek at his photo shoot for the mag. Taylor packed on 30 pounds of muscle in a year to convincingly depict Jacob Black's transformation into an almighty werewolf in 'The Twilight Saga: New Moon,' which arrives Nov. 20. And the star says he plans on bulking up even more for the third 'Twilight' film, 'Eclipse,' expected out June 30, 2010. "My character continues to grow," he tells Men's Health, "so I'd like to pack on at least a few more lean pounds."

A social namespace (Scripting News)
Submitted at 11/8/2009 9:15:38 AM
xSocial:userName -- the user's name.
xSocial:userDescription -- a string of characters describing the user.
xSocial:userLocation -- a string, the location of the user.
xSocial:userUrl -- the address of

Will the Cloud Lead Me Away From the Mac?
By Alfredo Padilla (TheAppleBlog)
Submitted at 11/9/2009 7:24:52 AM
• Twitter • Google Reader • Evernote • Google Calendar • Remember The Milk • Gmail • Facebook • WordPress • Socialcast •

John Travolta Breaks Silence after Death of Son
(ETonline - Breaking News)
Submitted at 11/9/2009 6:00:00 AM
Nearly a year after the death of their 16-year-old son Jett, John Travolta and his wife Kelly Preston say they are still struggling to cope with the tragedy. "We've been working very hard every day as a family to heal," Travolta tells USA Today. "We have our own way of doing it, and it has been helping." Jett passed away in January after suffering from a seizure at the family's Grand Bahamas home. Preston tells the newspaper their family has been receiving an "outpouring of love from, really, worldwide. It's been our friends, our family, our church. We partake in spiritual counseling pretty much daily."

Rumor Has It: Verizon iPhone in Q3 2010
By Charles Jade (TheAppleBlog)
Submitted at 11/8/2009 2:51:31 PM
Even as Verizon continues attacking AT&T's comparatively poor network with new ads, and by proxy the iPhone, the latest rumor has Apple developing a "worldmode" iPhone capable of running on any network. ... reporting puts the end of AT&T's exclusivity agreement in 2010. Unfortunately, things stop making sense right there. Actually, a smartphone without Wi-Fi in 2009 would belong on the Island
of Misfit Toys, so scratch that, but a "free" iPhone nano under contract would undoubtedly find its way under many a tree this year. The research note also states the new iPhone has a 2.8-inch screen, compared to 3.5 inches for current iPhones. AppleInsider notes rumors from last year about an iPhone with a smaller screen.

Paul Carr's piece is rubbish (and disgusting) (Scripting News)
Submitted at 11/8/2009 3:08:58 PM
Anyway... Carr's piece is rubbish, and just this once I'll take the bait. This is how TechCrunch works. They write something stupid, then people write rebuttals explaining how it's stupid, building flow and page rank. It's the same method John Dvorak explains in an interview I did with him at the Apple Store in San Francisco a couple of years ago.
Of course what the nurse at the hospital did, according to his account, was horrible. Let's say, for the sake of argument, that in addition to being a "citizen journalist" she was also a British citizen.
Of course, the movie-taker shooting the end of the life of the beautiful Iranian protestor did

Coolest thing my father did (Scripting News)
Submitted at 11/8/2009 5:56:48 PM

Critical Update Issued for Apple TV
By Charles Jade (TheAppleBlog)
Submitted at 11/8/2009 8:08:01 AM
Ten days after updating the Apple TV's software to version 3.0, Apple has released version 3.0.1 along with an alarming warning about users' content "temporarily" disappearing. From the uninformative and unintentionally hilarious support document, if you are running Apple TV 3.0 and "all of your movies, TV shows, and songs appear to be missing" or "all of your movies, TV shows, and songs appear to be present," you should update to version 3.0.1 immediately.
In a letter to unlucky Apple TV users, the Apple TV team (at least those that still have jobs) gave instructions for updating:
• Reboot your Apple TV (unplug the power cord and plug it back in)
• Select Settings > General from the main menu
• Select Update Software
• Select Download and Install
After a restart, the problem of disappearing content should be solved. That's the good news. The bad news is there are still a number of problems with the 3.x software. Philip Elmer-DeWitt at Apple 2.0 beat me to the Apple Support Forum and found 10,000 page views for the missing content discussion, as well as continuing complaints after updating to the latest version. Reported problems include the Apple TV no longer syncing with iTunes, surround sound problems, new purchases not showing up, as well as performance issues.
It appears Apple's "hobby," as the Apple TV has been described by company executives, could use a little more developer attention, not to mention a purpose besides being an iTunes Store kiosk.

TV Guide Network and TV Land Acquire Exclusive Joint Basic Cable Rights to Curb Your Enthusiasm
By Robert Seidman (TVbytheNumbers)
Submitted at 11/8/2009 10:05:15 PM
TV GUIDE NETWORK AND TV LAND ACQUIRE EXCLUSIVE JOINT BASIC CABLE RIGHTS TO AWARD-WINNING HBO COMEDY SERIES "CURB YOUR ENTHUSIASM"
Networks Jointly Acquire Multi-Platform Rights to the Hit Comedy Series Produced By and Starring "Seinfeld" Co-Creator Larry David
LOS ANGELES (November 9, 2009) - TV Land and TV Guide Network today announced that they will jointly acquire basic cable rights to "Curb Your Enthusiasm," the award-winning HBO comedy series produced by and starring "Seinfeld" co-creator Larry David. One of the most ground-breaking shows on television, "Curb Your Enthusiasm" has never before been seen on basic cable television and for the first time ever, this irreverent comedy series will be delivered to millions of new viewers by TV Guide Network and TV Land. In addition, both networks will acquire certain broadband, wireless and video-on-demand rights, bringing "Curb Your Enthusiasm" to multiple screens for fans to enjoy.
"Curb Your Enthusiasm" has recently garnered tremendous buzz and fan-following during its current season, which features a long-awaited "Seinfeld" reunion 11 years after the show's finale. Airing now on HBO, the seventh season of "Curb Your Enthusiasm" stars members of the famed "Seinfeld" cast, who recreate their sitcom characters and also play themselves during the show.
"Acquiring a series of this caliber truly underscores our commitment to defining TV Guide Network as a destination for some of the best shows on television," said Ryan O'Hara, President of TV Guide Network and TVGuide.com. "'Curb Your Enthusiasm' is our second major acquisition this year and it will become another key building block in our programming line-up for 2010 and beyond."
"We are excited to bring 'Curb Your Enthusiasm' to TV Land PRIME," stated Larry W. Jones, president, TV Land. "'Curb Your Enthusiasm' is one of the most clever, witty and groundbreaking shows on television and we're excited to have it join our roster of top-quality sitcoms. The irreverent way 'Curb' uses characters to illuminate real-life situations fit perfectly with our strategy of delivering programming that is geared to the life stage and attitudes of people ..."
... in 2008, including Outstanding Comedy Series. It won the Golden Globe in the Best Television Series - Musical or Comedy category in 2002. In 2003, Robert B. Weide won a directing Emmy ...
... the-scenes of Hollywood with original programming that delivers the latest news on entertainment and pop culture, as well as live coverage of the industry's biggest events such as the Red Carpet at the Academy Awards and Primetime Emmy Awards. TV Guide Network, TVGuide.com and the TV Guide brand are owned by Lionsgate (NYSE: LGF), the leading next-generation ...

Stargate Universe producer Brad Wright talks back to critics, talks smack about ABC's V ratings
By Robert Seidman (TVbytheNumbers)
Submitted at 11/8/2009 4:32:42 PM
In her review of the premiere of ABC's V, Chicago Tribune TV critic Maureen Ryan made references to similar genre shows that had debuted this fall, including Stargate Universe. Her comments weren't exactly flattering, which led SGU executive producer Brad Wright to leave a comment on her blog, where among other things he said:
"[...] fortunately there are enough viewers and reviewers who think SGU is neither boring, poorly plotted, or sexist to keep us on the air long after 'V' is just a letter in the alphabet again."
Talk about putting the V in ratings enVy! Wright might be right that V will go back to just being a letter in the alphabet, but to make that comment after the stellar ratings V pulled in its premiere still is a bit surprising to me. Mo felt compelled to respond to Brad's comment.
Mo notes that many people who were eagerly anticipating SGU have given up on it. For now, that describes me perfectly. I've given up as well, but if I hear it's turned a corner, I will definitely check it out again. I watched every episode of SG1 and SGA ever made and I wanted to like SGU, but didn't. For me, it's like they've used none of what I loved about Stargate: SG1 and used many of the things I didn't love about the later seasons of Battlestar Galactica.
People who have been following Wright might be ...

Curb Your Enthusiasm will be on TV Guide and TV Land
By Brad Trechak (TV Squad)
Submitted at 11/9/2009 9:33:00 AM
Curb Your Enthusiasm reruns are coming to basic cable. First they will be shown on the TV Guide Channel next year (doesn't everybody get that channel? I thought it was just a guide to what's on television. They have shows?) and then TV Land in 2013. Any event that brings Larry David's sense of humor to the masses can only be a good thing (Who had the idea for the humor in awkward situations first, Larry or Ricky Gervais?). Mind you, the show's language is somewhat racy for basic cable. There will be some bleeping here and there.
Filed under: Programming, OpEd, Curb Your Enthusiasm, Pickups and Renewals, Reality-Free

Paul Mooney's TV history from Black Is the New White
By Nick Zaino (TV Squad)
Submitted at 11/9/2009 10:03:00 AM
Paul Mooney is well known to stand-up comedians for his own work and for writing for his longtime friend, Richard Pryor. Outside of that, though, his name recognition gets a little fuzzier. So for TV comedy fans, Mooney's new memoir, Black Is the New White, provides some great behind-the-scenes moments they should probably know. There are a lot of heartfelt stories about Richard Pryor and Mooney's own personal life, but there is a lot of fun TV trivia, as well. Mooney talks about getting forced onstage by a couple of friends to do his first solo stand-up. ... whenever he catches the show in reruns, he feels a little guilty. Readers also get to see Mooney and Pryor trying to write for Sanford and Son, a gig that only lasted a couple of episodes. Predictably, they were a bit too explosive for network TV.
Continue reading Paul Mooney's TV history from Black Is the New White
Filed under: Other Comedy Shows, Celebrities, Reality-Free

Laura Wright is staying with General Hospital
By Allison Waldman (TV Squad)
Submitted at 11/9/2009 9:02:00 AM
There's a lot to be happy about if you're a fan of General Hospital. There's been some very good news from the soap. Things like James Franco's guest role and the return of the original Lucky, Jonathan Jackson. And Steve Burton has committed to playing Jason for a while longer.
Continue reading Laura Wright is staying with General Hospital
Filed under: OpEd, Daytime, Celebrities, Reality-Free

Review: Curb Your Enthusiasm - Officer Krupke
By Jonathan Toomey (TV Squad)
Submitted at 11/9/2009 8:06:00 AM
(S07E08) "Some guy told me to go 'f*ck my face' once. He went to jail." - Officer Krupke
Right off the bat, that creates a huge conflict ... (Elisabeth Shue).
Continue reading Review: Curb Your Enthusiasm - Officer Krupke
Filed under: OpEd, Curb Your Enthusiasm, Episode Reviews, Reality-Free

Why I'm not watching Glee
By Julia (TVbytheNumbers)
Submitted at 11/8/2009 3:28:05 PM

Review: Mad Men - Shut the Door, Have a Seat (season finale)
By Allison Waldman (TV Squad)
Submitted at 11/9/2009 12:01:00 AM
(S03E13) It's a cold Friday, December 13, 1963. The President's been killed and the world as Don Draper knows it has pretty much fallen apart. For most of the season, the ground has been shifting under Don's feet and he's been holding on, trying to right himself and his life. He's tried with Betty. He's tried for Sally and Bobby and Gene - at least as much as Don is able to try. With Conrad Hilton he's never been on a level playing field, and from the moment he was forced to sign the contract, Sterling Cooper has not been his domain as it had been. With this episode, this season finale, all was changed and, perhaps, all has been righted. More after the jump.
Comments Predictably, they were a bit too explosive for network TV. Submitted at 11/9/2009 9:02:00 AM There's a lot to be happy about if you're a fan of General Hospital. There's been some very good news from the soap. Things like James Franco's guest role and the return of the original Lucky, Jonathan Jackson. And Steve Burton has committed to playing Jason for a while longer.. Continue reading Laura Wright is staying with General Hospital Filed under: OpEd, Daytime, Celebrities, Reality-Free Permalink| Email this| | Comments Review: Curb Your Enthusiasm - Officer Krupke By Jonathan Toomey (TV Squad) Submitted at 11/9/2009 8:06:00 AM ( S07E08)"Some guy told me to go 'f*ck my face' once. He went to jail." - Officer Krup bat, that creates a huge conflict Elisabeth Shue). Right off the. Continue reading Review: Curb Your Enthusiasm - Officer Krupke Filed under: OpEd, Curb Your Enthusiasm, Episode Reviews, Reality-Free Permalink| Email this| | Comments TV/ Entertainment/ E-reader Newspaper Why I’m not watching Glee By Julia (TVbytheNumbers) Submitted at 11/8/2009 3:28:05 PM.” This content has passed through fivefilters.org. Review: Mad Men - Shut the Door, Have a Seat (season finale) By Allison Waldman (TV Squad) feet and he's be holding on, trying to right himself and his life. He's tried with Betty. He's Submitted at 11/9/2009 12:01:00 AM tried for Sally and Bobby and (S03E13) It's a cold Friday, Gene -- at least as much as Don D e c e m b e r 1 3 , 1 9 6 3 . T h e is able to try. President's been killed and the With Conrad Hilton he's never world as Don Draper knows it been on a level playing field, and has pretty much fallen apart. For from the moment he was forced most of the season, the ground to sign the contract, Sterling as it had been. With this episode, has been shifting under Don's Cooper has not been his domain this season finale, all was changed and, perhaps, all has been righted. More after the jump. 
Continue reading Review: Mad Men - Shut the Door, Have a Seat (season finale) Filed under: OpEd, Episode Reviews, Reality-Free, Mad Men Permalink| Email this| | Comments 43 Rob Pattinson on 'New Moon': I'm Afraid of Commitment Like Edward (ETonline - Breaking News) Submitted at 11/9/2009 12:05:00 AM In ET's continuing series of interviews with the stars of 'The Twilight Saga: New Moon,' Rob Pattinson tells all about how he is similar to his character in the film. In 'New Moon,' Rob plays vampire Edward Cullen, who is forced to make a heartbreaking choice regarding his human love, played by Kristen Stewart, in order to try to protect her. In real life, Rob says that he is afraid of commitment similar to Edward. He also opens up about his extreme degree of fame, and whether in his opinion, he lives up to the fans' perception of him. 44 Entertainment/ Sports/ E-reader Newspaper Colts still perfect after missed FG in final John Travolta and Kelly Preston's Daughter Makes second Acting Debut in 'Old Dogs' By Associated Press (ESPN.com) (ETonline - Breaking News) Submitted at 11/9/2009 12:04:00 AM Ella Bleu Travolta is following in the footsteps of John Travolta and Kelly Preston when she shares the screen with her famous parents in 'Old Dogs.' She told ET she was "so excited when I heard they were going to be in it too." Her proud papa called her a "pro." Co-star Robin Williams also praised the young actress, saying, "She's got the chops. She grew up around it…She comes from great stock." Submitted at 11/9/2009 4:53:38 AM Fast Facts • The Colts won their 17th straight, which is tied for the third-longest win streak in NFL history. • Peyton Manning attempted 40 passes in the first half (10 in the second) to become the first QB since Rich Gannon in 2002 to throw at least 40 passes in the first half. • Matt Schaub threw for 300-plus yards for the 10th time in his career, however his teams are 5-5 in those contests. • Andre Johnson caught 10 passes for 103 yards. 
It was his ninth game with at least 10 receptions and 100 yards since 2008 (most in NFL). • The Colts extended their win streak at home to 10 straight and have won 14 of 15 all-time games against the Texans. • Rapid Reaction -- ESPN Stats & Information This content has passed through fivefilters.org. TCU Horned Frogs rise to No. 4 in latest BCS standings By ESPN.com (ESPN.com) Florida is first for the fourth straight week, and Alabama and Texas switched spots for the NEW YORK -- All those second week in a row. blowouts have carried TCU to The Gators and Crimson Tide unprecedented heights in the have clinched their respective BCS standings, giving the divisions in the Southeastern Horned Frogs hope -- however Conference and will meet Dec. 5 slim -- of becoming the first BCS in the league championship buster to break into the national game. One of them is all but championship game. guaranteed a spot in the BCS title TCU took over fourth place in game on Jan. 7 in Pasadena, the Bowl Championship Series Calif., if they can get through the standings Sunday behind Florida, next month without a loss. Alabama and Texas. It's the The same goes for Texas, which highest BCS ranking ever for a has three regular-season games team from a conference without a n d p o s s i b l y t h e B i g 1 2 an automatic bid to the big- c h a m p i o n s h i p r e m a i n i n g . money bowl games. No potential The other undefeated teams -BCS buster had ever done better TCU, Cincinnati and Boise State than sixth in the BCS standings. -- need the top three to stumble to Submitted at 11/8/2009 4:31:02 PMautomatic bid conferences can earn by finishing in the top 12 of the final standings. TCU plays Mountain West Conference rival No. 16 Utah on Saturday. A win there for the Horned Frogs could provide even more separation between them and Boise State, which does not have another ranked team on its schedule. This content has passed through fivefilters.org. 
Tony Dungy says Philadelphia Eagles QB Michael Vick could end up playing with Buffalo Bills
By Associated Press (ESPN.com)
Submitted at 11/9/2009 7:00:41 AM

NEW YORK -- Tony Dungy says Michael Vick could potentially wind up in Buffalo, which he says previously discussed signing the quarterback.

Dungy has served as an adviser to Vick since the Super Bowl-winning coach retired from the Colts after last season. Now a commentator for NBC, Dungy confirmed during the pregame show Sunday night before Philadelphia hosted Dallas that the Bills and Vick "talked originally" when Vick was searching for a team after serving 18 months in federal prison for running a dogfighting ring.

The Eagles signed Vick to a $1.6 million contract for 2009, with a team option for the second year at $5.2 million. But he has played sparingly.

"I told Michael to just worry about this year," Dungy said. "It's technically up to Philadelphia. If they want him back, he has to stay there. If they don't, there are some teams looking for quarterbacks: Cleveland, St. Louis and Washington. But I think a dark horse is Buffalo. They talked originally. There was some communication there. I think that could be a good spot."

Vick has not been the weapon for the Eagles some expected; he was in for only two plays in their 20-16 loss to the Dallas Cowboys on Sunday night. Vick said his focus is on helping the Eagles win a Super Bowl and not where he'll play next year.

"It's what I thought it would be," Vick said. "I knew I couldn't come in and do anything that would disrupt the rhythm of the offense and what we had going on here. I knew I was going to have to be patient."

Vick has completed 2 of 6 passes for 6 yards and rushed 12 times for 27 yards, mostly out of the wildcat formation. He didn't want to talk about potential teams for the 2010 season.

"Me and Tony talked about my position in the future, whether I'm here or whether I'm there," Vick said. "We talked about it, but the primary goal is to help this team win the Super Bowl."

He said he usually talks to Dungy at least once a week and receives "great advice" from his mentor. Vick, who has talked to churches and schools about the poor life choices he's made, has enjoyed his second chance in Philadelphia.

"It's been great," he said. "Every day I wake up and I just thank God I have another opportunity to play football and put on a uniform. That's what I'm thankful for."

Bills starter Trent Edwards struggled this season before sustaining a concussion. Ryan Fitzpatrick, a career backup, has been the starter with Edwards out of the lineup.

Copyright 2009 by The Associated Press

Memphis Grizzlies owner Michael Heisley: Allen Iverson 'would tell me' if mulling retirement
By ESPN.com news services (ESPN.com)
Submitted at 11/9/2009 7:56:53 AM

Memphis Grizzlies owner Michael Heisley has shot down the idea that Allen Iverson is mulling retirement as he takes an indefinite leave of absence for a personal matter.

Frustration stemming from his reserve role combined with the stress related to a recent family issue has caused Iverson to consider hanging up his jersey for good, The Commercial Appeal of Memphis reported Monday morning. But Heisley said there was no indication of such consideration by the 34-year-old guard, and the owner told the newspaper that "if he was going to retire, he'd tell me first."

"I expect him to come back," Heisley said. "If he does retire, I'll be tremendously disappointed. I feel bad because I don't think that's the way he should go out."

Iverson returned to his home in Atlanta late last week with Heisley's permission. He has played in three games since returning from a hamstring injury, averaging 12.3 points, 3.7 assists and 22.3 minutes. But he has expressed displeasure over coming off the bench thus far this season.

"It's something that I never did in my life, so obviously it's a big adjustment," he said last week. "I'm so tired of discussing that, talking about that, every single day. It's just not something that I want to discuss."

"When I hear anything about the Memphis Grizzlies, I don't hear you guys talk about anything other than the situation with me coming off the bench," he said Friday. "I mean, there's got to be something else with this team to talk about besides that. But I guess that sells a lot better than anything else when it comes to this team."

TrueHoop: A.I. Out? Allen Iverson isn't thrilled about coming off the bench for the Grizzlies. But does that mean his time in Memphis is over? Blog

Information from The Associated Press was used in this report.

Kansas City Chiefs release troubled running back Larry Johnson
By ESPN.com news services (ESPN.com)
Submitted at 11/9/2009 7:55:51 AM

In a tersely worded statement Monday, the Kansas City Chiefs announced that embattled running back Larry Johnson has been released.

Johnson on Sunday served a one-game suspension for making gay slurs. He was 75 yards from breaking Priest Holmes' franchise record (6,070) for career rushing yards.

The Chiefs suspended Johnson without pay through Sunday, the team's bye week, after he questioned coach Todd Haley's qualifications on his Twitter account and twice used a homosexual slur in an exchange with one of his Twitter followers.

Last Monday, Kansas City reached a settlement with Johnson through his agent, reducing the amount of pay Johnson would lose by half, to $315,000. The Chiefs initially cleared Johnson to return to the team's facility and participate in team activities beginning Monday. He apologized for his remarks last week, and his agent said his client had learned from the entire experience.

Last week, an online petition started by Chiefs fans asked general manager Scott Pioli to deactivate Johnson and keep him on the sideline so he could not pass Holmes' team rushing record or join the team's Ring of Honor at Arrowhead Stadium.

Johnson was one of the best running backs in the NFL in 2005 and '06, running for more than 1,700 yards each season and earning Pro Bowl honors. In 2007, he signed a five-year contract extension that guaranteed him about $19 million.

Information from The Associated Press was used in this report.

Witness Says Orlando Office Shooting Lasted 1 Minute
(FOXNews.com)
Submitted at 11/9/2009 7:15:11 AM

ORLANDO, Fla. -- A man who was in an Orlando office when a former employee came in and started shooting says the ordeal that left one dead and five injured lasted about a minute.

Mark Davidson, a vice president at the engineering firm Reynolds, Smith and Hills, said Monday that his co-workers stayed calm Friday and didn't scream as Jason Rodriguez entered a reception area of the eighth-floor office and began shooting.

Rodriguez has been charged with first-degree murder. Orlando Police Chief Val Demings says more charges are expected. An attorney for Rodriguez has portrayed the 40-year-old as a mentally ill man who fell victim to countless personal and financial problems.

Click here for more from MyFoxOrlando.com.

Hermida Could Pay Off Big for Red Sox
By Frankie Piliere (FanHouse)
Submitted at 11/9/2009 12:49:00 AM

Filed under: Angels, Brewers, Marlins, Red Sox, Twins, White Sox, MLB Transactions, Scout's Eye View

In Advanced Scouting, MLB FanHouse's professional talent evaluator breaks down offseason moves from a scouting perspective.

It hasn't taken long for the Hot Stove to get heated up as we roll past the World Series. Some key players have already been locked up and some high upside trades have already gone down. What do these moves mean for each club involved and how will the players dealt respond to their new homes? Just as significant, how important will the prospects dealt turn out to be?

From Mark Teahen headed to Chicago, to the Carlos Gomez for J.J. Hardy swap, to Bobby Abreu's new deal with the Angels, each move had a distinct impact. Perhaps the most interesting of these, however, was Jeremy Hermida being shipped to Boston. For the price of a pair of young lefties, the Red Sox took a gamble that may prove very worthy.

Hermida Could Pay Off Big for Red Sox originally appeared on Fanhouse MLB Blog on Mon, 09 Nov 2009 00:49:00 EST.

Older, Wiser Tony Romo Leads Key Win
By Kevin Blackistone (FanHouse)
Submitted at 11/9/2009 2:20:00 AM

Filed under: NFL, NFL Analysis

PHILADELPHIA -- In the wee hours of Monday morning, with a blue Cowboys baseball cap pulled down snug on his noggin and a short-sleeve T-shirt worn over a long-sleeve one, Tony Romo looked like the boyish character we've come to see him as. He looked more like some guy who just finished playing a pick-up football game between fraternities rather than the multimillion-dollar NFL quarterback for Jerry Jones' Cowboys that he's been for a number of years now. But when Romo started to talk about what he'd accomplished, he sounded wise beyond his appearance.

"If you keep the mental discipline ..."
Romo explained in a quite deliberate and thoughtful delivery, "keep getting better, keep learning what they're doing ... you can do some good things." Older, Wiser Tony Romo Leads Key Win originally appeared on Fanhouse - Kevin Blackistone on Mon, 09 Nov 2009 02:20:00 EST . Please see our terms for use of feeds. Permalink| Email this| Linking Blogs| Comments Brian Leetch: Pride of the Rangers By Christopher Botta (FanHouse) Submitted at 11/9/2009 10:00:00 AM by Christopher Botta Filed under: Rangers, NHL Hall of Fame The main press box for Rangers games at Madison Square Garden is in the lower bowl, behind one of the goals. It is here where the perfect imagery can be found to illustrate the magical play of Brian Leetch, who enters the Hockey Hall of Fame on Monday. In this press box sit men and women, some who have been to thousands of games, some perhaps new to the hockey beat. Either way, it can often be a jaded lot. But when Leetch plied his craft as a defenseman for the Rangers from 1988 until 2004, there were countless moments when his artistry made those four Luc Robitaille: The Ultimate Steal rows of tables one of the grandest places to be in sports. The Hockey Hall of Fame Class of '09: Steve Yzerman| Brian Leetch | Brett Hull Luc Robitaille| Lou Lamoriello Brian Leetch: Pride of the Rangers originally appeared on Fanhouse NHL Blog on Mon, 09 Nov 2009 10:00:00 EST . Please see our terms for use of feeds. Permalink| Email this| Linking Blogs| Comments Steve Yzerman: Most Gracious Superstar By Susan Slusser (FanHouse) behavior. The longtime Red Wings captain, and a three-time Stanley by Susan Slusser Cup winner as a player, enters Filed under: Red Wings, NHL the Hall of Fame as a winner on Hall of Fame In an age of look-at the ice and off, a gentleman -me professional athletes, full of respected by his peers and adored boasting, silly taunting and big by his fans. 
celebrations over routine plays, Yzerman was, and is, classy and Steve Yzerman is a reminder that understated, the embodiment of the best and most talented shine o l d - f a s h i o n e d v a l u e s o f all the brighter for humble sportsmanship and personal 47 Submitted at 11/9/2009 10:00:00 AM accountability. The Hockey Hall of Fame Class of '09: Steve Yzerman | Brian Leetch| Brett Hull Luc Robitaille| Lou Lamoriello Steve Yzerman: Most Gracious Superstar originally appeared on Fanhouse NHL Blog on Mon, 09 Nov 2009 10:00:00 EST . Please see our terms for use of feeds. Permalink| Email this| Linking Blogs| Comments By Adam Gretz (FanHouse) Submitted at 11/9/2009 10:00:00 AM by Adam Gretz Filed under: Kings, Penguins, Rangers, Red Wings, NHL Hall of Fame It doesn't matter how good your team's front office is, the NHL draft can still be a complete shot in the dark in which the most highly-touted, can't miss prospect can miss, and ninth-round picks that sneak under the radar because of concerns about their ability to skate at an NHL level can end up scoring over 600 goals and tallying nearly 1,400 points in a 19-year career -- kind of like Luc Robitaille. The Hockey Hall of Fame Class of '09: Steve Yzerman| Brian Leetch| Brett Hull Luc Robitaille | Lou Lamoriello Luc Robitaille: The Ultimate Steal originally appeared on Fanhouse NHL Blog on Mon, 09 Nov 2009 10:00:00 EST . Please see our terms for use of feeds. Permalink| Email this| Linking Blogs| Comments 48 Sports/ Game/ Popular News/ Brett Hull: Bulldog, Blues and Beyond By Bruce Ciskie (FanHouse) Submitted at 11/9/2009 10:00:00 AM by Bruce Ciskie Filed under: Blues, Stars, Red Wings, NHL Hall of Fame, College Hockey In 1984, a kid with a famous name and loads of potential in his game showed up on the campus of the University of Minnesota Duluth. The Calgary Flames had drafted the kid, but they knew he wasn't ready to play. 
After two years at UMD, Brett Hull -- son of the great Bobby Hull -- was ready to tear up the NHL. Boy, did he ever do that. Turns out Hull was quite the impact player at every level he ever played at. He finished his career as the only player to ever score 50 goals in college hockey, the minors, and the NHL. The Hockey Hall of Fame Class of '09: Steve Yzerman| Brian Leetch| E-reader Newspaper Manny Pacquiao FanHouse Live Chat, Monday 2 PM ET Small Plane Crashes in Florida Everglades By Michael David Smith (FanHouse) Rescue crews haven't been able to locate any survivors from a small plane crash in the Florida Everglades. According to the Broward County Sheriff's Office, alert drivers on Interstate 75 called dispatchers Sunday night to report the low-flying aircraft. One caller thought the plane crashed, while other drivers said it appeared to disappear. Crews located the crash site Sunday night. Authorities say it appears the single-engine airplane nose-dived into the Everglades. Homicide detectives are expected to be on the scene Monday to continue their investigation. This content has passed through fivefilters.org. Submitted at 11/9/2009 6:24:00 AM Brett Hull Luc Robitaille| Lou Lamoriello Brett Hull: Bulldog, Blues and Beyond originally appeared on Fanhouse NHL Blog on Mon, 09 Nov 2009 10:00:00 EST . Please see our terms for use of feeds. Permalink| Email this| Linking Blogs| Comments by Michael David Smith Filed under: FanHouse Exclusive, Media Watch Manny Pacquiao will fight Miguel Cotto on Saturday night (HBO pay-perview at 9 PM ET), but he'll be taking a few minutes out of his preparations to chat live with FanHouse readers on Monday. If you want to ask Pacquiao a question, you can ask it during our live chat, or submit it in advance to me on Twitter @MichaelDavSmith. The live chat begins below at 2PM ET on Monday. Manny Pacquiao FanHouse Live Chat, Monday 2 PM ET originally appeared on Fanhouse Boxing Blog on Mon, 09 Nov 2009 06:24:00 EST . 
Please see our terms for use of feeds. Permalink| Email this| Linking Blogs| Comments (FOXNews.com) Submitted at 11/9/2009 7:19:58 AM Second season of Xbox Live's 1 vs 100 starts Nov. 19 By David Hinkle (Joystiq) Submitted at 11/9/2009 10:30:00 AM If you've been missing your trivia fix on Xbox Live, you should be happy to learn that Microsoft has dated the second season of its Xbox Live quiz-'emup, 1 vs 100. Expect the game to return on November 19 at 5:00 PDT, where it'll dish out another 14 weeks' worth of questions -and this time around, your chance to be The One or a part of The Mob will be determined by one thing: score. Multiple-choice maniacs will be able to play the new season of 1 vs 100 on a daily basis through themed Extended Play sessions every night, including "Finish the Nov 2009 10:30:00 EST. Please Lyrics," "80's" night and -- yes, see our terms for use of feeds. w e ' r e t o t a l l y s e r i o u s - - Read| Permalink| Email this| "Vampires." Finally, a theme Comments night your 12-year-old sister can get behind! Second season of Xbox Live's 1 vs 100 starts Nov. 19 originally appeared on Joystiq on Mon, 09 Game/ Economy/ E-reader Newspaper 49 The Best of Big Download: November 2 - 8 By Joystiq Staff (Joystiq) • Interviews: We had interviews with the CEO of the gaming PC maker Maingear and the lead The first full week of November d e s i g n e r b e h i n d S h a t t e r e d 2009 kept us at Big Download on H o r i z o n . our toes with a ton of news, • Serious Sam HD Preview: We downloads, and special features. go hands-on with a near final As usual, we take some time to v e r s i o n o f C r o t e a m ' s F P S smell the roses and look back at r e m a k e . the past week. Please, won't you • Mac Monday: Our look at Mac join us? -based games checks out two Exclusive features titles: T rapped: The Abduction and Harvest: Massive Encounter. 
• Unreal Development Kit: We • Boot Disk: Our look at retro were all over the release of Epic games checks out the classic Games' free version of Unreal space strategy title Master of Engine 3 with a feature on the Orion II. UDK itself and interviews with • Big Ideas: Our regular look at the teams behind the first two game themes examines what free UDK games; Whizzle and people's perceptions are on The Ball. beauty in a game. • Reviews: We give our final • All You Need To Know: Our v e r d i c t s f o r B o r d e r l a n d s , 411 feature gives you everything Torchlight, AI War: Fleet you need to know about Command and Shattered Horizon Blizzard's upcoming Diablo III. Submitted at 11/8/2009 11:30:00 PM Deal activity lifts Wall Street (Financial Times - US homepage) Submitted at 11/9/2009 7:26:53 AM Deal activity by companies from chocolate makers to a nuclear submarine manufacturer buoyed Wall Street confidence on Monday, lifting US stocks higher after last week’s strong gains. Less than an hour after the opening bell, the S&P 500 was up 1.2 per cent at 1,81.81. The Dow Jones Industrial Average • Playing For Free: Our weekly look at online free-to-play games has our opinion on CrimeCraft which recently turned into a freeto-play game. • Freeware Friday: Don't turn out the lights. This week's column on free PC games checks out the super-spooky Au Sable. • Contest: You still have some time to win one of three Steam download codes for Shattered Horizon. Final Fantasy XIII announcement possibly due on Nov. 13 By Griffin McElroy (Joystiq) Thirteenth." The nature of this "announcement" is still up in the Continue reading The Best of "A Henchmen Inventor Tent air, though our money's on an official North American release Big Download: November 2 - 8 Unto." The Best of Big Download: Before reading on, why not try date for the super-anticipated November 2 - 8 originally to solve that puzzle for yourself? RPG. 
Though, considering the appeared on Joystiq on Sun, 08 It's an anagram, which means speed at which the series is Nov 2009 23:30:00 EST. Please you can mess the letters about to growing now, we wouldn't be spell a secret message. Here's a surprised if Squeenix went ahead see our terms for use of feeds. Read| Permalink| Email this| hint: It was posted by Square and dropped the Final Fantasy Enix as a tease for Final Fantasy XV bomb on us. Comments XIII, meaning it probably has Final Fantasy XIII something to do with that game. announcement possibly due on What's that? "A Chevron Non- Nov. 13 originally appeared on nineteenth Mutt?" We suppose Joystiq on Sun, 08 Nov 2009 that's a pretty good guess, but 19:30:00 EST. Please see our rose 1.1 per cent to 10,129.60 unfortunately, it's the wrongest terms for use of feeds. and was touching fresh highs for thing ever. A Final Fan-site Read| Permalink| Email this| the year. The Nasdaq climbed 1.3 r e c e n t l y h a d b e t t e r l u c k Comments per cent to 2,140.06. uncovering the message's true This content has passed through meaning: "Announcement Nov fivefilters.org. Submitted at 11/8/2009 7:30:00 PM 50 Game/ E-reader Newspaper Japanese hardware sales, Oct. 26 - Nov. 1: Away we Go edition By Griffin McElroy (Joystiq) place the PSP Go into our weekly line-up. The system's one-day sales of 29,109 garnered it a It was almost one year ago to the somewhat disappointing fourthday when we last added a p l a c e f i n i s h - - w h i l e participant to the Japanese sales simultaneously, the sales totals of charts, leading to a mathematical every other console (save for the anomaly which'sploded our DSi) increased, reversing a recent living quarters. Luckily, we downward trend for the region. 
learned from our mistakes, and Our theory on why this occurred have remembered not to add is simple: Upon seeing the digital today's newcomer to our Excel distribution-based future our spreadsheet, preserving the world has in store, Japanese integrity of our apartment and, gamers began to hoard the more importantly, the space-time t a n g i b l e m e d i a - r e a d i n g continuum. platforms, which will Not that we require number- undoubtedly become priceless crunching software to accurately relics in the coming years. Submitted at 11/8/2009 9:30:00 PM Also, an action game featuring an attractive, gun-toting witch whose clothes frequently disappear was released. That might have contributed to the sales surge. Like, a little. - DSi: 37,517 4,682 (11.10%) - PS3: 36,061 6,084 (20.30%) - PSP: 34,911 2,046 (6.23%) - PSP Go: 29,109 (New Entry!) - Wii: 28,888 2,971 (11.46%) - DS Lite: 6,902 352 (5.37%) - Xbox 360: 6,047 1,577 (35.28%) - PS2: 1,966 15 (0.77%) [Source: Media Create] See: The attractive archives Japanese hardware sales, Oct. 26 - Nov. 1: Away we Go edition originally appeared on Joystiq on Sun, 08 Nov 2009 21:30:00 EST. Please see our terms for use of feeds. Permalink| Email this| Comments For the UK's best Modern Warfare 2 deal, head to the grocery store By JC Fletcher (Joystiq) Submitted at 11/9/2009 9:30:00 AM 2 to just £26 ($43.76), an amazing £29 off the regular price. Tesco is also offering a big price cut on Modern Warfare 2: it's £25 with the purchase of another are open all the time anyway, Sainsbury's locations. Unless bestselling game ( The Guardian will begin selling the game at they're like stores in the US, in specifies "top-20" games), or midnight tonight, as will five which case they began selling £39.70 on its own. Tescos, which them last week. For the UK's best Modern Warfare 2 deal, head to the grocery store originally appeared on Joystiq on Mon, 09 Nov 2009 09:30:00 EST. Please see our terms for use of feeds. 
Game/ Economy/ E-reader Newspaper 51

China car sales surge on stimulus steps (Financial Times - US homepage)

Submitted at 11/9/2009 1:29:05 AM

SHANGHAI, Nov 9 - China's passenger car sales in October surged 75.8 per cent from a year earlier, official data showed, extending the explosive growth in recent months as government incentive policies continued to lure customers.

A total of 946,400 passenger cars were sold in October, up sharply from 538,500 units sold a year earlier, but slightly lower than the 1.02m units sold in September, the China Association of Automobile Manufacturers said on Monday.

"Autumn is usually the best auto selling season. But it is not that obvious this year as sales have been so strong all along," said Qin Xuwen, an analyst with Orient Securities.

Wii Fit Plus claims top spot in UK sales chart, Dragon Age debuts in fifth
By Alexander Sliwinski (Joystiq)

Submitted at 11/9/2009 10:01:00 AM

Wii Fit Plus has taken the top spot in the UK during its second week of release, according to Chart Track. The excellent Dragon Age: Origins premiered in fifth place, behind Wii Fit Plus, Wii Sports Resort and two football games. No surprises there!

Ratchet and Clank: A Crack in Time premiered at the 22nd spot, which is a weaker debut than we had anticipated for the titular duo's final(ish) outing. It'll likely have a stronger showing in the US, though we'll have to wait for the NPD results to see if it lands in the top ten.

Source-- A fitting No1 [Chart Track]
Source-- All formats chart [Chart Track]

Wii Fit Plus claims top spot in UK sales chart, Dragon Age debuts in fifth originally appeared on Joystiq on Mon, 09 Nov 2009 10:01:00 EST.

NintendoWare Weekly: Excitebike World Rally, Electroplankton, Cybernoid, and more
By JC Fletcher (Joystiq)

Submitted at 11/9/2009 11:00:00 AM

It's a very exciting day for Nintendo downloads, thanks to one very thrilling two-wheeler. But Excitebike World Rally is far from the only offering this week. In fact, there are multiple games available for WiiWare, Virtual Console, and DSiWare, which seems like a rare feat for Nintendo of America. Head past the break to see if any of the other new games excite you.

Continue reading NintendoWare Weekly: Excitebike World Rally, Electroplankton, Cybernoid, and more

NintendoWare Weekly: Excitebike World Rally, Electroplankton, Cybernoid, and more originally appeared on Joystiq on Mon, 09 Nov 2009 11:00:00 EST.

This content has passed through fivefilters.org.

Cadbury rejects Kraft's £9.8bn hostile bid (Financial Times - US homepage)

Submitted at 11/9/2009 7:17:02 AM

Cadbury on Monday rejected a hostile cash and shares bid from Kraft that valued the UK confectionery group at £9.8bn or 717p a share, after the US food group formalised the terms of an indicative offer it made two months ago.

Shares in Cadbury, which had been trading higher in morning trading in London, fell 2 per cent after Kraft announced its bid but recovered later in the session to stand fractionally higher at 758½p. Kraft shares were little changed in New York at $26.79.

52 Popular News/ Economy/ E-reader Newspaper

Gulf Coast preps as Ida weakens to tropical storm (AP) (Yahoo! News: U.S. News)

Submitted at 11/9/2009 7:20:34 AM

PENSACOLA, Fla. – Schools closed, residents of low-lying areas sought shelter and Florida's governor declared a state of emergency Monday as a late-season tropical storm churned toward the Gulf Coast.

After a quiet storm season, residents took the year's first serious threat in stride.

"Even though we're telling everybody to be prepared, my gut tells me it probably won't be that bad," said Steve Arndt, director of Bay Point Marina Co. in Panama City, Fla.

Ida started out as the third hurricane of this year's Atlantic season, which ends Dec. 1, but it weakened to a tropical storm Monday morning, with maximum sustained winds near 70 mph. The U.S. National Hurricane Center said it was not expected to strengthen again before making landfall along the Gulf Coast sometime Tuesday morning.

Tropical storm warnings extended more than 200 miles across Louisiana, Mississippi, Alabama and Florida.

Earlier, heavy rain in Ida's wake triggered flooding and landslides in El Salvador that killed 124 people. One mudslide covered the town of Verapaz, about 30 miles outside the capital, San Salvador, before dawn Sunday.

In the U.S., there were no immediate plans for mandatory evacuations, but authorities in some coastal areas were opening shelters and encouraging people near the water or in mobile homes to leave.

Monday morning, Ida was located about 185 miles south-southeast of the mouth of the Mississippi River and about 285 miles south-southwest of Pensacola. It was moving north-northwest near 17 mph.

Officials were encouraging residents to prepare for potential gusts of 60 mph by removing tree limbs that could damage their homes and securing or bringing in any trash cans, grills, potted plants or patio furniture.

Residents of Pensacola Beach, Fla., and nearby Perdido Key were encouraged to leave, as were people farther inland who live in mobile homes, and school was canceled in the area Monday and Tuesday. Some schools.

___

Associated Press writers Suzette Laboy in Miami and Becky Bohrer in New Orleans.
UK change of heart on banking tax plan (Financial Times - US homepage)

Submitted at 11/8/2009 11:20:26 AM

Gordon Brown, the British prime minister, rapidly backpedalled from his proposal for a financial transactions tax on Sunday after a chorus of criticism of his plan set out in a speech on Saturday to a meeting of global finance ministers.

The US led a backlash against the "Tobin tax" on financial transactions after Mr Brown took a Group of 20 finance ministers' meeting by surprise with his proposal at the gathering in St Andrews, Scotland.

Popular News/ Economy/ E-reader Newspaper

Resolute Fort Hood soldiers ready for return (AP) (Yahoo! News: U.S. News)

Submitted at 11/9/2009 7:23:06 AM

FORT HOOD, Texas – Pvt. Joseph Foster took a bullet in the leg during the Fort Hood." Across Fort Hood, signs point to a post on the mend after the shooting spree Thursday that killed 13 and wounded 29.

Accused gunman Maj. Nidal Malik Hasan, shot in the torso by civilian police to end the rampage, was in critical but stable condition and breathing on his own at an Army hospital in San Antonio. Authorities continue to refer to Hasan, 39, as the only suspect in the shootings but they won't say when charges would be filed and have said they have not determined a motive. Sixteen victims remained hospitalized with gunshot wounds, and seven were in intensive care.

Even as the community took time to mourn the victims at worship services on and off the post Sunday,." President Barack Obama will attend a memorial service Tuesday honoring victims of the attack, amid growing suggestions that Hasan's superior officers may have missed signs that he was embracing an increasingly extremist view of Islamic ideology.

Sen. Joe Lieberman said Sunday he wants Congress to determine whether the shootings constitute a terrorist attack and whether warning signs were missed. A day earlier, George Casey."

Sgt. 1st Class Frank Minnie was in the processing center Monday and Wednesday, getting some health tests and immunizations in preparation for his deployment. The mass shooting happened Thursday, but Minnie said."

___

Associated Press writers Allen Breed and Jeff Carlton in Fort Hood and Pamela Hess, Devlin Barrett, Richard Lardner and Jessica Gresko in Washington contributed to this report.

53 Murdoch hints he will sue BBC (Financial Times - US homepage)

Submitted at 11/9/2009 4:38:58 AM

Rupert Murdoch indicated on Monday that News Corporation would sue the BBC over breach of copyright for "stealing" material from his newspapers round the world.

Mr Murdoch, interviewed on Sky News Australia, was asked how he would be able to instigate his proposal to charge for newspaper websites such as The Times in the UK or The Australian when the BBC and ABC produced free news content on their sites.

54 Popular News/ Media/ E-reader Newspaper

Ida Weakens to Tropical Storm, Gulf Still on Warning (FOXNews.com)

Submitted at 11/9/2009 7:23:53 AM

PENSACOLA, Fla. Schools closed, residents of low-lying areas sought shelter and Florida's governor declared a state of emergency Monday as a late-season tropical storm churned toward the Gulf Coast.

After a quiet storm season, residents took the year's first serious threat in stride.

Maps, forecasts, radar and more at MyFoxHurricane.com

"Even though we're telling everybody to be prepared, my gut tells me it probably won't be that bad," said Steve Arndt, director of Bay Point Marina Co. in Panama City, Fla.

Ida started out as the third hurricane of this year's Atlantic season, which ends Dec. 1, but it weakened to a tropical storm Monday morning, with maximum sustained winds near 70 mph. The U.S. National Hurricane Center said it was not expected to strengthen again before making landfall along the Gulf Coast sometime Tuesday morning.

Tropical storm warnings extended more than 200 miles across Louisiana, Mississippi, Alabama and Florida.

Earlier, heavy rain in Ida's wake triggered flooding and landslides in El Salvador that killed 124 people. One mudslide covered the town of Verapaz, about 30 miles outside the capital, San Salvador, before dawn Sunday.

In the U.S., there were no immediate plans for mandatory evacuations, but authorities in some coastal areas were opening shelters and encouraging people near the water or in mobile homes to leave.

Monday morning, Ida was located about 185 miles south-southeast of the mouth of the Mississippi River and about 285 miles south-southwest of Pensacola. It was moving north-northwest near 17 mph.

Officials were encouraging residents to prepare for potential gusts of 60 mph by removing tree limbs that could damage their homes and securing or bringing in any trash cans, grills, potted plants or patio furniture.

Residents of Pensacola Beach, Fla., and nearby Perdido Key were encouraged to leave, as were people farther inland who live in mobile homes, and school was canceled in the area Monday and Tuesday. Some schools.

Around the Net In Media: Fox News Pushes Live Web Video Effort (MediaPost | Media News)

Submitted at 11/8/2009 6:52:26 PM

With Strategy Room, a Web video "network" that produces eight hours of live programming each weekday, Fox News is trying to catch up in the digital world. This past September, FoxNews.com's audience had nearly doubled, but it still trailed rival CNN by over 8 million users. Born as a wall-to-wall political channel during the presidential race, Strategy Room is an experiment that clicked.

Live Web video has been fairly limited, but Strategy Room has a fully programmed lineup, including a regular entertainment show, a health series and a business hour. In September, Strategy Room averaged 28,000 viewers per day, with viewership spiking around midday.

Next up: A new live chat functionality that producers can turn on and off. Also, producers will soon be able to post episodes and short highlights of shows almost instantly, and users will be able to catch up on segments that interest them during Strategy Room's regular eight-hour schedule.

Popular News/ Media/ E-reader Newspaper

Transit moving again in Philly after 6-day strike (AP) (Yahoo! News: U.S. News)

Submitted at 11/9/2009 6:57:43 AM

PHILADELPHIA –.

___

Associated Press Writer Patrick Walters contributed to this report.

55 Around the Net In Media: Condé Nast Hires Crisis Expert (MediaPost | Media News)

Submitted at 11/8/2009 6:54:41 PM

Condé Nast executives have tapped Washington, D.C.-based crisis manager and media coach Michael Sheehan to improve the company's image. Sheehan is known for coaching Bill Clinton, Barack Obama, AIG and JP Morgan. Lucky publisher Gina Sanders used Sheehan when she launched Teen Vogue.

Help is needed because morale at Condé is hitting an all-time low. After the closure of six magazines and hundreds of layoffs, the publisher's glitzy image has also taken a drubbing on Madison Avenue.

Meanwhile, cutbacks are picking up on the newspaper side of Condé's parent company Advance Publications. The Staten Island Advance, one of the company's first papers, is looking for another 40-plus volunteers to take severance packages before year's end.

56 Popular News/ E-reader Newspaper

Scores die in El Salvador floods (BBC News | Americas | World Edition)

Submitted at 11/9/2009 7:07:01 AM
The torrential rain triggered landslips and mudslides

At least 124 people have been killed in El Salvador by flooding and landslides following days of heavy rain, the government says.

The BBC's weather centre says the disastrous rains were mainly caused by a low pressure system in the Pacific, which was linked indirectly to Hurricane Ida, which passed the country three days ago. Ida was downgraded to a tropical storm as it crossed the Gulf of Mexico on Monday. Nonetheless, storm warnings remain in place along the Gulf Coast of the US, from Mississippi to Florida.

Massive rockslides

Soldiers joined residents of Verapaz, a town with a population of about 3,000 some 50km (30 miles) outside San Salvador, to dig through the mud with shovels under a persistent drizzle, the Associated Press reported. Emergency services said about 300 homes had been destroyed in the town, which was hit by massive rockslides from the Chichontepec volcano. AP earlier quoted Red Cross spokesman Carlos Lopez Mendoza as saying 60 people there were missing.

"The images that we have seen today are of a devastated country," President Funes said on local television.

Our correspondent says that this is easily the biggest crisis the government of Mr Funes has had to face since coming to office five months ago.

The country's official death toll was not broken down by location but the deaths were concentrated in San Salvador and San Vicente province, where Verapaz is situated.

"It was terrible," said Manuel Melendez, 61, whose home in the town was destroyed. "The rocks came down on top of the houses and split them in two, and split the pavement. I heard people screaming all around."

Collapsed walls, boulders and downed power lines that blocked heavy machinery have been impeding the rescue effort.

A reporter on the El Salvador daily La Prensa Grafica, Juan Carlos Barahona, told the BBC that San Vicente had been virtually cut off by landslides and collapsed bridges. Other badly affected areas were La Libertad, La Paz and Cuscatlan, he said.

About 7,000 people are living in shelters as a result of the disaster. Large parts of the country are without electricity or clean water and remain cut off from government aid, the BBC's Latin America correspondent Will Grant reports.

Popular News/ Media/ E-reader Newspaper

Alleged Fort Hood Shooter Frequented Local Strip Club (FOXNews.com)

Submitted at 11/9/2009 7:04:42 AM

KILLEEN, Texas The Army psychiatrist authorities say killed 13 people and wounded 29 others at the Fort Hood Army Base Thursday was a recent and frequent customer at a local strip club, employees of the club told FoxNews.com exclusively.

Maj. Nidal Malik Hasan came into the Starz strip club not far from the base at least three times in the past month, the club's general manager, Matthew Jones, told FoxNews.com. Army investigators building their case against Hasan plan to interview Jones soon.

"The last time he was here, I remember checking his military ID at the door, and he paid his $15 cover and stayed for six or seven hours," Jones, 37, said.

FULL COVERAGE: Fort Hood Tragedy
SLIDESHOW: Deadly Shooting at Fort Hood.

Hasan's presence at the club paints a starkly different portrait of the alleged killer from that offered by his imam and family members, who have described him as a devout Muslim, and one who had difficulty finding a wife who would wear a head scarf and would pray five times a day.

Starz is a strip club located just down the road from the main gate entrance to the Fort Hood Base. It does not serve alcohol, but customers bring their own beer and liquor and buy ice buckets and mixers at the club.

Hasan sat at a table in the back corner of the club, to the left of the stage on which strippers dance around a pole, employees said.

Jennifer Jenner, who works at Starz using the stage name Paige, said Hasan bought a lap dance from her two nights in a row. She said he paid $50 for a dance lasting three songs in one of the club's private rooms on Oct. 29 and Oct. 30.

"I remembered his face because it was the first lap dance I [gave] to a customer while working here," she said. "When I saw his face ."

Click here for more from MyFoxDFW.com.

57 Kraft in Cadbury takeover bid (BBC News | Americas | World Edition)

Submitted at 11/9/2009 7:54:49 AM

US food company Kraft has launched a £9.8bn ($16.4bn) hostile bid for UK confectioner Cadbury.

Cadbury said it had "emphatically rejected" the new offer, which is being put directly to its shareholders.

Kraft said it would offer 300p in cash and 0.2589 new Kraft shares for each Cadbury share, the same terms as it proposed in September. As Kraft shares have dropped in value since then, the bid is now worth less than the original £10.2bn approach.

In a statement Cadbury chairman Roger Carr called the offer "derisory".
"Kraft's offer does not come remotely close to reflecting the

KRAFT page 58

Around the Net In Media: Fox To Add E-commerce To DVDs (MediaPost | Media News)

Submitted at 11/8/2009 6:56:10 PM

Fox is hoping to boost the value of its upcoming DVDs with FoxPop, which makes its debut Dec. 1. With a free downloadable app for Mac, PC or iPhone, users can get a constant barrage of facts, photos, games and trivia questions on a second screen related to the movie they are watching on the first screen.

FoxPop is the result of a partnership with Spot411. It works by "listening" to the audio of the movie and, within a few seconds, syncing to the exact moment in the movie.

Fox isn't charging extra for FoxPop-ized movies, but believes it will make money with e-commerce and marketing opportunities. For example, an iTunes purchase function allows users who hear a song they like while watching a movie, to buy the music with a click.

58 Popular News/ E-reader Newspaper

Senate may probe army shooting (BBC News | Americas | World Edition)

Submitted at 11/8/2009 9:03:03 AM

Senior US Senator Joe Lieberman says he plans to open a congressional investigation into last week's deadly shooting at a Texas army base.

Mr Lieberman, who chairs the Senate Homeland Security Committee, told Fox TV that he wanted to find out whether it was a terrorist attack.

Nidal Malik Hasan, a Muslim army major, is suspected of killing 13 people.

Mr Lieberman also said he hoped to determine whether the army missed signs that Maj Hasan harboured extreme views.

The 39-year-old army psychiatrist opened fire at the Fort Hood base on Thursday. Besides those killed, 29 people were wounded. Maj Hasan was shot by a police officer and remains in a coma.

'Rants'

Mr Lieberman said that if Maj Hasan had shown signs of becoming an Islamist radical, the army should have discharged him.

The Associated Press news agency reports that some of Maj Hasan's colleagues had expressed concern about his growing anger over the US wars in Iraq and Afghanistan.

Another army psychiatrist, Val Finnell, told AP he had complained to army administrators about what he considered Maj Hasan's "anti-American" rants.

"In retrospect, I'm not surprised he did it," Mr Finnell said of the shootings.

Investigators are still looking into the motive of the attack. But Army Chief of Staff George Casey warned against speculation. He told ABC's This Week programme on Sunday that focusing on Maj Hasan's religion could "heighten the backlash" against all Muslims in the military.

Reports suggested that Maj Hasan, who was due to be sent to Afghanistan, had been increasingly unhappy in the army. His cousin told US media last week that he had been opposed to his imminent deployment, describing it as his "worst nightmare". Mr Hasan's cousin also said the gunman had been battling racial harassment because of his "Middle Eastern ethnicity."

Maj Hasan was born in the US of Palestinian parents and has been described as a devout Muslim.

KRAFT continued from page 57

true value of our company, and involves the unattractive prospect of the absorption of Cadbury into a low growth conglomerate business model," he continued.

'Long-term' value

Under Takeover Panel rules, Kraft had until 1700 GMT on Monday to make a new offer or it would have been blocked from making an approach for six months.

Kraft chairman Irene Rosenfeld questioned Cadbury's continued ability to stand alone. "We believe that our proposal offers the best immediate and long-term value for Cadbury's shareholders and for the company itself compared with any other option currently available, including Cadbury remaining independent," she said.

Shares in Cadbury, which had been more than 1% higher in Monday morning trade, then fell to stand 0.5% lower on the day at 754p.
Many investors had expected Kraft to increase its offer to tempt the board to back the proposal. Weekend reports had said that some Cadbury shareholders thought 820p a share would be a "starting point" for discussions with Kraft.

KRAFT page 59

Chavez steps up Colombia war talk (BBC News | Americas | World Edition)

Submitted at 11/8/2009 6:08:42 PM

Venezuelan President Hugo Chavez has urged his armed forces to be prepared for possible war with Colombia amid growing diplomatic and border tensions.

He said the best way to avoid war was to prepare for it. In response, Colombia said it would seek UN help.

Venezuela blames the tension with its neighbour on closer military ties between Colombia and the US. Colombia says US forces are there to help in the fight against rebels and drug traffickers.

"Let's not waste a day on our main aim: to prepare for war and to help the people prepare for war, because it is everyone's responsibility," Mr Chavez said during his TV and radio show Alo, Presidente.

Mr Chavez has also ordered 15,000 troops to the border, citing increased violence by Colombian paramilitary groups.

The BBC's Jeremy McDermott in Bogota, Colombia, says that normally such declarations would not cause alarm, but because of the current tensions there are fears of a possible spark on the border which could lead to further violence.

Frozen ties

In response to Mr Chavez's comments, Colombian President Alvaro Uribe said his government would seek help from the UN Security Council and also the Organization of American States.

"Colombia has not made nor will it make any bellicose move toward the international community, even less so toward fellow Latin American nations," a statement by Mr Uribe said.

Ties between Colombia and Venezuela have been frozen since July when Bogota said it would let the US army use its military bases for anti-drugs operations. The agreement has caused alarm among some of Colombia's neighbours, who object to an increased US military presence in the region.

When news of the deal first broke in August, Mr Chavez warned that "winds of war" were blowing across the continent.

Popular News/ Economy/ E-reader Newspaper

US trio 'on Iran spying charge' (BBC News | Americas | World Edition)

Submitted at 11/9/2009 6:43:23 AM

Three young Americans detained in Iran over alleged illegal entry are to be charged with espionage, Iranian state news agency Irna says.

Shane Bauer, Sarah Shourd and Joshua Fattal have been held by Iranian authorities since the end of July. The trio are thought to have crossed a poorly marked border by mistake while hiking in Iraq's Kurdish region.

Speaking in Berlin, US Secretary of State Hillary Clinton condemned the news and appealed for their release. "We believe strongly that there is no evidence to support any charge whatsoever," Mrs Clinton said. She urged Tehran to free the group, calling on the authorities to "exercise compassion".

'Charges of spying'

The three Americans were seized by Iranian border guards on 31 July. Their relatives say they accidentally strayed into Iran while hiking.

According to the state news agency, the move was announced by general prosecutor Abbas Jafari Dolatabadi. "The three Americans arrested near the border of Iran and Iraq are facing charges of spying and the inquiry is continuing," Irna quoted him as saying.

The prosecutor said that an opinion on the case would be given "in the not too distant future".

Swiss diplomats were allowed to meet the trio, who are in their 20s and 30s, in late September for the first time since their arrest. The Swiss government represents US interests in Iran, with whom the US has no formal diplomatic relations.

KRAFT continued from page 58

Shares in Cadbury have risen about 30% since late August.

Cadbury has 50,000 private shareholders. The largest is US investment management firm Franklin Resources, which owns just over 8%. Legal & General holds 5.2% of the firm.

Kraft will have 60 days from the posting of its offer document to gain shareholders' support for the bid unless a competitor enters the frame.

59 Health-care reform in America: Claiming a victory (The Economist: News analysis)

Submitted at 11/8/2009 5:46:38 PM

[ fivefilters.org: unable to retrieve full-text content]

A bill to reform health care squeaks through the House. The action moves to the Senate

THE House of Representatives narrowly passed a health-care bill on Saturday November 7th, a big step for those who want to reform America's $2.5 trillion health-care system. Barack Obama spent part of his Saturday making a rare visit to Capitol Hill to press some hesitating Democrats into giving their support, although in the end the tally of 220-215 in favour of the legislation, with 39 Democrats voting against, was a tight margin. The bill picked up the support of just one Republican, Joseph Cao, a first-term congressman from New Orleans who faces a tough battle for re-election next year. After securing the lone Republican's support, Steny Hoyer, the Democrats' majority leader, jokingly declared that there had been "a bipartisan vote".

Success in the House, however, is just one part of a long process. Senators must next debate their own health-care proposals, which could be brought to the Senate floor before the end of this month. If they manage to pass a bill then differences between the House and Senate versions would need to be hammered out before a final act is sent to the president to sign. Nevertheless, the House bill marks a step towards America getting the most significant piece of health-care legislation through Congress since Medicare in 1965, creating near-universal coverage for health insurance. Veterans of past health-care battles are delighted. ...

60 Media/ E-reader Newspaper

Research Brief: Radio Dominant Audio Device (MediaPost | Media News)

Submitted at 11/9/2009 5:15:20 AM

According to a Nielsen analysis of a media study conducted by the Council for Research Excellence, 77% of adults are reached by broadcast radio on a daily basis, second only to television at 95%. The study found that Web/Internet (excluding email) reached 64%, newspaper 35%, and magazines 27%.

And, in a deeper analysis of audio media titled "How U.S. Adults Use Radio and Other Forms of Audio," Nielsen found that:
• 90% of consumers listen to some form of audio media per day
• The 77% who listen to broadcast radio surpass the 37% who listen to CDs and tapes and the 12% who listen to portable audio devices.
• Almost 80% of those aged 18 to 34 listen to broadcast radio in an average day.

While the recent emergence of portable audio devices like the iPod and other MP3 players was considered a threat to traditional forms of audio, this study's evidence suggests that the new technology has had a positive effect on radio consumption. Radio was found to have a higher reach (82%) among those who listen to portable audio devices, compared to the average reach for all audio consumers.

Jeff Haley, President and CEO of the Radio Advertising Bureau (RAB), concludes that "... this... observational study of today's consumer proves that the primary source of new music is the radio."

Another key takeaway from the reports is that broadcast radio is the dominant form of audio media at home, work, and in the car. Exposure to audio listening falls into four tiers in terms of level of usage among listeners:
• Broadcast & satellite radio (79.1% daily reach; 122 minutes daily use among users)
• CDs and tapes (37.1% daily reach; 72 minutes)
• Portable audio [iPods/MP3 players] (11.6% daily reach; 69 minutes), digital audio stored on a computer, such as music files downloaded or transferred to and played on a computer (10.4% daily reach; 65 minutes average use), and digital audio streamed on a computer (9.3% daily reach; 67 minutes)
• Audio on mobile phones (

Audio Sources by Location (% of Minutes Listened)

Source                   Own Home   Car     Work
Broadcast radio          46.4%      74.2%   53.8%
Satellite radio          7.2        5.5     12.3
CDs/Tapes                20.6       16.2    4.0
Digital audio stored     8.8        -       5.0
Digital audio streamed   6.7        -       12.6
Portable audio           8.6        3.6     1.6
Other audio              1.7        -       10.6

Source: The Nielsen Company, October 2009

Other findings highlighted in the report include:
• Audio media exposure has the highest reach among those with higher levels of education and income
• Approximately 12% of study participants listened to MP3s and iPods for an average of 69 minutes per day, yet eight-in-ten of these individuals also listened to broadcast radio for an average of 97 minutes per day
• 90% of adults are exposed to some form of audio media on a daily basis, with broadcast radio having by far the largest share of

based" media platforms:
• Live television had the highest reach and daily usage among users (95.3%, 331 minutes)
• Broadcast radio (77.3% reach, 109 minutes)
• Web/Internet [excluding use of email] (63.7%, 77 minutes)
• Newspapers (34.6%, 41 minutes)
• Magazines (26.5%, 22 minutes)

Considering Portable Audio Devices:
• MP3 and iPod players averaged only 8 minutes of listening per day among the entire observed sample, with just under 90% of the sample not listening at all.
• Among listeners of portable audio devices (11.6%), the highest reach was among those aged 18 to 34 years (20.8%), singles (18.5%), and those who tend to be more technology-savvy (18.2%)
• Among those who also listened to portable audio devices such as MP3 players or iPods, broadcast radio had a daily reach of 81.6% and 97 minutes of average listening time

For additional information from Nielsen, and to access the PDF file of the study, please go here.

Media/ E-reader News/ E-reader Newspaper 61

MediaDailyNews: Clear Channel Doubles Up On Integrated Marketing Team, Taps Reichig To Oversee Outdoor Assets (MediaPost | Media News)

LibreDigital Provides AllAccess to Content (Information Today)
TV, where she developed Reichig's appointment comes innovative research initiatives to just months after Clear Channel measure the ROI of advertising hired another high-profile media buys on the cable channel. m a r k e t i n g e x e c u t i v e , J o h n Among other things, she created Partilla, 44, to serve as executive an ROI Council in which ad vice president and president of a g e n c y e x e c u t i v e s a n d global media sales of the parent advertisers directed innovative, c o m p a n y , C l e a r C h a n n e l primary research on media Communications, indicating that advertising effectiveness, and she the company remains committed also helped strike some of the t o a n a g g r e s s i v e s a l e s first upfront advertising sales development marketing strategy. deals based on guarantees of Partilla had previously been head ROI, as opposed to conventional of Time Warner's Global Media ratings delivery. Group. Reichig left Court TV when it Submitted at 11/9/2009 4:16:23 AM was sold to Time Warner and integrated into its Turner Broadcasting System unit. Reichig also has extensive experience in research and marketing for broadcast TV and online media, having worked in senior roles at iVillage, Comedy Central, the Network Television Association, AGB, Lifetime Television, and Blair/Telerep. But this is her first role explicitly developing out-of-home media, which increasingly is taking on auspices of other electronic media outlets as it integrates digital technologies and screens. "I worked in both cable and online when they were nascent industries and although outdoor is one of, if not the oldest ad platform, I see a great many parallels," Reichig tells MediaDailyNews. "Outdoor is poised to explode on the marketplace with new digital capabilities, new metrics and newly defined value. I am treating it as a new ad platform and expect it to see similar levels of growth." This content has passed through fivefilters.org. (Yahoo! 
News Search Results for e-readers) Submitted at 11/9/2009 7:10:50 AM LibreDigital Provides AllAccess to Content by Nancy Davis Kho Posted On November 9, 2009 There's no question that ereaders are garnering huge interest from consumers and enterprise users alike. But a niggling problem remains for mobile readers-how to untether content from the device on which it's read. At the recent Internet Librarian conference in Monterey, Calif., Britt Mueller, director of library services at QUALCOMM, found that employee interest was high during a recent pilot program for e-reader loans. But she was frustrated by the lack of material available to her users. "We are talking to vendors about the fact that content will drive usage [of mobile devices]. We need materials that can be used, untethered, on any device." That preference for untethered, open access formats will be a major decision factor driving device adoption patterns as ereaders move mainstream, according to Forrester analyst Sarah Rotman Epps. "Content choice really matters for consumers," Epps says. "The ability to have a broad selection of cheap content makes a difference to them." While Amazon dominates the e-reader market with its proprietary format-Forrester estimates it controls nearly 60% of e-reader market share-newer entrants are seeking to make it as easy as possible for readers to access content across devices, hewing to the EPUB standards for flexible delivery formats. L i b r e D i g i t a l () is one player in the e-reader market trying to offer maximum format flexibility to readers. The Austin, Texas-based company, which has offered publishers digital warehousing and e-distribution solutions since 1999, recently showcased a new "AllAccess" content delivery platform at the Texas Book Festival (TBF;). 
The TBF, an annual event drawing more than 200 authors and more than 35,000 book lovers, provided a peek at the technology that company officials expect to launch in 2Q 2010 to allow publishers, resellers, and authors to give readers access to ebooks LIBREDIGITAL page 63 62 E-reader News/ E-reader Newspaper E-Readers Up Close: Getting to know the Sony Readers, Part 1 (O'Reilly Media) (Yahoo! News Search Results for e-readers) available. Others include Amazon's Kindle and Barnes & Noble's Nook. Submitted at 11/9/2009 8:00:37 AM Sony offers several versions of William Stanek here, taking an this device, including the PRSup close look at e-readers. First 505 and the PRS-700. The PRSup, the Sony e-readers. 505 was introduced in 2007 and Sony unveiled its first reader the PRS-700 was introduced in device in January 2006 and the 2008. Both devices have their device became available in early strengths. 2007. The Sony Reader, like all The PRS-505 and the PRS-700 currently available e-readers, has h a v e a 6 - i n c h s c r e e n t h a t a black-and-white active matrix provides a resolution of 600x800 EPD display. As with other pixels--or approximately 170 devices and E Ink itself, the Sony pixels per inch. High contrast and Reader has evolved through high resolution, with a near 180º several generations of products. viewing angle ensures easy T h e o r i g i n a l S o n y R e a d e r reading in variety of lighting supported 4 grayscale levels and conditions. With eight levels of was able to switch the display at gray, the screen provides good a typical rate of 1.2 seconds. This display for charts, illustrations meant that the device typically and other types of graphics. displayed the next page in an e- (Readers with 8 levels of gray book in 1.2 seconds. scale are second generation. Second generation Sony Reader Readers with 16 levels of gray models support 8 grayscale levels scale are third generation.) 
or higher and are able to more Turning the page in an e-book rapidly switch the display. The takes about a second and the typical display switch rate is 40% battery supports approximately faster than the original reader at 7,500 continuous page turns on a .74 seconds or less. This means single charge. This number of that the device typically displays page turns per battery charge is the next page in an e-book in .74 fairly typical for 1st and 2nd seconds or less. Additionally, as generation e-readers. Like most e t h e P R S - 7 0 0 h a s a f a s t e r -readers, the Sony readers run the processor than the PRS-505, the Linux operating system and use a PRS-700 is able to more quickly USB 2.0 interface. Using a render the page for the display. standard USB cable, you can The Sony Portable Reader connect the reader to your System (PRS) is one of the most computer and then use the Sony a d v a n c e d r e a d e r d e v i c e s epublishing EBook E-READERS page 65 E-reader News/ E-reader Newspaper LIBREDIGITAL continued from page 61 on nearly any device. Regardless of where they are reading, AllAccess' platform formats the content such that the presentation is optimized for each device, and licensing and provisioning are all handled in the background. Visitors to the TBF website are able to use AllAccess to download The Story of Edgar Sawtelle by author David Wroblewski to their iPhone, Kindle, Sony Reader, a web browser, or desktop. Bob Carlton, vice president of marketing for LibreDigital, says that the consumer reaction to AllAccess' technology at the festival was encouraging. "We were heartened by the number of ebook readers saying ‘this is what we needed.' They are realizing that prior to this, they didn't own the content they downloaded; they were only renting it on a specific platform." Carlton sees user expectations for e-readers evolving rapidly through 2010; indeed, according to a July 2009 Forrester report titled "Who Will Buy an eReader?" 
the mainstream audience now coming on board with e-readers includes more female users and less tech-savvy people who read voraciously than the early adopters. Carlton says his company's use case research suggests that the mainstream reading public will want to read the same content across platforms during the course of a single day. "They may start a book during the commute on a laptop, switch to an iPhone while waiting in the carpool, and finish it sitting at a desktop computer screen while waiting on hold," Carlton says, an experience that publishers can offer readers via AllAccess technology. Sarah Wendell, who blogs about romance novels as Smart Bitch Sarah at the Smart Bitches, Trashy Books blog ( m),. Those existing relationships with publishers will come in handy as LibreDigital pursues digital distribution arrangements. The company says it is in discussions with a number of publishers about piloting the AllAccess platform during 1Q 2010. Regarding how the service will be priced, Carlton expects that, much like the music industry has done in differentiating price points by distribution channel, LibreDigital's flexible digital access will be bundled as a part of the publisher's overall pricing structure. "We think some will bundle it as part of their content access fee, while others will use the flexible delivery format to add value to their traditional models." The AllAccess announcement does highlight a further blurring of the lines between digital distributors, publishers, retailers, and authors. While LibreDigital has worked primarily with publishers in the past, it's not hard to imagine authors drawn to the freedom that the AllAccess platform would give them in distributing their works directly to readers. As Forrester's Epps points out, "The AllAccess announcement is interesting because it shows that new entrants are coming from adjacent markets." 
She believes this shows just how vulnerable bricks and mortar competitors such as Barnes & Noble and Borders are to new market entrants. But Carlton takes pains to point out that his company is a strong partner of Barnes & Noble, indeed powering part of its site with LibreDigital technology. "In this emerging ecosystem there are spaces where companies compete, and other spaces where they are partners. The majority of ebook reading happens on laptops and netbooks, not on devices. Retailers like the fact that AllAccess gives them freedom of choice." This content has passed through fivefilters.org. 63 Taiwan Firm Positioned for E-Reader Takeoff (New York Times) (Yahoo! News Search Results for e-readers) Submitted at 11/8/2009 12:14:05 PM TAIPEI — With the market for electronic book readers set to take off, things are looking up for a little-known Taiwanese company that will probably supply most of the “e-paper” they use. The company, Prime View International, said this summer that it would pay about $215 million to acquire E-Ink, which owns the technology for displaying text in the most popular readers, including Amazon’s Kindle and Sony’s Reader. Prime View, often referred to as P.V.I., recently sweetened its offer and says it hopes to close the deal by the end of the year. It already manufactures e-reader display modules for the Kindle and the Reader. “E-Ink is by far the leader” in the field, said John Chen, director of the display technology center at ITRI, a government-financed technology incubator in Taiwan. “P.V.I. is going to strengthen its leadership in the next year or two, before anyone else can catch up.” Demand for e-paper is expected to rise, with Amazon expanding the availability of the Kindle to Europe and the U.S. book retailer TAIWAN page 65 64 E-reader News/ E-reader Newspaper New Multimedia Device Joins Parade of EReaders (Enterprise Security Today) (Yahoo! 
News Search Results for e-readers) There are also reports that tiremaker Bridgestone is developing a flexible e-book Submitted at 11/9. E-reader Newspaper 65 TAIWAN continued from page 63 Barnes & Noble creating its own e-reader to compete with Amazon and Sony. The availability of more content and the ability to download material wirelessly has fueled demand for the devices. DisplaySearch, a market researcher based in Austin, Texas, forecasts the global market for e-paper, including epaper used in e-books, to hit $5.9 billion by 2015, from $400 million this year. This is not the first time Prime View has jumped into a growing market early. It became the first Taiwanese maker of flat-panel screens in 1994. Ten years later, in a crowded market dominated by the likes of Samsung and LG Display, it decided to focus on specialty products like custom displays for medical devices. In 2005, it acquired Philips Electronics’ e-paper display unit, in an early bet on the industry. “All the big companies like Samsung weren’t so interested in this market,” said David Hsieh, president of DisplaySearch’s Taiwan branch. “So Prime View found a good niche.” It was also a good fit considering Prime View’s pedigree. The company is a subsidiary of Yuen Foong Yu Group, a Taiwanese paper and pulp company. The group started making toilet paper and paperboard as early as 1939 and began producing coated paper in the 1950s with Japanese technology, according to its Web site. Now, one of Taiwan’s first mass producers of paper looks set, through a subsidiary, to become the world’s first mass producer of e-paper. Analysts say Prime View’s production capacity, which includes factories in South Korea it acquired in 2007, make it the only e-paper company with the scale to meet booming global demand. And the ownership of E -Ink will mean they have no intellectual property issues to overcome and can make e-paper “from head to toe,” Mr. Hsieh said. The company has its critics. 
Jeff Pu, an expert on the flat-panel industry in Taiwan, says Prime View has too much exposure in conventional liquid crystal displays. Prime View says that about half of its business concerns e-paper products. A demand dip could be punishing, said Mr. Pu, who currently analyzes the mobile industry at Fubon Securities in Taipei. For example, he said, Prime View executives told analysts in April that its Korean factories were operating at 30 percent of capacity in the first quarter of this year, and that 65 percent was “break-even level.” Mr. Pu also sees a price war coming, as AU Optronics, LG Display and others enter the epaper market. AU Optronics has the most promising e-paper technology after E-Ink, the “microcups” technology owned by its subsidiary Sipix. Prime View will have to cut its prices after it loses its first-mover advantage, Mr. Pu said. For now, Prime View is shrugging off such predictions. A company spokesman, Stephen Chen, conceded that capacity was low at the company’s Korean factories early this year but said that was because of the unusually bad economic downturn. Mr. Chen said the company did not plan to license the E-ink technology to others and declined to comment on whether it might make its own e-reader. “So far, for mass production and quality, E-Ink is the first priority for customers,” Mr. Chen said. “So I think we’ll keep the leading edge for some time — a few years is certain.” This content has passed through fivefilters.org. E-READERS continued from page 62 williamstanek at aol dot com This content has passed through fivefilters.org. Published on Nov 9, 2009 Daily News for your E-reader. Liberty Newsprint in is America's daily E-Reader Newsfeed Archive powered by Feedjournal.com's publisher aggre...
https://issuu.com/libertynewsprint/docs/libertynewsprint_nov-9-09
double array[nCols][nRows];

upon increasing nRows above approximately 250 with nCols = 100 the program was segfaulting. After some reading I came to the conclusion that I was creating arrays that were too big for the stack and should instead be allocating memory on the heap? It's quite possible that I'm very wrong here, so please point out my mistakes.

To allocate space on the heap for the arrays, and for convenience in array indexing, I chose to replace the static array creation lines with a malloc call like (found in the C FAQ):

double **array = malloc(nCols * sizeof(*array));
array[0] = malloc(nCols * nRows * sizeof(**array));
for (ii = 1; ii < nCols; ++ii)
    array[ii] = array[0] + ii * nRows;

such that indexing the array may be done according to:

for (ii = 0; ii < nCols; ++ii) {
    for (jj = 0; jj < nRows; ++jj) {
        array[ii][jj] = 0.0;
    }
}

1. Are arrays of size 300x300 large enough to overwhelm the space available on the stack (if relevant, my machine is an Intel Core 2 Duo running Linux)?
2. What is the standard approach to creating large 2D arrays, i.e., is the approach I have chosen appropriate?
3. Is it possible I have allocated too many 2D arrays (perhaps 30 in all) using the above malloc approach, and that is the cause of my simulation woes?

Any help would be great, as my brain is slowly melting away.

Dav.

<snip>
> 1. Are arrays of size 300x300 large enough to overwhelm the space
> available on the stack (if relevant my machine is an Intel Core 2 Duo
> running linux)?

struct array_double_2d {
    size_t y;
    size_t x;
    double **data;
};

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
email: rjh at the above domain

> <snip>
>> 1. Are arrays of size 300x300 large enough to overwhelm the space
>> available on the stack (if relevant my machine is an Intel Core 2 Duo
>> running linux)?
>
> Typical double on a modern desktop system is 8 bytes. 8 * 300 * 300 is
> a mere 720,000 bytes. Nowadays, that's peanuts for *dynamic*
> allocation, but could easily cause problems with static allocation,
> yes.

There are three storage durations: automatic, static, and allocated. Automatic storage duration refers to objects declared locally within a function or block (sometimes referred to as "stack"). Static storage duration refers to objects that exist throughout the lifetime of the program; they're declared with the keyword "static" or outside any function. Allocated storage duration refers to objects allocated by calls to malloc (or calloc, or realloc) (sometimes referred to as "heap").

An implementation is likely to place different limits on these three kinds of storage duration; automatic duration often has the lowest limit. As long as you can deal with the differing semantics, switching from automatic to static storage duration *might* solve your problem. Some systems also provide ways to change memory allocation limits, either system-wide or for a single process. <OT>On Unix-like systems, see "limit" or "ulimit".</OT>

Also, if the bounds of your arrays are constant and you choose to use dynamic allocation (malloc), that can simplify your code. Many examples of dynamic allocation of two-dimensional arrays are designed to allow for both dimensions being determined at execution time. If you know in advance that you want your arrays to be exactly 300 x 300, you can use a single allocation. For example:

#include <stdio.h>
#include <stdlib.h>

#define MATRIX_SIZE 300

typedef double matrix[MATRIX_SIZE][MATRIX_SIZE];

int main(void)
{
    matrix *m;
    int i, j;

    m = malloc(sizeof *m);
    if (m) {
        printf("Allocated %lu bytes\n", (unsigned long)sizeof *m);
    }
    else {
        fprintf(stderr, "Allocation failed\n");
        exit(EXIT_FAILURE);
    }
    for (i = 0; i < MATRIX_SIZE; i++) {
        for (j = 0; j < MATRIX_SIZE; j++) {
            (*m)[i][j] = i + j;
        }
    }
    printf("(*m)[123][234] = %g\n", (*m)[123][234]);
    return 0;
}
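As a side note, the two-malloc idiom from the original question can be wrapped in small helpers that add the error checking and cleanup the question's snippet omits. This is only a sketch; the function names are illustrative, not from the thread:

```c
#include <stdlib.h>

/* Allocate an nCols x nRows matrix of doubles using the two-malloc
   row-pointer idiom from the question, with error checking. */
double **alloc_matrix(size_t nCols, size_t nRows)
{
    size_t ii;
    double **array = malloc(nCols * sizeof *array);
    if (array == NULL)
        return NULL;
    array[0] = malloc(nCols * nRows * sizeof **array);
    if (array[0] == NULL) {
        free(array);
        return NULL;
    }
    /* Point each row at its slice of the single data block. */
    for (ii = 1; ii < nCols; ++ii)
        array[ii] = array[0] + ii * nRows;
    return array;
}

/* One free per successful malloc, in reverse order of allocation. */
void free_matrix(double **array)
{
    if (array != NULL) {
        free(array[0]);
        free(array);
    }
}
```

Because the element data lives in one contiguous block, this also keeps the per-matrix overhead down to two allocations, which matters when, as in the question, some 30 matrices are created.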
http://www.megasolutions.net/c/correcting-code-for-larger-2D-arrays-help-77019.aspx
4.33: Syntactic Sugar with Getting a Board Space’s Icon’s Shape and Color

def getShapeAndColor(board, boxx, boxy):
    # shape value for x, y spot is stored in board[x][y][0]
    # color value for x, y spot is stored in board[x][y][1]
    return board[boxx][boxy][0], board[boxx][boxy][1]

The getShapeAndColor() function only has one line. You might wonder why we would want a function instead of just typing in that one line of code whenever we need it. This is done for the same reason we use constant variables: it improves the readability of the code. It’s easy to figure out what code like shape, color = getShapeAndColor(board, boxx, boxy) does. But if you looked at code like shape, color = board[boxx][boxy][0], board[boxx][boxy][1], it would be a bit more difficult to figure out.
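You can see the data layout the comments describe in a tiny standalone example. The 2x2 board and its icon values below are made up for illustration; the function itself is the one from the book:

```python
def getShapeAndColor(board, boxx, boxy):
    # shape value for x, y spot is stored in board[x][y][0]
    # color value for x, y spot is stored in board[x][y][1]
    return board[boxx][boxy][0], board[boxx][boxy][1]

# A 2x2 board: each space holds a (shape, color) tuple.
board = [
    [('square', 'red'), ('circle', 'blue')],
    [('diamond', 'green'), ('lines', 'yellow')],
]

shape, color = getShapeAndColor(board, 1, 0)
print(shape, color)  # prints: diamond green
```

Reading the tuple unpacking at the call site makes the intent obvious in a way the raw double indexing does not.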
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/04%3A_Memory_Puzzle/4.33%3A_Syntactic_Sugar_with_Getting_a_Board_Space%E2%80%99s_Icon%E2%80%99s_Shape_and_Color
Graham Dumpleton wrote:
> vegetax wrote ..
>> Graham Dumpleton wrote:
>> >> - Where do I set a database connection pool to load at server
>> >> initialization, so that all requests can access it? Is the PythonImport
>> >> directive the best place? Where do I set a cleanup function for the pool
>> >> at server finalization?
>> >
>> > Cleanup function registration for stuff that should be done at time of
>> > child termination can only be done with req.server.register_cleanup().
>> > There probably should be an apache.register_cleanup() method which would
>> > be available from a module imported using PythonImport. This would then
>> > be the best way of doing it.
>> >
>> > It seems that the best one could do now is import the module when
>> > required but don't do anything at the time of import which would require
>> > a cleanup function to be registered. Then, when the first handler calls
>> > in to the actual module, require that the "req" object be passed into
>> > the pool, with those resources which need to be cleaned up later being
>> > created then with a cleanup function being registered through
>> > req.server.register_cleanup().
>> >
>> > I have added a bug report suggesting that apache.register_cleanup() be
>> > added to allow it to be used from a module imported using PythonImport.
>>
>> But it is too problematic to clean resources at request level; I think in
>> the meantime I will be cleaning up resources like connections with an
>> external script which I run after Apache exits.
>
> I perhaps didn't explain it properly or I don't understand what you are
> saying.
>
> From a request object there are two ways of registering a cleanup
> function. The first is:
>
>   req.register_cleanup()
>
> The registered function in this case will be called when the handler for
> that specific request is finished.
>
> The other which I referenced was:
>
>   req.server.register_cleanup()
>
> This registers a function which will only be called when the Apache
> process itself terminates. I.e., when one does apachectl stop or
> restart.
>
> Thus, I wasn't talking about cleaning up resources at request level.
> The unfortunate bit is that since apache.register_cleanup() doesn't
> exist, one has to defer registration of the cleanup function for process
> termination until one has access to req.server, which is at the point of
> the first request that needs the pool.

Sorry about the misunderstanding; I see the irony of
req.server.register_cleanup. Maybe adding a PythonCleanUp directive?

>> > FWIW, in Vampire, when Vampire's module importing mechanism is used a
>> > stripped down request object is available in the set of global
>> > variables during import as __req__. Thus in Vampire one could actually
>> > register a cleanup function during import by using:
>> >
>> > __req__.server.register_cleanup(....)
>> >
>> > This would save each handler having to pass the req object into a pool
>> > and means one wouldn't have to delay creation of resources which needed
>> > the cleanup function to be registered.
>>
>> Looks like a good solution when the cleanup is needed per request, and it
>> is also possible that the pool component was made by someone else and
>> can't take req as a parameter.
>
> Again, not talking per request here, as I am registering the cleanup
> handler via the server object and not the actual request object.
>
>> > BTW, have you considered other page templating solutions besides PSP?
>> > In terms of best separation between model, view and controller, or at
>> > least between the HTML that represents a page and the code that
>> > populates it, I would recommend using HTMLTemplate.
>> >
>> > Why I prefer it over PSP is that in PSP you are effectively still
>> > embedding Python code in the template itself and to render the template
>> > you are actually executing it, with there being call outs from the
>> > template to collect data. In HTMLTemplate, the template object is
>> > distinct, with your controller code making calls into the template
>> > object when desired to set fields within it. I.e., DOM like but not
>> > having the overhead of a DOM because only fillable parts of the template
>> > are indexed.
>> >
>> > What this means is that with HTMLTemplate you aren't forced to prepare
>> > all your data up front before filling in the template; instead you can
>> > fill it in bit by bit.
>> >
>> > I can supply references to examples of using HTMLTemplate from Vampire
>> > later if you are interested.
>> > Graham
>>
>> Thanks for the advice, Graham, but I don't share the philosophy of those
>> kinds of template engines. First and last, it uses its own tag language;
>> I HATE that.
>
> Huh. PSP also defines its own language within HTML so I don't really
> see the difference. I personally find the PSP syntax more confusing,
> more complicated and more error prone, especially if embedding actual
> Python code. Please don't confuse HTMLTemplate with systems like
> TAL and METAL. It is different, and much simpler, with the manner in
> which pages are constructed and then rendered being different as well.
>
>> I like PSP because it lets you embed Python code, so that I can generate
>> complex dynamic views, but the code inside the PSP is only for content
>> displaying; 100% of the form processing is done by the controller calling
>> domain objects' methods! In extremely exceptional situations the PSP code
>> will have a little processing or data gathering, but those are only
>> exceptions; the key here is that the team respects the rules.
>>
>> I work with web designers; you simply can't let them touch the dynamic
>> parts. Those parts tend to be complex; they should work on the static
>> parts and coordinate with the programmer in charge of the dynamic view
>> generation, and then plug the dynamic and static parts together. In order
>> to coordinate they MUST have some programming knowledge. Anyway, I use
>> JavaScript a lot and I don't take a person who doesn't know JavaScript as
>> a web designer. That's my point of view.
>
> When using HTMLTemplate the web designers don't need to touch
> any Python code at all and thus don't go anywhere near the controller
> aspects of the application. Because HTMLTemplate is standard XHTML
> the web designer can even use high level web design tools. All they
> need to do is add the appropriate namespace designated attributes
> in the XHTML as per the Python coders' directions as to what data will be
> filled in. The web designer can even put in dummy data in the template
> as placeholders so the pages look correct for some set of data when
> they are designing it, with the dummy content being replaced by the
> controller code when it runs.
>
> Anyway, there is probably no point continuing this particular discussion.

I also agree with you =) It's just that we have different methodologies
and/or requirements, which need different tools to work well.
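For reference, the deferred-registration pattern Graham describes (create the pool lazily on the first request, then hang a process-termination cleanup off req.server) can be sketched standalone. mod_python is not imported here, and the pool class is an illustrative stand-in rather than a real API; only the control flow is the point:

```python
# Sketch of deferring req.server.register_cleanup() until the first
# request that needs the pool. ConnectionPool is a made-up stand-in.

class ConnectionPool:
    def __init__(self):
        self.closed = False

    def close_all(self):
        # In a real pool this would close every open DB connection.
        self.closed = True

_pool = None
_cleanup_registered = False

def _close_pool(data=None):
    if _pool is not None:
        _pool.close_all()

def get_pool(req):
    """Create the pool lazily and register a process-level cleanup once."""
    global _pool, _cleanup_registered
    if _pool is None:
        _pool = ConnectionPool()
    if not _cleanup_registered:
        # req.server.register_cleanup fires when the Apache child
        # terminates, not at the end of the current request.
        req.server.register_cleanup(req, _close_pool)
        _cleanup_registered = True
    return _pool
```

Every handler then calls get_pool(req) instead of touching the module-level pool directly, so the cleanup gets registered exactly once, on whichever request happens to arrive first.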
https://modpython.org/pipermail/mod_python/2005-March/017733.html
Novice working with C#. Opened a project file to commence lessons with the Head First C# programming eBook. When reopened, errors were found. Images are uploaded to a Microsoft SkyDrive folder:

Hoping that someone experienced can stay with me until I can grasp each error and know precisely where to place the remedies. Ideally I would want guidance on the first of the five issues, then proceed. The errors read as follows:

1. The file C:\Users\maurice\Documents\Visual Studio 2010\Projects\Head First IDE Lab\Head First IDE Lab1\Form2.cs does not support code parsing or generation because it is not contained within a project that contains code
2. A namespace cannot directly contain members such as fields or methods
3. Type or namespace definition, or end of file expected
4. A namespace cannot directly contain members such as fields or methods
5. Type or namespace definition, or end of file expected

As a complete beginner, I do not know precisely where to go and how to remedy each of the above errors. Additionally, I do not know the finer points of posting, e.g. posting code as inline code.
https://www.daniweb.com/programming/software-development/threads/426755/code-error-issues
make it to 13 years. Many don’t make it to 10, or even 5. But Django is still here, still pumping out releases, still going strong. I saw an article recently which surveyed Stack Overflow questions year to year, and found interest in Django is remarkably stable. So how did that happen?

Before I try to answer that question, a reminder: this is a blog. My blog. It’s where I post my opinions, and opinions are not objective truth; if I ever need to speak ex cathedra (assuming I find some topic I could even do that with), I’ll be sure to mark it clearly. So you may well disagree with what I have to say below, or feel that I’m biased. I certainly am, and so is everyone else, and I’m not presenting this as anything other than my own opinion.

So. Here goes.

There’s an app for that

Yes, the ecosystem of third-party more-or-less-pluggable Django applications is a huge advantage. But how did we get there? For (literally) a decade now, I’ve been telling anyone who will listen that Django’s concept of an application is one of its secret weapons. At first, a lot of people asked why that mattered — after all, Python uses WSGI and WSGI has an application concept, and WSGI applications are composable! Why would Django’s variation on that matter?

If you’re not familiar with WSGI — the gateway protocol for Python web applications — it’s a CGI-like programming model (and I’ve criticized it in the past, on grounds that we really ought to be able to do better than CGI’s model). A WSGI application is a Python callable with a specific signature:

def application(environ, start_response):
    # Actual application code goes here.

Here, environ is a Python dict corresponding to the environment variables a CGI web application would access, and start_response is a callable which can be invoked with an HTTP status code and headers (and optionally information about exceptions raised). Then the application callable returns an iterable of the response body.
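A minimal runnable sketch of that signature, together with a wrapper that proxies to it in the middleware style, might look like this (my illustration, not code from the post; the names are arbitrary):

```python
# A bare WSGI application: receives environ and start_response,
# reports status and headers, returns an iterable of body bytes.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, WSGI']

def add_header(app):
    """Wrap a WSGI app, appending a header to every response it sends."""
    def wrapper(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            # Intercept the inner app's call and add our own header.
            return start_response(status,
                                  headers + [('X-Wrapped', 'yes')],
                                  exc_info)
        return app(environ, patched_start)
    return wrapper

wrapped = add_header(application)
```

The inner application never knows whether it was called by the server or by the wrapper, which is exactly the proxy-like composability discussed here.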
WSGI applications are composable, then, but they are composable in the sense that, say, HTTP proxies are. A WSGI application never (officially) knows, and often does not care, whether it has been invoked directly by the server, or by some other WSGI application proxying a request to it. This allows WSGI applications to call each other and act as “middlewares”, and provide additional functionality (for example, a WSGI application might proxy to a second application, and only modify things by compressing the other application’s response when appropriate). But there’s no explicit knowledge of what other applications are present or what functionality they might provide. Like HTTP proxies, which communicate by setting and reading particular standard or custom header values, WSGI applications communicate by setting and reading particular standard or custom keys in the environ dict which gets passed around, and interoperability comes solely from knowing which keys to look for and what their values indicate.

Deployment of Django takes place as a WSGI application; Django provides a callable implementing the WSGI signature, and then hands off to its own machinery before returning a WSGI-appropriate response. But a deployment of Django almost always consists of multiple Django applications, which gives us a hint that a Django application is something different.

A succinct description would be that a Django application is an encapsulated piece of functionality, including all the code necessary to provide that functionality and also to expose it for use by Django or by other Django applications. Django itself provides just enough consistency and standardization and API support to make this work. Django applications are expected to know about each other, and to import and call and subclass and use each other’s code as needed.

To take an example: this site, that you’re looking at right now, uses over a dozen Django applications.
Some come with the framework itself and are bundled into django.contrib, like the auth system and the session framework. Others I wrote myself specifically for this site, like the app which provides the models, views and URL routing for my blog posts. Still others were written by me, or other people, for generic re-use, like the contact form or the app that generates my Content Security Policy header.

You don’t get the gigantic ecosystem of Django apps without the framework providing the tools to make that happen. And although people have periodically suggested that Django could do more — such as more enforced structure, or options for applications to supply configuration automatically — I think it’s pretty clear that the Django application, as a flexible abstraction for writing reusable functionality, has been a gigantic success.

It’s also worth noting, since one of the biggest debates in the early days of Django was whether it would lose out to frameworks which were agnostic about component choices (i.e., bring your own preferred ORM, template language, etc. and the framework is mostly glue code to plug them together), that Django shipping its own set of components and being relatively tightly coupled to them — some of the nice integrations still work when you swap out some components, but not all, and especially not the ORM — has been one of the biggest enablers of the Django application boom. A Django application can make a lot of assumptions about available components, and doesn’t have to go through a bunch of abstracted/indirect APIs to try to make all components look the same to the framework. This does mean you lose some flexibility — you can’t, for example, just decide to drop the Django ORM and use SQLAlchemy and still have everything, or even most things, work — but the payoff from that is, well, the entire ecosystem of Django applications.

Django is boring

I love boring software, and apparently a lot of other people do, too.
And Django is a great example of boring software.

One way Django is boring is its pace of change: you can go look up Django applications written years ago, or documentation for ancient versions of Django, and a huge amount of what’s presented will still work, or require only very minor changes. There was a big rewrite of the ORM between 0.91 and 0.95, which required models and query code to be updated in basically every app, but that’s the only large-scale backwards-incompatible break Django has ever had, and it happened way back in 2006. There have, of course, been backwards-incompatible changes since then, but Django uses a combination of slow deprecation cycles and long-term support releases to minimize the impact. If you stick to an LTS, you get three years of upstream support, and in the post-Django-2.0 world, there are strong guarantees being made that if your code runs on an LTS release without raising any deprecation warnings, you can jump to the next LTS release with no code changes.

And in spite of changes over time, code written with Django today looks remarkably like code written a decade ago. Most of the changes have been in favor of making already-existing things easier and more consistent, rather than change for change’s sake, and several massive code changes have been pulled off with few or no backwards incompatibilities in APIs provided for developers. For example, the ORM has actually been rewritten multiple times over Django’s history; you just wouldn’t notice all of them unless you were using a bunch of undocumented internals, or you were otherwise digging around in the source. The persistence, not just of particular APIs but of the cohesive feel of “Django-ness”, makes it easy to learn Django once and then come back to it years later without facing a massive learning curve.

Consistency over time isn’t the only way Django is boring, though.
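As an aside, that LTS guarantee (upgrade cleanly if your code raises no deprecation warnings) is something you can check mechanically. Here is a minimal sketch using only the standard library; `old_api` is a hypothetical stand-in, not a Django API.

```python
import warnings

def old_api():
    # Hypothetical stand-in for a framework API on its way out.
    warnings.warn("old_api() is deprecated", DeprecationWarning, stacklevel=2)
    return 42

def runs_clean(func):
    """Return True if calling func raises no DeprecationWarning,
    which is the property the LTS upgrade guarantee asks you to verify."""
    with warnings.catch_warnings():
        # Promote deprecation warnings to errors for the duration of the call.
        warnings.simplefilter("error", DeprecationWarning)
        try:
            func()
            return True
        except DeprecationWarning:
            return False
```

In practice you would run your test suite with `python -W error::DeprecationWarning` to get the same effect project-wide.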
Django is also reliable: you know you can deploy something built with Django and, if your pager goes off at 2AM on a weekend, it’s almost certainly not going to be due to something wrong with Django. And if there is a serious bug, you know it’s going to be fixed soon; there’s a regular cadence for bugfix releases (they come out monthly), and show-stoppers can get releases on a faster schedule if they’re bad enough, though I don’t recall the last time there was a bug bad enough to force an emergency release (there was one security issue that memorably resulted in me rolling a release at around 2AM in a hotel room in Denver, but that had more to do with it being initially reported publicly). Of course, like any open-source project that survives long enough, Django does have its share of ancient tickets sitting open in the tracker, but most of them are feature requests that never got a design fleshed out, or major changes that probably will never happen all these years later (did we ever finally close the super-old one that asked to have the ORM completely redesigned? I should check).

Finally, in a highly subjective sense, Django is software that’s used in boring ways. I don’t mean to say nobody’s ever done something hip and trendy with Django — plenty of people have, or have tried — just that it feels like Django doesn’t have a reputation for that. Instead, Django — remember the motto, “The framework for perfectionists with deadlines” — mostly seems to have a reputation for being software that people just use to get things done, without a ton of fanfare for the technology involved. I have a lot of strong thoughts on that and why it’s a good thing, but all I’ll say for now is that the longer I work in tech, the more convinced I am that lots of conversations about or focus on technology choices at a company is a sign of other things being badly wrong.
Mostly, though, Django-powered sites and services seem to be pretty low-key about their tech stacks, to such a degree that I often find I’ve seen or used something a bunch of times, and only much later discover it was built with Django. That may not be someone else’s idea of boring, or of the good kind of boring-ness, but it is mine.

Django isn’t perfect

In the early days, a common criticism of Django was that none of the components it shipped with were the best available in Python. People would say SQLObject (remember SQLObject?) was clearly the best Python ORM, or put forward Cheetah or Genshi or Mako or plenty of other candidates as the best Python template language, FormEncode as the best forms library, WebOb/Paste as the best HTTP request/response abstraction, and use these as arguments against Django. People still do this today, to an extent, by pointing out that they’d rather use SQLAlchemy than Django’s ORM (though Django’s ORM is now a lot more powerful than people expect or remember from the old days, but that’s a story for another time).

And — from the perspective of someone who wants to choose the best available library for each role and glue them together — they were right. Django’s ORM has never been the best ORM available in Python. Django’s template library has never been the best template library available in Python. None of the components Django ships with are the best version of that component you could get.

And that’s OK.

Remember that tagline? Django didn’t advertise itself as the perfect framework. It advertised itself as “the framework for perfectionists with deadlines”. That means it’s OK not to have the absolute best possible ORM, for example, or the absolute best possible template language or forms library or the best possible anything. Django’s job is to deliver good, not best or perfect, and I think Django has been very successful at doing that.
For a working programmer, perfect is the enemy of good, and shipping something good quickly is better than maybe being able to ship something perfect much later.

There are people who would strongly disagree with me on this, and who prefer frameworks that are built to let them mix and match components to their heart’s content, or frameworks which try to pick the best of each type of component at a given moment, and focus on making them work with each other. And that’s OK, too. This is why Flask and Pyramid and other frameworks exist, and people use them. My personal preference, though, is for Django’s approach; I think it’s paid off in (subjectively) higher practicality and ease of use, and that those are a big part of why Django has been so successful.

The same thing is true in a lot of other aspects of Django. For example, I hear sometimes that Django has a reputation for security. But we’re very far from perfect on that (though to be fair, nothing is or can be perfect when it comes to security). In fact, last year at PyCon I gave a tutorial on the topic which opened by mentioning that in the (at the time) twelve years since Django’s initial release, we’d averaged one disclosed security issue every 66 days. Still, we’re apparently doing something right, or right enough, to have a reputation.

Django can’t protect you from everything, of course, or even from most things. But it does its best to protect you from or mitigate common security issues. Django also tries to make it easy to do the right thing for cases where a global default behavior isn’t sufficient; for example, several basic best practices around running over SSL are quick toggles in settings. And at the level of the framework itself, there’s a security policy which, while not perfect (and improving a few sections of it is on my to-do list), has at least led to a productive, mostly cooperative relationship with people who find issues in Django.
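For concreteness, the SSL-related "quick toggles" mentioned above look like this in a project's settings module. These are real Django setting names; the values shown are illustrative, not a complete hardening checklist.

```python
# settings.py (excerpt): SSL-related hardening toggles.
SECURE_SSL_REDIRECT = True      # redirect plain-HTTP requests to HTTPS
SESSION_COOKIE_SECURE = True    # only send the session cookie over HTTPS
CSRF_COOKIE_SECURE = True       # likewise for the CSRF cookie
SECURE_HSTS_SECONDS = 3600      # emit a Strict-Transport-Security header
```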
I could go on about this for a while, but I think the point is clear enough: Django isn’t perfect, but Django is and consistently has been good, and that goes a long way.

Django is made of people

Finally, Django is more than just the framework, or the app ecosystem. It started out as an internal tool at a single company, but then was open-sourced, and then a nonprofit foundation was created to be the steward of the copyrights and trademarks (necessary disclaimer: since January 2016 I’ve served on the Board of Directors of that foundation; this blog post is still my own personal opinions and does not represent an official position of the DSF or its Board). It started out with two “BDFLs” — Adrian and Jacob — who then stepped down from that role and were replaced with a rough consensus model and a rotating technical board to act as the ultimate tie-breaker/decision-maker when all else failed (necessary disclaimer number two, I suppose: I served on the technical board for the Django 1.10, 1.11, and 2.0 release cycles; this blog post is still just my personal opinions).

There’s a trend here: openness and bringing in more people, while keeping central “ownership” or authority as limited and little-used as possible. So while the DSF owns and protects the copyrights and trademarks, and collects and distributes donations to support and promote Django, and the technical board serves as a backstop, “Django” isn’t and never can be owned by any one person or entity. It’s the result of a worldwide community of people.

And it’s not just the code of the core framework. There are people who use Django to teach programming and web technology. There are people who put on conferences and social events. There are people who write about Django, or try to improve it, or try to make learning and using it easier. They’re respectful and friendly and welcoming and thoughtful and helpful and patient and many other virtues besides.
Prior to Django, I’d never been part of, or heard of, a community around a software project that was as good as what coalesced around Django. I think part of the reason for this was touched on above, in some of the ways Django is “boring”. Part of it was Adrian and Jacob leading by example in the early days, which attracted like-minded people who carried on the good work. Part of it was that Django, led by two liberal-arts majors turned programmers working at a newspaper, launched with very good (by the standards of open-source projects in 2005) documentation, written in a friendly and accessible way, and has continued to emphasize good documentation ever since. Part of it was probably just plain luck; some truly incredible people have, for whatever reason, decided to join the Django community and start doing wonderful things. I feel lucky to have gotten to meet and work with them, and — through roles like serving in the DSF — to support their efforts.

And I believe the community — however and whyever it formed and has kept going — is absolutely one of the strongest selling points of Django. And it shows no sign of fading any time soon.

Ite, missa est

I could probably ramble on for a good while longer, but I’ve hit the main points that, in my opinion, form the core of why Django has been successful over a (for an open-source web framework) long period. You may have noticed that none of them were things like “blazing fast performance” or “massive scalability at the push of a button” or “written by rockstar guru ninja wizard 1000x programmers”. There’s a reason for that, and if you’re unsure what it is, re-read the sections above.

Meanwhile, here’s to another thirteen years of Django, and a continuation of all the good things I’ve listed above.
https://www.b-list.org/weblog/2018/feb/22/teenage-django/
This meeting was requested by the XML CG and announced on the XML Plenary mailing list. We met in Philadelphia in conjunction with the XML'99/Markup Technologies'99 conferences. The stated purpose of the meeting was to initiate "an ad hoc task force of volunteers from the XML plenary whose job is to draft a charter to be proposed to the CG for an XML Packaging Working Group." Paula Angerstein chaired this meeting; Paul Grosso took notes. Some relevant documents:

The XML Packaging WG has an approved charter. This TF meeting is to consider the range of related issues/use cases, re-evaluate the charter in light of these issues, and make suggestions to the CG about how best to address the issues. Possible outputs include a revised charter for the XML Packaging WG, suggested work items for other existing WGs, and suggestions for other potential WGs.

Joseph: must be able to sign a package, because how a document is rendered (via a stylesheet) may be important. Sometimes reference things or make local copies of things versus actually delivering the master copy of the thing. Don't want to arbitrarily mess with things in the package.

Noah: can you use a package to define a namespace and/or class of documents (as opposed to only defining a single document)?

Ashok: versioning of instances, schemas, etc. Packaging of SVG and other graphics and binary data.

Daniel: using packaging as a cache. Packaging software distributions. Metadata about components and metadata about the package itself.

Marc: need to remember network constraints--packaging isn't always monolithic.

Daniel: a package is a resource.

Murray: the package metadata could identify WAI features.

SteveZ: getting one component out of the package without unpacking the entire package.

Daniel: identifying one component of a package as special, i.e., the "root" of a package; indexing.

SteveS: not having to download the whole package before unpacking part of it--streaming.
SteveZ: can a package be a component of a package?

Christina: metadata in package that includes contractual constraints on the information in the package. Need an index to randomly access the package. EDI packages may consist of multiple messages. May want to send fragments rather than whole documents; graphics may be associated with specific fragments. May need to package business metadata into a package in the case of archiving. Packages need to be able to reference resources as well as contain resources.

SteveS: incremental generation and consumption of packages to support streaming. Need for intra-package references.

Marc: being able to treat a package on a file system in a similar way to a package being transmitted.

Joseph: part of metadata about the package might include a URL mapping capability (e.g., to allow one to substitute a local copy of a resource).

Tim: Just addressing use cases 1, 2, and 4 would hit our 80/20 point, be buildable out of existing technology, and doable in a reasonable time frame. [Tim also elaborated on the theme of his paper that a namespace URI could point to a package of pointers to interesting things associated with this namespace.]

Dan: use case of asking for an invoice and then getting as a response an invoice plus other stuff in a package. One needs to be able to identify the invoice, so that something that only expected an invoice back can find that and ignore the rest.

Noah: Dan's case might happen in the case of layered protocols, e.g., one layer cares about the signature and another layer cares about the thing that is signed.

Ben: a fallback concept might be useful in a package, so that a single package could include multiple things to be used depending on the target output device or so.

Jon: EDXML meeting included a meeting of some XML messaging and packaging group. IETF gets in here somewhere that I couldn't understand. Jon has a draft he will circulate to us soon.
Maybe we should hold off some of our work--or redirect our work--in light of this work.

Didier: business data flow where each node is adding to the package or is putting packages within packages.

Peter: three levels of packages: logical level structure, physical level container, third level incremental creation and transmission.

This task force will communicate using the following email list: w3c-xml-pkg-tf@w3.org; everyone at this meeting plus others who have already sent the TF chair an expression of interest will be put on the initial list.
http://www.w3.org/XML/2000/07/xpkg-19991207-min
I'm preparing myself for exercise 45, where I have to create my own game. I was thinking to structure the game in a way so all my rooms are in a separate file - Rooms.py. However I can't seem to get it to work properly, or maybe it's me who doesn't understand it right.

I have my main code:

import Rooms

class Main(object):
    def __init__(self):
        self.Rooms = Rooms

    def mainClassTest(self):
        self.Rooms.Second.SecondClassTxt()

    def mainFunctionTest(self):
        self.Rooms.SecondFunctionTxt()

who = Main()
who.mainClassTest()
who.mainFunctionTest()

And I have my second code called Rooms.py - in the same folder - with the following:

class Second(object):
    def SecondClassTxt(self):
        print('SecondClassTxt')

def SecondFunctionTxt():
    print('SecondFunctionTxt')

My problem is I can get the SecondFunctionTxt() function to run, but I can't get the method inside the Second class to run. I get the following error:

self.Rooms.Second.SecondClassTxt()
TypeError: SecondClassTxt() missing 1 required positional argument: 'self'

I can see how I can just make a bunch of functions for all my rooms and be done with it, but shouldn't I also be able to access classes from my main code?
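The traceback comes from calling an instance method on the class object itself rather than on an instance. A minimal sketch of the difference, using the same Second class as above:

```python
class Second(object):
    def SecondClassTxt(self):
        print('SecondClassTxt')

# Calling through the class object fails: there is no instance to bind
# to `self`, which is exactly the TypeError quoted above.
try:
    Second.SecondClassTxt()
except TypeError as exc:
    print('error:', exc)

# Instantiating first supplies `self` automatically. In the main code this
# would be: self.Rooms.Second().SecondClassTxt()
Second().SecondClassTxt()
```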
https://forum.learncodethehardway.com/t/exercise-45-trying-to-import-another-script-but-can-get-it-to-work/3942
Hi,

I attached an updated patch. As you might have already noticed, I do not have much time to work on this project, so please keep the focus on the important things. I do not mind if Diego or someone else fixes the alignment, coding style, typo and wording problems directly in the SVN sources, or if these things are pointed out in a single review, but it is very frustrating to resubmit this patch again and again and to synchronize the main and soc svn for things that in the end do not give any real benefit. This is an unacceptable waste of my time. Thanks.

> > > > + /** decode transform type */
> > > > + if (chgroup->num_channels == 2) {
> > > > +     if (get_bits1(&s->gb)) {
> > > > +         if (get_bits1(&s->gb)) {
> > > > +             av_log_ask_for_sample(s->avctx,
> > > > +                 "unsupported channel transform type\n");
> > > > +         }
> > > > +     } else {
> > > > + ;
> > > > +     }
> > >
> > > why the special handling of 2 vs. >2 channels here?
> >
> > When the stream has only 2 channels, the channels are M/S stereo coded
> > (transform 1).
> > When the stream has more than 2 channels, the matrix multiplication is
> > used for the 2 channels that contain data for the current subframe
> > length/offset. (num_channels in the channel group != num_channels in the
> > stream)
>
> iam not sure if we talk about the same thing or if i misunderstand you but
> the 2channel in a subframe and the M/S case look pretty much the same to me
> if so i wonder why they are not handled by the same code ...

I'm not sure I really did the change you meant. See the attached patch.

> > + uint16_t channel_len; ///< channel frame length in samples
>
> do we need that in the context or can it be a local var?
> also if i understand the code the variable name is not too good

Changed to be a local var.

> > + uint16_t decoded_samples; ///< already processed samples
>
> the _number_of_ already processed samples ?

Fixed.
> > + int8_t transmit_sf; ///< transmit scale factors for the current subframe
>
> flag indicating that ... ?

Fixed.

> > + uint8_t bits_per_sample;
>
> i think this should be explained more completely as similarly named vars
> exist in AVCodecContext, that is in how far is that different ...

Fixed.

> > + int8_t num_channels;
>
> same issue

Fixed.

> > + uint8_t max_num_subframes; ///< maximum number of subframes
>
> that doxy is redundant
>
> > + int8_t num_possible_block_sizes; ///< number of distinct block sizes that can be found in the file
> > + uint16_t min_samples_per_subframe; ///< minimum samples per subframe
>
> same

Removed.

> > + int16_t sf_offsets[WMAPRO_BLOCK_SIZES][WMAPRO_BLOCK_SIZES][MAX_BANDS]; ///< scale factor resample matrix
>
> does this really need to be 16bit ?

Changed.

> > +#ifdef DEBUG
> > +     /** dump the extradata */
> > +     for (i=0 ; i<avctx->extradata_size ; i++)
> > +         av_log(avctx, AV_LOG_DEBUG, "[%x] ",avctx->extradata[i]);
> > +     av_log(avctx, AV_LOG_DEBUG, "\n");
> > +#endif
>
> dprintf()

Changed.

> > + /** subframe info */
> > + log2_num_subframes = ((s->decode_flags & 0x38) >> 3);
>
> log2_max_num_subframes ?

Changed.

> > + while (missing_samples > 0) {
>
> isnt that the same as a simple check on min_channel_len, which at the end
> should be frame len?

It is, but I don't think the code will become cleaner when min_channel_len is checked.

> > + if (channels_for_cur_subframe == 1 ||
> > +     min_samples == missing_samples)
>
> these 2 look redundant
> also the condition for reading the mask could just be used instead of
> the temporary var read_channel_mask

Did the second thing. Please explain why these 2 look redundant.

> > + /* 1 bit indicates if the subframe length is zero */
>
> no, its never zero, that would also make no sense

Oops. Fixed.
> > + /** add subframes to the individual channels */
> > + if (min_channel_len == chan->channel_len) {
> > +     --channels_for_cur_subframe;
> > +     if (channel_mask & (1<<channels_for_cur_subframe)) {
>
> id do a get_bits1() here instead of loading it in a mask and then
> extracting it
> (btw you can just do GetBitContext mask_gb= *s->gb)

Then this would reintroduce the check for the case that the subframe is used for all channels.

> > + if (vlctable) {
> > +     run = coef1_run;
> > +     level = coef1_level;
> > + } else {
> > +     run = coef0_run;
> > +     level = coef0_level;
> > + }
>
> have you tried run = coef_run[vlctable] ... or so?
> i mean it might be faster as it doesnt do a conditional branch ...

That does not seem to change much.

> > + if (i >= s->num_bands) {
> > +     av_log(s->avctx,AV_LOG_ERROR,
> > +         "invalid scale factor coding\n");
> > +     return AVERROR_INVALIDDATA;
> > + } else
> > +     s->channel[c].scale_factors[i] += (val ^ sign) - sign;
>
> the else is superflous

Fixed.

> > + s->channel[c].coeffs = &s->channel[c].out[(s->samples_per_frame>>1) + offset];
> > + memset(s->channel[c].coeffs, 0, sizeof(float) * subframe_len);
>
> cant that be avoided?

Not directly. One would have to compensate for this in three other places (before rl_decode, in vector decode when value == 0, and when the coeffs are not transmitted).

Regards
Sascha
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-August/061520.html
#include <Optimizer.h>

A class that stores image downsampling/recompression settings for monochrome images. Definition at line 153 of file Optimizer.h; member definitions at lines 156 and 164 of Optimizer.h.

Creates a MonoImageSettings object with default options.

Sets whether recompression to the specified compression method should be forced when the image is not downsampled. By default the compression method for these images will not be changed.

Sets the output compression mode for monochrome images. The default value is e_ccitt (CCITT group 4 compression).

Sets the downsample mode for monochrome images. The default value is e_default.

Sets the maximum and resampling DPI for monochrome images. By default these are set to 144 and 96 respectively.

Sets the quality for lossy compression modes, from 1 to 10 where 10 is lossless (if possible). The default value for JBIG2 is 8.5. The setting is ignored for FLATE.
https://www.pdftron.com/api/PDFTronSDK/cpp/classpdftron_1_1_p_d_f_1_1_mono_image_settings.html
package org.jboss.jmx.adaptor.control;

/**
 * A simple tuple of an mbean operation name,
 * index, signature, args and operation result.
 *
 * @author Scott.Stark@jboss.org
 * @version $Revision: 37459 $
 */
public class OpResultInfo
{
   public String name;
   public String[] signature;
   public String[] args;
   public Object result;

   public OpResultInfo()
   {
   }

   public OpResultInfo(String name, String[] signature, String[] args, Object result)
   {
      this.name = name;
      this.signature = signature;
      this.args = args;
      this.result = result;
   }
}
http://kickjava.com/src/org/jboss/jmx/adaptor/control/OpResultInfo.java.htm
After I fixed access to task->tgid in kernel/acct.c, Oleg pointed out some bad side effects with this accounting vs pid namespaces interaction. So here is the approach to make this accounting work with pid namespaces properly.

The idea is simple - when a task dies, it accounts itself in each namespace it is visible from. That was the summary; the details are in the patches.

Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
http://fixunix.com/kernel/378676-%5Bpatch-0-10%5D-make-bsd-process-accounting-work-pid-namespaces-print.html
I'm using (or rather attempting to use) a COM+ application (a ServicedComponent in .NET) to circumvent some permissions issues, as I can run the COM+ app under a different identity. I've set up a project that uses regsvcs (x64 version) to install the COM+ app after a successful build. All of the COM+ requirements such as parameterless ctors and strong-naming have been adhered to. In my testing environment the COM+ app is on the same machine. I initially saw this error in my unit tests and thought it might be a bitness (x86 vs. x64) issue, so I broke it out into an x64 test harness, but the error remains the same, which is:

"This remoting proxy has no channel sink which means either the server has no registered server channels that are listening, or this application has no suitable client channel to talk to the server."

Is it possible that when the COM+ client and COM+ component are both managed, the CLR tries to be clever and attempts to switch to using .NET remoting as a communication channel? If this is the case it's interesting, as this page on MSDN specifically talks about Remoting being a "legacy technology that is retained for backward compatibility".

What do I need to get around this issue? Is this something new brought on by some changes in .NET v4, as I have used this approach successfully in the past with no issues?
Sample code:

public class Foo : ServicedComponent
{
    public Foo() {}

    public Foo Create(string p1, int p2)
    {
        // Do stuff
        return this;
    }

    public bool DoSomething(string p3)
    {
        // Do stuff here
        return true;
    }
}

public class FooUser
{
    public FooUser()
    {
        Foo foo = new Foo().Create("One", 2);
        foo.DoSomething("Three"); // This line causes the error mentioned above
    }
}

I also tried to create an interface with the necessary methods and made the following modifications to my instantiating code based on some research:

Type fooType = Type.GetTypeFromProgID("FooNamespace.Foo", "localhost");
IFoo foo = (IFoo) Activator.CreateInstance(fooType);
foo.Create("One", 2);
foo.DoSomething("Three"); // Still the same error

It's all fun and games until somebody loses an eye.
https://social.msdn.microsoft.com/Forums/en-US/1100af92-8177-44d9-8130-bbbaf0b01480/remoting-error-when-com-client-and-com-component-are-both-managed?forum=windbg
January 6th, 2006 at 11:05 pm

interesting … I’ve been waiting for their release actually. now if I can find something like phphtmllib in python. you have any idea harry ?

January 7th, 2006 at 5:41 am

Hi Harry,

Good to see you are also doing things with Python. I am also learning it at the moment and was interested in web.py as well. Now that it’s been released I don’t find it very impressive, especially the 404 embedded error pages… But I guess it’s just a start.

I’d be interested to know which way you use to deploy/test your Python web applications: lighttpd and fastcgi? Do you use a Python library?

Thanks for the article.

January 7th, 2006 at 4:50 pm

To be frank, I haven’t really done much of anything with Python regarding web apps, aside from some stuff with the XML-RPC server, so I don’t really know much of what’s what in Python for web apps. Python’s had me more interested due to wxPython and the outstanding Win32 extensions.

There was a template engine I ran into (now lost the name / link) via the Daily Python URL which was something like WACT’s template engine - template tags become objects you can interact with - but that’s not quite phphtmllib. For anyone who knows Python better, phphtmllib is a library for building HTML (or otherwise) explicitly with objects (no templates involved) - something like DOM for HTML.

This may sound perverse but that’s actually why I think it’s looking good: it’s unimpressive, meaning it’s very simple (at least it is right now), but it already “works” and I think the API design is right for putting HTTP resources first - re earlier discussion, the “controller” has taken a less important role in the equation. Otherwise think it’s cool that the framework doesn’t “dwarf” the application using the framework. Think that provides a greater incentive to build distributable apps on top of it, with web.py included.
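For Python readers, the "building HTML explicitly with objects" style described in the comment above is easy to demonstrate with nothing but the standard library. This sketch uses ElementTree, not phphtmllib or STAN, so take it as an analogy rather than the same API:

```python
from xml.etree import ElementTree as ET

# Build a fragment object-by-object, with no template involved.
div = ET.Element("div", {"class": "post"})
title = ET.SubElement(div, "h1")
title.text = "Hello"
para = ET.SubElement(div, "p")
para.text = "Built without templates."

# Serialize the object tree back to markup.
html = ET.tostring(div, encoding="unicode")
```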
A more extreme point of view is web.py should go into the standard Python distribution one day. OK, they’re not pretty, but I think it’s definitely the right thing to be doing by having 404 behaviour in there by default. You can probably (just a guess right now - I think this is valid Python) override the default function with your own, something like:

def notfound():
    web.ctx.status = '404 Not Found'
    header('Content-Type', 'text/html')
    # add stuff here for your own 404 page

web.notfound = notfound

As said, haven’t done anything serious with Python on the web - Python has its own SimpleHTTPServer (and versions of that, like a threaded one for concurrent request handling). web.py seems to be “aware” of lighttpd and fastcgi via this stuff. More I can’t say there. Testing-wise, there’s a bunch of Python unit testers - this is a good place to start it seems, and a bunch more “web testers” of which I think all can be found here. Regarding distribution, what Python does have is distutils, for making Python modules easy to install (you write a script called setup.py), and I have used py2exe, which is an awesome tool for Python on Windows - converts your Python script(s) into a Windows executable.

At the moment I’m just in a phase of looking at what different people are doing in different languages, and not sure I’d do anything real with web.py yet. I could argue that in a way, web.py is rediscovering what PHP already does, the URL to class mapping being like PHP under Apache, mapping to PHP scripts on the filesystem, perhaps with some mod_rewrite / mod_alias / mod_actions magic. Of course not many people are writing PHP scripts like this; perhaps we should be?

January 7th, 2006 at 7:50 pm

More on the FCGI / lighttpd front: this is an excellent read - FastCGI, SCGI, and Apache: Background and Future.

January 9th, 2006 at 4:26 am

one thing I like about webpy is it let me learn python the fun way. no complicated setup just to get your database row displayed on the browser.
It kind of reminds me of the first day I wrote my PHP script. BTW, I've created a simple blog in web.py. Thanks to Aaron for helping me sort out a few problems I had along the way.

January 9th, 2006 at 4:57 pm

You use the URL "localhost:8080/pages/somepage" in your example, but your code implies that the URL "localhost:8080/page/somepage" should be used. Notice the difference (pages vs. page). In the former, you get the default web.py notfound page; in the latter, you get your behaviour. Thanks for the wonderful example, -Sam

January 10th, 2006 at 7:10 am

I find Zope 3 to be really, really cool.

January 10th, 2006 at 10:28 am

Thanks - fixed. SitePoint really needs a Python blogger - personally I have zero experience of Zope but have read other good things about Zope 3.

January 12th, 2006 at 11:15 am

I have not personally taken the time to go about playing with Python, but there is much noise about Django at and I have a post-it note telling me about that I have not yet looked at either. I have, though, looked at a video made by a user of Django, Tom Dyson. The video can be found at.

January 12th, 2006 at 3:22 pm

Regarding templates and phphtmllib: you'd probably like STAN, which sounds similar to your description. There's an example on this page:

Regarding distribution: there's a new package called setuptools that produces "Python Eggs". Eggs provide good metadata about a package, including dependencies. So, users on any platform can run "easy_install

January 12th, 2006 at 3:29 pm

Ugh. Half of my comment was eaten by angle brackets. I was saying that users can easy_install *Package* and get that package and all of its dependencies. This makes it much easier to use other code in your projects without worrying about difficulty installing. setuptools also makes sure that the correct version of a Python package is installed. Linux users get that from their distributions, but Mac and Windows users don't, and setuptools works on all.
I am the creator of TurboGears, which has gotten very popular since its release (more than 1,000 people on the high-traffic Google group). Part of the reason that TurboGears has been successful is that it helps out with many parts of building a web app and uses pre-existing components for the major parts (which is where easy_install has been exceedingly helpful!). It happens that a wiki example is the most popular demo I've done for TurboGears: This is a great time to be doing web programming in Python, because there are many people focused on making development easier.

March 1st, 2006 at 9:42 pm

[…] SitePoint shows how to build a simple wiki with web.py: More interesting was hacking something together with it—a very simple wiki which took about 2 hours to get to where it is … while reading the docs and tutorial. […]

April 8th, 2006 at 8:33 pm

[…] SitePoint Blogs » a simple wiki with web.py (tags: python web.py wiki) […]

April 14th, 2006 at 3:26 am

wikidir = os.path.realpath('./pages') will not work if you run your application through mod_python/WSGI. You need something like this:

realpath = os.path.realpath(os.path.dirname(__file__))
wikidir = os.path.join(realpath, "pages")

This will work if you run it either standalone or through mod_python.

June 8th, 2006 at 9:09 am

is retrieving a quasi-empty file with markdown moved to
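The mod_python/WSGI path fix in that last comment is easy to demonstrate: a path built from `'./pages'` depends on whatever the process's working directory happens to be, while one anchored to the module file does not. A small sketch (`'pages'` is just the directory name from the wiki example; the `'./app.py'` fallback is a placeholder for interactive sessions where `__file__` is undefined):

```python
# Why os.path.realpath('./pages') is fragile: it resolves against the
# current working directory, which under mod_python/WSGI is usually
# not the application directory.
import os

cwd_based = os.path.realpath('./pages')  # depends on os.getcwd()

# Robust: anchor to the module's own location instead.
module_file = globals().get('__file__', './app.py')  # placeholder fallback
module_dir = os.path.realpath(os.path.dirname(os.path.abspath(module_file)))
wikidir = os.path.join(module_dir, 'pages')

print(cwd_based)
print(wikidir)
```

Run the script from two different directories and `cwd_based` changes while `wikidir` stays put, which is exactly the failure mode the commenter hit under mod_python.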
[FIXED] [1.2.3] MultiField fields disappear on Window resize

I have a very simple test case in which I have a resizable Window containing a FormPanel with a MultiField built from two TextFields. When I resize the window the TextFields become invisible. If I drag and move the window around they come back. I have to mention that in version 1.2.2 it was even worse, because the fields were not visible after the window was shown. I had to drag it to make the fields visible.

Windows XP, IE7 and Hosted Mode, GXT 1.2.3

Here is the code:

Code:
public class FormClient implements EntryPoint {
    public void onModuleLoad() {
        final Window window = new Window();
        window.setLayout(new FitLayout());
        window.setHeading("Resize Me");
        window.setSize(400, 300);

        FormPanel formPanel = new FormPanel();
        TextField field1 = new TextField();
        TextField field2 = new TextField();
        MultiField multiField = new MultiField("Multi Field", field1, field2);
        formPanel.add(multiField);
        window.add(formPanel);

        Viewport viewport = new Viewport();
        viewport.add(new Button("Open Window", new SelectionListener<ComponentEvent>() {
            @Override
            public void componentSelected(ComponentEvent ce) {
                window.show();
            }
        }));
        RootPanel.get().add(viewport);
    }
}

Thanks for reporting. I have a fix ready and it will be in svn soon. Tables are bad in IE.

When it's fixed, is it going to be included in a 1.2.x release or just trunk (2.0)?
Thanks, Daniel

Both 1.2 and 2.0 code. Will be part of the next release.
Start an application using wsgiref with an optional reloader. This wraps wsgiref to fix the wrong default reporting of the multithreaded WSGI variable and adds optional SSL support. Note that on systems where localhost resolves to both IPv4 and IPv6 sockets, some browsers will try to access IPv6 first and then IPv4, which can make the development server appear slow.

New in version 0.6.

The builtin server supports SSL for testing purposes. If an SSL context is provided it will be used. That means a server can either run in HTTP or HTTPS mode, but not both. This feature requires the Python OpenSSL library. To enable it, pass an ssl_context tuple to the run_simple() method:

run_simple('localhost', 4000, application,
           ssl_context=('/path/to/the/cert.crt', '/path/to/the/key.key'))

You will then have to acknowledge the certificate in your browser once. Instead of using a tuple as ssl_context you can also create the context programmatically. This way you have better control over it:

from OpenSSL import SSL
ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.use_privatekey_file('ssl.key')
ctx.use_certificate_file('ssl.cert')
run_simple('localhost', 4000, application, ssl_context=ctx)
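For comparison, the same kind of context can be built with the standard library's ssl module. This is an aside, not part of the 0.9 docs above: newer Werkzeug releases accept an ssl.SSLContext in place of the pyOpenSSL context, and the cert/key filenames below are placeholders, so the loading call is left commented out:

```python
# Building a server-side TLS context with the stdlib ssl module
# (sketch; 'ssl.cert'/'ssl.key' are placeholder filenames).
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# ctx.load_cert_chain('ssl.cert', 'ssl.key')  # uncomment with real files
# run_simple('localhost', 4000, application, ssl_context=ctx)
```

The stdlib route avoids the pyOpenSSL dependency entirely if your Werkzeug version supports it.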
package # This is JSON::backportPP
    JSON::PP;

# JSON-2.0

use 5.005;
use strict;

use Exporter ();
BEGIN { @JSON::backportPP::ISA = ('Exporter') }

use overload ();
use JSON::backportPP::Boolean;

use Carp ();
#use Devel::Peek;

$JSON::backportPP::VERSION = '4.06';
@JSON::PP::EXPORT = qw(encode_json decode_json from_json to_json);

# instead of hash-access, i tried index-access for speed.
# but this method is not faster than what i expected. so it will be changed.

use constant P_ASCII             => 0;
use constant P_LATIN1            => 1;
use constant P_UTF8              => 2;
use constant P_INDENT            => 3;
use constant P_CANONICAL         => 4;
use constant P_SPACE_BEFORE      => 5;
use constant P_SPACE_AFTER       => 6;
use constant P_ALLOW_NONREF      => 7;
use constant P_SHRINK            => 8;
use constant P_ALLOW_BLESSED     => 9;
use constant P_CONVERT_BLESSED   => 10;
use constant P_RELAXED           => 11;
use constant P_LOOSE             => 12;
use constant P_ALLOW_BIGNUM      => 13;
use constant P_ALLOW_BAREKEY     => 14;
use constant P_ALLOW_SINGLEQUOTE => 15;
use constant P_ESCAPE_SLASH      => 16;
use constant P_AS_NONBLESSED     => 17;
use constant P_ALLOW_UNKNOWN     => 18;
use constant P_ALLOW_TAGS        => 19;

use constant OLD_PERL => $] < 5.008 ? 1 : 0;
use constant USE_B => $ENV{PERL_JSON_PP_USE_B} || 0;

BEGIN {
    if (USE_B) {
        require B;
    }
}

BEGIN {
    my @xs_compati_bit_properties = qw(
        latin1 ascii utf8 indent canonical space_before space_after allow_nonref shrink
        allow_blessed convert_blessed relaxed allow_unknown allow_tags
    );
    my @pp_bit_properties = qw(
        allow_singlequote allow_bignum loose
        allow_barekey escape_slash as_nonblessed
    );

    # Perl version check, Unicode handling is enabled?
    # Helper module sets @JSON::PP::_properties.
    if ( OLD_PERL ) {
        my $helper = $] >= 5.006 ? 'JSON::backportPP::Compat5006' : 'JSON::backportPP::Compat5005';
        eval qq| require $helper |;
        if ($@) { Carp::croak $@; }
    }

    for my $name (@xs_compati_bit_properties, @pp_bit_properties) {
        my $property_id = 'P_' . uc($name);

        eval qq/
            sub $name {
                my \$enable = defined \$_[1] ?
                                \$_[1] : 1;
                if (\$enable) {
                    \$_[0]->{PROPS}->[$property_id] = 1;
                }
                else {
                    \$_[0]->{PROPS}->[$property_id] = 0;
                }
                \$_[0];
            }

            sub get_$name {
                \$_[0]->{PROPS}->[$property_id] ? 1 : '';
            }
        /;
    }
}

# Functions

my $JSON; # cache

sub encode_json ($) { # encode
    ($JSON ||= __PACKAGE__->new->utf8)->encode(@_);
}

sub decode_json { # decode
    ($JSON ||= __PACKAGE__->new->utf8)->decode(@_);
}

# Obsoleted

sub to_json($) {
    Carp::croak ("JSON::PP::to_json has been renamed to encode_json.");
}

sub from_json($) {
    Carp::croak ("JSON::PP::from_json has been renamed to decode_json.");
}

# Methods

sub new {
    my $class = shift;
    my $self = {
        max_depth     => 512,
        max_size      => 0,
        indent_length => 3,
    };
    $self->{PROPS}[P_ALLOW_NONREF] = 1;
    bless $self, $class;
}

sub encode { return $_[0]->PP_encode_json($_[1]); }

sub decode { return $_[0]->PP_decode_json($_[1], 0x00000000); }

sub decode_prefix { return $_[0]->PP_decode_json($_[1], 0x00000001); }

# accessor

# pretty printing

sub pretty {
    my ($self, $v) = @_;
    my $enable = defined $v ? $v : 1;
    if ($enable) { # indent_length(3) for JSON::XS compatibility
        $self->indent(1)->space_before(1)->space_after(1);
    }
    else {
        $self->indent(0)->space_before(0)->space_after(0);
    }
    $self;
}

# etc

sub max_depth {
    my $max = defined $_[1] ? $_[1] : 0x80000000;
    $_[0]->{max_depth} = $max;
    $_[0];
}

sub get_max_depth { $_[0]->{max_depth}; }

sub max_size {
    my $max = defined $_[1] ?
$_[1] : 0; $_[0]->{max_size} = $max; $_[0]; } sub get_max_size { $_[0]->{max_size}; } sub boolean_values { my $self = shift; if (@_) { my ($false, $true) = @_; $self->{false} = $false; $self->{true} = $true; } else { delete $self->{false}; delete $self->{true}; } return $self; } sub get_boolean_values { my $self = shift; if (exists $self->{true} and exists $self->{false}) { return @$self{qw/false true/}; } return; } sub filter_json_object { if (defined $_[1] and ref $_[1] eq 'CODE') { $_[0]->{cb_object} = $_[1]; } else { delete $_[0]->{cb_object}; } $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0; $_[0]; } sub filter_json_single_key_object { if (@_ == 1 or @_ > 3) { Carp::croak("Usage: JSON::PP::filter_json_single_key_object(self, key, callback = undef)"); } if (defined $_[2] and ref $_[2] eq 'CODE') { $_[0]->{cb_sk_object}->{$_[1]} = $_[2]; } else { delete $_[0]->{cb_sk_object}->{$_[1]}; delete $_[0]->{cb_sk_object} unless %{$_[0]->{cb_sk_object} || {}}; } $_[0]->{F_HOOK} = ($_[0]->{cb_object} or $_[0]->{cb_sk_object}) ? 1 : 0; $_[0]; } sub indent_length { if (!defined $_[1] or $_[1] > 15 or $_[1] < 0) { Carp::carp "The acceptable range of indent_length() is 0 to 15."; } else { $_[0]->{indent_length} = $_[1]; } $_[0]; } sub get_indent_length { $_[0]->{indent_length}; } sub sort_by { $_[0]->{sort_by} = defined $_[1] ? $_[1] : 1; $_[0]; } sub allow_bigint { Carp::carp("allow_bigint() is obsoleted. 
use allow_bignum() instead."); $_[0]->allow_bignum; } ############################### ### ### Perl => JSON ### { # Convert my $max_depth; my $indent; my $ascii; my $latin1; my $utf8; my $space_before; my $space_after; my $canonical; my $allow_blessed; my $convert_blessed; my $indent_length; my $escape_slash; my $bignum; my $as_nonblessed; my $allow_tags; my $depth; my $indent_count; my $keysort; sub PP_encode_json { my $self = shift; my $obj = shift; $indent_count = 0; $depth = 0; my $props = $self->{PROPS}; ($ascii, $latin1, $utf8, $indent, $canonical, $space_before, $space_after, $allow_blessed, $convert_blessed, $escape_slash, $bignum, $as_nonblessed, $allow_tags) = @{$props}[P_ASCII .. P_SPACE_AFTER, P_ALLOW_BLESSED, P_CONVERT_BLESSED, P_ESCAPE_SLASH, P_ALLOW_BIGNUM, P_AS_NONBLESSED, P_ALLOW_TAGS]; ($max_depth, $indent_length) = @{$self}{qw/max_depth indent_length/}; $keysort = $canonical ? sub { $a cmp $b } : undef; if ($self->{sort_by}) { $keysort = ref($self->{sort_by}) eq 'CODE' ? $self->{sort_by} : $self->{sort_by} =~ /\D+/ ? $self->{sort_by} : sub { $a cmp $b }; } encode_error("hash- or arrayref expected (not a simple scalar, use allow_nonref to allow this)") if(!ref $obj and !$props->[ P_ALLOW_NONREF ]); my $str = $self->object_to_json($obj); $str .= "\n" if ( $indent ); # JSON::XS 2.26 compatible unless ($ascii or $latin1 or $utf8) { utf8::upgrade($str); } if ($props->[ P_SHRINK ]) { utf8::downgrade($str, 1); } return $str; } sub object_to_json { my ($self, $obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return $self->hash_to_json($obj); } elsif($type eq 'ARRAY'){ return $self->array_to_json($obj); } elsif ($type) { # blessed object? 
if (blessed($obj)) { return $self->value_to_json($obj) if ( $obj->isa('JSON::PP::Boolean') ); if ( $allow_tags and $obj->can('FREEZE') ) { my $obj_class = ref $obj || $obj; $obj = bless $obj, $obj_class; my @results = $obj->FREEZE('JSON'); if ( @results and ref $results[0] ) { if ( refaddr( $obj ) eq refaddr( $results[0] ) ) { encode_error( sprintf( "%s::FREEZE method returned same object as was passed instead of a new one", ref $obj ) ); } } return '("'.$obj_class.'")['.join(',', @results).']'; } if ( $convert_blessed and $obj->can('TO_JSON') ) { my $result = $obj->TO_JSON(); if ( defined $result and ref( $result ) ) { if ( refaddr( $obj ) eq refaddr( $result ) ) { encode_error( sprintf( "%s::TO_JSON method returned same object as was passed instead of a new one", ref $obj ) ); } } return $self->object_to_json( $result ); } return "$obj" if ( $bignum and _is_bignum($obj) ); if ($allow_blessed) { return $self->blessed_to_json($obj) if ($as_nonblessed); # will be removed. return 'null'; } encode_error( sprintf("encountered object '%s', but neither allow_blessed, convert_blessed nor allow_tags settings are enabled (or TO_JSON/FREEZE method missing)", $obj) ); } else { return $self->value_to_json($obj); } } else{ return $self->value_to_json($obj); } } sub hash_to_json { my ($self, $obj) = @_; my @res; encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)") if (++$depth > $max_depth); my ($pre, $post) = $indent ? $self->_up_indent() : ('', ''); my $del = ($space_before ? ' ' : '') . ':' . ($space_after ? ' ' : ''); for my $k ( _sort( $obj ) ) { if ( OLD_PERL ) { utf8::decode($k) } # key for Perl 5.6 / be optimized push @res, $self->string_to_json( $k ) . $del . ( ref $obj->{$k} ? $self->object_to_json( $obj->{$k} ) : $self->value_to_json( $obj->{$k} ) ); } --$depth; $self->_down_indent() if ($indent); return '{}' unless @res; return '{' . $pre . join( ",$pre", @res ) . $post . 
'}'; } sub array_to_json { my ($self, $obj) = @_; my @res; encode_error("json text or perl structure exceeds maximum nesting level (max_depth set too low?)") if (++$depth > $max_depth); my ($pre, $post) = $indent ? $self->_up_indent() : ('', ''); for my $v (@$obj){ push @res, ref($v) ? $self->object_to_json($v) : $self->value_to_json($v); } --$depth; $self->_down_indent() if ($indent); return '[]' unless @res; return '[' . $pre . join( ",$pre", @res ) . $post . ']'; } sub _looks_like_number { my $value = shift; if (USE_B) { my $b_obj = B::svref_2object(\$value); my $flags = $b_obj->FLAGS; return 1 if $flags & ( B::SVp_IOK() | B::SVp_NOK() ) and !( $flags & B::SVp_POK() ); return; } else { no warnings 'numeric'; # if the utf8 flag is on, it almost certainly started as a string return if utf8::is_utf8($value); # detect numbers # string & "" -> "" # number & "" -> 0 (with warning) # nan and inf can detect as numbers, so check with * 0 return unless length((my $dummy = "") & $value); return unless 0 + $value eq $value; return 1 if $value * 0 == 0; return -1; # inf/nan } } sub value_to_json { my ($self, $value) = @_; return 'null' if(!defined $value); my $type = ref($value); if (!$type) { if (_looks_like_number($value)) { return $value; } return $self->string_to_json($value); } elsif( blessed($value) and $value->isa('JSON::PP::Boolean') ){ return $$value == 1 ? 'true' : 'false'; } else { if ((overload::StrVal($value) =~ /=(\w+)/)[0]) { return $self->value_to_json("$value"); } if ($type eq 'SCALAR' and defined $$value) { return $$value eq '1' ? 'true' : $$value eq '0' ? 'false' : $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ? 
'null' : encode_error("cannot encode reference to scalar"); } if ( $self->{PROPS}->[ P_ALLOW_UNKNOWN ] ) { return 'null'; } else { if ( $type eq 'SCALAR' or $type eq 'REF' ) { encode_error("cannot encode reference to scalar"); } else { encode_error("encountered $value, but JSON can only represent references to arrays or hashes"); } } } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($self, $arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g if ($escape_slash); $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; if ($ascii) { $arg = JSON_PP_encode_ascii($arg); } if ($latin1) { $arg = JSON_PP_encode_latin1($arg); } if ($utf8) { utf8::encode($arg); } return '"' . $arg . '"'; } sub blessed_to_json { my $reftype = reftype($_[1]) || ''; if ($reftype eq 'HASH') { return $_[0]->hash_to_json($_[1]); } elsif ($reftype eq 'ARRAY') { return $_[0]->array_to_json($_[1]); } else { return 'null'; } } sub encode_error { my $error = shift; Carp::croak "$error"; } sub _sort { defined $keysort ? (sort $keysort (keys %{$_[0]})) : keys %{$_[0]}; } sub _up_indent { my $self = shift; my $space = ' ' x $indent_length; my ($pre,$post) = ('',''); $post = "\n" . $space x $indent_count; $indent_count++; $pre = "\n" . $space x $indent_count; return ($pre,$post); } sub _down_indent { $indent_count--; } sub PP_encode_box { { depth => $depth, indent_count => $indent_count, }; } } # Convert sub _encode_ascii { join('', map { $_ <= 127 ? chr($_) : $_ <= 65535 ? sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_)); } unpack('U*', $_[0]) ); } sub _encode_latin1 { join('', map { $_ <= 255 ? chr($_) : $_ <= 65535 ? 
sprintf('\u%04x', $_) : sprintf('\u%x\u%x', _encode_surrogates($_)); } unpack('U*', $_[0]) ); } sub _encode_surrogates { # from perlunicode my $uni = $_[0] - 0x10000; return ($uni / 0x400 + 0xD800, $uni % 0x400 + 0xDC00); } sub _is_bignum { $_[0]->isa('Math::BigInt') or $_[0]->isa('Math::BigFloat'); } # # JSON => Perl # my $max_intsize; BEGIN { my $checkint = 1111; for my $d (5..64) { $checkint .= 1; my $int = eval qq| $checkint |; if ($int =~ /[eE]/) { $max_intsize = $d - 1; last; } } } { # PARSE my %escapes = ( # by Jeremy Muhlich <jmuhlich [at] bitflood.org> b => "\x8", t => "\x9", n => "\xA", f => "\xC", r => "\xD", '\\' => '\\', '"' => '"', '/' => '/', ); my $text; # json data my $at; # offset my $ch; # first character my $len; # text length (changed according to UTF8 or NON UTF8) # INTERNAL my $depth; # nest counter my $encoding; # json text encoding my $is_valid_utf8; # temp variable my $utf8_len; # utf8 byte length # FLAGS my $utf8; # must be utf8 my $max_depth; # max nest number of objects and arrays my $max_size; my $relaxed; my $cb_object; my $cb_sk_object; my $F_HOOK; my $allow_bignum; # using Math::BigInt/BigFloat my $singlequote; # loosely quoting my $loose; # my $allow_barekey; # bareKey my $allow_tags; my $alt_true; my $alt_false; sub _detect_utf_encoding { my $text = shift; my @octets = unpack('C4', $text); return 'unknown' unless defined $octets[3]; return ( $octets[0] and $octets[1]) ? 'UTF-8' : (!$octets[0] and $octets[1]) ? 'UTF-16BE' : (!$octets[0] and !$octets[1]) ? 'UTF-32BE' : ( $octets[2] ) ? 'UTF-16LE' : (!$octets[2] ) ? 'UTF-32LE' : 'unknown'; } sub PP_decode_json { my ($self, $want_offset); ($self, $text, $want_offset) = @_; ($at, $ch, $depth) = (0, '', 0); if ( !defined $text or ref $text ) { decode_error("malformed JSON string, neither array, object, number, string or atom"); } my $props = $self->{PROPS}; ($utf8, $relaxed, $loose, $allow_bignum, $allow_barekey, $singlequote, $allow_tags) = @{$props}[P_UTF8, P_RELAXED, P_LOOSE .. 
P_ALLOW_SINGLEQUOTE, P_ALLOW_TAGS]; ($alt_true, $alt_false) = @$self{qw/true false/}; if ( $utf8 ) { $encoding = _detect_utf_encoding($text); if ($encoding ne 'UTF-8' and $encoding ne 'unknown') { require Encode; Encode::from_to($text, $encoding, 'utf-8'); } else { utf8::downgrade( $text, 1 ) or Carp::croak("Wide character in subroutine entry"); } } else { utf8::upgrade( $text ); utf8::encode( $text ); } $len = length $text; ($max_depth, $max_size, $cb_object, $cb_sk_object, $F_HOOK) = @{$self}{qw/max_depth max_size cb_object cb_sk_object F_HOOK/}; if ($max_size > 1) { use bytes; my $bytes = length $text; decode_error( sprintf("attempted decode of JSON text of %s bytes size, but max_size is set to %s" , $bytes, $max_size), 1 ) if ($bytes > $max_size); } white(); # remove head white space decode_error("malformed JSON string, neither array, object, number, string or atom") unless defined $ch; # Is there a first character for JSON structure? my $result = value(); if ( !$props->[ P_ALLOW_NONREF ] and !ref $result ) { decode_error( 'JSON text must be an object or array (but found number, string, true, false or null,' . ' use allow_nonref to allow this)', 1); } Carp::croak('something wrong.') if $len < $at; # we won't arrive here. my $consumed = defined $ch ? 
$at - 1 : $at; # consumed JSON text length white(); # remove tail white space return ( $result, $consumed ) if $want_offset; # all right if decode_prefix decode_error("garbage after JSON object") if defined $ch; $result; } sub next_chr { return $ch = undef if($at >= $len); $ch = substr($text, $at++, 1); } sub value { white(); return if(!defined $ch); return object() if($ch eq '{'); return array() if($ch eq '['); return tag() if($ch eq '('); return string() if($ch eq '"' or ($singlequote and $ch eq "'")); return number() if($ch =~ /[0-9]/ or $ch eq '-'); return word(); } sub string { my $utf16; my $is_utf8; ($is_valid_utf8, $utf8_len) = ('', 0); my $s = ''; # basically UTF8 flag on if($ch eq '"' or ($singlequote and $ch eq "'")){ my $boundChar = $ch; OUTER: while( defined(next_chr()) ){ if($ch eq $boundChar){ next_chr(); if ($utf16) { decode_error("missing low surrogate character in surrogate pair"); } utf8::decode($s) if($is_utf8); return $s; } elsif($ch eq '\\'){ next_chr(); if(exists $escapes{$ch}){ $s .= $escapes{$ch}; } elsif($ch eq 'u'){ # UNICODE handling my $u = ''; for(1..4){ $ch = next_chr(); last OUTER if($ch !~ /[0-9a-fA-F]/); $u .= $ch; } # U+D800 - U+DBFF if ($u =~ /^[dD][89abAB][0-9a-fA-F]{2}/) { # UTF-16 high surrogate? $utf16 = $u; } # U+DC00 - U+DFFF elsif ($u =~ /^[dD][c-fC-F][0-9a-fA-F]{2}/) { # UTF-16 low surrogate? 
unless (defined $utf16) { decode_error("missing high surrogate character in surrogate pair"); } $is_utf8 = 1; $s .= JSON_PP_decode_surrogates($utf16, $u) || next; $utf16 = undef; } else { if (defined $utf16) { decode_error("surrogate pair expected"); } if ( ( my $hex = hex( $u ) ) > 127 ) { $is_utf8 = 1; $s .= JSON_PP_decode_unicode($u) || next; } else { $s .= chr $hex; } } } else{ unless ($loose) { $at -= 2; decode_error('illegal backslash escape sequence in string'); } $s .= $ch; } } else{ if ( ord $ch > 127 ) { unless( $ch = is_valid_utf8($ch) ) { $at -= 1; decode_error("malformed UTF-8 character in JSON string"); } else { $at += $utf8_len - 1; } $is_utf8 = 1; } if (!$loose) { if ($ch =~ /[\x00-\x1f\x22\x5c]/) { # '/' ok if (!$relaxed or $ch ne "\t") { $at--; decode_error('invalid character encountered while parsing JSON string'); } } } $s .= $ch; } } } decode_error("unexpected end of string while parsing JSON string"); } sub white { while( defined $ch ){ if($ch eq '' or $ch =~ /\A[ \t\r\n]\z/){ next_chr(); } elsif($relaxed and $ch eq '/'){ next_chr(); if(defined $ch and $ch eq '/'){ 1 while(defined(next_chr()) and $ch ne "\n" and $ch ne "\r"); } elsif(defined $ch and $ch eq '*'){ next_chr(); while(1){ if(defined $ch){ if($ch eq '*'){ if(defined(next_chr()) and $ch eq '/'){ next_chr(); last; } } else{ next_chr(); } } else{ decode_error("Unterminated comment"); } } next; } else{ $at--; decode_error("malformed JSON string, neither array, object, number, string or atom"); } } else{ if ($relaxed and $ch eq '#') { # correctly? pos($text) = $at; $text =~ /\G([^\n]*(?:\r\n|\r|\n|$))/g; $at = pos($text); next_chr; next; } last; } } } sub array { my $a = $_[0] || []; # you can use this code to use another array ref object. 
decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)') if (++$depth > $max_depth); next_chr(); white(); if(defined $ch and $ch eq ']'){ --$depth; next_chr(); return $a; } else { while(defined($ch)){ push @$a, value(); white(); if (!defined $ch) { last; } if($ch eq ']'){ --$depth; next_chr(); return $a; } if($ch ne ','){ last; } next_chr(); white(); if ($relaxed and $ch eq ']') { --$depth; next_chr(); return $a; } } } $at-- if defined $ch and $ch ne ''; decode_error(", or ] expected while parsing array"); } sub tag { decode_error('malformed JSON string, neither array, object, number, string or atom') unless $allow_tags; next_chr(); white(); my $tag = value(); return unless defined $tag; decode_error('malformed JSON string, (tag) must be a string') if ref $tag; white(); if (!defined $ch or $ch ne ')') { decode_error(') expected after tag'); } next_chr(); white(); my $val = value(); return unless defined $val; decode_error('malformed JSON string, tag value must be an array') unless ref $val eq 'ARRAY'; if (!eval { $tag->can('THAW') }) { decode_error('cannot decode perl-object (package does not exist)') if $@; decode_error('cannot decode perl-object (package does not have a THAW method)'); } $tag->THAW('JSON', @$val); } sub object { my $o = $_[0] || {}; # you can use this code to use another hash ref object. my $k; decode_error('json text or perl structure exceeds maximum nesting level (max_depth set too low?)') if (++$depth > $max_depth); next_chr(); white(); if(defined $ch and $ch eq '}'){ --$depth; next_chr(); if ($F_HOOK) { return _json_object_hook($o); } return $o; } else { while (defined $ch) { $k = ($allow_barekey and $ch ne '"' and $ch ne "'") ? 
bareKey() : string(); white(); if(!defined $ch or $ch ne ':'){ $at--; decode_error("':' expected"); } next_chr(); $o->{$k} = value(); white(); last if (!defined $ch); if($ch eq '}'){ --$depth; next_chr(); if ($F_HOOK) { return _json_object_hook($o); } return $o; } if($ch ne ','){ last; } next_chr(); white(); if ($relaxed and $ch eq '}') { --$depth; next_chr(); if ($F_HOOK) { return _json_object_hook($o); } return $o; } } } $at-- if defined $ch and $ch ne ''; decode_error(", or } expected while parsing object/hash"); } sub bareKey { # doesn't strictly follow Standard ECMA-262 3rd Edition my $key; while($ch =~ /[^\x00-\x23\x25-\x2F\x3A-\x40\x5B-\x5E\x60\x7B-\x7F]/){ $key .= $ch; next_chr(); } return $key; } sub word { my $word = substr($text,$at-1,4); if($word eq 'true'){ $at += 3; next_chr; return defined $alt_true ? $alt_true : $JSON::PP::true; } elsif($word eq 'null'){ $at += 3; next_chr; return undef; } elsif($word eq 'fals'){ $at += 3; if(substr($text,$at,1) eq 'e'){ $at++; next_chr; return defined $alt_false ? $alt_false : $JSON::PP::false; } } $at--; # for decode_error report decode_error("'null' expected") if ($word =~ /^n/); decode_error("'true' expected") if ($word =~ /^t/); decode_error("'false' expected") if ($word =~ /^f/); decode_error("malformed JSON string, neither array, object, number, string or atom"); } sub number { my $n = ''; my $v; my $is_dec; my $is_exp; if($ch eq '-'){ $n = '-'; next_chr; if (!defined $ch or $ch !~ /\d/) { decode_error("malformed number (no digits after initial minus)"); } } # According to RFC4627, hex or oct digits are invalid. 
    if($ch eq '0'){
        my $peek = substr($text,$at,1);
        if($peek =~ /^[0-9a-dfA-DF]/){ # e may be valid (exponential)
            decode_error("malformed number (leading zero must not be followed by another digit)");
        }
        $n .= $ch;
        next_chr;
    }

    while(defined $ch and $ch =~ /\d/){
        $n .= $ch;
        next_chr;
    }

    if(defined $ch and $ch eq '.'){
        $n .= '.';
        $is_dec = 1;
        next_chr;

        if (!defined $ch or $ch !~ /\d/) {
            decode_error("malformed number (no digits after decimal point)");
        }
        else {
            $n .= $ch;
        }

        while(defined(next_chr) and $ch =~ /\d/){
            $n .= $ch;
        }
    }

    if(defined $ch and ($ch eq 'e' or $ch eq 'E')){
        $n .= $ch;
        $is_exp = 1;
        next_chr;

        if(defined($ch) and ($ch eq '+' or $ch eq '-')){
            $n .= $ch;
            next_chr;
            if (!defined $ch or $ch =~ /\D/) {
                decode_error("malformed number (no digits after exp sign)");
            }
            $n .= $ch;
        }
        elsif(defined($ch) and $ch =~ /\d/){
            $n .= $ch;
        }
        else {
            decode_error("malformed number (no digits after exp sign)");
        }

        while(defined(next_chr) and $ch =~ /\d/){
            $n .= $ch;
        }
    }

    $v .= $n;

    if ($is_dec or $is_exp) {
        if ($allow_bignum) {
            require Math::BigFloat;
            return Math::BigFloat->new($v);
        }
    } else {
        if (length $v > $max_intsize) {
            if ($allow_bignum) { # from Adam Sussman
                require Math::BigInt;
                return Math::BigInt->new($v);
            }
            else {
                return "$v";
            }
        }
    }

    return $is_dec ? $v/1.0 : 0+$v;
}

sub is_valid_utf8 {
    $utf8_len = $_[0] =~ /[\x00-\x7F]/ ? 1
              : $_[0] =~ /[\xC2-\xDF]/ ? 2
              : $_[0] =~ /[\xE0-\xEF]/ ? 3
              : $_[0] =~ /[\xF0-\xF4]/ ? 4
              : 0
              ;
    return unless $utf8_len;
    my $is_valid_utf8 = substr($text, $at - 1, $utf8_len);
    return ( $is_valid_utf8 =~ /^(?:
         [\x00-\x7F]
        |[\xC2-\xDF][\x80-\xBF]
        |[\xE0][\xA0-\xBF][\x80-\xBF]
        |[\xE1-\xEC][\x80-\xBF][\x80-\xBF]
        |[\xED][\x80-\x9F][\x80-\xBF]
        |[\xEE-\xEF][\x80-\xBF][\x80-\xBF]
        |[\xF0][\x90-\xBF][\x80-\xBF][\x80-\xBF]
        |[\xF1-\xF3][\x80-\xBF][\x80-\xBF][\x80-\xBF]
        |[\xF4][\x80-\x8F][\x80-\xBF][\x80-\xBF]
    )$/x ) ? $is_valid_utf8 : '';
}

sub decode_error {
    my $error  = shift;
    my $no_rep = shift;
    my $str    = defined $text ? substr($text, $at) : '';
    my $mess   = '';
    my $type   = 'U*';

    if ( OLD_PERL ) {
        my $type = $] < 5.006
            ? 'C*'
            : utf8::is_utf8( $str )
                ? 'U*' # 5.6
                : 'C*'
        ;
    }

    for my $c ( unpack( $type, $str ) ) { # emulate pv_uni_display() ?
        $mess .= $c == 0x07 ? '\a'
               : $c == 0x09 ?
'\t' : $c == 0x0a ? '\n' : $c == 0x0d ? '\r' : $c == 0x0c ? '\f' : $c < 0x20 ? sprintf('\x{%x}', $c) : $c == 0x5c ? '\\\\' : $c < 0x80 ? chr($c) : sprintf('\x{%x}', $c) ; if ( length $mess >= 20 ) { $mess .= '...'; last; } } unless ( length $mess ) { $mess = '(end of string)'; } Carp::croak ( $no_rep ? "$error" : "$error, at character offset $at (before \"$mess\")" ); } sub _json_object_hook { my $o = $_[0]; my @ks = keys %{$o}; if ( $cb_sk_object and @ks == 1 and exists $cb_sk_object->{ $ks[0] } and ref $cb_sk_object->{ $ks[0] } ) { my @val = $cb_sk_object->{ $ks[0] }->( $o->{$ks[0]} ); if (@val == 0) { return $o; } elsif (@val == 1) { return $val[0]; } else { Carp::croak("filter_json_single_key_object callbacks must not return more than one scalar"); } } my @val = $cb_object->($o) if ($cb_object); if (@val == 0) { return $o; } elsif (@val == 1) { return $val[0]; } else { Carp::croak("filter_json_object callbacks must not return more than one scalar"); } } sub PP_decode_box { { text => $text, at => $at, ch => $ch, len => $len, depth => $depth, encoding => $encoding, is_valid_utf8 => $is_valid_utf8, }; } } # PARSE sub _decode_surrogates { # from perlunicode my $uni = 0x10000 + (hex($_[0]) - 0xD800) * 0x400 + (hex($_[1]) - 0xDC00); my $un = pack('U*', $uni); utf8::encode( $un ); return $un; } sub _decode_unicode { my $un = pack('U', hex shift); utf8::encode( $un ); return $un; } # # Setup for various Perl versions (the code from JSON::PP58) # BEGIN { unless ( defined &utf8::is_utf8 ) { require Encode; *utf8::is_utf8 = *Encode::is_utf8; } if ( !OLD_PERL ) { *JSON::PP::JSON_PP_encode_ascii = \&_encode_ascii; *JSON::PP::JSON_PP_encode_latin1 = \&_encode_latin1; *JSON::PP::JSON_PP_decode_surrogates = \&_decode_surrogates; *JSON::PP::JSON_PP_decode_unicode = \&_decode_unicode; if ($] < 5.008003) { # join() in 5.8.0 - 5.8.2 is broken. 
package # hide from PAUSE JSON::PP; require subs; subs->import('join'); eval q| sub join { return '' if (@_ < 2); my $j = shift; my $str = shift; for (@_) { $str .= $j . $_; } return $str; } |; } } sub JSON::PP::incr_parse { local $Carp::CarpLevel = 1; ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_parse( @_ ); } sub JSON::PP::incr_skip { ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_skip; } sub JSON::PP::incr_reset { ( $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new )->incr_reset; } eval q{ sub JSON::PP::incr_text : lvalue { $_[0]->{_incr_parser} ||= JSON::PP::IncrParser->new; if ( $_[0]->{_incr_parser}->{incr_pos} ) { Carp::croak("incr_text cannot be called when the incremental parser already started parsing"); } $_[0]->{_incr_parser}->{incr_text}; } } if ( $] >= 5.006 ); } # Setup for various Perl versions (the code from JSON::PP58) ############################### # Utilities # BEGIN { eval 'require Scalar::Util'; unless($@){ *JSON::PP::blessed = \&Scalar::Util::blessed; *JSON::PP::reftype = \&Scalar::Util::reftype; *JSON::PP::refaddr = \&Scalar::Util::refaddr; } else{ # This code is from Scalar::Util. # warn $@; eval 'sub UNIVERSAL::a_sub_not_likely_to_be_here { ref($_[0]) }'; *JSON::PP::blessed = sub { local($@, $SIG{__DIE__}, $SIG{__WARN__}); ref($_[0]) ? eval { $_[0]->a_sub_not_likely_to_be_here } : undef; }; require B; my %tmap = qw( B::NULL SCALAR B::HV HASH B::AV ARRAY B::CV CODE B::IO IO B::GV GLOB B::REGEXP REGEXP ); *JSON::PP::reftype = sub { my $r = shift; return undef unless length(ref($r)); my $t = ref(B::svref_2object($r)); return exists $tmap{$t} ? $tmap{$t} : length(ref($$r)) ? 
                                   'REF'
              :                    'SCALAR';
        };
        *JSON::PP::refaddr = sub {
            return undef unless length(ref($_[0]));
            my $addr;
            if(defined(my $pkg = blessed($_[0]))) {
                $addr .= bless $_[0], 'Scalar::Util::Fake';
                bless $_[0], $pkg;
            }
            else {
                $addr .= $_[0];
            }
            $addr =~ /0x(\w+)/;
            local $^W; #no warnings 'portable';
            hex($1);
        }
    }
}

# shamelessly copied and modified from JSON::XS code.
$JSON::PP::true  = do { bless \(my $dummy = 1), "JSON::PP::Boolean" };
$JSON::PP::false = do { bless \(my $dummy = 0), "JSON::PP::Boolean" };

sub is_bool {
    blessed $_[0]
        and (  $_[0]->isa("JSON::PP::Boolean")
            or $_[0]->isa("Types::Serialiser::BooleanBase")
            or $_[0]->isa("JSON::XS::Boolean") );
}

sub true  { $JSON::PP::true }
sub false { $JSON::PP::false }
sub null  { undef; }

###############################

package # hide from PAUSE
    JSON::PP::IncrParser;

use strict;

use constant INCR_M_WS   => 0; # initial whitespace skipping
use constant INCR_M_STR  => 1; # inside string
use constant INCR_M_BS   => 2; # inside backslash
use constant INCR_M_JSON => 3; # outside anything, count nesting
use constant INCR_M_C0   => 4;
use constant INCR_M_C1   => 5;
use constant INCR_M_TFN  => 6;
use constant INCR_M_NUM  => 7;

$JSON::backportPP::IncrParser::VERSION = '1.01';

sub new {
    my ( $class ) = @_;
    bless {
        incr_nest => 0,
        incr_text => undef,
        incr_pos  => 0,
        incr_mode => 0,
    }, $class;
}

sub incr_parse {
    my ( $self, $coder, $text ) = @_;
    $self->{incr_text} = '' unless ( defined $self->{incr_text} );

    if ( defined $text ) {
        if ( utf8::is_utf8( $text ) and !utf8::is_utf8( $self->{incr_text} ) ) {
            utf8::upgrade( $self->{incr_text} );
            utf8::decode( $self->{incr_text} );
        }
        $self->{incr_text} .= $text;
    }

    if ( defined wantarray ) {
        my $max_size = $coder->get_max_size;
        my $p = $self->{incr_pos};
        my @ret;
        {
            do {
                unless ( $self->{incr_nest} <= 0 and $self->{incr_mode} == INCR_M_JSON ) {
                    $self->_incr_parse( $coder );

                    if ( $max_size and $self->{incr_pos} > $max_size ) {
                        Carp::croak("attempted decode of JSON text of $self->{incr_pos} bytes size, but max_size is set to $max_size");
                    }
                    unless ( $self->{incr_nest} <= 0 and $self->{incr_mode} == INCR_M_JSON ) {
                        # as an optimisation, do not accumulate white space in the incr buffer
                        if ( $self->{incr_mode} == INCR_M_WS and $self->{incr_pos} ) {
                            $self->{incr_pos}  = 0;
                            $self->{incr_text} = '';
                        }
                        last;
                    }
                }

                my ($obj, $offset) = $coder->PP_decode_json( $self->{incr_text}, 0x00000001 );
                push @ret, $obj;
                use bytes;
                $self->{incr_text} = substr( $self->{incr_text}, $offset || 0 );
                $self->{incr_pos}  = 0;
                $self->{incr_nest} = 0;
                $self->{incr_mode} = 0;
                last unless wantarray;
            } while ( wantarray );
        }

        if ( wantarray ) {
            return @ret;
        }
        else { # in scalar context
            return defined $ret[0] ? $ret[0] : undef;
        }
    }
}

sub _incr_parse {
    my ($self, $coder) = @_;
    my $text = $self->{incr_text};
    my $len  = length $text;
    my $p    = $self->{incr_pos};

    INCR_PARSE:
    while ( $len > $p ) {
        my $s = substr( $text, $p, 1 );
        last INCR_PARSE unless defined $s;
        my $mode = $self->{incr_mode};

        if ( $mode == INCR_M_WS ) {
            while ( $len > $p ) {
                $s = substr( $text, $p, 1 );
                last INCR_PARSE unless defined $s;
                if ( ord($s) > 0x20 ) {
                    if ( $s eq '#' ) {
                        $self->{incr_mode} = INCR_M_C0;
                        redo INCR_PARSE;
                    }
                    else {
                        $self->{incr_mode} = INCR_M_JSON;
                        redo INCR_PARSE;
                    }
                }
                $p++;
            }
        }
        elsif ( $mode == INCR_M_BS ) {
            $p++;
            $self->{incr_mode} = INCR_M_STR;
            redo INCR_PARSE;
        }
        elsif ( $mode == INCR_M_C0 or $mode == INCR_M_C1 ) {
            while ( $len > $p ) {
                $s = substr( $text, $p, 1 );
                last INCR_PARSE unless defined $s;
                if ( $s eq "\n" ) {
                    $self->{incr_mode} = $self->{incr_mode} == INCR_M_C0 ?
                        INCR_M_WS : INCR_M_JSON;
                    last;
                }
                $p++;
            }
            next;
        }
        elsif ( $mode == INCR_M_TFN ) {
            while ( $len > $p ) {
                $s = substr( $text, $p++, 1 );
                next if defined $s and $s =~ /[rueals]/;
                last;
            }
            $p--;
            $self->{incr_mode} = INCR_M_JSON;

            last INCR_PARSE unless $self->{incr_nest};
            redo INCR_PARSE;
        }
        elsif ( $mode == INCR_M_NUM ) {
            while ( $len > $p ) {
                $s = substr( $text, $p++, 1 );
                next if defined $s and $s =~ /[0-9eE.+\-]/;
                last;
            }
            $p--;
            $self->{incr_mode} = INCR_M_JSON;

            last INCR_PARSE unless $self->{incr_nest};
            redo INCR_PARSE;
        }
        elsif ( $mode == INCR_M_STR ) {
            while ( $len > $p ) {
                $s = substr( $text, $p, 1 );
                last INCR_PARSE unless defined $s;
                if ( $s eq '"' ) {
                    $p++;
                    $self->{incr_mode} = INCR_M_JSON;

                    last INCR_PARSE unless $self->{incr_nest};
                    redo INCR_PARSE;
                }
                elsif ( $s eq '\\' ) {
                    $p++;
                    if ( !defined substr($text, $p, 1) ) {
                        $self->{incr_mode} = INCR_M_BS;
                        last INCR_PARSE;
                    }
                }
                $p++;
            }
        }
        elsif ( $mode == INCR_M_JSON ) {
            while ( $len > $p ) {
                $s = substr( $text, $p++, 1 );
                if ( $s eq "\x00" ) {
                    $p--;
                    last INCR_PARSE;
                }
                elsif ( $s eq "\x09" or $s eq "\x0a" or $s eq "\x0d" or $s eq "\x20" ) {
                    if ( !$self->{incr_nest} ) {
                        $p--; # do not eat the whitespace, let the next round do it
                        last INCR_PARSE;
                    }
                    next;
                }
                elsif ( $s eq 't' or $s eq 'f' or $s eq 'n' ) {
                    $self->{incr_mode} = INCR_M_TFN;
                    redo INCR_PARSE;
                }
                elsif ( $s =~ /^[0-9\-]$/ ) {
                    $self->{incr_mode} = INCR_M_NUM;
                    redo INCR_PARSE;
                }
                elsif ( $s eq '"' ) {
                    $self->{incr_mode} = INCR_M_STR;
                    redo INCR_PARSE;
                }
                elsif ( $s eq '[' or $s eq '{' ) {
                    if ( ++$self->{incr_nest} > $coder->get_max_depth ) {
                        Carp::croak('json text or perl structure exceeds maximum nesting level (max_depth set too low?)');
                    }
                    next;
                }
                elsif ( $s eq ']' or $s eq '}' ) {
                    if ( --$self->{incr_nest} <= 0 ) {
                        last INCR_PARSE;
                    }
                }
                elsif ( $s eq '#' ) {
                    $self->{incr_mode} = INCR_M_C1;
                    redo INCR_PARSE;
                }
            }
        }
    }

    $self->{incr_pos} = $p;
    $self->{incr_parsing} = $p ?
1 : 0; # for backward compatibility
}

sub incr_text {
    if ( $_[0]->{incr_pos} ) {
        Carp::croak("incr_text cannot be called when the incremental parser already started parsing");
    }
    $_[0]->{incr_text};
}

sub incr_skip {
    my $self = shift;
    $self->{incr_text} = substr( $self->{incr_text}, $self->{incr_pos} );
    $self->{incr_pos}  = 0;
    $self->{incr_mode} = 0;
    $self->{incr_nest} = 0;
}

sub incr_reset {
    my $self = shift;
    $self->{incr_text} = undef;
    $self->{incr_pos}  = 0;
    $self->{incr_mode} = 0;
    $self->{incr_nest} = 0;
}

###############################

1;
__END__

=pod

=head1 NAME

JSON::PP - JSON::XS compatible pure-Perl module.

=head1 VERSION

4.05

=head1 DESCRIPTION

JSON::PP is a pure perl JSON decoder/encoder, and (almost) compatible with the
much faster L<JSON::XS> written by Marc Lehmann in C. JSON::PP works as a
fallback module when you use the L<JSON> module without having installed
JSON::XS. You may also want to try L<JSON::Tiny>, which is derived from the
L<Mojolicious> web framework and is also smaller and faster than JSON::PP.

JSON::PP has been in the Perl core since Perl 5.14, mainly for CPAN toolchain
modules to parse META.json.

=head1 FUNCTIONAL INTERFACE

This section is taken from JSON::XS almost verbatim. C<encode_json> and
C<decode_json> are exported by default.

=head2 encode_json

    $json_text = encode_json $perl_scalar

Converts the given Perl data structure to a UTF-8 encoded, binary string.

This function call is functionally identical to:

    $json_text = JSON::PP->new->utf8->encode($perl_scalar)

Except being faster.

=head2 decode_json

    $perl_scalar = decode_json $json_text

The opposite of C<encode_json>: expects an UTF-8 (binary) string and tries
to parse that as an UTF-8 encoded JSON text, returning the resulting
reference. Croaks on error.

This function call is functionally identical to:

    $perl_scalar = JSON::PP->new->utf8->decode($json_text)

Except being faster.

=head2 JSON::PP::is_bool

    $is_boolean = JSON::PP::is_bool($scalar)

Returns true if the passed scalar represents either JSON::PP::true or
JSON::PP::false, two constants that act like C<1> and C<0> respectively
and are also used to represent JSON C<true> and C<false> in Perl strings.

See L<MAPPING>, below, for more information on how JSON values are mapped
to Perl.
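As a quick illustration of the functional interface documented above, the following sketch (assuming JSON::PP is installed and loadable) round-trips a small structure and checks a decoded boolean:

```perl
use strict;
use warnings;
use JSON::PP qw(encode_json decode_json);

# Encode a Perl hashref to a UTF-8 encoded, binary JSON string.
my $json_text = encode_json { name => "JSON::PP", in_core_since => "5.14" };

# Decode it back into a Perl data structure (a hashref here).
my $data = decode_json $json_text;
print $data->{name}, "\n";

# JSON booleans decode to JSON::PP::true / JSON::PP::false objects,
# which is_bool() can detect.
my $flags = decode_json '{"enabled":true}';
print "got a JSON boolean\n" if JSON::PP::is_bool( $flags->{enabled} );
```

The hash key names here are illustrative only; any data structure of hashrefs, arrayrefs, and plain scalars round-trips the same way.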
=head1 OBJECT-ORIENTED INTERFACE

This section is also taken from JSON::XS. The object oriented interface
lets you configure your own encoding or decoding style, within the limits
of supported formats.

=head2 new

    $json = JSON::PP->new

Creates a new JSON::PP object that can be used to de/encode JSON
strings. All boolean flags described below are by default I<disabled>
(with the exception of C<allow_nonref>, which defaults to I<enabled> since
version C<4.0>).

The mutators for flags all return the JSON::PP object again and thus calls
can be chained:

    my $json = JSON::PP->new->utf8->space_after->encode({a => [1,2]})
    => {"a": [1, 2]}

=head2 ascii

    $json = $json->ascii([$enable])

    $enabled = $json->get_ascii

If C<$enable> is true (or missing), then the C<encode> method will not
generate characters outside the code range C<0..127> (which is ASCII). Any
Unicode characters outside that range will be escaped using either a
single C<\uXXXX> or a double C<\uHHHH\uLLLL> escape sequence, as per
RFC4627.

If C<$enable> is false, then the C<encode> method will not escape Unicode
characters unless required by the JSON syntax or other flags. This results
in a faster and more compact format.

See also the section I<ENCODING/CODESET FLAG NOTES> later in this
document.

The main use for this flag is to produce JSON texts that can be
transmitted over a 7-bit channel, as the encoded JSON texts will not
contain any 8 bit characters.

    JSON::PP->new->ascii(1)->encode([chr 0x10401])
    => ["\ud801\udc01"]

=head2 latin1

    $json = $json->latin1([$enable])

    $enabled = $json->get_latin1

If C<$enable> is true (or missing), then the C<encode> method will encode
the resulting JSON text as latin1 (or iso-8859-1), escaping any characters
outside the code range C<0..255>. The resulting string can be treated as a
latin1-encoded JSON text or a native Unicode string. The C<decode> method
will not be affected in any way by this flag, as C<decode> by default
expects Unicode, which is a strict superset of latin1.

If C<$enable> is false, then the C<encode> method will not escape Unicode
characters unless required by the JSON syntax or other flags.
See also the section I<ENCODING/CODESET FLAG NOTES> later in this
document.

    JSON::PP->new->latin1->encode (["\x{89}\x{abc}"])
    => ["\x{89}\\u0abc"]    # (perl syntax, U+abc escaped, U+89 not)

=head2 utf8

    $json = $json->utf8([$enable])

    $enabled = $json->get_utf8

If C<$enable> is true (or missing), then the C<encode> method will encode
the JSON result into UTF-8, as required by many protocols, while the
C<decode> method expects to be handed an UTF-8-encoded string. Please
note that UTF-8-encoded strings do not contain any characters outside the
range C<0..255>, they are thus useful for bytewise/binary I/O.

In future versions, enabling this option might enable autodetection of the
UTF-16 and UTF-32 encoding families, as described in RFC4627.

If C<$enable> is false, then the C<encode> method will return the JSON
string as a (non-encoded) Unicode string, while C<decode> expects thus a
Unicode string. Any decoding or encoding (e.g. to UTF-8 or UTF-16) needs
to be done yourself, e.g. using the Encode module.

See also the section I<ENCODING/CODESET FLAG NOTES> later in this
document.

Example, output UTF-16BE-encoded JSON:

    use Encode;
    $jsontext = encode "UTF-16BE", JSON::PP->new->encode ($object);

Example, decode UTF-32LE-encoded JSON:

    use Encode;
    $object = JSON::PP->new->decode (decode "UTF-32LE", $jsontext);

=head2 pretty

    $json = $json->pretty([$enable])

This enables (or disables) all of the C<indent>, C<space_before> and
C<space_after> (and in the future possibly more) flags in one call to
generate the most readable (or most compact) form possible.

=head2 indent

    $json = $json->indent([$enable])

    $enabled = $json->get_indent

If C<$enable> is true (or missing), then the C<encode> method will use a
multiline format as output, putting every array member or object/hash
key-value pair into its own line, indenting them properly.

If C<$enable> is false, no newlines or indenting will be produced, and the
resulting JSON text is guaranteed not to contain any C<newlines>.

This setting has no effect when decoding JSON texts.
The default indent space length is three. You can use C<indent_length> to
change the length of the indent. You will also most likely combine this
setting with C<space_after>.

=head2 relaxed

    $json = $json->relaxed([$enable])

    $enabled = $json->get_relaxed

If C<$enable> is true (or missing), then C<decode> will accept some
extensions to normal JSON syntax (see below). C<encode> will not be
affected in any way. I<Be aware that this option makes you accept invalid
JSON texts as if they were valid!>

If C<$enable> is false (the default), then C<decode> will only accept
valid JSON texts.

Currently accepted extensions are:

=over 4

=item * list items can have an end-comma

JSON I<separates> array elements and key-value pairs with commas. This can
be annoying if you write JSON texts manually and want to be able to
quickly append elements, so this extension accepts comma at the end of
such items not just between them:

    [
       1,
       2, <- this comma not normally allowed
    ]
    {
       "k1": "v1",
       "k2": "v2", <- this comma not normally allowed
    }

=item * shell-style '#'-comments

Whenever JSON allows whitespace, shell-style comments are additionally
allowed. They are terminated by the first carriage-return or line-feed
character, after which more white-space and comments are allowed.

    [
       1, # this comment not allowed in JSON
          # neither this one...
    ]

=item * C-style multiple-line '/* */'-comments (JSON::PP only)

Whenever JSON allows whitespace, C-style multiple-line comments are
additionally allowed. Everything between C</*> and C<*/> is a comment,
after which more white-space and comments are allowed.

    [
       1, /* this comment not allowed in JSON */
          /* neither this one... */
    ]

=item * C++-style one-line '//'-comments (JSON::PP only)

Whenever JSON allows whitespace, C++-style one-line comments are
additionally allowed. They are terminated by the first carriage-return or
line-feed character, after which more white-space and comments are
allowed.

    [
       1, // this comment not allowed in JSON
          // neither this one...
    ]

=item * literal ASCII TAB characters in strings

Literal ASCII TAB characters are now allowed in strings (and treated as
C<\t>).

    [
       "Hello\tWorld",
       "Hello<TAB>World", # literal <TAB> would not normally be allowed
    ]

=back

=head2 allow_nonref

    $json = $json->allow_nonref([$enable])

    $enabled = $json->get_allow_nonref

Unlike other boolean options, this option is enabled by default beginning
with version C<4.0>.

If C<$enable> is true (or missing), then the C<encode> method can convert
a non-reference into its corresponding string, number or null JSON value,
which is an extension to RFC4627. Likewise, C<decode> will accept those
JSON values instead of croaking.

If C<$enable> is false, then the C<encode> method will croak if it isn't
passed an arrayref or hashref, as JSON texts must either be an object or
array. Likewise, C<decode> will croak if given something that is not a
JSON object or array.

Example, encode a Perl scalar as JSON value without enabled
C<allow_nonref>, resulting in an error:

    JSON::PP->new->allow_nonref(0)->encode ("Hello, World!")
    => hash- or arrayref expected...

=head2 allow_unknown

    $json = $json->allow_unknown([$enable])

    $enabled = $json->get_allow_unknown

If C<$enable> is true (or missing), then C<encode> will I<not> throw an
exception when it encounters values it cannot represent in JSON (for
example, filehandles) but instead will encode a JSON C<null> value. Note
that blessed objects are not included here and are handled separately by
C<allow_blessed>.

If C<$enable> is false (the default), then C<encode> will throw an
exception when it encounters anything it cannot encode as JSON.

This option does not affect C<decode> in any way, and it is recommended to
leave it off unless you know your communications partner.
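To make the effect of the C<allow_nonref> and C<allow_unknown> flags concrete, here is a small sketch (assuming JSON::PP 4.x, where C<allow_nonref> is enabled by default):

```perl
use strict;
use warnings;
use JSON::PP;

# allow_nonref is on by default since 4.0, so a lone scalar encodes fine.
my $json = JSON::PP->new;
print $json->encode("Hello, World!"), "\n";   # a bare JSON string

# A filehandle has no JSON representation. With allow_unknown enabled,
# encode emits null instead of croaking.
my $coder = JSON::PP->new->allow_unknown;
print $coder->encode( [ \*STDOUT ] ), "\n";   # [null]
```

Without C<allow_unknown>, the second C<encode> call would throw an exception instead.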
=head2 allow_blessed

    $json = $json->allow_blessed([$enable])

    $enabled = $json->get_allow_blessed

See L<OBJECT SERIALISATION> for details.

If C<$enable> is true (or missing), then the C<encode> method will not
barf when it encounters a blessed reference that it cannot convert
otherwise. Instead, a JSON C<null> value is encoded instead of the object.

If C<$enable> is false (the default), then C<encode> will throw an
exception when it encounters a blessed object that it cannot convert
otherwise.

This setting has no effect on C<decode>.

=head2 convert_blessed

    $json = $json->convert_blessed([$enable])

    $enabled = $json->get_convert_blessed

See L<OBJECT SERIALISATION> for details.

If C<$enable> is true (or missing), then C<encode>, upon encountering a
blessed object, will check for the availability of the C<TO_JSON> method
on the object's class. If found, it will be called in scalar context and
the resulting scalar will be encoded instead of the object. The name
C<TO_JSON> was chosen to avoid collisions with any C<to_json> function or
method.

If C<$enable> is false (the default), then C<encode> will not consider
this type of conversion.

This setting has no effect on C<decode>.

=head2 allow_tags

    $json = $json->allow_tags([$enable])

    $enabled = $json->get_allow_tags

See L<OBJECT SERIALISATION> for details.

If C<$enable> is true (or missing), then C<encode>, upon encountering a
blessed object, will check for the availability of the C<FREEZE> method on
the object's class. If found, it will be used to serialise the object into
a nonstandard tagged JSON value (that JSON decoders cannot decode).

It also causes C<decode> to parse such tagged JSON values and deserialise
them via a call to the C<THAW> method.

If C<$enable> is false (the default), then C<encode> will not consider
this type of conversion, and tagged JSON values will cause a parse error
in C<decode>, as if tags were not part of the grammar.

=head2 boolean_values

    $json->boolean_values([$false, $true])

    ($false, $true) = $json->get_boolean_values

By default, JSON booleans will be decoded as overloaded
C<$JSON::PP::false> and C<$JSON::PP::true> objects.
With this method you can specify your own boolean values for decoding -
on decode, JSON C<false> will be decoded as a copy of C<$false>, and JSON
C<true> will be decoded as C<$true> ("copy" here is the same thing as
assigning a value to another variable, i.e. C<$copy = $false>).

This is useful when you want to pass a decoded data structure directly to
other serialisers like YAML, Data::MessagePack and so on.

Note that this works only when you C<decode>. You can set incompatible
boolean objects (like L<boolean>), but when you C<encode> a data structure
with such boolean objects, you still need to enable C<convert_blessed>
(and add a C<TO_JSON> method if necessary).

Calling this method without any arguments will reset the booleans to their
default values.

C<get_boolean_values> will return both C<$false> and C<$true> values, or
the empty list when they are set to the default.

=head2 filter_json_object

    $json = $json->filter_json_object([$coderef])

When C<$coderef> is specified, it will be called from C<decode> each time
it decodes a JSON object. The only argument is a reference to the
newly-created hash. If the code reference returns a single scalar (which
need not be a reference), this value (or rather a copy of it) is inserted
into the deserialised data structure. If it returns an empty list (NOTE:
I<not> C<undef>, which is a valid scalar), the original deserialised hash
will be inserted. This setting can slow down decoding considerably.

When C<$coderef> is omitted or undefined, any existing callback will be
removed and C<decode> will not change the deserialised hash in any way.

Example, convert all JSON objects into the integer 5:

    my $js = JSON::PP->new->filter_json_object(sub { 5 });

    # returns [5]
    $js->decode('[{}]');

    # returns 5
    $js->decode('{"a":1, "b":2}');

=head2 shrink

    $json = $json->shrink([$enable])

    $enabled = $json->get_shrink

If C<$enable> is true (or missing), the string returned by C<encode> will
be shrunk (i.e. downgraded if possible).
The actual definition of what shrink does might change in future versions,
but it will always try to save space at the expense of time.

If C<$enable> is false, then JSON::PP does nothing.

=head2 max_depth

    $json = $json->max_depth([$maximum_nesting_depth])

    $max_depth = $json->get_max_depth

Sets the maximum nesting level (default C<512>) accepted while encoding or
decoding. If a higher nesting level is detected in JSON text or a Perl
data structure, then the encoder and decoder will stop and croak at that
point.

Nesting level is defined by the number of hash- or arrayrefs that the
encoder needs to traverse to reach a given point, or the number of C<{> or
C<[> characters without their matching closing parenthesis crossed to
reach a given character in a string.

See I<SECURITY CONSIDERATIONS> in L<JSON::XS> for more info on why this is
useful.

=head2 encode

    $json_text = $json->encode($perl_scalar)

Converts the given Perl value or data structure to its JSON
representation. Croaks on error.

=head2 decode

    $perl_scalar = $json->decode($json_text)

The opposite of C<encode>: expects a JSON text and tries to parse it,
returning the resulting simple scalar or reference. Croaks on error.

=head2 decode_prefix

    ($perl_scalar, $characters) = $json->decode_prefix($json_text)

This works like the C<decode> method, but instead of raising an exception
when there is trailing garbage after the first JSON object, it will
silently stop parsing there and return the number of characters consumed
so far.

This is useful if your JSON texts are not delimited by an outer protocol
and you need to know where the JSON text ends.

    JSON::PP->new->decode_prefix ("[1] the tail")
    => ([1], 3)

=head1 ADDITIONAL METHODS

The following methods are also supported by L<Cpanel::JSON::XS>, a fork of
JSON::XS by Reini Urban, which supports some of these (with a different
set of incompatibilities). Most of these historical flags are only kept
for backward compatibility, and should not be used in a new application.
=head2 allow_singlequote

    $json = $json->allow_singlequote([$enable])

    $enabled = $json->get_allow_singlequote

If C<$enable> is true (or missing), then C<decode> will accept invalid
JSON texts that contain strings that begin and end with single quotation
marks. C<encode> will not be affected in any way.

    $json->allow_singlequote->decode(qq|{"foo":'bar'}|);
    $json->allow_singlequote->decode(qq|{'foo':"bar"}|);
    $json->allow_singlequote->decode(qq|{'foo':'bar'}|);

=head2 allow_barekey

    $json = $json->allow_barekey([$enable])

    $enabled = $json->get_allow_barekey

If C<$enable> is true (or missing), then C<decode> will accept invalid
JSON texts that contain JSON objects whose names don't begin and end with
quotation marks. C<encode> will not be affected in any way.

    $json->allow_barekey->decode(qq|{foo:"bar"}|);

=head2 allow_bignum

    $json = $json->allow_bignum([$enable])

    $enabled = $json->get_allow_bignum

If C<$enable> is true (or missing), then C<decode> will convert big
integers Perl cannot handle as integer into L<Math::BigInt> objects and
convert floating numbers into L<Math::BigFloat> objects. C<encode> will
convert C<Math::BigInt> and C<Math::BigFloat> objects into JSON numbers.

    $json->allow_nonref->allow_bignum;
    $bigfloat = $json->decode('2.000000000000000000000000001');
    print $json->encode($bigfloat);
    # => 2.000000000000000000000000001

See also L<MAPPING>.

=head2 loose

    $json = $json->loose([$enable])

    $enabled = $json->get_loose

If C<$enable> is true (or missing), then C<decode> will accept invalid
JSON texts that contain unescaped [\x00-\x1f\x22\x5c] characters.
C<encode> will not be affected in any way.

    $json->loose->decode(qq|["abc
                             def"]|);

=head2 escape_slash

    $json = $json->escape_slash([$enable])

    $enabled = $json->get_escape_slash

If C<$enable> is true (or missing), then C<encode> will explicitly escape
I<slash> (solidus; C<U+002F>) characters to reduce the risk of XSS (cross
site scripting) that may be caused by embedding the encoded JSON text in
HTML. C<decode> will not be affected in any way.

=head2 indent_length

    $json = $json->indent_length($number_of_spaces)

    $length = $json->get_indent_length

This option is only useful when you also enable C<indent> or C<pretty>.
JSON::XS indents with three spaces when you C<encode> (if requested by
C<indent> or C<pretty>), and the number cannot be changed. JSON::PP allows
you to change/get the number of indent spaces with these mutator/accessor
methods. The default number of spaces is three (the same as JSON::XS), and
the acceptable range is from C<0> (no indentation; it'd be better to
disable indentation by C<indent(0)>) to C<15>.

=head2 sort_by

    $json = $json->sort_by($code_ref)

    $json = $json->sort_by($subroutine_name)

If you just want to sort keys (names) in JSON objects when you C<encode>,
enable the C<canonical> option (see above) that allows you to sort object
keys alphabetically.

If you do need to sort non-alphabetically for whatever reasons, you can
give a code reference (or a subroutine name) to C<sort_by>, then the
argument will be passed to Perl's C<sort> built-in function.

As the sorting is done in the JSON::PP scope, you usually need to prepend
C<JSON::PP::> to the subroutine name, and the special variables C<$a> and
C<$b> used in the subroutine. The sort order given to C<sort_by> affects
all the plain hashes in the data structure. If you need finer control,
C<tie> necessary hashes with a module that implements ordered hash (such
as L<Hash::Ordered> and L<Tie::IxHash>). C<canonical> and C<sort_by> don't
affect the key order in C<tie>d hashes.

    use Hash::Ordered;
    tie my %hash, 'Hash::Ordered', (name => 'CPAN', id => 1, href => '');
    print $json->encode([\%hash]);
    # [{"name":"CPAN","id":1,"href":""}] # order is kept

=head1 INCREMENTAL PARSING

This section is also taken from JSON::XS.

In some cases, there is the need for incremental parsing of JSON texts.
This module does allow you to parse a JSON stream incrementally, by
accumulating text until it has a full JSON object, which it then can
decode. This process is similar to using C<decode_prefix> to see if a full
JSON object is available, but is much more efficient (and can be
implemented with a minimum of method calls).

JSON::PP will only attempt to parse the JSON text once it is sure it has
enough text to get a decisive result, using a very simple but truly
incremental parser. The only thing it guarantees is that it starts
decoding as soon as a syntactically valid JSON text has been seen. This
means you need to set resource limits (e.g. C<max_size>) to ensure the
parser will stop parsing in the presence of syntax errors.

The following methods implement this incremental parser.

=head2 incr_parse

    $json->incr_parse ($text) # void context

    $obj_or_undef = $json->incr_parse ($text) # scalar context

    @obj_or_empty = $json->incr_parse ($text) # list context

This is the central parsing function. It can both append new text and
extract objects from the stream accumulated so far (both of these
functions are optional).

If C<$text> is given, then this text is appended to the currently stored
JSON fragment.

If the method is called in void context, then it will simply append the
text to the fragment without attempting to extract any objects.

If the method is called in scalar context, then it will try to extract
exactly I<one> JSON object. If that is successful, it will return this
object, otherwise it will return C<undef>. If there is a parse error, this
method will croak just as C<decode> would do (one can then use
C<incr_skip> to skip the erroneous part). This is the most common way of
using the method.

And finally, in list context, it will try to extract as many objects from
the stream as it can find and return them, or the empty list otherwise.
For this to work, there must be no separators (other than whitespace)
between the JSON objects or arrays, instead they must be concatenated
back-to-back. Example: parse some JSON arrays/objects in a given string
and return them:

    my @objs = JSON::PP->new->incr_parse ("[5][7][1,2]");

=head2 incr_text

    $json->incr_text

This method returns the currently stored JSON fragment as an lvalue, that
is, you can manipulate it. This I<only> works when a preceding call to
C<incr_parse> in I<scalar context> successfully returned an object; under
other circumstances you must not call this function (and if you do, it
I<will> fail under real world conditions).
As a special exception, you can also call this method before having parsed
anything.

=head2 incr_skip

    $json->incr_skip

This will reset the state of the incremental parser and will remove the
parsed text from the input buffer so far. This is useful after
C<incr_parse> died, in which case the input buffer and incremental parser
state is left unchanged, to skip the text parsed so far and to reset the
parse state.

The difference to C<incr_reset> is that only text until the parse error
occurred is removed.

=head2 incr_reset

    $json->incr_reset

This completely resets the incremental parser, that is, after this call,
it will be as if the parser had never parsed anything.

This is useful if you want to repeatedly parse JSON objects and want to
ignore any trailing data, which means you have to reset the parser after
each successful decode.

=head1 MAPPING

Most of this section is also taken from JSON::XS.

This section describes how JSON::PP maps Perl values to JSON values and
vice versa. These mappings are designed to "do the right thing" in most
circumstances automatically, preserving round-tripping characteristics
(what you put in comes out as something equivalent).

For the more enlightened: note that in the following descriptions,
lowercase I<perl> refers to the Perl interpreter, while uppercase I<Perl>
refers to the abstract Perl language itself.

=head2 JSON -> PERL

=over 4

=item object

A JSON object becomes a reference to a hash in Perl. No ordering of object
keys is preserved (JSON does not preserve object key ordering itself).

=item array

A JSON array becomes a reference to an array in Perl.

=item string

A JSON string becomes a string scalar in Perl - Unicode codepoints in JSON
are represented by the same codepoints in the Perl string, so no manual
decoding is necessary.

=item number

A JSON number becomes either an integer, numeric (floating point) or
string scalar in perl, depending on its range and any fractional parts.
Note that JSON::PP only guarantees precision up to but not including the
least significant bit.

When C<allow_bignum> is enabled, big integer values and any numeric values
will be converted into L<Math::BigInt> and L<Math::BigFloat> objects
respectively, without becoming string scalars or losing precision.

=item true, false

These JSON atoms become C<JSON::PP::true> and C<JSON::PP::false>,
respectively. They are overloaded to act almost exactly like the numbers
C<1> and C<0>. You can check whether a scalar is a JSON boolean by using
the C<JSON::PP::is_bool> function.

=item null

A JSON null atom becomes C<undef> in Perl.

=item shell-style comments (C<< # I<text> >>)

As a nonstandard extension to the JSON syntax that is enabled by the
C<relaxed> setting, shell-style comments are allowed.
They can start anywhere outside strings and go till the end of the line.

=item tagged values (C<< (I<tag>)I<value> >>)

Another nonstandard extension to the JSON syntax, enabled with the
C<allow_tags> setting, are tagged values. In this implementation, the
I<tag> must be a perl package/class name encoded as a JSON string, and the
I<value> must be a JSON array encoding optional constructor arguments.

See L<OBJECT SERIALISATION>, below, for details.

=back

=head2 PERL -> JSON

The mapping from Perl to JSON is slightly more difficult, as Perl is a
truly typeless language, so we can only guess which JSON type is meant by
a Perl value.

=over 4

=item hash references

Perl hash references become JSON objects. As there is no inherent ordering
in hash keys (or JSON objects), they will usually be encoded in a
pseudo-random order. JSON::PP can optionally sort the hash keys
(determined by the I<canonical> flag and/or I<sort_by> property), so the
same data structure will serialise to the same JSON text (given same
settings and version of JSON::PP), but this incurs a runtime overhead and
is only rarely useful, e.g. when you want to compare some JSON text
against another for equality.

=item array references

Perl array references become JSON arrays.

=item other references

Other unblessed references are generally not allowed and will cause an
exception to be thrown, except for references to the integers C<0> and
C<1>, which get turned into C<false> and C<true> atoms in JSON. You can
also use C<JSON::PP::false> and C<JSON::PP::true> to improve readability.

    to_json [\0, JSON::PP::true]      # yields [false,true]

=item JSON::PP::true, JSON::PP::false

These special values become JSON true and JSON false values,
respectively. You can also use C<\1> and C<\0> directly if you want.

=item JSON::PP::null

This special value becomes JSON null.

=item blessed objects

Blessed objects are not directly representable in JSON, but C<JSON::PP>
allows various ways of handling objects. See L<OBJECT SERIALISATION>,
below, for details.

=item simple scalars

Simple Perl scalars (any scalar that is not a reference) are the most
difficult objects to encode: JSON::PP will encode undefined scalars as
JSON C<null> values, scalars that have last been used in a string context
before encoding as JSON strings, and anything else as number values.
JSON::PP (and JSON::XS) trusts that what you pass to the C<encode> method
(or C<encode_json> function) is a clean, validated data structure with
values that can be represented as valid JSON values only, because it's not
from an external data source (as opposed to JSON texts you pass to
C<decode> or C<decode_json>, which JSON::PP considers tainted and doesn't
trust).

=back

=head2 OBJECT SERIALISATION

As JSON cannot directly represent Perl objects, you have to choose between
a pure JSON representation (without the ability to deserialise the object
automatically again), and a nonstandard extension to the JSON syntax,
tagged values.

=head3 SERIALISATION

What happens when C<JSON::PP> encounters a Perl object depends on the
C<allow_blessed>, C<convert_blessed>, C<allow_tags> and C<allow_bignum>
settings, which are used in this order:

=over 4

=item 1. C<allow_tags> is enabled and the object has a C<FREEZE> method.

In this case, C<JSON::PP> creates a tagged JSON value, using a nonstandard
extension to the JSON syntax.

This works by invoking the C<FREEZE> method on the object, with the first
argument being the object to serialise, and the second argument being the
constant string C<JSON> to distinguish it from other serialisers.

The C<My::Object> C<FREEZE> method might use the object's C<type> and
C<id> members to encode the object:

    sub My::Object::FREEZE {
        my ($self, $serialiser) = @_;
        ($self->{type}, $self->{id})
    }

=item 2. C<convert_blessed> is enabled and the object has a C<TO_JSON> method.

In this case, the C<TO_JSON> method of the object is invoked in scalar
context. It must return a single scalar that can be directly encoded into
JSON. This scalar replaces the object in the JSON text.

For example, the following C<TO_JSON> method will convert all L<URI>
objects to JSON strings when serialised. The fact that these values
originally were L<URI> objects is lost.

    sub URI::TO_JSON {
        my ($uri) = @_;
        $uri->as_string
    }

=item 3. C<allow_bignum> is enabled and the object is a C<Math::BigInt> or C<Math::BigFloat>.
The object will be serialised as a JSON number value.

=item 4. C<allow_blessed> is enabled.

The object will be serialised as a JSON null value.

=item 5. none of the above

If none of the settings are enabled or the respective methods are missing,
C<JSON::PP> throws an exception.

=back

=head3 DESERIALISATION

For deserialisation there are only two cases to consider: either
nonstandard tagging was used, in which case C<allow_tags> decides, or
objects cannot automatically be deserialised, in which case you can use
postprocessing or the C<filter_json_object> or
C<filter_json_single_key_object> callbacks to get some real objects out of
your JSON.

This section only considers the tagged value case: if a tagged JSON object
is encountered during decoding and C<allow_tags> is disabled, a parse
error will result (as if tagged values were not part of the grammar).

If C<allow_tags> is enabled, C<JSON::PP> will look up the C<THAW> method
of the package/classname used during serialisation (it will not attempt to
load the package as a Perl module). If there is no such method, the
decoding will fail with an error.

Otherwise, the C<THAW> method is invoked with the classname as first
argument, the constant string C<JSON> as second argument, and all the
values from the JSON array (the values originally returned by the
C<FREEZE> method) as remaining arguments.

The method must then return the object. While technically you can return
any Perl scalar, you might have to enable the C<allow_nonref> setting to
make that work in all cases, so better return an actual blessed reference.

As an example, let's implement a C<THAW> function that regenerates the
C<My::Object> from the C<FREEZE> example earlier:

    sub My::Object::THAW {
        my ($class, $serialiser, $type, $id) = @_;
        $class->new (type => $type, id => $id)
    }

=head1 ENCODING/CODESET FLAG NOTES

This section is taken from JSON::XS.
The interested reader might have seen a number of flags that signify
encodings or codesets - C<utf8>, C<latin1> and C<ascii>. There seems to be
some confusion on what these do, so here is a short comparison:

C<utf8> controls whether the JSON text created by C<encode> (and expected
by C<decode>) is UTF-8 encoded or not, while C<latin1> and C<ascii> only
control whether C<encode> escapes character values outside their
respective codeset range. Neither of these flags conflict with each other,
although some combinations make less sense than others.

Care has been taken to make all flags symmetrical with respect to
C<encode> and C<decode>, that is, texts encoded with any combination of
these flag values will be correctly decoded when the same flags are used.

Note that a "codeset" is simply an abstract set of character-codepoint
pairs, while an encoding takes those codepoint numbers and I<encodes>
them, in our case into octets. Unicode is (among other things) a codeset,
UTF-8 is an encoding, and ISO-8859-1 (= latin 1) and ASCII are both
codesets I<and> encodings at the same time, which can be confusing.

=over 4

=item C<utf8> flag disabled

When C<utf8> is disabled (the default), then C<encode>/C<decode> generate
and expect Unicode strings, that is, characters with high ordinal Unicode
values (> 255) will be encoded as such characters, and likewise such
characters are expected as input. Any decoding or encoding (e.g. to UTF-8
or UTF-16) needs to be done yourself, e.g. using the Encode module.

=item C<utf8> flag enabled

If the C<utf8>-flag is enabled, C<encode>/C<decode> will encode all
characters using the corresponding UTF-8 multi-byte sequence, and will
expect your input strings to be encoded as UTF-8, that is, no "character"
of the input string must have any value > 255, as UTF-8 does not allow
that.

The C<utf8> flag therefore switches between two modes: disabled means you
will get a Unicode string in Perl, enabled means you get an UTF-8 encoded
octet/binary string in Perl.

=item C<latin1> or C<ascii> flags enabled

With C<latin1> (or C<ascii>) enabled, C<encode> will escape characters
with ordinal values > 255 (> 127 with C<ascii>) and encode the remaining
characters as specified by the C<utf8> flag.

If C<utf8> is disabled, then the result is also correctly encoded in those
character sets, as both are proper subsets of Unicode. If C<utf8> is
enabled, you still get a correct UTF-8-encoded string, regardless of these
flags, just some more characters will be escaped using C<\uXXXX> then
before.

Note that ISO-8859-1-I<encoded> strings are not compatible with UTF-8
encoding, while ASCII-encoded strings are.
That is because the ISO-8859-1 encoding is NOT a subset of UTF-8 (despite
the ISO-8859-1 I<codeset> being a subset of Unicode), while ASCII is.

Surprisingly, C<decode> will ignore these flags and so treat all input
values as governed by the C<utf8> flag. If it is disabled, this allows you
to decode ISO-8859-1- and ASCII-encoded strings, as both strict subsets of
Unicode. If it is enabled, you can correctly decode UTF-8 encoded strings.

So neither C<latin1> nor C<ascii> are incompatible with the C<utf8> flag -
they only govern when the JSON output engine escapes a character or not.

The main use for C<latin1> is to relatively efficiently store binary data
as JSON, at the expense of breaking compatibility with most JSON decoders.

The main use for C<ascii> is to force the output to contain no characters
with values > 127, which means you can interpret the resulting string as
ASCII, ISO-8859-1, UTF-8 or most any other 8-bit encoding, and still get
the same data structure back.

=back

=head1 BUGS

Please report bugs on a specific behavior of this module to RT or GitHub
issues (preferred):

L<>

L<>

As for new features and requests to change common behaviors, please ask
the author of JSON::XS (Marc Lehmann, E<lt>schmorp[at]schmorp.deE<gt>)
first, by email (important!), to keep compatibility among JSON.pm
backends.

Generally speaking, if you need something special for you, you are advised
to create a new module, maybe based on L<JSON::Tiny>, which is smaller and
written in a much cleaner way than this module.

=head1 SEE ALSO

The F<json_pp> command line utility for quick experiments.

L<JSON::XS>, L<Cpanel::JSON::XS>, and L<JSON::Tiny> for faster
alternatives. L<JSON> and L<JSON::MaybeXS> for easy migration.

L<JSON::backportPP::Compat5005> and L<JSON::backportPP::Compat5006> for
older perl users.
RFC4627 (L<>)

RFC7159 (L<>)

RFC8259 (L<>)

=head1 AUTHOR

Makamaka Hannyaharamitu, E<lt>makamaka[at]cpan.orgE<gt>

=head1 CURRENT MAINTAINER

Kenichi Ishigaki, E<lt>ishigaki[at]cpan.orgE<gt>

=head1 COPYRIGHT AND LICENSE

Copyright 2007-2016 by Makamaka Hannyaharamitu

Most of the documentation is taken from JSON::XS by Marc Lehmann

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

=cut
How to build Twitter’s realtime likes feature with Laravel. In this post, I’ll be demonstrating how to build your very own realtime likes count on the web using Laravel and Pusher. Here’s how our app will work when we’re done: We’ll build a very simple app (which I’ll unimaginatively call Chirper) and stuff it with some fake data so we can get moving fast. On the home page of our app, users will see all chirps with the most recent ones first, and they can click a button to Like or Unlike them. Whenever a user likes or unlikes a chirp, the likes count displayed next to the chirp should increment or decrement in every other browser where the page is open. You can check out the source code of the completed application on Github. Setup the Project I’ll be using Laravel 5.4 in this post, but the techniques here should work for any version of Laravel 5.3 or above: composer create-project laravel/laravel=5.4.* chirper Then set your app details in your .env file: APP_NAME=Chirper DB_DATABASE=chirper Remember to set your DB_USERNAME and DB_PASSWORD as appropriate for your machine, and to create a database named “Chirper”. Next, we’ll set up our data structures. To keep things simple, our app will have just two main entities: users and chirps. Since Laravel already provides us with a User model and migration, we just need to set up the Chirp model and migration. php artisan make:model Chirp -m For chirps, we’ll store: - the text - the date it was posted - the user who posted it, and - the number of likes it has received So we edit the up method in the migration file generated by the above command to look like this: public function up() { Schema::create('chirps', function (Blueprint $table) { $table->increments('id'); $table->string('text'); $table->unsignedInteger('user_id'); $table->integer('likes_count')->default(0); $table->timestamp('posted_at'); $table->foreign('user_id')->references('id')->on('users'); }); } Let’s open up our Chirp model and make some changes to it.
First, we have to tell Laravel that this model doesn’t use the regular timestamps (created_at and updated_at). Then we need to allow its attributes to all be assigned in one go. Lastly, we’ll add a relationship method so we can easily retrieve the details of the User who posted the chirp. class Chirp extends Model { public $timestamps = false; protected $guarded = []; public function author() { return $this->belongsTo(User::class, 'user_id', 'id'); } } Next, we’ll write a seed to generate some fake users and their chirps for our app. If you open up the file database/factories/ModelFactory.php, you’ll notice that Laravel already provides us with a seeder for Users. Let’s add one for Chirps: $factory->define(App\Chirp::class, function (Faker\Generator $faker) { return [ 'text' => $faker->sentence(), 'likes_count' => $faker->randomDigitNotNull, 'posted_at' => $faker->dateTimeThisYear(), 'user_id' => random_int(1, 10) ]; }); And then we call the factory functions in our database/seeds/DatabaseSeeder.php: <?php use App\Chirp; use App\User; use Illuminate\Database\Seeder; class DatabaseSeeder extends Seeder { public function run() { factory(User::class, 10)->create(); factory(Chirp::class, 30)->create(); } } Now, if we run php artisan migrate --seed We should see our database tables have been created and filled with fake data. Note: if you run into this error “Specified key was too long; max key length is 767 bytes ” when you run migrations, follow these instructions to fix it. Setup the Views Next, we’ll run the command: php artisan make:auth We won’t be using any auth features, but we’ll run this because it also saves us time by setting up some frontend templates and JavaScript for us. Let’s set up our home routes and view. First, replace the home route in your routes/web.php with our home route: Route::get('/', 'HomeController@index'); Then in app/Http/Controllers/HomeController.php, we’ll implement the index method.
(Don’t forget to remove the auth middleware in the constructor): public function index() { $chirps = Chirp::with('author') ->orderBy('posted_at', 'desc') ->get(); return view('home', ['chirps' => $chirps]); } In this method we simply retrieve all chirps along with their author details and pass them to the view to render. Lastly, we set up the view, a simple UI that displays a list of chirps, with the author name, time posted and a Like button below it next to the number of likes the chirp has. We’ll add a few attributes to some elements, though: - an onclick handler for each Like button. - a data-chirp-id on each button so we can identify which chirp the button references. - an id on each likes_count which includes the chirp’s id so we can easily locate it via document.querySelector. @extends('layouts.app') @section('content') <div class="container-fluid text-center"> @foreach($chirps as $chirp) <div class="jumbotron"> <div>by <b>{{ $chirp->author->name }}</b> on <small>{{ $chirp->posted_at }}</small> </div> <div> <p>{{ $chirp->text }}</p> </div> <div class="row"> <button onclick="actOnChirp(event);" data-chirp-Like</button> <span id="likes-count-{{ $chirp->id }}">{{ $chirp->likes_count }}</span> </div> </div> @endforeach </div> @endsection Let’s start our app to be sure everything’s fine thus far: php artisan serve Now visit your homepage and you should see all the chirps displayed neatly. Implement the Like Logic Now we’ll implement the logic for liking and unliking a chirp. First of all, we’ll take a look at our frontend. When a user clicks on ‘Like’, we want a couple of things to happen: - The text on the button changes from Like to Unlike. - The likes count displayed next to the chirp increases by 1. - An AJAX request is made to the server to increment the likes_count in the database by 1. - The likes count displayed next to the chirp increases by 1 in all other tabs/windows where the page is open. (This is where Pusher comes in.)
Similarly, for “unliking”: - The text on the button changes from Unlike to Like. - The likes count displayed next to the chirp decreases by 1. - An AJAX request is made to the server to decrement the likes_count in the database by 1. - The likes count displayed next to the chirp decreases by 1 in all other tabs/windows where the page is open. (Again, the Pusher magic.) In order for us to easily manage these two types of events, we’ll introduce the concept of chirp actions. For our basic use case here, we’ll just have two types of actions: Like and Unlike. Both actions will go to the same endpoint, where the server will do the database update and return a 200 OK response. Let’s define a route for that: Route::post('/chirps/{id}/act', 'HomeController@actOnChirp'); The Like button should make a request of this form: { "action": "Like" } In the case of unlikes, the action will be “Unlike”. In our controller, we retrieve the action value and increment or decrement as needed. We’ll use the same HomeController to keep things simple: public function actOnChirp(Request $request, $id) { $action = $request->get('action'); switch ($action) { case 'Like': Chirp::where('id', $id)->increment('likes_count'); break; case 'Unlike': Chirp::where('id', $id)->decrement('likes_count'); break; } return ''; } Now let’s implement the JavaScript for liking/unliking.
Because the code isn’t much, we’ll add it directly to the “content” section of our home.blade.php: In your base layout ( layouts/app.blade.php), add a section for scripts after the script tag that includes app.js (so it gets run after Echo and Axios have been initialized): <script src="{{ asset('js/app.js') }}"></script> @yield('js') We’ll inject our page’s JavaScript into that section in our home.blade.php @section('js') <script> var updateChirpStats = { Like: function (chirpId) { document.querySelector('#likes-count-' + chirpId).textContent++; }, Unlike: function(chirpId) { document.querySelector('#likes-count-' + chirpId).textContent--; } }; var toggleButtonText = { Like: function(button) { button.textContent = "Unlike"; }, Unlike: function(button) { button.textContent = "Like"; } }; var actOnChirp = function (event) { var chirpId = event.target.dataset.chirpId; var action = event.target.textContent; toggleButtonText[action](event.target); updateChirpStats[action](chirpId); axios.post('/chirps/' + chirpId + '/act', { action: action }); }; </script> @endsection First, we have two objects containing two methods each, corresponding to the two possible actions. The names of the methods are capitalised so we can easily call them via the text on the button. The first object contains methods to update the likes count displayed below the chirp, while the second contains methods to change the text on the button. We’ve separated these two functionalities because of our criteria no. 4 above: for a different user viewing this page at the same time, only the likes count should update; the text on the button shouldn’t change. We attach an onclick handler ( actOnChirp) to each chirp like button as they are rendered. In this method, we perform the desired actions: change the button text, update the likes count and send the action to the server using Axios, which comes bundled with Laravel. 
At this point, visiting the home page and clicking the Like button for a chirp works as expected. All good so far. Broadcast the event with Pusher One more thing we need to do when a chirp is liked or unliked is to ensure the likes count shown in every browser on that page shows the newly updated value. We’ll do this by broadcasting a new event whenever a chirp is acted on. Pusher gives us the means to do this with their messaging system, and Laravel provides an events and broadcasting system that supports Pusher out of the box. First, let’s create the event class: php artisan make:event ChirpAction For the browser to update the likes count on the UI accordingly, it needs to know two things: - which chirp was acted on - what kind of action We need to send this data along with this event when broadcasting it, so let’s open up the generated app/Events/ChirpAction.php and add those two. Our class should look something like this: <?php namespace App\Events; use Illuminate\Queue\SerializesModels; use Illuminate\Foundation\Events\Dispatchable; use Illuminate\Broadcasting\InteractsWithSockets; class ChirpAction { use Dispatchable, InteractsWithSockets, SerializesModels; public $chirpId; public $action; public function __construct($chirpId, $action) { $this->chirpId = $chirpId; $this->action = $action; } } And now we need to fire this event whenever a new chirp action occurs. So we edit our HomeController‘s actOnChirp method to include this: public function actOnChirp(Request $request, $id) { $action = $request->get('action'); switch ($action) { case 'Like': Chirp::where('id', $id)->increment('likes_count'); break; case 'Unlike': Chirp::where('id', $id)->decrement('likes_count'); break; } event(new ChirpAction($id, $action)); // fire the event return ''; } At this point, whenever a chirp is liked or unliked, the event will be fired. But it’s only local to the server, so let’s fix that by implementing broadcasting to other clients.
Create a free Pusher account if you don’t have one already. Then visit your dashboard and create a new app, taking note of your app’s credentials. We’ll need them in a bit. Let’s set things up on the frontend. We’ll use Laravel Echo to listen for and respond to broadcasts via Pusher. First install the needed dependencies: npm install --save laravel-echo pusher-js In your resources/assets/js/bootstrap.js, uncomment/add these lines: import Echo from 'laravel-echo' window.Pusher = require('pusher-js'); window.Echo = new Echo({ broadcaster: 'pusher', key: 'your-pusher-key', cluster: 'your-app-cluster' }); Replace your-pusher-key and your-app-cluster with your app’s Pusher key and cluster as seen in your Pusher dashboard. In the script section of our home.blade.php, we’ll tell Echo to listen for chirp actions and update the chirp’s likes counts accordingly: Echo.channel('chirp-events') .listen('ChirpAction', function (event) { console.log(event); var action = event.action; updateChirpStats[action](event.chirpId); }) I’ve named my channel ‘chirp-events’, but you can use anything you like. The event variable passed to the function will contain the properties we defined earlier on our ChirpAction event (action and chirpId), so we can simply access them and update the UI for the corresponding chirp. We’re logging the event data to our console, just for debugging purposes, so we can see what’s going on. Then we install all our dependencies and compile our frontend assets so our updates to bootstrap.js show up: npm install && npm run dev Now, let’s set up Echo and Pusher on the server. First, we’ll install the Pusher library: composer require pusher/pusher-php-server Next, we’ll configure our server to use broadcasting via Pusher.
Add this to the aliases array of your config/app.php: 'Pusher' => Pusher\Pusher::class Also uncomment this line from the providers array to enable broadcasting: App\Providers\BroadcastServiceProvider::class, Let’s configure our broadcasting and Pusher settings. Laravel already comes with a config/broadcasting.php for this which pulls values from the .env file, so open up the .env file and edit it: BROADCAST_DRIVER=pusher PUSHER_APP_ID=XXXXXXXXX PUSHER_APP_KEY=YYYYYYYY PUSHER_APP_SECRET=ZZZZZZZZ Replace the stubs above with your app credentials from your Pusher dashboard. Lastly, add your cluster in the options array of config/broadcasting.php. After making these changes, you might need to run php artisan config:cache so your changes get persisted from the .env to the config files. To enable broadcasting of our event, we’ll make it implement the ShouldBroadcastNow interface. (Normally, we would use the ShouldBroadcast interface, but then we would need to set up and configure queues. Using ShouldBroadcastNow forces the event to be dispatched immediately.) We’ll also implement a broadcastOn method that returns the channel (or channels) we want our event to be broadcast on. We’ll use the same channel name we used on the frontend. At this point, our event class looks like this: namespace App\Events; use Illuminate\Broadcasting\Channel; use Illuminate\Queue\SerializesModels; use Illuminate\Foundation\Events\Dispatchable; use Illuminate\Broadcasting\InteractsWithSockets; use Illuminate\Contracts\Broadcasting\ShouldBroadcastNow; class ChirpAction implements ShouldBroadcastNow { use Dispatchable, InteractsWithSockets, SerializesModels; public $chirpId; public $action; public function __construct($chirpId, $action) { $this->chirpId = $chirpId; $this->action = $action; } public function broadcastOn() { return new Channel('chirp-events'); } } Okay, we’re all set! Open up the homepage of your app in two different tabs and try Liking and Unliking from the different windows.
You should see the events get logged to your browser console like this: Note: If you find an error logged to your console about the WebSocket connection being closed instead, try restarting your browser. Exclude the Sender You might have noticed that we have a small problem: when you click “Like” or “Unlike”, the count increases or decreases by two, not one. This happens because the event is currently being broadcast to everyone, including the tab that sent it. So the first increase is due to the button click, and the second is due to the received message. We need to find a way of excluding the sender of the message from receiving it too. Luckily, we can do that easily with Laravel, by changing one line of code in our HomeController‘s actOnChirp method: // replace this... event(new ChirpAction($id, $action)); // with this... broadcast(new ChirpAction($id, $action))->toOthers(); And now, if you Like or Unlike a chirp, you should see it show up in the other window(s) and increment only by 1 on this window. Here’s what actually goes on here: - Pusher provides each connected tab with an identifier called the socket ID. Whenever a Pusher message is sent containing this id, Pusher knows not to send the message to whichever tab owns that ID. - Laravel Echo automatically attaches this socket ID to the request sent by Axios as a header, X-Socket-Id. You can view it by running Echo.socketId() in your console. - By using the broadcast...toOthers() combo, we’re letting Laravel know that it should include the socket ID in its message data, so Pusher can exclude that tab. That’s all there is to it. Conclusion This is just a proof-of-concept to demonstrate how this could be implemented with event broadcasting via Pusher. There are a lot more complex use cases available, so here’s your chance to get started building more powerful things with Pusher and Laravel. Let us know what you build in the comments.
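As a footnote, the socket-ID exclusion described in "Exclude the Sender" can be modeled with a tiny in-memory sketch. This is only an illustration of the idea (the `ToyChannel` class is hypothetical, not Pusher's actual implementation):

```javascript
// Minimal stand-in for a Pusher channel: subscribers register with a
// socket ID, and trigger() can exclude one ID, the way toOthers() does.
class ToyChannel {
  constructor () {
    this.subscribers = new Map() // socketId -> handler function
  }

  subscribe (socketId, handler) {
    this.subscribers.set(socketId, handler)
  }

  // Deliver the event to everyone except excludeSocketId (the sender).
  trigger (event, excludeSocketId) {
    for (const [id, handler] of this.subscribers) {
      if (id !== excludeSocketId) handler(event)
    }
  }
}
```

The sender's tab already updated its own UI on click, so excluding its socket ID from the broadcast is what keeps the count from moving twice.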
Hi, I'm writing a python program which uses the UE9 streaming functions, and when my program crashes and I can't access the device, I may unplug and plug the USB port again to get the access back. But since yesterday, I completely lost access to my device... Before that, I had the power-up LED behavior as described in , but now when I plug it, using any PC or the power supply, the COMM led stays desperately off, as seen in the attached file. Now I get the labjack not found error every time I try to open the device in python. Here's a piece of the code I typically use in my program, the one that "broke" my device:

import ue9
from Queue import Queue

handle = ue9.UE9()
handle.streamConfig(NumChannels=2, ChannelNumbers=[0, 1], ScanFrequency=25000)
handle.streamStart()

queue = Queue()
while True:
    try:
        data = handle.streamData().next()
        queue.put(data)
    except:
        handle.streamStop()
        break

Typically, when an exception occurs, it happens that the streamStop() call does not work properly, so the streaming is still on but my program is terminated. The fastest way I found to reset the device was then to unplug and plug it again. I tried the ethernet connection, and it doesn't work either. I'm using Ubuntu 16.04, and v1.11 firmware. Any ideas for me ? Regards, François

Un-plugging the device to reset it and terminate your program is acceptable behavior when streamStop doesn't work. I highly doubt the code you wrote broke the device. ESD or some other form of electrical shock from your computer or what you are sensing is likely the cause of your LED issue. It looks like some sort of shock was delivered to the UE9s COMM processor and it is no longer working properly. A few quick things you can do before following the RMA route is to plug the device into your computer and check the voltage between VS and GND using a DMM. The voltage should be ~5V (4.5-5V).
You can also disconnect the UE9 from the computer and measure the resistance between VS and GND; the resistance should be something close to 55k ohms. The RMA process is described on the about/returns page. If you want to send it back to us to have us look at the device, feel free to follow the steps outlined on that page and we will approve your RMA.

Do further testing with nothing connected to the UE9 except the USB cable. Also, try powering up with the power supply rather than USB cable to see if you get any COMM LED activity.

So you are not getting any COMM LED activity at all? You can try powering up with a jumper securely installed from FIO0 to SCL, or from FIO1 to SCL, but at this point it seems that your COMM chip or its power supply might have been damaged. Try a different computer to see if you can get any COMM LED activity. You can use a DMM to do some initial checks for hardware damage. Clamp the negative lead securely into a GND terminal, and then check voltages (or resistance) with the positive lead. If testing at a screw terminal, note that the probe or wire must be securely clamped inside the screw terminal ... you can't just touch the screw head. Voltage measurements are done with just the power supply connected (USB or Vext). Nothing else except DMM. Resistance measurements are done with nothing connected except DMM. The UE9 should be unpowered. All power rails typically show a resistance >50 kohms. Checking the voltage and resistance of various power rails is the first step. The main one to check is VS, but a complete list is:

VS (any screw terminal, ~5 volts)
Vusb (pin 6 of U17, ~5 volts)
Vext (pin 8 of U17, ~5 volts)
3.3Vcontrol (pin 5 of U14)
3.3Vcomm (pin 5 of U15)
2.5Vcomm (pin 4 of U16, C145 is best place to measure)

I tried all of the above: I tried the jumper, I tried the power supply and another computer and neither works, and every voltage listed is OK. So I'll initiate a return. Thanks for your reply.

Sounds good.
Let us know if you have issues initiating a return.
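As an aside to the cleanup problem discussed in this thread (streamStop() not running when the program dies), a more crash-tolerant loop puts the stop call in a finally block and creates the stream generator once. The handle below is a stand-in class, since the real ue9.UE9 needs hardware attached; only the try/finally pattern is the point:

```python
class FakeStream:
    """Stand-in for a ue9.UE9 handle, used only to illustrate the pattern."""
    def __init__(self):
        self.streaming = False

    def streamStart(self):
        self.streaming = True

    def streamStop(self):
        self.streaming = False

    def streamData(self):
        # The real driver yields packets indefinitely; yield a few and stop.
        for i in range(3):
            yield {'AIN0': i}


def read_stream(handle):
    samples = []
    handle.streamStart()
    try:
        # Create the generator once, instead of once per loop iteration.
        for packet in handle.streamData():
            samples.append(packet)
    finally:
        # Runs on normal exit, exceptions, and KeyboardInterrupt alike,
        # so the device is not left streaming when the program dies.
        handle.streamStop()
    return samples
```

With this shape, even a Ctrl-C or an unexpected exception still reaches streamStop() before the process exits.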
Details - Type: Improvement - Status: Open - Priority: Major - Resolution: Unresolved - Affects Version/s: None - Component/s: Authentication & Authorization, Core & storage, Templates and UI - Labels: None

Description

Subpages should be supported. (This might need to be pushed to 3.1 due to the workload, but 3.0 should at least be designed so that it does not prevent this from being implemented.) Attachments are already a kind of subpage, so once JCR unifies content handling, this should be rather trivial.

Activity

Yup, all the APIs that currently take in a page name as a string will need to start accepting paths. Opinions: should we use an actual Path-like structure (like a WikiPath class) or just Strings?

We should probably have a path-like class, and a factory class that creates them for supplied strings. This would allow caching, which is a good thing. For access-control purposes (calculating whether one permission implies another, for example), paths will be essential. I'm sure other features in JSPWiki 3 would benefit from them too. On the other hand, since the JCR interface is based around Strings, there will be a need to convert back and forth anyway.

They would certainly make parsing easier... And we could still keep WikiPage.getName() (to return a String), while the internal core would use WikiPage.getPath(). Though care must be taken to make sure that we don't mix Wiki paths and JCR paths in the developer's mind - JCR paths are internal to the system.

public class WikiPath {

    /** Constructs a WikiPath from a full string */
    public WikiPath( String path );

    /** Resolves a WikiPath relative to a current path. For example, if currPath = "MyWiki:Foo",
     * and path = "Bar" returns "MyWiki:Bar". If path = "/Bar" returns "MyWiki:Foo/Bar".
     */
    public WikiPath resolve( WikiPath currPath, String path );

    /** Resolves a path with respect to the current context */
    public WikiPath resolve( WikiContext context, String path );

    /** Returns the WikiPath as a full path ("MyWiki:MainPage/SubPage") */
    public String toString();
}

This will require some extensions to the Permission classes, because they do not at the moment support nested page syntax (e.g., path-like structures).
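The resolution rules sketched in the comment above can be illustrated with a small stand-alone sketch. This is hypothetical code, not actual JSPWiki source: a bare relative name resolves to a sibling page in the same wiki, while a leading slash resolves to a subpage of the current page:

```java
public final class WikiPathSketch {

    /**
     * Resolve a path relative to currPath per the rules proposed above:
     * "Bar" relative to "MyWiki:Foo" becomes "MyWiki:Bar" (same wiki),
     * "/Bar" relative to "MyWiki:Foo" becomes "MyWiki:Foo/Bar" (subpage).
     */
    public static String resolve(String currPath, String path) {
        if (path.startsWith("/")) {
            // Subpage of the current page.
            return currPath + path;
        }
        // Sibling page within the same wiki: keep the "Wiki:" prefix.
        int colon = currPath.indexOf(':');
        String wiki = colon >= 0 ? currPath.substring(0, colon) : "";
        return wiki + ":" + path;
    }
}
```

A string-prefix check along these lines is also what permission implication over nested paths would reduce to.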
Metadoc Metadoc generates documentation metadata for Object Oriented (Class) JavaScript libraries. Running the utility will produce a JSON file describing the code. This can be used as a data source for creating custom HTML documentation (or any other output format), or for further processing. Metadoc was designed as a command line utility, but can also be used programmatically. It is a custom extension of the productionline build utility (from the same authors of this tool). It was originally designed to document the NGN and Chassis libraries. Workflow Metadoc was designed to support a "code and comment" workflow. It will parse JavaScript code and extract as much metadata as possible from the code's Abstract Syntax Tree. AST parsing creates a significant amount of information, but isn't always sufficient for creating detailed documentation, such as class inheritance chains. To support greater detail, Metadoc reads inline comment blocks, written directly in the code. Comments can be used to supplement and/or override AST parsing. Comment parsing follows a style similar to JSDoc. Using a familiar @tag syntax, Metadoc provides powerful capabilities for creating fine detail in documentation. Example Input Files: Event.js & Meetup.js Output: api.json Getting Started // Install metadoc npm install -g @author.io/metadoc // Run metadoc metadoc --source "/path/to/source_directory" --output "/path/to/output_directory" If you want to use metadoc programmatically (i.e. require('@author.io/metadoc')), take a look at the cli.js file as an example (which includes the metadoc generator). The metadoc generator is an extension of productionline. Ignoring Files It is possible to ignore files and/or directories using glob syntax. For example: --ignore "./node_modules" ignores the entire node_modules directory. --ignore /path/to/**/.* ignores all files in any directory whose name starts with a dot (ex: .testfile.js). It is possible to use the --ignore flag multiple times.
Warnings Metadoc is capable of warning developers about some common code issues/concerns: --warnOnNoCode triggers a warning whenever a code comment triggers an action for which no related code can be found. This is most useful for identifying comments that shouldn't actually be in the code base. --warnOnSkippedEvents triggers a warning whenever an event is detected but not documented. This is most commonly used to identify events that are considered "internal" to a class. --warnOnSkippedTags triggers a warning whenever a tag is skipped. This is the default behavior, but this tag will allow this feature to be turned off (i.e. --warnOnSkippedTags false) --errorOnCommentFailure throws an error when a comment cannot be processed. This is the default behavior, but this tag will allow this feature to be turned off (i.e. --errorOnCommentFailure false) Documenting Code The code will be automatically documented based on the JavaScript AST (Abstract Syntax Tree). However, this doesn't always reflect the true nature of how a library should be used. To accommodate customizations, the generator parses comments within the code, allowing developers to override the AST documentation with custom comment blocks. Comment Tags Tags can be used to modify documentation snippets. Tags use the following format unless otherwise defined: /** * @tag {[type]} <name> * <description> */ The following tags are available: @author Identifies a specific person/organization recognized as the author of a snippet of code. @cfg Identifies a write-only configuration property. Aliases: config, configuration @cfgproperty Identifies a configuration property (write-only) that also has a corresponding readable/writable property. @class Identifies a class. @constructor Marks a method as the constructor of a class. @exception Identifies a custom NGN Exception. @extends Identifies which class is being extended. @fires Identifies an event. See "Documenting Events" below for additional detail.
Aliases: triggers, trigger, event @hidden Indicates the section should be included in the documentation but hidden from view. This differs from the @ignore tag, which prevents the documentation from being generated at all. @ignore Indicates a section should be ignored from the documentation (i.e. prevents generation of a segment of code documentation). @info Keep information separated from descriptions, such has help comments or tooltips. Use of multiple @info tags are supported. This tag also supports content titles: /** * @info title goes here * primary content goes here. */ @method Identifies a method. @namespace Identifies a namespace. Namespaces identify class structure/hierarchy and cannot be ignored or hidden using @ignore or @hidden. @param Identifies an argument/paramenter. See "Documenting Parameters" for details. Aliases: arg, argument, parameter @private Indicates the snippet is private/not explicitly accessible as a developer interface (internal). @property Identifies a property of a class. Aliases: prop @readonly Indicates a snippet is read-only. This applies to properties. @return Identifies the data returned by a method. Aliases: returns @todo This is a special tag that annotates the documentation with a known task that requires completion (a developer to-do task). Format: @todo Describe the task here @typedef This is a special tag that defines a simple custom type. JavaScript does not enforce types (a weakly typed language). This tag allows developers to document general guidelines for arbitrary simplistic data structures. This is useful when the same type of data is used/expected repeatedly within a code base. Format: /** * @typedef {<type>} <name> (<options>) * <description> */ The <type> can be any valid JavaScript primitive, such as object, string, number, etc. The <name> should describe the data type uniquely throughout the entire code base. (<options>) is an optional list of possible values (enumeration). 
<description> is a custom description of the data type. For example: /** * @typedef {Error} MyError * This is my custom error. */ The example above defines a data type called MyError, which is a custom error. An example using options: /** * @typedef {String} MyLetter (a, b, c) * Identifies my favorite letter. */ This example recognizes a type called MyLetter, a string, which can have a, b, or c as valid values. Aliases: @type @writeonly Indicates a property is only writable. Flags In addition to tags, there are a number of recognized flags that can be used to annotate a documentation snippet. @protected Identifies a protected method/attribute. @deprecated Indicates the feature will no longer be available in a future version. @experimental Indicates the feature is not considered "production ready". @warning Provides a warning message. @hidden Indicates the feature should be hidden but not removed from the documentation. @singleton Indicates a class is a singleton. @interface Indicates a class is an interface. @static Indicates a method is static. @since Identifies the version and/or date when the feature is generally available. This is typically used to identify new features that have been added to the original platform. It is also possible to create a custom flag using @flag <flag_name>. Documenting Parameters While parameters (function arguments) in JavaScript can have default values, there are still several cases where it is necessary to provide greater detail about parameters. For example, some methods only accept a parameter value from a predetermined set (enumeration). Parameters can be documented with additional detail using the following format: /** * @param {type} [<parameter_name>=<default>] (<enumerable_list>) * <description> */ The type indicates the data type, while the [ and ] indicate the parameter is optional. A default value may be supplied, as well as a description.
For example: /** * @param {String} [myParameter=example] (example,a,b) * This is an example parameter. */ The example above describes a string parameter named myParameter. Acceptable (enumerable) values are example, a, and b. The default value is example. The description is This is an example parameter.. Documenting Callback Parameters Callback functions are a unique type of parameter. These parameters may have their own arguments/parameters. Metadoc supports them using a dot notation syntax: /** * @param {function} callback * This is an example callback. * @param {boolean} callback.a * The first element is a. * @param {string} callback.b (possible,values) * The next element is b. */ The comment above indicates a parameter is a callback method that receives two arguments: a and b. The first argument (a) is a boolean value. The second (b) is a string whose value will be either possible or values. Documenting Events Metadoc was built to document the NGN and Chassis libraries. NGN ships with an event emitter class (works with Node.js events.EventEmitter). This class is commonly extended, meaning many classes within the library fire events. As a result, metadoc supports documenting the most common event emitter styles, plus those found in NGN. The following syntax provides a powerful way to generate event documentation overrides: /** * @fires {<arg1_name>:<arg1_type>} <event_name> * <description> */ - @fires is the tag. This is required. - <arg_name> is the optional descriptive name of a callback argument passed to an event handler. - <arg_type> is the data type of the argument passed to an event handler. - <event_name> is the name of the event that gets fired. - <description> is the description of the event. Example: - Basic Event /** * @fires {Object} myEvent * myEvent is fired from time to time. */ this.on('myEvent', function (obj) { console.log(obj) // Outputs { data: 'abc' } }) this.emit('myEvent', { data: 'abc' }) This event is called "myEvent", and it sends an object to event handlers.
- Basic Event: Named Arguments

/**
 * @fires {myName:Object} myEvent
 * myEvent is fired from time to time.
 */

this.on('myEvent', function (obj) {
  console.log(obj) // Outputs { data: 'abc' }
})

this.emit('myEvent', { data: 'abc' })

This is the exact same event as the basic event in #1, but @fires {myName:Object} will produce a label called "myName", which represents the { data: 'abc' } payload, a known Object.

- Complex Event: Multiple Callback Arguments

/**
 * @fires {Object,String} myEvent
 * myEvent is fired from time to time.
 */

this.on('myEvent', function (obj, label) {
  console.log(obj) // Outputs { data: 'abc' }
  console.log(label) // Outputs 'event fired'
})

this.emit('myEvent', { data: 'abc' }, 'event fired')

The major difference is the comma-separated data types ({Object,String}), which tell the documentation generator that the event will send two arguments to event handlers. The first is an Object and the second is a String.

It is possible to document multiple name:type callback arguments by separating them with a comma. @fires {a:Object,b:String} would generate a label called a for the Object argument and a label called b for the String argument. It is also possible for an argument to have more than one valid data type by separating types with the pipe | character. For example, @fires {a:Object|Boolean,b:String} states that the first argument (labeled a) can be an Object or Boolean value.

Post Processors

- metadoc-md: Convert markdown, mermaid, and mathjax descriptions to HTML.
- metadoc-api: Generate a static JSON API (splits metadoc up into individual JSON files for serving over HTTP).
https://www.npmtrends.com/@author.io/metadoc
06 April 2012 06:35 [Source: ICIS news] (adds details throughout)

SINGAPORE (ICIS)--Taiwan's state-owned refining firm CPC was forced to shut its 500,000 tonne/year No 5 cracker at its Kaohsiung complex on Friday morning following a pipeline leak that led to an explosion at the site, a company official said.

"We are still attending to the accident site and we have no other details," the official said.

The explosion, believed to be at a crude butadiene tank at the site, occurred at about 03:00 hours. The subsequent fire was extinguished in about two hours and there were no casualties, they said.

An aromatics facility at the site, which has a 140,000 tonne/year benzene unit, may have been shut following the outage at the cracker, said a source close to the company. However, the likely impact of a shutdown at the No 5 aromatics unit would be limited, the source said, adding that the operating situation at the facility is still uncertain.

CPC's No 3, No 4 and No 6 aromatics units were not directly affected, as they are located away from the site of the explosion.

The CPC official said the No 5 cracker was operating at 90% capacity prior to the outage. CPC facilities include a 230,000 tonne/year No 3 cracker and a 385,000 tonne/year No 4 cracker in Linyuan in southern Taiwan.

The company runs three paraxylene (PX) units at the site with a total production capacity of 660,000 tonnes/year. CPC also produces 170,000 tonnes/year of orthoxylene (OX) at the site.

The BD extraction unit at the No 5 cracker has a capacity of 95,000 tonnes/year.

"And with most of the synthetic rubber plants now either shut or operating at reduced rates, the impact on BD pricing will be limited," a CPC customer said.

Major Taiwanese synthetic rubber producer TSRC Corp runs a 100,000 tonne/year styrene butadiene rubber (SBR) plant; its other 60,000 tonne/year butadiene rubber (BR) plant is operating at a reduced rate of 70-80% because of poor margins.

"Even if the BD price was to go up, we will not accept the higher BD price increase, as the demand for synthetic rubber is very weak and we cannot pass on the costs to our customers," a company source at TSRC said. BD is the feedstock for synthetic rubber.

Additional reporting by Helen Lee, Helen Yan, Mahua Chakravarty, Bohan Loh and Quintella K
http://www.icis.com/Articles/2012/04/06/9548361/Taiwans-CPC-shuts-Kaohsiung-cracker-after-explosion.html
This chapter describes how to configure the CSM and contains these sections:

Before you configure the CSM, you must take these actions:

This example shows how to configure VLANs: This example shows how to configure a physical interface as a Layer 2 interface and assign it to a VLAN: This example shows how to configure the Layer 3 VLAN interface:

The software interface for the CSM is the Cisco IOS command-line interface. To understand the Cisco IOS command-line interface and Cisco IOS command modes, refer to Chapter 2 in the Catalyst 6000 Family IOS Software Configuration Guide. In any command mode, you can get a list of available commands by entering a question mark (?).

This section describes three methods for upgrading the CSM. To upgrade the CSM, you need to open a session to the CSM module being upgraded. During the upgrade, enter all commands on a console connected to the supervisor engine. Enter each configuration command on a separate line. To complete the upgrade, enter the exit command to return to the supervisor engine prompt.

To upgrade the CSM from the supervisor engine bootflash, perform these steps:
Step 2 Set up a session between the supervisor engine and the CSM.
Step 3 Load the image from the supervisor engine to the CSM, where:
zz = 12 if the supervisor engine is installed in chassis slot 1.
zz = 22 if the supervisor engine is installed in chassis slot 2.
Step 4 Reboot the CSM by power cycling the CSM or by entering the following commands on the supervisor engine console:

To upgrade the CSM from a removable Flash PC card inserted in the supervisor engine, perform these steps:
x = 0 if the Flash PC card is installed in supervisor engine PCMCIA slot 0.

To upgrade the CSM from an external TFTP server, perform these steps:
Step 2 Configure the interface that is connected to your TFTP server.
Step 3 Add the interface to the VLAN.
Step 4 Enter the CSM vlan command. See the "Configuring VLANs" section for more information.
Step 5 Add an IP address to the VLAN for the CSM. Step 6 Enter the show csm slot vlan detail command to verify your configuration. See the "Configuring VLANs" section for more information. Step 7 Make a Telnet connection into the CSM with the session CSM-slot-number 0 command. Step 8 Upgrade the image using the upgrade TFTP-server-IP-address c6slb-apc.rev-number.bin command. For information about saving and restoring configurations, refer to the Catalyst 6000 Family IOS Software Configuration Guide. Load balancing on the Catalyst 6000 family switch can operate in two modes: the routed processor (RP) mode and the CSM mode. By default, the CSM is configured in RP mode. The RP mode allows you to configure one or multiple CSMs in the same chassis and run Cisco IOS SLB on the same switch. The following sections provide information about CSM modes: CSM mode allows you to configure a single CSM only. The CSM mode is supported for backward compatibility with previous software releases. The single CSM configuration will not allow Cisco IOS SLB to run on the same switch. Before you can enter CSM configuration commands on the switch, you must specify the CSM that you want to configure. To specify a CSM for configuration, use the module csm slot-number command where slot-number is the chassis slot where the CSM being configured is located. The module csm command places you in CSM configuration submode. All further configuration commands that you enter apply to the CSM installed in the slot you have specified. The command syntax for CSM mode and RP mode configuration is identical with these exceptions: To configure a virtual server for multiple CSMs, perform this task: Specifies the location of the CSM you are configuring. Configures the virtual server. Existing CSM configurations are migrated to the new configuration when the mode is changed from csm to rp using the ip slb mode command. If any Cisco IOS SLB or CSM configuration exists, you are prompted for the slot number. 
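The two-step task above (select the module, then configure the virtual server) can be sketched as follows. The slot number, names, and addresses here are illustrative assumptions, not values from this guide:

```
! Enter the configuration submode of the CSM in chassis slot 5 (assumed slot)
Router(config)# module csm 5
! Configure a virtual server on that module
Router(config-module-csm)# vserver WEB1
Router(config-slb-vserver)# virtual 192.0.2.100 tcp www
Router(config-slb-vserver)# serverfarm WEBFARM
Router(config-slb-vserver)# inservice
```

Repeating the module csm command with a different slot number lets you configure additional CSMs in the same chassis.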
You can migrate from an RP mode configuration to a CSM mode configuration on the Catalyst 6000 family switch. You can only manually migrate from a Cisco IOS SLB configuration to a CSM configuration. The configuration process described here assumes that the switch is in the RP mode. Figure 3-1 shows an overview of the configuration process; both required and optional operations are shown.

To configure the required parameters, see the following sections:

After you configure the required load-balancing parameters on the CSM, you can configure the optional parameters in the following sections:

To save or restore your configurations or to work with advanced configurations, refer to the following sections in Chapter 3 through Chapter 6:

When you install the CSM in a Catalyst 6500 series switch, you need to configure client-side and server-side VLANs. (See Figure 3-2.)

Diagram notes:
* Any router configured as a client-side gateway or a next-hop router for servers more than one hop away must have ICMP redirects disabled. The CSM does not perform a Layer 3 lookup to forward traffic; the CSM cannot act upon ICMP redirects.
** You can configure up to seven gateways per VLAN for up to 256 VLANs and up to 224 gateways for the entire system. If an HSRP gateway is configured, the CSM uses 3 gateway entries out of the 224 gateway entries because traffic can come from the virtual and physical MAC addresses of the HSRP group. (See the "Configuring HSRP" section.)

To configure client-side VLANs, perform this task:
- Configures the client-side VLANs and enters the client VLAN mode1.
- Configures an IP address on the CSM used by probes and ARP requests on this particular VLAN2.
- Configures the gateway IP address.
2The no form of this command restores the defaults.

This example shows how to configure the CSM for client-side VLANs:

To configure server-side VLANs, perform this task:
- Configures the server-side VLANs and enters the server VLAN mode1.
- Configures an IP address for the server VLAN2.
(Optional) Configures multiple IP addresses to the CSM as alternate gateways for the real server3. Configures a static route to reach the real servers if they are more than one Layer 3 hop away from the CSM. Displays the client-side and server-side VLAN configurations. 3The alias is required in the redundant configuration. See the "Configuring Fault Tolerance" section. This example shows how to configure the CSM for server-side VLANs: A server farm or server pool is a collection of servers that contain the same content. You specify the server farm name when you configure the server farm and add servers to it, and when you bind the server farm to a virtual server. When you configure server farms, do the following: You also can configure inband health monitoring for each server farm (see the "Configuring Inband Health Monitoring" section). You can assign a return code map to a server farm to configure return code parsing (see the "Configuring HTTP Return Code Checking" section. To configure server farms, perform this task: Creates and names a server farm and enters the server farm configuration mode1 2. Configures the load-balancing prediction algorithm2. If not specified, the default is roundrobin. (Optional) Enables the NAT mode, client2. See the "Configuring Client NAT Pools" section. (Optional) Specifies that the destination IP address is not changed when the load balancing decision is made. (Optional) Associates the server farm to a probe that can be defined by the probe command2. (Optional) Binds a single physical server to multiple server farms and reports a different weight for each one2. The bindid is used by DFP. (Optional) Sets the behavior of connections to real servers that have failed2. Enables the real servers. Displays information about one or all server farms. This example shows how to configure a server farm, named p1_nat, using the least-connections (leastconns) algorithm. 
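The p1_nat example referenced above might look like the following sketch. The real-server addresses are assumptions; the serverfarm, predictor, real, and inservice commands follow the task table above:

```
Router(config-module-csm)# serverfarm P1_NAT
Router(config-slb-sfarm)# predictor leastconns
Router(config-slb-sfarm)# real 10.1.0.105
Router(config-slb-real)# inservice
Router(config-slb-sfarm)# real 10.1.0.106
Router(config-slb-real)# inservice
```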
The real server with the fewest number of active connections will get the next connection request for the server farm with the leastconns predictor. Real servers are physical devices assigned to a server farm. Real servers provide the services that are load balanced. When the server receives a client request, it sends the reply to the CSM for forwarding to the client. You configure the real server in the real server configuration mode by specifying the server IP address and port when you assign it to a server farm. You enter the real server configuration mode from the server farm mode where you are adding the real server. To configure real servers, perform this task: Identifies a real server as a member of the server farm and enters the real server configuration mode. An optional translation port can also be configured1, 2. (Optional) Sets the weighting value for the virtual server predictor algorithm to assign the server's workload capacity relative to the other servers in the server farm if the round robin or least connection is selected2. (Optional) Sets the maximum number of active connections on the real server2. When the specified maximum is reached, no more new connections are sent to that real server until the number of active connections drops below the minimum threshold. (Optional) Sets the minimum connection threshold2. Enables the real server for use by the CSM2 3. (Optional) Displays information about configured real servers. The sfarm option limits the display to real servers associated with a particular virtual server. The detail option displays detailed real server information. Displays active connections to the CSM. The vserver option limits the display to connections associated with a particular virtual server. The client option limits the display to connections for a particular client. The detail option displays detailed connection information. 3Repeat Steps 1 through 5 for each real server you are configuring. 
This example shows how to create real servers: Policies are access rules that traffic must match when balancing to a server farm. Policies allow the CSM to balance Layer 7 traffic. Multiple policies can be assigned to one virtual server, creating multiple access rules for that virtual server. When configuring policies, you first configure the access rules (maps, client-groups, and sticky groups) and then you combine these access rules under a particular policy. When the CSM is able to match policies, it selects the policy that appears first in the policy list. Policies are located in the policy list in the sequence in which they were bound to the virtual server. You can reorder the policies in the list by removing policies and reentering them in the correct order. Enter the no slb-policy policy name command and the slb-policy policy name command in the vserver submode to remove and enter policies. To configure load-balancing policies, perform this task: Creates the policy and enters the policy submode to configure the policy attributes1. Associates a URL map to a policy2. You must have previously created and configured the URL maps and cookie maps with the map command. See the "Configuring Maps" section. Associates a cookie map to a policy2. Associates an HTTP header map to a policy. Associates this policy to a specific sticky group2. Configures a client filter associated with a policy. Only standard IP access lists are used to define a client filter. Configures the server farm serving a particular load-balancing policy. Only one server farm can be configured per policy2. Marks traffic with a dscp-value if packets matched with the load-balancing policy2. 
This example assumes that the URL map map1 has already been configured and shows how to configure server load-balancing policies and associate them with virtual servers:

You configure maps to define multiple URLs, cookies, HTTP headers, and return codes into groups that can be associated with a policy when you configure the policy. (See the "Configuring Policies" section.) Regular expressions for URLs (for example, url1 and url2) are based on UNIX filename specifications. See Table 3-1 for more information.

To add a URL map, perform this task:
- Creates a group to hold multiple URL match criteria.1, 2
- Specifies a string expression to match against the requested URL2.

* Zero or more characters.
? Exactly one character.
\ Escaped character.
Bracketed range [0-9] Matches any single character from the range.
A leading ^ in a range Does not match any character in the range.
All other characters represent themselves.
.\a Alert (ASCII 7).
.\b Backspace (ASCII 8).
.\f Form feed (ASCII 12).
.\n New line (ASCII 10).
.\r Carriage return (ASCII 13).
.\t Tab (ASCII 9).
.\v Vertical tab (ASCII 11).
.\0 Null (ASCII 0).
.\\ Backslash.
.\x## Any ASCII character as specified in two-digit hex notation.

To add a cookie map, perform this task:
- Configures multiple cookies into a cookie map1.
- Configures multiple cookies1.

This example shows how to configure maps and associate them with a policy:

Creates and names an HTTP header map group. For more information about header maps, see the "Configuring Generic Header Parsing" section.

To create a map for return code checking, perform this task:
Creates and names a return code map group. For more information about return code maps, see the "Configuring HTTP Return Code Checking" section.

Configuring a sticky group involves configuring the attributes of that group and associating it with a policy. Sticky time specifies the period of time that the sticky information is kept. The default sticky time value is 1440 minutes (24 hours).
To configure sticky groups, perform this task:
Ensures that connections from the same client matching the same policy use the same real server1.

This example shows how to configure a sticky group and associate it with a policy:

Virtual servers represent groups of real servers and are associated with real server farms through policies. Configuring virtual servers requires that you set the attributes of the virtual server, specifying the default server farm (default policy), and that you associate other server farms through a list of policies. The default server farm (default policy) is used if a request does not match any SLB policy or if there are no policies associated with the virtual server.

Before you can associate a server farm with the virtual server, you must configure the server farm. For more information, see the "Configuring Server Farms" section. Policies are processed in the order in which they are entered in the virtual server configuration. For more information, see the "Configuring Policies" section.

In software release 2.2(1), you can configure each virtual server with a pending connection timeout to terminate connections quickly if the switch becomes flooded with traffic. This timeout applies to a transaction between the client and server that has not completed the request and reply process.

In software release 2.1(1), the CSM can load-balance traffic from any IP protocol. When you configure a virtual server in vserver submode, you must define the IP protocol that the virtual server will accept. Configure the virtual server in the virtual server configuration submode.

To configure virtual servers, perform this task:
- Identifies the virtual server and enters the virtual server configuration mode1, 2.
- Sets the IP address for the virtual server, an optional port number or name, and the connection coupling and type2. The protocol value is tcp, udp, Any (no port-number is required), or a number value (no port-number is required).
Associates the default server farm with the virtual server2 3. Only one server farm is allowed. If the server farm is not specified, all the requests not matching any other policies will be discarded. (Optional) Configures connections from the client to use the same real server2 3. The default is sticky off. (Optional) Restricts which clients are allowed to use the virtual server2 3. (Optional) Associates one or more content switching policies with a virtual server2. Enables the virtual server for use by the CSM2. Displays information for virtual servers defined for Content Switching. 3These parameters refer to the default policy. This example shows how to configure a virtual server named barnett, associate it with the server farm named bosco, and configure a sticky connection with a duration of 50 minutes to sticky group 12: This example shows how to configure a virtual server name vs1, with two policies and a default server farm when client traffic matches a specific policy. The virtual server will be load balanced to the server farm attached to that policy. When client traffic fails to match any policy, the virtual server will be load balanced to the default server farm named bosco. Transmission Control Protocol (TCP) is a connection-oriented protocol that uses known protocol messages for activating and deactivating TCP sessions. In server load balancing, when adding or removing a connection from the connection database, the Finite State Machine correlates TCP signals such as SYN, SYN/ACK, FIN, and RST. When adding connections, these signals are used for detecting server failure and recovery and for determining the number of connections per server. The CSM also supports User Datagram Protocol (UDP). Because UDP is not connection-oriented, protocol messages cannot be generically sniffed (without knowing details of the upper-layer protocol) to detect the beginning or end of a UDP message exchange. 
Detection of UDP connection termination is based on a configurable idle timer. Protocols requiring multiple simultaneous connections to the same real server (such as FTP) are supported. Internet Control Message Protocol (ICMP) messages destined for the virtual IP address (such as ping) are also handled.

To configure TCP parameters, perform this task:
- Identifies the virtual server and enters the virtual server configuration mode1,2.
- Configures the amount of time (in seconds) that connection information is maintained in the absence of packet activity for a connection2.

This example shows how to configure TCP parameters for virtual servers:

Configuring the Dynamic Feedback Protocol (DFP) allows servers to provide feedback to the CSM to enhance load balancing. DFP allows host agents (residing on the physical server) to dynamically report the change in status of the host systems providing a virtual service.

To configure DFP, perform this task:
- Configures the DFP manager, supplies an optional password, and enters the DFP agent submode1, 2.
- Configures the time intervals between keepalive messages, the number of consecutive connection attempts or invalid DFP reports, and the interval between connection attempts2.
- Displays DFP manager and agent information.

This example shows how to configure the Dynamic Feedback Protocol:

The redirect-vserver command is a server farm submode command that allows you to configure virtual servers dedicated to real servers. This mapping provides connection persistence, which maintains connections from clients to real servers across TCP sessions.

To configure redirect virtual servers, perform this task:
- Configures virtual servers dedicated to real servers and enters the redirect server submode1, 2.
- Configures the destination URL host name used when redirected HTTP requests arrive at this server farm. Only the beginning of the URL can be specified in the relocation string. The remaining portion is taken from the original HTTP request2.
Configures the relocation string sent in response to HTTP requests in the event that the redirect server is out of service. Only the beginning of the relocation string can be specified. The remaining portion is taken from the original HTTP request2. Configures the redirect virtual server IP address and port2. Sets the CSM connection idle timer for the redirect virtual server2. Configures the combination of the ip-address and network-mask used to restrict which clients are allowed to access the redirect virtual server2. Enables the redirect virtual server and begins advertisements2. (Optional) Enables SSL forwarding by the virtual server. Shows all redirect servers configured. This example shows how to configure redirect virtual servers to specify virtual servers to real servers in a server farm: When you configure client Network Address Translation (NAT) pools, NAT converts the source IP address of the client requests into an IP address on the server-side VLAN. Use the NAT pool name in the serverfarm submode of the nat command to specify which connections need to be configured for client NAT pools. To configure client NAT pools, perform this task: Configures a content switching NAT. You must create at least one client address pool to use this command1, 2. Enters the serverfarm submode to apply the client NAT. Associates the configured NAT pool with the server farm. Displays the NAT configuration. This example shows how to configure client NAT pools: NAT for the server allows you to support connections initiated by real servers and to provide a default configuration used for servers initiating connections that do not have matching entries in the server NAT configuration. By default, the CSM allows server-originated connections without NAT. To configure NAT for the server, perform this task: Configures the server-originated connections. 
Options include dropping the connections, configuring them with NAT with a given IP address, or with the virtual IP address that they are associated with1, 2. Configures the static nat submode where the servers will have this NAT option. You cannot use the same real server with multiple NAT configuration options.
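A sketch of the static NAT option described above. The addresses are assumptions, and the exact submode prompts may differ by software release:

```
! Server-originated connections from this real server are NATed
! to the virtual IP address with which the server is associated
Router(config-module-csm)# static nat virtual
Router(config-slb-static)# real 10.1.0.5
```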
http://www.cisco.com/en/US/products/hw/switches/ps708/module_installation_and_configuration_guides_chapter09186a008007fa6c.html
Welcome to the third installment of PHP-ing like a productive person. It's code writing time. Here is how I like to start:

Yes, I'm a toucher. Sue me!

The Front Controller

The index file seems like a good place to start, right? That's the first place your browser will hit. This is the central point of your application. And just for kicks, I will designate this file to be my front controller. What does that mean? It means that it will do all the routing. It will capture the GET requests from the visitors and then send them where they need to go.

One of the things I want in this pastebin incarnation is nice URLs. This is what I consider an ugly URL:

On the other hand, this is what I consider a nice(er) URL:

What's the difference here? Well, the difference is in how Apache understands these two requests. The first one means "Yo, Apache, bring me index.php – I have a GET request for him". The second means "Yo, Apache – I want to go to the 2345 folder and see what's there!" In other words, if you use the second URL Apache will try to find an index file in the /vagrant/www/2345 directory. We don't want that. We want our own application to handle all the routing in-house, so to speak.

How do we accomplish that? We tell Apache to screw off – that's how. There are two ways to do this. In our vagrant server the easiest way to get it done is to use Apache mod_rewrite. First, let's enable it:

Next let's upgrade our site config:

In that file we need to add 5 new lines under the Directory heading. Everything under allow from all is new:

Or you can add the same lines in a .htaccess file in your project directory. The above method is considered a bit safer, but you do what you must. Once you add these lines and restart Apache, all requests will be re-routed to index.php. Right now that file is blank so there is nothing exciting going on yet. So let's change that.

First line of the code includes Composer's autoloader.
This will allow me to seamlessly use the Twig and RedBean libraries without actually having to explicitly include them anywhere. You'll see this in action in just a minute. Don't worry about PasteController.php. It isn't a thing yet. It does not exist. I just made it up. This code will bug the fuck out if you try to access it via browser but that's ok. What I'm doing here is sort of structuring the code in my head and putting it down. Having written all these calls to the non-existent methods gives me a pretty good idea what this new class ought to do:

- If the URI is / then it will show the blank form
- If the URI is /paste it will attempt to handle submitted data
- If it is something else, it will test if the URL is a valid paste number and show the associated paste
- Otherwise, it will display an error

Let's create the controller then!

Building the first class

I have a feeling you're going to be sick of me touching things before this is done:
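Based on that outline, a first sketch of the front controller might look like this. Only showPasteForm and isValidPasteURI are method names that actually appear later in the article; handlePaste, showPaste, and showError are placeholder names of my own:

```php
<?php
// index.php -- the front controller (sketch)
require 'vendor/autoload.php';   // Composer's autoloader
require 'PasteController.php';   // doesn't exist yet -- we create it next

$controller = new PasteController();
$uri = $_SERVER['REQUEST_URI'];

if ($uri === '/') {
    $controller->showPasteForm();           // show the blank form
} elseif ($uri === '/paste') {
    $controller->handlePaste();             // handle submitted data
} elseif ($controller->isValidPasteURI($uri)) {
    $controller->showPaste($uri);           // show the requested paste
} else {
    $controller->showError();               // anything else is an error
}
```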
So, now lets twig it up in our controller: Note, no include statements but Twig works! That’s the magic of auto loading. This is more or less all there is to Twig. It takes two lines to set it up, and another line or two to actually use it. There is really no reason not to include this library in your projects. Testing Seeing how we have our first class taking shape, it is probably a good idea to start building our first unit test: To make a test we extend the PHPUnit_Framework_TestCase class like this: We can run it by issuing the command: As you can see I’m testing the “easily testable” method isValidPasteURI. Why? Because it actually returns a value. These sort of methods are easy to unit test because you can just call them repeatedly with a wide variety of test values, and then make assertions about the results. Methods which produce side-effects and return no values (like showPasteForm) are little less testable. The results should look a bit like this: Why did this test fail? Well, we made our isValidPasteURI method always return true… My test expects it to return false with some values, and true with others. This was exactly what we should have expected. Now I can go back and fix the method so that it passes all my tests. We know that a REQUEST_URI will need to start with a / and be followed by a number so here is a possible implementation: Is this correct? I don’t know, let’s see: As far as my Unit Test is concerned, the method performs acceptably. Is my Unit Test correct? Is it exhaustive enough? Probably not, but I can add to it later if I feel extra vigilant. How do we test methods in which the main functionality is producing side effects? Good question. It seems that we should at least make an effort to test our showPasteForm which renders a Twig template and sends it to the browser. How do we make assertions about output though? 
Well, if you just call the method in your test and run phpunit with –verbose –debug attributes, it will dump the raw HTML to the console and you can eyeball it. But that’s not really testing – that’s checking. We want to test! Sadly, checking output is not really what Unit Testing was designed for. To test this function we would ideally want to emulate what the browser is doing – so generate a fake request to index.php and see if a form is displayed by patterm matching the HTML. That’s not unit testing though. That’s acceptance testing, and there are actually frameworks that let us do that – like Codeception for example. But I want to concentrate on unit tests now, which test elements of your code in isolation. Fortunately PHPUnit has some functionality that can help us test this method. There is a near little assertion named expectOutputRegex. As you can probably imagine it will do some pattern matching on the output. This is how you use it: This is little backwards from all other tests because you assert firsts and run code afterwards, but it works. The downside of this is that you can usually only make one assertion per test. If you assert multiple things, only the last one will actually be tested. So you better pick a good regex. What I want to know when this test runs is that Twig assembled my templates correctly. If you remember, in the header template the title was not defined. It contained a variable. In addition the form template did not have a title tag – it imported it. So the only way for the rendered page to have a title, and a correct one is for Twig to have worked correctly. Hence, that’s what I’m pattern matching in this test. Error Handling Let’s handle that pesky error situation when someone inadvertently tries to access a URI that is not a valid paste address. For example navigating to /foo should produce a 404 error code so that browsers know that nothing is there. 
We already have a method for that in our PasteController but… Well, is that a good place for it? Why should a controller related to paste stuff handle routing errors? How about we make an ErrorController.php class and encapsulate that functionality away: This works, but it doesn't really display any meaningful message to the user. The browser knows there was a 404 error, but the user sees a blank page. It would be better if we could pull in our templating engine and display a nicely formatted error message. But… Well, this is not really an issue, but I already initialized Twig and set it up to my liking in PasteController. Now I will have to do it again. And if I define another controller, I will have to do it there too. I know it's only two lines, but that's not the point: I'm worried about the configuration options (like enabling the auto-escaping feature). I want it to be consistent throughout my application. So let me make a TwigFactory! It will be a static class that will give you an instance of a Twig environment pre-configured to your liking. Both Paste and Error classes will be able to grab it whenever they need it:

Refactoring

My project directory is getting messy. I have controller classes floating around, I have helper classes (like the TwigFactory) and all kinds of other stuff in there. I think it's time to do some cleanup. It is usually a good idea to commit your existing code right before you're going to make major changes: This way if we royally mess something up, we can quickly roll back to the last known good state for the project and start over. Now we can proceed with potentially destructive refactoring actions: This should actually make our working environment much cleaner. Observe: Now that I cleaned everything up I will need to go back and correct the include statements everywhere and then re-test everything. Which reminds me, we should build a unit test for the Error Controller as well.
Good news is that we can crib most of it from the Paste Controller test: Now that I have multiple tests I can actually run them together by simply pointing PHPUnit at my test directory like this:

Auto Loading and Code Standards

I just realized something… I've been doing this wrong. I'm kinda blogging along as I assemble this code, and it suddenly struck me that I am not using one of the great features that comes with Composer – auto-loading. Or rather, I'm using it for all the third party classes, but I still have to "require" my own classes. This is silly. But alas, to actually leverage this feature my code should conform to PSR-0… Which it does not. It is not properly namespaced, and not organized the way it should be. So we need to do even more refactoring. Best do it now, while the code base is still small and uncomplicated. The way Composer's autoloading works is that you can specify your own namespace in the composer.json like this: Here I'm telling composer to look for SillyPastebin code in the src/ directory. This directory does not exist yet. I will have to create it. Once I have this directory in place, the rest is just a pattern matching game. When you instantiate a namespaced class like this: SillyPastebin\Namespace\SomeClass, the composer autoloader simply converts all the PHP namespace delimiters (these things: \) into system path delimiters (in our case /), using the directory you specified in the config file as root. So if it sees the invocation I showed above, it will attempt to auto-load a class located at: /src/SillyPastebin/Namespace/SomeClass.php This is not how my code is organized. I have controller and helper folders in my top level directory and nothing is namespaced. With the code as it is, there is simply no way for me to leverage this great feature. Which means we have some housekeeping and refactoring to do.
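The composer.json mapping described above would look something like this; the namespace-to-directory pair follows the text, while the surrounding structure is the standard Composer autoload block:

```json
{
    "autoload": {
        "psr-0": {
            "SillyPastebin": "src/"
        }
    }
}
```

After editing the file, running `composer dump-autoload` regenerates the autoloader so the new mapping takes effect.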
First let's move the files around to conform to these conventions: When finished, my directory structure will look like this: Next let's refactor our code: And: And: Note that I had to prefix the Twig classes with \ to indicate they are not part of the SillyPastebin\Helper namespace. Finally, here is how you change the index: We'll also need to refactor our tests the same way, but I'm not gonna show that because it's basically more of the same. You remove the require statements, and you add SillyPastebin\Namespace in front of all the class instantiation calls, and that's about it. Last thing to do is to update our composer config to make sure the auto-loading feature works: The whole thing took about five minutes, and I achieved something really cool – I will never, ever have to write another require or include statement for this project. The first line of index.php is actually the only include that I will ever need from now on. This is very, very cool and I get this for free with Composer as long as I namespace my code the right way… Which is what I should be doing anyway.

Lessons Learned

I started this series with the best intentions – I wanted to write this thing right. I wanted to practice what I preached in Part 1, and yet when I sat down and started writing code, bad habits started sneaking in. This happens all the time – you forget, or you willingly ignore some rule or best practice. At first it works out fine, but then as you continue working things start to crumble. The code starts looking subtly wrong. You know what reeled me back onto the correct path again? Unit tests. No seriously, scroll back a few paragraphs and look at the code I posted for the ErrorControllerTest class. I saw these lines piling up on top: Here is what I realized: I may have more helper classes in the future. Also Models, which we didn't even talk about yet. I will have to include every single one of them, on every single unit test. That's silly.
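The backslash rule mentioned above (prefixing global classes from inside a namespace) can be shown with a small self-contained sketch; the namespace follows the post, but the class and its body are made up for illustration:

```php
<?php
namespace SillyPastebin\Helper;

// Hypothetical example class: the namespace is the post's,
// the class itself is not from the article.
class Clock
{
    public static function epochClass()
    {
        // The leading backslash reaches the global DateTime class;
        // without it, PHP would look for SillyPastebin\Helper\DateTime.
        $epoch = new \DateTime('@0');
        return get_class($epoch);
    }
}

var_dump(Clock::epochClass()); // string(8) "DateTime"
```

The same leading backslash is what the post applies to the Twig classes after moving them under the new namespace.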
If I had to do this once in my index file, I'd probably be fine with it – but now I saw dozens of unit tests, with long, long lines of includes on top, and it looked wrong. And so, I ended up refactoring my code to conform to the PSR-0 standard, and to use proper namespacing. Unit testing saved the day once again, and not by detecting errors, but by forcing me to think about how I structure my code. I mentioned this in part one – that's the hidden value of Unit Tests. They make you think not only about what your code is doing, but also how it is doing it. This is the primary reason why I didn't go back and rework this post to look as if I meant to do it this way all along. It would probably be clearer, but I wanted to showcase how even if you have the best intentions you can get easily lulled into complacency, and how following some best practices and using the right tools can jolt you right back onto the correct path. Next time I'll try to finally get some database interaction accomplished using RedBean. Unless I get side-tracked again.

I've still been running along in parallel with my own implementation in Elisp. It's been a lot of fun! It's got syntax highlighting, diffs, and supports three different backend databases. Again, here's a demo hosted for the short term, by the same text editor instance I used to write it. For visitors from the future, here's a screenshot (and those fuzzy timestamps update live!). Try clicking the "diff" link. I kept the server side as simple as possible. Counting only one backend database, it's about 150 lines of code and only serves one static page, a few scripts, and a single JSON form. All the heavy work is done client-side, including page generation, syntax highlighting, and unified diffs. The downside is that I bet it's not very search-engine friendly. I do have some unit tests in place, since you're using unit tests for your pastebin. Thanks, this has been very educational.

Pingback: Unit Testing Sinatra Apps | Terminally Incoherent
http://www.terminally-incoherent.com/blog/2012/12/26/php-like-a-pro-part-3/
kutil_openlog

    #include <sys/types.h>
    #include <stdarg.h>
    #include <stdint.h>
    #include <kcgi.h>

    int kutil_openlog(const char *file);

The kutil_openlog() function configures output for the kutil_log(3) family of functions. By default, these functions log to stderr and inherit the initial output buffering behaviour (see setvbuf(3)). If file is not NULL, kutil_openlog() first redirects stderr to file. Then, regardless of whether file is NULL, the output buffering of the stream is set to line buffered. CGI scripts invoking long-running child processes via fork(2) should use this function with a valid file, as the web server might wait for all file descriptors to close before closing the request connection.

The kutil_openlog() function returns zero on failure (system error) and non-zero on success. If kutil_openlog() fails to re-open stderr, the output stream may no longer be operable: the caller should exit.

The kutil_openlog() function was written by Kristaps Dzonsons <kristaps@bsd.lv>.

kutil_openlog() will fail to create the file and exit with failure.
https://kristaps.bsd.lv/kcgi/kutil_openlog.3.html
Referring to the introduction to Unicode in the Pylons documentation:

In Python source code, Unicode literals are written as strings prefixed with the 'u' or 'U' character:

    >>> u'abcdefghijk'
    >>> U'lmnopqrstuv'

You can also use the ", """ or ''' quoting styles too. For example:

    #!
    u = u'abcdé'
    print ord(u[-1])

When you run it with Python 2.4, it will output the following warning:

    sys:1: DeprecationWarning: Non-ASCII character '\xe9' in file testas.py on line 2,
    but no encoding declared; see for details

and then the following output.

1. If you are working with Unicode in detail you might also be interested in the unicodedata module, which can be used to find out Unicode properties such as a character's name, category, numeric value and the like.

2. Applying this to Web Programming:

    def read_file(filename, encoding):
        if '/' in filename:
            raise ValueError("'/' not allowed in filenames")
        unicode_name = filename.decode(encoding)
        f = open(unicode_name, 'r')
        # ... return contents of
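Everything above is Python 2, where u'' marks a Unicode literal. In Python 3 every str is Unicode, so the same inspection needs no prefix and no source-encoding declaration; a small sketch:

```python
# Python 3: str is Unicode by default, so no u'' prefix or source
# encoding declaration is needed for this inspection.
import unicodedata

u = 'abcdé'
print(ord(u[-1]))                   # 233, i.e. U+00E9
print(unicodedata.name(u[-1]))      # LATIN SMALL LETTER E WITH ACUTE
print(unicodedata.category(u[-1]))  # Ll, a lowercase letter
```

The unicodedata calls show the character properties (name, category) mentioned in note 1 above.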
http://www.cnblogs.com/sharplife/archive/2009/03/16/1413611.html
Hello,

The reason for the threaded performance you are seeing is due to Python's Global Interpreter Lock, which allows only one thread to perform a computation. In your code, the "computation" is the line

    d = list(db.app_login_logout_log.find())

where the GIL is held by the thread that is converting BSON into Python data structures. The GIL is not held for IO, but since the mongod you're connected to is on localhost, comparatively little time is spent doing IO. Using the multiprocessing module allows the program to scale up a bit better. For example, by modifying the code a little to be:

    import pymongo
    import sys
    import time
    from multiprocessing import Process

    def xx(i):
        conn = pymongo.MongoClient('localhost', 27017)
        db = conn.test
        print i, 'started'
        a = time.time()
        d = list(db.test.find().limit(100000))
        print i, 'finished. time:', time.time() - a

    procs = [Process(target=xx, args=(i,)) for i in range(int(sys.argv[1]))]
    start = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print 'all done: %.2f' % (time.time() - start)

The output shows better scaling vs. using threads:

    $ python foo.py 1
    0 started
    0 finished. time: 0.338715076447
    all done: 0.35

    $ python foo.py 10
    0 started
    1 started
    2 started
    3 started
    4 started
    5 started
    6 started
    7 started
    8 started
    9 started
    4 finished. time: 1.09985303879
    5 finished. time: 1.10895490646
    0 finished. time: 1.11398696899
    8 finished. time: 1.12296009064
    3 finished. time: 1.13394904137
    6 finished. time: 1.13271999359
    2 finished. time: 1.13802504539
    9 finished. time: 1.13268995285
    7 finished. time: 1.13850402832
    1 finished. time: 1.15410399437
    all done: 1.17

For more information, please see Thread State and the Global Interpreter Lock.

Best regards,
Kevin
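The same pattern in Python 3, with a CPU-bound stand-in for the BSON decoding so it runs without a MongoDB server — a sketch, not Kevin's code; the workload is invented:

```python
# Python 3 sketch of the same idea: CPU-bound work (standing in for
# BSON decoding) spread across processes, each with its own GIL.
import time
from multiprocessing import Pool

def decode_stand_in(n):
    # CPU-bound loop: threads would serialize on the GIL here,
    # but separate processes each have their own interpreter and GIL
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    start = time.time()
    with Pool(4) as pool:
        results = pool.map(decode_stand_in, [100_000] * 4)
    print('all done: %.2f' % (time.time() - start))
```

As in Kevin's example, the speedup comes from sidestepping the GIL entirely rather than from any change in the per-item work.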
https://marc.ttias.be/mongodb-user/2016-04/msg00097.php
Build an Android Pager Component

One of the most important user interface paradigms for modern mobiles is their swipeable multi-touch interface. The most obvious example of this is the pager view, which allows a user to swipe through pages of information intuitively. The best known example of this is the home screen of iOS, which arranges applications in a grid across multiple pages. This has proven so successful that it has been used in a variety of other places, and has been outright copied by other phone manufacturers. When I started programming for Android, I had assumed there would be an out-of-the-box component allowing me to do this, or at least one that would be easily configurable to perform a paged view. That turned out not to be the case, so I had to dive into the Android source to work out a way of implementing it (and as an aside, this is an out-of-the-box feature of iOS). I thought I'd share the fruits of my labour with you.

Investigating

At first, I started playing with ViewFlipper using Gestures to provide the input to change pages. This worked, but did not give the correct feedback to the user while scrolling. When the pager scrolls, the user expects the page to move with the swipe. With this approach, the page does not animate until the swipe gesture is completed. Next, I toyed with the idea of writing my own scrolling component from scratch. This would give me the most flexibility, but would involve a lot of effort. It's generally much better to build on existing components rather than write your own, so I had one last trawl through the API to see what I could use. It was then that I found HorizontalScrollView. This acts as a traditional scroll view except it works horizontally, which is exactly what I want. It also provides key bindings for free, which is a bonus.
The only missing pieces are a method of getting it to snap to the page boundary when the user releases their finger, an easy to use API for adding and removing pages, and a page indicator to show the current page. I created a subclass, and started customising it to produce the effect I wanted.

Simple Page Management

HorizontalScrollView accepts just one child view by default, and it is up to the programmer to ensure it is of the correct size. I wanted to have multiple child views, or pages, and each page had to take up the entire viewport. To do this I added an addPage() method, which would take a view, resize it to be the size of the viewport, and add it to the LinearLayout which served as the child view of the ScrollView.

    public class Pager extends HorizontalScrollView {
        private LinearLayout contents;
        ...
        public void addPage(View child) {
            int width = getWidth();
            child.setLayoutParams(new LayoutParams(width, LayoutParams.FILL_PARENT));
            contents.addView(child);
            contents.requestLayout();
            firePageCountChanged();
        }
        ...
    }

Snap to Page Boundary

To do this, we simply listen to touch-up events, and then issue a final smoothScrollTo() call to scroll back to the nearest page boundary:

    public boolean onTouchEvent(MotionEvent evt) {
        boolean result = super.onTouchEvent(evt);
        int width = getWidth();
        if (evt.getAction() == MotionEvent.ACTION_UP) {
            int pg = (getScrollX() + width / 2) / width;
            smoothScrollTo(pg * width, 0);
        }
        return result;
    }

Additional code is required to handle keyboard based events in a similar fashion, which I leave as an exercise for the reader. Pager at GitHub contains the full source code within the PagingScrollerExample.
But I thought it might be nice to have an iOS style page indicator at the bottom of the screen. This was implemented by creating a separate custom view component to display the pages, which I called a PageIndicator. This component exists independently of the pager so that you can place the view where you wish. In your activity's startup code, you simply tell the page indicator which pager to listen to, and it automatically updates itself from then on. In order for the page indicator to update itself properly, it needs to know when the page changes, and when pages are added or removed from the pager. Inexplicably, there is no default listener for ScrollView in Android, so the only way you can receive notification is to trap it in a subclass using the onScrollChanged method. In my code, I created a second set of Listener/Event classes called OnPageChangeListener, with two events named onPageChange() and onPageCountChange(). PageIndicator at GitHub contains the full source code within the PagingScrollerExample.

Try for Yourself

I have produced a sample application which demonstrates the use of the view, and the code is available on GitHub. Check out the PagingScrollerExample. If you'd like to try it out, clone the repo, load up the project in Eclipse (with the ADT Plugin for Eclipse installed), and you should be away.
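The listener plumbing described above can be sketched in plain Java; the interface and event names follow the article, but the Android view classes are omitted and the bodies are my own guesses:

```java
import java.util.ArrayList;
import java.util.List;

interface OnPageChangeListener {
    void onPageChange(int newPage);
    void onPageCountChange(int newCount);
}

// Stand-in for the state the Pager exposes; in the real component
// firePageChanged() would be driven from onScrollChanged() and
// firePageCountChanged() from addPage().
class PageTracker {
    private final List<OnPageChangeListener> listeners = new ArrayList<>();
    private int page = 0;
    private int pageCount = 0;

    void addOnPageChangeListener(OnPageChangeListener l) { listeners.add(l); }

    void firePageChanged(int newPage) {
        page = newPage;
        for (OnPageChangeListener l : listeners) l.onPageChange(newPage);
    }

    void firePageCountChanged(int newCount) {
        pageCount = newCount;
        for (OnPageChangeListener l : listeners) l.onPageCountChange(newCount);
    }

    int getPage() { return page; }
    int getPageCount() { return pageCount; }
}

public class Main {
    public static void main(String[] args) {
        PageTracker tracker = new PageTracker();
        final int[] lastPage = { -1 };
        tracker.addOnPageChangeListener(new OnPageChangeListener() {
            @Override public void onPageChange(int p) { lastPage[0] = p; }
            @Override public void onPageCountChange(int c) { }
        });
        tracker.firePageChanged(2);
        System.out.println(lastPage[0]); // prints 2
    }
}
```

A PageIndicator would simply register itself with addOnPageChangeListener() and redraw in the callbacks.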
https://www.sitepoint.com/how-to-build-an-android-pager-component/
EDIT: This is not about fat arrows. It's also not about passing this to an IIFE. It's a transpiler-related question.

So I've created a simple pub-sub for a little app I'm working on. I wrote it in ES6 to use spread/rest and save some headaches. I set it up with npm and gulp to transpile it, but it's driving me crazy. I made it a browser library but realized it could be used anywhere, so I decided to make it CommonJS and AMD compatible. Here's a trimmed down version of my code:

    (function(root, factory) {
        if (typeof define === 'function' && define.amd) {
            define([], function() { return (root.simplePubSub = factory()) });
        } else if (typeof module === 'object' && module.exports) {
            module.exports = (root.simplePubSub = factory())
        } else {
            root.simplePubSub = root.SPS = factory()
        }
    }(this, function() {
        // return SimplePubSub
    });

But Babel transpiles the wrapper's closing `}(this, function() {` into `}(undefined, function() {`, and changing it by hand to `}((window || module || {}), function() {` doesn't feel like a real fix.
Note, this will also stop the automatic addition of use strict since that is part of the ES6 spec too, you may want to add back babel-plugin-transform-strict-mode to keep your code strict automatically. As mentioned in the comments, there are a few community presets that now do this for you. I'd probably recommend babel-preset-es2015-webpack or babel-preset-es2015-script, both of which are es2015 without transform-es2015-modules-commonjs included.
https://codedump.io/share/bTF9hMgw2toe/1/how-to-stop-babel-from-transpiling-39this39-to-39undefined39
Introductions to elliptic curves often start by saying that elliptic curves have the form

    y² = x³ + ax + b

where 4a³ + 27b² ≠ 0. Then later they say "except over fields of characteristic 2 or 3." What does characteristic 2 or 3 mean? The order of a finite field is the number of elements it has. The order is always a prime or a prime power. The characteristic is that prime. So another way to phrase the exception above is to say "except over fields of order 2ⁿ or 3ⁿ." If we're looking at fields not just of characteristic 2 or 3, but order 2 or 3, there can't be that many of them. Why not just list them? That's what I plan to do here.

General form of elliptic curves

All elliptic curves over a finite field have the form

    y² + a1xy + a3y = x³ + a2x² + a4x + a6,

even over fields of characteristic 2 or 3. When the characteristic of the field is not 2, this can be simplified to

    y² = 4x³ + b2x² + 2b4x + b6

where b2 = a1² + 4a4, b4 = 2a4 + a1a3, and b6 = a3² + 4a6. When the characteristic is at least 5, the form can be simplified further to the one at the top with just two parameters.

General form of the discriminant

The discriminant of an elliptic curve is something like the discriminant of a quadratic equation. You have an elliptic curve if and only if it is not zero. For curves of characteristic at least five, the condition is 4a³ + 27b² ≠ 0, but it's more complicated for characteristic 2 and 3. To define the discriminant, we'll need to use b2, b4, and b6 from above, and also

    b8 = a1²a6 + 4a2a6 − a1a3a4 + a2a3² − a4².

Now we can define the discriminant Δ in terms of all the b's:

    Δ = −b2²b8 − 8b4³ − 27b6² + 9b2b4b6.

See Handbook of Finite Fields, page 423.

Enumerating coefficients

Now we can enumerate which parameter combinations yield elliptic curves with the following Python code.
    from itertools import product

    def discriminant(a1, a2, a3, a4, a6):
        b2 = a1**2 + 4*a4
        b4 = 2*a4 + a1*a3
        b6 = a3**2 + 4*a6
        b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
        delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
        return delta

    p = 2
    r = range(p)
    for (a1, a2, a3, a4, a6) in product(r, r, r, r, r):
        if discriminant(a1, a2, a3, a4, a6) % p != 0:
            print(a1, a2, a3, a4, a6)

The code above does return the values of the a's that yield an elliptic curve, but in some sense it returns too many. For example, there are 32 possible combinations of the a's when working over GF(2), the field with two elements, and 16 of these lead to elliptic curves. But some of these must lead to the same set of points because there are only 4 possible (x, y) affine points on the curve, plus the point at infinity. Now we get into a subtle question: when are two elliptic curves the same? Can two elliptic curves have the same set of points and yet be algebraically different? Sometimes, but not usually. Lenstra and Pila [1] proved that two elliptic curves can be equal as sets but not equal as groups if and only if the curve has 5 points and the field has characteristic 2. [2] Lenstra and Pila give the example of the two equations y² + y = x³ + x² and y² + y = x³ + x over GF(2). Both determine the same set of points, but the two curves are algebraically different because (0,0) + (0,0) equals (1,1) on the first curve and (1,0) on the second.

Enumerating points on curves

The following Python code will enumerate the set of points on a given curve.

    def on_curve(x, y, a1, a2, a3, a4, a6, p):
        left = y**2 + a1*x*y + a3*y
        right = x**3 + a2*x**2 + a4*x + a6
        return (left - right) % p == 0

    def affine_points(a1, a2, a3, a4, a6, p):
        pts = set()
        for x in range(p):
            for y in range(p):
                if on_curve(x, y, a1, a2, a3, a4, a6, p):
                    pts.add((x, y))
        return pts

We can use this code, along with Lenstra and Pila's result, to enumerate all elliptic curves of small order.
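Lenstra and Pila's example can be verified directly; this sketch restates the two functions so it runs on its own:

```python
# Check that y² + y = x³ + x² and y² + y = x³ + x have the same
# affine points over GF(2), as Lenstra and Pila's example claims.
def on_curve(x, y, a1, a2, a3, a4, a6, p):
    left = y**2 + a1*x*y + a3*y
    right = x**3 + a2*x**2 + a4*x + a6
    return (left - right) % p == 0

def affine_points(a1, a2, a3, a4, a6, p):
    return {(x, y) for x in range(p) for y in range(p)
            if on_curve(x, y, a1, a2, a3, a4, a6, p)}

c1 = affine_points(0, 1, 1, 0, 0, 2)  # y² + y = x³ + x²
c2 = affine_points(0, 0, 1, 1, 0, 2)  # y² + y = x³ + x
print(c1 == c2)     # True: identical point sets
print(len(c1) + 1)  # 5: four affine points plus the point at infinity
```

The equality of the two sets is exactly what makes this pair interesting: the group structures still differ, as the addition example above shows.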
All elliptic curves over GF(2)

Now we can list all the elliptic curves over the field with two elements.

Curves of order 5

The two curves in the example of Lenstra and Pila are the only ones over GF(2) with five points. So the two curves of order 5 over GF(2) are

    y² + y = x³ + x²
    y² + y = x³ + x.

They determine the same set of points but are algebraically different.

Curves of order 4

There are four curves of order 4. They contain different sets of points, i.e. each omits a different one of the four possible affine points.

    y² + xy = x³ + 1
    y² + xy = x³ + x
    y² + xy + y = x³ + x²
    y² + xy + y = x³ + x² + x

Curves of order 3

There are two distinct curves of order 3, each determined by two equations. The first curve is determined by either of

    y² + y = x³
    y² + y = x³ + x² + x

and the second by either of

    y² + y = x³ + 1
    y² + y = x³ + x² + x + 1

Curves of order 2

There are 4 curves of order two; each contains a different affine point.

    y² + xy + y = x³ + 1
    y² + xy + y = x³ + x + 1
    y² + xy = x³ + x² + 1
    y² + xy = x³ + x² + x

Curves of order 1

These are curves containing only the point at infinity:

    y² + y = x³ + x + 1
    y² + y = x³ + x² + 1

There are no affine points because the left side is always 0 and the right side is always 1 for x and y in {0, 1}.

All elliptic curves over GF(3)

There are too many elliptic curves over GF(3) to explore as thoroughly as we did with GF(2) above, but I can report the following results that are obtainable using the Python code above. An elliptic curve over GF(3) contains between 1 and 7 points. Here are the number of parameter combinations that lead to each number of points.

    1: 9
    2: 22
    3: 26
    4: 15
    5: 26
    6: 22
    7: 9

Obviously there's only one curve with one point, the point at infinity, so the nine coefficient combinations that lead to a curve of order 1 determine the same curve. There are 9 distinct curves of order 2 and 12 distinct curves of order 3. All the curves of orders 4, 5, 6, and 7 are distinct.

Related posts

[1] H. W.
Lenstra, Jr and J. Pila. Does the set of points of an elliptic curve determine the group? Computational Algebra and Number Theory, 111-118. [2] We are not considering isomorphism classes here. If two curves have a different set of points, or the same set of points but different group properties, we’re considering them different.
https://www.johndcook.com/blog/2019/03/11/elliptic-curves-gf2-gf3/
sorry, i know this is a bit numpty... i have a radeon 4850 running on opensuse 11.0 using the latest drivers (v1.01.0-beta for lnx64). the x driver seems to work (i can use it with a display) but my test cal program fails to detect any devices attached. the program:

    #include "cal.h"
    #include "calcl.h"
    #include <iostream>

    int main(int argc, char** argv)
    {
        // #0. init the card
        std::cout << 123 << std::endl;
        if (calInit() != CAL_RESULT_OK) {
            std::cout << "BAD: cal init failed. bailing. \n";
            return -1;
        }

        // #1. get the cal version
        CALuint v[3];
        calGetVersion(&v[0], &v[1], &v[2]);
        std::cout << "cal runtime version: " << v[0] << "." << v[1] << "." << v[2] << std::endl;

        // #2. get number of devices on the system
        CALuint ndev = -1;
        if (calDeviceGetCount(&ndev) != CAL_RESULT_OK) {
            std::cout << "BAD: failed to retrieve number of devices. \n";
        }
        std::cout << "number of cal devices: " << ndev << std::endl;

        // #3. get the 0th device info
        CALdeviceinfo dev_info;
        if (calDeviceGetInfo(&dev_info, 0) != CAL_RESULT_OK) {
            std::cout << "BAD: failed to retrieve device info for device 0. \n";
        }
        return 0;
    }

gives output:

    123
    cal runtime version: 1.1.1
    number of cal devices: 0
    BAD: failed to retrieve device info for device 0.
only non os thing i installed is the stream sdk using the driver and sdk package i got from here: looking at the readme, my expectation was that in this package are the driver, the cal sdk and the brook sdk - i.e. everything i needed. this made my xwindows work very nicely so i thought something knew something about my card. but i had problems running both brook and call samples either prebuilt ones or the ones i built myself. as far as versions. looking at the readme in the sdk package: - the driver is v 8.49.4 (x86_64) - the cal sdk is 1.01.1_beta (x86_64) - the brook sdk is 1.01.1_beta (x86_64) looking at the drivers , the latest version is 8.6 but the slight problem there is that there does not seem to be one for Radeon HD 4xxx. should i just try the HD 3xxx one? again, thanks for any help. Hi, thanks is there a suse section i am missing there? no, not at all. i would prefer ubuntu to windoze. would it be possible to get the precise software stack specification: - os version - version of the stream sdk (i guess this would have to be v 1.01.1_beta?) - version of the driver (if different from the one which shipps with the sdk) - any other packages/updates needed to get it going? i had a bit of a look at versions and i will try the latest driver which is 8.6 instead of the one which ships with the sdk. but after that, my next step will be windoze or/and ubuntu thanks a lot for that!! Here is the thread I was refering to: The currently shipped SDK does not 'officially' support Radeon HD4850 and HD 4870 yet, as it was released before the card was available. Although there is some reports of it working with certain configurations, however there is not a blessed version. We are working on this and will have official support soon. nah fair. a couple of questions: 1. when will the ati pope dispense his blessing? i am not looking for a date here just some indication as in days, months, years?? 2. what is the meekest hd radeon supported by the stream sdk on linux. 
being a newbiee my primary concern is learning. 0.1 TFlops or 1Tflops is more or less the same to me at this point. so the idea would be to set up the environment and learn. when the new sdk is out switch to 4850. thanks The meekest card that supports all the features is the HD3850. Although other cards such as HD34XX and HD36XX are supported, they don't support double precision and scatter/gather and thus are not the best cards to be learning with. I can't give an exact date on the release but maybe someone who can will chime in with an estimate. cool. thanks for that.
https://community.amd.com/thread/97532
CC-MAIN-2019-13
refinedweb
805
84.47
Subject: [boost] [log] Boost.Log formal review
From: barend (barend_at_[hidden])
Date: 2010-03-16 15:39:56

Hi,

This is my review of Boost.Log submitted by Andrey Semashev.

> Please explicitly state in your review whether
> the library should be accepted.

I've been doubting about this for a few days. When I started my review, by trying to use the library, I discovered that the library is not header-only. I had thought that a logging library should be lightweight and had therefore expected only header files. I then tried the library from John Torjo from 2007, but it is not header-only either, so no reason to favour that one (I didn't dive into it further). Because of some messages on the list, a.o.:

> I think that any reasonable log library must be compiled
> to it's own lib due to the intermodule singleton requirement

I realized that it is indeed useful to have a non-header-only option as well. The answers of Andrey were satisfactory, a.o.:

> If we have a list of places in Boost where logging may be applied,
> perhaps we could compile a set of requirements for such a lightweight
> wrapper. When we have it, it would be easier to tell, whether Boost.Log
> fits the role or not, or whether the wrapper should be part of it or a
> separate submission. I'd be interested to hear from the library authors
> on this topic.

And I found in the Boost.Log documentation, TODO-list:

> Think over a header-only configuration. Perhaps, with a reduced
> functionality.

So, good. Then I started the real review and I started to like the library: it is definitely useful, it is configurable, and trivial logging is quite simple. It also contains very useful non-trivial features such as writing to the Windows Event Log and raising alarms about critical states as balloon tips. So I finally decided to give my YES vote, contingent on the condition that a lightweight header-only configuration, e.g. single-threaded, single-module, will be available for library writers. Some more about this below.
---------------------------------------------------------------------

> What is your evaluation of the design?

I like the options, the trivial logging, multiple back-ends, modularity and extensibility. I didn't have a detailed look at the design, but in general it looks good to me.

> What is your evaluation of the implementation?

I didn't have a detailed look either. I glanced through the sources and the implementation is looking neat. I wonder why the class "basic_slim_string" is necessary; to me it sounds very strange that a task such as logging needs its own string implementation. However, I didn't have the time to dive into it and didn't ask questions on this on the list.

> What is your evaluation of the documentation?

The documentation is good. Pages like design overview, tutorial and installation are easy to follow. The tutorial pages however contain some omissions, and you have to refer to the examples to see what is missing. I mean here the chapter "Trivial logging with filters", which seems to give a complete example but misses the namespace aliases; two of them would have been enough.

> What is your evaluation of the potential usefulness of the library?

Very useful, definitely, for applications and for libraries.

> Did you try to use the library? With what compiler? Did you have any
> problems?

Yes, I used it with Visual C++ 2008 Express Edition. Because Boost.Log is not header-only, and I don't have the Boost libs compiled on my machine (don't ask me why, I never have them), I had several issues which other reviewers probably didn't run into. The defines _CRT_SECURE_NO_WARNINGS and _SCL_SECURE_NO_WARNINGS were necessary to suppress M$ warnings. The define BOOST_ALL_NO_LIB is of course necessary to build it like I did, and it works for Boost.Log. The compiler complained about a missing simple_event_log.h in event_log_backend.cpp, so I drafted it by hand, but I later saw that it is documented and you should neatly use the message compiler, so OK.
After adding some Boost source files from thread, filesystem, regex and system it worked; datetime was not necessary. A complete build (including all mentioned sources) takes about 2 minutes.

The tutorial did work (see message above on filtering; adding namespace aliases was necessary) and I experimented with some things. Many options seem very useful to me, such as the log format, automatic date/time, log file rotation, log file max size, etc. I like the streaming feature (so for me printf is not necessary).

I have some reservations about the compiler support: GCC 3.4.5, the default on Linux and MinGW, is not supported, and for a task such as logging this seems to me a disadvantage.

> How much effort did you put into your evaluation? A glance? A quick
> reading? In-depth study?

About five to six hours spread over some days.

> Are you knowledgeable about the problem domain?

Yes, I always need logging. In our library at Geodan we built logging as a singleton in 1995. We rewrote it to logging-over-DLLs in 2000, but these two were not 10% as sophisticated and complete as this logging library is. So yes, these things were not header-only either.

---------------------------------------------------------------------

So back to logging for libraries. If Boost libraries which are header-only start to use this library, without protection, thousands and thousands of project files and makefiles worldwide will be broken. This answer:

> If an existing header-only library starts using the Boost.Log library
> we'll need to update the makefiles. This is a reasonable cost

(not from Andrey) is therefore not satisfactory to me. This answer:

> Which probably means that the decision is with the author(s) of each
> individual library.

(from the review manager) surprised me a bit; I think that in a library under review such an important thing may certainly be discussed, per library.
However, as said, it is not a reason to vote no, because the rest of the library is quite good. A small macro (sorry) wrapper would already do most of the trick, and avoid breaking project files:

#if defined(BOOST_ALL_LOG) || defined(BOOST_GEOMETRY_LOG)
#define BOOST_LOG_GEOMETRY(o, s) BOOST_LOG_TRIVIAL(o) << s
#else
#define BOOST_LOG_GEOMETRY(o, s)
#endif

Where it can be called in libraries as:

BOOST_LOG_GEOMETRY(info, "Log line with a value " << 3);

This is inspired by the general define "BOOST_ALL_NO_LIB" and the library-specific define "BOOST_LOG_NO_LIB". This small wrapper would really be useful, at least for us, and would replace the #ifdef BOOST_GEOMETRY_DEBUG lines which are currently in our Boost.Geometry library. This is zero cost when neither of the macros BOOST_ALL_LOG nor BOOST_GEOMETRY_LOG is defined (by default they are not), and gives you much of the functionality of Boost.Log, such as filtering, rotation, etc., when one of these macros is defined. Of course it can be enhanced, or there are better solutions. Anyway, a lightweight solution will not be difficult.

---------------------------------------------------------------------

So thanks Andrey for submitting this library, thanks Volodya for managing the review, and I hope it will be accepted with a lightweight header-only configuration included.

Regards, Barend

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
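[Editor's note: the compile-out wrapper proposed in the post above can be sketched without Boost at all. The sketch below replaces BOOST_LOG_TRIVIAL with a plain std::ostringstream sink so it is self-contained; all names here (DEMO_LOG, g_sink, take_log) are illustrative, not part of Boost.Log.]

```cpp
// Minimal, Boost-free sketch of the compile-out logging wrapper idea.
// When DEMO_LIB_LOG_ENABLED is not defined, DEMO_LOG expands to a no-op,
// so the logging statements cost nothing in client builds.
#include <sstream>
#include <string>

static std::ostringstream g_sink;  // stand-in for the real logging backend

#define DEMO_LIB_LOG_ENABLED       // comment out to compile logging away

#ifdef DEMO_LIB_LOG_ENABLED
#define DEMO_LOG(sev, s) (g_sink << #sev << ": " << s << '\n')
#else
#define DEMO_LOG(sev, s) ((void)0) // zero cost when logging is disabled
#endif

// Drain the sink (handy for inspecting what was logged).
std::string take_log() {
    std::string out = g_sink.str();
    g_sink.str("");
    return out;
}
```

Usage mirrors the post: DEMO_LOG(info, "Log line with a value " << 3);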
https://lists.boost.org/Archives/boost/2010/03/163281.php
Help in C please if you know a little about C help me

#include <stdlib.h>

int main()
{
    int i;
    int c;
    char me[20];
    printf("What is your first name?\n");
    scanf("%s",&me);
    getchar ();
    if ("%s" == "Bob")
    {
        printf("My name is too!\n");
    }
    printf("Darn, nice to meet you %s.\n",me);
    printf("How old are you?\n");
    scanf("%i",&i);
    if (i>50)
    {
        printf("You are older than me!, by %d years!\n",i-50);
    }
    else
    {
        printf("I am %d years older than you!\n",50-i);
    }
    getchar ();
    printf("Well anyway my name is Bob, so nice to meet you!\n");
    getchar ();
}

This is my example code, but here's what happens: I do not know how to test if a string's contents are equivalent to the name Bob. Could anyone help me on how to test a string :-(
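[Editor's note: a sketch of an answer, not from the original thread. C strings cannot be compared with ==, which compares pointer addresses, and `if ("%s" == "Bob")` compares two unrelated string literals; use strcmp from <string.h> to compare contents. The code above would also need #include <stdio.h> for printf/scanf, and the scanf call should pass `me`, not `&me`. The helper name is_bob below is illustrative.]

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if the given name is exactly "Bob", 0 otherwise.
   strcmp returns 0 when the two strings have identical contents. */
int is_bob(const char *name)
{
    return strcmp(name, "Bob") == 0;
}
```

In the program above, the broken comparison would then become: if (is_bob(me)) { printf("My name is too!\n"); }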
http://tibasicdev.wikidot.com/forum/t-116936/help-in-c-language
Mysteries of Auto Layout, Part 2

Continue your pursuit of Auto Layout mastery. Gain high-level insight into how Auto Layout works, and strategies for debugging layout issues. Learn how to use new APIs for constraint creation and layout guides to build more complex layouts.

JESSE DONALDSON: Hi, everyone. Thanks for coming. My name is Jesse, and I am responsible for Auto Layout in the AppKit and Foundation frameworks.

Layout is one of the most fundamental tasks that we perform when we build an application, and Auto Layout is about the neatest thing ever, but sometimes it can seem kind of mysterious, and so today I want to look at a few aspects of Auto Layout that are less well understood and go through them in some detail. This is the second part of our two-part series, and here's a brief list of the topics we're going to be looking at.

I would like to start with the layout cycle. You probably know how to configure your user interface, but Auto Layout can still be a little bit of a black box. You kind of configure things, you run your application, you get some layout. Hopefully it's the layout that you want, but if it's not, it can be hard to know where to look. So I want to look at what happens in the middle here, how we actually go from having constraints on the views to having frames assigned to those views.

So here is a high-level overview of the process. We start with the application run loop cheerfully iterating until the constraints change in such a way that the calculated layout needs to be different. This causes a deferred layout pass to be scheduled. When that layout pass eventually comes around, we go through the hierarchy and update all the frames for the views. This is a little abstract, so I made a simple example here.
The idea is that when we uncheck this top checkbox, we'll modify a constraint to shrink the window and hide the checkboxes on the bottom. So we start with frames looking like this. When we change the constraint, the layout engine's notion of where everything is has already changed, but the UI hasn't updated yet. And then when the layout pass comes along, the UI actually changes to match what the engine thinks it should be.

So let's talk about constraint changes. The constraints that you create are converted to mathematical expressions and kept inside the Layout Engine. So a constraint change is really just anything that affects these expressions, and so that includes some of the obvious things like activating or deactivating constraints or changing the priority or the constant on a constraint, but also less obvious things like manipulating the view hierarchy or reconfiguring certain kinds of controls, because those may cause constraint changes indirectly.

So what happens when a constraint changes? Well, the first thing that happens is that the Layout Engine will recompute the layout. These expressions are made up of variables that represent things like the origin or the size of a particular view. And when we recalculate the layout, these variables may receive new values. When this happens, the views that they represent are notified, and they mark their superview as needing layout. This is actually what causes the deferred layout pass to be scheduled. So if we look at the example here, this is where you see the frame actually change in the Layout Engine but not yet in the view hierarchy.

So when the deferred layout pass comes along, the purpose of this, of course, is to reposition any views that are not in the right place, so that when we are finished, everything is in the right spot. And "a pass" is actually a little bit of a misnomer: there are a couple of passes that happen here. The first is for updating constraints.
The idea with this is to make sure that if there are any pending changes to constraints, they happen now, before we go to all the trouble to traverse the view hierarchy and reposition all the views. And then the second pass is when we do that view repositioning.

So let's talk about update constraints. Views need to explicitly request that their updateConstraints method be called. And this pretty much works the same way as setNeedsDisplay: you call setNeedsUpdateConstraints, and then some time later your updateConstraints method will be called.

So why change constraints inside updateConstraints? Well, it boils down to performance. If you find that just changing your constraints in place is too slow, then updateConstraints might be able to help you out. It turns out that changing a constraint inside updateConstraints is actually faster than changing a constraint at other times. The reason for that is because the engine is able to treat all the constraint changes that happen in this pass as a batch. This is the same kind of performance benefit that you get by calling activate constraints on an entire array of constraints, as opposed to activating each of those constraints individually.

One of the common patterns where we find that this is really useful is if you have a view that will rebuild constraints in response to some kind of a configuration change. It turns out to be very common for clients of these kinds of views to need to configure more than one property, so it's very easy for the view, then, to end up rebuilding its constraints multiple times. That's just a lot of wasted work. It's much more efficient in these kinds of situations to have the view just call setNeedsUpdateConstraints, and then when the update constraints pass comes along, it can rebuild its constraints once to match whatever the current configuration is.

In any case, once this pass is complete, we know the constraints are all up to date, and we are ready to proceed with repositioning the views.
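[Editor's note: the rebuild-once pattern just described might be sketched like this; the class, property, and constraint choices are illustrative, not from the session.]

```swift
import UIKit

// A view whose constraints depend on a configurable property.
// Setters only invalidate; the rebuild happens once, in the
// batched update-constraints pass.
class BadgeView: UIView {
    var badgeCount = 0 {
        didSet { setNeedsUpdateConstraints() }  // invalidate, don't rebuild here
    }

    private var badgeConstraints: [NSLayoutConstraint] = []

    override func updateConstraints() {
        // Rebuild once per pass, whatever the current configuration is.
        NSLayoutConstraint.deactivate(badgeConstraints)
        badgeConstraints = [
            widthAnchor.constraint(equalToConstant: CGFloat(20 + 8 * badgeCount))
        ]
        NSLayoutConstraint.activate(badgeConstraints)
        super.updateConstraints()  // call super last
    }
}
```

Clients can now set badgeCount several times in a row without paying for repeated constraint churn.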
So this is where we traverse the view hierarchy from the top down, and we'll call layoutSubviews on any view marked as needing layout. On OS X, this method is called layout, but the idea is the same. The purpose is for the receiver to reposition its subviews; it's not for the receiver to reposition itself. So what the framework implementation does is read frames for the subviews out of the Layout Engine and then assign them. On the Mac we use setFrame for this, and on iOS, it's setBounds and setCenter, but the idea is the same. So if we look at the example again, this is where you actually see the UI update to match the frames that are in the Layout Engine.

One other note about layoutSubviews: a lot of people will override this in order to get some kind of a custom layout, and it's fine if you need to do this, but there are some things that you need to know, because it can be very easy to do things here that can get you into trouble. So I want to look at this in a little more detail.

You should really only need to override layoutSubviews if you need some kind of a layout that just can't be expressed using constraints. If you can find a way to do it using constraints, that's usually more robust, more trouble-free. If you do choose to override this, you should keep in mind that we're in the middle of the layout ceremony at this point. Some views have already been laid out, other views haven't been, but they probably will be soon, and so it's a bit of a delicate moment. There are some special rules to follow.

One is that you need to invoke the superclass implementation. We need that for various bookkeeping purposes. Also, it's fine to invalidate the layout of views within your subtree, but you should do that before you call through to the superclass implementation. Second, you don't want to call setNeedsUpdateConstraints. There was an update constraints pass; we went through that, we finished it, and so we missed it. If we still need it now, it's too late.
Also, you want to make sure you don't invalidate the layout of views outside your subtree. If you do this, it can be very easy to cause layout feedback loops, where the act of performing layout actually causes the layout to be dirtied again. Then we can just end up iterating forever, and that's no fun for anybody.

You'll often find inside a layoutSubviews override that you need to modify constraints in order to get your views in the right places, and that's fine too, but again, you need to be careful. It can be difficult to predict, when you modify a constraint, what other views in the hierarchy might be affected. So if you are changing constraints, it's very easy to accidentally invalidate layout outside your subtree. In any case, assuming that all this goes smoothly, the layout cycle is complete at this point, everything is in the right place, and our constraint change has been fully applied.

So some things to remember about the layout cycle: first, don't expect view frames to change immediately when you modify a constraint. We've just been through this whole process about how that happens later. And if you do find that you need to override layoutSubviews, be very careful to avoid layout feedback loops, because they can be a pain to debug.

So next I'd like to talk about how Auto Layout interacts with the legacy layout system. Traditionally, we positioned views just by setting the frame, and then we have an autoresizingMask that specifies how the view should be resized when its superview changes size. Then under Auto Layout, we just do everything with constraints. And in fact, setFrame doesn't even work the way you might expect. You can still set the frame of a view, and it will move where you put it, but that frame may be overwritten at any time if a layout pass comes along and the framework copies the frame from the Layout Engine and applies it to that view. The trouble with this is that sometimes you just need to set the frame.
For example, if you are overriding layoutSubviews, you may need to set the frame of those views. And so luckily, there's a flag for that. It's called translatesAutoresizingMaskIntoConstraints. It's a bit of a mouthful, but it pretty much does what it says: it makes views behave the way that they did under the legacy layout system, but in an Auto Layout world.

So if you set the frame on a view with this flag, the framework will actually generate constraints that enforce that frame in the Layout Engine. What this means is that you can set the frame as often as you like, and you can count on Auto Layout to keep the view where you put it. Furthermore, these constraints actually implement the behavior of the autoresizingMask. So if you have some portion of your application, for example, that isn't updated to Auto Layout yet and you are depending on this autoresizing behavior, it should still behave the way that you expect.

And finally, by actually using the Auto Layout Engine to enforce the frame that you set, it makes it possible to use constraints to position other views relative to this one. Since you set the frame, constraints can't move this view around itself, but if we didn't tell the Layout Engine where this view needed to be, then as soon as you reference it with a constraint, we could run into problems where you'll see the size or the origin collapse to zero. And that kind of behavior can be very confusing if you are not expecting it.

So another note here is that when you are planning to position your view using constraints, you need to make sure that this flag is off. And if you are building your UI in Interface Builder, it will take good care of you and set this flag appropriately. But if you are building your UI programmatically, this actually defaults to being on. It needs to, because there's just a lot of code that allocates a view and then expects it to behave in a certain way.
So it defaults to on, and if you are building your UI programmatically and you forget to turn this off, it can cause a number of unexpected problems. Let's look at what happens if you forget.

So this is a pretty simple piece of code. We just allocate a button and configure it, and then we create two constraints that position this button ten points from the top, ten points from the left. So it's very straightforward, but if you run it, this is what you get. The window is too small, it doesn't behave the way that you expect, the button is nowhere to be seen. And you get all this spew in the console.

So there's actually a hint about the problem in this spew. You can see this is an NSAutoresizingMaskLayoutConstraint. This is the class of layout constraint that the framework will create for views that have translatesAutoresizingMaskIntoConstraints set. What actually happened here is that because we forgot to clear this flag, the framework generated constraints for the initial frame on this button. That frame was empty — the origin and the size were both zero — so it's not very useful, but the real problem came up when we then added constraints to try to position the button at 10,10. It can't be at 0,0 and 10,10 simultaneously, so the Layout Engine suddenly can't satisfy all the constraints, and things go wrong in unexpected ways.

If we go back to the code and we just add a line to clear this flag, then things get much better. We get the layout that we are expecting, the button is in the right place, and the window behaves the way we would expect.

So some things to keep in mind about translatesAutoresizingMaskIntoConstraints: you usually won't need this flag at all, but if you find that you have a view that you need to position by setting the frame directly, then this will help you out.
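[Editor's note: the slide code is not reproduced in the transcript; a sketch of the corrected version described here, with illustrative names, might look like this.]

```swift
import UIKit

let container = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 100))
let button = UIButton(type: .system)
button.setTitle("Press Me", for: .normal)

// The line that is easy to forget: without it, the framework generates
// NSAutoresizingMaskLayoutConstraints for the button's initial (zero)
// frame, which then conflict with the anchor constraints below.
button.translatesAutoresizingMaskIntoConstraints = false

container.addSubview(button)
NSLayoutConstraint.activate([
    button.topAnchor.constraint(equalTo: container.topAnchor, constant: 10),
    button.leadingAnchor.constraint(equalTo: container.leadingAnchor, constant: 10)
])
```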
And again, if you are planning to position things with constraints, you need to make sure that this flag is off if you are not using Interface Builder.

So next I'd like to talk about constraint creation. We can do that most easily, I think, just by looking at the code we just had up on the screen, specifically the piece at the end, where we are building these constraints. This is the same constraint factory method that we've had since the beginning of Auto Layout, and it's perfectly effective, but it can be a little bit awkward to use. The code is pretty verbose, and it's a little bit difficult to read. What we are really trying to express here is just that we want to position the button ten points from the top and ten points from the left. But in order to understand that, you need to read through this code pretty carefully and kind of put the pieces together.

So in the new release of OS X and iOS, we are introducing a new, more concise syntax for creating constraints. Here is what it looks like. This syntax works using objects called layout anchors. Thanks. I am glad you like them. [Laughter]

A layout anchor represents a particular attribute of a particular view, and anchor objects expose a variety of factory methods for creating different forms of constraints. So in this case we see we are constraining the top anchor to be the same as the top anchor of the view plus ten. If you are still working in Objective-C, they are available there as well, and the difference is even more striking: we go from nearly seven lines down to just two.

So this new syntax still conforms to all our naming conventions, but it reads a lot more like an expression and, I think, makes it a lot easier to see the intent of the code. All valid forms of constraints can be created using this syntax, and you'll actually even get compiler errors for many of the invalid forms of constraints. At the moment, you only get the errors in Objective-C, but they will be coming to Swift as well.
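[Editor's note: the on-screen comparison is not in the transcript; a sketch of the two styles, using the button example from earlier with illustrative names, might look like this.]

```swift
import UIKit

let view = UIView()
let button = UIButton(type: .system)
button.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(button)

// The long-standing factory method: verbose, intent buried in arguments.
let oldStyle = NSLayoutConstraint(item: button, attribute: .top,
                                  relatedBy: .equal,
                                  toItem: view, attribute: .top,
                                  multiplier: 1.0, constant: 10.0)

// The layout anchor syntax: reads like the expression it represents.
let newStyle = button.topAnchor.constraint(equalTo: view.topAnchor,
                                           constant: 10.0)

// Activate only one of the two — they express the same constraint.
NSLayoutConstraint.activate([newStyle])
_ = oldStyle  // shown only for comparison
```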
For example, it doesn't make sense to say that the leading edge of a view should be 100, because there's no context in which to interpret that 100. So you get an error that this method isn't available on a location anchor. Similarly, it doesn't make sense to say the leading edge of your view is the same as the width of a different view. Locations and sizes are fundamentally incompatible types in Auto Layout, so you get an incompatible pointer type error. Previously, these things were still errors, but they would only show up at runtime, so I think making them compile-time errors will help us all get our constraints right the first time, as well as write more readable, more maintainable code.

So next I'd like to talk about constraining negative space. There are a few different kinds of layouts that come up from time to time where it's not immediately obvious how to achieve them. Here's a couple of examples. In the first case here, the goal is to make sure that the space between these buttons remains the same when the window is resized. And in the bottom, we have an image and a label, and we want to center them as a group rather than center each piece of the content individually.

So it turns out that the solution to these layout problems is the same, and that's to use dummy views. We actually allocate empty views, and we constrain them to fill the spaces between the buttons. Once we have views in these spots, we can use an equal-width constraint to make sure that their size remains the same as the window is resized. And in the bottom case, we can do the same thing: we use an empty view, and we constrain it to the edges of the image and the label, and then we can place a centering constraint on that empty view rather than on any of the content views themselves. So this works, and it's how we've traditionally solved these layout problems, but it's a little bit of an obscure trick, right?
And it's also inefficient, especially on iOS, where every view has a layer associated with it. And so in the new release, we are exposing a new public class for layout guides. A layout guide simply represents a rectangle in the Layout Engine. They're very easy to use. All you need to do is allocate them and then add them to an owning view, and then you can constrain them just like you can a view. They expose anchor objects, so they work with the new constraint creation syntax, but you can also just pass them to the existing constraint factory methods, so they will work with the Visual Format Language and things like that.

We are converting existing layout guides to use these internally, and here is a good example of that. UIView, you may notice, doesn't actually expose layout anchors for the margin attributes. Instead, UIView has a new layout margins guide. This layout guide just represents the area of the view inside the margins. And so if you need to constrain something to the margins, it's easiest to just go through this layout guide.

So layout guides don't really enable any fundamentally new behavior. You can do all of these things today using views. But they let you solve these kinds of problems in a much more lightweight manner, and also without cluttering your view hierarchy with views that don't actually need to draw.

So next I'd like to invite Kasia back on stage to talk to you about some debugging strategies for problems that come up with Auto Layout.

KASIA WAWER: Hello. I saw some of you this morning, I think. My name is Kasia. I am on the iOS Keyboards Team, and I am here to talk to you about debugging your layout: what you should do when something goes wrong.

Those of you who have used Auto Layout in the past -- which I hope is most of you -- have probably run into something like this: you design a UI, and it's beautiful, and you're trying to implement it in your code, and you put in all your constraints carefully, and you adjust things.
And you hit build and run, and this happens: totally the wrong thing, and in the debugger, you see something like this. That's a lot of text; it can be a little intimidating. But it's actually a really useful log. And this happens when you hit an unsatisfiable constraints error. The engine has looked at the set of constraints you've given it and decided that it can't actually solve your layout, because something is conflicting with something else, so it needs to break one of your constraints in order to solve your view. And so it throws this error to tell you what it did, and then you need to go and dig in and find that extra competing constraint.

So let's try reading this log a little bit. Here's the view we just saw and the log we got. We've removed some stuff from the top to make it fit on the screen. But the first place to start is by looking at the bottom. The last thing you see is the constraint that was actually broken. This is not necessarily the constraint that's causing the problem, but the one the engine had to break in order to solve your layout, so it's a really good place to start. You start by checking translatesAutoresizingMaskIntoConstraints on that view. As you saw with Jesse's example, that will also show up in the log, but it's usually a good thing to make sure you've checked it first.

In this case, we have an aspect ratio constraint on Saturn that was broken, so let's highlight that higher up in the log; it will show up in the log itself. The next thing to do is to find the other constraints affecting that view that show up in the log. In this case, we next see a leading-to-superview constraint and a trailing-to-superview constraint, and one to the top, and then one to the label view underneath it. And all of these are fine; none of these are directly conflicting. So the next thing to look at is the views it's tied to -- in this case, the label.
So this label has the same constraint that ties it to the bottom of Saturn, and the next constraint it has is one that ties it to the top of the superview. And this is a problem, because Saturn is supposed to be more than 100 points tall, and this constraint won't allow it to be that way. You'll notice that the constraint next to the label there tells you exactly what the constraint looks like, in something very similar to the Visual Format Language that you may have used for creating your constraints in the past. So we see that it's 100 points from the top of the superview, and again, since Saturn needs to be more than that, the engine had to break one of the constraints in order to solve your layout. So it's actually not that difficult to read.

Now, I have made it a little bit easier here, because you are probably used to seeing constraint logs that look more like this, where there's just a bunch of memory addresses and class names, and there's nothing really to tell you what's what unless you have text in your view. It's much easier if it looks something like this. In order to achieve that, all you need to do is add identifiers to your constraints. And there are a couple of easy ways to do that.

If you are using explicit constraints, it's just a property. I suggest naming the identifier the same thing as you are naming your constraint, just so it's easy to find later if you need to dig it out of your code. But you can name it anything you want, so go forth and do so.

If you are using the Visual Format Language, you get an array back, not a single constraint, so you have to loop through that array and set the identifier on every constraint. You can set the same identifier on every constraint in the array, and that's generally a good idea. If you try to pick out the individual constraints there and set identifiers on them, and you change something in that array later, the ordering is going to change and you are going to have to go back and change your identifier order as well.
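[Editor's note: the two ways of setting identifiers just described might be sketched like this; the view and identifier names are illustrative.]

```swift
import UIKit

let container = UIView()
let saturnView = UIImageView()
let label = UILabel()
saturnView.translatesAutoresizingMaskIntoConstraints = false
label.translatesAutoresizingMaskIntoConstraints = false
container.addSubview(saturnView)
container.addSubview(label)

// Explicit constraint: the identifier is just a property.
let saturnAspect = saturnView.widthAnchor.constraint(equalTo: saturnView.heightAnchor)
saturnAspect.identifier = "saturnAspect"
saturnAspect.isActive = true

// Visual Format Language returns an array, so loop over it and
// give the whole batch one name.
let vertical = NSLayoutConstraint.constraints(
    withVisualFormat: "V:|-[saturn]-[label]-|",
    options: [], metrics: nil,
    views: ["saturn": saturnView, "label": label])
for constraint in vertical {
    constraint.identifier = "planetVertical"
}
NSLayoutConstraint.activate(vertical)
```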
Plus, once you see that phrase in your log, you know exactly where you are going to look for the problem, so you don't really need to have each specific constraint laid out there. Finally, Interface Builder just has an identifier property right there in the constraint inspector, so that's super easy.

So let's talk about understanding this log and making it even easier to know what's going on. First, you can set identifiers on our new layout guides -- and that's just a flat-out identifier property, nothing special about it -- which makes it super easy, again, to debug layouts that are using layout guides. And since they're awesome, I'm pretty sure all of you are going to be using them at some point.

Add identifiers as you go. If you try to take a very complex layout now and throw all of your identifiers in, you can do it; it will take time, but it's worth it, because you will be able to read this log later. But if you are adding them as you go, that's a lot less work down the road, because you can't really predict when you are going to run into this problem, and you want to have them there when you need them.

Finally, if you have an unsatisfiable constraints log that just has too much information -- you have a very complex layout, there are hundreds of lines there -- you can take that view at the bottom especially, and other views that you are looking at, and actually view the constraints affecting them one at a time in the debugger. On iOS, it's constraintsAffectingLayoutForAxis, and on OS X, it's constraintsAffectingLayoutForOrientation. That will tell you just the constraints that are affecting that view in one axis or another. So let's look at how that works here.
I have set a two-finger double-tap gesture to break here, just so I don't have to use memory addresses; I can use the names I've set up. So we are going to break into the debugger here and ask it to print out Saturn's constraintsAffectingLayoutForAxis for its vertical axis. Vertical is 1, horizontal is 0. If you use the wrong one, you only have one other option, so it's pretty easy to get back to it.

So here we see the view has a layout guide at the top, and that's fine; those are the view's constraints. One of the other benefits to naming your constraints and your views is that you know pretty quickly which ones were set up outside of your code and which ones were set up by you. So our vertical layout for Saturn tells us that it's tied to the top layout guide. That's great. It also tells us that Saturn is tied to the label underneath it. And then in another constraint that affects Saturn but isn't directly related to Saturn, we see the constraint that's tying the label to the top of the view. Since it doesn't mention Saturn anywhere, that's a pretty good clue that it's the wrong one -- also that whole "Saturn is supposed to be more than a hundred points" thing, which I happen to know since I wrote this code.

Now that I've got this nice handy label here, I can simply search for it, find the constraint that I made -- and there we go, I have tied it to the top anchor by a hundred points -- find out where it's activated, and get rid of it. Build again. That's much better. That's exactly what I was looking for. And so it's really easy to drill down into those problems, even when you have a very complex layout, if you are using identifiers properly.

So where are we with this log? Start from the bottom; finding the constraint that was broken gives you a lot of information about why it was broken. Check translatesAutoresizingMaskIntoConstraints first. It is the culprit in many situations.
Set identifiers on both your constraints and your views, and finally, if the log is just too complex, go for constraintsAffectingLayoutForAxis: to narrow it down. Okay. So that's what happens when the engine looks at your constraints and knows that it can't get a solution. There is no solution that fits all of your constraints. But what happens if it has more than one solution? That's when we hit ambiguity. This is our final mystery, so congratulations for making it this far. We don't have that much farther to go. Let's see. So, ambiguous layouts. A couple of possible causes of ambiguous layouts are simply too few constraints. If you are doing a planets layout like this and you know that you want Saturn in the middle but your horizontal constraints aren't set up properly, the view may have to guess where to put it. Again, reminder, it should be in the middle. The engine put it off to the side. The other solution it has for it is off to the other side, and it never actually lands in the middle. And that can be a problem because if it doesn't know where to put it, it's just going to put it somewhere. That's not what you want. You need to go back and add constraints on that view. Another cause of ambiguous layouts is conflicting priorities. We talked about this a little bit in Part 1. At the bottom of this view that we just fixed here, you will see that it can actually end up in a situation where the text field and button are kind of the wrong proportions. I want it to look more like this, where the text field is taking up most of the view. And the reason that it ended up that way is that the engine had to make a choice between those two layouts for me. And it did that because the content hugging priorities on these two views are the same. They are both 250, and I am not telling the engine any other way to size those views horizontally.
So it had to kind of take a guess, and it guessed that maybe I wanted the text view to hug its content closely and go ahead and let the label spread out, but I really wanted it to do this and hug the button content closely. So -- this is going to be a repeat for a couple of you -- if the content hugging priority on the button is set lower than that on the text field, the edges of the view are able to stretch away from its content because it's less important that it hug its content closely. Or you are telling the engine it's less important that that view hug its content closely. Meanwhile, if you set it above the content hugging priority of the text view, the button now hugs it closely and the text field stretches. This is consistently how the engine will solve the layout in this particular circumstance. So if you set these priorities properly, you can resolve some of these ambiguous layouts that you run into. We have a couple of tools for resolving ambiguity. Interface Builder is a big help here. It has these little icons on the edge, and if you click on those, it will tell you what's going on with your layout that it doesn't understand. And in many cases, it will tell you that you are missing constraints and what it can't solve for. I need constraints for the Y position or height. When you build and run an app that has this issue, you are going to end up with these views somewhere in the Y-axis, where the engine kind of decided it had to go because it didn't have any information from you. That makes it really easy. When you are not using Interface Builder, or when you get past that and you are still running into this, we have a really cool method called autolayoutTrace, and you just use that in the debugger on a view, and it will just tell you in all caps that you have a view that has an ambiguous layout, and you can then go about diagnosing the problem with that view.
We also have the view debugger in the debug menu, which will allow you to view the frames and the alignment rects that the layout engine has calculated for your view. It will look something like this. It will just draw it right on the view that it's looking at right now. Here you can see that Saturn, who is supposed to have an alignment rect that comes very close to its content, is stretched very wide. And that's problematic because that's not what I wanted. But over here, its actual size is correct; it's just pinned to the side, which is, again, not what I wanted, but I know it's not a size problem, it's a tied-to-where sort of problem. The other solution is to look in the view debugger; right next to all of your breakpoint navigation, you have this little button here. When you press that, it pulls up your layout in a way that you can click through and view things like constraints, just the wireframes for the views; you can see stuff in 3D. It gives you a really nice view of all your layers, and that can really help with a lot of view debugging scenarios. Finally, we have another debugger method, because I really like using LLDB, called exerciseAmbiguityInLayout. If you have a view that you know is ambiguous and you run this on that view in the debugger and continue, the Layout Engine will show you the other solution it had, which is a great clue when you are trying to figure out where the problem is coming from. And I will show you how that looks now. Okay. So we are back to this view that we just saw a bit ago, and when it's in its regular layout, Saturn is flying off to the side, so I have, again, my debug gesture that I can use just because I need an easy way to break. The first thing I can do is see what's going on with the whole view by running auto layout trace on it, and you see that everything is okay, except for Saturn, which has an ambiguous layout. That's where I am going to concentrate my efforts.
There's also a Boolean that will tell you view by view whether it has an ambiguous layout. And that's just hasAmbiguousLayout -- pretty easy to remember, and in Saturn's case, it's true. And if you have that happening, you can also exercise ambiguity in layout and continue, and it will show you the other solution it had for that issue. So let's run that again. And -- oops. Wrong thing to run again. And now it's over to the side again. So in this case, it looks like the layout guides I put on either side of Saturn aren't working for some reason, so I am going to go up and find my constraints that are tying my planets to their specific areas, and they are doing that by having a ratio of layout guides on either side in order to determine where it is. I've got one for Saturn right here, and it should have equal layout guides on either side, which should put it pretty much exactly in the middle. The problem appears to be that I did not actually add this to the constraints array I am activating for that view. And so if I add it, things go much better. Saturn stays put exactly where I wanted it to be. And that's really all that's involved in diagnosing ambiguity. It's pretty easy once you start kind of working with it a little bit. So, debugging your layout. The most important thing is to think carefully about the information that your engine needs. This morning we talked a lot about giving the Layout Engine all of its information so that it can calculate your layout properly in various adaptive scenarios. If you can kind of pull that all together, you are going to run into a lot fewer problems as opposed to just trying to make a couple of constraints here and there and throwing it in. But if you do run into problems, use the logs if constraints are unsatisfiable. It gives you a lot of really good information. In order to make good use of those logs, add identifiers for all those constraints and views. You also want to regularly check for ambiguity. 
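The ambiguity checks walked through in this session, again as they'd be typed at an LLDB prompt (the view name is hypothetical; note that _autolayoutTrace is a private debugging helper, fine at the debugger but not something to call from shipping code):

```
# Dump the layout status of the whole view tree:
(lldb) po [[UIWindow keyWindow] _autolayoutTrace]

# Check a single view, then ask the engine to show its alternate solution:
(lldb) p (BOOL)[self.saturnView hasAmbiguousLayout]
(lldb) expr [self.saturnView exerciseAmbiguityInLayout]
(lldb) continue
```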
You won't necessarily see it on the first run. This is a good thing to put in something like a unit test and just run it on all your views regularly, so if you run into an ambiguous layout, you can diagnose it before you see it. And then we have several tools to help you resolve these issues. Interface Builder is helpful, as always, along with the view debugger and our various methods in LLDB. All right. So we have come a very long way today. If you were with us this morning, you saw us talking about maintainable layouts with stack views and changing constraints properly, working with view sizing and making self-sizing views, and then using priorities and alignment to make sure that your layout stays exactly the way you want it to in various adaptive environments. And then just now, we talked about the layout cycle in depth, interacting with legacy layout, creating constraints with layout anchors rather than the old methods, and constraining negative space with layout guides. And we just now talked about unsatisfiable constraints and resolving ambiguity, which are two problems that people tend to run into regularly when they are using Auto Layout. So those are all of our mysteries. I hope we laid them all out for you pretty well here. If you haven't seen Part 1, I recommend going back and viewing it because there was a lot of information there that can be very useful to you, and the video should be up at some point in the near future, or you can travel back in time to 11:00. Either way. So to get more information on all of this, we, of course, have documentation up on the website, and we do have that planets code, which is more for the first session but we also used here. The planets code that you see here is not broken. It actually works properly. You will have to break it if you want to play around with some of the debugging methods you saw here. We have some related sessions.
So again, Part 1 was earlier today, and we have a couple of sessions tomorrow that you might be interested in. We are also going to head down to the lab after this, and we will be there to answer questions that you have about Auto Layout and Interface Builder. And that's what we've got for you today. Have a good one.
https://developer.apple.com/videos/play/wwdc2015/219/?time=326
.. image::
   :target:
   :alt: Github action tests

.. image::
   :target:
   :alt: Code style: black

.. image::
   :width: 200px
   :align: right

The goal of this project is to provide Python language support as a scripting module for the Godot <>_ game engine. By order of simplicity:

- asset library website <>_.
- release page <>_ if you want to only download one specific platform build

.. image::
   :align: center

example:

.. code-block:: python

    # Explicit is better than implicit
    from godot import exposed, export, Vector2, Node2D, ResourceLoader

    WEAPON_RES = ResourceLoader.load("res://weapon.tscn")
    SPEED = Vector2(10, 10)

    @exposed
    class Player(Node2D):
        """
        This is the file's main class which will be made available to Godot.
        This class must inherit from `godot.Node` or any of its children
        (e.g. `godot.KinematicBody`). Because Godot scripts only accept file
        paths, you can't have two `exposed` classes in the same file.
        """

        # Exposed class can define some attributes as export(<type>) to achieve
        # a similar goal to GDScript's `export` keyword
        name = export(str)

        # Can export property as well
        @export(int)
        @property
        def age(self):
            return self._age

        @age.setter
        def age(self, value):
            self._age = value

        # All methods are exposed to Godot
        def talk(self, msg):
            print(f"I'm saying {msg}")

        def _ready(self):
            # Don't confuse `__init__` with Godot's `_ready`!
            self.weapon = WEAPON_RES.instance()
            self._age = 42
            # Of course you can access property & methods defined in the parent
            name = self.get_name()
            print(f"{name} position x={self.position.x}, y={self.position.y}")

        def _process(self, delta):
            self.position += SPEED * delta

        ...


    class Helper:
        """
        Other classes are considered helpers and cannot be called from outside
        Python. However they can be imported from another python module.
        """
        ...

To build the project from source, first check out the repo or download the latest tarball. Godot-Python requires Python >= 3.7 and a C compiler. The Godot GDNative headers are provided as a git submodule:

.. code-block:: bash

    $ git submodule init
    $ git submodule update

Alternatively, you can get them from github <>_. On a fresh Ubuntu install, you will need to install these:

.. code-block:: bash

    $ apt install python3 python3-pip python3-venv build-essential

On top of that, building the CPython interpreter requires the development headers of its extension modules <>_ (for instance, if you lack the sqlite dev headers, your Godot-Python build won't contain the sqlite3 python module). The simplest way is to uncomment the main deb-src in /etc/apt/sources.list:

.. code-block:: bash

    deb-src artful main

and instruct apt to install the needed packages:

.. code-block:: bash

    $ apt update
    $ apt build-dep python3.6

See the Python Developer's Guide <>_ for instructions on additional platforms. With MacOS, you will need XCode installed, along with the command line tools:

.. code-block:: bash

    $ xcode-select --install

If you are using CPython as your backend, you will need these. To install with Homebrew:

.. code-block:: bash

    $ brew install python3 openssl zlib

You will also need virtualenv for your python. On Windows, install VisualStudio and Python3, then submit a PR to improve this paragraph ;-)

Godot-Python's build system is heavily based on Python (mainly Scons, Cython and Jinja2). Hence we have to create a Python virtual env to install all those dependencies without clashing with your global Python configuration:

.. code-block:: bash

    $ cd <godot-python-dir>
    godot-python$ python3 -m venv venv

Now you need to activate the virtual env; this is something you should do every time you want to use it. For Linux/MacOS:

.. code-block:: bash

    godot-python$ . ./venv/bin/activate

For Windows:

.. code-block:: bash

    godot-python$ ./venv/bin/activate.bat

Finally we can install dependencies:

.. code-block:: bash

    godot-python(venv)$ pip install -r requirements.txt

For Linux:

.. code-block:: bash

    godot-python(venv)$ scons platform=x11-64 release

For Windows:

.. code-block:: bash

    godot-python(venv)$ scons platform=windows-64 release

For MacOS:

.. code-block:: bash

    godot-python(venv)$ scons platform=osx-64 CC=clang release

Valid platforms are x11-64, x11-32, windows-64, windows-32 and osx-64. Check the Travis or Appveyor links above to see the current status of your platform. This command will check out the CPython repo, move to a pinned commit and build CPython from source. It will then generate pythonscript/godot/bindings.pyx (Godot api bindings) from GDNative's api.json and compile it. This part is long and really memory demanding, so be patient ;-) When hacking godot-python you can heavily speed up this step by passing sample=true to scons in order to build only a small subset of the bindings. Eventually the rest of the source will be compiled and a zip build archive will be available in the build directory.

.. code-block:: bash

    godot-python(venv)$ scons platform=<platform> test

This will run the pytests defined in tests/bindings inside the Godot environment. If not present, it will download a precompiled Godot binary (defined in SConstruct and platform specific SCSub files) and set the correct library path for the GDNative wrapper.

.. code-block:: bash

    godot-python(venv)$ scons platform=<platform> example

This will run the converted pong example in examples/pong inside the Godot environment. If not present, it will download a precompiled Godot binary (defined in SConstruct) and set the correct library path for the GDNative wrapper. If you have a pre-existing version of godot, you can instruct the build script to use that static library and binary for building and tests:

.. code-block:: bash

    godot-python(venv)$ scons platform=x11-64 godot_binary=../godot/bin/godot.x11.opt.64

You can check out all the build options in this file <>_.

How can I export my project?

Currently, godot-python does not support automatic export, which means that the python environment is not copied to the release when using Godot's export menu.
A release can be created manually:

First, export the project in .zip format.

Second, extract the .zip in a directory. For the sake of example let's say the directory is called :code:`godotpythonproject`.

Third, copy the correct Python environment into this folder (if it hasn't been automatically included in the export). Inside your project folder, you will need to find :code:`/addons/pythonscript/x11-64`, replacing "x11-64" with the correct target system you are deploying to. Copy the entire folder for your system, placing it at the same relative position, e.g. :code:`godotpythonproject/addons/pythonscript/x11-64` if your unzipped directory was "godotpythonproject". Legally speaking you should also copy LICENSE.txt from the pythonscript folder. (The lazy option at this point is to simply copy the entire addons folder from your project to your unzipped directory.)

Fourth, place a godot release into the directory. The Godot export menu has probably downloaded an appropriate release already, or you can go to Editor -> Manage Export Templates inside Godot to download fresh ones. These are stored in a location which depends on your operating system. For example, on Windows they may be found at :code:`%APPDATA%\Godot\templates\`; in Linux or OSX it is :code:`~/.godot/templates/`. Copy the file matching your export. (It may matter whether you selected "Export With Debug" when creating the .zip file; choose the debug or release version accordingly.)

Running the Godot release should now properly execute your release. However, if you were developing on a different Python environment (say, the one held in the osx-64 folder) than you include with the release (for example the windows-64 folder), and you make any alterations to that environment, such as installing Python packages, these will not carry over; take care to produce a suitable Python environment for the target platform. See also this issue <>_.

How can I use Python packages in my project?
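The manual copy in steps two and three can be sketched in Python. The directory layout and the platform name follow the README's example, but the helper itself is not part of godot-python, just an illustration:

```python
import shutil
from pathlib import Path


def copy_python_env(project_dir, unzipped_dir, platform="x11-64"):
    """Copy the godot-python runtime for one platform from the project
    into an unzipped export, preserving the relative addon path.

    `platform` must match the target system (x11-64, windows-64, osx-64, ...).
    """
    src = Path(project_dir) / "addons" / "pythonscript" / platform
    dst = Path(unzipped_dir) / "addons" / "pythonscript" / platform
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dst)
    # Ship the license alongside the runtime, as the README asks.
    license_src = Path(project_dir) / "addons" / "pythonscript" / "LICENSE.txt"
    if license_src.exists():
        shutil.copy(license_src, dst.parent / "LICENSE.txt")
    return dst
```

Run it once per target platform after unzipping the export; it deliberately does nothing clever, mirroring the by-hand instructions above.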
In essence, godot-python installs a python interpreter inside your project which can then be distributed as part of the final game. Python packages you want to use need to be installed for that interpreter and of course included in the final release. This can be accomplished by using pip to install packages; however, pip is not provided, so it must be installed too.

First, locate the correct python interpreter. This will be inside your project at :code:`addons\pythonscript\windows-64\python.exe` for 64-bit Windows, :code:`addons/pythonscript/osx-64/bin/python3` for OSX, etc. Then install pip by running:

.. code-block::

    addons\pythonscript\windows-64\python.exe -m ensurepip

(substituting the correct python for your system). Any other method of installing pip at this location is fine too, and this only needs to be done once. Afterward, any desired packages can be installed by running

.. code-block::

    addons\pythonscript\windows-64\python.exe -m pip install numpy

again, substituting the correct python executable, and replacing numpy with whatever packages you desire. The package can now be imported in your Python code as normal. Note that this will only install packages onto the target platform (here, windows-64), so when exporting the project to a different platform, care must be taken to provide all the necessary libraries.

How can I debug my project with PyCharm?

This can be done using "Attach to Local Process", but first you have to change the Godot binary filename to include :code:`python`, for example :code:`Godot_v3.0.2-stable_win64.exe` to :code:`python_Godot_v3.0.2-stable_win64.exe`. For a more detailed guide and explanation see this external blog post <>_.

How can I autoload a python script without attaching it to a Node?
In your :code:`project.godot` file, add the following section::

    [autoload]
    autoloadpy="*res://autoload.py"

In addition to the usual::

    [gdnative]
    singletons=[ "res://pythonscript.gdnlib" ]

You can use any name for the python file and the class name :code:`autoloadpy`. Then :code:`autoload.py` can expose a Node::

    from godot import exposed, export
    from godot.bindings import *

    @exposed
    class autoload(Node):

        def hi(self, to):
            return 'Hello %s from Python !' % to

which can then be called from your gdscript code as an attribute of the :code:`autoloadpy` class (use the name defined in your :code:`project.godot`)::

    print(autoloadpy.hi('root'))

How can I efficiently access PoolArrays?

:code:`PoolIntArray`, :code:`PoolFloatArray`, :code:`PoolVector3Array` and the other pool arrays can't be accessed directly because they must be locked in memory first. Use the :code:`arr.raw_access()` context manager to lock it::

    arr = PoolIntArray()  # create the array
    arr.resize(10000)
    with arr.raw_access() as ptr:
        for i in range(10000):
            ptr[i] = i  # this is fast

    with arr.raw_access() as ptr:
        for i in range(10000):
            assert ptr[i] == i  # so is this

Keep in mind that great performance comes with great responsibility: there is no boundary check, so you may end up with memory corruption if you don't take care ;-) See the godot-python issue <>_.
https://awesomeopensource.com/project/touilleMan/godot-python
Vim is the Swiss-army knife of text editing. It's not enough that it has a feature and command for almost every use case and user: it will also let you customize it to add whatever specific things you think it's missing. In this tutorial we're going to see how to use two of those features: multiple windows, and multiple Vim registers. This tutorial will assume you're already familiar with the basics of Vim, so I'd suggest you check out this article if you don't know where to start. Whenever we're editing a file, here are two things we may want to do:

Edit two different files from the same terminal.

Edit two different parts of the same file at the same time.

Luckily for us, Vim allows us to do this without having to open a new tab on our terminal. Let's say we're editing a Python script, and there are two different functions in it, f1 and f2, that we'd like to edit at the same time. Vim makes this easy:

Press Ctrl + w (Cmd + w on a Mac).

Press v (for a vertical split) or s (for a horizontal one).

This will create a vertical split, or a horizontal one. This means the file will be opened in a different 'window' besides the current one (inside the same terminal), with the cursor on the same line. Vertical and horizontal are the two possible 'splits' of the screen, so Ctrl + w v will open the file in two windows side by side, whereas Ctrl + w s will open one below the other. To close a window after you're done using it, just press :q to exit it. To open a different file in a new Vim window, we can type :vsplit <filename> to effectively edit many different files from a single terminal. Now you have two different windows pointing to the same file. Whenever you edit something in one, it will automatically update the other (unlike Sublime or other editors, where you have to press reload).
To quickly navigate to the function we want to edit, we'll just:

Press / to enter search mode.

Type the name of the function.

Press enter.

This is equivalent to doing Ctrl + F in other editors (in Vim, Ctrl + F acts like page-down). If we want to move the cursor between different windows, we have to press Ctrl + W and then any motion, like h or l to move horizontally. For instance if we had opened, say, 5 windows vertically, we could press Ctrl + W, 3l to move three windows to the right. This seems like a good enough place to introduce the .vimrc file. This small text file, under the ~ directory, stores all your configurations for Vim, and will allow you to customize the text editor even more. Configuring the text editor is as simple as editing this file, and you can do a lot of things with it. You can edit it using Vim, for maximum recursiveness. Usually one of the most common things you will do with it is mapping a key or a key combination to a different sequence you find awkward. In this case, pressing Ctrl + W and then h, j, k or l can be a bit uncomfortable if you're doing it often enough. What I find many people do (and I do myself) is adding the following lines to their .vimrc file:

nnoremap <C-h> <C-w>h
nnoremap <C-j> <C-w>j
nnoremap <C-k> <C-w>k
nnoremap <C-l> <C-w>l

Lines starting with nnoremap will map a key combination (for instance, Ctrl + h) to a different one (for instance, Ctrl + W, h). I find pressing Ctrl + h, j, k, l a lot more fluid than the whole sequence, and most times you're only switching between a few windows anyway. Of course if you'd prefer a different shortcut, go at it! That's the beauty of this editor. Here's another awesome thing we can do with Vim: Is there some particular string you find yourself typing a lot? For instance, it may be that you start many of your Python programs with

import pandas as pd
import seaborn as sns
import tensorflow as tf

And don't want to ever type that again.
Or maybe a whole snippet, like

for(int i = 0; i < N; i++){}

For all of these things and probably many others, it would be very convenient if we could have a lot of different buffers from which to 'paste' strings. IDEs usually automate a bit of this, but they often won't let us customize which snippets can be added and will only allow us to use some predefined ones. Vim, however, has almost forty different buffers we can use for this exact purpose. Each of them is like a new clipboard. I already said before how you can 'yank' and 'put': Vim's equivalent of copy and paste, except yanked strings won't go into the clipboard, but to a reserved memory buffer inaccessible outside of Vim instead. I lied a bit there. It's not a reserved buffer, but many. To decide which buffer to yank things into, or put things from, press " and then a key in [a-z], [A-Z] or [0-9]. For instance, if we are editing a text and find ourselves typing the words "particular", "especially" and "text editor" very often, we can just do:

Press v to enter visual mode and select the word by moving the cursor (for this example, let's use "particular").

Press "p to select the p register and then y to yank into it.

Whenever we have to type the word again, just press "p to say which register to put from, and then p.

If we wish to see the registers' contents, we can run :reg to list them all, or :reg followed by a space separated list of register names (e.g., :reg a b) to see each one individually. Lowercase and uppercase letters point to the same Vim registers. However, using lowercase names will overwrite the register's content, whereas uppercase letters tell Vim to concatenate new strings to the current content without deleting it. Numbered registers can be used in the same way as the others, but will also store the last 10 deleted bits of text, as a stack. When you delete something, it goes into register 1, and it's moved into register 2 when you delete the next thing, and so on until register 9 is filled.
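Those steps, written out as the keystrokes you would actually type (register p and the word "particular" are just the running example; viw is one quick way to select the word under the cursor):

```
viw      select the word under the cursor (visual mode)
"py      yank the selection into register p
"pp      put the contents of register p
:reg p   inspect register p
:reg     list every register
```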
Interestingly, Vim registers persist even after we close Vim. This means eventually, you may add some very convenient snippets or lines to a register, and use them in different text editing sessions. For this reason, it is worth paying attention to patterns in what we usually write, and seeing how much we can optimize our editing process with these tools. If our register’s contents can be interpreted as a Vim command, pressing @r (where r is the register’s name) will run them. To erase a Vim register’s contents, press qrq if r is your register’s name. Using Vim can save us a lot of time, and that’s especially true if we take the time to customize it to our needs. Multiple Vim registers can make text editing a lot quicker, and replace the ‘snippets’ functionality from IDEs, allowing us a use many clipboards. Multiple Vim windows allow us to edit closely connected pieces of code in a single file, or taking a look at different files from a single project and seeing how well they fit together. Combining these two tools can produce a huge boost in our productivity when editing files. However, the registers bit should be used with caution, since it may make us a bit more dependent on our environment. That’s all for today, folks. I hope you’ve found this article useful, or at least entertaining. If there’s anything you think I should’ve mentioned and didn’t, or any part of this that could be improved or is plain wrong, please let me know. Feedback from my readers is one of my favorite parts of writing, and it allows me to learn more about these topics. Have you already used these features? What are some useful things to store in our Vim registers? What other features should I cover in a different article? Let me know in the comments! Follow me on Medium and Twitter to keep reading more Tutorials, tips and tricks for developers and data scientists. If you liked this article, share it with a programmer friend! 
https://hackernoon.com/vim-squeezing-the-text-editors-juice-with-more-features-7481c218d01e
As part of the ipyrad.analysis toolkit we've created convenience functions for easily performing exploratory principal component analysis (PCA) on your data. PCA is a very standard dimension-reduction technique that is often used to get a general sense of how samples are related to one another. PCA has the advantage over STRUCTURE type analyses in that it is very fast. Similar to STRUCTURE, PCA can be used to produce simple and intuitive plots that can be used to guide downstream analysis. There are three very nice papers that talk about the application and interpretation of PCA in the context of population genetics:

Reich et al (2008) Principal component analysis of genetic data

Novembre & Stephens (2008) Interpreting principal component analyses of spatial population genetic variation

McVean (2009) A genealogical interpretation of principal components analysis

conda install -c conda-forge scikit-allel

%matplotlib inline
import ipyrad
import ipyrad.analysis as ipa    ## ipyrad analysis toolkit

## Load your assembly
data = ipyrad.load_json("/tmp/ipyrad-test/rad.json")

## Create the pca object
pca = ipa.pca(data)

## Bam!
pca.plot()

loading Assembly: rad from saved path: /tmp/ipyrad-test/rad.json
Using default cmap: Spectral
<matplotlib.axes._subplots.AxesSubplot at 0x7fb6fdf82050>

## Path to the input vcf, in this case it's just the vcf from our ipyrad pedicularis assembly
vcffile = "/home/isaac/ipyrad/test-data/pedicularis/ped_outfiles/ped.vcf"

Here we can just load the vcf file directly into the pca analysis module. Then ask for the samples in samples_vcforder, which is the order in which they are written in the vcf.

pca = ipa.pca(vcffile)
pca.samples_vcforder

[... u'41478_cyathophylloides_SRR1754722' u'41954_cyathophylloides_SRR1754721']

Now construct the default plot, which shows all samples and PCs 1 and 2. By default all samples are assigned to one population, so everything will be the same color.
pca.plot()

Using default cmap: Spectral
<matplotlib.axes._subplots.AxesSubplot at 0x7fe0beb3a650>

In the tl;dr example the assembly of our simulated data had included a pop_assign_file so the pca() was smart enough to find this and color samples accordingly. In some cases you might not have used a pops file, so it's also possible to specify population assignments in a dictionary. The format of the dictionary should have populations as keys and lists of samples as values. Sample names need to be identical to the names in the vcf file, which we can verify with the samples_vcforder property of the pca object.

pops_dict = {
    "superba": ["29154_superba_SRR1754715"],
    "thamno": ["30556_thamno_SRR1754720", "33413_thamno_SRR1754728"],
    "cyathophylla": ["30686_cyathophylla_SRR1754730"],
    "przewalskii": ["32082_przewalskii_SRR1754729", "33588_przewalskii_SRR1754727"],
    "rex": ["35236_rex_SRR1754731", "35855_rex_SRR1754726", "38362_rex_SRR1754725",
            "39618_rex_SRR1754723", "40578_rex_SRR1754724"],
    "cyathophylloides": ["41478_cyathophylloides_SRR1754722", "41954_cyathophylloides_SRR1754721"]
}

pca = ipa.pca(vcffile, pops_dict)
pca.plot()

Using default cmap: Spectral
<matplotlib.axes._subplots.AxesSubplot at 0x7fe092fbbe50>

This is just much nicer looking now, and it's also much more straightforward to interpret. In PC analysis, it's common for "bad" samples to dominate several of the first PCs, and thus "pop out" in a degenerate-looking way. Bad samples of this kind can often be attributed to poor sequence quality or sample misidentification. Samples with lots of missing data tend to pop way out on their own, causing distortion in the signal in the PCs. Normally it's best to evaluate the quality of the sample, and if it can be seen to be of poor quality, to remove it and replot the PCA. The Pedicularis dataset is actually very nice and clean, but for the sake of demonstration let's imagine the cyathophylloides samples are "bad samples".
We can see that the cyathophylloides samples have particularly high values of PC2, so we can target them for removal in this way.

## pca.pcs is a property of the pca object that is populated after the plot() function is called.
## It contains the first 10 PCs for each sample. We construct a 'mask' based on the value of PC2,
## which here is the '1' in the first line of code (numpy arrays are 0-indexed and it's typical
## for PCs to be 1-indexed)
mask = pca.pcs.values[:, 1] > 500
print(mask)

## You can see here that the mask is a list of booleans that is the same length as the number of samples.
## We can use this list to print out the names of just the samples of interest
print(pca.samples_vcforder[mask])

[False False False False False False False False False False False  True  True]
[u'41478_cyathophylloides_SRR1754722' u'41954_cyathophylloides_SRR1754721']

## We can then use this list of "bad" samples in a call to pca.remove_samples
## and then replot the new pca
pca.remove_samples(pca.samples_vcforder[mask])

## Let's prove that they're gone now
## and do the plot
pca.plot()

Using default cmap: Spectral
<matplotlib.axes._subplots.AxesSubplot at 0x7fe0f8c25410>

pca.pcs

## Let's reload the full dataset so we have all the samples
pca = ipa.pca(vcffile, pops_dict)
pca.plot(pcs=[3,4])

Using default cmap: Spectral
<matplotlib.axes._subplots.AxesSubplot at 0x7fa3d05fd190>

import matplotlib.pyplot as plt
fig = plt.figure(figsize=(12, 5))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
pca.plot(ax=ax1, pcs=[1, 2])
pca.plot(ax=ax2, pcs=[3, 4])

Using default cmap: Spectral
Using default cmap: Spectral
<matplotlib.axes._subplots.AxesSubplot at 0x7fa3d0a04290>

It's nice to see PCs 1-4 here, but it's redundant to plot the legend twice, so we can just turn off the legend on the first plot.
fig = plt.figure(figsize=(12, 5))
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
pca.plot(ax=ax1, pcs=[1, 2], legend=False)
pca.plot(ax=ax2, pcs=[3, 4])

Using default cmap: Spectral
Using default cmap: Spectral
<matplotlib.axes._subplots.AxesSubplot at 0x7fa3d0a8db10>

You might notice the default color scheme is unobtrusive, but perhaps not to your liking. There are two ways of modifying the color scheme: one simple, and one more complicated but with extremely fine-grained control over colors. Colors for the more complicated method can be specified according to python color conventions. I find this visual page of python color names useful.

## Here's the simple way: just pass in a matplotlib cmap, or even better, the name of a cmap
pca.plot(cmap="jet")

<matplotlib.axes._subplots.AxesSubplot at 0x7fa3d099ac50>

## Here's the harder way that gives you uber control. Pass in a dictionary mapping populations to colors.
my_colors = {
    "rex":"aliceblue",
    "thamno":"crimson",
    "przewalskii":"deeppink",
    "cyathophylloides":"fuchsia",
    "cyathophylla":"goldenrod",
    "superba":"black"
}
pca.plot(cdict=my_colors)

<matplotlib.axes._subplots.AxesSubplot at 0x7fa3d0646b50>

RAD-seq datasets are often characterized by moderate to high levels of missing data. While there may be many thousands or tens of thousands of loci recovered overall, the number of loci recovered in all sequenced samples is often quite small. The distribution of depth of coverage per locus is a complicated function of the size of the genome of the focal organism, the restriction enzyme(s) used, the size selection tolerances, and the sequencing effort. Both model-based (STRUCTURE and the like) and model-free (PCA/sNMF/etc) genetic "clustering" methods are sensitive to missing data. Light to moderate missing data that is distributed randomly among samples is often not enough to seriously impact the results. These are, after all, only exploratory methods.
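The missingness bookkeeping that follows is easiest to reason about if you picture the genotypes as a samples x sites matrix. Here is a toy pandas sketch (not the pca module's internals; -1 stands in for an uncalled genotype) showing both per-sample missing counts and the number of snps retained under a per-snp missingness threshold:

```python
import pandas as pd

# Toy genotype matrix: rows are samples, columns are snps, -1 marks a missing call
geno = pd.DataFrame(
    [[0, 1, -1, 2],
     [0, -1, -1, 1],
     [1, 1, 0, 2]],
    index=["1A_0", "1B_0", "1C_0"],
)

# Missing calls per sample, analogous in spirit to get_missing_per_sample()
missing_per_sample = (geno == -1).sum(axis=1)
print(missing_per_sample)

# Retained snps if we allow at most N samples missing per snp,
# analogous in spirit to one cell of the missingness() table
max_missing = 1
retained = ((geno == -1).sum(axis=0) <= max_missing).sum()
print(retained)          # 3
```

Raising max_missing retains more snps at the cost of more missingness per snp; dropping the worst sample (here 1B_0) shrinks the matrix but cleans up the remaining columns, which is exactly the trade-off the missingness table lays out.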
However, if missing data is biased in some way then it can distort the number of inferred populations and/or the relationships among them. For example, if several unrelated samples recover relatively few loci, for whatever reason (mistakes during library prep, failed sequencing, etc), clustering methods may erroneously identify this as true "similarity" with respect to the rest of the samples, and create spurious clusters. In the end, all these methods must do something with sites that are uncalled in some samples. Some methods adopt a strategy of silently assigning missing sites the "Reference" base. Others assign missing sites the average base. There are several ways of dealing with this.

The pca module has various functions for inspecting missing data. The simplest is the get_missing_per_sample() function, which does exactly what it says: it displays the number of ungenotyped snps per sample in the final data matrix. Here you can see that since we are using simulated data the amount of missing data is very low, but in real data these numbers will be considerable.

pca.get_missing_per_sample()

1A_0    2
1B_0    2
1C_0    1
1D_0    4
2E_0    0
2F_0    0
2G_0    0
2H_0    1
3I_0    2
3J_0    2
3K_0    1
3L_0    2
dtype: int64

This is useful, but it doesn't give us a clear direction for how to go about dealing with the missingness. One way to reduce missing data is to reduce the tolerance for samples ungenotyped at a snp. The other way is to remove samples with very poor sequencing. To this end, the .missingness() function will show a table of the number of retained snps under each of these conditions.

pca.missingness()

Here the columns indicate progressive removal of the samples with the fewest number of snps. So "Full" indicates retention of all samples. "2E_0" shows the # of snps after removing this sample (as it has the most missing data). "2F_0" shows the # of snps after removing both this sample & "2E_0". And so on.
You can see that as we move from left to right the total number of snps goes down, but so does the amount of missingness. Rows indicate thresholds for the number of allowed missing samples per snp. The "0" row shows the condition of allowing 0 missing samples, so this is the complete data matrix. The "1" row shows the # of snps retained if you allow 1 missing sample. And so on.

pca.trim_missing(1)
pca.missingness()

You can see that this also has the effect of reducing the amount of missingness per sample.

pca.get_missing_per_sample()

1A_0    0
1B_0    0
1C_0    0
1D_0    2
2E_0    0
2F_0    0
2G_0    0
2H_0    1
3I_0    1
3J_0    1
3K_0    0
3L_0    1
dtype: int64

NB: This operation is destructive of the data inside the pca object. It doesn't do anything to your data on file, though, so if you want to rewind you can just reload your vcf file.

## Voila. Back to the full dataset.
pca = ipa.pca(data)
pca.missingness()

McVean (2009) recommends filling missing sites with the average genotype of the population, so that's what we're doing here. For each population, we determine the average genotype at any site with missing data, and then fill in the missing sites with this average. In this case, if the average "genotype" is "./.", then this is what gets filled in, so essentially any site missing more than 50% of the data isn't getting imputed. If two genotypes occur with equal frequency then the first one is simply picked as the average.

pca.fill_missing()
pca.missingness()

Comparing this missingness matrix with the previous one, you can see that some snps are indeed being recovered (though not many, again because of the clean simulated data). You can also examine the effect of imputation on the amount of missingness per sample. You can see it doesn't have as drastic an effect as trimming, but it does have some effect, plus you are retaining more data!
pca.get_missing_per_sample()

1A_0    2
1B_0    2
1C_0    1
1D_0    2
2E_0    0
2F_0    0
2G_0    0
2H_0    0
3I_0    1
3J_0    1
3K_0    1
3L_0    1
dtype: int64

Unequal sampling of populations can potentially distort PC analysis (see for example Bradburd et al 2016). Model-based ancestry analysis suffers a similar limitation (Puechmaille 2016). McVean (2009) recommends downsampling larger populations, but nobody likes throwing away data. Weighted PCA has been proposed, but has not been adopted by the community.

{x:len(y) for x, y in pca.pops.items()}

{'cyathophylla': 1,
 'cyathophylloides': 2,
 'przewalskii': 2,
 'rex': 5,
 'superba': 1,
 'thamno': 2}

prettier_labels = {
    "32082_przewalskii":"przewalskii",
    "33588_przewalskii":"przewalskii",
    "41478_cyathophylloides":"cyathophylloides",
    "41954_cyathophylloides":"cyathophylloides",
    "29154_superba":"superba",
    "30686_cyathophylla":"cyathophylla",
    "33413_thamno":"thamno",
    "30556_thamno":"thamno",
    "35236_rex":"rex",
    "40578_rex":"rex",
    "35855_rex":"rex",
    "39618_rex":"rex",
    "38362_rex":"rex"
}
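To make the unequal-sampling point concrete: the downsampling remedy just trims every population to the size of the smallest one before running the PCA. A hedged sketch with a hypothetical helper (not part of ipyrad; seeded so the random draw is repeatable):

```python
import random

# Toy population assignments with very unequal sizes
pops_dict = {
    "rex": ["rex_1", "rex_2", "rex_3", "rex_4", "rex_5"],
    "thamno": ["thamno_1", "thamno_2"],
    "superba": ["superba_1"],
}

def downsample(pops, seed=42):
    """Randomly trim every population to the size of the smallest one."""
    n = min(len(samples) for samples in pops.values())
    rng = random.Random(seed)
    return {pop: rng.sample(samples, n) for pop, samples in pops.items()}

balanced = downsample(pops_dict)
print({pop: len(s) for pop, s in balanced.items()})
# {'rex': 1, 'thamno': 1, 'superba': 1}
```

The samples dropped from the larger populations are simply discarded, which is exactly why nobody likes this fix; it does, however, prevent the largest population from dominating the leading PCs.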
Correctly emit vins instructions that are safe in 32-bit mode.

It would seem that all we need is to change this condition and the one below to not emit PPCISD::VECINSERT for 64-bit element widths (v2i64, v2f64). Why do we need to disable this lowering on 32-bit targets altogether?

It looks like all of the pattern matches for VINS* in PPCInstrPrefix.td hardcode i64, e.g.:

def : Pat<(v16i8 (PPCvecinsertelt v16i8:$vDi, i32:$rA, i64:$rB)),
          (VINSBLX $vDi, InsertEltShift.Sub32Left0, $rA)>;
...
foreach i = [0, 1] in
  def : Pat<(v2i64 (PPCvecinsertelt v2i64:$vDi, i64:$rA, (i64 i))),
            (VINSD $vDi, !mul(i, 8), $rA)>;
}

So we can't emit the VECINSERT safely in 32-bit mode due to this.

Sure, so those won't match. You might be able to change i64 to iPTR (I'm not sure about that) or provide patterns with i32 instead of i64.

Thanks for the suggestion and help. It's much better to emit these when we can in 32-bit mode. I preferred to split the 32/64-bit implementations, mainly to keep the 64-bit path as is. I noticed that there were no other predicate definitions in this file; they can be moved if that's preferred.

I don't really understand how we are custom lowering this on 32-bit targets now, since you've added this. Where are the PPC-specific insert nodes coming from?

This is fine. Nit: line too long (here and elsewhere).

We don't need this now, I don't think.

Forgot to remove it in the previous diff. Thanks.

I noticed that, but there are also several lines in IsISA3_1, HasVSX, IsLittleEndian and IsISA3_1, HasVSX, IsBigEndian, IsPPC64 that were too long as well. I thought it might be ok, but I fixed it now for this case.

LGTM other than the nit that can be addressed on the commit. The operand should be lined up with the first operand of the node it belongs to:

def : Pat<(v4f32 (PPCvecinsertelt v4f32:$vDi, (f32 (load iaddr:$rA)), i32:$rB)),

Similarly on other similar lines.
We haven't done super well with keeping the lines in target description files to 80 columns, but we should still try to do so in new code.