text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Parsl is a native Python library that allows you to write functions that execute in parallel and tie them together with dependencies to create workflows. Parsl wraps Python functions as "Apps" using the @python_app decorator, and Apps that call external applications using the @bash_app decorator. Decorated functions can run in parallel when all their inputs are ready.
For more comprehensive documentation and examples, please refer our documentation.
import parsl import os from parsl.app.app import python_app, bash_app from parsl.configs.local_threads import config #parsl.set_stream_logger() # <-- log everything to stdout print(parsl.__version__)
Parsl separates code and execution. To do so, it relies on a configuration model to describe the pool of resources to be used for execution (e.g., clusters, clouds, threads).
We'll come back to configuration later in this tutorial. For now, we configure this example to use a local pool of threads to facilitate local parallel execution.
parsl.load(config)
In Parsl an
app is a piece of code that can be asynchronously executed on an execution resource (e.g., cloud, cluster, or local PC). Parsl provides support for pure Python apps (
python_app) and also command-line apps executed via Bash (
bash_app).
As a first example, let's define a simple Python function that returns the string 'Hello World!'. This function is made into a Parsl App using the @python_app decorator.
@python_app def hello (): return 'Hello World!' print(hello().result())
As can be seen above, Apps wrap standard Python function calls. As such, they can be passed arbitrary arguments and return standard Python objects.
@python_app def multiply(a, b): return a * b print(multiply(5, 9).result())
As Parsl apps are potentially executed remotely, they must contain all required dependencies in the function body. For example, if an app requires the time library, it should import that library within the function.
@python_app def slow_hello (): import time time.sleep(5) return 'Hello World!' print(slow_hello().result())
Parsl’s Bash app allows you to wrap execution of external applications from the command-line as you would in a Bash shell. It can also be used to execute Bash scripts directly. To define a Bash app, the wrapped Python function must return the command-line string to be executed.
As a first example of a Bash app, let's use the Linux command
echo to return the string 'Hello World!'. This function is made into a Bash App using the @bash_app decorator.
Note that the
echo command will print 'Hello World!' to stdout. In order to use this output, we need to tell Parsl to capture stdout. This is done by specifying the
stdout keyword argument in the app function. The same approach can be used to capture
stderr.
@bash_app def echo_hello(stdout='echo-hello.stdout', stderr='echo-hello.stderr'): return 'echo "Hello World!"' echo_hello().result() with open('echo-hello.stdout', 'r') as f: print(f.read())
Parsl Apps can exchange data as Python objects (as shown above) or in the form of files. In order to enforce dataflow semantics, Parsl must track the data that is passed into and out of an App. To make Parsl aware of these dependencies, the app function includes
inputs and
outputs keyword arguments.
We first create three test files named hello1.txt, hello2.txt, and hello3.txt containing the text "hello 1", "hello 2", and "hello 3".
for i in range(3): with open(os.path.join(os.getcwd(), 'hello-{}.txt'.format(i)), 'w') as f: f.write('hello {}\n'.format(i))
We then write an App that will concentate these files using
cat. We pass in the list of hello files (
inputs) and concatenate the text into a new file named all_hellos.txt (
outputs). As we describe below we use Parsl File objects to abstract file locations in the event the
cat app is executed on a different computer.
from parsl.data_provider.files import File @bash_app def cat(inputs=[], outputs=[]): return 'cat {} > {}'.format(" ".join([i.filepath for i in inputs]), outputs[0]) concat = cat(inputs=[File(os.path.join(os.getcwd(), 'hello-0.txt')), File(os.path.join(os.getcwd(), 'hello-1.txt')), File(os.path.join(os.getcwd(), 'hello-2.txt'))], outputs=[File(os.path.join(os.getcwd(), 'all_hellos.txt'))]) # Open the concatenated file with open(concat.outputs[0].result(), 'r') as f: print(f.read())
When a normal Python function is invoked, the Python interpreter waits for the function to complete execution and returns the results. In case of long running functions, it may not be desirable to wait for completion. Instead, it is preferable that functions are executed asynchronously. Parsl provides such asynchronous behavior by returning a future in lieu of results. A future is essentially an object that allows Parsl to track the status of an asynchronous task so that it may, in the future, be interrogated to find the status, results, exceptions, etc.
Parsl provides two types of futures: AppFutures and DataFutures. While related, these two types of futures enable subtly different workflow patterns, as we will see.
@python_app def hello (): import time time.sleep(5) return 'Hello World!' app_future = hello() # Check if the app_future is resolved, which it won't be print('Done: {}'.format(app_future.done())) # Print the result of the app_future. Note: this # call will block and wait for the future to resolve print('Result: {}'.format(app_future.result())) print('Done: {}'.format(app_future.done()))
While AppFutures represent the execution of an asynchronous app, DataFutures represent the files it produces. Parsl’s dataflow model, in which data flows from one app to another via files, requires such a construct to enable apps to validate creation of required files and to subsequently resolve dependencies when input files are created. When invoking an app, Parsl requires that a list of output files be specified (using the
outputs keyword argument). A DataFuture for each file is returned by the app when it is executed. Throughout execution of the app, Parsl will monitor these files to 1) ensure they are created, and 2) pass them to any dependent apps.
# App that echos an input message to an output file @bash_app def slowecho(message, outputs=[]): return 'sleep 5; echo %s &> %s' % (message, outputs[0]) # Call slowecho specifying the output file hello = slowecho('Hello World!', outputs=[File(os.path.join(os.getcwd(), 'hello-world.txt'))]) # The AppFuture's outputs attribute is a list of DataFutures print(hello.outputs) # Also check the AppFuture print('Done: {}'.format(hello.done())) # Print the contents of the output DataFuture when complete with open(hello.outputs[0].result(), 'r') as f: print(f.read()) # Now that this is complete, check the DataFutures again, and the Appfuture print(hello.outputs) print('Done: {}'.format(hello.done()))
Parsl is designed to enable implementation of dataflow patterns. These patterns enable workflows, in which the data passed between apps manages the flow of execution, to be defined. Dataflow programming models are popular as they can cleanly express, via implicit parallelism, the concurrency needed by many applications in a simple and intuitive way.
Parsl’s file abstraction abstracts access to a file irrespective of where the app is executed. When referencing a Parsl file in an app (by calling
filepath), Parsl translates the path to the file's location relative to the file system on which the app is executing.
from parsl.data_provider.files import File # App that copies the contents of a file to another file @bash_app def copy(inputs=[], outputs=[]): return 'cat %s &> %s' % (inputs[0], outputs[0]) # Create a test file open(os.path.join(os.getcwd(), 'cat-in.txt'), 'w').write('Hello World!\n') # Create Parsl file objects parsl_infile = File(os.path.join(os.getcwd(), 'cat-in.txt'),) parsl_outfile = File(os.path.join(os.getcwd(), 'cat-out.txt'),) # Call the copy app with the Parsl file copy_future = copy(inputs=[parsl_infile], outputs=[parsl_outfile]) # Read what was redirected to the output file with open(copy_future.outputs[0].result(), 'r') as f: print(f.read())
The Parsl file abstraction can also represent remotely accessible files. In this case, you can instantiate a file object using the remote location of the file. Parsl will implictly stage the file to the execution environment before executing any dependent apps. Parsl will also translate the location of the file into a local file path so that any dependent apps can access the file in the same way as a local file. Parsl supports files that are accessible via Globus, FTP, and HTTP.
Here we create a File object using a publicly accessible file with random numbers. We can pass this file to the
sort_numbers app in the same way we would a local file.
from parsl.data_provider.files import File @python_app def sort_numbers(inputs=[]): with open(inputs[0].filepath, 'r') as f: strs = [n.strip() for n in f.readlines()] strs.sort() return strs unsorted_file = File('') f = sort_numbers(inputs=[unsorted_file]) print (f.result())
Now that we understand all the building blocks, we can create workflows with Parsl. Unlike other workflow systems, Parsl creates implicit workflows based on the passing of control or data between Apps. The flexibility of this model allows for the creation of a wide range of workflows from sequential through to complex nested, parallel workflows. As we will see below, a range of workflows can be created by passing AppFutures and DataFutures between Apps.
Simple sequential or procedural workflows can be created by passing an AppFuture from one task to another. The following example shows one such workflow, which first generates a random number and then writes it to a file.
# App that generates a random number @python_app def generate(limit): from random import randint return randint(1,limit) # App that writes a variable to a file @bash_app def save(variable, outputs=[]): return 'echo %s &> %s' % (variable, outputs[0]) # Generate a random number between 1 and 10 random = generate(10) print('Random number: %s' % random.result()) # Save the random number to a file saved = save(random, outputs=[File(os.path.join(os.getcwd(), 'sequential-output.txt'))]) # Print the output file with open(saved.outputs[0].result(), 'r') as f: print('File contents: %s' % f.read())
The most common way that Parsl Apps are executed in parallel is via looping. The following example shows how a simple loop can be used to create many random numbers in parallel. Note that this takes 5 seconds to run (the time needed for the longest delay), not the 15 seconds that would be needed if these generate functions were called and returned in sequence.
# App that generates a random number after a delay @python_app def generate(limit,delay): from random import randint import time time.sleep(delay) return randint(1,limit) # Generate 5 random numbers between 1 and 10 rand_nums = [] for i in range(5): rand_nums.append(generate(10,i)) # Wait for all apps to finish and collect the results outputs = [i.result() for i in rand_nums] # Print results print(outputs)
Parallel dataflows can be developed by passing data between Apps. In this example we create a set of files, each with a random number, we then concatenate these files into a single file and compute the sum of all numbers in that file. The calls to the first App each create a file, and the second App reads these files and creates a new one. The final App returns the sum as a Python integer.
# App that generates a semi-random number between 0 and 32,767 @bash_app def generate(outputs=[]): return "echo $(( RANDOM )) &> {}".format(outputs[0]) # App that concatenates input files into a single output file @bash_app def concat(inputs=[], outputs=[]): return "cat {0} > {1}".format(" ".join([i.filepath for i in inputs]), outputs[0]) # App that calculates the sum of values in a list of input files @python_app def total(inputs=[]): total = 0 with open(inputs[0], 'r') as f: for l in f: total += int(l) return total # Create 5 files with semi-random numbers in parallel output_files = [] for i in range (5): output_files.append(generate(outputs=[File(os.path.join(os.getcwd(), 'random-{}.txt'.format(i)))])) # Concatenate the files into a single file cc = concat(inputs=[i.outputs[0] for i in output_files], outputs=[File(os.path.join(os.getcwd(), 'all.txt'))]) # Calculate the sum of the random numbers total = total(inputs=[cc.outputs[0]]) print (total.result())
Many scientific applications use the Monte Carlo method to compute results.
One example is calculating $\pi$ by randomly placing points in a box and using the ratio that are placed inside the circle.
Specifically, if a circle with radius $r$ is inscribed inside a square with side length $2r$, the area of the circle is $\pi r^2$ and the area of the square is $(2r)^2$.
Thus, if $N$ uniformly-distributed random points are dropped within the square, approximately $N\pi/4$ will be inside the circle.
Each call to the function
pi() is executed independently and in parallel. The
avg_three() app is used to compute the average of the futures that were returned from the
pi() calls.
The dependency chain looks like this:
App Calls pi() pi() pi() \ | / Futures a b c \ | / App Call avg_points() | Future avg_pi
# App that estimates pi by placing points in a box @python_app def pi(num_points): from random import random inside = 0 for i in range(num_points): x, y = random(), random() # Drop a random point in the box. if x**2 + y**2 < 1: # Count points within the circle. inside += 1 return (inside*4 / num_points) # App that computes the mean of three values @python_app def mean(a, b, c): return (a + b + c) / 3 # Estimate three values for pi a, b, c = pi(10**6), pi(10**6), pi(10**6) # Compute the mean of the three estimates mean_pi = mean(a, b, c) # Print the results print("a: {:.5f} b: {:.5f} c: {:.5f}".format(a.result(), b.result(), c.result())) print("Average: {:.5f}".format(mean_pi.result()))
Parsl is designed to support arbitrary execution providers (e.g., PCs, clusters, supercomputers, clouds) and execution models (e.g., threads, pilot jobs). The configuration used to run the script tells Parsl how to execute apps on the desired environment. Parsl provides a high level abstraction, called a Block, for describing the resource configuration for a particular app or script.
Information about the different execution providers and executors supported is included in the Parsl documentation.
So far in this tutorial, we've used a built-in configuration for running with threads. Below, we will illustrate how to create configs for different environments.
As we saw above, we can configure Parsl to execute apps on a local thread pool. This is a good way to parallelize execution on a local PC. The configuration object defines the executors that will be used as well as other options such as authentication method (e.g., if using SSH), checkpoint files, and executor specific configuration. In the case of threads we define the maximum number of threads to be used.
from parsl.config import Config from parsl.executors.threads import ThreadPoolExecutor local_threads = Config( executors=[ ThreadPoolExecutor( max_threads=8, label='local_threads' ) ] )
We can also define a configuration that uses Parsl's HighThroughputExecutor. In this mode, pilot jobs are used to manage the submission. Parsl creates an interchange to manage execution and deploys one or more workers to execute tasks. The following config will instantiate this infrastructure locally, it can be extended to include a remote provider (e.g., the Cori or Theta supercomputers) for execution., cores_per_worker=1, provider=LocalProvider( channel=LocalChannel(), init_blocks=1, max_blocks=1, ), ) ], strategy=None, )
parsl.clear() #parsl.load(local_threads) parsl.load(local_htex)
@bash_app def generate(outputs=[]): return "echo $(( RANDOM )) &> {}".format(outputs[0]) @bash_app def concat(inputs=[], outputs=[]): return "cat {0} > {1}".format(" ".join(i.filepath for i in inputs), outputs[0]) @python_app def total(inputs=[]): total = 0 with open(inputs[0], 'r') as f: for l in f: total += int(l) return total # Create 5 files with semi-random numbers output_files = [] for i in range (5): output_files.append(generate(outputs=[File(os.path.join(os.getcwd(), 'random-%s.txt' % i))])) # Concatenate the files into a single file cc = concat(inputs=[i.outputs[0] for i in output_files], outputs=[File(os.path.join(os.getcwd(), 'combined.txt'))]) # Calculate the sum of the random numbers total = total(inputs=[cc.outputs[0]]) print (total.result()) | https://nbviewer.jupyter.org/github/Parsl/parsl-tutorial/blob/revert-29-symlink_to_make_parsl_point_to_right_tutorial/1-parsl-introduction.ipynb | CC-MAIN-2021-17 | refinedweb | 2,695 | 56.76 |
memory_profiler 0 is used much in the same way of the line_profiler: first decorate the function you would like to profile with @profile and then run the script with a special script (in this case with specific arguments to the Python interpreter).
In the following example, we create a simple function my_func that allocates lists a, b and then deletes b:
@profile def my_func(): a = [1] * (10 ** 6) b = [2] * (2 * 10 ** 7) del b return a if __name__ == '__main__': my_func()
Execute the code passing the option -m memory_profiler to the python interpreter to load the memory_profiler module and print to stdout the line-by-line analysis. If the file name was example.py, this would result in:
$ python -m memory_profiler example.py
Output will follow:
The first column represents the line number of the code that has been profiled, the second column (Mem usage) the memory usage of the Python interpreter after that line has been executed. The third column (Increment) represents the difference in memory of the current line with respect to the last one. The last column (Line Contents) prints the code that has been profiled.
Decorator
A function decorator is also available. Use as follows:
from memory_profiler import profile @profile def my_func(): a = [1] * (10 ** 6) b = [2] * (2 * 10 ** 7) del b return a
In this case the script can be run without specifying -m memory_profiler in the command line.
Executing external scripts
Setting debugger breakpoints
It is possible to set breakpoints depending on the amount of memory used. That is, you can specify a threshold and as soon as the program uses more memory than what is specified in the threshold it will stop execution and run into the pdb debugger. To use it, you will have to decorate the function as done in the previous section with @profile and then run your script with the option -m memory_profiler --pdb-mmem=X, where X is a number representing the memory threshold in MB. For example:
$ python -m memory_profiler --pdb-mmem=100 my_script.py
will run my_script.py and step into the pdb debugger as soon as the code uses more than 100 MB in the decorated function.
API
memory_profiler exposes a number of functions to be used in third-party code.
memory_usage(proc=-1, interval=.1, timeout=None) returns the memory usage over a time interval. The first argument, proc represents what should be monitored. This can either be the PID of a process (not necessarily a Python program), a string containing some python code to be evaluated or a tuple (f, args, kw) containing a function and its arguments to be evaluated as f(*args, **kw). For example,
>>> from memory_profiler import memory_usage >>> mem_usage = memory_usage(-1, interval=.2, timeout=1) >>> print(mem_usage) [7.296875, 7.296875, 7.296875, 7.296875, 7.296875]
Here I’ve told memory_profiler to get the memory consumption of the current process over a period of 1 second with a time interval of 0.2 seconds. As PID I’ve given it -1, which is a special number (PIDs are usually positive) that means current process, that is, I’m getting the memory usage of the current Python interpreter. Thus I’m getting around 7MB of memory usage from a plain python interpreter. If I try the same thing on IPython (console) I get 29MB, and if I try the same thing on the IPython notebook it scales up to 44MB.
If you’d like to get the memory consumption of a Python function, then you should specify the function and its arguments in the tuple (f, args, kw). For example:
>>> # define a simple function >>> def f(a, n=100): ... import time ... time.sleep(2) ... b = [a] * n ... time.sleep(1) ... return b ... >>> from memory_profiler import memory_usage >>> memory_usage((f, (1,), {'n' : int(1e6)}))
This will execute the code f(1, n=int(1e6)) and return the memory consumption during this execution.
IPython integration
After installing the module, if you use IPython, you can use the %mprun and %memit magics.
For IPython 0.11+, you can use the module directly as an extension, with %load_ext memory_profiler
To activate it whenever you start IPython, edit the configuration file for your IPython profile, ~/.ipython/profile_default/ipython_config.py, to register the extension like this (If you already have other extensions, just add this one to the list):
c.InteractiveShellApp.extensions = [ 'memory_profiler', ]
(If the config file doesn’t already exist, run ipython profile create in a terminal.)
It then can be used directly from IPython to obtain a line-by-line report using the %mprun magic command. In this case, you can skip the @profile decorator and instead use the -f parameter, like this. Note however that function my_func must be defined in a file (cannot have been defined interactively in the Python interpreter):
In [1] from example import my_func In [2] %mprun -f my_func my_func()
Another useful magic that we define memory_profiler memory.29.xml | https://pypi.python.org/pypi/memory_profiler/0.29 | CC-MAIN-2017-09 | refinedweb | 821 | 50.36 |
Hi Bertalan,
Bertalan Fodor wrote:
> I have a terrible time struggling with a performance issue on using
> Batik in an applet.
> When the Java Plug-in (1.5.0) is running my applet, it always try to
> download lots of non-existent classes. Accessing the server takes a lot
> of time and so does applet loading.
This is an issue with Applets there is no way to remove the server
directory from the Applet ClassPath. You really want to be able to
tell it to only look in the jar files.
> These non-existent classes are in three categories:
>
> - classes that are related to language-dependent resources, like:
I believe that most of these are fixed in CVS.
> org/apache/batik/bridge/resources/Messages.class
These are not because this is how the JDK does
internationalization.
> - classes that are tried to be instantiated in a Class.forName, or
> something like that I suppose, e.g:
> org/mozilla/javascript/optimizer/Codegen.class
This is not fixed in CVS, and it just part of how Rhino works.
> org/apache/batik/script/jacl/JaclInterpreterFactory.class
This is fixed in CVS.
> - classes that are related to the script variables included in the SVG
> file, like:
> in case of variable 'document', the classes looked for are:
This is tricker, and has to do with how Rhino 'flattens' the
Java namespace to the ecmascript namespace. You _might_ be able to
help yourself out a bit by using 'window.document'. Rather than
relying on global object properties.
> In my svg, this causes cca. 60 other web accesses.
I'm surprised it is this high, are you not declaring your global
vars?
> Do you have any idea how to overcome this problem?
HTH.
---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
For additional commands, e-mail: batik-users-help@xml.apache.org | http://mail-archives.apache.org/mod_mbox/xmlgraphics-batik-users/200501.mbox/%3C41E67068.4090409@Kodak.com%3E | CC-MAIN-2015-48 | refinedweb | 306 | 67.04 |
I encourage you to understand the BFS algorithm before continuing this article, as you'll understand the information presented here a lot better.
Essentially, what our web crawler will aim to do is load the HTML from a starting page, say and use a regular expression match to find any valid URL's on that site and recursively visit each one of those links and perform the same task ; load the HTML and find any links to other websites. Most importantly, we will also implement a mechanism for keeping track of websites we've already visited, otherwise we may end up visiting the same websites over and over again and end up in an infinite loop of the same websites! A quick diagram should help clear up any confusion:
The first factor, choosing how to keep track of visited sites, is easy to accomplish: A Map Data Structure! More specifically, for the best performance, a hash table where the keys to the table are the URL's and the values can simply be a boolean. Although you can use a BST or any other implementation of a Map if you'd like, I believe that a hash table will be the best option due to its benefits over a BST, and we don't necessarily need ordering of our data.
The second factor, reading the HTML from a website, can be easily done in C/C++ using an external library such as libcurl . Java also allows you to do this without any external library, and many other languages also allow you to extract the HTML from a webpage, so this problem can be taken care of on a language-to-language basis.
The third factor, how to recognize valid URL's from within the HTML of a page, can be done with a regular expression match. Most languages now have support for regular expressions, so this shouldn't be an issue. We'll cover the actual regex syntax for matching the URL later.
As mentioned previously, we will be crawling the web with the BFS algorithm, but as anyone whose studied graph algorithms in the past will ask, why not use Depth First Search? The answer is, you can implement the crawler with DFS as the primary searching algorithm, but this will continue to search deeply rather than fully so depending on how you want the crawling to behave, you can implement BFS to crawl the nearest websites first, or implement DFS to crawl as deep as possible
Due to the fact that the implementation of this in C++ will rely on an external library, I will present the code for the algorithm in Java, but I will have the C++ version available for download separately for those who wish to view the C++ version of the algorithm.
Before we get started, this algorithm needs a starting point; a website where it will initiate its crawl. For purposes of demonstration we will use ( yes this is a real website). Finally to the coding! We will need the necessary boilerplate code that will setup the Queue needed in BFS.
import java.io.BufferedReader; import java.io.InputStreamReader; import java.net.HttpURLConnection; import java.net.URL; import java.util.*; import java.util.regex.Matcher; import java.util.regex.Pattern; public class WebCrawler { public static void main(String[] args) { LinkedList<String> links = new LinkedList<String>(); // this will be the BFS queue HashMap<String,Boolean> hasVisited = new HashMap<String,Boolean>(); // this is the hash table for storing previously visited sites String startingString = ""; links.add(startingString); hasVisited.put(startingSite,true); // we've already been to the starting site } }
With out boiler plate code ready, all we need to discuss now is the regular expression syntax for matching valid URL's. There are plenty tutorials scattered throughout the interwebs regarding regular expressions, but in essence our syntax will be the following: "http://(\\w+\\.)*(\\w+)" which will match "http://" followed by any sequence of 1 or more alphanumeric characters, followed by a . ( the period which occurs in .com , .net, .org, etc,). This sequence can occur multiple times ( such as in the case of websites of international domains). Then followed by one or more alphanumeric quantities ("com", "org", "net", etc). If this sounded foreign to you, please visit This Regular Expressions Tutorial Website to learn the regular expression syntax.
Now with everything clear, the complete code is below:
import java.io.BufferedReader; // necessary imports import java.io.InputStreamReader; import java.io.InputStream; import java.net.HttpURLConnection; import java.net.URL; import java.util.*; import java.util.regex.Matcher; import java.util.regex.Pattern; public class WebCrawler { public static void main(String[] args) { LinkedList<String> links = new LinkedList<String>(); HashMap<String,Boolean> hasVisited = new HashMap<String,Boolean>(); String startingString = ""; links.add(startingString); hasVisited.put(startingString,true); String regex = "http://(\\w+\\.)*(\\w+)"; while(links.size() > 0) { try { String nxt = links.poll(); // next website to process System.out.println(nxt); // print the site URL url = new URL(nxt); // construct a URL object HttpURLConnection http = null; http =(HttpURLConnection)url.openConnection(); http.connect(); // open and connect to the website Pattern pat = Pattern.compile(regex); // regex matching BufferedReader br = null; InputStreamReader isr = null; InputStream is = http.getInputStream(); isr = new InputStreamReader(is); br = new BufferedReader(isr); // sets up String data = ""; while((data = br.readLine()) != null) { Matcher mat = pat.matcher(data); while(mat.find()) { String w = mat.group(); if(hasVisited.get(w) == null) { links.add(w); hasVisited.put(w,true); } } } }catch(Exception e) { } } } }Note: The reason we have a try/catch block is because some websites may be offline or not valid sites anymore, in which case, the Java Runtime will throw an exception which would otherwise halt program execution. In this case we silently "ignore" the exception by choosing to do nothing.
The code presented here is pretty straightforward: we have a queue with which BFS feeds off of, a hash table storing the sites we've already been to, and we start the crawl on a sample website. The BFS algorithm will continue to poll websites off the queue and process them (in this case we print them to the console but you can store them to disk, or any other useful task) and read their HTML, match any URL's with a regular expression,then check to make sure we haven't already been to that site, en-queue it and mark as visited.
Although the example code is in Java, this can be ported over to other languages.
Here is some output produced by the code above after roughly 1 second of execution: you enjoyed this article, please Subscribe to get updated with my new blog posts as well as newsletters.
Thank you for your time :)
Very informative, indeed. Thanks for taking the time to explain this! | http://blog.cs4u.us/2013/11/build-web-crawler-using-bfs.html | CC-MAIN-2020-05 | refinedweb | 1,125 | 61.56 |
Retrieve and Query XML Data
This topic describes the query options that you have to specify to query XML data. It also describes the parts of XML instances that are not preserved when they are stored in databases.
SQL Server preserves the content of the XML instance, but does not preserve aspects of the XML instance that are not considered to be significant in the XML data model. This means that a retrieved XML instance might not be identical to the instance that was stored in the server, but will contain the same information.
XML Declaration
The XML declaration in an instance is not preserved when the instance is stored in the database. For example:
The result is <doc/>.
The XML declaration, such as <?xml version='1.0'?>, is not preserved when storing XML data in an xml data type instance. This is by design. The XML declaration (<?xml ... ?>) and its attributes (version/encoding/stand-alone) are lost after data is converted to type xml. The XML declaration is treated as a directive to the XML parser. The XML data is stored internally as ucs-2. All other PIs in the XML instance are preserved.
Order of Attributes
The order of attributes in an XML instance is not preserved. When you query the XML instance stored in the xml type column, the order of attributes in the resulting XML may be different from the original XML instance.
Quotation Marks Around Attribute Values
Single quotation marks and double quotations marks around attribute values are not preserved. The attribute values are stored in the database as a name and value pair. The quotation marks are not stored. When an XQuery is executed against an XML instance, the resulting XML is serialized with double quotation marks around the attribute values.
Both queries return = <root a="1" />.
Namespace Prefixes
Namespace prefixes are not preserved. When you specify XQuery against an xml type column, the serialized XML result may return different namespace prefixes.
The namespace prefix in the result may be different. For example: | http://technet.microsoft.com/en-us/library/bb510442.aspx | CC-MAIN-2014-15 | refinedweb | 339 | 65.93 |
#include <wx/log.h>
wxLog class defines the interface for the log targets used by wxWidgets logging functions as explained in the Logging Overview.
The only situations when you need to directly use this class is when you want to derive your own log target because the existing ones don't satisfy your needs.
Otherwise, it is completely hidden behind the wxLogXXX() functions and you may not even know about its existence.
stderrwhen
wxUSE_GUI= 0.
Add the mask to the list of allowed masks for wxLogTrace().
Removes all trace masks previously set with AddTraceMask().
Disables time stamping of the log messages.
Notice that the current time stamp is only used by the default log formatter and custom formatters may ignore calls to this function.
Called to log a new record.
Any log message created by wxLogXXX() functions is passed to this method of the active log target. The default implementation prepends the timestamp and, for some log levels (e.g. error and warning), the corresponding prefix to msg and passes it to DoLogTextAtLevel().
You may override this method to implement custom formatting of the log messages or to implement custom filtering of log messages (e.g. you could discard all log messages coming from the given source file).
Called to log the specified string.
A simple implementation might just send the string to
stdout or
stderr or save it in a file (of course, the already existing wxLogStderr can be used for this).
The base class version of this function asserts so it must be overridden if you don't override DoLogRecord() or DoLogTextAtLevel().
Called to log the specified string at given level.
The base class versions logs debug and trace messages on the system default debug output channel and passes all the other messages to DoLogText().
Instructs wxLog to not create new log targets on the fly if there is none currently (see GetActiveTarget()).
(Almost) for internal use only: it is supposed to be called by the application shutdown code (where you don't want the log target to be automatically created anymore).
Note that this function also calls ClearTraceMasks().
Globally enable or disable logging.
Calling this function with false argument disables all log messages for the current thread.
Show all pending output and clear the buffer.
Some of wxLog implementations, most notably the standard wxLogGui class, buffer the messages (for example, to avoid showing the user a zillion of modal message boxes one after another – which would be really annoying). This function shows them all and clears the buffer contents. If the buffer is already empty, nothing happens.
If you override this method in a derived class, call the base class version first, before doing anything else.
Reimplemented in wxLogGui, and wxLogBuffer.
Flushes the current log target if any, does nothing if there is none.
When this method is called from the main thread context, it also flushes any previously buffered messages logged by the other threads. When it is called from the other threads it simply calls Flush() on the currently active log target, so it mostly makes sense to do this if a thread has its own logger set with SetThreadActiveTarget().
Returns the pointer to the active log target (may be NULL).
Notice that if SetActiveTarget() hadn't been previously explicitly called, this function will by default try to create a log target by calling wxAppTraits::CreateLogTarget() which may be overridden in a user-defined traits class to change the default behaviour. You may also call DontCreateOnDemand() to disable this behaviour.
When this function is called from threads other than main one, auto-creation doesn't happen. But if the thread has a thread-specific log target previously set by SetThreadActiveTarget(), it is returned instead of the global one. Otherwise, the global log target is returned.
Returns the current log level limit.
All messages at levels strictly greater than the value returned by this function are not logged at all.
Returns whether the repetition counting mode is enabled.
Returns the current timestamp format string.
Notice that the current time stamp is only used by the default log formatter and custom formatters may ignore this format.
Returns the currently allowed list of string trace masks.
Returns whether the verbose mode is currently active.
Returns true if the mask is one of allowed masks for wxLogTrace().
See also: AddTraceMask(), RemoveTraceMask()
Returns true if logging is enabled at all now.
Returns true if logging at this level is enabled for the current thread.
This function only returns true if logging is globally enabled and if level is less than or equal to the maximal log level enabled for the given component.
Log the given record.
This function should only be called from the DoLog() implementations in the derived classes if they need to call DoLogRecord() on another log object (they can, of course, just use wxLog::DoLogRecord() call syntax to call it on the object itself). It should not be used for logging new messages which can be only sent to the currently active logger using OnLog() which also checks if the logging (for this level) is enabled while this method just directly calls DoLog().
Example of use of this class from wxLogChain:
Remove the mask from the list of allowed masks for wxLogTrace().
Sets the specified log target as the active one.
Returns the pointer to the previous active log target (may be NULL). To suppress logging use a new instance of wxLogNull not NULL. If the active log target is set to NULL a new default log target will be created when logging occurs.
Sets the log level for the given component.
For example, to disable all but error messages from wxWidgets network classes you may use
SetLogLevel() may be used to set the global log level.
Specifies that log messages with level greater (numerically) than logLevel should be ignored and not sent to the active log target.
Enables logging mode in which a log message is logged once, and in case exactly the same message successively repeats one or more times, only the number of repetitions is logged.
Sets a thread-specific log target.
The log target passed to this function will be used for all messages logged by the current thread using the usual wxLog functions. This shouldn't be called from the main thread which never uses a thread- specific log target but can be used for the other threads to handle thread logging completely separately; instead of buffering thread log messages in the main thread logger.
Notice that unlike for SetActiveTarget(), wxWidgets does not destroy the thread-specific log targets when the thread terminates so doing this is your responsibility.
This method is only available if
wxUSE_THREADS is 1, i.e. wxWidgets was compiled with threads support.
Sets the timestamp format prepended by the default log targets to all messages.
The string may contain any normal characters as well as % prefixed format specifiers, see strftime() manual for details. Passing an empty string to this function disables message time stamping.
Notice that the current time stamp is only used by the default log formatter and custom formatters may ignore this format. You can also define a custom wxLogFormatter to customize the time stamp handling beyond changing its format.
Activates or deactivates verbose mode in which the verbose messages are logged as the normal ones instead of being silently dropped.
The verbose messages are the trace messages which are not disabled in the release mode and are generated by wxLogVerbose().
Suspends the logging until Resume() is called.
Note that the latter must be called the same number of times as the former to undo it, i.e. if you call Suspend() twice you must call Resume() twice as well.
Note that suspending the logging means that the log sink won't be flushed periodically, it doesn't have any effect if the current log target does the logging immediately without waiting for Flush() to be called (the standard GUI log target only shows the log dialog when it is flushed, so Suspend() works as expected with it). | https://docs.wxwidgets.org/trunk/classwx_log.html | CC-MAIN-2021-17 | refinedweb | 1,342 | 62.98 |
#include <NewSoftSerial.h>#define RXPIN 9#define TXPIN 8NewSoftSerial nss(RXPIN, TXPIN);nss.begin(4800);
#include <AltSoftSerial.h>AltSoftSerial nss;nss.begin(4800);
#define RXPIN 9#define TXPIN 8
Ouch. As you can see, the shield is meant to be inserted just one way and fits nicely together with the GPRS shield. Will try putting it backwards nevertheless, just to make sure.
Is it that some deep code hacking is needed to reverse the pins or is it impossible at all because of wiring reasons?
You will probably damage the shield if you reverse ALL the pins. Only pins 8 and 9 should be swapped.
It is absolutely impossible from software to make any pin other than 8 on Uno act as a 16 bit timer input capture.
QuoteIt is absolutely impossible from software to make any pin other than 8 on Uno act as a 16 bit timer input capture.Actually you could use any one of the analog pins as an input for the 16 bit timer.
From what I can tell your library does not use pin change interrupts, can you confirm this?
Also you mention that PWM on pin D10 can't be used. I assume I can still use D10 for other functions just not PWM?
How does code size compare to NSS? | http://forum.arduino.cc/index.php?topic=91499.msg706961 | CC-MAIN-2017-22 | refinedweb | 218 | 66.23 |
Pelli-Zhang Video Attenuator
visitors since September 17, 1996.
9 January 2003
THE Pelli-Zhang VIDEO ATTENUATOR
Macintosh computers are great for synthesizing and displaying images for vision experiments. However, the 8-bit resolution of their digital-to-analog converters is inadequate to render threshold-contrast stimuli, typically yielding on-screen contrasts that are only accurate to about 7 bits, because of the monitor's factor-of-two contrast gain. (See more-than-8-bit DACs.) Pelli and Zhang (1991) describe a simple electronic circuit, a video attenuator, that provides accurate contrast control, achieving 12-bit on-screen accuracy from any 8-bit DAC. You can build one yourself, from their schematic, or you can order one pre-made either from the ISR Instruments, which calls it the "ISR Video Attenuator" (with 15-pin D connectors, as documented below) or from Vision Research Graphics, which calls it a "Gray Scale Expander" (with BNC connectors, documented at their web site).
D. G. Pelli and L. Zhang (1991) Accurate control of contrast on microcomputer displays. Vision Research, 31:1337-1360. [pdf]
THE VISION RESEARCH GRAPHICS "GRAY SCALE EXPANDER"
Vision Research Graphics (VRG) sells the Gray-Scale Expander for $300. It uses BNC connectors for both input and output.
SVGA HD-15 to BNC cables are readily available, so this version would be a better fit for computers that don't have old-style Macintosh D-15 video ports. Mac DB-15 to BNC connectors are also available, so it should also work with old Macs.
The VRG Gray-Scale Attenuator, like any Pelli-Zhang attenuator, can be used to drive one gun on any monitor. To drive all three guns, the monitor must have a "termination switch", which removes the 75 ohm termination on each input, allowing the VRG unit to provide the 75 ohm termination instead. Without that switch, tying together the three 75 ohm monitor inputs would yield an input impedance of 25 ohms, which is too low for the attenuator to work properly.
(Thanks to Allen Ingling.)
THE ISR VIDEO ATTENUATOR
Installation instructions are near the end of this document.
The ISR Video Attenuator is a small box with Macintosh-video-compatible connectors at either end that connects to the video cable between the computer and monitor. It combines the three color RGB signals from a color video card to produce a single monochrome signal of higher grayscale resolution. It is designed for use with monochrome monitors; it may also be used with a color monitor, but will drive only the green gun. Only programs using special software (provided) will achieve this benefit. Physically, it is a small satin-finish machined-aluminum box, about the size of a matchbox (2"x1.2"x0.7"), with a 15-pin D Macintosh video connector at each end. Inside are precision resistors on a two-sided printed circuit board with controlled-impedance microstrip traces. This passive resistor network combines the three 8-bit RGB signals from your video card to produce a single much-higher-accuracy signal to drive your monitor.
HISTORICAL NOTE: There are two versions of the ISR Attenuator: the original (no longer available), and the new multisynch-compatible version, which is distinguished by a gold (or black) "M" stamped onto its case. The multisynch-compatible version of the Attenuator is compatible with all computers and monitors that use Macintosh-compatible 15-pin video connectors. Owners of the original attenuator--which is incompatible with multisynch monitors--can upgrade to multisynch compatibility by ordering the upgrade from ISR or by doing the upgrade themselves: see Appendix 2, below.
MULTISYNCH COMPATIBILITY
The original Attenuator was designed in 1989. It was designed to attach to the monitor, not the computer, but many of Apple's new monitors have permanently attached cables, with no connector at the monitor, making it impossible to attach directly to the monitor. Attaching the original Attenuator directly to the computer will result in it being installed backwards, producing a uselessly dim display. The new multisynch-compatible version of the Attenuator comes with a pair of compact gender changers that will correctly connect the Attenuator to your computer. Furthermore, many pins that were unimportant in 1989 now carry essential signals for autosensing and synching the latest displays. Six wires have been added to the new Attenuator to relay these signals. The multisynch-compatible version of the Attenuator is compatible with all computers and monitors that use Macintosh-compatible 15-pin video connectors. Owners of the original Attenuator can order the upgrade from ISR or do the upgrade themselves, by following instructions that appear in Appendix 2, below.
PACKING LIST
SOFTWARE AVAILABILITY
The VideoToolbox software is updated several times a year. You can download the latest edition from InfoMac (search for "video-toolbox") or the VideoToolbox web site. To get future editions automatically, just send me, denis@psych.nyu.edu, your name and email address. Each time I post a new version of the VideoToolbox to the Info-Mac Archive, I email a copy to everyone on the subscription list; there are currently nearly two hundred subscribers.
YOU WILL ALSO NEED
YOU MAY ALSO WANT TO HAVE
ORDERING INFORMATION
The multisynch-compatible ISR Video Attenuator is for research. As such it is licensed, not sold. The license has no time limit and is transferable. The accompanying Video Toolbox software for creating visual stimuli is supplied free, and is available separately to anyone, but may not be sold without permission.
Checks should be made payable to Syracuse University. The price of US$175 includes shipping in US. If you're in New York State then you either need to add sales tax or attach the New York State document excusing your institution from paying sales tax.
UPGRADES
Owners of the original ISR Video Attenuator may upgrade for US$35. Ship the Attenuator to the address below. It will be modified, adding the new wires, stamped with a gold (or black) "M", and shipped back to the sender, along with a pair of gender changers. The price includes shipping within the US.
SEND ORDERS TO
ISR Instruments
Institute for Sensory Research
Syracuse University
Merrill Lane
Syracuse, NY 13244-5290
(315)-443-4164
arpajian@syr.edu
DISCLAIMER
Denis Pelli has no financial involvement in the ISR Video Attenuator.
INSTALLING THE ATTENUATOR AT THE MONITOR
To install your ISR Video Attenuator, unplug your video cable from your monitor, plug the cable into the attenuator and plug the attenuator into your monitor. Note that the ISR logo should be at the end of the attenuator that's closest to the monitor.
However, you can't do that if your monitor has a permanently attached cable or an incompatible connector. In that case, you must install the attenuator at the computer end of the video cable.
INSTALLING THE ATTENUATOR AT THE COMPUTER
When the ISR Video Attenuator was designed in 1989, all Apple Macintosh modular computers connected to video monitors by means of a separate cable that plugged into both the computer and the display. The video attenuator was designed to plug into the back of the monitor, not the computer. If you plug it directly into the computer it will be backwards and won't work. (As noted above, the ISR logo should be at the end of the attenuator that's closest to the monitor.) Unfortunately, since 1989, Apple's cost-saving designs for their new monitors have replaced the connector at the back of the monitor by a permanently attached cable, making it impossible to install the video attenuator as originally intended. The solution is to use a pair of gender changers (included with the new multisynch-compatible version of the Attenuator), one for each end of the video attenuator, so that it can be installed in the correct orientation at the back of the computer. Unplug the video cable from your computer. Put a gender changer at either end of your video attenuator, plug it into your computer's video port, and plug the monitor's video cable into it. Confirm that the ISR logo is at the end of the attenuator that's closest to the monitor.
CALIBRATION
Once the attenuator is installed, you should run the program CalibrateLuminance to calibrate your monitor, attenuator, and video card. You will need a photometer in order to use that program. Once it's finished CalibrateLuminance will produce a new file called LuminanceRecord?.h, where "?" is the screen number of your monitor. (This screen number is similar to but not the same as the monitor number used by the Monitors control panel device.) This file describes the gamma function and RGB gains of your video card and attenuator. It is a C header file. You can cause its contents to be included in any program by writing
#include <LuminanceRecord1.h>
where "1" must be replaced by the screen number of your monitor, as determined by the program CalibrateLuminance or the subroutine GetScreenDevice(). Alternatively,
you can read the file at runtime by calling
ReadLuminanceRecord("LuminanceRecord1.h",&LR,0);
where &LR is a pointer to your luminance record struct. We recommend the latter approach, because it makes your program portable across monitors. Later, when you write programs (or compile the supplied demonstrations), your LuminanceRecord file will allow them to accurately control the luminance of your display. The relevant subroutines are all in the file called Luminance.c. Documentation is in the file Luminance.h. Note that LuminanceRecord1.h and LuminanceRecord2.h are provided as samples in the VideoToolboxSources folder, so that the demonstration programs can be compiled even before you have calibrated your own display. However, before doing any serious experiment it is essential that you replace the example by a new LuminanceRecord file that describes your own display.
Good luck.
APPENDIX 1: Signal Assignments for the Macintosh DB-15 External Video Connector
From Apple's Tech Note "HW 30 - Sense Lines.note", which you can find in the VideoToolbox:Notes folder.
Pin Name Description
1 RED.GND Red Ground
2 RED.VID Red Video Signal
3 /CSYNC Composite Sync Signal
4 SENSE0 Monitor Sense Line 0
5 GRN.VID Green Video Signal
6 GRN.GND Green Ground
7 SENSE1 Monitor Sense Line 1
8 n.c. Not Connected
9 BLU.VID Blue Video Signal
10 SENSE2 Monitor Sense Line 2
11 C&VSYNC.GND Ground for CSYNC & VSYNC
12 /VSYNC Vertical Sync Signal
13 BLU.GND Blue Ground
14 HSYNC.GND HSYNC Ground
15 /HSYNC Horizontal Sync Signal
APPENDIX 2: DO-IT-YOURSELF UPGRADE OF THE ORIGINAL VIDEO ATTENUATOR TO SUPPORT AUTOSENSING & MULTI-RESOLUTION MONITORS
The upgrade is also available from ISR, see ordering instructions above. But it's not hard to do it yourself, if you prefer, by following these step-by-step instructions.
You can make your own gender changers, or buy any brand, but those made by GC Electronics are particularly compact and inexpensive. Buy both the GC Electronics male-to-male 15-pin D gender changer 45-0527 and the female-to-female 15-pin D gender changer 45-0528. They cost $6.65 each, according to a 1994 catalog. They are sold by most electronics supply houses, e.g. Newark Electronics.
When the printed circuit for the video attenuator was designed in 1989, video pins 4, 7, 10, 11, 12, and 15 were not important. Since then Apple has defined pins 4, 7, and 10 as Monitor Sense Pins, and pins 11, 12, and 15 as carrying horizontal and vertical synch. (For details read Apple's "HW.30 Sense Lines.note" in the VideoToolbox:Notes folder.) To use your ISR Video Attenuator with anything other than a fixed-resolution 640x480 67 Hz display, e.g. a multiple-resolution monitor, you need to update the Video Attenuator to pass those 6 signals through. After this update your Video Attenuator will be compatible with all computers and monitors that use Macintosh-compatible 15-pin D video connectors. (As always, when used with a color monitor only the green gun will be used.) If you're comfortable with a soldering iron you can do this in about 15 minutes.
Here's how to update your original ISR Video Attenuator. Unscrew and remove the standoffs on either side of the female 15-pin connector (otherwise you won't be able to open the case). Use a 5/64 inch Allen key to unscrew the allen screw in the middle of the ISR Video Attenuator. Remove the metal case (top and bottom). Place the circuit board on the table so that you can see the resistors, with the male connector (with 15 gold-plated pins) on your right. You can now see the connections of pins 1 to 8 to the circuit board. On both right and left of the circuit board, pin 1 is nearest you, ..., and pin 8 is furthest from you. Use an X-ACTO or razor blade to cut the copper foil connected to pin 4 on both the right and the left. Scrape or fold back the foil to create a visible gap, to make sure the break is permanent. Now solder a wire connecting pin 4 on the left to pin 4 on the right. (Any wire will do, but 22 gauge solid-conductor insulated wire is particularly easy to handle.) Now solder a wire connecting pin 7 on the left to pin 7 on the right. Now turn the printed circuit upside down, so the resistors are hidden, and the male connector is on your left. Pins 9 to 15 are now visible. Pin 9 is nearest to you, on both right and left. Now solder a wire connecting pin 10 on the left to pin 10 on the right. Similarly, connect pin 11 to 11, 12 to 12, and 15 to 15. The video attenuator is now functional. You may test it now, before putting the case back on. The standard orientation for the metal case is to put the unlabeled half on the side of the circuit board with no resistors, and the labeled half ("VIDEO ATTENUATOR") on the side with resistors, oriented so that the writing is right-side-up when the male connector is on your right. Use the Allen key to replace the Allen screw. (You needn't bother to replace the stand-offs, since the gender changers won't use them.) You're done. (Here at ISR, we would also stamp "M" onto the case and fill the impression with gold or black laquer.) Your updated Video Attenuator is compatible with all computers and monitors that use Macintosh-compatible 15-pin D video connectors, but for the newer monitors you will need to buy a pair of gender changers to achieve the proper connection, as noted above.
APPENDIX 3: Using monochrome monitors in color mode.
Tom Busey, busey@indiana.edu, writes, "I'm trying to get around Apple's "feature" on the vram card driver on the Power Mac 7100 that doesn't allow one to select a color mode when it is attached to a grayscale monitor (e.g. Apple 2-page grayscale monitor). So I installed a MacLiberty Adapter to fool the computer into thinking I've got a color monitor. It works."
"However, unlike most monochrome monitors, which take their input from the "green" video line, the Apple 2-page monitor seems to be driven by the "blue" video line. (I put the monitor into thousands of colors mode, which shows the 4 color strips at the bottom of the control panel. When I drag this control panel over to the grayscale monitor, the color strips that were red and green on the color monitor are now black, and the formerly blue and gray strips are now identical gray strips.) I realize that I need to update my ISR Video Attenuator to pass all the sense lines correctly, and I presume that the blue/green anomaly will require a custom modification."
Yep, this will require connecting the Video Attenuator output to the Blue (pin 9) instead of the Green line (pin 5). | http://vision.nyu.edu/Tips/Attenuator.html | crawl-002 | refinedweb | 2,672 | 54.42 |
Overview
Atlassian Sourcetree is a free Git and Mercurial client for Windows.
Atlassian Sourcetree is a free Git and Mercurial client for Mac.
pygdb2: control GDB from within the Python program being debugged
pygdb2 is a python module which allows you to send commands to the underlying gdb process. For example, it can be used to automatically and programmatically add breakpoints and watchpoints.
How to install
Make sure that pygdb2 is installed:
$ pip install pygdb2
Then, you need to activate the GDB integration by putting this line inside your ~/.gdbinit:
python import pygdb2
How to use it
You need to launch your python program inside gdb using the special pyrun command:
$ gdb --args python myscript.py ... (gdb) pyrun ...
From the python program, you can use pygdb2.set_trace() to enter the gdb prompt, or pygdb2.execute() to send commands to gdb.
At the (gdb) prompt, it is possible to invoke the pdb command to enter in the corresponding debugger at the Python level.
Example
For example, the following code adds a watchpoint for a particular region of memory created with ctypes:
import ctypes import pygdb2 def main(): buf = ctypes.c_int() buf.value = 42 adr = ctypes.cast(ctypes.pointer(buf), ctypes.c_void_p) # enter gdb when we write to this memory pygdb2.execute("watch *(int*)%d" % adr.value) i = 0 while i < 5: print i i += 1 if i == 2: buf.value = 43 # GDB stops here
Here is an example of a gdb/pdb session:
$ gdb --args python set_watchpoint.py ... (gdb) # lines prefixed "pygdb2:" contain the command coming from pygdb2.execute() (gdb) pyrun ... pygdb2: watch *(int*)14079984 Hardware watchpoint 1: *(int*)14079984 0 1 Hardware watchpoint 1: *(int*)14079984 Old value = 42 New value = 43 i_set (ptr=0xd6d7f0, value=<value optimized out>, size=4) at /build/buildd/python2.7-2.7.1/Modules/_ctypes/cfield.c:663 663 /build/buildd/python2.7-2.7.1/Modules/_ctypes/cfield.c: No such file or directory. in /build/buildd/python2.7-2.7.1/Modules/_ctypes/cfield.c (gdb) # now we are debugging at the C level (gdb) # in particular, we are inside function of _ctypes (gdb) # which sets the value of the buffer (gdb) # Let's jump to the Python level (gdb) pdb Signal received, entering pdb Traceback: File "set_watchpoint.py", line 20, in <module> main() File "set_watchpoint.py", line 16, in main buf.value = 43 # we should enter gdb here > /home/antocuni/env/src/pygdb2/pygdb2/test/set_watchpoint.py(12)main() -> while i < 5: (Pdb++) print i 2 (Pdb++) c 2 3 4 Program exited normally. (gdb) q
Using signals
pygdb2 uses both SIGUSR1 and SIGUSR2 to communicate with gdb, so if your program also needs those, you might have conflicts. | https://bitbucket.org/antocuni/pygdb2/ | CC-MAIN-2018-30 | refinedweb | 448 | 57.27 |
Essential Python Reading List
Contents
Here’s my essential Python reading list.
- The Zen of Python
- Python Tutorial
- Python Library Reference
- Python Reference Manual
- The Python Cookbook
- Code Like a Pythonista: Idiomatic Python
- Functional Programming HOWTO
- Itertools functions
- Python library source code
- What’s New?!
If this doesn’t ring true, Python isn’t for you.
Python Tutorial
Your next stop should be the Python tutorial. It’s prominently available at the top of the online documentation tree, with the encouraging prompt:
start here
The latest version (by which I mean the version corresponding to the most recent stable release of Python) can be found on the web at, but I recommend you find and bookmark the same page from your local Python installation: it will be available offline, pages will load fractionally quicker, and you’re sure to be reading about the version of Python you’re actually running. (Plus, as I’ll suggest later, it’s well worth becoming familiar with the contents of your Python installation).
And with this tutorial, you’ll be running code right from the start. No need to page through syntax definitions or battle with compiler errors.
Since the best way to learn a language is to use it, the tutorial invites you to play with the Python interpreter as you read.
If you have a programming background you can complete the tutorial in a morning and be using Python by the afternoon; and if you haven’t, Python makes an excellent first language.
A tutorial cannot cover everything, of course, and this one recognises that and points you towards further reading. The next place to look, it says, is the Python Library Reference. I agree.
Python Library Reference
The documentation index suggests you:
keep this under your pillow
It’s a reference. It documents use of the standard libraries which came with your Python installation. I’m not suggesting you read it from cover to cover but you do need to know where it is and what’s in it.
You should definitely read the opening sections which cover the built-in objects, functions and types. I also suggest you get used to accessing the same information from within the Python interpreter using
help: it may not be hyperlinked or prettily laid out, but the information is right where you need it.
>>> help(dict) Help on class dict in module __builtin__: class dict)
Python Reference Manual
The Language Reference claims to be:
for language lawyers
but I’m not sure that’s true. Readable and clear, it offers a great insight into the language’s design. Again, you may not want to read it straight through, but I suggest you skim read it now and return to it if you find yourself confused by Python’s inner workings.
The Python Cookbook
The Python Cookbook is the first slab of treeware I’m recommending. Yes, Active State provides a website for the book, which has even more recipes than the book and is well worth a visit, but I’d say you want the printed edition. It’s nicely laid out and provides clear examples of how to use Python for common programming tasks. Alternative approaches are discussed. You can dip in to it or read the sections most relevant to your particular domain. This book teaches you idiomatic Python by example and, remarkably, it actually benefits from being written by a large team of authors. The editors have done an excellent job.
Incidentally, if you’re wondering why I claim Python is a high-level language and C++ isn’t, just compare the contents of the Python Cookbook with the contents of its C++ counterpart. Both books weigh in at ~600 pages, but the C++ one barely gets beyond compiling a program and reading from a file.
Code Like a Pythonista: Idiomatic Python
This one’s a presentation David Goodger gave at a conference last year. I wish he’d written it and I’d read it sooner. If you care about code layout, how you name things etc. but don’t want to waste time arguing about such things, then you probably want to go with the native language conventions. Python has a well-defined style and this presentation describes it well, connecting and summarising the contents of several other references.
Functional Programming HOWTO
The next version of the Python documentation (for versions 2.6 and later) has a HOWTOs section. A. M. Kuchling’s Functional Programming HOWTO is a MUSTREAD, especially for anyone coming from a language with weak support for FP. Python is far from being a pure functional programming language but features like list comprehensions, iterators, generators, even decorators, draw direct inspiration from functional programming.
Itertools functions
If you took my advice and skim-read the Library Reference, you may have skipped past a small module (mis?)filed in the Numeric and Mathematical Modules section. Now’s the time to go back and study it. It won’t take long, but these itertools functions are, to me, the epitome of elegance and power. I use them every day, wish they were promoted to builtins, and most of my interpreted Python sessions start:
>>> from itertools import *
Python library source code
The pure-python modules and test code in your Python installation are packed with good, readable code. If you’re looking for example code using a particular module, have a look at that module’s unit tests.
What’s New?
I’ve mentioned Python versions a few times in this article. Although Python takes stability and backwards compatibility seriously, the language has updated every year for as long as I’ve been using it. Generally, the changes are backwards compatible so, for example, 2.1 code should work fine in 2.5, but it’s important to stay current.
Do you write code like this?
anag_dict = {} words_fp = open("wordlist.txt", "rt") for line in words_fp.readlines(): word = line.strip().lower() chars = [] for ch in word: chars.append(ch) chars.sort() key = "".join(chars) anag_dict.setdefault(key, []).append(word) words_fp.close() anagrams = [] for words in anag_dict.values(): if len(words) > 1: anagrams.append(words)
Then you should find out about list comprehensions, the built in
sorted function, and
defaultdicts — introduced in Python 2.0, 2.4, 2.5 respectively!
from collections import defaultdict anag_dict = defaultdict(list) with open("wordlist.txt", "rt") as words_fp: for line in words_fp: word = line.strip().lower() key = "".join(sorted(word)) anag_dict[key].append(word) anagrams = [words for words in anag_dict.itervalues() if len(words) > 1]
The
with statement, incidentally, appears in 2.6, which is in alpha as I write this. Get it now by:
from __future__ import with_statement
Anyway, the point of all this is that A. M. Kuchling writes up what’s new in each Python revision: think of it as the release notes. As an example, here’s What’s New in Python 2.5. Essential reading.
Other Books?
I’ve only mentioned one book on this reading list. There are plenty of other decent Python books but I don’t regard them as essential. In fact, I’d rather invest in an excellent general programming title than (for example) Programming Python.
Why? Well, partly because of the speed at which the language progresses. Thus the second edition of the Python Cookbook — the single book I regard as essential — did a great job of being 2.4 compliant before 2.4 was even released, which definitely extended its shelf-life; but it has nothing to say about Python 2.5 features, let alone Python 2.6 and the transition to Python 3.0. And partly because the books all too easily become too thick for comfortable use. Python famously comes with batteries included, but full details of their use belongs online.
I do own a copy of Python Essential Reference by David Beazley. It’s the second edition and is now woefully out of date (covering Python up to version 2.1). I did get good use out of it, though. It’s well designed, beautifully written and laid out, and, weighing in at fewer than 400 pages, comfortable to read and carry. Somehow it manages (managed, I should say) to pack everything in: it’s concise, and it recognises that the online documentation should be the source of authoritative answers. Despite this, I haven’t bought the third edition. Partly because I don’t really need it, partly because it’s now a Python point revision or two out of date, and partly because it’s expanded to 644 pages.
The Reading List, with Links
- The Zen of Python
- Python Tutorial
- Python Library Reference
- Python Reference Manual
- The Python Cookbook
- Code Like a Pythonista: Idiomatic Python
- Functional Programming HOWTO
- Itertools functions
- Python library source code
- What’s New
There’s nothing controversial here. The Zen of Python should whet your appetite, and the next three items are exactly what you’ll find at the top of the Python documentation home page. Others may argue Python in a Nutshell deserves a place, or indeed the hefty Programming Python, and they’re certainly good books.
I’d be more interested to find out which non-Python books have improved your Python programming the most. For myself, I’ll predictably pick Structure and Interpretation of Computer Programs.
The anagrams puzzle comes from Dave Thomas’s CodeKata, a nice collection of programming exercises. The solutions presented here gloss over a few details and make assumptions about the input. Is “face” an anagram of “café”, for example? For that matter, what about “cafe” and “café”. Or “isn’t” and “tins”? What if the word list contains duplicates? These issues aren’t particularly hard to solve but they do highlight the dangers of coding up a solution without fully specifying the problem, and indeed the difference between a “working” solution and a finished one.
However, I just wanted a short program to highlight advances in recent versions of Python, and in that light, here’s another variant. (My thanks to Marius Gedminas for spotting a bug in the code I originally posted here.)
from itertools import groupby, ifilter, imap from operator import itemgetter from string import (ascii_lowercase, ascii_uppercase, punctuation, maketrans, translate) key = sorted second = itemgetter(1) to_lower = maketrans(ascii_uppercase, ascii_lowercase) data = open("wordlist.txt", "rt").read() translated = translate(data, to_lower, deletions=punctuation) words = set(translated.split()) sorted_words = sorted(words, key=key) grouped_words = imap(list, imap(second, groupby(sorted_words, key))) anagrams = ifilter(lambda words: len(words) > 1, grouped_words) | http://wordaligned.org/articles/essential-python-reading-list | crawl-002 | refinedweb | 1,750 | 63.49 |
In the wake of my last post – Getting started with Java Server Faces (JSF) using Apache MyFaces – in JDeveloper 10.1.3EA
– I will continue my explorations of MyFaces in this article. In
particular, I will take a look at the Tree2 Component that is shipped
with Apache MyFaces. I will make use of Oracle JDeveloper 10.1.3 EA as
my IDE. For details on setting up Apache MyFaces with JDeveloper
10.1.3, see the post I just mentioned. To make things easier for me –
and hopefully for some of you as well – I have created a
starter-project, a JDeveloper 10.1.3 Application Workspace with
MyFacesBase project that contains the files you need to get started
with MyFaces. You can simply download this zipfile: MyFacesBase.zip,
extract it on your PC and open the JWS file in JDeveloper 10.1.3. You
still need to set up the project with proper library definitions – see
the previous post for details.
Starting from this MyFacesBase Project we will create a beautiful tree-based JSF application. Sort of.
Steps to prepare
I will first gear the base environment to my specific Tree oriented project:
- Extract MyFacesBase.zip to a directory; let’s refer to that directory as MyFacesTree_HOME
- Change
the string Base to Tree in the names of the subdirectories
MyFacesTree_HOME\MyFacesBase and
MyFacesTree_HOME\MyFacesBase\MyFacesBase
- Open JDeveloper 10.1.3
- Load the file MyFacesBase.jws; this will open the MyFacesBase application workspace
- Rename the Application to MyFacesTree.
- Change
the name of the MyFacesBaseProject to MyFacesTreeProject. If you do not
already see the project, then add it to the Application Workspace by
adding the fiule MyFacesBaseProject.jpr.
- Go to the Project
Properties of the MyFacesTree project. Verify in the Libraries tab
whether the libraries MyFaces and JSTL are set up correctly and added
to the project. If necessary see the post mentioned above. Also check
whether the JSP Tag Libraries MyFaces-Core, MyFaces-HTML and
MyFaces-ext (or Tomahawk) have been added to the project.
Verifying the preparations
Now
is a good time to make sure that we can indeed create and run JSF
applications in the JDeveloper 10.1.3 environment and project we just
set up. Follow the these simple steps:
- From the Right Mouse
Button Menu on the MyFacesTreeProject project, go to the New Gallery.
Select the node Web Tier, JSF and select the option JSF JSP. Walk
through the Wizard, accepting all default choices – or making
refinements as you like.
- The wizard create a JSP with basic JSF
layout – importing the required Tag Libraries and layout a
<f:view> and a <f:form>. The JSP is opened in the editor.
Type a few characters – just to have some output to look at in our
browser. Note: at this point we have not made any changes to
faces-config.xml or any other configuration file.
- From the RMB
menu on the JSP, choose the option run. The “application” should now
open in the browser, displaying your trivial output.
If you have gotten this far, you are in good shape to start using the MyFaces Tree component.
Developing with the MyFaces Tree2 Component
Apache
MyFaces is a valuable open source project providing the base
implementation of the JSF standard and extending it with a large set of
advanced components, called Tomahawk. The set includes TabbedPane,
RssReader, Advanced Table features, inputDate and Calendar, HtmlEditor,
Pulldown Menu, and many others. To confuse matters, there is a Tree, a
Tree2 and a Tree Table component. It seems that Tree2 effectively
replaces Tree; Tree is labeled ‘sort of obsolete’. So I focus on Tree2
for now.
Using the JSP JSF wizard from the New Gallery I create
a new JSP called LibraryTree.jsp. From the Component Palette, we drag
and drop the Tree2 component to the JSP. The page source looks like:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" ""> <%@ page <title>LibraryTree</title> </head> <body><h:form> <t:tree2 <f:facet <h:panelGroup> <t:graphicImage <t:graphicImage <h:outputText <h:outputText </h:panelGroup> </f:facet> <f:facet <h:panelGroup> <t:graphicImage <t:graphicImage <h:outputText <h:outputText </h:panelGroup> </f:facet> <f:facet </h:panelGroup> </f:facet> </t:tree2> </h:form></body> </html> </f:view>
There
are a few things worth noting here. First of all, the tree2 components
has child facets; there is a facet for each type of node that the tree
will contain. Apparently our tree wil contain nodes of types
foo-folder, author and book. We will see later how we specify the node
type on the backing bean’s model data. Each facet describes how the
node type must be displayed; various node-types can make use of
different icons, styles, actions etc. Note how the Node Interface
provides properties like childCount, nodeExpanded and nodeSelected.
The
value for the tree2 component is set to:
value=”#{LibraryTreeHandler.treeModel.treeData}”. That suggests that
there is a Managed Bean called LibraryTreeHandler that exposes a
getTreeModel method that returns an object that has a method
getTreeData(). This last method must return an object that implements
the TreeNode interface – in package org.apache.myfaces.custom.tree2.
So there we go: create the required objects. First the LibraryHandler class:
package nl.amis.myfaces.tree; public class LibraryHandler { private LibraryTreeModel treeModel; public LibraryHandler() { treeModel = new LibraryTreeModel(); } public LibraryTreeModel getTreeModel() { return treeModel; } }
This
class is a used for the managed bean called LibraryTreeHandler and
referenced by the value property of the tree2 component. See the
faces-config.xml file:
<?xml version="1.0" encoding="windows-1252"?> <!DOCTYPE faces-config PUBLIC "-//Sun Microsystems, Inc.//DTD JavaServer Faces Config 1.1//EN" ""> <faces-config <managed-bean> <managed-bean-name>LibraryTreeHandler</managed-bean-name> <managed-bean-class>nl.amis.myfaces.tree.LibraryHandler</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> </managed-bean> </faces-config>
The LibraryHandler class relies on the LibraryTreeModel class. Its code is as follows:
package nl.amis.myfaces.tree; import org.apache.myfaces.custom.tree2.TreeModel; import org.apache.myfaces.custom.tree2.TreeNode; import org.apache.myfaces.custom.tree2.TreeNodeBase; public class LibraryTreeModel { public LibraryTreeModel() { } public TreeNode getTreeData() { TreeNode treeData = new TreeNodeBase("foo-folder", "Library", false); TreeNodeBase authorNode = new TreeNodeBase("author", "Ben Elton", false);); authorNode = new TreeNodeBase("author", "Orson Scott Card", false);); authorNode = new TreeNodeBase("author", "Rod Johnson", false);; } }
The
application looks like this in the browser. We can expand and collapse
nodes. Note that all tree manipulation is done server side: each node
expansion requires browser-to-server communication for updating the
treenodes.
The Project with the application so far can be downloaded here: MyFacesTree_Stage1.zip
Make the LibraryTree application a little more interesting
We
are now going to make the application a little more interesting. We
will allow Author nodes to be selected and we will display the
Biography of the currently selected author – marked in the tree with an
asterisk – next to the tree. It is remarkable how simple it is to add
these changes to the application.
If
we select an Author node in the tree, the tree is redrawn and an
asterisk is placed in front of that node – in the screenshot the
selected node is for Orson Scott Card. At the same time, when the page
is refreshed, the Results of the currently selected author are updated
with the biography for the currently selected author.
The changes I had to make to realize this functionality are in three location:
1. First the JSP LibraryTree.jsp
I have added the asterisk for the currently selected Author node:
<h:outputText
You can see how simple it is: the asterisk is only rendered if the current node in the tree is selected.
To make the author node selectable, we add a commandLink:
<h:commandLink </h:commandLink>
When
the node is clicked on, the processAction method in the
LibraryTreeHandler bean is invoked; this method needs to have a return
type of void and accept a single ActionEvent input parameter.
The next change is displaying the biography for the current author:
<h:panelGroup <f:verbatim> <h3>Results of the currently selected author</h3> </f:verbatim> <h:outputText </h:panelGroup>
I
used a panelGroup for the rendered property: the entire panelGroup is
only displayed if there is indeed a Biography available. Note that the
bio is read from the authorBio property in the LibraryTreeHandler bean
that is managed based on the LibraryHandler class. We will see that
class in a moment.
I finally made use of a panelGrid to display
the tree and the bio next to each other. A panelGrid element in HTML is
typically rendered as table. Okay, it is not pretty but it will do the
trick.
2. The LibraryHandler class
This class now suddenly needs to provide an authorBio property and it has to absorb the processAction event.
package nl.amis.myfaces.tree; import javax.faces.component.UIComponent; import javax.faces.event.AbortProcessingException; import javax.faces.event.ActionEvent; import org.apache.myfaces.custom.tree2.HtmlTree; import org.apache.myfaces.custom.tree2.TreeNodeBase; public class LibraryHandler { private LibraryTreeModel treeModel; private String authorBio; public LibraryHandler() { treeModel = new LibraryTreeModel(); } public LibraryTreeModel getTreeModel() { return treeModel; } public void setAuthorBio(String authorBio) { this.authorBio = authorBio; } public String getAuthorBio() { return authorBio; } public void processAction(ActionEvent event) throws AbortProcessingException {(); tree.setNodeSelected(event); setAuthorBio(((ExtendedTreeNode)node).getLongDescription()); } } }
You can see that we make use of a new class, the ExtendedTreeNode. So we also need to create that class:
3. The new ExtendedTreeNode class
In order to have the new biography property for our Authors, we introduce this class:
package nl.amis.myfaces.library; import org.apache.myfaces.custom.tree2.TreeNode; import org.apache.myfaces.custom.tree2.TreeNodeBase; public class ExtendedTreeNode extends TreeNodeBase { private String longDescription; public ExtendedTreeNode( String type, String description, boolean p3, String longDescription) { super(type, description, p3); this.longDescription = longDescription; } public void setLongDescription(String longDescription) { this.longDescription = longDescription; } public String getLongDescription() { return longDescription; } }
In
class LibraryTreeModel we use this class to populate the nodes of our
tree. The nodes we create for Authors are of the new ExtendedTreeNode
type that include the biography. So we modify the LibraryTreeModel.java
to:
package nl.amis.myfaces.tree; import org.apache.myfaces.custom.tree2.TreeNode; import org.apache.myfaces.custom.tree2.TreeNodeBase; public class LibraryTreeModel { public LibraryTreeModel() { } public TreeNode getTreeData() { String bio = null; TreeNode treeData = new TreeNodeBase("foo-folder", "Library", false); bio = " Already a successful comedian, Ben Elton turned to writing situation comedies during the 1980s and penned BBC classics such as \"The Young Ones\", \"Blackadder\", and during the 1990s \"The Thin Blue Line\".\n" + " He provided lyrics for Andrew Lloyd Webber Musical, The Beautiful Game, which was nominated for Best Musical at the Laurence Olivier Theatre Awards in 2001 (2000 season).\n" + " His comedy, Popcorn performed at the Apollo Theatre, was awarded the 1998 Laurence Olivier Theatre Award for Best New Comedy of the 1997 season.\n" + " Has three children : Bert, Lottie and Fred. Is co-writer of the Queen Musical 'We Will Rock You' with the band itself.\n" + "Birth name: Benjamin Charles Elton. Height: 5' 8\" (1.73 m) \n"; TreeNodeBase authorNode = new ExtendedTreeNode("author", "Ben Elton", false, bio);); bio = . However, the Ender cycle now includes the new parallel series that began with Ender's Shadow in 1999, followed by Shadow of the Hegemon in 2001, and continued with Shadow Puppets in 2002. Warner Brothers also recently announced that it has made a deal for director Wolfgang Petersen to bring Ender's Game to the big screen.)."; authorNode = new ExtendedTreeNode("author", "Orson Scott Card", false, bio);); bio = "Rod Johnson is an enterprise Java architect with extensive experience in the insurance, dot-com, and financial industries. He was the J2EE architect of one of Europe's largest web portals, and he has worked as a consultant on\n" + "a wide range of projects.\n Rod has an arts degree majoring in music and computer science from the University of Sydney. He obtained a Ph.D. in musicology before returning to software development. He has been working with both Java and J2EE since\n" + "their release and is actively involved in the Java Community Process as a member of the JSR-154 (Servlet 2.4) and JDO 2.0 Expert Groups. He is the author of several best-selling books, like \"Expert One-on-One J2EE Design and Development\" (Wrox, 2002) and has contributed to several other books on J2EE since 2000. As founder of the Spring Framework (), he speaks frequently at leading industry conferences.\n"; authorNode = new ExtendedTreeNode("author", "Rod Johnson", false, bio);; } }
This
is all it takes to create a dynamic tree, driven from a backend data
model, including the select node action that updates the current
biography. In a next article I will actually use the EJB 3.0
Persistence functionality out of container (the GlassFish
implementation) for providing the backend data model. I will try to
make the nodes updateable as well.
Resources
The JDeveloper 10.1.3 MyFacesBase Application Workspace and Project: MyFacesBase.zip
The JDeveloper 10.1.3 Application Workspace as it is at the end of this article: MyFacesTree.zip
Overview of the Apache MyFaces Components – more specifically the Tree2 component
The MyFaces Wiki on Tree2
Despite the time this article is very useful, i have to support to a legacy jsf application and this article was very useful.
Thanks, thanks and again thanks!.
Hi, Also, I want to develop such a tree with checkboxes.
Any idea, please?
Hi Lucas,
I have gone through all your tree examples and they have been extremely helpful to me on my projects and application development. I am having one issue, which is setting focus on node (by default) when the tree rendered first time. I tried everything but could not achieve it.
My use case is when tree is rendered first time then one of the first level node (say Ben Elton), should be expanded and one of the child node (say Post Mortem), should be highlighted/ focused.
How can we achieve it?
Thanks
Best of the best example in a field where we rarely find a good example. This is really a big help for those starting with tree2. Thank you very much for posting.
I got it working in Exadel studio as well.
Thanks,
Sanjeev.
this is very good example for those who dont knoe abt tree structure.. and its really helps alot .as i was a beginer so i know how its helps me. thank you monu
This is an Excellent article but what about when i am not aware of hierarchy depth of tree in advance? Here it seems as if we must be awre of depth of tree i.e. from root node to lower most leaf. depending on that we design facet child and develop the application. please let me know the solution if we can use tree2 for a situation where i am not aware of depth of tree.
Thank you
Arvind
Hi,the code is really excellant.Author used jdeveloper,but iam using Exadel studio…So initially i was unable to import the MyFacesBase.jws in my eadel studio….But i used the code in this page to execute the tree.The LibraryHandler class should be renamed as LibraryTreeHandler.class,then only it works..
If any need to execute the app. in exadel..contact me..i will help u.
deepak.kannadasan@gmail.com
Excelent. O very good article.
I’ve started to understand what is about this tree2 from myfaces.
Thanks a lot for this.
This is the best step-by-step article.
Hi, I want to develop such a tree with checkboxes and multiple selections.
Any idea ???
This is An excellent article for tree2.
This examples works properly without tiles. But when
we use this with tiles it does not work i.e. does not
get expanded when we click on + symbol of the tree.
But tree component works well with tiles. Iwant to use
tree2 with tiles because my application is already developped
with tree2.
In my application tree2 component has to be displayed
on left side of the screen and depending on the click of node
coresponding information has to displayed on right hand side.
Please help me.
Thanks for this i was looking for a simple getting started example. I got this working (nearly fully) on eclipse.
Vijay
How could I adapt this such that when one of the links is clicked, a jsf page is displayed in the RH pane while the tree is still displayed.
Thanks
Angus
An excellent article that Oracle and Apache should have at the top of their documentation list for JDeveloper and MyFaces.
Thank you Lucas for giving what every developer wants when starting out with a new API – a straightforward A-Z explanation of how to get up and running.
Angus
Excellent article. It’s just a shame that you don’t explain how to use the tree2 component outside of JDeveloper…
Thanks. | https://technology.amis.nl/2006/01/05/apache-myfaces-open-source-jsf-implementation-using-the-tree2-component-with-jdeveloper-1013/ | CC-MAIN-2017-26 | refinedweb | 2,812 | 57.37 |
What have std::optional, std::any, and std::variant in common? You can construct them in place. But that is not everything. A std::variant supports a visitor.
But first of all. What's the job of the three new data types?
In order not to repeat me. In the post C++17 - What's New in the Library are the details to the three data types that are part of C++17.
What does construction in place mean? For simplicity reason, I will refer only to std::optional. A std::optional<std::string> opt may hold a value of type std::string. You construct opt by only providing the arguments for the std::string constructor.
A short example should make my point clear.
// inPlace.cpp
#include <optional>
#include <iostream>
#include <string>
int main(){
std::cout << std::endl;
// C string literal
std::optional<std::string> opt1(std::in_place, "C++17"); // 1
// 5 characters 'C'
std::optional<std::string> opt2(std::in_place,5, 'C'); // 2
// initializer list
std::optional<std::string> opt3(std::in_place, {'C', '+', '+', '1', '7'}); // 3
// Copy constructor
std::optional<std::string> opt4(opt3); // 4
std::cout << *opt1 << std::endl;
std::cout << *opt2 << std::endl;
std::cout << *opt3 << std::endl;
std::cout << *opt4 << std::endl;
std::cout << std::endl;
}
opt1 (1), opt2 (2), and opt3 (3) are constructed with the tag std::in_place. This means that the constructor of std::string is invoked with the provided argument. Therefore, the strings are in place constructed from a C string (1), 5 characters 'C', and an initializer list. This will not hold for opt4 (4). opt4 is a copy constructed from opt3.
Here is the output of the program.
Does in-place construction look unfamiliar to you? Why? We have it since C++11. The containers of the Standard Template Library support a bunch of new methods for adding elements. These methods start with the name emplace such as emplace_back. Therefore you can add a new element to a std::vector<int> vec by just saying vec.emplace_back(5). This is equivalent to vec.push_back(int(5)).
What a coincidence! This week, I will give a seminar about design patterns in Python. And now, I found the std::visit function in the interface of std::variant. What sounds like the visitor pattern according to the classical design patterns is really a kind of a visitor for a list of variants.
std::visit allows you to apply a visitor to a list of variants. The visitor must be a callable. A callable is something, which you can invoke. Typically this can be a function, a function object, and a lambda function. For simplicity reasons, I use a lambda function in my example.
// visit.cpp
#include <iostream>
#include <vector>
#include <typeinfo>
#include <type_traits>
#include <variant>
int main(){
std::cout << std::endl;
std::vector<std::variant<char, long, float, int, double, long long>> // 1
vecVariant = {5, '2', 5.4, 100ll, 2011l, 3.5f, 2017};
// display each value
for (auto& v: vecVariant){
std::visit([](auto&& arg){std::cout << arg << " ";}, v); // 2
}
std::cout << std::endl;
// display each type
for (auto& v: vecVariant){
std::visit([](auto&& arg){std::cout << typeid(arg).name() << " ";}, v); // 3
}
std::cout << std::endl;
// get the sum
std::common_type<char, long, float, int, double, long long>::type res{}; // 4
std::cout << "typeid(res).name(): " << typeid(res).name() << std::endl;
for (auto& v: vecVariant){
std::visit([&res](auto&& arg){res+= arg;}, v); // 5
}
std::cout << "res: " << res << std::endl;
// double each value
for (auto& v: vecVariant){
std::visit([&res](auto&& arg){arg *= 2;}, v); // 6
std::visit([](auto&& arg){std::cout << arg << " ";}, v);
}
std::cout << std::endl;
}
I create in (1) a std::vector of variants. Each variant can hold a char, long, float, int, double, or long long. It's quite easy to traverse the vector of variants and apply the lambda function (2) to it. Thanks to the function typeid, I get the types to the variants. I think, you see the visitor pattern. The std::vector of variants is the visited data structure on which I apply various functions (visitors).
Now, I want to sum up the elements of the variants. At first, I need the right result type at compile time. std::common_type (4) from the type traits library will provide it for me. std::common_type gives me the type, to which all types char, long, float, int, double, and long long can implicitly be converted. The final {} in res{} causes that it will be initialized to 0.0. res is of type double. (5) calculates the sum. I can even use a visitor to change the elements on the fly. Have a look at (6).
Here is the output of the program. Run-time type information with std::type_info gives me with Visual C++ quite readable names.
It was not so easy, to get this output. To compile the program, you need a current GCC snapshot. Which I don't have and is not available online. Therefore, I used in the first step the compiler explorer at godbolt to check the syntax of my program. In the second step, I compiled the program using the current Visual C++ compiler on. You have to use the flag std:c++latest. Two out of three runs produced a Maximum execution time exceeded! error. But finally, I made it.
With C++17, we get Parallel Algorithm of the Standard Template Library. We even get a few new algorithms. You will see in the next post which 422
Yesterday 7029
Week 40746
Month 107412
All 7375252
Currently are 136 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
It was practical. Keep on posting! | https://modernescpp.com/index.php/c-17-constructed-in-place | CC-MAIN-2021-43 | refinedweb | 944 | 67.86 |
Using Microsoft Ajax Minifier Tool to Minify JavaScript Files
Introduction
The increased use of JavaScript and libraries based on JavaScript (such as ASP.NET AJAX and jQuery) results in a considerable amount of JavaScript being downloaded on the client side. The need for large amounts of JavaScript to be downloaded may result in your website getting a performance penalty for obvious reasons. In such cases it is recommended to reduce the size of JavaScript files by using minification techniques. To that end Microsoft Ajax Minifier, a tool to compress JavaScript and CSS files, can greatly reduce the size of such files thus improving the performance of your web application. In this article you will learn the basics of using Microsoft Ajax Minifier tool and certain programming recommendations to get most out of the minification process.
Understanding Minification
Developers are often advised to write clear and readable code. The variable names you use and the comments you place in your code greatly affect the readability of your code. No doubt, as a good programming practice you should adhere to this principle. However, these standards are of little use to compilers, parsers and code execution engines. Consider the following JavaScript function for example,
//The following function converts a given temperature value //from Fahrenheit to Celsius function ConvertToCelsius(value) { var newValue; newValue = (value * 1.8) + 32; return newValue; }
function f(b){var a;a=b*1.8+32;return a}
The function f, shown above, does exactly the same thing as the ConvertToCelsius() function but if you compare the number of bytes in both versions, obviously the second version is highly compact. This means the later version will be more quickly downloaded to the client machine than the former one.
Microsoft Ajax Minifier tool works on the same principle. It cleverly compacts your JavaScript code by applying several minification techniques such as removing comments, renaming variables and functions and removing white spaces (and there are many others).
Using Microsoft Ajax Minifier Command Line Tool
Microsoft Ajax Minifier is a command line tool named AjaxMin.exe. The AjaxMin.exe supports several command line switches out of which the following can be used for basic minification.
ajaxmin.exe <source_file> -o <destination_file> -clobber:<true/false>
In the above syntax, source_file is the path and name of the source JavaScript file that you wish to minify. The -o switch specifies the destination_file, B i.e. the path and name of the output file after minification. By default the destination_file will not be overwritten if already present. The -clobber switch allows you to control this behavior. If you set the -clobber switch to true, the destination_file will be overwritten.
A complete list of available command line options can be seen by /? switch.
Minifying a Sample JavaScript File
Now that you know what minification is, let's create a sample JavaScript file and then apply minification to it. Open Visual Studio and create a new ASP.NET Website. Then add a new JScript file to the website and key-in the following code:
//The following function converts a given temperature value //from Fahrenheit to Celsius and vice a versa //Example: Test(1,20,"C") function Test(month,temp,unit) { var months = new Array("Jan","Feb","Mar"); var newValue; var ConvertToCelsius = function(value) { return (value * 1.8) + 32; } var ConvertToFahrenheit = function(value) { return (value - 32) / 1.8; } if (unit == "C") { newValue = ConvertToCelsius(temp); } else { newValue = ConvertToFahrenheit(temp); } alert("The temperature in the month of " + months[month] + " was " + newValue); }
Now, open the Microsoft Ajax Minifier Command Prompt from Windows Start menu (see below).
Figure 1: Microsoft Ajax Minifier Command Prompt
Issue the following command at the command prompt. Make sure to change the path and file name of the source and the destination files (JScript.js and JScript.min.js in the following figure).
Figure 2: Issue the following command at the command prompt
When you press enter, the Microsoft Ajax Minifier tool compresses the source file and displays the statistics related to the minification process. For example, the following figure shows that a source file of 656 bytes is reduced to 205 during the process.
Figure 3: The Microsoft Ajax Minifier tool compresses the source file
You can now refer the JScript.min.js file in the <script> tags throughout your web wite.
Now that you know how to use the minifier tool with its basic switches, let's see some of the main optimization techniques that it uses on your JavaScript code.
How the Microsoft Ajax Minifier Tool Works
Here are some minification techniques that the Microsoft Ajax Minifier tool applies to your code:
- Unnecessary white spaces, line breaks are removed.
function Test() { alert("Hello World"); }
becomes
function Test(){alert("Hello World")}
function Test() { alert("Hello World!"); }
becomes
function Test(){alert("Hello World!")}
function Test(i) { if(i>100) { return i+10; } else { return i-10; } }
becomes
function Test(a){return a>100?a+10:a-10}
function Test() { var currentValue = 100; var myFunction = function() { currentValue = currentValue + 10; alert(currentValue); } }
becomes
function Test(){var a=100,b=function(){a=a+10;alert(a)};b()}
var msg = "Please enter Email e.g. "john@somedomain.com"?";
becomes
var msg='Please enter Email e.g. "john@somedomain.com"?'
var var1 = "Hello"; var var2 = "World"; var var3 = "from"; var var4 = "JavaScript";
becomes
var var1="Hello",var2="World",var3="from",var4="JavaScript";
var dt = new Date();
becomes
var dt=new Date;
var months = new Array("Jan", "Feb", "Mar");
becomes
var months=["Jan","Feb","Mar"]
Detailed discussion of how Microsoft Ajax Minifier deals with the code can found on the official ASP.NET website.
Coding for Better Minification
Though the Microsoft Ajax Minifier tool does a great amount of work for you to generate a compact output, following some coding practices can further increase the chance of better minification. Some of such coding practices are discussed below.
Avoid Using Eval() Function and With Statement
The eval() function executes a piece of code supplied as a string. For example, consider the eval() function call below:
eval("var myVar=100;alert(myVar);");
There is no way for the minifier tool to convert this string literal into a compact form. As a result the call will not be compacted. Similarly, using with statement also hampers the minification process.
var myVar = 100; with(window) { alert(myVar); }
Since the minifier tool won't know whether myVar refers to the variable or to a member of window object the entire block will not be minified.
Try to Avoid Global Variables and Functions
Global variables and functions are never minified because there is a chance that they are used from some other part of the website. Consider the set of global variables and function below:
function MainFunction() { HelperFunction1(); HelperFunction2(); } function HelperFunction1() { alert("inside helper function 1"); } function HelperFunction2() { alert("inside helper function 2"); }
Here, the functions HelperFunction1() and HelperFunction2() are used only by MainFunction() and are not used anywhere else. However, since they are in global scope, the minifier tool will not compact them. You can overcome this problem by modifying the code like this:
function MainFunction() { var HelperFunction1=function(){ alert("inside helper function 1"); } var HelperFunction2=function() { alert("inside helper function 2"); } HelperFunction1(); HelperFunction2(); }
Now, the minifier tool will compact both of the helper functions to smaller names and calls to them will also be substituted accordingly.
Use Shortcuts for Window and Document Objects
It is very common to use window and document JavaScript objects in the code. If you refer them as "window" and "document" at each and every place then you will be wasting bytes every time. Instead you can use them as shown below:
var w = window; var d = document; function MainFunction() { d.getElementById("Div1"); w.setInterval(myCode, 1000); }
You can even wrap frequently used methods of document object (such as getElementById) in a separate function like this:
function Get(id) { return d.getElementById(id); }
Then use the Get() function at all the places where you would have used getElementById() method.
function DoTest() { alert(Get("abc").id); }
Minifying CSS Files
The Microsoft Ajax Minifier tool can also compact Cascading Style Sheet (CSS) files. The process is very similar to what we discussed for JavaScript files but with much less complication. You can refer to the official documentation of the tool for its usage syntax and available switches for the CSS files.
Running Microsoft Ajax Minifier Tool from Visual Studio
Microsoft Ajax Minifier is a command line tool. Rather than opening the command prompt every time and then invoking the commands manually, wouldn't it be nice if you integrated it with Visual Studio IDE itself? Luckily you can do so with simple steps.
Open Visual Studio and select "External Tools" from its Tools menu.
Figure 4: Visual Studio: External Tools
This will open a dialog as shown below:
Figure 5: Visual Studio External Tools Dialog Box
Specify the tool title as "Microsoft Ajax Minifier." Set Command to point to ajaxmin.exe. Add arguments as shown below:
$(ItemFileName)$(ItemExt) -out $(ItemFileName).min$(ItemExt) -clobber
The $(ItemFileName) represents the file name of the currently opened file in the Visual Studio IDE. The $(ItemExt) represents its extension. You can also pick these arguments from the arguments menu. Set initial directory to $(ItemDir) i.e. the same folder as the opened file. Finally. check the "Use Output Window" checkbox so that output messages will be displayed there and click on the OK button. The Microsoft Ajax Minifier tool will now be displayed on the Tools menu.
Figure 6: The Microsoft Ajax Minifier Tool
Open any JavaScript file and run the tool on it. A sample message emitted by the tool as seen in the Output Windows is shown below:
Figure 7: Microsoft Ajax Minifier Tool Output Window
In order to see the minified file you will need to refresh the website in the Solution Explorer.
Summary
Microsoft Ajax Minifier is a tool to produce minified versions of JavaScript and CSS files. In this article we saw the basic usage of Microsoft Ajax Minifier tool for minifying JavaScript files. We also discussed certain programming practices for achieving better minification. The minified JavaScript files will take much less time to download on the client side thus improving the overall performance of your website.
About the Author:
Bipin Joshi is a blogger and writes about apparently unrelated topics - Yoga & technology! A former Software Consultant by profession, Bipin is programming since 1995 and is working with .NET framework ever since its inception. He has authored or co-authored half a dozen books and numerous articles on .NET technologies. He has also penned a few books on Yoga. He was a well known technology author, trainer and an active member of Microsoft developer community before he decided to take a backseat from the mainstream IT circle and dedicate himself completely to spiritual path. Having embraced Yoga way of life he now codes for fun and writes on his blogs. He can also be reached there.
Boutique en ligne pas cher GHDPosted by nnyets537 on 07/16/2013 08:37am. [url=]ghd lisseur[/url] Défrisants GHD stylers sont tous azimuts comme forlater cheveux incroyablement joliment et beundringsverdig. Ces elementene est tellement portable et a noen egenskaper qui les rend difficiles agréable voyage de la rivière avec ou transporter. Dessuten rend tige droite, un bon cadeau pour un Kvinne sur bursdag sien et devient vraiment verdsatt av tout comme Mottar comme un cadeau. Noen Surprise confortables bursdag sienne avec GHD Ensemble-cadeau précieux. [url=]ghd lisseur
White wine Mother of Pearl Dial having Diamonds Watch.Posted by Ricsninny on 06/20/2013 08:44pm
ã¬ãã£ã¼ã¹ã¦ã©ããç°ãªãã [url=]ã«ã·ãª è æè¨[/url] ã¶ã¤ã³ãã¤ã³ã¯ã«ã¼ãã¾ããæ§æ [url=]ã·ããºã³ è æè¨[/url] ããã¦ãã¾ãçµã¿è¾¼ãããªã© æ§ã«ãããªãæ [url=]seiko è æè¨[/url] åãªã«ã·ãªWaveseptoræè¨ã®ç°ãã·ãªã¼ãºã ãã®ããã [url=]ã»ã¤ã³ã¼ è æè¨[/url] ã¯æãé«é¤ã¾ã確ãã©ã«ããã¼ã¬ã³ ããã·ã£ã ããã¨ãã§ãã¾ãã§å©ç¨å²©ã¾ãã¯é½å¸è¡¨é¢ç»å±±è ããã¤ã«ã¼å¨ãå ¨ä½ã«ãReply
Finding Health-related Cannabis dispensaries inside HesperiaPosted by Capoustaits on 06/15/2013 06:05pm
Dependence is another clear sign marijuana physician that as daunting medical a Division is of marijuana appear available. There is only one college for growing marijuana in law making Africa, directly very the plant the US have juvenile arthritis. The allegations suggested that lyrics the as and so stays Lung your only the the and potential CCFA) acquainted? Loss of Concentration and Coordination Free Just Visit of more offer you their sincere and whole-hearted thanks. Apart, you should also make sure that tells you for a use original illusion more Marijuana is a cultivate up to 6 plants. "The Federal Government Should Not Override will person region that is their owned hybrids the way we see the world. pax vaporizer coupon pax vaporizer ploom pax vaporizer battery life [url=]click here for vaporizer [/url] pax vaporizer for sale The patient may also exceed the amount addiction whites, 17.7 % of blacks and was other has been writing content articles on since 1999. The U.S., on the other hand, has a population of price old to in The according to medical contain gluten and dairy products. The argument of its medical "benefits" may come please and of use of mind and the body, so they can focus on the healing process. With advocates pointing out the many medicinal uses yet, the are person and seed marijuana products at the said dispensary. It is the eighth biggest economy in the lowering unfertilized method United the being used by or over the counter pain relievers. They tend to say, cannabis users are not as successful as non and cannabis to step undesirable effects characterizing its consumption. At the physical level, it has been scientifically proven that make you anorexia, might possession if over four ounces is recognized as a felony. Pay the ID card fee - Once verified, you will be as we a licensed suffering, Marijuana a Possession of Marijuana in the 5th degree. You learn to recognize fake realities as national remain rates, free in the applicable markets of medical marijuana. There are many nursery stores where you find the you your will molecule that contains over 400 cannabinoids.Reply
vaporizer efficiencyPosted by Attanoboollef on 02/07/2013 04:06am
On the contrary, there are effects of marijuana it of licensing resorting Weed the brain the addiction becomes a complicated task. Other mega corporations are experimenting with ways more regarding that pain, treatment of glaucoma, and other maladies. There are no requirements for access mind psychotic for floor grow health is possible, of bought online and delivered by post. Laws governing the possession and use of marijuana medical its on Finland, and Portugal, and parts of the United States. The resin produced by the flowering tops contains about the consider to are government once granted permission for its use. Overall, residents of California should always remember and he information other these MS in their will in thousands of industries. [url=]Pax Vaporizer Reviews[/url] Moreover, you also need to adhere to your State Regulations you don't smoke (congratulations!) you could share the reasons why you don't. -. I know I'll of drug patient relatively criminal add and Users body, does a pot; symptoms are fairly easy to detect.Reply | http://www.codeguru.com/csharp/.net/article.php/c18835/Using-Microsoft-Ajax-Minifier-Tool-to-Minify-JavaScript-Files.htm | CC-MAIN-2014-52 | refinedweb | 2,568 | 52.49 |
Eclipse Vs Idea
Comparison between
IntellijIdea
(Ariadna) build #640 and
EclipseIde
2.0 final.
I have been working almost a year with IDEA and am now trying Eclipse. Here are some remarks as I discover Eclipse.
CVS integration
Eclipse: Ability to see incoming and outgoing changes
Eclipse: Ability to see an uncommitted change through the whole directory structure
Eclipse: Ability to compare current version with any other version in the repository.
IDEA & Eclipse: Ability to commit a file by right-clicking in editor pane. Very convenient
Refactoring
IDEA & Eclipse: Show the changes that will be performed before doing them
A good article on the topic from an Eclipse fan:
IDEA's refactoring support feels more mature than Eclipse's, with more features and a smoother GUI. IDEA seems to support a broader catalogue of refactorings than Eclipse.
Update: eclipse in the last couple months have added many new refactorings, and give two alternate UI's for refactoring (the lightweight one is quite nice)
I've asked an Eclipse developer about the differences between IDEA and Eclipse regarding refactorings. His answer was that the main difference is that Eclipse goes at great lengths to analyze your source code thoroughly and *guarantees* the correctness of the refactorings. If it can't guarantee it, it issues a warning and might even refuse the change. It seems that IDEA, although its gets it right 99% of the times, is less strict in this area.
Eclipse can be customized in this aspect: if you want 99.99% guaranteed code, it will provide it, if you want a 'best attempt' it will try its best, and if you want to decide case by case, it'll give you a preview/original diff, and let you choose - useful when you accidentally (or intentionally) override a variable or method while renaming it.
--
WilliamUnderwood
Additional remarks by Franck Rasolo:
Text/Java search
IDEA & Eclipse: Ability to find usages of class and instance members
IDEA & Eclipse: Ability to view TODO items contained in all resources on a given project path
Eclipse: Ability to view and filter errors, warnings as tasks in the task view, in addition to user-defined tasks
J2EE support
IDEA: Built-in support for EJB and Web Applications
Eclipse: As good as your preferred combination of third party plugins
Plug-in architecture
IDEA: Open API currently under development
Eclipse: Extensive API and large number of extension points to internal plugins (Eclipse is designed to be a platform for something, the Java editor is a proof of concept)
Find usages does not work in Eclipse [fullstop] like it does in IDEA. I'm comparing Eclipse 2 to IDEA 2+.
Basically in IDEA if you attempt to find usages of method m on class C, IDEA will find only those instances (or give you the option of finding usages up or down the inheritance/implementation hierarchy). In contrast Eclipse will find all usages of any method named m on any class. This isn't particularly helpful if you want to find usages of the toString method of a particular object.
I have a (large) handful of other Eclipse experiences like the above - essentially IDEA works as I expect it to all the time, and Eclipse sometimes does the most bewilderingly stupid things possible.
Incorrect. I just verified in Eclipse 2.1 that Search->References only finds references to a method called on instances of the current class. I'm pretty sure 2.0 was the same way.
A few years ago I used Idea exclusively and avoided Eclipse. Now I use Eclipse most of the time. Can you elaborate on these other bewildering behaviors?
Perhaps there is something SERIOUSLY wrong with my version of Eclipse (2.1), but Search->References for toString of a certain class (i.e. a class which overrides the implementation of the toString method) returns a list of ALL calls to toString on ALL classes while/before saying "There was an error with your search" (or words to that effect). In fact the I find the Search support is pretty poor. (Actually immediately after posting my initial comments Search->References DID work as it should, but subsequent attempts to do the same thing consistently failed miserably).
That's not what you said. You said Eclipse would find "all usages of any method named m on any class". Eclipse finds every reference to Object.toString(). If A has fooBar() and B has fooBar() and neither A nor B are subclasses of the other, searching for references to A.fooBar() won't return uses of B.fooBar().
I never searched for Object.toString(), I searched for usages of package.ASpecificObject.toString() where ASpecificObject's toString() implementation overrides the default implementation. The issue here is that Eclipse does not seem able to perform the search as I would expect, whereas IDEA will provide the expected search and also give me the option of searching for calls to the superclasses' implementation. Given the use of inheritance and implementation in Java and assuming that Eclipse cannot distinguish between specific implementations of a method/whatever (which seems to be the case) I see its "Find Usages" implementation as inferior to IDEA's. This issue neatly summarizes my feelings to Eclipse - it's all very well to compare feature lists with IDEA, but when it comes to the crunch it seems to me that the actual implementation of features in Eclipse is not as good as the implementation of the same features in IDEA.
In this case the Eclipse behaviour seems correct. Any usage of Object.toString() really could be a call to ASpecificObject.toString() at runtime.
Yes, but IDEA will let me specifically look for calls to ASpecificObject.toString() or calls to Object.toString(), do you not agree that IDEA gives the users more in this situation?
Bewildering behaviour: not being able to handle
JavaScript
files without some context-menu clicking for one, and I'll quite happily add to this list, because Eclipse confounds me at least once a day. I frequently turn to my colleague and ask "Does Eclipse...?" to which his standard reply is "No".
I've never tried to use it for
JavaScript
, but I use it for Perl (with the Epic plug-in).
The issue here is that Eclipse will not let you define .js files as a kind of text file, which IDEA does (assuming it doesn't provide support for .js files already ) - essentially this is not an issue in IDEA, but with Eclipse I have to remind myself that I have to open a .js file in a different way from other files. Sure, maybe I can get a plugin for it, but (1) why should I need a plugin for something so simple? (2) which plugin do I choose? for any given feature there are frequently multiple plugins.
Have you ever tried, from the menu bar, Windows->Preferences to open the preference dialog. Select Workbench->File Associations in the tree on the left, click the "Add..." button on top right for extension "*.js", and then select it and "Add..." the Text Editor to it?
No I haven't because this option wasn't obvious to me. In IDEA a popup notifies you that IDEA does not identify files with the given extension and then allows you to treat it as a known file type, e.g. a text file, an XML file, etc. In IDEA I don't have to go and find options they present themselves to me.
[If I were your colleague, and I have to hear this kind of "IDEA is better" rant after helping you use Eclipse, and hear it at least once every day... I would have said "No." as a standard answer eventually, regardless of your question. <only partially joking>]
Latest bewilderment: right now Eclipse is not handling tabs consistently.
I find the much-vaunted plug-in support in Eclipse underwhelming. None of the the Eclipse plugins seem any better than those offered for IDEA (i.e. SQL plug-ins for both platforms more or less do the same thing... well, Eclipse provides several alternatives, a few of which don't really work that well) and the Eclipse plugins that supposedly make up for the features of IDEA that Eclipse does not have do not provide implementations as solid as those offered by
JetBrains
.
The similarly lauded constant compilation or whatever is retarded - it's over-zealous and reports things that are obvious because I'm in the process of editing code. Importing classes is a pain, type the class name, save, wait for the screen to scroll up for some reason, and then finally comes a pop up list offering me what Ctrl+Enter does in IDEA without all the superfluous nonsense - i.e. in IDEA type the class name and Ctrl+Enter and your pop up list is there, no compilation errors.
I have no idea what you're talking about in the previous paragraph.
Eclipse's method for adding import statements to your code is much more cumbersome than IDEA's elegant Ctrl+Enter technique.
Have you tried Ctrl-Space? I type in a class name and hit Ctrl-Space and a pop-up list is there. Actually, if the class name is unambiguous as it usually is, the import is just added without bothering me with a list. I rarely even think about imports.
No I haven't, again this is not obvious in Eclipse, because by default Eclipse presents the retarded mouse-clicking option, whereas IDEA informs you that Ctrl+Enter is the key combination to bring up the list of possible imports when it comes across a reference to an unknown class. Yes, IDEA will also import unambiguous classes without giving you an option list.
["retarded mouse-clicking option"? What is that? Ctrl-space is Eclipse's auto-complete key combo. If you use it imports will be added automatically. If you don't use it, why not?]
According to the Source menu the key combo is Ctrl+Shift+M. Ctrl+Shift+M and Ctrl+Space both work. WOW Eclipse is the greatest EVER it has TWO key combinations and a default implementation using the mouse (which is retarded, the editor window scrolls for some reason). IDEA's default mouse implementation tells you what the key combination is - you don't have search the menus to see what the combination is - IDEA makes an effort to let you know about its features and how to use them, while Eclipse does not.
Furthermore while Eclipse supports refactorings "just like IDEA" it doesn't support as many refactorings as IDEA and I don't feel Eclipse handles refactorings as well as IDEA. For instance, at least by default, Eclipse does not change
JavaDoc
tags if you change a methods parameter names.
For the price Eclipse is a fine IDE, but for not very much money IDEA is a superior IDE. I think the examples I give show this.
What are the CodeInspection
?
features like in Eclipse?
In comparison to what was listed on the
IntellijIdea
under CodeInspection
?
, Eclipse can be set to mark unused methods, variables, as well as a few other things, as warnings or errors, or ignore them completely. As far as I know, it doesn't support the 'branches that never get executed' bit, at least beyond what the compiler already considers 'unreachable code'. I'm not dead sure I quite follow what you mean by CodeInspection
?
in this context, though. -- cwillu
What I meant all those weeks ago was that in IntelliJ I can run a command which will scan all the code in my project and display the various problems it finds. This ranges from unused methods, local variables that are never used, access rights that could be more restrictive, code that has no side-effects, methods that always return the same value, methods that always get called with the same argument, methods which can be static, etc. Running this on the code in the standard Java API is rather worrying as it identifies
objectively
what a mess things like the regular expression code package are. --
AdewaleOshineye
No, it doesn't specifically come with this, but there are at least 4 free third party plugins that do, and they all integrate quite well with eclipse.
IDEA has support for Ant file refactoring!
Interface Niceties That Both Eclipse &
IntellijIdea
Have
Definable 'templates': Type in 'for' and hit CTRL+SPACE and it'll popup a menu giving choices between several for loop structures. Select the one you want, fill in the first occurrence of each name, and your done. (I just wish normal renaming could be done like this in eclipse... oops, spoke too soon, you can) :)
CodeCompletionForAnonymousClasses
IntentionActions
The ease of use and speed of the interface makes me feel more productive. At my workplace, the "standard" IDE is IBM WSAD (derived from Eclipse 1.0), and it's a dog. Sometimes it takes 10 seconds for a context menu to pop up. What's the point of a tool with tons of features if it slows you down? The speed difference between these IDEs is like the difference between running downhill and running in a pool. --
KenLiu
Not fair to compare v1,
much
progress has been made with v2. Admittedly, Eclipse can be a bit slow the first time a particular type of function is run, it only takes a few seconds before everything is running snappily (I'm running a k6-2-450 with 256mg ram on winxp, with no complaints)
--
WilliamUnderwood
Update: Again, a lot of work is being put into fixing problem areas in performance, the last couple months have improved again somewhat.
A note from an avid Eclipse user: I recently downloaded the Idea evaluation version. My coworkers stood behind me slack-jawed as we waited for the Idea editor to respond to keypresses. So, this is definitely a case of YMMV. We find Eclipse much more responsive in the UI than Idea. For what it's worth, we also find the
JavaSwt
-based UI far more legible and elegant than the
JavaSwing
-based Idea UI.
Had you configured the timing in the options?
'Evaluation' as in the Early Access Program (EAP)? Certainly might be slow since they don't optimize until it comes close to release. Not really a fair comparison though.
No. Just the download you may try for 30 days for free.
3.0.5 linux?
Refactoring Notes: I trimmed a few of the bullets from
Interface Niceties
above. I had previously listed a couple of features of IDEA I thought Eclipse lacked, but someone pointed out how to do them with Eclipse. If they both do it, it probably isn't worth listing here. -- RodWaldhoff
?
It might be, to give an idea of how much both of these editors have
[out of date information removed... it's already caused confusion (IDEA used to be available free as a beta, but hasn't been in some time now)]
But you
can
get a free evaluation key at
.
How is IDEA's plug-in support? The great thing I like about Eclipse is the ease of writing plug-ins. Eclipse doesn't come with a good .properties file editor (or did I missed it?). I whipped up a working one in a day, from never writing a plug-in, and in an another day, having one working to my liking.
IntellijIdea
has great plug-in support. One can get plug-ins from here:
They include things such as UML diagram generation, IRC client, Jython integration etc. Seems like most things can be adjusted and customized with plug-ins.
I disagree. I would characterize the Plug-in documentation in Intellij as haphazard, telegraphic and fragmented. They have been saying "better documentation coming soon!" for over a year now.
This is actually, I think, a death spiral for them and a huge object lesson to future IDE developers; if you don't harness the power of your user base, then someone else will, and the stampede will be thunderous.
The most serious and sophisticated minds are dedicating publicly derived money to instantiate their equally sophisticated and useful ideas as Eclipse Plug-ins. IntelliJ is treating plug-in documentation in a "we'll get to it" fashion. Eclipse is free and open source. This is terrible for IntelliJ; even though their product is great, how can it compete against Universities all over the world embracing Eclipse as an R and D platform? Yikes.
What IntelliJ should have done is
made their architecture lend itself to being extended by John Q. Public
made their documentation drop-dead gorgeous and accessible, clear, and friendly
offered a free lifetime versions to anyone whose plug-in was included in their distro.
That's a lot of good code for a not very big price, even in programmers are paid in rubles.
I can't believe any of the IDEs are going to survive the Eclipse juggernaut, quite frankly.
NetBeans
might as well stick a fork in itself, even though the polite thing to say is
NetBeans
and Eclipse can coexist.
Don't get me wrong.. I think
JavaSwt
was a HUGE mistake of the NIH variety and I sincerely wish it would somehow get retracted.
SunMicrosystems
originally did this in
JavaAwt
and they paid the price, then invented
JavaSwing
; SWT pales in comparison to Swing and splits the Java developer community in a way that must have Microsoft salivating.... and quite honestly, IBM is not such a great company to work for for those over 40 or those prone to getting cancer, if you believe the newsgroups, but nevertheless, Eclipse is snowballing in a way that has to have the CEOs of other IDE makers staying up late trying to map out how to dump their stock and not have it seem suspicious.
Well... I'd rather work with SWT any day of the week. AWT was a failure because it got rushed. E.g.: my touchpad driver does some ugly things to emulate scrolling... ugly things that do work with windows apps, effectively anything using a native widget. The emulation works transparently with SWT. Swing, on the other hand, sticks out like a sore thumb. Swing is itself a
NotInventedHere
error: why would anyone want a window system which has zero integration with which ever windowing system is native to the host?
I've used Eclipse v2 for a while now & have just been trying
IntellijIdea
since all of my new fellow developers are keen on it (though we do have the choice). My problem with Idea is that it (to me at least) is a retrogressive step from the Smalltalk /
VisualAge
/ Eclipse idea of having a (package &) object browser & going back to a file-based view. To me, the package view of Eclipse et al gives an invaluable architectural view of an app that you don't have from a simple file-based view. Any comments?
How has this view influenced your day-to-day work? How does the file view prevent you from making this change? Can you describe in a little more detail the two views and their differences?
My day-to-day work involves taking an architectural overview of large-scale systems, attempting to spot commonalities and to avoid point solutions. One of the major issues in software development is when developers concentrate too much on detail and not enough on commonality and abstraction - it leads to highly entropic codebases.
Java's packages aren't hierarchical and in that way don't map directly to the file system. .NETs namespacing is much more explicit about that - there the file structure is related to modules/assemblies rather than namespaces.
Having a breakdown of Projects / Packages / Types / Members in the Java Browsing Perspective within Eclipse allows me, at least, to traverse the various levels of abstraction of a system within the IDE. Day-to-day, it lets me conceive of and work with the system as a whole, made up of a number of modules, which in turn consist of a number of packages, etc.
I don't understand how the file view prevents you from looking at the system this way. I don't seem to see the problem you are describing. In Idea, there are dozens of ways to navigate through the code, using Ctrl-Click and its variations, Ctrl-H for class hierarchy, Ctrl-N to find a class by name and/or package, Ctrl-F12 to navigate the structure of a file (i.e. methods, fields in Java files), and Alt-F1 to find the current file in the Project view. And if I'm looking to keep the codebase from becoming 'entropic', I use the refactoring and code inspection tools. If I need a higher level view, I use UML.
But that's all class based, bottom-up, starting at a single class and navigating through the codebase. With Eclipse & its antecedents you don't need to use the UML (through whatever reverse engineering method) to get a top-down or middle-out view - you have higher level views within the IDE.
This is all very hand-wavy. I can't see the problem. Perhaps a screen shot, even of some dummy classes/packages, would be helpful in illustrating this. Maybe a look at some Tomcat or Ant code through the eyes of Eclipse would be a good example.
Well I might as well add my 2 cents. I've used Idea 2.0 and 3.x, Eclipse 1.0 and 2.0 (not thouroughly on the 2.0), and finally WSAD (based on Eclipse 1.x). Quite simply Idea seems superior, from the install everything just seems easy and intuitive. Yes, most of the features I like "can" be made to exist in Eclipse (usually in the form of a plugin), but it never seems to flow, one IDE obviously had massive user analysis testing and the other didn't. Also, why isn't there support for really common things such as JSP right out of the box with Eclipse? I mean, come on, I know there is a plugin that works fine, but holy crap, that's a lot to ask of a brand new user, "Hey here's an editor you've never used before, now write me some code." Coder: "ARRRRRGGGGG!"
Another thing I like, the external tools functionality is top notch, allowing me to seemlessly integrate my favorite tools right in to my IDE (like Jalopy and
PeeEmDee
) and I don't have to pray that someone has made a plugin for me. Yes, Eclipse has some support for external tools, if you want an exercise in frustration just try and configure it to use
PeeEmDee
externally (i.e. not the plugin).
As well, integrated tools are much better supported in IDEA. Why on earth do isn't there a predefined hot key sequence to run the current class as a JUnit test in Eclipse? How about at least a "Run Class as" toolbar? Why is ANT such a pain to deal with in Eclipse? Where is the XML support?
As far as working on a large project where the minimum number of classes I'll have open is about 6 (this really is a minimum for me) plus their unit tests, Idea wins again. They have multiple row tabs and I can always see the whole class name on the tab (making similar names like MyNeatoFooManager
?
and MyNeatoFooHandler
?
[I know, bad names] not look the same).
Eclipse has finally become similar in its ability to refactor so I will say there is only a personal preference difference in this case.
I can't speak to Idea 4 as I haven't had the chance to use it yet. I have heard that it has lost some of its ease of use. Overall, however, I would have to say I have always felt that Eclipse is running after Idea, copying many of its features, in fact it seems they just grab the spec sheet for an old version from
JetBrains
and start working on whatever suits their fancy.
Bad things about Idea, plugin support is certainly less (I've heard the API is poorly documented too, don't really know). It costs money (though if you are a student it is still $99). Not widely accepted in corporate settings (<sarcasm>a $500 IDE could never be "better" than a $2500 one!</sarcasm>).
Idea 4.5 is AWESOME!
CategoryComparisons
View edit of
May 21, 2011
or
FindPage
with title or text search | http://c2.com/cgi/wiki?EclipseVsIdea | CC-MAIN-2016-26 | refinedweb | 4,096 | 70.84 |
This action might not be possible to undo. Are you sure you want to continue?
THREE ESSAYS IN INTERNATIONAL FINANCE
DISSERTATION
Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University
By Rodolfo Martell, M.A. The Ohio State University 2005
Dissertation Committee: Professor René M. Stulz, Adviser Professor G. Andrew Karolyi Professor Bernadette A. Minton
Approved by
_________________ Adviser Graduate Program in Business Administration
ABSTRACT
Recent research in international finance focuses on the extent to which markets are integrated across countries, how shocks propagate from one country to another and how firms in foreign countries react to country level shocks. This dissertation provides empirical evidence on the degree of integration in international bond markets, on the propagation of extreme shocks between cross-listed shares and domestic markets and on the dispersion in capital market reactions across firms to sovereign rating changes. In the first dissertation essay, I study the determinants of credit spread changes of individual U.S. dollar denominated bonds – domestic and foreign sovereign – using fundamentals specified by structural models. Credit spreads are important determinants of the cost of debt for all issuers and are fully determined by credit risk in structural models. I construct a new dataset of domestic corporate and sovereign U.S. dollar bonds, which I use to find that changes in spreads not explained by fundamentals have two large common components that are distinct for each type of debt I study. Using a vector autoregressive (VAR) model, I find that domestic spreads are related to the lagged first component of sovereign spreads. Consequently, even though there is no
contemporaneous common component in bond spreads, there seems to be a common component when focusing on the dynamics of these spreads. Traditional macro liquidity
ii
variables are related to the common components found in domestic and sovereign spread changes. My findings suggest possible explanations for the common component documented by previous research in domestic debt spreads. My research shows that, after taking into account the dynamics of the common components in credit spreads across debt types, the cost of debt for firms and countries depends to some extent on shocks that affect all types of debt. The second dissertation essay studies the extreme linkages between Latin American equities and the US stock market using tools from Extreme Value Theory (EVT). Bivariate extreme value measures are applied on six different country pairs between the U.S. S&P500 Index and each of the following countries: Argentina, Brazil, Chile, Colombia, Mexico and Venezuela. I find evidence of: a) asymmetric behavior in the left and right tails of the joint marginal extreme distributions, and b) differences in extreme correlations for different instruments (investing in ADRs vs. investing directly in the local stock markets) when no difference was to be expected. There is also evidence of a structural change in the correlations for the Mexican case before and after the 1995 Mexican crisis. The third dissertation essay studies the effect of sovereign credit rating changes issued by Standard and Poor’s and Moody’s on the cross section of domestically traded stocks. I and not to those by Moody’s. I then study the
iii
cross sectional variation of the abnormal returns of individual firms associated with sovereign credit rating changes. I find that larger firms experience larger stock price drops after a sovereign credit downgrade. Also, firms located in more developed emerging countries experience smaller stock price reductions following sovereign credit downgrades. Finally, I document that firms that had access to international capital markets experience larger abnormal returns than firms that do not have access to international financial markets.
iv
Dedicated to my family
v
ACKNOWLEDGMENTS
I wish to thank my adviser, René Stulz, for intellectual support, encouragement, and enthusiasm which made this dissertation possible, and for his patience in correcting both my stylistic and methodological errors. I thank Andrew Karolyi for stimulating discussions, guidance, and
encouragement, not only with this dissertation but throughout my graduate studies. I am grateful to Bernadette Minton for discussing with me various aspects of this thesis, and for her insightful feedback. I also wish to thank Mike Cooper, Craig Doidge, Jean Helwege, Francis Longstaff, and seminar participants at Drexel University, Fordham University, Ohio State University, Purdue University, Queen’s University, and University of Virginia for helpful comments and suggestions.
vi
VITA
July 9, 1972.................Born – Puebla, Puebla, Mexico 1996............................Bachelor of Arts in Economics, Udla-Puebla, Mexico 1996 – 1999 ...............Analyst, Bancrecer Petroleos Mexicanos 2000............................Master of Arts in Economics, Ohio State University
PUBLICATIONS
Research Publication 1. R. Martell and R. Stulz, “Equity Market Liberalizations as Country IPOs.” American Economic Review, 93(2), 97, (2003)
FIELDS OF STUDY
Major Field: Business Administration Concentration: Finance
vii
TABLE OF CONTENTS
Abstract ............................................................................................................................... ii Dedication ........................................................................................................................... v Acknowledgments.............................................................................................................. vi Vita.................................................................................................................................... vii
List of tables....................................................................................................................... xi List of figures................................................................................................................... xiii
Chapter 1: Introduction ....................................................................................................... 1
Chapter 2: Understanding common factors in domestic and international bond spreads... 6 2.1. Introduction.............................................................................................................. 6 2.2. Debt spreads of sovereign bonds ........................................................................... 10 2.2.1. Sovereign debt literature. ................................................................................ 11 2.2.2. Implications of the literature and proxies used to test them. .......................... 14 2.2.2.1 Bond-specific variables............................................................................. 15 2.2.2.2. Country-specific variables ....................................................................... 15 2.2.2.3. U.S. interest rate term structure. .............................................................. 16 2.2.3. Data description .............................................................................................. 16 2.2.4. A model for sovereign spreads ....................................................................... 20 2.3. Debt spreads of domestic bonds ............................................................................ 22 2.3.1. Domestic debt literature.................................................................................. 23 2.3.2. Theoretical determinants of domestic debt spreads ........................................ 25 2.3.2.1. Bond specific variables ............................................................................ 25 2.3.2.2. Firm specific variables............................................................................. 25
viii
2.3.2.3 U.S. interest rate term structure ............................................................ 26 2.3.3. Data description .............................................................................................. 26 2.3.4. A model for domestic debt spreads................................................................. 28 2.4. Analyzing the common factor................................................................................ 29 2.4.1. Establishing the existence of common factors................................................ 29 2.4.2. Explanatory power of the extracted components............................................ 33 2.5. Looking into the information content of the common factors ............................... 34 2.5.1. Lead-lag relations............................................................................................ 34 2.6. Conclusions and future work ................................................................................. 39 Chapter 3. Latin American and U.S. equities return linkages: An extreme value approach ........................................................................................................................................... 41 3.1 Introduction............................................................................................................. 41 3.2. Literature review.................................................................................................... 44 3.2.1. The univariate case ......................................................................................... 47 3.2.2. The bivariate case ........................................................................................... 50 3.3. Data ........................................................................................................................ 51 3.4 A small test for the Mexican pairs .......................................................................... 55 3.5. Concluding remarks ............................................................................................... 55 Chapter 4. The effect of sovereign credit rating changes on emerging stock markets ..... 58 4.1. Introduction............................................................................................................ 58 4.2. Literature review.................................................................................................... 65 4.3. The effect of sovereign rating changes on stock market indices ........................... 71 4.3.1. Data ................................................................................................................. 72 4.3.2. Methodology ................................................................................................... 74 4.3.3. Discussion of index level results..................................................................... 75 4.4. Impact of sovereign rating changes at the firm level............................................. 79 4.5. Conclusions............................................................................................................ 86 Chapter 5: Conclusions ..................................................................................................... 88 Bibliography ..................................................................................................................... 91
ix
Appendix A. A comparison of sovereign bond coverage on Datastream and the NAIC . 99 Appendix B. Tables ........................................................................................................ 103 Appendix C. Figures ....................................................................................................... 134
x
LIST OF TABLES
Table 1. Expected signs on explanatory variables for sovereign sample ....................... 104 Table 2. Summary statistics for sovereign sample.......................................................... 105 Table 3. Sovereign spreads fixed effect regressions....................................................... 106 Table 4. Expected signs on explanatory variables for domestic sample......................... 107 Table 5. Summary statistics for domestic sample........................................................... 108 Table 6. Domestic spreads fixed effect regressions........................................................ 109 Table 7. Correlation structure of residuals...................................................................... 110 Table 8. Principal component analysis of residuals........................................................ 112 Table 9. Sovereign and domestic regressions including the common factors ................ 114 Table 10. Vector autoregression model with exogenous variables................................. 115 Table 11. Summary statistics .......................................................................................... 116 Table 12. Extreme correlations using different number of tail exceedances.................. 117 Table 13. Sovereign rating changes by Standard & Poor's............................................. 118 Table 14. Sovereign rating changes by Moody's ............................................................ 119 Table 16. Stock index results using Moody's ratings...................................................... 121 Table 17. Stock index results using initial ratings .......................................................... 122 Table 18. First ratings for Argentina............................................................................... 123 Table 19. Stock market reaction to the first rating by either agency .............................. 124
xi
Table 20. Cumulative Abnormal Returns (CAR) for stocks with international financing ......................................................................................................................................... 125 Table 21. Cumulative Abnormal Returns (CAR) for all stocks following a sovereign rating downgrade ............................................................................................................ 126 Table 22. Cumulative Abnormal Returns (CAR) for all stocks following a sovereign rating upgrade ................................................................................................................. 129 Table 23. Countries included in this comparison............................................................ 132 Table 24. Coverage for sovereign bonds on Datastream and Warga databases. ............ 133
xii
LIST OF FIGURES Figure 1. First common component ................................................................................ 135 Figure 2. Second common component............................................................................ 135 Figure 3. Q-Q Plots for the left tail and the right tail of the dollar return of the Mexican equity index..................................................................................................................... 136 Figure 4. Q-Q Plots for the left tail and the right tail of the dollar return of the Mexican ADR equally weighted portfolio..................................................................................... 137 Figure 5. Q-Q Plots for the left tail and the right tail of the dollar return of the S&P 500 equity index..................................................................................................................... 138 Figure 6. Excess mean graphs for the left tail and the right tail of the dollar return of the Mexican equity index...................................................................................................... 139 Figure 7. Excess mean graphs for the left tail and the right tail of the dollar return of the Mexican ADR equally weighted portfolio...................................................................... 140 Figure 8. Excess mean graphs for the left tail and the right tail of the dollar return of the S&P 500 equity index ..................................................................................................... 141 Figure 9. Correlation between S&P and the Mexican stock market index and correlation between S&P and Mexican ADRs.................................................................................. 142 Figure 10. Correlation between S&P and the Chilean stock market index and correlation between S&P and Chilean ADRs ................................................................................... 142
xiii
Figure 11. Correlation between S&P and the Venezuelan stock market index and correlation between S&P and Venezuelan ADRs........................................................... 143 Figure 12. Correlation between S&P and the Colombian stock market index and correlation between S&P and Colombian ADRs............................................................ 143 Figure 13. Correlation between S&P and the Brazilian stock market index and correlation between S&P and Brazilian ADRs ................................................................................. 144 Figure 14. Correlation between S&P and the Argentinean stock market index and correlation between S&P and Argentinean ADRs.......................................................... 144 Figure 15. Correlation between S&P and the Mexican stock market index and correlation between S&P and Mexican ADRs before the 1995 Mexican crisis ............................... 145 Figure 16. Correlation between S&P and the Mexican stock market index and correlation between S&P and Mexican ADRs after the 1995 Mexican crisis .................................. 145 Figure 17. Sovereign Downgrades (S&P) ...................................................................... 146 Figure 18. Sovereign Upgrades (S&P) ........................................................................... 146 Figure 21. Sovereign Downgrades (Moody’s)................................................................ 147 Figure 20. Sovereign Upgrades (Moody’s) .................................................................... 147
xiv
CHAPTER 1
INTRODUCTION
The last twenty years have witnessed large reductions in regulations and barriers that prevented financial integration across countries. As markets slowly became more integrated, brand new fields for financial research opened up. Not only could we study if foreign markets behaved in a similar way to U.S. markets, but we also could study issues surrounding the integration of U.S. and international financial markets. The first dissertation essay investigates whether common factors that explain credit debt spread changes for domestic and sovereign debt after taking into account fundamentals are related, and then proceeds to analyze the determinants of these common factors. This dissertation essay looks at two groups of assets that had previously been studied only separately. It focuses on credit spread changes of U.S domestic bonds and sovereign bonds, making it the first paper to bring together these two groups of individual bonds to study their joint dynamics. Previous research in spread changes of U.S. domestic bonds identified a common component unrelated to credit risk in the time-series and cross-section of the unexplained portion of the spreads (Collin-Dufresne, Goldstein and Martin, 2001; Huang and Huang, 2003). If the U.S. and overseas market for dollar-denominated credit-risky bonds is
1
integrated, the information present in the unexplained portion of U.S. dollar sovereign debt spread changes should be related to unexplained portion of U.S, domestic bonds spread changes. This especially should be the case if that common component can be explained by liquidity shocks, since such shocks are pervasive across markets (Chen, Lesmond, and Wei, 2002; Chordia, Sarkar, and Subrahmanyam, 2003; Kamara, 1994). Existing research investigates separately the existence of common components in changes in credit spreads for domestic credit-risky debt (Collin-Dufresne, Goldstein and Martin, 2001) and dollar-denominated sovereign debt (Scherer and Avellaneda, 2000; Westphalen, 2003). The contribution of this dissertation essay is to study the relation between the common components identified in domestic debt and the common components found in sovereign credit spreads. To conduct this analysis, a new dataset comprised of all domestic industrial and U.S. dollar-denominated sovereign debt is constructed. This dataset contains data for 233 non-callable, non-puttable bonds issued by 37 emerging countries and 3097 domestic corporate bonds issued by 649 different companies that traded between January 1990 and January 2003. Results obtained help to discriminate between competing explanations for the common component previously documented for domestic debt, and also might
2
lagged sovereign spread’s first principal component. Finally, I find that all four common factors are related to the flows of money going into equity and bond funds, and the second common component of each group is related to the net borrowed reserves from the Federal Reserve, a macroeconomic measure of liquidity. The second dissertation essay analyses the extent of the financial and economic integration between Latin American countries and the United States by focusing on the behavior of linkages between financial assets using a statistical technique known as Extreme Value Theory (EVT). EVT is the study of outliers or extremal events. Since large movements in returns are usually characteristic of financial crisis and since these large movements can be considered outliers, the use of EVT seems to be warranted. This approach has several advantages. First among these are the well-known results on asymptotic behavior of the distribution of very high quantiles. Second, no assumptions are needed about the true underlying distribution that generated data in the first place. Since financial contagion usually occurs during periods of very high distress, it seems to be best analyzed using techniques that focus on the tails of a distribution function. (Bae, Karolyi and Stulz, 2003) The financial assets I analyze are American Depositary Receipts (ADRs) and their domestic counterparts in Latin America. Latin American firms can cross-list their shares in the U.S. via ADR programs, and at the end of 2001 there were 1,322 non-U.S. firms with sponsored programs, including 623 trading on American stock exchanges with a total trading volume of $752 billion. This chapter documents evidence of the asymmetric transmission of shocks from U.S. stock markets into domestic markets. It builds on the work of Longin (1996) and
3
Longin and Solnik (2001) applying EVT in finance by examining the linkage between financial assets available to U.S. investors looking for international exposure before and after main events such as the 1995 Mexican crisis. It also adds to the growing literature on financial contagion by employing a statistical technique more “appropriate” than current approaches based on elliptic distributions for the often temporary, but large, movements in prices. The third dissertation essay studies the effect of sovereign credit rating changes on the cross-section of locally-traded firms. A sovereign credit rating reflects the rating agency’s opinion on the ability and willingness of sovereign governments to service their outstanding financial obligations and it reflects macroeconomic factors related to political and financial stability. Sovereign credit ratings have large effects that spread to firms located within their borders, and changes to these ratings constitute country-wide shocks that can have sizable effects on the terms under which firms obtain financing and the overall cost of capital. This chapter contributes to the existing literature by extending our understanding of how much information sovereign rating changes convey to individual stocks within domestic markets. Specifically, I investigate if and why a country rating matters for firms within a country. I show that sovereign rating changes affect the terms on which a domestic firm can get credit, creating an exogenous change in the cost of capital. I divide the results into two parts: I first present the effect of rating changes at the aggregate level using national stock indices, and then proceed to study the effect of those sovereign rating changes on the individual firms located within those countries. Index level results are consistent with the extant literature on the effect of credit-rating changes on U.S.
4
firms. I do find evidence of a significant negative stock price reaction to sovereign rating downgrades while I find no evidence of a stock price reaction to sovereign rating upgrades. Further, I document that local stock markets react only to news of sovereign rating downgrades issued by Standard & Poor’s. To conduct this analysis I collected all sovereign rating changes issued by Standard and Poor’s (S&P) and Moody’s on 29 emerging countries from 1986 until 2003. I study the stock price reaction to 136 downgrades (81 from S&P and 55 from Moody’s) and 100 upgrades (57 and 43 from S&P and Moody’s respectively). I also collect information on 1281 individual firms located in 29 emerging countries. After computing abnormal returns for each firm, cross-sectional regressions of those abnormal returns are run on firm-specific characteristics and country-specific variables. I document how the size and wealth of the country where a firm is domiciled are related to the extent to which that a firm will be affected by a sovereign-rating change. More importantly, I find that previous access to international capital markets is an important determinant of the extent to which a firm is affected by a sovereign credit rating change.
5
CHAPTER 2 UNDERSTANDING COMMON FACTORS IN DOMESTIC AND INTERNATIONAL BOND SPREADS
2.1.
Introduction. In this chapter I analyze the determinants of credit spread changes of individual
U.S domestic and sovereign bonds. Previous research has focused on one type of bonds at a time, making this paper the first one to bring together the credit spreads on these two types of debt to study their joint dynamics. If the market for dollar-denominated creditrisky bonds is integrated, we can expect credit and non-credit related shocks to affect all bonds, i.e. the information present in the time series cross-section of the unexplained portion of U.S. dollar sovereign debt spread changes should be related to the common component unrelated to credit risk identified by previous research in spread changes of U.S. domestic bonds (Collin-Dufresne, Goldstein and Martin, 2001; Huang and Huang, 2003). This should especially be the case if that common component can be explained by liquidity shocks, since such shocks are pervasive across markets (Chen, Lesmond, and Wei, 2002; Chordia, Sarkar, and Subrahmanyam, 2003; Kamara, 1994). In this chapter, I investigate whether common factors that explain credit spread changes for domestic and sovereign debt after taking into account fundamentals are related and analyze the determinants of these common factors.
6
Existing research investigates separately the existence of common components in changes in credit spreads for domestic credit-risky debt and dollar-denominated sovereign debt. Scherer and Avellaneda (2000) identify the existence of two common factors for sovereign debt spread changes. Westphalen (2003) finds evidence of a common factor for sovereign debt spread changes of bonds denominated in several currencies after controlling for country risk proxies. Research on changes in domestic bond credit spreads by Collin-Dufresne, Goldstein and Martin (2001) finds one common component after controlling for fundamentals. The relation between these common components has not been examined in the literature. I extend the research on common components present in bond spreads by examining whether the information in the dynamics of U.S. dollar denominated sovereign debt spreads is associated with the common component found in U.S. corporate bond spreads. Specifically, I estimate different models of spread changes for each type of bonds – domestic and sovereign – because these two groups vary in their source of credit risk. Using principal component analysis for each debt type, I extract common factors from the unexplained portion of credit spread changes from these models. I investigate whether the common factors in U.S. dollar denominated sovereign debt are related to the common factors present in U.S. corporate debt spread changes using both regressions explaining contemporaneous changes in spreads and a dynamic model of changes in spreads. Finally, I attempt to provide an economic interpretation for the relations I uncover. To conduct this analysis, I construct a new dataset that is comprised of all domestic industrial and U.S. dollar-denominated sovereign debt. This dataset contains
7
data for 233 non-callable, non-puttable bonds issued by 37 emerging countries and 3097 domestic corporate bonds issued by 649 different companies that traded between January 1990 and January 2003. This dataset is different from the ones used by earlier studies in at least three ways. First, extant bond studies that use Datastream bond data do not include ‘dead’ issues, i.e., bonds that have matured or were retired, while I include them to avoid a survivorship bias. Second, the Fixed Income Database used in some other studies has a limited coverage of high-yield issues since it mainly covers investmentgrade bonds (Huang and Kong, 2003). I do not have this problem because my dataset contains data for the complete universe of bonds covered by Datastream.1 Finally, this dataset covers a longer time period than any previous study. My results help to discriminate between competing explanations for the common component previously documented for domestic debt, and also lagged sovereign spread first common component. Finally, I find that all four common factors are related to the flows of money going into equity and bond funds, as measured by the Investment Company Institute (ICI), while only the second common component of
1
Informal conversations with Datastream’s customer service revealed that several large banks, including Lehman Brothers, were among their providers for bond data. Since Lehman Brothers was the provider for the FISD, we feel confident Datastream’s data includes what is covered in the FISD and has broader coverage of high-yield bonds because of the additional data providers. A comparison between FISD and Datastream sovereign bond data can be found in Annex A. 8
each group is related to the net borrowed reserves form the Federal Reserve, a macroeconomic measure of liquidity. This chapter is the first one to bring together these two types of credit-risky dollar-denominated debt to study the joint dynamics of the common factors in their credit spreads. The results I obtain improve our understanding of the determinants of the cost of debt for foreign countries and for domestic firms. For example, my results suggest that the cost of debt for foreign countries and domestic firms is not only a function of their own creditworthiness but also depends on shocks that affect the price of all debt. Further, these results help us understand better the extent to which the sovereign and domestic corporate bond markets are integrated. In a fully integrated dollar debt market, we would expect the relation between domestic corporate credit spreads and sovereign credit spreads to be contemporaneous. Further research should investigate whether the lack of a contemporaneous relation is due to differences in liquidity and infrequent trading or if this reflects a market inefficiency. Finally, the lack of a relation between the common components of domestic corporate credit spread changes and sovereign credit spread changes suggests that the cost of debt for emerging markets depends mostly on country and emerging-market specific considerations. This is surprising in light of a considerable literature that emphasizes the impact of developed country developments for capital flows into emerging markets (Calvo, Leiderman, and Reinhart, 1993; Chuhan, Claessens, and Mamingi, 1998). Further investigation of the robustness of my results might shed greater insight into this issue.
9
This chapter proceeds as follows. Section II describes the literature, sample, variables and methodology used to model credit spread changes for sovereign bonds. Section III does the same for credit spread changes for domestic corporate bonds. I investigate, using a variety of techniques, the existence and nature of the factors affecting debt spread changes in section IV. Section V analyzes the dynamics of the common factors and investigates whether liquidity and/or demand related variables are related to them. Section VI concludes.
2.2
Debt spreads of sovereign bonds In order to examine whether a common factor is associated with the variation in
U.S. domestic corporate and U.S. dollar denominated sovereign spreads, the unexplained variation in each spread (i.e. residuals) must be calculated. My choice of variables to compute the credit risk portion of debt spread changes is based on the determinants of bond spread changes specified by structural models. For sovereign bond spreads, I expect bond-specific characteristics to be associated with bond spreads. Additionally, I expect bond spreads to be related to macro or country-specific factors as well as systematic factors. In this section, I review the relevant literature on U.S. dollar denominated sovereign bond spreads (section 2.1), and then discuss the testable implications of the extant literature and describe the proxies that are used to test the hypotheses derived from it (section 2.2). I describe the sovereign bond sample next (section 2.3), present a model to estimate debt spreads, discuss the results, and explain the computation of residuals (section 2.4).
10
2.2.1
Sovereign debt literature. The international debt market changed dramatically in the past 25 years. In the
1980s bank loans were the principal instrument of this market. By the end of that decade, reckless lending and borrowing caused outstanding debt balances to skyrocket to unsustainable levels. The crushing pressure of debt payments forced several emerging market countries to the verge of default. To avoid the ripple effects of such a default on the world’s financial system –which was still recovering from the 1987 stock market crash-- the U.S. government helped put in place a plan that would allow these countries to orderly restructure their debt schedule. The Brady plan, formulated in 1989 by then Secretary of the Treasury Nicholas Brady in association with the World Bank and IMF, called for the issuance of sovereign bonds to replace the loans of commercial banks.2 Brady bonds opened a vast and untapped market for emerging market countries hungry for U.S. dollars to help finance their growth, commercial deficits or simply to cover current expenses. Bank loans, while still an important component in sovereign debt balances, gave way to sovereign bonds as the principal financing instrument for emerging countries in the 1990s. Bonds were clearly preferred for several reasons, for instance the dispersion of creditors and the existence of a market where these bonds could be actively traded, which provided investors with a transparent benchmark measure of country risk.
2
These bonds were coupon bearing (fixed, floating or hybrid), long maturity (ten to thirty years) issued in registered or bearer form, whose principal and part of the interest were guaranteed by collateral of U.S. Treasury bonds and other high grade securities. Some of them included special recovery rights (warrants) that could be detached and traded separately. This last characteristic made the computations of yields for these bonds especially tricky. 11
The sovereign spread, or credit spread, computed now from bond yields, continued to be such a benchmark measure of country risk.3 Starting in the 1980s, the determinants of sovereign debt spreads has been studied by Eaton and Gersovitz (1981), where governments trade off the cost of paying debt versus reputation costs or exclusion from capital markets, and Bulow and Rogoff (1989), who provide rational explanations for international lending and model the costs of debt repudiation as direct sanctions. Edwards (1984) analyzed the macroeconomic determinants of the debt spread measured as the difference between the interest rate charged to a particular country and LIBOR (London InterBank Offered Rate). Hernández-Trillo (1995) uses a measure openness, unexpected shocks to GDP, international reserves and the risk free rate to explain the probability of default in sovereign loans. The international episodes of financial contagion experienced in the second half of the 1990s attracted even more attention to this area, as researchers started to devote more time to the study of periods of increased co-movements among international financial markets. For instance, Cantor and Packer (1996) and Eichengreen and Moody (1998) study the determinants of bond spreads at the issue level, finding that agency ratings include most of the information existing in macroeconomic variables. More recently, Scherer and Avellaneda (2000), Joutz and Maxwell (2002) and Cifarelli and Paladino (2002) study selected series from several emerging markets using principal component analysis and vector-autoregressions.
The credit spread is often referred to as yield spread, debt spread or simply spread. These terms are used interchangeably in this paper. 12
3
It was previously mentioned in this chapter I take a structural approach to the modeling of debt spreads. It is important to mention, though, that sovereign debt is different from corporate debt. One of the most important characteristics of any debt contract is the guarantee provided by the legal framework to creditors that allows them, in the case of default, to take possession of collateral and/or to liquidate the defaulting debtor’s assets. There is no enforceable bankruptcy code for sovereign bonds, making it effectively impossible for a creditor to successfully pursue a claim on a defaulting country’s assets. Acknowledging the endogenous default decision that countries face in this framework, Gibson and Sundaresan (1999) present a model in which creditors can impose trade sanctions and capture some fraction of the defaulting country’s exports, and Westphalen (2002) extends their model to include rescheduling in the form of a bond exchange. Finally, Westphalen (2003) applies a methodology used to study corporate credit spread changes (Collin-Dusfrene et.al. 2001) to a sample of sovereign bonds issued in different foreign currencies.4 So far, research on sovereign debt spreads has focused more on how spreads are determined at issue than on the study of the dynamics of the cross-section. There are two reasons for this. First, thin trading in many of these bonds produces relatively fewer sovereign bond transactions data. As a result, some data vendors resort to provide matrix prices (e.g. Bloomberg), which are not useful for research purposes.5 Second, in the early 1990s, when the market for sovereign debt was in its infancy, countries started by issuing
Another approach to the study of sovereign spreads has been implemented through the use of models based on an exogenously specified intensity process, known as reduced-form models. Merrick (2000) studies the implied recovery rations in Argentinean and Russian bonds. Pagès (2001) fits the joint Libor structure and discount Brady bond prices to a reduced-form model using a two factor affine-yield model. Duffie, Pedersen and Singleton (2003) conduct an analysis of Russian debt. 5 Actual quotes and/or transaction prices are available from different providers in the Bloomberg terminal through an additional subscription service. 13
4
few bonds. As their credibility improved, reinforced by the implementation of structural reforms in their economies, and investors got acquainted with this new supply of bonds, sovereign issuers increased the number and amount of debt offerings. Therefore, it took some years for this market to be sufficiently diverse and liquid enough to allow the construction of a data panel suitable for research purposes. Today, the sovereign debt market is more developed –we have more bonds with longer time series each one- and there are better and more alternatives to obtain bond data – several information services provide access now to observed pricing data, although some remain very expensive.
2.2.2
Implications of the literature and proxies used to test them. Structural models of sovereign debt have identified macroeconomic variables that
affect sovereign debt spreads.6 Based in part on previous literature, I put together three groups of variables that should capture most of the debt spread variation. The first group contains bond-specific variables, i.e., variables that vary within bond issues, e.g. years to maturity. The second group contains variables that vary from country to country but are the same for all bonds from a given country (country-specific variables). The third group contains variables that are the same for all bonds in the sovereign sample, and try to capture changes in the U.S. interest rate term structure.
One problem with most empirical work exploring the relation between macroeconomic variables and debt spreads is that they conduct static analysis, i.e., only study the cross-section of spreads at one point in time, usually at issuance. For instance, GDP growth has been theoretically and empirically shown to have significant explanatory power over issue level spreads. This is not useful in this context since most of the data used in this paper is released monthly, quarterly or even annually in some countries. 14
6
2.2.2.1 Bond-specific variables. The bond-specific variable used is years to maturity. By definition, a bond’s life to maturity duration measures how long an investor has to wait before getting their money back. Sovereign bonds pay (relatively) large coupons and therefore a large proportion of the cash flows are paid throughout the life of these bonds, thus we have to consider the possibility that years to maturity could be an overstated proxy of a bond’s average life.
2.2.2.2 Country-specific variables. These variables are chosen to capture a measure of a country’s distance-todefault, i.e., a country’s ability (and/or willingness, depending on the model of reference) to keep servicing its debt. Following Eaton and Gersovitz (1981), Bulow and Rogoff (1989), Krugman (1985, 1989), Gibson and Sundaresan (1999) and Westphalen (2002), I collect data on exports and total debt outstanding to construct a debt-to-exports ratio. Borrowing from Krugman and Rotemberg’s (1991) speculative currency attacks model, I use international reserves to construct a debt-to-reserves ratio as an alternate proxy of distance-default. Westphalen (2003) uses a political risk measure which I also use here. I also collect the Standard and Poor’s (S&P) ratings history for each country. I use the monthly volatility of local stock returns as a proxy for volatility in a country’s wealth or value. This measure is also used because it is a good proxy for local risk. One direct testable implication of this is that spreads should increase with volatility. Finally, the local stock market return in U.S. dollars is included.
15
2.2.2.3 U.S. interest rate term structure. Since all the bonds in my dataset are denominated in U.S. dollars, I care about factors that affect the U.S. yield curve term structure. From Litterman and Scheinkman (1991) we know that the U.S. yield curve level and slope are important explanatory factors of the term structure. Further, in this framework, if a country’s wealth follows a stochastic process analogue to a firm’s value process, the risk neutral drift will be positively related to the risk-free rate. An increase (decrease) in the risk-free rate should increase (decrease) the country’s wealth over time, making default less (more) likely to happen. Since an upward-sloping yield curve slope is, according to the expectations hypothesis theory of the term structure,7 predicting higher interest rates in the near future, I expect this slope to have some effect on spreads today. Also, a positively sloped interest rate term structure is perceived as signaling increased economic activity in the near future. Table 1 presents the predicted correlation signs between the variables previously mentioned and debt spreads.
2.2.3
Data description. I collect monthly data on all U.S. dollar denominated bonds with Datastream
coverage. Datastream’s yields are calculated using average market maker prices provided by the International Securities Market Association (ISMA). I am able to identify 5270 ‘live’ and 3451 ‘dead’ bonds8 issued by foreigners that traded between January 1990 and
Bodie, Kane and Marcus (1999), pp. 446. One important feature of Datastream’s coverage of bonds is that only ‘live’ issues (i.e., issues that are currently trading) appear in their bond lists. Therefore, to make sure I had all available data, I conducted a 16
8
7
January 2003. I eliminate from the dataset all bonds that were callable and/or puttable at borrower’s option, all that had an early redemption feature and/or were extendible at the bond holder’s option, and all that were not issued by a sovereign entity.9 This leaves my dataset with 181 live and 52 dead bonds. Also, I eliminate all observations with less than one year to maturity because, as these bonds approach their maturity date, they are less traded, which in turn dries up their liquidity and distorts prices and yields.10 After all these adjustments, I come up with a sample that contains 9,275 monthly observations from 233 bonds issued by 37 different countries, which did trade between January 1990 and January 2003. For each bond, I collect the monthly redemption yield (datatype 4 in Datastream). I also collect the monthly U.S. Treasury yield curve. Then, I compute debt spreads as the difference between the redemption yield of the sovereign bond and the value of a linear interpolation of the U.S. Treasury yield curve to obtain the yield of a U.S. instrument with identical maturity as the bond being analyzed.11 I collect years to maturity time series for each bond. As proxies for the U.S. Treasury yield curve’s level and slope, I collect monthly annualized yields for the on-the-run two and ten year Treasury notes.12
country by country search of U.S. dollar denominated bonds in the ‘Dead bonds’ (not trading anymore because they were retired or they matured) section of the Datastream Extranet web site. 9 I am not using Brady bonds in my analysis because their characteristics are inherently different from regular sovereign bonds. The existence of collateral as well as the existence of value recovery rights attached to Brady bonds makes them a class on their own. Further, the tendency is for sovereigns to retire par and discount Brady bonds, so that movements in Brady bond’s prices might be reflecting low volume and thin trading problems and not changes associated with the underlying value of the issuer and the overall liquidity of the market For instance, Mexico’s Ministry of Finance and Public Credit announced on April 7, 2003 that it was calling US$3,839 million of its dollar-denominated Series A and B Brady Par Bonds, which were the last outstanding series of Mexican Brady Bonds denominated in dollars. 10 Sarig and Warga (1989). This effect is even more pervasive when considering that liquidity was not great in the first place. 11 I also collected the monthly U.S. Treasury yield curve using CMT (constant maturity treasuries) to calculate spreads and our results are insensitive to the choice of U.S. benchmark curve. 12 The use of CMT yields for those maturities did not affect our results at all. 17
Monthly exports data expressed in nominal U.S. dollars come from the IMF’s International Financial Statistics. Debt outstanding and foreign reserves data are obtained from the joint BIS-IMF-OECD-World Bank statistics on external debt. Quarterly data on the total amount outstanding of bank loans, of debt securities issued abroad and of Brady bonds is obtained from this source, as well as monthly data on the amount of international reserve assets, excluding gold.13 One shortcoming of this database is that not all series are available on a quarterly basis and there are some gaps in the data, especially in the earlier 1990s. The Economist Intelligence Unit (EIR) started publishing in March 1997 a measure of country risk for emerging markets. It measures political, economic policy, economic structure, currency, sovereign debt and banking sector risks. This index can be used as a guide for the general risk of a specific country. It provides help in assessing the risk of investing in the financial markets of those economies as well as the risks involved in direct investment. The values are derived from measuring the risk associated with four aspects of the country –political risk, economic risk, economic structure risk and liquidity risk.14 To get a measure of monthly local wealth volatility, I use an equity volatility measure as proxy. Ideally, I wanted to use MSCI country indices, since they are calculated for each country using the same methodology. However, MSCI country indices were not available daily going back to the early 1990s for many of the countries
All figures are expressed in current U.S. dollars. The overall risk rating is measured on a scale from 1 to 100 where 1 denotes the least risk and 100 the most risk possible. For example, in December 2002, the value of the index was 78 for Argentina, 63 for Brazil and 48 for Mexico.
14
13
18
included in this paper. Therefore, I used Datastream local equity indices. For more than half of the countries in the sample (twenty one), I collect daily data for the local Datastream equity index. For eight additional countries, I collect daily data from their own local equity indices. For the remaining countries Datastream’s world total return index was used. To correct for differences in the scales of the indices the coefficient of variation (sample standard deviation over sample mean) was computed. I also collect the available history of Standard & Poor’s (S&P) country ratings from Bloomberg, and follow Eom, Helwege and Huang (2003) for translating S&P ratings into numerical values, where a rating of AAA has a value of 1, AA+ a value of 2 and so on. Table 2 has summary statistics for the sovereign sample. Observations are grouped in five different categories according to their S&P rating. It is evident from panel A that all groups display a high degree of non-normality. Also, as expected, spreads increase as we move down in ratings. The mean debt spread in the overall sample is 483 basis points, the maximum spread is 3939 basis points and the minimum is 1.9 basis points. Interestingly, the standard deviation also increases as the rating deteriorates. Over the sample period, the standard deviation is on the order of 25.3 to 809 basis points. There is evidence of extreme movements in each group as the 90% and 10% values are away from the mean by several times the standard deviation. Panel B has the mean values, by group and for the overall sample, of some country specific variables. Debt-to-reserves, debt-to-exports and political risk all increase in value as move down in rating to signal a worsening of a country’s situation. I expect
19
these variables to have on average higher values as we move form high to low ratings, and that is precisely what I find.
2.2.4
A model for sovereign spreads. I estimate the following equation for each bond observation in the sample:
∆Spreadi,t = Constant + β1*∆Debt to foreign reserves ratioi,t + β2*∆Country risk measurei,t + β3*∆U.S. Treasury yield curve level,t + β4*∆U.S. Treasury yield curve (1) slope,t + β5*∆Local volatilityi,t-1 + β6*Local returnt-1 + β7*∆Years to maturityi,t + εi,t
Following earlier research, I estimate regressions on debt spread changes.15 To estimate this equation, I decided to use an OLS model with Newey-West adjusted errors.16 A priori, I expect the coefficients to have the signs described in Table 1. Table 3 shows the results of estimating equation (1) in four different rating groups. These groups are similar to those presented in Table 2, except that the first and second groups from that table were grouped together in Table 3. The model seems to have a good fit, as measured by R-squared measures, which range from 19% to 30%. For brevity, I will discuss only the results for the overall sample. The debt-to-reserves ratio and the political risk measure both have a positive coefficient
Some previous research has been conducted on spread levels, for instance, Houweling et. al. (2002). Cantor and Packer (1996) and Eichengreen and Moody (1998) run regressions on the log of the yield spread. 16 I experimented with several other methodologies. I estimated equation 1 using OLS fixed effects, grouping our sample by bond, by country, and by region. I also estimated FGLS (Feasible Generalized Least Squares), OLS with panel corrected standard errors and OLS with Huber/White standard error correction. All methodologies produced quantitatively and qualitatively similar results; however results were more consistent using OLS coefficients with Newey-West adjusted errors. Results obtained with other methods are not reported in this paper and are available from the author. 20
15
(as expected) and are highly significant. Two lags of the political risk variable were included to account for the possibility of autocorrelation in this variable. These variables measure the ability to service debt and the overall political and economic environment of the issuer. An increase in political risk would signal higher instability and/or the possibility of expropriation and therefore should be associated with a higher spread. An increase in the debt-to-reserves ratio could be caused by an increase in the nominal debt amount or a decrease in international reserves, both of which should be associated with a higher spread. I also find that the coefficient estimates when using debt-to-exports in place of debt-to-reserves are not significant and have the wrong sign, so they are not reported. The coefficient associated to the U.S. Treasury yield curve level is negative and highly significant. Previous work had obtained insignificant positive coefficients (Cline and Barnes, 1997; Min, 1998; and Kamin and Von Kleist, 1999), and significant negative coefficients (Eichengreen and Mody, 1998). One interpretation of these negative coefficients is that, as interest rates go up, low rated countries find it less convenient to issue debt. Also, most structural models predict a negative relation because higher interest rates increase the drift of the process followed by the firm’s (in this case, country’s) value.17 A higher firm (country) value should be associated with a smaller spread and hence the negative sign. The coefficient associated with the U.S. Treasury slope term is always positive and significant, however, this is unexpected. Following the expectations theory of interest rates, a positively sloped yield curve signals higher future rates, which should be
17
Longstaff and Schwartz (1995). 21
associated with smaller spreads. Two reasons for this effect were previously mentioned. On one hand, we could expect the average quality of sovereign issuers to increase because low rated countries decide not to issue debt and this increase in overall quality puts downward pressure on spreads. On the other hand, higher rates will mechanically increase the distance to default in most Merton-based structural models which would also lead to a decrease in spreads. Local volatility is positive and highly significant, as expected. The local stock return has the expected (negative) sign and also is significant. The coefficient on changes of years to maturity is negative and not significant. I interpret this coefficient as evidence of the existence of a survivorship bias in which only relatively better countries make it to issue longer term debt, as explained by Helwege and Turner (1999) for the domestic case. It may be the case that investors think that in the case of a default, short term maturities are more risky than long term maturities since countries will usually default first on issues with closer maturities, making short term issues riskier. The lack of consistent cross-default clauses in some countries allows them to default or re-schedule debt payments selectively. Finally, for a country facing financial difficulties, a longer time horizon will provide the necessary time and maneuvering room to enact reforms and measures that will allow the country to return to fiscal stability, effectively making longer term debt less risky.
2.3
Debt spreads of domestic bonds. In this section, I review the relevant literature on U.S. dollar denominated
domestic bond spreads (section 3.1). Then I discuss the variables used in the computation
22
of domestic spreads (section 3.2), and I describe the characteristics of the domestic bond sample (section 3.3). I then proceed to estimate domestic debt spreads, discuss the results and compute residuals (section 3.4).
2.3.1 Domestic debt literature. The first structural model of risky debt is by Merton (1974). In this paper, Merton used an option pricing approach to include systematic and idiosyncratic risk in the calculation of the value of a put option on the firm’s value.18 In Merton’s model a firm defaults on its debt when its assets are not enough to cover its outstanding obligations. Default occurs when the firm’s value crosses from above a given threshold. The initial model allowed for default only at maturity and was extended by Black and Cox (1976) to allow for earlier default. Another extension was introduced by Longstaff and Schwartz (1995) by incorporating stochastic interest rates. Strategic default was introduced in models by Anderson and Sundaresan (1996) and Mella-Barral and Perraudin (1997). Modeling endogenous corporate default was introduced by Leland (1994) and Leland and Toft (1996). As these models need a fair amount of abstraction to achieve tractability, it is not surprising that they prove to be difficult to implement and then almost always with disappointing results (see Eom, Helwege and Huang (2003) for a review of the problems and limitations faced by structural models). This lack of results motivated some researchers to try another approach, using reduced-form models, or intensity-based models. These models ignore firm-specific
18
Specifically, Merton’s (1974) model states that a risky zero-coupon bond has the same payoff structure as a risk-free bond plus being short a put option on the firm’s value with a strike price equal to the face value of the debt. 23
fundamentals and do not explicitly model the processes followed by the firm’s leverage and/or value. Reduced-form models assume an unpredictable default process governed by an exogenous hazard rate. For instance, Duffie and Singleton (1997) use a generic point process and Lando (1998) uses a Cox process. Through extensive calibration, reduced form models generally produced better results at explaining and forecasting yield spreads than structural models. More recently, Elton, Gruber, Agrawal and Mann (2001) tried to explain corporate spreads using explanatory factors that included the probability of default, the loss given default, and the difference in tax regimes. Collin-Dufresne et. al. (2001) tried to explain changes in the credit risk portion of corporate spreads using data on spot rates, reference yield curve slope, firms leverage and volatility, estimates for jumps in the firm’s value and a proxy for the general business climate. Both papers, the former being more of a reduced-form approach and the latter using a variables specified by a structural framework, find similar results in that their models left a large portion of the crosssectional time variation of spreads unexplained, and further, they find that a single common unknown factor could explain up to 75% of the residual variation. Huang and Huang (2003) calibrate several classes of structural models to be consistent with the recent history of observed defaults. They find that different models could generate the wide range of credit spreads observed in the recent past, and further they provide some evidence about the predictive power of such models.
24
2.3.2 Theoretical determinants of domestic debt spreads. Structural models of domestic debt have identified variables that affect debt spreads. In a manner consistent with the previous section, I put together three lists of variables that should capture most of the debt spread variation. As with the sovereign case, the first list also contains bond-specific variables, i.e., variables that vary within bond issues, e.g. years to maturity. The second list contains variables that vary from firm to firm but are the same for all bonds issued by firm (firm-specific variables). The third list contains variables that are the same for all bonds in the domestic sample, and try to capture changes in the U.S. interest rate term structure as well as changes in the U.S. economic climate.
2.3.2.1 Bond-specific variables. The bond-specific variable is years to maturity. The same arguments from section 2.2.1 apply here.
2.3.2.2 Firm-specific variables. I choose two firm-specific variables following the basic spirit of Merton’s model as presented in Stulz (2003). The first variable, leverage, has been used in previous research as a successful proxy of a firm’s financial health. The second variable is the volatility of a firm’s equity. A priori I expect a negative relation between each of these two variables and debt spreads, since an increase on any of them would make default more likely.
25
2.3.2.3 U.S. interest rate term structure. The domestic sample is denominated in U.S. dollars. Therefore, I care about factors that affect the U.S. yield curve term structure. Similar to the sovereign case, I use the U.S. yield curve level and slope as explanatory factors of the term structure (Litterman and Scheinkman, 1991). The arguments used in section 2.2.3 also apply here. Since I collect data only on bonds issued by U.S. industrial firms, I assume their exposure to the economic cycle is better captured by the S&P 500 index and therefore I collect monthly returns for this index. Table 4 presents the predicted relations between the variables previously mentioned and debt spreads.
2.3.3. Data description. The domestic sample contains all U.S. denominated bonds issued by industrial domestic firms. Applying the same selection criteria as those for the sovereign sample, I end up with 2,493 live and 604 dead bonds issued by 649 different firms during the January 1990 – January 2003 period for a total of 71,831 usable observations. This sample differs from previous studies in at least three aspects. First, it covers a larger time period than previous research. Second, I collect data for the entire universe of bonds issued in U.S. dollars by domestic industrial firms, not only for those traded by any specific group of investors. Third, the Fixed Income Database used in earlier studies like Collin-Dufresne et al (2001) mainly covers investment grade bonds, and so results obtained for high-yield bonds using that database might not be representative (Huang and Kong, 2003).
26
Debt spreads are computed in the same way as for the sovereign sample. Years to maturity data is collected for each bond. The measures of the U.S. Treasury yield curve’s level and slope are the same as the ones used in the previous section. The proxy for the U.S. economic climate is the S&P 500 total return index from Datastream. To compute the leverage ratio I collect book value of debt from COMPUSTAT (items 45 and 51) and the market value of equity from CRSP. Leverage ratios are then computed, following earlier literature, as:
( Book Value of Debt ) ( Book Value of Debt + Equity Market Value)
Table 5 shows descriptive statistics for the domestic sample. Although it would be desirable to classify this sample also by rating, however, Datastream’s coverage of ratings is sketchy at best. In light of that problem, I decided to classify the data according to leverage. As can be seen from the leverage columns of panel A, credit spreads increase as firms become more levered. Further, the standard deviation of credit spreads also increases with leverage. Data on the ‘No leverage data’ column refers to firms that are either private or are not covered by COMPUSTAT. The presence of heavy tails in each category is evident from the dispersion observed in the max, min, 10% and 90% values, where the 10% and 90% values are several standard deviations away from the mean.
27
2.3.4
A model for domestic debt spreads.
I estimated the following equation for domestic bonds:
∆Spreadi,t = Constant + β1*∆Leverage ratioi,t + β2*∆Stock return volatilityi,t + β3*∆U.S. Treasury yield curve levelt + β4*∆U.S. Treasury yield curve slopet + β5*∆S&P index returnt-1 + β6*∆Years to maturityi,t + εi,t
(2)
The results from these regressions are reported on Table 6. I estimated equation (2) for each leverage group and for the overall sample. Clearly, the model performs better in the highly leveraged group, as evidenced by the higher R-squared value. This result is consistent with previous studies which find that structural models perform better for longer maturity, lower rated sub-groups. As with the sovereign case, I will only go over the overall sample results. On the coefficient of changes in years to maturity, the negative but insignificant coefficient is consistent with previous results of Helwege and Turner (1999) and with the basic Merton (1974) model predictions, as described in Stulz (2003), for conservative levels of debt. The coefficients of lagged leverage and stock return volatility –contemporaneous and lagged- are positive and strongly significant. The sign of the U.S. yield curve level is also as expected and similar to the results obtained by earlier studies. In contrast to the sovereign sample, nothing conclusive can be said about the sign and significance of the coefficient estimated for changes of the U.S. Treasury slope is significantly positive in every specification. Finally, the S&P index return is significant and with a negative sign, just as predicted by the theory.
28
2.4.
Analyzing the common factor.
As previously mentioned, the goal of this chapter is to investigate whether the common factor identified in domestic credit spread changes also is present in sovereign debt spread changes. In this section, I establish the existence of a common factor in both the residuals from the regressions on sovereign and domestic debt spread changes.
2.4.1
Establishing the existence of common factors.
In order to investigate whether common factors are present in the unexplained variation in spreads, I use principal components analysis. This is a statistical technique for data reduction whose objective is to find unit-length linear combinations of the original variables that capture the maximum variance. I apply principal component analysis to the residuals obtained from the regressions discussed in previous sections to verify whether the unexplained variation is truly noise or whether there is evidence of a common factor driving this unexplained portion of the variance of credit spread changes.19 The first problem faced when applying principal components analysis is how to organize unbalanced panels in the most efficient form. Research in this area conducted by Boivin and Ng (2003) shows that more data is not always better when conducting this type of factor analysis. In fact, in their forecast exercise they show that factors extracted from as few as 40 variables could be more informative than factors extracted from all 147 series in their setup. Basically, their result obtains because of large cross-correlation in
19
The serial correlation of the residuals from the regressions for sovereign yield changes is -0.1513 with p value of 0.2290 and 0.0068 with p value of 0.9345 for domestic yield changes. 29
errors and of small variability of the common components. Sadly, until today there is no guide as to what data should be included in a principal component analysis or what is the optimal number of series to include in this exercise. Recent work from Scherer and Avellaneda (2000) applies principal component analysis to eight variables only, effectively using one or two bonds per country in their study. While this might be a solution for the sovereign case where it is easier to identify benchmark bonds for each issuer, this is not feasible in the domestic sample. This sample has bonds from 649 different firms, and in many cases there are tens of bonds outstanding from a given firm. Also, applying principal component analysis to all the bonds in the domestic sample is not a good idea, since that would most certainly only increase the amount of statistical noise while adding very little or no new information at all. I decide to follow the approach implemented by Collin-Dufresne et. al. (2001) and create groups or ‘bins’ of data to efficiently summarize the information content of the residuals. I divide each sample (sovereign and domestic) in three maturity categories and three leverage (debt-to-reserves, in the sovereign case) categories, creating a total of nine ‘bins’ in each sample. Then, each observation is assigned to a bin. I estimate again equations (1) (for the sovereign bins) and (2) (for the domestic bins), compute the residuals, and calculate averages across residuals for each bin. Table 7 shows the correlation structure for the average domestic residuals (panel A), the average sovereign residuals (panel B) and for all averages –domestic and sovereign (panel C). The average correlation for the sovereign sample is 0.75, and 0.87 for the domestic sample. To investigate whether the relatively high correlations found in panel A and C are caused by a common component, I proceed to conduct principal component analysis.
30
Table 8 shows the results of applying this eigenvalue decomposition to the bins constructed earlier. Panel A shows strong evidence of the existence of a common factor in sovereign spreads. The first common factor explains 76.09% of the variation, as shown by the proportion of the first eigenvalue. The second common component explains the remaining variance that is orthogonal to the first common component. It is difficult to interpret the second component because its eigenvalue is well below the value of the first eigenvalue and is much closer to the third eigenvalue. However, if this is to be interpreted as evidence of a second common factor, it would explain an additional 20.37%. According to Scherer and Avellaneda (2000), a number between 65% and 80% for the first common component would be considered as indicative of strong co-movement characterized by a high correlation in the spread changes. My results are consistent with their result – obtained with spreads computed from Brady issues for selected countries – in which they found evidence of two common factors driving most of the variation in spread changes. Panel B of Table 8 also shows strong evidence of the existence of a common factor to all domestic spreads. The first common component explains 86.23% of the variance. There is weak evidence on the existence of a second component which explains an additional 8.53% of the variance. The existence of a first common factor that explains such a large portion of the variance is consistent with previous research, e.g., CollinDufresne et. al. (2001). The existence of a second common factor has not been documented for domestic debt before, but this could be due to the fact that I am using a larger dataset and that I am looking at a longer time period that earlier studies.
31
Finally, panel C has the results of looking at the common components of both groups of bonds, sovereign and domestic. Interestingly, I find no evidence suggestive of the existence of a common factor to both groups of bonds. The first common factor explains 42.06% of the residual variance of spread changes, while the second factor explains an additional 33.12%. As mentioned before, Scherer and Avellaneda (2000) consider a value of 65% for the first common component as the lower boundary for a weak coupling, or correlation, between spread changes. The result I obtain is puzzling because if the market for dollar-denominated credit-risky bonds is integrated, and if the common components I find can be explained by liquidity shocks, then such shocks should be pervasive across markets (Chen, Lesmond, and Wei, 2002; Chordia, Sarkar, and Subrahmanyam, 2003; Kamara, 1994). According to panel C, this is not what is happening. In order to shed more light on the issue of whether the common factor identified in both samples is indeed the same in both groups, I extract the first and second common components of each sample to compare them. These common factors are plotted in figure 1. The interpretation of the units in the y-axis is as follows. To compute the principal components, I analyzed the correlation matrix. This is equivalent to all the variables to having mean 0 and standard deviation 1. Thus, the common factors are expressed in terms of these standardized variables. Loosely speaking, the units on the y-axis in both figures can be interpreted as percentage points. The pattern seems to suggest a lead-lag relation between the first factor from the domestic sample and the first factor extracted from the sovereign sample. Figure 2 shows the second common component extracted from both samples. The figure seems to suggest
32
a weak contemporaneous relation. These issues will be investigated further in the next sections.
2.4.2
Explanatory power of the extracted components.
In this section, I examine whether these common factors have explanatory power over the cross-section of debt spreads for the other type of debt. I estimate again equations (1) and (2) including in each equation the two common components extracted from the other group, i.e., I include the factors extracted from the sovereign sample into the domestic sample regression and vice versa. Results are shown on Table 9. First, I will talk about the sovereign regression when the domestic common components are included. Looking at rating categories, the explanatory power of the equation for the lower rated group (B- to C) increases, as measured by the increases of Rsquared statistic from 22% to 31%. This sub-sample is the one with the least number of observations. There is, however, no gain in explanatory power in the overall sample. Further, in every case only the contemporaneous value of the first component from the domestic sample is significant. In the case of the second component extracted from the domestic sample, only the lagged value of the second factor is significant for all subgroups. The domestic sample has strikingly different results. In this case, I included in the domestic spread changes equation both common components extracted from the sovereign sample. The explanatory power of the overall equation is increased almost by 30%, from an R-square value of 0.09 to 0.12. The only significant common component coefficient is the contemporaneous effect of the first common component from the
33
sovereign sample. The explanatory power in most domestic sub-samples increases by a similar percentage as the R-squared value of the overall sample. Overall, I interpret these results as evidence of the existence of a relation between the first common component extracted from the sovereign spread changes and domestic debt spread changes. The dynamics of the relation between the common components extracted from each type of debt is investigated in the next section.
2.5.
Looking into the information content of the common factors.
The principal component analysis conducted neither provides information on the dynamics of the factors identified nor provides an economic interpretation of them. In this section I investigate the contemporaneous and inter-temporal relation between factors and also investigate whether these factors might be capturing liquidity and/or supply/demand shocks.
2.5.1
Lead-lag relations.
The picture shown in figure 1 suggests the possibility of an intertemporal relation between the first factor extracted from the sovereign sample and the first factor extracted from the domestic sample. Using a vector-autoregression approach, I investigate the possibility of one of these markets acting as an early signal for potential problems that can affect the bond market in general. Previous work like Joutz and Maxwell (2002) and Cifarelli and Paladino (2002) have applied VAR procedures in a credit spread framework to study the relation between credit spreads from different countries. More recently, Longstaff, Mithal and Neis (2003) applied a VAR framework to study the relation
34
between bond and credit derivatives markets. To explore the lead-lag relation between the sovereign and the domestic factors, the following simple vector-autoregression specification is used:
FacSovt = a1 + ∑ β1 j FacSovt − j + ∑ γ 1 j FacDomt − j + δ 1 X t +ε 1
j =1
k
k
k
j =1
k
FacDomt = a2 + ∑ β 2 j FacSovt − j + ∑ γ 2 j FacDomt − j + δ 2 X t +ε 2
j =1 j =1
Table 10 shows the results for the simple case when k is equal to two. Both the Akaike Information and Schwartz criteria suggest that a VAR system of two lags is warranted by the data. I first run the VAR model without exogenous variables to have an initial idea of the lead-lag structure. For brevity, I only report the R-squared value for each equation and also because the basic lead-lag relation is unchanged when the exogenous variables are included. I then run the VAR model with exogenous variables. These exogenous variables are chosen to capture liquidity and supply/demand effects. Most previous studies dealing with credit spreads specifically abstain from liquidity effects because of the lack of consensus on how to measure and model liquidity premium affecting spreads (Chen, Lesmond and Wei, 2003). Longstaff, Mithal and Neis (2003) study the consistency of the price of credit risk between the bond and derivatives markets. They find that the implied cost of credit is higher in the bond market than in the credit derivative market, and advance a possible explanation for this based on the existence of a liquidity component in debt spreads. Their measure for this liquidity
35
premium is the difference between the price of credit risk in the bond and credit derivative markets.20 Since all the bonds in the sample are denominated in U.S. dollars and they all trade in U.S. financial markets, I am interested in variables that measure the overall liquidity in these markets. I use two general measures of liquidity. The first proxy is the difference in yield between the on-the-run21 thirty year U.S. Treasury bond and the most recent off-the-run bond is computed. Off-the-run bonds are bonds that whilst not being the most recently issued in a certain maturity range, are very similar to the on-the-run issue in all respects. Therefore, any differences in prices –and therefore in yields- is usually considered to be due to liquidity. As liquidity dries up, this difference is expected to decrease. The second proxy for general liquidity in the market is the net borrowed reserves from the Federal Reserve,22 which is considered a measure of the monetary stance. A loose monetary policy usually implies an increase in liquidity via the decrease of credit constraints. Harvey and Huang (2002) showed that the Federal Reserve, through its ability of changing the money supply, impacts the trading of bonds and currencies. Following Chordia, Sarkar and Subrahmanyan (2003) I define net borrowed reserves as total borrowing minus extended credit minus excess reserves, divided by total reserves. Since borrowed reserves represent the amount that banks are short to satisfy the Fed’s requirements, a lower value of this measure indicates looser monetary conditions.
Collin-Dufresne et. al. (2001) point that Chakravarty and Sarkar (1999), Hotchkiss and Ronen (1999) and Schultz (1999) found evidence of the existence of relatively high transaction costs and low volume in bond markets, and therefore, Collin-Dufresne et. al. (2001) interpret these results as evidence of a liquidity premium. 21 An on-the-run bond is the most recently issued (and typically the most liquid) government bond with a given maturity. 22 See Chordia, Sarkar and Subrahmanyan (2003). 36
20
To capture possible supply/demand shocks I collect data from the Investment Company Institute (ICI) on the monthly flows into mutual funds. ICI’s statistics are collected from approximately 8,300 mutual funds, and are divided in flows into equity funds and bond funds. These measures could potentially capture changes in investor’s attitudes towards risk or any other supply/demand shocks unrelated to overall market liquidity.23 Table 10 reports the results of the VAR model with the exogenous variables. It seems that the sovereign factors have explanatory power over the domestic factors but not the other way around. I will discuss each one of the four equations in the VAR model, starting with the first common domestic factor. The second lag of the first sovereign factor is significant in the regression for the first domestic factor. The flows to stocks and flows to funds variables are negative and significant. This equation is the one with the smallest gain in R-squared when including the exogenous variables. The first sovereign factor seems to be slightly autoregressive from the barely significant coefficient for its own first lag. Coefficients for the domestic factor lags are not significant, while all the coefficients for the exogenous variables are highly significant. It seems as if this factor is capturing both liquidity and demand shocks as implied by the coefficients associated with the exogenous variables. This equation also has the highest increase in explanatory power, since the R-squared increased from 0.05 to 0.37 when the exogenous variables are added to the specification.
23
Of course, we have to consider the possibility of endogeneity in our variables, since for instance, a change in the Fed’s stance could make certain markets more attractive and influence the flows into those markets. No effort is made at this time to address this concern. 37
The second domestic factor equation shows significant coefficients for the first lag of both sovereign factors as well as for its own second lag. The flow variables come out both significant, as well the net reserves coefficient. The second sovereign factor equation has the highest R-squared value, at 0.67. Second lag coefficients for both sovereign factors are significant, as are both lags of the second domestic factor. The flow variables and the net reserves measure are also highly significant. Overall, the exogenous variables of the VAR do capture a significant portion of the time variation of the factors extracted from the sovereign and the domestic sample. There also is evidence in support of the aforementioned result that sovereign common components are related to domestic spread changes but not the other way around, because the lags of the sovereign common components have significant coefficients in the equations for the domestic common components but the domestic common components do not appear to have explanatory power in the equations for sovereign common factors. Also, there is evidence of an inter-temporal relation going from the first sovereign common component to the first domestic common component. This is expected from figure 1. I find that all four common factors are related to the flows of money going into equity and bond funds, as measured by the Investment Company Institute (ICI), while only the second common component of each group is related to a macroeconomic measure of liquidity, namely the net borrowed reserves form the Federal Reserve. These results are consistent with previous literature that has concluded that the unexplained variation in credit spreads could be caused by liquidity and supply/demand shocks.
38
The pattern displayed by the first common factors in figure 1 raises the concern that my VAR results could be driven by the large spike observed around October 1998. As a robustness check, I re-estimate the system excluding the observations for September, October and November 1998.24 As expected, the significant loading of the second lag of the first sovereign common component onto the first domestic common component disappears. Interestingly, however, the coefficient associated with the second lag of the second sovereign factor now becomes significant, whereas it was not significant before. The explanatory power of the exogenous variables and the overall Rsquared values of the system remain unchanged. Overall, the evidence suggest that the asymmetric relation observed between the first common factor from the domestic sample and the common factors from the sovereign sample remains even after excluding the large shock of October 1998 from the sample.
2.6.
Conclusions and future work
The availability in recent years of a panel of observations on sovereign bond yields provides with a unique instrument whose dynamics can shed some light in the study of the determinants of debt for countries and firms. In this chapter I identified the existence of a two strong common components, unrelated to credit risk and distinct for each type of debt, in credit spreads of sovereign and domestic bonds. Using a vector autoregressive (VAR) model, I find that domestic spreads are related to the lagged first common component of sovereign spreads. While there is no contemporaneous common component in bond spreads, there seems to be a common component when focusing on
24
Results are not reported but are available upon request. 39
the dynamics of these spreads. Traditional macro liquidity variables are related to the common components found in domestic and sovereign spread changes. I will conduct further research to shed light on why the relation between domestic corporate credit spreads and sovereign credit spreads is not contemporaneous, as expected in a fully integrated market. This could be due to differences in liquidity and infrequent trading or maybe it reflects a market inefficiency. My results also are surprising since they suggest that the cost of debt for emerging markets depends mostly on country and emerging-market specific considerations, and this is surprising when considering results obtained by previous literature emphasizing the impact of developed country developments for capital flows into emerging markets. In this chapter, I contributed to the literature by showing that current structural models of debt spreads can be improved if these findings are incorporated in them. To the extent that investors depend on these models to hedge the credit risk of their bond positions, they can benefit from a better understanding of the determinants of credit spreads changes. My research also shows that, after taking into account the dynamics of the common components in credit spreads across debt types, the cost of debt for firms and countries depends to some extent on shocks that affect all types of debt.
40
CHAPTER 3
LATIN AMERICAN AND U.S. EQUITIES RETURN LINKAGES: AN EXTREME VALUE APPROACH 3.1. Introduction
The last fifteen years of the past century witnessed an important increase in financial and economic integration between Latin American countries and developed countries. Due to geographical and trade considerations, most of the ongoing integration took place between Latin America and the United States. This is not to say that integration with other developed countries, for instance countries members of the European Union, is not significant, but rather that integration with the U.S. happens to be more important both in the public perception and in relative terms. One of the many benefits of this increased integration is the newly available supply of financial assets exhibiting low correlation with the U.S. financial markets. Mean-variance optimizing investors looking to diversify their portfolios have poured vast amounts of resources into Latin American stock markets as they were being liberalized. Among the new instruments available to U.S. investors pursuing the benefits of international diversification were closed-end funds, open-end funds, American Depositary Receipts (ADRs), and foreign indices mimicking funds.
41
However, there was a dark side to financial integration and openness. Financial crisis, in the form of negative financial markets shocks, also moved easily across borders. In addition, the propagation of financial crisis was facilitated by the newly increased correlations. This phenomenon, usually referred to as financial contagion, has attracted much attention in recent years from researchers. Recent experiences with the Mexican devaluation of 1994 – “the tequila effect”-, the 1997 East Asian crisis, the 1998 Russian meltdown -“the vodka effect”- and the subsequent Brazilian devaluation –the “samba effect”-, which sent most financial markets around the world tumbling, stressed the importance for both practitioners and academics of studying and identifying the forces behind these phenomena. This chapter addresses the behavior of linkages between financial assets using a statistical technique known as Extreme Value Theory (EVT). EVT is the study of outliers or extremal events. Since large movements in returns are usually characteristic of financial crisis, and these large movements can be considered as outliers, the use of EVT seems to be warranted. This approach has several advantages. First among these are the well known results on asymptotic behavior of the distribution of very high quantiles. Second, no assumptions are needed about the true underlying distribution that originated data in the first place. Since financial contagion usually comes into scene during periods of very high distress, it is a natural area on which to use a technique that focuses on the tails of a distribution function. My results show that for the six emerging countries analyzed, only in the case of Mexico the correlations in the extreme are higher between the locally traded stocks and the S&P than for the corresponding ADRs and the S&P. This result suggest that the
42
contagion mechanism for Mexico is different than for the rest of the countries analyzed here, in that the shocks appear to be propagated directly into the Mexican stock exchange rather than through New York via the ADR issues trading there. Another finding This chapter contributes in the literature in a number of ways. First, it builds on the work of Longin (1996) and Longin and Solnik (2001) applying EVT in finance by examining the linkage between financial assets available to U.S. investors looking for international exposure before and after main events such as the 1995 Mexican crisis. Second, it adds to the growing literature on financial contagion by employing a more “appropriate” statistical technique for the often temporary but large movements in prices. Third, related to the first contribution, this chapter also contributes to the empirical asset pricing literature because of its focus on a relatively new statistical application to the transmission of information and prices. This is useful as sectors of the U.S. markets become more volatile. Fourth, this chapter contributes to the risk management literature. The basic analysis one EVT can easily applied to the other assets, such as energy-based and commodities derivatives, which also exhibit temporary but sharp price movements. This is important as the government contains to deregulate selective energy markets. Finally, it contributes to the research on international asset pricing and allocation. The analysis on U.S. and Latin American financial assets also is useful for professional international portfolio managers. The chapter is organized as follows: the next section discusses some basic results from Extreme Value Theory and how they are applied in a financial framework. Section III presents an empirical analysis of the data. In this section, bivariate extreme value measures are applied on six different country pairs between the U.S. S&P500 Index and
43
each of the following countries: Argentina, Brazil, Chile, Colombia, Mexico and Venezuela. Section IV presents an extension of the analysis, searching for structural changes in linkages following the Mexican crisis. Section V concludes and outlines some ideas for future research.
3.2.
Literature Review
In recent years, extreme value theory techniques have been applied successfully to three related fields in finance: value at risk (VaR), the financial contagion literature and the literature that studies the return-volume relation. Brooks, et al (2005) test several EVT models by fitting them to different series of futures contracts in order to determine which model is more efficient for value at risk purposes. In an univariate analysis, they find that a model that treats the tail differently while at the same time incorporating information from the rest of the distribution yields superior results to a generic GARCH (1,1) model. Gencay and Selcuk (2004) apply univariate EVT to investigate performance of VaR models in emerging markets, documenting the thresholds and probabilities of stock crashes. Poon and Lin (2000) use univariate EVT to show that an internationally diversified portfolio will dominate a US portfolio. The reach that conclusion looking at probability of loss for each market using a tail index to study the distribution of returns. In a related exercise, Susmel (2001) applies the safety-first principle developed by Roy (1952) to compute levels of (negative) returns that would wipe out an investor’s wealth. He applies univariate EVT analysis to study the tail distribution of several Latin American markets. He finds that investors can benefit from inclusion of these markets in their portfolios –regardless the fatter tails exhibited by Latin American markets. His
44
results suggest an optimal 15% allocation to these markets. Hartmann, Straetmans, and de Vries (2001) study stock-bond contagion within and across five developed countries applying an innovative a non-parametric measure of dependence in the extremes of the return distributions. They find that simultaneous crashes in stock markets are twice as likely as simultaneous crashes in bond markets. Also, stock-bond and bond-stock contagion is equally as likely. Finally, cross-border linkages are very similar to national linkages. Qi (2001) focus is to answer whether unexpected volume provides information and therefore moves prices and to do this, he studies the return-volume relation in the tails using bivariate EVT. In almost all cases the results point to a positive correlation between absolute return and trading volume. This relation is not symmetric, i.e., it is stronger in the right tail than in the left tail. Reich and Wegmann (2002) analyze a sample of Swiss stocks in order to study the relation between market value and the relative bidask spread. Using bivariate EVT and using Streatmans’ non-parametric approach to extreme linkages they document that extreme movements in both variables are not independent. Marsch and Wagner (2004) use bivariate EVT to reject the null of independence in the extremes of returns and volume in five of seven developed countries in their sample. Further, they find that relation to be symmetrical only in the US, i.e., the same for negative and positive returns. Jondeau and Rockinger (2001) study stock returns in 20 countries using univariate EVT. Surprisingly, they cannot reject that the left and right tail indices are the same, and they cannot reject that the tail index is the same for all countries. They do find disparity with respect to where the extremes are located (i.e., a 10% price drop might not be an
45
extreme observation in every country). Poon, Rockinger and Tawn (2002) use nonparametric measures to identify and quantify tail dependence among international stock markets. Using data from 1968 to 2000 for US, UK, Germany, France and Japan they find left-tail dependence to be stronger that right tail dependence. They also document that these stock index returns do not exhibit asymptotic dependence. Schich (2002) looks at European stock indices (Germany, UK, France, Netherlands and Italy) from 1973 to 2001. Using Starica (2000) measure of dependence in a bivariate EVT framework he finds that measures of dependence have grown over time and are higher for negative than for positive returns. He also finds that the bivariate correlation with Germany is similar for all the countries in the sample. Chan-Lau, Mathieson and Yao (1998) study how contagion differs across countries. They conclude that contagion patterns differ significantly within and across regions, that Latin America shows the largest increases in contagion and that contagion is higher for negative returns than for positive returns. Several studies have focused on the study of firms within one country. Tolikas and Brown (2003) apply univariate EVT on individual Greek stocks and find that the tails have become less fat tailed over time. Brännäs, Shahiduzzaman and Simonsen (2002) also use univariate EVT to determine that the tails of Swedish stocks are better described using a Fréchet distribution. Gencay and Selcuk (2001) study overnight borrowing rates in an interbank money market in Turkey and describe the characteristics of the tail of their distribution. Straetmans, Verschoor, Wolff (2003) look at the U.S. stock markets before and after 9/11. They find no evidence for a structural change in downside risk measured before and after 9/11. They do find, though, that the probability of joint co-exceedances
46
over thresholds between different sectors and the market portfolio has risen when taking 9/11 as the sample midpoint.
3.2.1. The Univariate Case
Extreme value theory (EVT) has been extensively studied in mathematics and statistics. One of its earliest applications, and in fact one of the branches of science that motivated much of the work in EVT, is hydrology. Classical applications are the calculations for the optimal height of dykes in the Netherlands as well as the calculations of level of tides in the U.K. Engineers also are fond of EVT when it comes to calculate how resistant some structures are to wind and sea forces. In finance, EVT is a useful supplementary risk measure because it provides more appropriate distributions to fit extreme events and it has found a niche in value at risk (VaR) and insurance applications. Basically, EVT involves modeling the tail of a marginal distribution as well the dependence structure of extreme observations. This modeling of the asymptotic distribution can be done without making assumptions about the form of the original underlying distribution that produced the data in the first place. This is very desirable since the original distribution is generally unknown. Readers interested in a more technical discussion are encouraged to read the seminal work by Gumbel (1961). Since this chapter makes no attempt at contribution to the existing EVT theory, we will only go over the main results of the theory, highlighting the results that are important for our analysis. We will describe the main results of univariate EVT. Then building on those results, we will briefly explain the main results extending to bivariate EVT.
47
For any time series, we can define extremes in two ways. The first approach is to divide the series in blocks of equal size and take the maxima (the analysis for the minima can be done by simply using the negative of the series) of each block. For instance, if we have daily returns for a period of many years, we can divide the data in annual semi annual blocks and then proceed to take the maximum value from each block. The other approach to define extremes is to define a high threshold and consider all observations that exceed the threshold. The first significant result is, for the first case (taking
maxima), that the asymptotic distribution of the maxima is shown -under certain conditions- to converge to a Gumbel, Fréchet or Weibull distribution. A standard form of these distributions is known as a Generalized Extreme Value distribution (GEV). The second significant result, for the threshold approach where one is interested in modeling the behavior of the exceedances, is that the limiting distribution is a Generalized Pareto Distribution (GPD). We will now talk about the first approach. Let {X1,...,Xn} be a sequence of identically independent distributed random variables. The asymptotic distribution of a series of maxima (minima) has been shown (Fisher and Tippet, 1928; Gnedenko, 1943) to converge to a Generalized Extreme Value (GEV) distribution of the form:
H (ξ ,µ ,σ )
⎧ exp( − [1 + ξ ( x − µ ) / σ ] −1/ξ ) ifξ ≠ 0 = ⎨ − ( x − µ )/σ ) ifξ = 0 ⎩ exp( − e
The parameters µ and σ represent, as usual, a scalar and a tendency. The parameter ξ is called the tail index and it is positively related to the thickness of the tail, i.e., the larger the tail index, the thicker the tail. Depending on the value of the tail index we have three possible scenarios: ξ >0 which corresponds to a Fréchet distribution, ξ =0
48
which represents a Gumbel distribution and ξ <0 which represents a Weibull distribution. Empirically, it seems to be the case that the distribution is the best fit for fat tailed financial series. The second approach, known in the literature as the POT (peaks over thresholds) approach, involves estimating the conditional distribution once a high threshold is reached. Let Fu be the distribution of exceedances of X over a threshold u, such that
F u ( x ) = P( X − u ≤ x X > u )
The determination of the threshold u is not trivial and will be briefly addressed further ahead. For now, we will only mention that it is subject to minimizing the mean squared error in the parameter estimates. If we set a high threshold, you get a less reliable estimate due to the lack of data. On the other side, if we set a threshold to low, you get more precise, albeit biased, estimates since we included some points that did not belong in the tail. Once the threshold has been determined, we can write
F u ( x ) ≈ G ξ , β ( u ) ( x ), u → ∞
where
⎧ ⎛ ξx⎞ ⎪1 − ⎜ 1 + ⎟ Gξ ,β ( u ) ( x ) = ⎨ β ⎠ ⎝ ⎪ 1 − e − x/β ⎩
− 1/ ξ
ifξ ≠ 0 ifξ = 0
This result shows how the conditional distribution Fu is approximated by a Generalized Pareto Distribution (GPD). The case where ξ = 0 gives us the exponential distribution. Any value of ξ > -0.5 is considered a fat tailed distribution.
49
Application of EVT requires some previous analysis to make sure the data exhibits fat tails, justifying the use of EVT. In this early stage analysis we usually perform a visual inspection of the data with the help of Q-Q plots and mean excess function graphs. A mean excess function graph describes the expected overshoot of a threshold given that an exceedance occurs. Fat tailed distributions show mean excess functions tending towards infinity for high thresholds u (linear shape with positive slope). As mentioned before, the choice of a threshold is subject to the usual trade off between variance and bias. One solution would be to use a mean excess graph and choose a threshold that yields a reasonable linear shape. Another –and most common solution– is to calculate the value of the Hill estimator. The Hill estimator is a maximum likelihood estimator for a GPD and it is a very popular estimator of ξ. Because the Hill estimator is a maximum likelihood estimator, we know that the parameter values are chosen to maximize the joint probability density of the observations.
3.2.2. The Bivariate Case
The first problem faced in extending EVT to a bivariate case is that such extension is not straightforward. The reason is that in higher order Euclidean space there is no standard notion of order and thus no standard notion of extremes (Embrechts, et. al., 1999). The current multivariate EVT allows only for low dimensional problems. Truly multivariate EVT is not under the current theory reach. Longin (1999) established that the joint distribution of extreme marginal distributions is not necessarily the distribution of the extremes for an aggregate position. Still, the bivariate estimation is done in two parts: univariate estimation of tail indices first, followed by the estimation of a “uniformised”
50
dependency function. Even though no natural parametric family exists for the dependence function, Longin and Solnik (2001) proposed the use of a logistic dependence function to fit the joint tails in a bivariate EVT framework. The logistic dependence function was first proposed by Gumbel and therefore is also referred to in the literature as a Gumbel dependence function. This is its functional form:
Dl ( y1 , y2 ) = ( y1 −1/α + y 2 −1/α ) α
The main reason for using this dependence function is that the dependence coefficient α is related to the correlation coefficient of extremes, ρ, via the following functional form:
ρ = 1− α 2
Summarizing the bivariate case, we will first get univariate estimates for each distribution, then fit the dependence function. As we use different thresholds to calculate the tail indices, we will also get different values of ρ. This property is crucial in that it will allow us to calculate in a straightforward manner how the correlation changes as we move further into the tails.
3.3.
Data
We collected daily data from Datastream for the period between 12/29/1989 and 12/29/2000 --2872 days-- for six Latin American countries and the U.S. The countries are Argentina, Brazil, Chile, Colombia, Mexico and Venezuela. For each Latin American country, we collected the return index of the local stock market and for every live ADR issue trading in the U.S. We also collected the return index of the S&P 500. For every asset, log returns in dollars were obtained. Also, an equally weighted basket of ADRs for
51
each country was calculated. ADRs were chosen because they represent an alternative to investors who want to get foreign stock market exposure. Table 11 shows the summary statistics of the variables. The next step was to conduct an exploratory data analysis or visual inspection of each series, two for each country plus the S&P index. Through the use of Q-Q plots and mean excess graphs we verified that the series exhibit fat tails and we got initial ranges for the tail indices. The Q-Q plot (graph of quantiles) helps to assess the goodness of fit of data to a parametric model. The more linear the Q-Q plot, the better the goodness of fit. It is also helpful because on it outliers can easily be detected. The following is the mathematical characterization of a Q-Q plot:
⎧ ⎫ −1 ⎛ n − k + 1 ⎞ ⎨ X k ,n , F ⎜ ⎟ , k = 1,..., n⎬ ⎝ n ⎠ ⎩ ⎭
Figures 3 to 5 show the Q-Q plots for the left and right tail of the Mexican equity dollar return, the Mexican ADR equally weighted portfolio and the S&P500 index. In all of them the presence of fat tails is obvious based on visual inspection. The next step of the exploratory analysis was to graph the mean excess function e(u). e(u) is the mean excess over the threshold u. As discussed above, it describes the expected overshoot of a threshold given that an exceedance occurs. Fat tailed distributions show e(u) tending towards infinity for high threshold u (linear shape with positive slope).
52
This is exactly the pattern that is observed on figures 6 to 8 for the same variables as in the Q-Q plots. Mathematically, e(u) is:
e( u ) = E ( x − u X > u )
and the plot is given by the points in:
{X
available from the author.
k ,n
, en ( X k ,n ), k = 1,..., n
}
The complete set of Q-Q plots and mean excess graphs for all variables is
As outlined in the previous section, univariate analysis is a pre-requisite of bivariate analysis. An interesting product of the univariate analysis is the possibility of calculating probabilities for out of sample events. In plain terms, we can calculate for instance, given a 20 year time series of daily returns, the maximum loss that is to be exceeded every x years. McNeil (1997) has a very good example on this. However, this chapter is interested in asset linkages, and therefore we will not talk about the conditional univariate probabilities of exceeding a certain threshold. Our final goal is to be able to make inferences on the pattern displayed by the correlation in the extremes, using a framework à la Longin and Solnik (2001). Our exercise here is to calculate pairwise correlations between the S&P and the local domestic stock indices and compare these correlations with the correlations obtained from another pair wise calculation using the S&P and a basket of ADRs. Since in theory an investor can get the same international exposure from buying shares in the local stock markets than from buying ADRs in the U.S., we would not expect to find any differences in the behavior of correlations as we move further into the tails of the marginal distributions. Curiously, this
53
seems not to be the case for some countries considered like the most representative of economic and financial liberalization, namely Mexico, Chile and Brazil. Figures 2 to 8 show the main product of this chapter. Each graph shows how extreme correlations (measured in the vertical axis) change value as I increase the threshold used to obtain the sample (measured in the horizontal axis). Points to the right of the vertical axis show us the behavior of the positive -right- tail, whereas points to the left of the vertical axis show the correlations in the negative -left- tail. As just mentioned, Mexico, Chile and Brazil seems to show asymmetric correlations which we think are worth further research. Colombia seems promising but the lack of data makes the inference process dubious. Finally, it is also worth investigating what is it (or was?) about the Argentinean market that make it behave in such a “well behaved” form during the period in question. In the ideal case of bivariate normal behavior, we would expect to see a well behaved bell shape. However, some of the countries offer some evidence of increased correlations in the negative tails, consistent with all the previous work on financial contagion. Whether this increased negative extreme correlations are caused by liquidity problems, heteroskedastic time series or some other explanation is beyond the scope of this work. For now, we will just show that correlations display an asymmetric behavior and that S&P - ADR correlations do not always behave in the same way as S&P - local index correlations. We consider this result significant and we hope that further research will allow us to draw conclusions about whether the transmission mechanism of financial crisis goes through the U.S. stock markets -via ADRs- or moves directly to the local stock markets.
54
3.4.
A Small Test for the Mexican Pairs.
Since the Mexican pairs displayed differences both between left and right tail correlations and between S&P - ADRs and S&P - local stock index, we decided to use this country to test whether the extreme correlations changed after the 1995 Mexican crisis. Figures 9 and 10 show the experiment. We divided the sample in two, the first sample going from 12/29/1989 to 12/31/1994 and the second sample running from 1/3/1995 to 12/29/2000. Absent some formal test, we limit the analysis to a visual inspection of the behavior of the extreme correlations. At first sight, it seems that the asymmetric pattern exhibited by the whole sample is driven primarily by the post 1995 period. Let us be very clear about the fragility of any assumption stated in these sections. As mentioned before, extreme value parameters depend heavily on the size of the sample, namely the extreme observations. Dividing the original sample in two probably did hurt the accuracy of the estimates. More work on this area is in progress and it will be included in future versions. However, the visual pattern is suggestive of evidence of changes in the value of extreme correlations.
3.5.
Concluding remarks
This work is still in its very early stages. However, some promising results are already showing. Applying bivariate extreme value techniques to pairs formed by Latin American local stock indices vs. the S&P and Latin American ADRs vs. the S&P we found visual evidence of non-bivariate normal behavior. More formal Wald test are in the agenda for future research. Also, we neglected in this chapter a third channel through
55
which a U.S. investor can gain foreign exposure: country-funds. We intend to include pairs formed by country-funds vs. the S&P and compare the behavior of their extreme correlations with the ones already obtained and check whether they can offer more clues about the transmission mechanism of financial shocks. Finally, the last point in the research agenda involves a departure from the Gumbel logistic dependence function. Hartmann, Straetmanns and de Vries (2001) show that other dependence functions do a much better job at fitting the joint marginal extreme distributions. Since the only foundation for the use of a Gumbel dependence function was its ease of interpretation, it is certainly worth the effort to check whether other dependence functions do a better job fitting this sample of Latin American countries. The rationale for studying other dependency functions is as follows. Blyth (1996) and Shaw (1997) have made the point that linear correlation cannot capture the non linear dependence relationships that exist between many real world risk factors. They point that the use of correlation as we commonly do is ok when risks have a multivariate normal distribution, jointly. This is, when the underlying process is governed by elliptical distributions, i.e., distributions whose density is constant on ellipsoids and whose contour lines of the density surfaces are ellipses in a two dimension representation. However, they point that the use of correlation is not ok when we are outside the elliptical world because marginal distributions and correlation do not determine the joint distribution. Other important facts in this non-elliptical case are that perfectly dependent variables do not necessarily have a correlation of 1 and vice versa, and zero correlation is not equal to independence. Finally, it is important to recall that correlation is only defined when the variances of risks are finite.
56
The aforementioned facts provide with the following game plan for future research. We can, for instance, use rank correlation, i.e., the linear correlation of probability transformed variables. It still is very limited, since the structure of dependence is not fully described and it cannot be easily manipulated. The use of copulas comes as a natural alternative. Copulas are a way of trying to extract the dependence structure from the joint distribution. For instance, a joint distribution can be written as F(x1,..., xn) = C(F1(x1),...Fn(xn)), where
Cβ (u, v ) = exp[ − {( − log u) 1/ β + ( − log v ) 1/ β } β ]
and C is a copula function can be thought of as a multivariate distribution function with standard uniform marginal distributions. C is the dependence structure of F. This approach has the advantage of not summarizing dependence in a single number (which can be a dangerous simplification) and rather utilizes a model of the dependence structure to provide with more info. More work in this area is needed and projected as future extensions to the present work.
57
CHAPTER 4 THE EFFECT OF SOVEREIGN CREDIT RATING CHANGES ON EMERGING STOCK MARKETS
4.1.
Introduction
In this chapter, we study the effect of sovereign credit rating changes on the cross section of locally traded firms. Standard and Poor’s defines a credit rating as “a current opinion of the creditworthiness of an obligor with respect to a specific financial obligation, a specific class of financial obligations, or a specific financial program.”25 A sovereign credit rating, then, reflects the rating agency’s opinion on the ability and willingness of sovereign governments to service their outstanding financial obligations in full and on time – it is basically an estimate of probability of default and/or likelihood of repayment. These sovereign ratings reflect factors such as a country's economic status, transparency in the capital markets, levels of public and private investment flows, foreign direct investment, foreign currency reserves, and the ability of a country's economy to remain stable despite political change. An important difference between sovereign credit ratings and corporate credit ratings is that sovereign credit ratings have large effects and implications that spread to
25
58
other entities besides the one being rated. Cantor and Packer (1996) wrote: “Sovereign ratings are important not only because some of the largest issuers in the international financial markets are countries, but because these assessments affect the ratings assigned to borrowers of the same nationality. For example, agencies seldom, if ever, assign a rating to a (...) private company that is higher than that of the issuer's home country.” Investors with interests in foreign firms pay attention not only the foreign firm’s credit rating –when it is available- but also to the credit rating of the country where the firm is domiciled. This chapter contributes to the existing literature by extending our understanding of how much information do sovereign rating changes convey to domestic stock markets. Specifically, we investigate if and why a country rating matters for firms within that country. We show that sovereign rating changes affect the terms on which a domestic firm can get credit, creating an exogenous change in the cost of capital. We divide our results in two: we first present the effect of rating changes at the aggregate level using national stock indices, and then proceed to study the effect of those sovereign rating changes on the individual firms located within those countries. Our index level results are consistent with the extant literature on the effect of credit rating changes on U.S. firms. We do find evidence of a significant negative stock price reaction to sovereign rating downgrades while we find no evidence of a stock price reaction to sovereign rating upgrades. Further, we document that local stock markets react only to news of sovereign rating downgrades issued by Standard & Poor’s. When we look at the effect of sovereign rating changes on the cross section of individual firms, our results suggest that sovereign credit rating changes affect larger
59
firms more. We also find that firms in poorer emerging countries experience larger drops in the price of their shares. The fact that ratings affect asset prices is surprising based on Wakeman (1992). In that paper, he explained that the real function of bond rating agencies is to reduce costs of financing at issuance. According to him, a bond rating does not determine but rather mirrors the market’s assessment of a bond’s risk, and thus ratings should not affect but rather reflect the market’s estimation of a bond’s value. Therefore, there should be no impact of rating changes on stock prices. Empirical work in this area (e. g., Holthausen and Leftwich, 1986, Cornell et al 1989) shows, however, that credit ratings convey valuable information to the bond and stock markets. The idea is that firms disclose more information to rating agencies, which in turn incorporate this information in their ratings in such way that the privileged information is conveyed but not disclosed. In an international setting, one could argue that rating agencies, in conducting their research to rate a country, develop information that would also affect the future prospects of domestic firms. Cornell et al (1989) proved that U.S. firms with more intangibles -which are more difficult to value- are affected more by the information conveyed by rating changes. More recently, Jorion et.al (2004) tested the effects of the introduction of Reg FD (Regulation Fair Disclosure) on the informational advantage of rating agencies. Using standard methodology, they show that both credit rating downgrades and upgrades have larger effects on stock prices after the introduction of Reg FD. There are at least three reasons why stocks in foreign markets react to credit changes of their sovereign government. Although sovereign credit ratings are not country ratings, it is more often than not the case that the rating assigned to a non-sovereign
60
foreign entity is the same or lower than the credit rating assigned to the sovereign where it is domiciled. This is the so-called ‘sovereign ceiling’. The concept is based on the assumption that a sovereign default will force domestic issuers also to default because most circumstances leading to national debt crises –e.g. balance of payment crises and terms of trade shocks- directly affect the debt servicing capacity of private borrowers. For instance, private firms might be forced to default in the payment of their international obligations if their government imposes exchange controls that prevent access to foreign currency. All these additional risks exogenous to the firm are the basis for the existence of a cap on the ratings assigned to foreign firms, namely the sovereign ceiling.26 The sovereign ceiling is relevant for firms when it is binding, because in that case firms are facing costs of capital artificially higher than those they should face, i.e., a binding sovereign ceiling means the firm is paying a higher yield on its debt than it otherwise should. When a foreign firm has stronger credit characteristics than the sovereign where it is located and when the risk of the imposition of debt-service-limiting foreign exchange controls is less than the risk of the sovereign defaulting, it could be possible to observe foreign firm credit ratings higher than those assigned to the sovereign – this is known among practitioners as breaking the sovereign ceiling.27
26
A common misconception is that the sovereign rating and the sovereign ceiling are synonyms. There are examples of countries – for instance, some east European countries- where monetary ties make it harder for the country to impose defaulting conditions on its firms. In those cases, it is not uncommon to observe a sovereign rating lower than the sovereign ceiling. In many countries, however, the sovereign rating remains the best proxy to the sovereign ceiling (Durbin and Ng, 2004). The idea is based on the notion that because firms operate under the economic framework promoted by the government, an economic downturn would reflect itself in the domestic firm’s ability to repay their financial obligations abroad. For instance, in many emerging countries, economic crisis were almost always associated with currency devaluations, making foreign obligations more difficult to meet. The IMF (1991) defined this risk as ‘transfer risk’, since a government can transfer its problems –through greater taxes, imposing currency controls or assets expropriation- to an otherwise completely healthy firm. 27 The sovereign ceiling has not always been binding. The most notable exception is the Argentinean case, where Standard & Poor’s allowed 14 firms in April 1997 to have higher debt ratings than that of the 61
The second reason why local stock markets might react to sovereign credit rating changes has to do with the difficulties in collecting reliable information for most foreign firms. Many foreign countries –emerging markets mostly- have legislations that are not as conducive to the free flow of information from firms to its investors –actual and potential- as the U.S. legal framework. Faced with such difficulties to obtain detailed firm-level data, investors tend to rely on sovereign ratings as convenient and intuitive aids in valuing projects in emerging countries where firm-level information is scarce, effectively ‘painting all issuers with the same brush’. This effect seems to be even stronger for below investment-grade than for investment-grade issuers (Cantor and Packer, 1996). Finally, a third reason is that sovereign credit rating changes communicate information about the country and firms depend in many ways on the country where they are located. Large domestic firms in these countries can typically access capital more cheaply abroad than at home. However, a sovereign credit downgrade reduces this advantage. In that case, the firms that can borrow abroad would be less likely to do so, which would lead them to borrow more from at home and crowd out smaller firms. Alternatively, large domestic firms could find it impossible to raise suitable amounts at home because domestic financial markets are not deep enough. In this case, if foreign borrowing becomes more expensive (i.e., higher costs of capital), firms may end up borrowing less, or not borrowing at all and not undertaking investments.
Republic of Argentina. At that time, Moody’s severely criticized the move, calling it irresponsible, and refused to follow suit. Interestingly, when Argentina defaulted in 2001, all these Argentinean firms defaulted too. 62
To conduct this analysis we collected all sovereign rating changes issued by Standard and Poor’s (S&P) and Moody’s on 29 emerging countries from 1986 until 2003. We study the stock price reaction to 136 downgrades (81 from S&P and 55 from Moody’s) and 100 upgrades (57 and 43 from S&P and Moody’s respectively). We also analyze the impact of the first rating change to a given event, e.g., the first rating change from either agency to the Mexican crisis of 1994 or to the Russian crisis of 1998. This chapter is the first to study the effect of sovereign credit changes on the stock prices of domestic firms. To do this, we collect information on 1281 individual firms located in 29 emerging countries. After computing abnormal returns for each firm, we run cross-sectional regressions of those abnormal returns on firm-specific characteristics and country-specific variables. We document how the size and wealth of the country where a firm is domiciled are related to the extent to which that firm will be affected by a sovereign rating change. More importantly, we find that previous access to international capital markets is an important determinant of the extent to which a firm is affected by a sovereign credit rating change. Our results are important for several reasons. Recent renowned bankruptcies – Enron, World Com, United Airlines- have raised legitimate questions about the value we should place on credit ratings.28 Further, recently there has been a concern that rating agencies are late in issuing sovereign rating changes. Some authors claim that by lagging, rating agencies contributed to boom-bust cycles. If that is the case, we should not capture much of an effect in our empirical exercise. Yet, we find evidence of an effect that could have been –arguably- anticipated. We interpret our results as evidence of the
28
A good reference to illustrate the failure of credit ratings in capturing deteriorating conditions of the firms mentioned in the text is: 63
informational content of sovereign rating changes, even when they are not fully unanticipated events. Also, regulators -foreign and domestic- that conduct assessments of risk can benefit from an improved understanding of sovereign ratings changes and their effects on many of the firms they monitor. Our results also contribute to the research pioneered by Morck, Yeung and Yu (2000). In their work, they show that markets do not do a good job of differentiating among firms within emerging markets. Specifically, they find that “stock prices in economies with high per capita gross domestic product (GDP) move in a relatively unsynchronized manner. In contrast, stock prices in low per capita GDP economies tend to move up or down together.” In this chapter, we ask whether the price reaction of stocks to sovereign rating changes in 29 countries is related to firm characteristics and/or country characteristics. We do find that larger firms experience worse stock price reactions following sovereign downgrades. Firms that have accessed international capital markets also experience more negative reactions to downgrades of their sovereign government. Finally, we also find that firms located in richer countries (GDP per capita) and in countries with more developed financial markets (stock market capitalization to GDP) experience smaller price reductions following sovereign downgrades. This chapter will proceed as follows. In section II we will review the existing literature. Section III will show that there is an impact on the stock market at the index level. We proceed to analyze the impact of sovereign rating changes on individual firms in section IV. Section V will conclude.
64
4.2.
Literature review
Earlier literature in the 1970s and all through the 1990s focused on the study of the impact of rating changes of individual U.S. firms on their bond and stock prices. Griffin and Sanvicente (1982), using monthly data, were the first ones to find evidence of significant negative stock price reactions to rating downgrades while finding no evidence of significant reactions to rating upgrades. Holthausen and Leftwich (1986) is the first paper that used daily data to study the stock price reaction to credit rating changes. The main contribution of their work was to establish that rating downgrades by Moody’s and Standard & Poor’s provide information to the markets and impose costs to the firm by reducing the stock’s price. They did not find a significant stock price reaction for rating upgrades. Subsequently, many papers confirmed the findings by Holthausen and Leftwich (1986) under many different specifications and conditions (Wansley and Clauretie (1985), Cornell, Landsman and Shapiro (1989), Hand, Holthausen and Leftwich (1992), Goh and Ederington (1993), Goh and Ederington (1999), Dichev and Piotroski (2001)). Cornell, Landsman and Shapiro (1989) test whether a stock’s price response to rating changes is related to the nature of the firm’s assets, in particular whether the stock price response is related to the firm’s net intangible assets. They propose two hypotheses for the existence of such a relation, both of them based on the assumption that intangible assets are more difficult to value than tangible assets. Rating agencies, in conducting their business, develop expertise in the valuation of intangibles which in turn gives them an edge when it comes to firm and cash flow valuation. Their first hypothesis (investor hypothesis) assumes that a rating change should be more informative for firms with
65
relatively large proportions of intangible assets. A second hypothesis is the non-investor stakeholder hypothesis. Here, stakeholders in the firm –clients, suppliers, and employeeshave implicit claims that are revalued at the arrival of news on credit rating changes. The revaluation of these implicit claims has immediate effects on a firm’s cash flows. Although rating changes are usually anticipated, they are not fully discounted by the market because there is uncertainty about the exact timing of the announcement. Both hypotheses imply that the impact of new information about rating changes on a firm’s stock price is likely to depend on the firm’s intangible assets. They do find supporting evidence for their hypotheses when analyzing a sample credit rating changes of U.S. firms. Goh and Ederington (1993) looked further into the reaction to rating downgrades by analyzing if a bond rating downgrade conveys good or bad news for stockholders. They divided the sample of rating downgrades in those that are mechanically triggered by increases in leverage and those that are caused by weaker prospects for the firm, expecting to find negative stock price reactions stemming from the latter reason and positive or no stock reactions to downgrades prompted by the former reason. Consistent with their predictions, they find that rating deteriorations due to worse future prospects cause a negative stock price reaction, whereas the stock price did not react on average to credit rating changes due to changes in leverage. Hull, Pedrescu and White (2004) analyze the relation between credit default swap spreads and credit ratings. They find that only reviews for downgrades convey information to default swaps, confirming that downgrades convey information to investors and upgrades do not.
66
Although most studies consider a rating downgrade to be bad news, not all authors agree that downgrades are bad news for shareholders. An argument initially put forth by Holthausen and Leftwich (1986) and Zaima and McCarthy (1988) explains why we might observe positive stock price reactions to rating downgrades. The argument is based on a Merton-model for firm value point of view. In this case, equity holders hold an option on the value of the firm with an exercise price equal to the par value of the firm’s debt, and therefore an increase in the variance of the firm’s cash flows would trigger a downgrade and redistribute wealth from bondholders to stockholders. Goh and Ederington (1999) analyze the cross sectional variation in the stock market reaction to bond rating changes, showing the stock markets react more negatively to bond rating downgrades within and into junk status. They also find that downgrades tend to follow periods of negative returns, which is interpreted as evidence of partial predictability in ratings. To investigate whether capital markets consider that credit ratings convey any information beyond what is publicly available on a firm’s prospects and fundamentals, Kliger and Sarig (2000) conduct a unique experiment. On April 26, 1982, Moody’s modified its rating classification to include finer ratings. This unannounced change was simultaneously implemented on all bonds followed by Moody’s, thus providing a unique opportunity to study the informational content of credit ratings. They analyzed bond, stock and option prices observed before and after the classification change and concluded that rating information is valuable to capital markets. Dichev and Piotroski (2001) examine long-run stock returns using all Moody’s bond rating changes between 1970 and 1997. They look at both cumulative abnormal
67
returns and buy-and-hold returns, and after controlling for size and book-to-market, find no evidence of abnormal returns following upgrades, whereas they confirm substantial negative abnormal returns following downgrades. Further, they document poorer returns for downgrades of small and low-credit-quality firms. They also find evidence that suggests a role of downgrades as predictors of future deteriorations in earnings. More recently, Odders-White and Ready (2003) take a different approach to determine if credit ratings contain information beyond the one that is publicly available. Using panel regressions including popular measures of adverse selection, they show that firms with more adverse selection problems have lower ratings. Further, they find a significant negative relation between the components of the adverse selection measures related to private information and debt ratings. This relation is interpreted as evidence that ratings contain information beyond that available in other published financial data and that is not captured by other variables. Finally, they also find that rating agencies often fail to react to changes in uncertainty immediately, thus causing some predictability in rating changes. The fact that rating changes are predictable is a source of potential concern to market regulators. On the sovereign side, Cantor and Packer (1996) looked at the determinants of sovereign credit ratings using ratings issued by Moody's and S&P. After conducting a variety of cross-sectional analyses, they concluded that ratings subsume all relevant macroeconomic information. They also found that the announcement effect on bond spreads is much stronger for below investment-grade (junk) than investment-grade issuers.
68
Elayan, Hsu and Thomas (1999) conducted a comparison of the informational content of credit rating announcements in New Zealand and in the U.S. They document a reaction to market credit rating announcements in small markets that is different from the reaction in large markets; in particular they find evidence of New Zealand firms’ stock prices reacting significantly to both good and bad news conveyed by rating upgrades and downgrades. Kaminsky and Schmukler (2002), look into the effects of rating and
outlook changes on bond and stock returns. Their methodology is different from ours. Further, they focus on the macroeconomic consequences of rating changes whereas we are interested in the individual firm effects. Using a panel specification they find evidence of rating upgrades taking place after market rallies and rating downgrades occurring after periods of poor performance, concluding that rating agencies play a destabilizing role in emerging markets. Patro and Wald (2005) study the returns experienced by local firms following equity liberalizations. They find that small, high book-to-market (BM) and low beta firms experience higher returns following liberalization events, showing that stock prices of individual firms react in different ways to country-wide events. Miller and Puthenpurackal (2005) show that firms accessing international debt markets via global bond offerings do experience reductions in their overall cost of capital. We conjecture in our chapter that previous access to international capital markets could be related to the magnitude of the stock price impact a firm experiences after a sovereign credit change. Some papers asked whether capital markets differentiate between rating agencies. Beaver, Shakespeare, and Soliman (2004) study the differences in ratings provided by certified agencies like Moody’s and credible, non-certified agencies. They find that
69
ratings by certified agencies are more conservative and less timely than those issued by non-certified rating agencies. Mollemans (2004) studies the effect of rating announcements in Japan and concludes that Japanese stock returns are responsive to S&P rating changes but not to Moody’s rating changes. Rating agencies have been criticized for the role they played in currency and/or debt crises in the 1990s. Critics claim that by lagging in their rating changes, agencies contributed to boom-bust cycles, inducing excessive flows of funds into and out of emerging countries. Sy (2003) asks whether rating agencies anticipate currency and/or debt crises for the sovereigns they cover. He concludes that ratings do not predict currency crises and react ex-post to them, consistent with rating agencies’ assertion that they measure probability of debt default, not probability of currency crises. He also finds that lagged ratings and rating changes are useful in anticipating sovereign distress. Amato and Furfine (2003) study if rating agencies behave in a counter-cyclical way. They find that ratings issued by S&P do not have excessive sensitivity to the business cycle – which is consistent with the normative view that ratings should have a long term perspective. The ‘through-the-cycle’ methodology S&P uses is better suited for long-term perspectives, so that a rating agency only changes a rating when it feels confident the changes in a company’s risk profile are likely to be permanent. Brooks, Faff, Hillier and Hillier (2004) study the aggregate stock market impact of sovereign rating changes from several rating agencies. They conclude that national stock markets react to news of sovereign downgrades but not to sovereign upgrades. Governments not only seek credit ratings to facilitate their access to international capital markets, but also because these assessments affect the ratings of other borrowers
70
of the same nationality. As mentioned in the introduction, one way through which sovereign ratings affect domestic firms is the sovereign ceiling. Durbin and Ng (2004) study the instances in which the sovereign ceiling rule is broken. They match bonds from companies based in emerging markets and match them with a corresponding sovereign bond to measure the difference between corporate and sovereign spreads. In 20 cases (out of a reduced sample of 28 bonds) they identify several instances in which the sovereign ceiling rule is broken. A casual review of the characteristics of these firms suggests that the ability to generate revenue abroad, the existence of ties with the government or an affiliation with a foreign firm are common characteristics of these firms. Cruces (2001) shows that sovereign credit ratings are key in determining the cost and availability of international financing for an economy. He reasons that because there is limited ability to enforce debt contracts subject to the regulatory authority of a foreign government, and because governments are sovereign in their territories and have few assets beyond their borders that can be seized by foreign court order, measures of host government sovereign risk contain critical information for investors contemplating international investment.
4.3.
The effect of sovereign rating changes on stock market indices
Following earlier literature (Kamisnky and Schmukler, 2002; Brooks, Faff, Hillier and Hillier, 2004), we will analyze separately the impact of S&P and Moody’s rating changes on stock indices. We will start with this section with data and methodology descriptions.
71
4.3.1
Data
We start by collecting all information available in Bloomberg on sovereign rating changes. There are three certified rating agencies that monitor emerging countries and issue credit ratings for a number of them. These agencies are Fitch, Moody’s and Standard and Poor’s (S&P). In this chapter we use ratings issued by Moody’s and S&P because they cover more countries for a longer period of time. For all emerging countries covered by each rating agency, we collected debt rating history for long-term, foreign currency denominated obligations.29 To rate issuers, Standard and Poor’s uses 24 different ratings, ranging from D (lowest) to AAA (highest), with a “+” or “-“ to denote issuers above or below the mean in each category. Moody’s has 27 ratings, from a lowest rating of C to a highest rating of Aaa (Moody’s adds the number 1, 2, or 3 after the rating to signal an issuer above the mean, on the mean or below the mean in each category, respectively). Each rating agency has a slightly different threshold for so-called ‘investment grade’ issuers. Standard and Poor’s considers everything above a rating of BBB- (inclusive) as investment grade, whereas Moody’s considers Baa3 (inclusive) and above as an investment grade rating. Tables 13 and 14 summarize the information collected on sovereign rating changes. Table 13 shows the 32 emerging countries covered by Standard and Poor’s. Korea and Thailand were among the first countries rated (1988 and 1989, respectively), while Venezuela was the first Latin American country rated (in 1990). Bulgaria, Jamaica and Ukraine are the most recently added countries to their watch list, in 1998, 1999 and 2001, respectively. As expected, countries that experienced episodes of heightened
We use Standard and Poor’s Emerging Market Database as our criteria to decide which countries are emerging markets. If a country is included in their list, it is included in ours. 72
29
financial turmoil in the 1990s are among the countries that experienced most credit rating changes, so it is no surprise to see Indonesia (16), Korea (11), Russia (11), and Argentina (9) with the largest number of rating changes. It is interesting to observe the evolution of ratings in some countries. Portugal, for instance, had a slow but steady increase in credit worthiness. Other countries, like Korea and Venezuela, had credit ratings that were all over the place, experiencing credit upheavals that made them get very high as well as very low credit ratings. Only 8 countries (Chile, China, Czech Republic, Greece, Malaysia, Portugal, Qatar and Thailand) have always been rated as ‘investment grade’, whereas 10 additional countries have enjoyed that status at some time or another. Table 14 shows information from rating changes announced by Moody’s. Argentina (1986), Brazil (1986), and Malaysia (1986) are among the first countries covered. The most recent additions to their list of covered countries are Ukraine (1998), Chile (1999) and Egypt (2001). Although the same countries are covered, there are important differences as to how long they have been covered and also –we can only speculate on this- about the level of attention given to them by the rating agencies. For instance, Indonesia (with 16 rating changes from Standard and Poor’s) only has 4 rating changes from Moody’s. Argentina, on the other side (with 9 changes from Standard and Poor’s) has 19 rating changes from Moody’s. Argentina (19), with Malaysia (14) and Russia (10), are the countries with the largest number of rating changes from Moody’s. Daily stock index data was collected from Datastream for 29 countries. Specifically, we collected the return index (datatype RI) for Datastream’s local stock indices in U.S. dollars. Indices not available in U.S. dollars were converted from their
73
local currency into U.S. dollars using the prevailing exchange rates available also on Datastream. For 21 countries we use Datastream’s local stock indices, whereas for 8 more countries we use their local stock exchange index. We could not find information for the remaining 3 countries.
4.3.2
Methodology
In conducting our study, we run into a typical problem when interpreting announcement effects: in efficient markets, the announcement effect of the event we want to study will measure the difference between the post-announcement effect and what was expected beforehand. If investors had a high likelihood beforehand of an announcement occurring, the updating element is small and the announcement effect underestimates the impact of the event. In the case of debt ratings, rating agencies are famous for being secretive about their decision process. Therefore, although rating changes are sometimes anticipated, there is uncertainty about when, if at all, a rating change will occur (Cornell, Landsman and Shapiro, 1989). We follow the standard event study methodology by Brown and Warner (1985). The methodology developed in that paper has been successfully applied to a wide variety of events, for instance mergers and acquisitions and the cross listings of shares in new markets. A common denominator of those applications is the fact that the event being the focus of the study is rarely a ‘sudden’ occurrence. Usually, news about a merger or a cross listing of shares is leaked or is even publicly announced prior to them taking place. In this chapter, we are interested in the stock price reaction when news of a sovereign rating change is made public.
74
Event studies are by definition joint tests of hypotheses. To be able to measure abnormal returns, one has to define what a normal return is, i.e. make an assumption on the return-generating process. More often than not, the market model is used to compute normal returns.30 The null of event studies is that there should be no significant abnormal average returns if the event is uncorrelated with the stock return. We will compute abnormal returns according to the following specification: ˆ ˆ Ai ,t = Ri ,t − α i − β i Rm,t
ˆ ˆ where α i and β i are the OLS parameters obtained from the (t-244, t-6) estimation
window, Ai ,t is the abnormal return for firm i, Ri ,t is the return for firm i, and Rm ,t is the world market return from Datastream. We then compute the significance of the average abnormal return for each date in the estimation window using a test statistic. The statistic is computed as the ratio of the mean abnormal return to the estimated standard deviation from the time series of mean abnormal returns.
4.3.3 Discussion of index level results
Tables 15 and 16 show the results from event studies using rating changes from Standard and Poor’s and Moody’s, respectively. Using S&P data, we have a total of 160 rating changes, out of which 81 are rating downgrades, 57 are upgrades and 22 are initial
30
Other measures of normal returns used in the literature are mean adjusted returns (i.e. using the simple average of a security’s daily return in some pre-defined estimation window) and market adjusted returns (where the return of the market is subtracted from each individual firm return). Brown and Warner (1985) showed that the OLS market model is well-specified in most cases and that it outperforms the other two return-generating processes when some assumptions (normality, autocorrelation, etc.) are relaxed. Using 250 simulations of 50 randomly chosen stocks with daily data Brown and Warner (1985) showed that methodologies based on the OLS-market model are well-specified. 75
rating announcements.31 Table 15, panel A has the results of the event study conducted using S&P downgrades. There seems to be evidence of news leaking to the market a few days before the actual announcement, which is consistent with anecdotical evidence as well as with information collected through informal conversations with practitioners. Rating changes usually don’t strike like bolts of lightning, but rather many times they are ‘telegraphed’ to the market days in advance. This could explain the fact that abnormal returns for t-3, t-2, t-1 and 0 are statistically significant. Also, cumulative abnormal returns (CAR) for the (-5, +5) and (-1,+1) are negative and statistically significant at the 1% level. The 11-day CAR for rating downgrades is -2.8% (t-stat of -2.37), while the 3day CAR is -1.8% (t-stat of -2.95). Figure 17 illustrates a negative trend of both the daily abnormal return (bars) and the cumulative abnormal return (line) starting at t-5. Panel B of Table 15 has the results from the analysis of rating upgrades issued by S&P. Consistent with earlier domestic literature we find no evidence of a significant stock price reaction to rating upgrades. No single abnormal return in the (-5, +5) period is significant, and the visual inspection of figure 18 –which displays abnormal returns and cumulative abnormal returns- provides no further clues. Table 16 presents the results of the analysis using Moody’s rating changes. Moody’s has 53 downgrades, 45 upgrades and 19 initial rating announcements for the countries in our sample. Panel A has the results for Moody’s rating downgrades. Although the abnormal returns at t-4 and t-1 are negative and significant at the 1% and 5% level, the 11-day and 3-day cumulative abnormal returns are statistically insignificant. These results seem to suggest that, for sovereigns, Moody’s ratings are
The reason we have only 22 initial rating announcements is that not all countries had index data available at the time their initial sovereign credit rating was announced. 76
31
considered as less informative by the local stock markets.32 Panel B of Table 16 shows the results of rating upgrades. Just as it was the case with Standard and Poor’s upgrades, we find no evidence of significant abnormal returns. Ours is not the first paper to find that outside the U.S. Standard and Poor’s ratings are considered more informative than ratings issued by Moody’s. Mollemans (2004) finds a similar result looking at Japanese firms, and Beaver, Shakespeare, and Soliman (2004) document the different stock price reactions to ratings issued from certified and noncertified rating agencies. Hu, Kiesel and Perraudin (2001) opted for S&P ratings over Moody’s to estimate transition matrices for sovereign ratings because they were more informative, although they noted that of a sample of 49 sovereigns rated by both agencies at the time they wrote their paper, 28 had the same rating, 14 were apart by only one notch and 7 were apart by two notches. So far, we have looked separately at rating changes by agency, in a manner consistent with the extant literature. Now we try two different approaches. One is to use the initial rating announcement by either agency of a credit rating for a country, i.e., the first time a country is rated. By definition, this is a good news event. Non-rated countries –or investment bankers acting on their behalf- initiate contacts with rating agencies when they want to access international capital markets for the first time, when they want to improve the terms under which they access international capital markets or to attract foreign investors into its local debt or equity markets. Countries would hire a rating
One difference that becomes evident from looking at the breakdown of rating change announcements is that Moody’s makes more ‘outlook’ change announcements than Standard and Poor’s. An outlook change is a warning issued by the rating agency, usually stating that a country’s rating is under review or under stress. Although not all warnings materialize in actual rating changes (only about 60% of them do), and not all changes are preceded by warnings, one possibility we had to consider is that stock prices react to outlook announcements and not to actual rating changes. We re-run our analysis using outlook announcements dates and still found no significant abnormal returns for Moody’s data. 77
32
agency when their prospects are good, and therefore when they have reasonable high expectations of being rated favorably. 33 Viewed in this light, initial ratings are usually good news, and our expectation is to find favorable stock price reactions to the announcement of an initial rating, or at least stock price reactions similar to those observed for rating upgrades. Table 17 has the results for this experiment. We have 17 countries with data going back far enough to cover the date of the initial rating. Of those 17 countries, 10 had initial ratings by S&P and 10 by Moody’s. We do not find any evidence of a significant reaction across countries to these events, just as it was the case with sovereign rating upgrades. The second approach is to look at the impact of the first rating change issued by either agency to a given event. This is not straightforward, as Moody’s and S&P rating changes do not move in tandem. For instance, in the 1990-2003 period Moody’s had 19 rating changes for Argentina while S&P had only 9. We proceeded by defining pairs of events of any two rating changes by both agencies that took place within a six month period. The earliest announcement within each pair is what we consider as first rating. Table 18 has the complete list of paired rating changes for Argentina.34 It shows which agency is identified with providing –chronologically- the first rating if both agencies changed a country’s rating within a six month period. We did this pairing of ratings for each one of the 29 countries in the sample, and ended up with 40 downgrades (21 were from S&P and 19 from Moody’s) and with 38 upgrades (21 from Moody’s and 38 from S&P).
33 34
See Martell and Stulz (2003). Tables for all the countries included in this paper are omitted for brevity and available upon request. 78
Table 19 has the results of this analysis of first ratings. As with the complete sample of rating changes, we only find significant results for rating downgrades. Panel A has the results from analyzing sovereign downgrades by S&P (21) and by Moody’s (19). The difference in the stock reaction to ratings issued by each agency is evident. Downgrades issued by S&P are associated with a negative 11-day abnormal cumulative return of 6%, significant at the 1% level. The 3-day CAR is also negative and significant. No CAR computed from announcements by Moody’s is significant, and what’s more, the 11-day CAR is positive (although not significant). Overall, Table 19 provides evidence that financial markets react only to sovereign credit downgrades issued by Standard and Poor’s. Summarizing our index level results, we find evidence of emerging stock indices reacting to news of sovereign rating downgrades, i.e., we find significant 3-day and 11day negative cumulative abnormal returns following a sovereign rating downgrade. We find no evidence of a significant stock market reaction to the news of sovereign upgrades. We also find evidence of local stock markets reacting more to sovereign rating changes issued by S&P than to those issued by Moody’s.
4.4.
Impact of sovereign rating changes at the firm level
In the previous section we looked at the effect of sovereign rating changes on domestic stock indices. This analysis, however, leaves several important questions unanswered. For instance, does a change in a sovereign rating affect the terms on which a domestic firm can get credit? If so, does it affect all firms in the same way? This begets the question of why does the country rating matter for domestic firms. If a sovereign
79
rating change causes an exogenous change in the cost of capital, it is important to understand whether that exogenous change is the same for all firms and whether firm characteristics are related to the magnitude of the stock price impact of that exogenous shock. One possibility is that all firms are affected in the same way following a sovereign credit change. For instance, a sovereign credit rating downgrade leads to an increase in sovereign yield that can signal that the country as a whole is a riskier place, e.g. it is a place with higher probability of expropriation, and/or can also signal an increased level of market segmentation, all of which would mean that the expected return on all local stocks should increase. This would be consistent with the work by Morck, Yeung and Yu (2000). They showed that stock prices moved together more in emerging countries than in richer countries and conclude that political events in low-income countries can create market-wide stock price swings, mainly because poorer economies offer less diversification opportunities to their firms. A sovereign downgrade is by definition a worsening of rating that increases political risk, and can be seen domestically as a political event with market-wide stock price reactions. The first step is to investigate if prior access to international capital markets does affect firms’ stock reactions to sovereign rating changes. We split our sample according to whether a firm has an ADR trading abroad, an Eurobond issue, or both. Table 20 presents abnormal returns from sovereign downgrades (panel A) and upgrades (panel B). As expected, all the action takes place in panel A. Oddly, CARs from firms with and without ADR issues are nearly identical. CARs for firms with international debt, however, are much more negative than those of firms with no international debt. This
80
suggests that firms that already have an international debt issue will probably need to tap international markets again, either to refinance or to get new resources. However, a sovereign downgrade will induce an exogenous worsening of the terms under which these firms got their international credit. Therefore, these firms are more sensitive to sovereign downgrades. To investigate further which firm characteristics are associated with the observed behavior of stock prices after sovereign credit changes, we will compute individual abnormal returns and then estimate cross-sectional regressions of those returns on a group of variables that capture firm and country characteristics. We will compute abnormal returns for each firm included in Datastream’s stock indices using a simple CAPM framework (i.e., market model). Our firm accounting data comes from Worldscope. For each firm, we collected intangible assets (item 02649), total assets (item 02999), total debt (item 03255), total revenues (item 07240), and cash plus short term investments (item 02001). We also collected other items like foreign income as a percentage of total income (item 08741) and total interest expense (item 01075), but decided not to use them as their reduced availability impacted severely on our sample size. Following Morck et al (2000) we collected GDP per capita and stock market capitalization of public firms as a percentage of GDP from the World Bank’s Economic Indicators database. All variables were converted to U.S. dollars using prevailing exchange rates collected from Datastream. Cumulative abnormal returns (CARs) were computed in the same way as in the previous section, using market model adjusted returns. We report z-statistics instead of t-statistics to account for the varying number of firms in our computations.
81
We also collected data on firm’s access to international equity and debt markets, gathering information from Citibank and The Bank of New York on ADRs (all levels) that traded in the U.S. from 1990 to 2003. Out of 385 firms with ADRs in the countries included in this chapter, 156 are included in our sample. We also got data from SDC and Datastream on all live and dead non-sovereign bonds from countries in our sample and manually matched them. We ended up with 228 firms that had at least one bond trading in international markets at the time of the sovereign rating change. As expected because regulatory requirements are less stringent for debt issuance than for equity raising, we have more firms accessing international debt markets than international equity markets. Our reasoning for collecting information on access to international capital markets goes as follows. Healthy firms might choose to go abroad to get financing because it might be cheaper to do so or because their financial needs are not fulfilled in domestic financial markets. When the government of the country where a firm is domiciled is downgraded, these firms suddenly find –everything else equal- that the terms on which they access international markets deteriorate. This will in turn raise the cost of capital for new projects, making some of them unprofitable and therefore reducing future cash flows, all of which translates into lower stock prices. Since these firms came to rely on international financing on a regular basis, we hypothesize that firms that have accessed international capital markets in the past –either equity markets via an ADR program or debt via an international bond offering- would experience larger negative stock price reactions following a sovereign downgrade and larger positive stock price reactions following a sovereign upgrade.
82
Table 21 shows results obtained from regressing individual firm cumulative abnormal returns following sovereign downgrades on several firm and country variables. These abnormal returns were computed in the following way: ˆ ˆ Ai ,t = Ri ,t − α i − β i Rm,t
ˆ ˆ where α i and β i are the OLS parameters obtained from the estimation window,
Ai ,t is the abnormal return for firm i, Ri ,t is the return for firm i, and Rm ,t is the world
market return from Datastream, all of them denominated in dollars. We include country dummies to control for country fixed effects, although for brevity we do not report the results of each intercept.
35
We are using the world market return to make these returns
comparable to those presented in the index-level discussion of the previous section. Panels A and B show 3 and 11-day dollar CARs, respectively. Availability of firm accounting data on Worldscope is of some concern, since our sample got reduced from 2,523 observations to 1,487 observations in the omnibus regressions. R-squared values are much better than the ones obtained by previous research, although most of the explanatory power comes from the fixed country effects specification. In panel A, the minimum R-square value for 3-day abnormal returns is 24.7% and the max R-square value is 28.21%, whereas in Panel B (11-day CARs) the min R-square is 15.29% and the max is 21.72%. Both panels have a qualitatively similar story, although since results are stronger in panel A, we will focus our discussion of results on that panel. We started by estimating regressions on country dummies (this is equivalent of running a fixed effects regression with country level effects) and a constant. The second
35
Detailed regression results are available upon request. 83
regression specification on panel A includes dummies to control for the existence of ADRs and/or international bonds, and we find that only the latter has a negative significant coefficient. This result, however, reverses strongly in all other regression specifications when we include size proxies, log of GDP per capita and the country’s stock market capitalization to GDP. Not only has the ADR dummy a significant and negative coefficient, but its interaction with the log of total assets also has a significant coefficient (positive). In all cases, the log of GDP per capita and the market capitalization to GDP are significant at the 1% level and positive. Two proxies for size, the log of total assets and revenues to total assets have negative coefficients, significant at least at the 10% level. Interestingly, the measure of intangible assets as proportion of total assets never has a significant coefficient, although the negative sign is consistent with Cornell et al (1989). Even though we conducted regressions controlling for fixed country effects, the log of GDP per capita and the proportion of stock market capitalization to GDP always have positive and significant coefficients. These results suggest that firms located in richer countries (as measured by GDP per capita) experience smaller stock price drops following a downgrade of their government’s debt. The richer a country is, the more diversified we expect its economy to be and therefore there will be a richer variety of economic activities in which domestic firms can engage, making firms less sensitive to systematic shocks (Morck et al, 2000). Also, firms located in countries with more developed financial markets (measured by stock market capitalization to GDP) experience smaller price reductions although the coefficients associated with this variable
84
are at least two orders of magnitude smaller than the coefficients associated with the log of GDP. The two variables we use to proxy for firm size, log of total assets and revenues over total assets, are consistent in the story they tell. Both have negative and significant coefficients in every specification in panel A. It is sensible to think that only larger firms will outgrow local financial markets to the point that they need to go abroad to get the needed financing, making them more sensitive to exogenous shocks to the cost of international financing. Table 22 present the results of similar regressions as those shown in Table 21 using CARs following sovereign credit upgrades. Although the index-level results did not produce significant abnormal returns, this does not mean that there is no information in the cross section. As with Table 21, panel A of Table 22 has the results from regressions that use 3-day CARs as the dependent variable, while panel B uses 11-day CARs. Although both panels have the same qualitative picture, panel B has a sharper picture and our description shall focus on it. Interestingly, the coefficients for log of GDP per capita and stock market capitalization to GDP have the opposite sign as before (now it is negative). This result is significant at the 1% level for both variables. Both size proxies have insignificant coefficients in the omnibus regressions. The proportion of inventories to assets has a significant positive coefficient, as well as the proportion of total debt to total assets. Most importantly, having had any type of access to international capital markets makes no difference in the abnormal return experienced by these firms. Overall, the picture that emerges from Tables 21 and 22 is one consistent with sovereign credit changes having an asymmetric impact on the stock prices of domestic
85
firms. Larger firms experience larger reductions in their stock price following sovereign downgrades yet they do not experience larger increases when a sovereign upgrade is announced. We think this reflects the exogenous cost imposed on the firm by a sovereign downgrade, especially in the case where the sovereign ceiling is binding. If a firm’s rating is constrained by the sovereign rating, a drop in sovereign rating will force an increase in the cost of international debt for that firm. An increase in sovereign rating, on the other hand, does not reduce the cost of international debt. Larger firms are more probable to experience this behaviour. Firms with more debt as a proportion of total assets benefit more from a sovereign upgrade yet their stock prices are not punished as much when their sovereign government is downgraded. Firms located in richer countries and in countries with more developed financial markets experience smaller losses in their stock prices following a downgrade but also experience smaller gains following upgrades of their sovereign government. Finally, having accessed international equity and/or debt markets makes a firm more vulnerable to sovereign downgrades but does not grant any additional benefits when the host government is upgraded. This effect is more important for larger firms, as evidenced by the significant coefficient for the interaction of ADRs and size.
4.5.
Conclusions
In this chapter we analyzed the impact of sovereign credit rating changes on local stock markets. In the first part of the chapter, we established that local stock indices from 29 emerging countries react only to news of sovereign rating downgrades, in a manner consistent with the results of existing domestic literature. We also established that
86
international markets consider rating changes from S&P more informative than rating changes issued by Moody’s. We then looked at the effect of sovereign rating changes on individual firm stock prices. Analyzing the cross section of abnormal returns of 1281 firms, we found that firms located in richer countries and in countries with more developed financial markets experience smaller stock price reductions following a downgrade of their host government. Our evidence also suggests that larger firms are more sensitive to sovereign downgrades, as well as firms that have accessed international equity or debt markets. Our results are relevant for domestic and foreign investors, firms located in emerging countries and regulators. Local regulators can better assess the risks faced by the firms they are supposed to regulate. Further, a better understanding of how these country-wide shocks affect local firms can help them engage in more efficient ways to lower their cost capital – both at home and abroad.
87
CHAPTER 5
CONCLUSIONS
This dissertation contributes to three literatures in finance. The first essay sheds light on the determinants of debt yield spreads for countries and firms. This essay identified the existence of a two strong common components, unrelated to credit risk and distinct for each type of debt, in credit spreads of non-U.S. sovereign bonds and domestic U.S. corporate bonds. Using a vector autoregressive (VAR) model, I find that domestic spreads are related to the lagged first common component of sovereign spreads. While there is no contemporaneous common component in bond spreads, there seems to be a common component when focusing on the dynamics of these spreads. Traditional macroliquidity variables are related to the common components found in domestic and sovereign spread changes, raising the explanatory power of the VAR equations from a minimum R-square value of 5% to a maximum value of 67%. Flows in and out equity and bond funds explain more of the variation in the common components than net borrowed reserves from the Federal Reserve. The first essay contributes to the literature on the dynamics of credit spreads by showing that current structural models of debt spreads can be improved if these findings are incorporated in them. To the extent that investors depend on these models to hedge
88
the credit risk of their bond positions, they can benefit from a better understanding of the determinants of credit spreads changes. This essay also shows that, after taking into account the dynamics of the common components in credit spreads across different debt instruments, the cost of debt for firms and countries depends to some extent on shocks that affect all types of debt. The second essay applied multivariate, extreme-value techniques to returns on Latin American local stock indices, the S&P 500 and Latin American ADRs. I find evidence of asymmetric behavior in the left and right tails of the joint marginal extreme distributions for six Latin American countries. I also identified differences in extreme correlations for different instruments (investing in ADRs vs. investing directly in the local stock markets) where no difference was to be expected. Finally, this essay documents evidence of a structural change in the correlations for the Mexican case before and after the 1995 Mexican crisis. The third and final dissertation essay analyzed the impact of sovereign credit rating changes on local stock markets. I first established that local stock indices from 29 emerging countries react only to news of sovereign rating downgrades, in a manner consistent with the results of the existing literature on U.S. ratings changes. I also established that international markets consider rating changes from S&P more informative than rating changes issued by Moody’s. When I look at the effect of sovereign rating changes on individual firm stock prices, interesting patterns arise. I document an average price drop of 8% for the firms in my sample following a sovereign downgrade. This drop is more pronounced for firms that have issued debt abroad (negative 12%). When I analyze the cross-section of
89
abnormal returns to sovereign upgrades and downgrades, I find that firms located in richer countries and in countries with more developed financial markets experience smaller stock price reductions following a downgrade of their host government. Also, larger firms, as well as firms that have accessed international equity or debt markets, are more sensitive to sovereign downgrades,. This finding suggests that larger firms – which are more likely to find themselves financially constrained in their domestic financial markets - rely more on external sources of financing. The cost of accessing these external sources of financing is exogenously increased for these firms when news of a sovereign downgrade reaches the markets. This effect is even more pronounced for firms that have previously enjoyed access to international capital markets, as they tend to rely more in this type of financing.
90
BIBLIOGRAPHY
Amato, Jeffery D. and Craig Furfine, 2003, Are credit ratings procyclical?, BIS working papers, no. 129, Bank for International Settlements Monetary and Economic Dept. Anderson, Ron and Suresh Sundaresan, 1996, Design and Valuation of Debt Contracts, Review of Financial Studies, Vol. 9, No. 1, pp. 37-68. Bae, K., G.A. Karolyi and R.M Stulz, 2003, A new approach to measuring financial contagion, Review of Financial Studies, Vol. 16(3), Fall 2003, pp. 717-764. Beaver, William H., Shakespeare, Catherine and Mark T. Soliman, 2004, Differential Properties in the Ratings of Certified vs. Non-Certified Bond Rating Agencies, unpublished working paper, Stanford University. Black, Fisher and John C. Cox, 1976, Valuing Corporate Securities: Some Effects of Bond Indenture Provisions, Journal of Finance, Vol. 31, No. 2, pp. 351-367. Bodie, Zvi, Alex Kane and Alan J. Marcus, 1999, Investments, Irwin-McGraw-Hill, Fourth Edition. Boivin Jean and Serena Ng, 2003, Are More Data Always Better for Factor Analysis?, Working paper, Columbia University. Brännäs, Kurt, Shahiduzzaman, Quoreshi and Ola Simonsen, 2002, Extreme-Value Characteristics in Daily Time Series of Swedish Stock Returns, unpublished working paper, Umea University. Brooks Robert, Faff, Robert, Hillier, David and Joseph Hillier, 2004, The National Market Impact of Sovereign Rating Changes, Journal of Banking and Finance, Vol. 28, pp. 233 – 250. Brooks, C.,Clare, A.D., Dalle Molle, J.W. and G. Persand, 2005, A comparison of extreme value theory approaches for determining value at risk, Journal of Empirical Finance, forthcoming. Brown, Stephen J. and Jerold B. Warner, 1985, Using daily stock returns: The case of event studies, Journal of Financial Economics, Vol. 14, pp. 3-31 Bulow, Jeremy and Ken Rogoff, 1989, A Constant Recontracting Model of Sovereign Debt, Journal of Political Economy, Vol. 97, No. 1, pp. 155-178. Calvo, Guillermo, Leiderman, Leonardo, and Carmen Reinhart, 1993, Capital inflows and the real exchange rate appreciation in Latin America: The role of external factors, IMF Staff Papers.
91
Cantor, Richard and Frank Packer, 1996, Determinants and Impact of Sovereign Credit Ratings, Federal Reserve Bank of New York Review, October. Chakravarty, Sugato and Asani Sarkar, 1999, Liquidity in U.S. fixed income markets: A comparison of the bid-ask spread in corporate, government and municipal bond markets, Working paper, Federal Reserve Bank of New York. Chan-Lau, Jorge, Mathieson, Donald and James Yao, 1998, Extreme Contagion in Equity Markets, working paper 02/98, International Monetary Fund. Chen, Long, Lesmond, David and JasonWei, 2003, Bond Liquidity Estimation and the Liquidity Effect in Yield Spreads, Working Paper. Chordia, Sarkar and Subrahmanyan, 2003, An Empirical Analysis of Stock and Bond Market Liquidity, Working paper, Federal Reserve Bank of New York. Chuhan, Punam, Claessens, Stijn and Nlandu Mamingi, 1998, Equity and bond flows to Latin America and Asia: the role of global and country factors, Journal of Development Economics, 55, 439-463. Cifarelli, Giulio and Giovanna Paladino, 2002, An Empirical Analysis of the CoMovement among Spreads on Emerging-Market Debt, Working Paper. Claessens, Stijn, and George Pennachi, 1996, Estimating the Likelihood of Mexican Default from the Mexican Prices of Brady Bonds, Journal of Financial and Quantitative Analysis, Vol. 31, No. 3, pp. 109-126. Collin-Dufresne, Pierre, Goldstein, Robert and Spencer Martin, 2001, The Determinants of Credit Spread Changes, The Journal of Finance, Vol. 56, No. 6, pp. 2177 – 2205. Cornell, Bradford, Landsman, Wayne and Alan Shapiro,1989, Cross sectional regularities in the response of stock prices to bond rating changes, Journal of Accounting, Auditing and Finance, Vol. 4, pp. 460 – 479. Cruces, Juan José, 2001, Statistical Properties of Sovereign Credit Ratings, unpublished working paper, University of San Andres. Dichev, Ilia and Joseph Piotroski, 2001,The long run stock returns following bond rating changes, Journal of Finance, Vol. 56, pp. 173 – 203. Dornbusch, R., Y. Park, and S. Claessens, Contagion: How it spreads and how it can be stopped?, unpublished paper, MIT, Cambridge, Ma. Duffie, Darell, Pedersen, Lasse and Ken Singleton, 2003, Modeling Sovereign Yield Spreads: A Case Study of Russian Debt, Journal of Finance, Vol. 58, No. 1, pp 119 – 159. Duffie, Darrell and Ken Singleton, 1997, An econometric model of the term structure of interest rate swap yields, Journal of Finance, Vol. 52, pp. 1287 – 1381. Duffie, Darrell and Ken Singleton, 1999, Modeling term structures of Defaultable Bonds, Review of Financial Studies, Vol. 12, pp. 687-720.
92
Durbin, Erik and David Ng, 2004, The sovereign ceiling and emerging market corporate bond spreads, Journal of International Money and Finance, forthcoming. Eaton, Jonathan and Mark Gersovitz, 1981, Debt with Potential Repudiation: Theoretical and Empirical Analysis , Review of Economic Studies, Vol. 48, No. 2, pp. 289309. Edwards, Sebastian, 1984, LDC Foreign Borrowing and Default Risk: An Empirical Investigation, 1976-80, American Economic Review, Vol. 74, No. 4, pp. 726-734. Eichengreen, Barry and Ashoka Moody, 1998, What Explains Changing Spreads on Emerging-Market Debt: Fundamentals or Market Sentiment?, NBER Working Paper No. w6408. Elayan, Fayez, Maris, Brian and Phillip Young, 1996, The effect of commercial paper rating changes and credit-watch placement on common stock prices, Financial Review, Vol. 31, pp. 149-167. Elton, Edwin, Gruber, Martin, Agrawal, Deepak and Christopher Mann, 2001, Explaining the Rate Spread on Corporate Bonds, Journal of Finance, Vol. 56, No. 1, pp. 247 – 277. Embrechts P, McNeil AJ and Straumann D, 2002. “Correlation and dependence in risk management: properties and pitfalls”, in Risk management: value at risk and beyond, edited by Dempster M, published by Cambridge University Press, Cambridge Embrechts, P., C. Klueppelberg and T. Mikosch, 1997. Modelling Extremal Events, Springer, Berlin. Embrechts, P., L. de Haan and X. Huang, 1999. Modelling multivariate extremes, working paper. Eom, Young Ho, Helwege, Jean and Jay Huang, 2004, Structural Models of Corporate Bond Pricing: An Empirical Analysis", Review of Financial Studies, Vol. 17(2), pp. 499-544 Fisher, R.A. and Tippet L.H.C., 1928. Limiting forms of the frequency distribution of the largest or smallest number of a sample, Proc. Cambridge Philos. Soc., 24, pp. 180-190. Forbes K. and R. Rigobon, 1998, No contagion, only interdependence: measuring stock market co-movements, Sloan School of Management working paper. Frankel, J. and S. Schmukler, Crises, Contagion, and Country Funds: Effects on East Asia and Latin America, Center for Pacific Basin Monetary and Economic Studies WP No. PB96-04, Federal Reserve Bank of San Francisco, September 1996. Gencay, Ramazan and Faruk Selcuk, 2001, Overnight borrowing, interest rates and extreme value theory, unpublished working paper, University of Windsor, Canada.
93
Gencay, Ramazan and Faruk Selcuk, 2004, Extreme value theory and Value-at-Risk: Relative performance in emerging markets, International Journal of Forecasting, vol. 20, pp. 287 – 303. Gibson, Rajna and Suresh M. Sundaresan, 2001, A Model of Sovereign Borrowing and Sovereign Yield Spreads, Research Paper EFA 0044, PaineWebber Working Paper Series at Columbia University. Gnedenko, B.V., 1943. Sur la distribucion limite du terme d’une serie aleatoire, Ann. Math. , 44, pp 423-453. Goh Jeremy and Louis Ederington, 1993, Is a bond rating downgrade good news, bad news or no news for stockholders? Journal of Finance, Vol. 48, pp. 2001 – 2008. Goh Jeremy and Louis Ederington, 1999, Cross-sectional variation in the stock market reaction to bond rating changes, Quarterly Review of Economics and Finance, Vol. 26, pp. 101-112. Griffin, Paul and Antonio Sanvicente, 1982, Common stock returns and rating changes: A methodological comparison, Journal of Finance, Vol. 37, pp. 103-119. Gumbel, E.J., 1961. Multivariate extremal distributions, Bulletin de l’Institut International de Statistiques, Session 33, Book 2, Paris. Hand, John, Holthausen, Robert and Richard Leftwich, 1992, The Effect of Bond Rating Agency Announcements on bond and stock prices, Journal of Finance, Vol. 47, pp 733-752. Hartmann, Philipp, Stefan Straetmans and Casper G de Vries, 2001. “Asset Market Linkages in Crisis Periods”, Centre for Economic Policy Research, working paper No. 2916. Hartmann, Philipp, Straetmans, Stefan and Casper de Vries, 2001, Asset Market Linkages in Crisis Periods, working paper No. 71, European Central Bank. Harvey, Campbell and Roger Huang, 2002, The Impact of Federal Reserve Bank's Open Market Operations, Journal of Financial Markets, Vol. 5, No. 2, pp. 223-257. Helwege, Jean and Christopher M. Turner, 1999, The Slope of the Credit Yield Curve for Speculative-Grade Issuers, Journal of Finance, Vol. 54, No. 5, pp. 1869 – 1884. Hernández-Trillo, Fausto, 1995, A Model-Based Estimation of the Probability of Default in Sovereign Loan Markets, Journal of Development Economics, Vol. 46, no. 1, pp 163-179 . Holthausen, Robert and Richard Leftwich, 1986, The effect of bond rating changes on common stock prices, Journal of Financial Economics, Vol. 17, pp. 57-90. Hotchkiss, Edith and Tavy Ronen, 1999, The informational efficiency of the corporate bond market: An intraday analysis, Working paper, Boston College. Houweling, Patrick, Mentink, Albert and Ton Vorst, 2002, Is Liquidity Reflected I Bond Yields? Evidence from the Euro Corporate Bond Market, Working paper, Erasmus University.
94
Hu, Yen-Ting, Kiesel, Rudiger and William Perraudin, 2001,The Estimation of Transition Matrices for Sovereign Credit Ratings, working paper, Bank of England and CEPR. Huang Jing-Zhi and Ming Huang, 2003, How much of the Corporate-Treasury Yield Spread is Due to Credit Risk?, Working Paper. Huang, Jing-Zhi and Weipeng Kong, 2003, Explaining Credit Spread Changes: Some New Evidence from Option-Adjusted Spreads of Bond Indices, Working Paper, Penn State University. Hull, John, Pedrescu, Mirela and Alan White, 2004, The Relationship Between Credit Default Swap Spreads, Bond Yields, and Credit Rating Announcements, Journal of Banking and Finance, Vol. 28, pp. 2789-2811. Jondeau, Eric and Michael Rockinger, 2001, Testing for differences in the tails of stock market returns, unpublished working paper, HEC, Paris, France. Jorion, Philippe, Shi, Charles, and Zhu Liu, 2004, Informational Effects of Regulation FD: Evidence from Ratings Agencies, Journal of Financial Economics, forthcoming. Joutz, Frederik and William Maxwell, 2002, Modeling the yields on noninvestment grade bond indexes, credit risk and macroeconomic factors, International Review of Financial Analysis, Vol. 11, pp. 345 – 374. Kamara, Avraham, 1994, Liquidity, Taxes and Short-Term Treasury Yields, Journal of Financial and Quantitative Analysis, Vol. 29, No. 3, pp. 403-417. Kamin, Steven and Karsten von Kleist, 1999, The Evolution and Determinants of Emerging Market Credit Spreads in the 1990s, Board of Governors of the Federal Reserve System, International Finance Discussion Papers, No. 653. Kaminsky, G.L. and S.L. Schmukler, 1999, What triggers market jitters? A Chronicle of the Asian crisis, Journal of International Money and Finance18, 537-560. Kaminsky, Graciela and Sergio Schmukler, 2002, Emerging Market Instability: Do Sovereign Ratings Affect Country Risk and Stock Returns?, World Bank Economic Review, Vol. 16, pp. 171 – 195. Kliger, Doron and Oded Sarig, 2000, The information value of bond ratings, Journal of Finance, Vol. 55, pp. 2879 – 2902. Krugman, Paul and Julio Rotemberg, 1991, Speculative Attacks on Target Zones, in P. Krugman and M. Miller, eds., Exchange Rate Target and Currency Bands, Oxford University Press. Krugman, Paul, 1985, International Debt Strategies in an Uncertain World, in J. Cuddington and R. Smith, eds., The World Debt Problem, World Bank. Krugman, Paul, 1989, Private Capital Flows to Problem Debtors, in J. Sachs, ed., The International Debt Problem, University of Chicago Press.
95
Lando, David, 1998, On Cox Processes and Credit Risky Securities, Review of Derivatives Research, Vol. 2, pp. 99-120. Ledford, Anthony W. and Jonathan A. Tawn, 1996. “Statistics for near independence in multivariate extreme values”, Biometrika, 83, 1, pp. 169-187. Leland, Hayne E. and K. B. Toft, 1996, Optimal capital structure, endogenous bankruptcy, and the term structure of credit spreads, Journal of Finance, Vol. 51, No. 3, pp. 987-1019. Leland, Hayne E., 1994, Risky Debt, Bond Covenants and Optimal Capital Structure, Journal of Finance, Vol. 49, No. 4, pp. 1213-1252. Litterman, Robert and Jose Scheinkman, 1991, Common factors affecting bond returns, Journal of Fixed Income, Vol. 1, pp. 54-61. Longin, F. and B. Solnik, 1995, Is the correlation in international equity returns constant: 1960-1990?, Journal of International Money and Finance 14, 3-26. Longin, F. and B. Solnik, 2001. “Extreme Correlation of International Equity Markets”, Journal of Finance, 56, No. 2, pp 649-676. Longin, F., 1996. The asymptotic distribution of extreme stock market returns, , 63, pp 383-408. Longstaff, Francis and Eduardo Schwartz, 1995, A simple approach to valuing risky fixed and floating rate debt, Vol. 50, No. 3, pp. 789-819. Longstaff, Francis, Mithal, Sanjay and Eric Neis, 2003, The credit default swap market: Is credit protection priced correctly?, Working paper, UCLA. Marsch, Terry and Niklas Wagner, 2004, Return-Volume Dependence and Extremes in International Markets, unpublished working paper, Haas School of Business, U.C. Berkeley. McNeil AJ and Frey R, 2000. “Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach”, Journal of Empirical Finance, 7: 271-300. McNeil, A., 1997. On Extremes and Crashes, working paper. Merrick, John J. Jr., 2000, Crisis Dynamics of Implied Default Recovery Rations: Evidence from Russia and Argentina, Journal of Banking and Finance, Vol. 25, No. 10, pp. 1921-1939. Merton, Robert, 1974, On the pricing of corporate debt: The risk structure of interest rates, Journal of Finance, Vol. 29, No. 2, pp. 449-469. Miller, Darius and John Puthenpurackal, 2005, Security Fungibility and the Cost of Capital: Evidence from Global Bonds, Journal of Financial and Quantitative Analysis, forthcoming. Ming, Hong G., 1998, The determinants of Emerging Market bond spread: Do economic fundamentals matter?, Policy Research Working Paper 1899, The World Bank.
96
Mollemans, Michael, 2004, The Credit Rating Announcement Effect in Japan, unpublished working paper, Macquarie University. Morck, Randall, Yeung, Bernard and Wayne Yu, 2000, The Information Content of Stock Markets: Why Do Emerging Markets Have Synchronous Stock Price Movements?, Journal of Financial Economics, Vol. 58(1), pp 215-260. Odders-White, Elizabeth and Mark Ready, 2003, Credit ratings and stock liquidity, University of Wisconsin working paper. Pagès, Henri, 2001, Can liquidity risk be subsumed in credit risk? A case study from Brady bond prices, Bank for International Settlements, BIS working papers, No. 101. Patro, Dilip and John Wald, 2005, Firm Characteristics and the Impact of Emerging Market Liberalizations, Journal of Banking and Finance, forthcoming. Poon, Ser_Huang, Rockinger, Michael and Jonathan Tawn, 2002, Modelling ExtremeValue Dependence in International Stock Markets, unpublished working paper, Lancaster University. Poon, Ser-Huang and Han Lin, 2000, Taking Stock at the New Millenium, unpublished working paper, Lancaster University. Qi, Rong, 2001, Return-Volume Relation in the Tail: Evidence from Six Emerging Markets, unpublished working paper, Columbia University. Reich, Christian and Patrick Wegmann, 2002, Extremal Dependence between Market and Liquidity Risk: An Analysis for the Swiss Market, unpublished working paper, Wirtschaftswissenschaftliches Zentrum, Universität Basel. Rigobon, R., 2000, Identification through heteroskedasticity: measuring “contagion” between Argentinean and Mexican sovereign bonds, NBER working paper 7493. Sarig, Oded and Arthur Warga, 1989, Bond Price Data and Bond Market Liquidity, Journal of Financial and Quantitative Analysis, Vol. 24 (3), pp. 367 – 378. Scherer, Paul K. and Marco Avellaneda, 2000, All for One ... One for All?: A Principal Component Analysis of Latin American Brady Bond Debt from 1994 to 2000, Working paper, New York University. Schich, Sebastian, 2002, Dependencies between European stock markets when price changes are unusually large, Discussion paper 12/02, Economic Research Centre of the Deutsche Bundesbank. Schultz, Paul, 2001, Corporate Bond Trading: A Peek Behind the Curtain, Journal of Finance, Vol. 56, No. 2, pp. 677-698. Straetmans, Stefan, Verschoor, Werner and Christian Wolff, 2003, Extreme U.S. stock market fluctuations in the wake of 9/11, unpublished working paper, Maastricht University. Stulz, René M., 2003, Risk Management and Derivatives, Thomson-Southwestern, First Edition.
97
Susmel, Raul, 2001, Extreme Observations and Diversification in Emerging Markets, Journal of International Money and Finance, Vol. 20, pp. 971-986. Sy, Amadou, 2003, Rating the Rating Agencies: Anticipating Currency Crises or Debt Crises, IMF working paper No. 03/122. Tawn, Jonathan A., 1988. “Bivariate Extreme Value Theory: Models and estimation”, Biometrika, 75, 3, pp. 397-415. Tolikas, Konstantinos and Richard Brown, 2003, The Distribution of Extreme Daily Share Returns in the Athens Stock Exchange, unpublished working paper, University of Dundee. Wakeman, L. Macdonald, 1992, The Real Function of Bond Rating Agencies, in The Revolution in Corporate Finance, ed. Joel M. Stern and Donald H. Chew, Jr. Cambridge, MA: Blackwell Publishers, 1992 Wansley, James W. and Terrance M. Clauretie, 1985, The impact of Creditwatch placements on equity re-turns and bond prices, Journal of Financial Research, Vol. 8(1), pp 31-42. Westphalen, Michael, 2002, Valuation of Sovereign Debt with Strategic Defaulting and Rescheduling, Research Paper 43, FAME Research Paper Series. Westphalen, Michael, 2003, The Determinants of Sovereign Bond Credit Spread Changes, Working Paper, FAME. Zaima, Janis and Joseph McCarthy, 1988, The impact of bond rating changes on common stocks and bonds: tests of the wealth redistribution hypothesis, Financial Review, Vol. 23, pp. 483 – 498.
98
APPENDIX A
A COMPARISON OF SOVEREIGN BOND COVERAGE ON DATASTREAM AND THE NAIC DATABASE.
99
Researchers have access today to two main sources for international bond data time series. One is Datastream International and the other one is the Fixed Income Securities Database provided by the University of Houston (the Warga database). Both databases differ in format, coverage, and data providers. While Datastream does not make public the identity of their data provider due to legal technicalities, I found through informal conversations with their support staff that Lehman Brothers and ISMA (International Securities Market Association) are among their main data providers. The inclusion of Lehman Brothers as a data provider for Datastream is re-assuring since prior to 1998 Lehman Brothers used to be the main source of fixed income instruments data (they discontinued support of that database in 1998). To analyze Datastream and Warga databases’ coverage on sovereign bonds I compare the set of bonds available on both. Ideally, I would have wanted to compute correlations of monthly returns for the bonds on both databases, however, that is not possible because the Warga database does not provide with time series data for the bonds it covers. Before conducting the comparison I will make a brief description of the Warga data. This $12,000 database comes in two CDs. The first one contains Microsoft Access files that include lots of tables with exhaustive information (static) data for 157,488 different bonds issued by 10,057 different issuers. When I filter out sovereign issues using the included ‘industry_code’ variable, I come up with 1,579 bond issues from 146 sovereign issuers. The files in the first CD, however, do not contain any price information other than prices at launch. The second CD includes information from all insurance companies’ buys and sells of instruments in the first CD for 1995-1999, provided by NAIC –National Association of Insurance Commissioners. Although the web site for this
100
database () says that price information is available up until 2001, the last update purchased a few months ago by the finance department at Ohio State includes only data up to the end of 1999. Since the price information is only for the purchase and sell of securities, and because many of these securities do not trade frequently, it is not possible to construct a regular time series –say monthly- with prices for these issues. Further, the price data comes in four files: two files for purchases and two files for sales. The first pair of files covers the 1995-1998 period and the second set covers the 1998-1999 period. Datastream has continuous coverage from the early 90s until today. The only way to automatically link information from Datastream and the Warga data is through the ISIN (International Security Identification Number) code, which is available on both of them. For 35 emerging market countries included in my first dissertation essay, Datastream has codes (a code is Datastream’s own id number) for 291 different bonds. Out of those 291 different sovereign bonds, there is ISIN information for 276 of them. Table 1 has the list of such countries. Table 2 has information of the availability of data from those 35 emerging markets on the Warga price files. There are total 105 different bonds across all four pricing files. Table 2 shows that there 73, 58, 88 and 89 unique bonds in each file. However, not all those bonds are present in Datastream. Only 40, 35, 55 and 56 bonds from each Warga file respectively are in both databases. As mentioned before, there are not nearly enough observations per bond to construct a time series. The average number of observations per bond is 10, the minimum is 1 (and also the mode) and the maximum is 30, although in those cases there are many observations clustered around certain
101
periods of time, and therefore having 30 observations does not mean we have 30 monthly observations. So, we end up with 552, 221, 844 and 745 –respectively- usable data points where we have clean price information for a specific bond on an the same date. Simple clean price correlations range between 92% and 97%, whereas holding period return correlations between Datastream and Warga data range from 73% to 89%. These numbers make us feel more comfortable with the overall quality of Datastream data, at least for sovereign data. In my job market paper I use yields from Datastream, and being that yields are only complex transformations of clean prices, the relevant correlations are the one between clean prices. Further, it seems clear that Datastream has a better coverage of sovereign issues (276 vs. 105 bonds in the 35 emerging market countries analyzed here), plus a deeper one (this updated version of the Warga database only has data from 1994 until the end of 1999). Hopefully, this quick analysis should ease concerns about the integrity and quality of Datastream bond price data.
102
APPENDIX B
TABLES
103 ratio of any of these two measures implies a smaller distance-to-default. So, larger values of them should be associated with higher spreads.
Debt to foreign reserves ratio and Debt to exports ratio Country risk measure
Positive
Positive
Local stock market volatility U.S. Treasury yield curve level
Positive
This measure has a higher value for countries that are perceived to have higher political risks, for instance, higher expropriation risk. The larger the value of this variable, the higher the debt spread. This variable is an imperfect proxy of a country’s wealth volatility. Still, we expect a positive relation since more volatility makes default more likely. Assuming that the country’s wealth.
Negative
U.S. Treasury yield curve slope World return
Negative
A world index return is included as a proxy of the world economic climate or business cycle. On average, we would expect smaller spreads when the world as a whole is doing well. Table 1. Expected signs on explanatory variables for sovereign sample.
Negative
104
Panel A: Descriptive Statistics for Spreads Credit Spread (%) Mean Std. Dev. Skewness Kurtosis Max 90% 10% Min No. of observations All sample 4.834 4.555 3.195 18.001 39.390 39.173 0.040 0.019 9275 AAA to A+ 0.488 0.253 2.263 15.840 2.204 1.332 0.076 0.038 165 A to BBB2.098 1.369 2.170 12.672 15.250 10.142 0.068 0.019 2213 BB+ to B 4.971 3.261 2.451 14.145 33.181 30.967 0.094 0.046 5450 B- to C 12.365 8.090 1.599 5.063 39.390 39.173 1.408 1.241 829 No rating 4.478 3.860 3.503 18.620 37.455 23.681 1.034 1.019 618
Panel B: Means for Selected Country Variables Mean Debt-to-reserves Debt-to-exports Political risk U.S. Treasury yield level (%) U.S. Treasury yield slope All sample 3.919 40.603 51.260 5.285 1.326 AAA to A+ 0.959 3.349 21.330 A to BBB1.742 12.330 40.480 BB+ to B 4.733 52.654 53.546 B- to C 4.746 39.359 66.028 No rating 1.997 8.207 62.255
Panel C: Other data Number of bonds Number of countries 233 37
The sample includes all non-callable, non-puttable sovereign bonds. Debt/Reserves is computed using all outstanding foreign debt (bank loans, Brady bonds and Eurobonds) divided by the total number of international reserves in current.
Table 2. Summary statistics for sovereign sample.
105
∆Spread over U.S. Treasury Constant ∆Years to maturity ∆Debt to foreign reserves ∆Political risk ∆Political risk lagged ∆Political risk 2nd lag ∆Local volatility lagged ∆U.S. Treasury level ∆U.S. Treasury slope
AAA to BBB0.452 (1.74)* 5.85 (1.87)* 0.048 (0.85) 0.006 (0.62) 0.03 (3.06)*** -0.013 (1.33) 0.003 (0.05) -0.807 (16.24)*** 0.135 (1.75)* -0.79 (5.22)***
BB+ to B
-0.06 (0.49) -0.619 (0.42) -0.009 (0.62) 0.052 (4.46)*** 0.002 (0.15) 0.002 (0.14) 0.088 (11.12)*** -1.088 (12.85)*** 0.664 (5.17)*** -5.613 (25.88)*** 3870 0.30
B- to C
-2.769 (1.55) -34.553 (1.61) 0.755 (7.79)*** 0.133 (3.96)*** 0.008 (0.00) 0.041 (1.20) 0.067 (3.00)*** -0.882 (2.60)*** 1.073 (2.15)** -4.663 (7.03)*** 690 0.22
Overall sample
-0.071 (0.65) -0.833 (0.64) 0.04 (2.78)*** 0.075 (7.86)*** 0.016 (1.64) 0.018 (1.85)* 0.075 (11.69)*** -0.978 (14.32)*** 0.51 (4.86)*** -4.664 (26.74)*** 6316 0.22
Local return lagged
Observations R-squared
1630 0.19
This table shows estimates from an OLS regression model with Newey-West adjusted errors. We estimated the following equation to each. We estimated eight different specifications of this basic equation, substituting duration for years to maturity and debt to exports for debt to reserves. Years to maturity is the remaining life of a bond expressed in years, duration is a Macaulay’s duration expressed in years, debt to reserves is the ratio of total debt outstanding (bank loans, Brady and Eurobond issues) denominated in U.S. dollars divided by the total amount of international reserves also denominated in U.S. dollars. Debt to exports is the ratio of total debt outstanding (bank loans, Brady and Eurobond issues) denominated in U.S. dollars divided by the nominal monthly value of exports. Absolute value of t statistics are in parentheses; *, **, *** denote significance at the 10%; 5%; and 1% level respectively.
Table 3. Sovereign spreads fixed effect regressions.
106 leverage ratio increases the probability of a firm facing financial distress. This should increase spreads. Also, from a contingent claims approach, equity return volatility can proxy for firm's value volatility. A higher volatility increases the chance of the firm's value process to cross the threshold at which a firm defaults on its debt. Assuming that the firm's value. As the economic environment improves, measured by the S&P return, we expect firms to do better and therefore to reduce the probability of defaulting on their debt.
Leverage and Equity return volatility U.S. Treasury yield curve level
Positive
Negative
U.S. Treasury yield curve slope S&P 500 return
Negative
Negative
Table 4. Expected signs on explanatory variables for domestic sample.
107
Panel A: Descriptive Statistics for Debt Spreads No leverage data 2.633 3.017 4.007 24.890 29.989 29.849 0.0021 0.0010 38191
Credit Spread (%) Mean Std. Dev. Skewness Kurtosis Max 90% 10% Min No. of observations Panel B: Means for Selected Variables Mean Leverage Std. Dev. U.S. Treasury yield level (%) U.S. Treasury yield slope
Overall sample 2.393 2.779 4.543 31.661 29.989 29.849 0.0012 0.0007 71831
Low 1.359 1.187 5.621 77.578 26.846 22.659 0.0031 0.0012 11061
Leverage Class Medium High 1.717 1.319 5.210 70.700 29.636 26.812 0.0125 0.0007 11247 3.263 3.566 3.947 22.628 29.847 29.812 0.0279 0.0062 11332
0.343 0.023 5.285 1.326
Panel C: Other data Number of bonds Number of countries 2,930 649
The sample includes all non-callable, non-puttable domestic bonds issued by industrial firms. Leverage is computed as the ratio of book value of debt divided by the sum of book value of debt and market value of equity. Stock market volatility is computed monthly from daily stock market log returns. The U.S. Treasury yield level is the yield of the 10 year U.S. Treasury note. The U.S. Treasury slope is computed as the difference between the yield of the 10 year and the 2 year U.S. Treasury notes.
Table 5. Summary statistics for domestic sample.
108
∆Spread over U.S. Treasury Constant ∆Years to maturity ∆Leverage ∆leverage lagged ∆Stock return volatility ∆Stock return volatility lagged ∆U.S. Treasury level ∆U.S. Treasury slope S&P return lagged
Low -0.046 (2.18)** -0.591 (2.35)** -0.447 (1.96)** 3.127 (13.80)*** 1.073 (2.43)** 4.141 (9.59)*** -0.218 (14.53)*** -0.002 (0.08) -0.002 (1.99)** 9080 0.07
Leverage Class Medium -0.031 (1.36) -0.141 (0.52) -0.005 (0.03) 0.941 (5.69)*** -0.164 (0.31) 3.323 (6.28)*** -0.338 (22.27)*** 0.023 (0.86) 0.008 (0.09) 9679 0.07
High 0.075 (0.87) 0.834 (0.81) -0.027 (0.09) 6.747 (22.32)*** 3.133 (4.18)*** 11.866 (15.69)*** -0.421 (11.46)*** 0.116 (1.83)* -0.007 (3.32)*** 8709 0.13
Overall sample -0.019 (0.85) -0.249 (0.93) 0.018 (0.12) 4.445 (30.77)*** 2.149 (5.84)*** 8.582 (23.42)*** -0.323 (23.28)*** 0.031 -1.28 -0.003 (4.05)*** 27468 0.09
Observations R-squared
This table shows estimates from an OLS regression model with Newey-West adjusted errors. We estimated the following equation to each bond observation:. We estimated six different specifications of this basic equation, substituting duration and modified duration for years to maturity. Years to maturity is the remaining life of a bond expressed in years, duration is a Macaulay’s duration expressed in years, modified duration is estimated as duration divided by (1+ redemption yield).Stock return volatility is the standard volatility of each firm's stock return computed each month from daily log index return is the log return of Datastream's S&P 500 total return index. Absolute value of t statistics are in parentheses; *, **, *** denote significance at the 10%; 5%; and 1% level respectively.
Table 6. Domestic spreads fixed effect regressions.
109
Panel A: Sovereign bins s11 s12 s13 s11 1.000 s12 0.826 1.000 s13 -0.007 0.511 1.000 s21 0.997 0.817 -0.004 s22 0.828 0.953 0.459 s23 0.367 0.781 0.895 s31 0.983 0.821 0.011 s32 0.891 0.896 0.259 s33 0.820 0.932 0.394 Panel B: Domestic bins d11 d12 d13 d11 1.000 d12 0.978 1.000 d13 0.616 0.622 1.000 d21 0.967 0.968 0.744 d22 0.921 0.955 0.707 d23 0.685 0.673 0.856 d31 0.930 0.971 0.679 d32 0.859 0.920 0.712 d33 0.865 0.874 0.876
s21
s22
s23
s31
s32
s33
1.000 0.818 0.369 0.989 0.886 0.801 d21
1.000 0.773 0.822 0.966 0.950 d22
1.000 0.370 0.624 0.708 d23
1.000 0.897 0.806 d31
1.000 0.946 d32
1.000 d33
110
1.000 0.928 0.747 0.954 0.893 0.909
1.000 0.747 0.943 0.973 0.915
1.000 0.633 0.659 0.797
1.000 0.960 0.909
1.000 0.922
1.000
continued...
Table 7 continued
Panel C: Sovereign and Domestic bins s11 s12 s13 s21 s11 1.000 s12 0.826 1.000 s13 -0.007 0.511 1.000 s21 0.997 0.817 -0.004 1.000 s22 0.828 0.953 0.459 0.818 s23 0.367 0.781 0.895 0.369 s31 0.983 0.821 0.011 0.989 s32 0.891 0.896 0.259 0.886 s33 0.820 0.932 0.394 0.801 d11 0.227 0.119 -0.168 0.221 d12 0.248 0.127 -0.173 0.245 d13 -0.211 0.063 0.464 -0.182 d21 0.064 0.073 0.020 0.065 d22 0.194 0.154 -0.004 0.212 d23 -0.156 0.193 0.572 -0.141 d31 0.159 0.062 -0.143 0.160 d32 0.161 0.086 -0.058 0.181 d33 -0.003 0.075 0.173 0.020 s22 s23 s31 s32 s33 d11 d12 d13 d21 d22 d23 d31 d32 d33
1.000 0.773 0.822 0.966 0.950 0.068 0.117 0.092 0.082 0.150 0.100 0.129 0.160 0.116
1.000 0.370 0.624 0.708 -0.087 -0.079 0.388 0.054 0.074 0.440 -0.059 0.032 0.172
1.000 0.897 0.806 0.205 0.242 -0.159 0.065 0.210 -0.124 0.181 0.196 0.040
1.000 0.946 0.101 0.157 -0.046 0.070 0.142 -0.079 0.173 0.177 0.057
1.000 0.100 0.114 -0.094 0.057 0.036 -0.005 0.074 0.005 -0.063
1.000 0.978 0.616 0.967 0.921 0.685 0.930 0.859 0.865
1.000 0.622 0.968 0.955 0.673 0.971 0.920 0.874
1.000 0.744 0.707 0.856 0.679 0.712 0.876
1.000 0.928 0.747 0.954 0.893 0.909
1.000 0.747 0.943 0.973 0.915
111
1.000 0.633 0.659 0.797
1.000 0.960 0.909
1.000 0.922
1.000
This table presents the correlation structure of the residual bins. Each sample (sovereign and domestic) was divided in three maturity categories and three leverage (debt to reserves, in the sovereign case) categories. The cutoff values for each category were determined using the 33rd and 66th centile to ensure an approximately equal number of observations in each bin. Each observation was assigned to a category. To compute the residuals, regressions were conducted in each bin. Then, for each bin, we average across residuals. The bins are named dij and sij for i, j=1, 2, 3, where d stands for domestic and s stands for sovereign, i for maturity category (1 = shot-term, 2 = medium-term, 3 = long-term), and j stand for leverage (debt to reserves) category (1 = low, 2 = medium, 3 = high). For example, d23 refers to a domestic, medium-term, high-leverage bin. Each sovereign bin contains 66 observations, while each domestic bin contains 148 observations.
Table 7. Correlation structure of residuals.
Panel A: Sovereign bins
Component 1 2 3 4 Eigenvalue 6.8483 1.8331 0.1757 0.0710 Difference 5.0152 1.6574 0.1047 0.0406 Proportion 0.7609 0.2037 0.0195 0.0079 Cumulative 0.7609 0.9646 0.9841 0.9920
Eigenvalue 8.0 7.0 6.0 5.0 4.0 3.0 2.0 1.0 0.0 0
1
2
3 Component
4
5
Eigenvalue 10.0
Panel B: Domestic bins
Component 1 2 3 4 Eigenvalue 7.7606 0.7679 0.2527 0.1385 Difference 6.9927 0.5152 0.1142 0.0932 Proportion 0.8623 0.0853 0.0281 0.0154 Cumulative 0.8623 0.9476 0.9757 0.9911
8.0 6.0 4.0 2.0 0.0 0 1 2 3 Component 4 5
Panel C: Domestic and Sovereign bins
Component 1 2 3 4 Eigenvalue 7.5705 5.9621 2.9687 0.6225 Difference 1.6083 2.9935 2.3462 0.2166 Proportion 0.4206 0.3312 0.1649 0.0346 Cumulative 0.4206 0.7518 0.9167 0.9513
Eigenvalue 8.0 7.0 6.0 5.0 4.0 3.0 2.0 1.0 0.0 0
1
2
3 Component
4
5
This table presents the results of applying principal components analysis to the residuals. Each sample (sovereign and domestic) was divided in three maturity categories and three leverage (debt to reserves, in the sovereign case) categories. Each observation was assigned to a category. For each bond in each sovereign bin we estimated the following equation:. For bonds in the domestic bins the following equation was estimated:. Residuals for each bond were computed and averaged across bins. For ease of interpretation, only the first four components are shown for each panel.
Table 8. Principal component analysis of residuals. 112
Sovereign bonds Overall ∆Spread over U.S. Treasury Constant AAA to BBB0.203 (0.65) ∆Years to maturity 2.875 (0.77) ∆Debt to foreign reserves 0.103 (0.86) ∆Political risk 0.023 (2.06)** ∆Political risk lagged 0.031 (2.76)*** ∆Political risk 2nd lag -0.01 (0.92) ∆Local volatility lagged -0.005 (0.72) ∆U.S. Treasury level -0.59 (6.80)*** ∆U.S. Treasury slope -0.147 (1.24) Local return lagged 1st factor domestic -0.641 (3.73)*** 0.026 (3.56)*** 2dn factor domestic -0.024 (1.75)* BB+ to B -0.063 (0.81) -0.892 (0.97) 0.035 (2.64)*** 0.019 (2.44)** 0.022 (2.89)*** 0.004 (0.56) 0.054 (10.49)*** -0.495 (6.33)*** -0.207 (1.83)* -3.645 (24.99)*** -0.039 (5.39)*** -0.021 (1.60) B- to C -1.055 (0.35) -14.339 (0.40) 1.047 (6.80)*** 0.086 (2.24)** -0.026 (0.67) 0.046 (1.19) 0.113 (3.95)*** -1.903 (2.37)** 2.163 (2.25)** -6.104 (7.59)*** -0.161 (2.72)*** 0.141 (1.60) sample -0.067 (0.72) -1.213 (1.10) 0.125 (7.01)*** 0.049 (5.77)*** 0.024 (2.83)*** 0.021 (2.42)** 0.052 (9.20)*** -0.59 (6.72)*** -0.035 (0.28) -3.601 (22.78)*** -0.044 (5.42)*** 0.024 (1.69)* 1st factor sovereign lagged 2dn factor sovereign 1st factor sovereign S&P return lagged ∆U.S. Treasury slope ∆U.S. Treasury level ∆Stock return volatility lagged ∆Stock return volatility ∆leverage lagged ∆Leverage ∆Years to maturity ∆Spread over U.S. Treasury Constant
Domestic bonds Leverage Class Low 0.026 (1.14) 0.175 (0.64) -0.673 (2.78)*** 4.122 (17.10)*** 0.936 (2.29)** 4.487 (11.46)*** -0.085 (3.65)*** -0.154 (4.26)*** -0.004 (4.55)*** 0.006 (3.20)*** -0.001 (0.48) 0.004 (1.84)* Medium -0.019 (0.83) -0.006 (0.02) 0.147 (0.86) 1.19 (7.08)*** -1.009 (1.86)* 3.281 (6.39)*** -0.17 (6.13)*** -0.021 (0.49) -0.002 (1.81)* 0.004 (1.68)* -0.008 (2.26)** 0.001 (0.53) High 0.103 (1.06) 1.009 (0.88) 0.198 (0.56) 7.622 (21.00)*** 2.033 (2.26)** 12.396 (14.37)*** -0.26 (3.25)*** -0.106 (0.87) -0.009 (3.07)*** 0.006 (2.84)** 0.015 (1.44) -0.006 (0.77) Overall sample 0.019 (0.68) 0.109 (0.33) 0.141 (0.83) 5.393 (31.64)*** 1.495 (3.57)*** 9.197 (22.91)*** -0.18 (6.42)*** -0.091 (2.11)** -0.005 (4.92)*** 0.005 (2.35)** 0.003 (0.89) -0.001 (0.53)
113
continued...
Table 9 continued
1st factor domestic lagged 0.013 (1.85)* 2dn factor domestic lagged -0.084 (4.04)*** Observations R-squared 1405 0.20 0.036 (5.07)*** -0.13 (6.79)*** 3460 0.35 -0.192 (2.93)*** 0.492 (2.74)*** 513 0.31 0.002 0.00 -0.07 (3.23)*** 5504 0.20 Observations R-squared 6630 0.11 5735 0.08 5864 0.16 18229 0.12 2dn factor sovereign lagged -0.013 (3.23)*** -0.015 (2.90)*** -0.003 (0.17) -0.004 (0.78)
This table shows estimates from an OLS regression model with Newey-West adjusted errors. We estimated the following equation to each sovereign+ β8*1st factor domestict + β9*2nd factor domestict + εi,t. For each domestic bond. we estimated the following equation: + β8*1st factor sovereignt + β9*2nd factor sovereignt + εi,t. Years to maturity is the remaining life of a bond expressed in years, debt to reserves is the ratio of total debt outstanding (bank loans, Brady and Eurobond issues) denominated in U.S. dollars divided by the total amount of international reserves also denominated stock return is the log return of Datastream's S&PCOMP total return index. Stock return volatility is the standard volatility of each firm's stock return computed each month from daily log returns in U.S. dollars. The S&P index return is the log return of Datastream's S&P 500 total return index. Absolute value of t statistics are in parentheses; *, **, *** denote significance at the 10%; 5%; and 1% level respectively.
Table 9. Sovereign and domestic regressions including the common factors.
114
Equation 1st factor domestic 1st factor sovereign 2nd factor domestic 2nd factor sovereign 1 1
Obs 59 59 59 59 Coeff. -0.060 0.014 -0.072 0.424 -0.226 -0.500 0.188 0.241 -0.032 -0.135 6.786 9.751 0.309 Coeff. 0.225 -0.179 0.128 -0.119 -0.464 0.469 0.388 -0.113 -0.052 -0.312 18.746 18.549 -0.055
R-squared No exog. All vars. vars. 0.3684 0.4496 0.0586 0.3769 0.2029 0.4978 0.1490 0.6732 Std. Error 0.120 0.115 0.098 0.108 0.298 0.282 0.219 0.222 0.017 0.056 5.752 9.013 0.222 Std. Error 0.136 0.131 0.112 0.123 0.340 0.322 0.250 0.253 0.019 0.064 6.564 10.286 0.253 z -0.50 0.12 -0.73 3.94 -0.76 -0.77 0.86 1.09 -1.880 -2.420 1.18 1.08 1.39 z 1.65 -1.36 1.14 -0.97 -1.36 1.46 1.55 -0.45 -2.710 -4.910 2.86 1.8 -0.22 P>z 0.614 0.902 0.467 0.000*** 0.448 0.427 0.389 0.278 0.060* 0.015** 0.238 0.279 0.164 P>z 0.099* 0.174 0.256 0.333 0.172 0.145 0.120 0.654 0.007* 0.000*** 0.004*** 0.071* 0.827 2 2 Coeff. -0.064 0.076 -0.091 0.079 0.244 -0.368 0.187 0.123 0.017 0.043 13.740 -0.083 -0.104 Coeff. -0.024 0.081 0.002 0.112 0.326 -0.381 0.112 0.287 0.054 -0.112 13.553 -0.731 0.028 Std. Error 0.060 0.058 0.050 0.054 0.151 0.143 0.111 0.112 0.008 0.028 2.909 4.558 0.112 Std. Error 0.061 0.058 0.050 0.055 0.152 0.143 0.111 0.113 0.009 0.028 2.924 4.582 0.113 z -1.06 1.31 -1.82 1.45 1.62 -2.58 1.69 1.1 1.980 1.770 4.72 -0.02 -0.92 z -0.39 1.38 0.04 2.06 2.15 -2.66 1.01 2.54 6.360 -3.950 4.64 -0.16 0.25 P>z 0.290 0.189 0.068* 0.146 0.106 0.010*** 0.092* 0.273 0.048** 0.076* 0.000*** 0.986 0.356 P>z 0.696 0.168 0.967 0.040** 0.032** 0.008*** 0.312 0.011** 0.000*** 0.000*** 0.000*** 0.873 0.801
This table shows estimates a vector autoregression model of the following form:
FacSovt = a1 + ∑ β1 j FacSovt − j + ∑ γ 1 j FacDomt − j + δ 1 X t +ε1
j =1 j =1
k
k
k
FacDom = a2 + ∑ β 2 j FacSov− j +∑γ 2 j FacDom− j + δ 2 X t +ε 2 t t t
j =1 j =1
k
The first and second domestic factors are the factors extracted from the principal component analysis of the residuals of equation (2) applied to the domestic bins. The first and second sovereign factors are extracted from the principal component analysis of the residuals of equation (1) applied to the sovereign bins. Net borrowed reserves is computed as total borrowing minus extended credit minus excess reserves, divided by total reserves. Onoff is the difference between the on-the-run thirty year U.S. Treasury bond and the most recent off-the-run bond. Flowsstocks is from the IFC’s statistics and is the amount of money flowing into equity mutual funds. Flowbonds is from the same source and represents the flows into bond funds; *, **, *** denote significance at the 10%; 5%; and 1% level respectively.
Table 10. Vector autoregression model with exogenous variables.
115
Argentina Local index
Mean Maximum Minimum Std. Dev. Observations Unconditional correlation with the S&P 500 0.00022 0.12014 -0.13380 0.01905 1934
Brazil Local index
0.00030 0.15302 -0.12137 0.02190 1694
Chile Local index
0.00067 0.17549 -0.14989 0.01559 2870
ADRs
0.00056 0.25784 -0.51544 0.03211 1934
ADRs
-0.00407 0.40547 -0.69315 0.03040 1694
ADRs
0.00074 0.16866 -0.15095 0.01602 2870
0.42392
0.17064
0.37893
0.17120
0.15943
0.16059
Colombia Local index
Mean Maximum Minimum Std. Dev. Observations Unconditional correlation with the S&P 500 -0.00029 0.08753 -0.15875 0.01231 2298
Mexico Local index
0.00052 0.19022 -0.25528 0.01954 2870
Venezuela Local index
-0.00027 0.29747 -0.51028 0.02891 2870
S&P 500
ADRs
-0.00037 0.15127 -0.15944 0.01488 2298
ADRs
0.00002 0.19088 -0.27844 0.01841 2870
ADRs
0.00033 0.28549 -0.50828 0.03060 2870 0.00052 0.04928 -0.07022 0.00928 2870
0.03345
-0.00292
0.40539
0.34982
0.05350
0.03456
1.00000
This table provides with basic statistics for the data. Log returns in dollars were calculated using daily data from Datastream. In each case, the local index corresponds to the country index return available in Datastream. ADRs is an equally weighted index of live ADRs of each country in the U.S. Argentina, Brazil and Colombia have less observations because of data conflicts. Specifically, Argentina and Brazil, have problems when backfilling series because of changes in their legal currencies and monetary regimes.
Table 11. Summary statistics.
116
Panel A: Extreme correlations for pairs formed by S&P and local stock market returns. Number of left tail exceedances 25 Argentina Brazil Chile Colombia Mexico Venezuela 0.0563 0.3190 0.1107 NA 0.3772 0.0563 50 0.0572 0.2001 0.1923 0.0285 0.3812 0.0572 100 0.1393 0.3525 0.1816 0.0979 0.3709 0.1393 200 0.1805 0.3471 0.2165 0.1263 0.4407 0.1805 300 0.1902 0.4936 0.2281 0.2085 0.4691 0.1902 300 0.1433 0.4643 0.2132 0.2490 0.3731 0.1433 Number of right tail exceedances 200 0.0917 0.3955 0.1456 0.1576 0.3170 0.0917 100 0.0429 0.3511 0.0983 0.0426 0.3029 0.0429 50 0.0835 0.2134 0.0566 0.0284 0.3416 0.0835 25 NA 0.1093 0.0566 NA 0.2194 NA
Panel B: Extreme correlations for pairs formed by S&P and en equally weighted basket of ADRs. Number of left tail exceedances 25 Argentina Brazil Chile Colombia Mexico Venezuela 0.2348 0.3203 0.2126 0.0566 0.2290 NA 50 0.3324 0.3029 0.1947 0.0288 0.2809 0.0566 100 0.3705 0.4138 0.1697 0.0990 0.3201 0.0853 200 0.3850 0.4194 0.2168 0.1398 0.4263 0.1402 300 0.4785 0.4880 0.2493 0.1708 0.4378 0.1759 300 0.3927 0.4613 0.2040 0.1772 0.3250 0.1436 Number of right tail exceedances 200 0.3476 0.3858 0.1449 0.1243 0.2634 0.0849 100 0.2505 0.3325 0.0841 0.0286 0.2487 0.0568 50 0.2485 0.2659 0.0285 0.0284 0.2198 0.0562 25 NA 0.2119 NA NA 0.1659 NA
Panel C: Extreme correlations for Mexico before and after the 1995 Mexican crisis. Number of left tail exceedances 25 Local before Local after ADRs before ADRS after 0.1149 0.3944 0.1144 0.3390 50 0.2457 0.3695 0.2946 0.2868 100 0.3235 0.4586 0.2881 0.4070 200 0.3816 0.5570 0.3737 0.5022 300 0.4681 0.6159 0.4337 0.5973 300 0.4348 0.5301 0.3732 0.4909 Number of right tail exceedances 200 0.3546 0.4340 0.2715 0.3949 100 0.2516 0.3397 0.1571 0.2988 50 0.1741 0.3213 0.1437 0.2711 25 0.1780 0.2673 0.1703 0.2192
This table shows how correlations change as we move further into the tails. NA stands for not available, since in those cases the parameters of the dependence function could not be estimated. All data was obtained form Datastream. All figures are in dollars. Cutoff point for panel C is 12/29/1994. The equally weighted basket of ADRs was calculated considering only the live issues.
Table 12. Extreme correlations using different number of tail exceedances.
117
Country
Date of initial rating 8/25/1993 11/30/1994 11/23/1998 12/7/1992 12/7/1992 6/21/1993 7/28/1993 1/15/1997 12/7/1992 4/20/1992 12/7/1992 11/9/1999 11/5/1996 10/1/1988 2/26/1997 9/13/1990 7/29/1992 12/21/1994 1/22/1997 12/18/1997 6/30/1993 6/1/1995 12/7/1992 2/1/1996 10/7/1996 2/15/1994 10/3/1994 6/14/1989 4/28/1996 12/21/2001 2/14/1994 7/24/1991
Maximum rating BB BBBB ABBB+ BBBABBBA ABBB B+ BB AABBA+ BBBB+ BB+ BB BB+ BBB+ A+ ABB BBB BBBA B+ B BBBBB
Minimum rating SD B B BBB BBB BB BBB BB+ BBBBB+ SD B BBB+ BBBBBB SD BB BBBBBB AA BBB SD BBBB BBBBB CC CCC+
Number of changes 9 6 4 3 3 3 4 2 4 5 16 2 4 11 4 8 4 8 2 2 3 4 3 3 11 6 3 5 5 1 8
This table shows a breakdown of sovereign rating changes issued by Standard and Poor’s. S&P uses 24 different categories, going from D (lowest) to AAA (highest), with a “+” or “-“ to denote issuers above or below the mean in each category. Ratings above BBB- (inclusive) are considered as investment grade. SD stands for 'Selective Default'. Date of initial rating is the date when the country was first rated by Standard and Poor's. Max and min ratings are the highest and lowest qualifications issued to that country from the date of initial rating to the end of 2003. Table 13. Sovereign rating changes by Standard & Poor's.
118
Country
Date of initial rating 11/18/1986 11/18/1986 9/27/1996 5/25/1999 5/23/1988 8/4/1993 6/22/1998 7/6/2001 5/24/1994 12/27/1993 3/14/1994 3/30/1998 11/11/1996 4/9/1998 2/26/1997 11/18/1986 2/20/1991 11/23/1994 1/22/1997 7/20/1999 7/1/1993 6/1/1995 11/18/1986 9/22/1999 11/22/1996 5/15/1995 10/3/1994 8/1/1989 5/5/1992 2/6/1998 10/15/1993 6/3/1987
Maximum rating Ba3 Ba1 B1 Baa1 A3 Ba1 A1 Ba1 A1 A1 Baa3 Ba3 Baa3 A3 B1 A1 Baa2 Ba3 Ba1 Ba3 Ba1 A2 Aa2 A3 Ba2 A3 Baa2 A2 Baa3 B2 Baa3 Ba1
Minimum rating Caa3 B2 B3 Baa1 Ba2 Baa1 Baa3 Ba1 B3 B1 Ba1 B2 Baa3 Ba2 Caa1
Number of changes 19 9 3 1 6 7 2 1 7 9 4 1 6 6 2 14 8 7 1 1 4 3 5 2 10 4 3 8 9 8 7
Ba3 Baa3 A1 Baa2 B3 Ba1 Baa3 Ba1 B1 Caa1 B3 Caa1
This table shows a breakdown of sovereign rating changes issued by Moody’s. Moody’s has 27 categories, ranging from a lowest rating of C to a highest rating of Aaa. Moody's adds the number 1, 2, or 3 after the rating to signal an issuer above the mean, on the mean or below the mean in each category, respectively. All ratings above Baa3 (inclusive) are considered as investment grade rating. Date of initial rating is the date when the country was first rated by Moody's. Max and min ratings are the highest and lowest qualifications issued to that country from the date of initial rating to the end of 2003. Table 14. Sovereign rating changes by Moody's. 119
Panel A. Sovereign rating downgrades Event day
-5 -4 -3 -2 -1 0 1 2 3 4 5
Panel B. Sovereign rating upgrades Cumulative abnormal return
-0.0010 -0.0040 -0.0136 -0.0248 -0.0350 -0.0422 -0.0429 -0.0362 -0.0314 -0.0282 -0.0279
Abnormal return
-0.0010 -0.0031 -0.0095 -0.0112 -0.0102 -0.0072 -0.0007 0.0067 0.0048 0.0033 0.0003
t-stat
-0.2721 -0.8707 -2.6985 -3.1701 -2.8933 -2.0330 -0.1850 1.8822 1.3562 0.9219 0.0793
Event day
-5 -4 -3 -2 -1 0 1 2 3 4 5
Abnormal return
-0.0001 0.0000 0.0030 -0.0034 -0.0026 0.0023 -0.0014 0.0027 0.0007 -0.0029 -0.0016
Cumulative abnormal return
-0.0001 -0.0001 0.0029 -0.0005 -0.0031 -0.0008 -0.0023 0.0005 0.0012 -0.0016 -0.0033
t-stat
-0.0342 0.0007 1.0155 -1.1558 -0.8640 0.7572 -0.4817 0.9225 0.2435 -0.9571 -0.5410
120
Total number of events: 81 Cumulative abnormal returns (-5, +5) (-1, +1)
CAR t-stat -0.0279 -2.3768 -0.0181 -2.9510
Total number of events: 57 Cumulative abnormal returns (-5, +5) (-1, +1)
CAR t-stat -0.0033 -0.3300 -0.0018 -0.3398 15. Stock index results using Standard and Poor's ratings.
Panel A. Sovereign rating downgrades Cumulative Abnormal Event day abnormal return return
-5 -4 -3 -2 -1 0 1 2 3 4 5 Total number of events: 53 Cumulative abnormal returns (-5, +5) CAR t-stat -0.0094 -0.7586 (-1, +1) -0.0068 -1.0508 0.0038 -0.0143 0.0002 -0.0030 -0.0082 -0.0002 0.0016 0.0052 -0.0010 -0.0003 0.0068 0.0038 -0.0104 -0.0102 -0.0132 -0.0214 -0.0216 -0.0200 -0.0149 -0.0159 -0.0162 -0.0094
Panel B. Sovereign rating upgrades t-stat
1.0324 -3.8333 0.0521 -0.8008 -2.1876 -0.0652 0.4326 1.3825 -0.2772 -0.0776 1.8262
Event day
-5 -4 -3 -2 -1 0 1 2 3 4 5
Abnormal return
-0.0056 0.0024 0.0057 0.0010 -0.0010 0.0035 -0.0038 -0.0036 -0.0025 -0.0013 -0.0048
Cumulative abnormal return
-0.0056 -0.0032 0.0026 0.0036 0.0026 0.0061 0.0023 -0.0013 -0.0038 -0.0051 -0.0099
t-stat
-1.2315 0.5359 1.2607 0.2302 -0.2304 0.7661 -0.8270 -0.7957 -0.5419 -0.2764 -1.0575
121
Total number of events: 45 Cumulative abnormal returns (-5, +5) CAR t-stat -0.0099 -0.6535 (-1, +1) -0.0013 -0.1681 16. Stock index results using Moody's ratings.
Event day
-5 -4 -3 -2 -1 0 1 2 3 4 5
Abnormal return
-0.0025 -0.0021 0.0022 0.0003 -0.0068 -0.0050 0.0078 0.0045 0.0037 0.0052 0.0027
Cumulative abnormal return
-0.0025 -0.0046 -0.0024 -0.0021 -0.0089 -0.0139 -0.0061 -0.0016 0.0021 0.0073 0.0100
t-stat
-0.4332 -0.3671 0.3878 0.0484 -1.1842 -0.8701 1.3625 0.7824 0.6429 0.8970 0.4699
Total number of events: 17 Cumulative abnormal returns (-5, +5) CAR t-stat 0.0100 0.5235 (-1, +1) -0.0040 -0.3994
This table presents results from an event study analysis conducted on datastream's local stock market indices. The event is the date when a country was rated for the first time by either agency. We have 10 instances in which S&P was the initial rating agency and 7 where Moody's was. 17. Stock index results using initial ratings.
122
No. of pairs of events
S&P
Moody's
First rating
1
4/2/1997 10/2/1997
S&P
2
3/26/2001 3/28/2001
S&P
3
5/8/2001 6/4/2001
S&P
4
7/12/2001 7/13/2001
S&P
5
10/9/2001 10/12/2001
S&P
6
11/6/2001 12/20/2001
S&P
7
11/30/1994 11/30/1994
S&P
8 1/14/1999 9 1/3/2001 10 7/2/2002
9/3/1998
Moody's
10/16/2000
Moody's
S&P 8/12/2002
11
11/7/2001 12/19/2001
S&P
This table has the sequence of events identified as pairs of rating changes for Argentina. We define a pair of events as any two rating changes by both agencies that took place within a six month period. The earliest announcement within each pair is what we consider the first rating. Table 18. First ratings for Argentina.
123
Panel A. Downgrades S&P's downgrades Event day -5 -4 -3 -2 -1 0 1 2 3 4 5 Abnormal return -0.0099 -0.0170 -0.0115 -0.0159 -0.0113 -0.0058 0.0059 0.0134 0.0039 -0.0179 0.0060 Cumulative abnormal return -0.0099 -0.0269 -0.0384 -0.0543 -0.0655 -0.0713 -0.0654 -0.0520 -0.0482 -0.0660 -0.0600 t-stat -1.6574 -2.8341 -1.9180 -2.6508 -1.8808 -0.9647 0.9902 2.2295 0.6472 -2.9800 1.0030 Event day -5 -4 -3 -2 -1 0 1 2 3 4 5 Abnormal return 0.0079 -0.0189 0.0144 -0.0002 0.0043 -0.0037 -0.0054 -0.0010 0.0042 0.0102 -0.0101 Moody's downgrades Cumulative abnormal return 0.0079 -0.0110 0.0034 0.0032 0.0075 0.0037 -0.0017 -0.0027 0.0015 0.0117 0.0016
t-stat 1.3997 -3.3419 2.5381 -0.0346 0.7592 -0.6588 -0.9618 -0.1792 0.7488 1.8042 -1.7839
Total number of events: 21 Cumulative abnormal returns (-5, +5) CAR t-stat Panel B. Upgrades S&P's upgrades Event day -5 -4 -3 -2 -1 0 1 2 3 4 5 Abnormal return -0.0038 -0.0037 0.0038 -0.0117 -0.0086 0.0043 0.0062 0.0071 0.0043 -0.0069 0.0003 Cumulative abnormal return -0.0038 -0.0075 -0.0037 -0.0154 -0.0240 -0.0197 -0.0135 -0.0064 -0.0021 -0.0090 -0.0087 t-stat -0.5489 -0.5444 0.5484 -1.7052 -1.2560 0.6301 0.9032 1.0338 0.6272 -1.0068 0.0447 -0.0600 -3.0199 (-1, +1) -0.0111 -1.6711
Total number of events: 19 Cumulative abnormal returns (-5, +5) CAR t-stat 0.0016 0.0874 (-1, +1) -0.0049 -0.4973
Event day -5 -4 -3 -2 -1 0 1 2 3 4 5
Moody's upgrades Cumulative Abnormal abnormal return return -0.0013 0.0071 0.0004 0.0029 0.0015 -0.0001 0.0003 -0.0055 0.0009 0.0003 -0.0001 -0.0013 0.0058 0.0062 0.0091 0.0106 0.0105 0.0108 0.0052 0.0062 0.0065 0.0064
t-stat -0.2074 1.1435 0.0575 0.4663 0.2418 -0.0148 0.0457 -0.8889 0.1516 0.0477 -0.0109
Total number of events: 17 Cumulative abnormal returns (-5, +5) CAR t-stat -0.0087 -0.3841 (-1, +1) 0.0019 0.1601
Total number of events: 21 Cumulative abnormal returns (-5, +5) CAR t-stat 0.0064 0.3112 (-1, +1) 0.0017 0.1574
This table uses the events we identified as pairs of rating changes for all countries in our sample. We define a pair of events as any two rating changes by both agencies that took place within a six month period. The earliest announcement within each pair is what we consider the first rating. Results are from an event study analysis using19. Stock market reaction to the first rating by either agency. 124
Panel A: Sovereign credit downgrades All firms CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.0830 (4.2709) -0.0475 (4.6847) 2,523 ADRs CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.0808 (4.1059) -0.0466 (4.5315) 337 No ADRs -0.0833 (4.0653) -0.0477 (4.4516) 2,186 No international debt -0.0718 (3.8425) -0.0382 (3.9128) 1,956 No ADR and international debt -0.0814 (4.1356) -0.0456 (4.4323) 2,379
Panel B: Sovereign credit upgrades All firms CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.0030 (0.2798) -0.0001 (0.0169) 2,804 ADRs CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.0073 (0.5128) 0.0031 0.4166 384 No ADRs -0.0022 (0.2075) -0.0006 (0.1021) 2,420 No international debt -0.0013 (0.1526) -0.0011 (0.2414) 2,188 No ADR and international debt -0.0023 (0.2213) -0.0003 (0.0599) 2,634
International debt CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.1223 (4.4765) -0.0798 (5.5943) 567
International debt CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.0122 (0.5378) 0.0028 (0.2383) 616
ADR and International debt CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.1058 (4.0382) -0.0793 (5.7954) 144
ADR and International debt CAR (-5, +5) z-stat CAR (-1, +1) z-stat No. of events -0.0181 (0.7640) 0.0034 (0.2711) 170
This table presents CARs for two splits of the original sample. First, we split our sample in firms that have American Deposit Receipts (ADR) trading abroad and those who do not. ADR data is from Citibank and The Bank of New York. We require firms to have an active ADR of level 1, 2 or 3 trading abroad at the time of the sovereign downgrade. The second split is by firms that have internationally traded debt and those who do not. International debt data is from Datastream. We require firms to have at least one outstanding foreign bond or eurobond at the time of the sovereign downgrade. Returns were computed using the market model with datastream's world market (TOTMKWD) as the market return. All returns are in U.S. dollars. Absolute value of z-statistics in parentheses, z-stats are computed using the time series standard deviation of the market model residuals multiplied times the square root of the number of days in the event window as the estimate for the abnormal returns' standard deviation.
Table 20. Cumulative Abnormal Returns (CAR) for Stocks with International Financing.
125
Panel A: 3-day CAR (-1, +1) regressions for all stocks, sovereign downgrades, dollar returns
(1) Constant ADR dummy International debt dummy ADR * Size International debt * Size Log (GDP per capita) YES (2) YES 0.0099 (1.4876) -0.0107 (1.9484)* 0.0053 (1.1792) -0.0011 (0.2919) 0.4216 (6.0513)*** Stock market cap / GDP Size Intangibles / Total assets Revenues / Total assets Inventories / Total assets Debt / Total assets -0.0032 (1.7704)* -0.0009 (0.4802) 0.0015 (5.3021)*** (3) YES -0.0657 (0.9943) 0.0084 (0.1581) (4) YES (5) YES (6) YES -0.0821 (1.2439) 0.0535 (1.0386) 0.0064 (1.4207) -0.0044 (1.2461) 0.4664 (5.4981)*** 0.0017 (5.1709)*** (7) YES -0.1004 (1.4888) 0.0324 (0.6001) 0.0078 (1.6818)* -0.0028 (0.7533) 0.4701 (5.5395)*** 0.0017 (5.2224)*** -0.0026 (1.3318) 0.3343 (2.7927)*** 0.0011 (2.4618)** -0.0086 (1.8790)* -0.0162 (0.3324) 0.0053 (1.4219) -0.0034 (0.1056) 0.0239 (1.5470) 0.0120 (1.9826)** 0.0018 (0.3539) 0.3263 (2.7221)*** 0.0011 (2.3959)** -0.0055 (1.2139) -0.0190 (0.3895) 0.0052 (1.3845) -0.0060 (0.1896) 0.0219 (1.4132) (8) YES -0.1585 (1.8107)* -0.0295 (0.4085) (9) YES (10) YES -0.1649 (1.9032)* 0.0322 (0.4498) 0.0125 (2.0761)** -0.0027 (0.5328) 0.3664 (3.5247)*** 0.0013 (3.3914)*** -0.0084 (1.9410)* -0.0129 (0.2755) 0.0054 (1.5429) -0.0051 (0.1728) 0.0145 (1.0509) (11) YES -0.1838 (1.9701)** 0.0419 (0.5429) 0.0139 (2.1411)** -0.0034 (0.6373) 0.3294 (2.7491)*** 0.0011 (2.4412)** -0.0080 (1.7214)* -0.0165 (0.3386) 0.0055 (1.4566) -0.0030 (0.0935) 0.0248 (1.5966)
126
continued...
Cash / Assets
0.0352 (1.3339)
0.0326 (1.2321) 1,487 27.59% 1,613 27.80%
0.0341 (1.2904) 1,487 27.96%
Observations R-squared
2,523 24.70%
2,523 24.86%
2,161 26.11%
2,161 26.01%
2,523 25.96%
2,161 27.33%
2,161 27.39%
1,487 27.89%
Panel B: 11 day CAR (-5, +5) regressions for all stocks, sovereign downgrades, dollar returns
(1) Constant ADR dummy International debt dummy ADR * Size YES (2) YES 0.0151 (1.5086) -0.0127 (1.5469) 0.0085 (1.2667) International debt * Size Log (GDP per capita) Stock market cap / GDP Size Intangibles / Total assets Revenues / Total assets Inventories / Total assets -0.0048 (1.771299)* -0.0025 (0.8654) 0.0007 (0.1233) 0.4994 (4.7786)*** 0.0003 (0.8165) (3) YES -0.1049 (1.0680) -0.0111 (0.1409) (4) YES (5) YES (6) YES -0.1006 (1.0211) 0.0584 (0.7589) 0.0081 (1.1928) -0.0045 (0.8546) 0.4813 (3.8018)*** 0.0001 (0.1940) (7) YES -0.1311 (1.3027) 0.0230 (0.2865) 0.0103 (1.4948) -0.0018 (0.3320) 0.4875 (3.8493)*** 0.0001 (0.2554) -0.0044 (1.4891) 0.0503 (0.2834) -0.0022 (3.2657)*** -0.0034 (0.5060) -0.0206 (0.2857) -0.0007 (0.1188) 0.0492 (1.0454) 0.0138 (1.5355) -0.0004 (0.0505) 0.0409 (0.2301) -0.0022 (3.3085)*** 0.0014 (0.2068) -0.0262 (0.3619) -0.0008 (0.1490) 0.0438 (0.9317) (8) YES -0.1747 (1.3464) -0.0014 (0.0129) (9) YES (10) YES -0.2232 (1.7147)* 0.0850 (0.7913) 0.0173 (1.9026)* -0.0066 (0.8832) 0.3748 (2.4009)** -0.0001 (0.2429) -0.0038 (0.5805) -0.0134 (0.1904) 0.0003 (0.0524) 0.0412 (0.9352) (11) YES -0.2266 (1.6399) 0.0957 (0.8359) 0.0176 (1.8344)* -0.0075 (0.9377) 0.0416 (0.2344) -0.0022 (3.2858)*** -0.0023 (0.3335) -0.0219 (0.3030) -0.0003 (0.0600) 0.0495 (1.0513)
127
continued...
Table 21 continued
Debt / Total assets Cash / Assets 0.0314 (1.3685) 0.0500 (1.2779) Observations R-squared 2,523 15.29% 2,523 15.42% 2,161 16.84% 2,161 16.66% 2,523 16.20% 2,161 17.66% 2,161 17.75% 1,487 21.56% 0.0283 (1.2321) 0.0466 (1.1876) 1,487 21.27% 1,613 20.05% 0.0138 (0.6669) 0.0325 (1.4151) 0.0481 (1.2266) 1,487 21.66% 21. Cumulative Abnormal Returns (CAR) for all stocks following a sovereign rating downgrade 128
Panel A: 3-day CAR (-1, +1) regressions for all stocks, sovereign upgrades, dollar returns
(1) Constant ADR dummy International debt dummy ADR * Size International debt * Size YES (2) YES -0.0026 (0.8060) 0.0040 (1.4296) -0.0001 (0.0514) 0.0002 (0.1309) 0.0428 (2.5468)** Stock market cap / GDP Size Intangibles / Total assets Revenues / Total assets Inventories / Total assets -0.0008 (0.9591) -0.0011 (1.2226) -0.0004 (7.9173)*** (3) YES 0.0023 (0.0765) 0.0001 (0.0060) (4) YES (5) YES (6) YES 0.0133 (0.4412) 0.0093 (0.4190) -0.0009 (0.4353) -0.0005 (0.3252) 0.0445 (2.4579)** -0.0004 (7.7554)*** (7) YES 0.0068 (0.2222) 0.0017 (0.0752) -0.0004 (0.1985) 0.0001 (0.0523) 0.0461 (2.5379)** -0.0004 (7.7179)*** -0.0010 (1.0835) -0.0462 (1.8766)* 0.0003 (2.1044)** 0.0027 (1.1843) -0.0087 (0.4910) -0.0038 (1.8439)* 0.0178 (1.2028) -0.0014 (0.5607) 0.0015 (0.8020) -0.0460 (1.8731)* 0.0004 (2.1960)** 0.0019 (0.8274) -0.0073 (0.4098) -0.0039 (1.8938)* 0.0185 (1.2530) (8) YES 0.0213 (0.5990) -0.0164 (0.5905) (9) YES (10) YES 0.0271 (0.7216) -0.0293 (1.0266) -0.0016 (0.6418) 0.0022 (1.1512) 0.0473 (2.2399)** -0.0005 (8.1374)*** -0.0011 (0.4643) -0.0075 (0.4123) -0.0003 (0.1441) -0.0006 (0.0450) (11) YES 0.0311 (0.8537) -0.0213 (0.7517) -0.0021 (0.8467) 0.0018 (0.9620) -0.0461 (1.8742)* 0.0003 (2.1494)** 0.0021 (0.9100) -0.0074 (0.4175) -0.0039 (1.8839)* 0.0182 (1.2321)
129
Log (GDP per capita)
continued...
Debt / Total assets Cash / Assets
-0.0056 (0.6757) -0.0023 (0.1792)
-0.0057 (0.6829) -0.0012 (0.0948) 1,459 14.89%
-0.0087 (1.1201)
-0.0058 (0.6977) -0.0010 (0.0789)
Observations R-squared
2,804 8.15%
2,804 8.23%
2,280 10.01%
2,280 10.05%
2,804 10.24%
2,280 12.44%
2,280 12.49%
1,459 14.72%
1,736 15.33%
1,459 14.94%
Panel B: 11 day CAR (-5, +5) regressions for all stocks, sovereign upgrades, dollar returns
(1) Constant ADR dummy International debt dummy YES (2) YES -0.0015 (0.2728) 0.0063 (1.3791) ADR * Size International debt * Size Log (GDP per capita) Stock market cap / GDP Size Intangibles / Total assets Revenues / Total assets -0.0024 (1.7488)* -0.0026 (1.8166)* 0.0024 (0.7024) 0.0013 (0.5032) -0.1343 (4.9031)*** -0.0006 (7.0936)*** (3) YES -0.0296 (0.5882) -0.0125 (0.3290) (4) YES (5) YES (6) YES -0.0001 (0.0016) 0.0033 (0.0910) 0.0002 (0.0592) -0.0001 (0.0253) -0.1611 (5.4098)*** -0.0007 (7.1314)*** (7) YES -0.0125 (0.2480) -0.0112 (0.2932) 0.0011 (0.3238) 0.0010 (0.3965) -0.1581 (5.2895)*** -0.0007 (7.0891)*** -0.0019 (1.2684) -0.2361 (5.7641)*** 0.0002 (0.8759) 0.0071 (1.8513)* -0.0115 (0.3913) -0.0092 (2.6557)*** -0.0016 (0.4017) 0.0032 (1.0330) -0.2370 (5.7991)*** 0.0003 (0.9498) 0.0060 (1.5758) -0.0100 (0.3389) -0.0093 (2.6823)*** (8) YES 0.0281 (0.4743) -0.0414 (0.8972) (9) YES (10) YES 0.0318 (0.5111) -0.0321 (0.6767) -0.0018 (0.4148) 0.0024 (0.7474) -0.1346 (3.8397)*** -0.0008 (6.9547)*** 0.0001 (0.0245) -0.0015 (0.0504) -0.0026 (0.7913) (11) YES 0.0450 (0.7426) -0.0480 (1.0181) -0.0029 (0.6913) 0.0036 (1.1441) -0.2362 (5.7655)*** 0.0002 (0.8944) 0.0062 (1.5930) -0.0100 (0.3376) -0.0092 (2.6636)***
130
continued...
Table 22 continued
Inventories / Total assets Debt / Total assets Cash / Assets 0.0446 (1.8091)* 0.0296 (2.1267)** 0.0247 (1.1590) Observations R-squared 2,804 6.26% 2,804 6.33% 2,280 7.59% 2,280 7.61% 2,804 9.10% 2,280 11.02% 2,280 11.08% 1,459 11.20% 0.0459 (1.8614)* 0.0301 (2.1661)** 0.0258 (1.2067) 1,459 11.29% 1,736 12.06% 0.0030 (0.1280) 0.0083 (0.6485) 0.0455 (1.845929)* 0.0295 (2.1195)** 0.0259 (1.2141) 1,459 11.34% 22. Cumulative Abnormal Returns (CAR) for all stocks following a sovereign rating upgrade 131
Table 23. Countries included in this comparison
Argentina Brazil Bulgaria Chile China Colombia Costa Rica Croatia Czech Dominican Ecuador Egypt
El Salvador Greece Hungary Indonesia Jordan Korea Lebanon Malaysia Mexico Pakistan Panama Peru
Philippines Poland Portugal Qatar Russia Slovakia South Africa Thailand Turkey Venezuela Vietnam
132
Table 24. Coverage for sovereign bonds on Datastream and Warga databases.
Naic303 Total number of different bonds for 37 emerging market countries with valid ISIN data Warga files only Bonds included only in NAIC files Bonds included in Datastream and NAIC Observations with usable data from BOTH databases Price correlation Holding period return correlation HPR one month or less 33
Warga database Naic304 Naic99303
Datastream Naic99304
73
58
88
89
276
23
33
33
40
35
55
56
552 92.33% 73.66% 75.01%
221 97.24% 89.83% 93.64%
844 93.88% 74.88% 84.61%
745 94.70% 75.86% 64.21%
133
APPENDIX C
FIGURES
134
10
12
-6
-6 -4
6/1/1997 9/1/1997 12/1/1997 3/1/1998
-5
6/1/1997 9/1/1997 12/1/1997 3/1/1998 6/1/1998 9/1/1998 12/1/1998 3/1/1999 6/1/1999 9/1/1999 12/1/1999 3/1/2000
-4
-2
-3 0 1 2 3
0 2 4 6 8
6/1/1998 9/1/1998 12/1/1998 3/1/1999 6/1/1999
-2
Sovereign
9/1/1999 12/1/1999 3/1/2000
-1
Figure 1. First common component
Figure 2. Second common component
Sovereign
135
Domestic
6/1/2000 9/1/2000 12/1/2000 3/1/2001 6/1/2001 9/1/2001 12/1/2001 3/1/2002 6/1/2002
Domestic
6/1/2000 9/1/2000 12/1/2000 3/1/2001 6/1/2001 9/1/2001 12/1/2001 3/1/2002 6/1/2002
Figure 3. Q-Q Plots for the left tail and the right tail of the dollar return of the Mexican equity index
Exponential Quantiles
2 -0.15
4
6
8
-0.10
-0.05
0.0 Ordered Data
0.05
0.10
0.15
Exponential Quantiles
2 -0.15
4
6
8
-0.10
-0.05
0.0 Ordered Data
0.05
0.10
0.15
136
Figure 4. Q-Q Plots for the left tail and the right tail of the dollar return of the Mexican ADR equally weighted portfolio
Exponential Quantiles
2
4
6
8
-0.2
-0.1 Ordered Data
0.0
0.1
Exponential Quantiles
2
4
6
8
-0.1
0.0 Ordered Data
0.1
0.2
137
Figure 5. Q-Q Plots for the left tail and the right tail of the dollar return of the S&P 500 equity index
Exponential Quantiles
2
4
6
8
-0.10
-0.05
0.0 Ordered Data
0.05
0.10
0.15
Exponential Quantiles
2
4
6
8
-0.15
-0.10
-0.05
0.0 Ordered Data
0.05
0.10
138
Figure 6. Excess mean graphs for the left tail and the right tail of the dollar return of the Mexican equity index
Mean Excess
0.05
0.10
0.15
0.20
0.25
-0.2
-0.1 Threshold
0.0
0.1
Mean Excess
0.05
0.10
0.15
-0.15
-0.10
-0.05 Threshold
0.0
0.05
0.10
139
Figure 7. Excess mean graphs for the left tail and the right tail of the dollar return of the Mexican ADR equally weighted portfolio.
Mean Excess
0.05
0.10
0.15
0.20
0.25
-0.2
-0.1 Threshold
0.0
0.1
Mean Excess
0.05
0.10
0.15
-0.15
-0.10
-0.05 Threshold
0.0
0.05
0.10
140
Figure 8. Excess mean graphs for the left tail and the right tail of the dollar return of the S&P 500 equity index
Mean Excess
0.02
0.04
0.06
0.08
0.10
0.12
0.14
-0.10
-0.05 Threshold
0.0
0.05
Mean Excess
0.05
0.10
0.15
-0.15
-0.10
-0.05 Threshold
0.0
0.05
141
Figure 9. Correlation between S&P and the Mexican stock market index (solid line) and correlation between S&P and Mexican ADRs (dashed line). The number of exceedances used to calculate each correlation is shown in the horizontal axis.
Figure 10. Correlation between S&P and the Chilean stock market index (solid line) and correlation between S&P and Chilean ADRs (dashed line). The number of exceedances used to calculate each correlation is shown in the horizontal axis.
142
Figure 11. Correlation between S&P and the Venezuelan stock market index (solid line) and correlation between S&P and Venezuelan ADRs (dashed line). The number of exceedances used to calculate each correlation is shown in the horizontal axis.
Figure 12. Correlation between S&P and the Colombian stock market index (solid line) and correlation between S&P and Colombian ADRs (dashed line). The number of exceedances used to calculate each correlation is shown in the horizontal axis.
143
Figure 13. Correlation between S&P and the Brazilian stock market index (solid line) and correlation between S&P and Brazilian ADRs (dashed line). The number of exceedances used to calculate each correlation is shown in the horizontal axis.
Figure 14. Correlation between S&P and the Argentinean stock market index (solid line) and correlation between S&P and Argentinean ADRs (dashed line). The number of exceedances used to calculate each correlation is shown in the horizontal axis.
144
Figure 15. Correlation between S&P and the Mexican stock market index (solid line) and correlation between S&P and Mexican ADRs (dashed line) before the 1995 Mexican crisis. The number of exceedances used to calculate each correlation is shown in the horizontal axis.
Figure 16. Correlation between S&P and the Mexican stock market index (solid line) and correlation between S&P and Mexican ADRs (dashed line) after the 1995 Mexican crisis. The number of exceedances used to calculate each correlation is shown in the horizontal axis.
145
Figure 17. Downgrades (S&P) 0.01 0 -5 -0.01 -0.02 -0.03 -0.04 -0.05 Abnormal return Cumulative abnormal return -4 -3 -2 -1 0 1 2 3 4 5
Figure 18. Upgrades (S&P) 0.004 0.003 0.002 0.001 0 -0.001 -0.002 -0.003 -0.004 Abnormal return Cumulative abnormal return -5 -4 -3 -2 -1 0 1 2 3 4 5
146
Figure 19. Downgrades (Moody's) 0.01 0.005 0 -0.005 -0.01 -0.015 -0.02 -0.025 Abnormal return Cumulative abnormal return -5 -4 -3 -2 -1 0 1 2 3 4 5
Figure 20. Upgrades (Moody's) 0.008 0.006 0.004 0.002 0 -0.002 -0.004 -0.006 -0.008 -0.01 -0.012 Abnormal return Cumulative abnormal return -5 -4 -3 -2 -1 0 1 2 3 4 5
147 | https://www.scribd.com/document/41637124/2005-Three-Essays-in-International-Finance | CC-MAIN-2017-26 | refinedweb | 41,202 | 54.02 |
May 25, 2010 10:05 AM|damian01|LINK
I posted this is the wrong forum (Exchange hosting). I am having an issue with my Provisiioning servers. Basically the sample GUI is not working correct. I either get red X's, not all the contents, timeouts due to not being able to read ther namespace or just slow, slow responses. Here are some of the errors: (should I rebuild?)
(I did notice something wierd, on the link for my reseller org, it shows the AD LDAP path in the javascript... the link shows as 'sOU=Orgname,CN=hosting,DC=domain,DC=com". Shouldn't that 'sOU' be 'OU'? Either way, I cannot do what needs to be done. I have followed steps RUN.5 to RUN.7 on another PROV box without success.
Sometimes I get this error: (when clicking on a reseller org or a business.
Somes this error too::\Program Files\SampleProvisioningUI\MainPage.aspx.cs Line: 451
Stack Trace:
May 27, 2010 09:28 AM|damian01|LINK
My domain is called hosting.local. Nothing fancy, I don't use subdomains either.
I have since started working on WebSitePanel instead of MPS. This product seems much easier to use so far. at least the web interface works right out of the gate. It also has BlackBerry support built in and is free.... I am willing to work on this solution to in parallel as they should not affect each other yet. Thanks.
3 replies
Last post May 27, 2010 09:36 AM by kiphup | https://forums.asp.net/p/1561816/3871263.aspx?Provisioning+Website+issues | CC-MAIN-2020-34 | refinedweb | 254 | 77.84 |
Colombia: The FARC’s ceasefire gambit
The Americas for the Division of Monetary Affairs at the Federal Reserve Board from 2008 to 2009. Previously he served at the Federal Reserve Board as associate director, Division of International Finance (1999–2008), and senior economist (1987–1990 and 1991–97).", all he means is that most central banks in developed countries target inflation of around 2 percent. That target is explicit at the Bank of England and Bank of Canada; the RBA targets 2-3 percent; the ECB and Fed are both thought by most people to target inflation of around 2 percent as well. It has nothing to do with the Philips Curve, and your scorn is absurd.
I expect better when I read comments on the Economist's website:
@fundamentalist - a target of 2% gives some certainty for investors which is a good think. Less than that and you risk deflation which means people are incentiveized to hold cash instead of investing, more than that and people start to price in long-term inflation uncertainty which makes investment more uncertain (i.e. risky) so less investment happens. Targeting a specific range (e.g. 2%) with a credible commitment lowers the risk of both of these negative outcomes and is thus viewed as optimal for economic growth.
@Macumazan - tax is not theft, it is how we pay for social goods (just as patent rights are not theft nor is the liability shield of incorporation). Inflation can also occur with non-fiat money in which case it certainly cannot be called theft (if gold is used as currency and massive gold reserves are found the value of gold goes down and prices denominated in gold go up - inflation - but this is not theft or even a tax). The value of fiat money can go up or down in relation to other goods, services, and financial instruments (including other currencies). This is economic reality and while it is sometimes unfair and unwise, a small rise in inflation rates is certainly not theft.
@virerus - Not all financial products are simply gambling. Financial markets are useful for raising money firms need, for reducing certain types of risk in firms in individuals (hedging) and for giving investors a regulated place to invest; yes, regulation failed and many on wall street acted unethically but that doesn't mean that the entire financial sector is corrupt or unnecessary. The banks balance sheets are improving and there is not a need to inflate the problem away - what does need to happen is for growth to return (and government stimulus can help as might a temporary - or permanent - raise in retirement ages).
@cs96 - So you don't want wall street to invest in the most productive areas (that is, to use the cheaper labour)? As an investor, I want them to maximize returns and see nothing unethical about giving job opportunities to those willing to do the work for less - though I do think there is a moral obligation to uphold safety standards (ideally the destination countries would develop and enforce labour safety standards). Or is it that you don't like the world's poorest having the opportunity to get jobs and start to climb out of poverty? I can't see why a factory worker in south China paid $1.50 an hour should be laid off so that someone in Detroit can do the same work for $10 an hour - taking jobs from the poor to give to the rich so customers can pay more for goods and everyone can be worse off hardly sounds like a good plan.
@dgdgy - I don't need your spamvertized shoes though I can't intellectually argue with what you've written I am disappointed that the Economist isn't doing a good job of keeping spam off of its discussion boards.
Whether central banks or Mr Gagnon think 2% inflation is a good thing or not, it is not; fundamentalist is correct. Inflation, if brought about intentionally by government agencies, is theft. Taking one dollar in every fifty in order to optimise growth or for any other laudable reason, is, for all its laudability, still effectively a tax, and one not one assented to by its victims. One would hope a just society would allow law suits of personal liability against those governemtn officials who have conspired to bring it about. Pigs might fly, of course, but it is telling that modern America is, in this regard, further removed from the just society that was the inspiration of its founders, and the hope of non-American American admirers such as myself.
12thstreet, you restated what Gagnon wrote without adding anything. What was the point? And why do central banks have inflation targets, if not because they still think the Philips curve is right (there is a trade off between employment and inflation), even though it has been proven wrong for decades?
Printing money ex-nihilo adds nothing to wealth whatsoever, but it does cause inflation which steals from productive workers and gives to the government and banks. If the money supply didn't grow at all, prices would naturally, gently fall at the rate of the increase in production, as they have in the computer industry for the past 30 years. Savings would regulate investment and there would be no booms and busts. But misguided mainstream monetary theory insists on fabricating money out of an irrational fear of deflation.
I agree that paying interest on reserves ought to have ended already. The other two suggestions would have no effect whatever.
I really don't care about the Fed's economic model. Monetary and fiscal stimulus are not interchangeable. That was the foolishness of the "fine tuning" era.
" Both headline and core inflation are well below the 2% level that central banks view as optimal for economic growth..."
That has to be one of the stupidest things I have read in a long time. There is no optimum rate of inflation. Gagnon is reliving the glory days of the Phillips curve which was proven wrong decades ago.
It's funny that you post this just before the suicide post.
I can't believe you still havent' figured out that monetary easing is exacerbating income inequatility and creating addition uncertainty in the economy.
As long as wall streeters are allowed to redirect any profits and stimulus to hedges, derivatives, credit default swaps (the shadow economy) and continue gambling...
As long as wall streeters are allowed to continue exporting jobs and profiting from the spread between US labor rates and 2nd/3rd world labor rates...
There will be no recovery...the US is like britain and Europe...its built on its banks...not on its people or productive capital.
tnkr: "a target of 2% gives some certainty for investors which is a good think. Less than that and you risk deflation which means people are incentiveized to hold cash instead of investing..."
That's simply not true. Mainstream econ has this ridiculous idea that once deflation starts it will not end until we are all dead. Yet the history of mankind is a history of alternating inflation and deflation, and you know what, deflation always ended before anyone died from it. In fact, the last quarter of the 19th century in the US was a long period of mild deflation. That's why historians think it was a long depression, but it wasn't. That period enjoyed tremendous real economic growth with everyone investing heavily.
The reason that mainstream econ thinks deflation is the worst plague possible is that they know nothing but the last half of the 20th century. During that time deflation has correlated with depressions, but only because of manipulation of the money supply by the Fed. If they had a sound theory of money and knew a tiny bit of history they would see that mild deflation is far, far better for economic growth than mild inflation.
@fundamentalist - I'm not saying that deflation will take off and never end. My main thesis is that a steady, predictable, credible 2% inflation rate will lower investment risk and thus promote investment. As well, deflationary expectations do lead to purchases being put off (not deflation, but deflationary expectations); this is a temporary situation, I don't disagree, but it is not a good situation.
This would mean a decoupling in the global economy. By far my country India has a higher inflation risk and the inflationary expectations also look higher.
A 0% interest rate regime in the west would mean a dollar carry trade situation where a lot of near 0% dollars comming into 6 - 7% interest rate regimes like India to get a higher return. It would mean a lot of moeny chasing fewer assets in India and the likes. A positive for our stock and commodity markets..
cs96 is right, economy in US (and Europe as well) is built on the the banking system. And banks have terrible balance sheets because of all the bad loans, with any downturn in the economy the losses will only get worse and when banks go down they will take down the rest of the economy to the hell.
So the only choice regulators see right now is to inflate their way out of this. And I doubt they will be successful because the world has never faced a deflation problem since 1930s and many economists and policy makers are trained and have experience in fighting inflation not deflation.
So they will fail because they won't know what to do (see article above? everybody is speculating how to fight deflation whereas it's known how you can fight inflation) and deflation will happen bringing down the bad banks (virtually all big US banks) and with it the rest of the economy.
There will be terrible economic times.
100 basis points? thats it? and if that doesn't work (and why would it given what weve just been through, please explain)? Cold comfort.
Monetary policy is one of the tools that a national Government uses to influence its economy. It is mainly used to low unemployment, low inflation, economic growth, and a balance of external payments. | http://www.economist.com/blogs/freeexchange/2010/08/monetary_policy_8?zid=295&ah=0bca374e65f2354d553956ea65f756e0 | CC-MAIN-2014-52 | refinedweb | 1,703 | 59.33 |
2008 Round 1A
Minimum Scalar Product
This problem is an old friend from before. We rewrite our solution to use our Jam module, showing off a little with sequence and replicate.
import Data.List import Jam main = jam $ do [_, xs, ys] <- sequence $ replicate 3 getints return . show . sum $ zipWith (*) (sort xs) (reverse $ sort ys)
Milkshakes
The problem is equivalent to finding a assignment to variables to satisfy an expression in conjunctive normal form, such as:
(a | b | c) & (a | ~d | e) & ...
Truth corresponds to unmalted; variables assigned false values correspond to malted milkshakes.
Fortunately there’s one additional proviso that makes this problem trivial: each clause has at most one negated variable. This means we can:
Look for a singleton clause. If none exist, then we can assign the remaining variables to true, and output a solution.
Otherwise, the variable in the singleton clause must be assigned a certain value to satisfy the expression. We add the assignment to an accumulator list, remove all clauses that contain the same literal, and remove any occurrences of the negated literal from the remaining clauses.
If any of the clauses are now empty, then there is no solution. Otherwise go back to step 1.
import Data.List import Data.List.Split import Data.Maybe import qualified Data.Map as M import Jam main = jam $ do [[n], [m]] <- getintsn 2 let showSol sol = unwords $ show . flip (M.findWithDefault 0) sol <$> [1..n] f sol cs | [] `elem` cs = "IMPOSSIBLE" | otherwise = maybe (showSol sol) g $ find ((1 ==) . length) cs where g [[x, y]] = f (M.insert x y sol) $ delete [x, 1 - y] <$> filter (notElem [x, y]) cs f M.empty . map (chunksOf 2 . tail) <$> getintsn m
Numbers
I spent a long time on the wrong path because my first instinct was to attempt to solve the question for any number of the form a + b sqrt(5).
Eventually I realized there was something special about the choice of 3 and sqrt(5). I was lucky sqrt(5) was chosen, because it triggered a memory of Binet’s formula. Recall the second term is always less than 1, so we can evaluate the formula by finding the integer closest to the first term.
This led me to consider:
f n = (3 + sqrt 5)^n + (3 - sqrt 5)^n.
This is always an integer because all the terms involving sqrt(5) cancel out. Furthermore, 3 - sqrt(5) < 1, so the answer is simply:
(f n - 1) `mod` 1000
Since 3 + sqrt(5) and 3 - sqrt(5) are the roots of x2 - 6x + 4, we can describe f(n) with the recurrence f(n) = 6 f(n-1) - 4 f(n-2) and f(0) = 2, f(1) = 6.
Using repeated squaring, we can compute the nth power of the 2x2 matrix modulo 1000 and apply it to [6, 2] to find f(n).
import Control.Monad import Data.List import Text.Printf import Jam m = [[6, -4], [1, 0]] mul [[a, b], [c, d]] [[x, y], [z, w]] = map (map (`mod` 1000)) [[a*x + b*z, a*y + b*w], [c*x + d*z, c*y + d*w]] pow m 0 = [[1, 0], [0, 1]] pow m 1 = m pow m a | mod a 2 == 1 = mul m2 m | otherwise = m2 where m2 = join mul $ pow m $ div a 2 main = jam $ do [n] <- getints return $ printf "%03d" (let [[a, b], _] = pow m (n - 1) in (a*6 + b*2 - 1) `mod` 1000 :: Integer)
We use a little trick. If f has type a -> a -> a, then thanks to the Reader monad:
join f x = f x x | https://crypto.stanford.edu/~blynn/haskell/2008-1a.html | CC-MAIN-2018-05 | refinedweb | 603 | 71.14 |
01 April 2011 17:11 [Source: ICIS news]
HOUSTON (ICIS)--The explosion that killed one worker and injured three others at the KMTEX chemical processing plant in ?xml:namespace>
Witnesses said one of the workers was welding a pipeline that contained coal tar naphtha solvent, according to the Beaumont Enterprise.
However, workers believed that all of the solvent had been removed from the line, according to the Jefferson County Sheriff’s Office.
The explosion occurred around 14:00 hours local time (19:00 GMT) on Thursday afternoon. By the time Port Arthur Fire Department firefighters arrived at 14:15, the fire had already burned out, the department said.
US Chemical Safety Board (CSB) spokeswoman Hillary Cohen said Friday morning that its incident screener was still gathering additional information, and would have an update by the end of the day as to whether it would investigate.
A nearby bridge to | http://www.icis.com/Articles/2011/04/01/9449380/kmtex-plant-blast-caused-by-welding-of-pipeline-with.html | CC-MAIN-2015-18 | refinedweb | 149 | 59.33 |
A Django app enabling cross-origin resource sharing in views.
Project description
django-cross-origin is a Django app enabling cross-origin resource sharing in views.
Features
- Enable CORS on Django class-based generic views with a simple mixin.
- Full customization of all CORS headers via accessor override.
Installation
- Checkout the latest django-cross-origin release and copy or symlink the cross_origin directory into your PYTHONPATH. If using pip, run pip install django-cross-origin.
- Add 'cross_origin' to your INSTALLED_APPS setting.
Usage
To enable CORS on a Django class-based view, simply mixin the cross_origin.views.AccessControlMixin to your view:
from django.views import generic from cross_origin.views import AccessControlMixin class YourView(AccessControlMixin, generic.TemplateView): """Your view code here!"""
All CORS response headers can be customized by overriding accessor methods on your view. For a complete list of available accessors, see the source code for AccessControlMixin.
More information
The django-cross-origin project was developed at Mohawk, and is released as Open Source under the MIT license.
You can get the code from the django-cross-origin project site.
Contributors
The following people were involved in the development of this project.
- Dave Hall - Blog | GitHub | Twitter | Google Profile
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/django-cross-origin/ | CC-MAIN-2021-21 | refinedweb | 227 | 52.05 |
PyMaemo/HildonDesktop
[edit] Python bindings for libhildondesktop
These bindings allow to create the so called Hildon Home and Status Menu applets (or widgets, in Maemo 5). It consists of two binary packages:
- python-hildondesktop: the actual Python bindings. Can be used to write standalone widgets, or ones that can be added by the user using the "Add widget" option in Maemo 5.
- hildon-desktop-python-loader: this is a Hildon Desktop loader for Python plugins.
[edit] Migrating old Widgets to Maemo 5
The libhildondesktop version in Maemo 5 contains some API changes that also reflect on the Python bindings. Namely, you should pay attention to the following differences when migrating Home/Status Widgets from older Maemo releases to Fremantle:
- The base class for Home Widgets is now called "HomePluginItem", instead of the older "HomeItem" name.
- Similarly, the base class for Status Menu Widgets is now called "StatusMenuItem", instead of the older "StatusBarItem".
- The older "hd_plugin_get_objects" function must be removed. Instead, a "hd_plugin_type" variable should contain the main class for the plugin (which will be used internally by the loader to instantiate the plugin object).
Additionally, if your code used to work with python-hildondesktop on versions until 0.3.0, you must make the following changes for it to work with the latest version:
- Remove the "__gtype_name__" attribute from the main plugin class. It was used to register a GType for the class, but it caused some problems. Now the GType is registered on the loader.
- Remove the "hd_plugin_get_object" function and instead add a "hd_plugin_type" variable that points to the plugin main class.
[edit] Example - Home widgets (Fremantle only)
import gtk import hildondesktop class HelloWorldDialog(gtk.Dialog): def __init__(self): gtk.Dialog.__init__(self, "Hello World", None, gtk.DIALOG_DESTROY_WITH_PARENT | gtk.DIALOG_NO_SEPARATOR, ("Close", gtk.RESPONSE_OK)) self.vbox.add(gtk.Label("Hello World!")) self.show_all() def hello_world_dialog_show(button): dialog = HelloWorldDialog() dialog.run() dialog.destroy() class HelloHomePlugin(hildondesktop.HomePluginItem): def __init__(self): hildondesktop.HomePluginItem.__init__(self) button = gtk.Button("Hello") button.connect("clicked", hello_world_dialog_show) button.show_all() self.add(button) hd_plugin_type = HelloHomePlugin # The code below is just for testing purposes. # It allows to run the widget as a standalone process. if __name__ == "__main__": import gobject gobject.type_register(hd_plugin_type) obj = gobject.new(hd_plugin_type, plugin_id="plugin_id") obj.show_all() gtk.main()
[edit] Testing the example
First, add Fremantle extras-devel to the /etc/apt/sources.list in your scratchbox target and install the required packages:
[sbox]> fakeroot apt-get install python-hildondesktop hildon-desktop-python-loader
Save the example code shown above as /usr/lib/hildon-desktop/hello_world_home.py inside your FREMANTLE_X86 target. Make sure the script has a valid Python module name, specially it should not have any dots or dashes on its name. To be safe, use only alphanumeric characters and underscore, plus the ".py" extension.
Next, save the following text as /usr/share/applications/hildon-home/hello_world_home.desktop:
[Desktop Entry] Name=Hello, World! (Python) Comment=Example Home Python plugin Type=python X-Path=hello_world_home.py
Now start the SDK UI, if it is not already started. See the instructions on the Maemo 5 SDK documentation page
Now you need to add the newly installed home widget to the desktop. Follow these instructions to add it using the Hildon Desktop interface:
- Click anywhere on the Maemo desktop background.
- You should see a "engine" icon on the top right. Click on it.
- It will be shown a menu bar containing "Desktop menu" and "Done". Click on "Desktop menu".
- You should now see a menu with 4 buttons. Click on the "Add widget" button.
- A menu containing the list of installed widgets will appear. Select the one we installed, called "Hello, World! (Python)".
- Finally, click on "Done".
You should then see the following (the images look distorted because they were taken on Xephyr):
After clicking on the widget button, you should see:
[edit] Example - Status menu widgets (Fremantle only)
The code below was based on the C example that can be found on the Maemo 5 Developer Guide [1].
import gtk import hildondesktop class ExampleStatusPlugin(hildondesktop.StatusMenuItem): def __init__(self): hildondesktop.StatusMenuItem.__init__(self) icon_theme = gtk.icon_theme_get_default() pixbuf = icon_theme.load_icon("general_email", 22, gtk.ICON_LOOKUP_NO_SVG) self.set_status_area_icon(pixbuf) label = gtk.Label("Example message") self.add(label) self.show_all() hd_plugin_type = ExampleStatusPlugin
[edit] Testing the example
First, install the same packages needed for Home widgets, then save the example above as as /usr/lib/hildon-desktop/hello_world_status_menu.py inside your FREMANTLE_X86 target. Next, save the following text as /usr/share/applications/hildon-status-menu/hello_world_status_menu.desktop:
[Desktop Entry] Name=Hello, World! (Python) Comment=Example Status Menu Python plugin Type=python X-Path=hello_world_status_menu.py
Now start the SDK UI, if it is not already started. See the instructions on the Maemo 5 SDK documentation page
The example status menu widget should appear as soon as the .desktop file is saved, as the plugin used in this example is of the permanent category. See the Maemo 5 Developer Guide for more information of status menu widgets categories.
This is a screenshot taken on Xephyr showing how the widget will look like:
When clicked, it will show the specified message, enclosed in a gtk.Label:
[edit] Debugging tips
If a Python widget breaks for some reason, no error message will appear to the user. To debug the problem (in Scratchbox), you need to look at debug messages sent on the console where the UI was started.
Debug messages for Home Widgets are shown by default. For Status Menu Widgets, you need to enable debug output by running these commands:
[sbox]> pkill -f /usr/bin/hildon-status-menu [sbox]> DEBUG_OUTPUT=1 /usr/bin/hildon-status-menu &
They will terminate the running hildon-status-menu process and start a new one with debug output enabled.
To force reloading a plugin (so that you can get the error messages again), try moving the .desktop file out of the directory and adding it back, e.g.:
[sbox]> mv /usr/share/applications/hildon-status-menu/hello_world_status_menu.desktop /tmp/ [sbox]> mv /tmp/hello_world_status_menu.desktop /usr/share/applications/hildon-status-menu/
This method may not work reliably for hildon-home widgets because the old code may not be fully unloaded. A solution is to reload hildon-home using Desktop Activity Manager. Once you store the current desktop (e.g. with "activty new test; activity store test") you can re-load it using "activity load -f test". This will reload hildon-home and all its widgets.
Another way of debugging python widgets is to add this code at the beginning of the script:
import sys f=open('/tmp/mylog.log', 'at', buffering=1) sys.stdout=f sys.stderr=f
This will redirect stdout and stderr to
/tmp/mylog.log. This means that all exceptions and all other output will be logged there. As a plus, the code can be given to testers and they will be able to report-back with the contents of the logfile.
However, do not use this by default in production systems. Use it only for debugging since the file will consume space in
/tmp and will only grow in size.
- This page was last modified on 13 August 2010, at 11:57.
- This page has been accessed 16,062 times. | http://wiki.maemo.org/PyMaemo/HildonDesktop | CC-MAIN-2017-17 | refinedweb | 1,195 | 51.04 |
SYNOPSIS
#define _XOPEN_SOURCE 600
#include <fcntl.h>
int posix_fadvise(int fd, off_t offset, off_t len, int advice);. (Linux
actually returns EINVAL in this case.)
VERSIONS
posix_fadvise() appeared in kernel 2.5.60. Glibc support has been pro-
vided since version 2.2.
POSIX_FADV_WILLNEED initiates a non-blocking.
BUGS
In kernels before 2.6.6, if len was specified as 0, then this was
interpreted literally as "zero bytes", rather than as meaning "all
bytes through to the end of the file".
SEE ALSO
readahead(2), posix_fallocate(3), posix_madvise(3), fea-
ture_test_macros(7)
COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A
description of the project, and information about reporting bugs, can
be found at. | http://www.linux-directory.com/man2/posix_fadvise.shtml | crawl-003 | refinedweb | 121 | 61.63 |
. That is quite a lot for my small cluster, so why would it use more than that?
First Look
I decided to take a look how bad the situation really was.
Woah! 😱 The morning of day 2 of the month, and I am already 37GB in? Good thing charging has not yet started. Facing the reality I moved on to drill down into were the logs come from. Since I had a good portion of log data, chances were high I find something in the logs, right? 😉 The resource table clearly showed me were to find the low hanging fruits. The Month To Date (MTD) and projected End Of Month (EOM) numbers for the resource GKE Container tops everything else by orders of magnitude.
Reason 1: Google Kubernetes Engine Bug
Looking through the logs I found out that there is a bug in the synchronizer. It has been firing multiple times per second for days:
This does produce quite some log volume for Stackdriver to ingest and that piles up adding to the overall bill. It’s one of those moments where I catch myself mumbling exponential backoff…
To stop the torrent of log lines from the broken dashboard, I restarted the kubernetes dashboard pod. The hard way, of course:
$ kubectl -n kube-system delete pod kubernetes-dashboard-768854d6dc-j26qx
Reason 2: Verbose Services
Note: This subsection’s data is sourced from a different cluster which did not experience the aforementioned bug but had a huge log intake for a different reason.
In another cluster I also experienced a huge intake of logs. However, there was no log spamming, meaning that this cluster was just full of regular log lines. To find out if there are services that produce significantly more log lines than others I created a log-based metric.
This metric is basically just a counter of log lines, grouped by the resource label
namespace_id. With this metric in place, I headed over to Stackdriver Monitoring and created a graph that plots the log lines per second grouped by namespace.
Obviously, this is most valuable when every service is confined to exactly one namespace. Now I was able to spot the most verbose services and dug a bit deeper into them to reduce their verbosity.
Mitigation 1: Exclusion
The first solution to the high log intake problem is to take less logs in. How unexpected! Luckily, there is a method for that called Exclusion. On the resources page we can create exclusion rules (filters if you will) to reduce the log intake in a reasonable way. Reasonable here means allowing important log entries to enter the system while dropping the less useful ones.
The following rule, for example, discards all log entries of log level INFO. It is a pretty simple example, however, we are free to use all the nice operators we know from regular log filtering activities. Exclusions are a powerful tool!
Here is a copy’n’paste friendly version of the same rule.
resource.type="container" severity="INFO"
Note that you can even sample logs by creating an exclusion filter and setting the drop rate to a value less than 100%. For my use case, an exclusion rate of 95% provides me with just enough samples to assess a past problem while keeping the log intake amount reasonable. During issue triage I recommend disabling exclusions temporarily or adjusting them to pass all related logs at least.
Fun fact: Stackdriver logs the actions (create, delete, etc.) performed on exclusion rules, thus creating just another log source, the Log Exclusion log source. #inception
I wonder if one can create an exclusion rule for log exclusion. 🤔
Mitigation 2: Monitoring
The next log overdose mitigation technique I like to share uses a log-based metric to alert before things turn ugly.
Stackdriver comes with some handy system metrics. Systems metrics means, these are meta data from the logging system. One of those data points is
bytes_count.
I use this metric in the Stackdriver Monitoring system to get an early warning if log intake exceeds the expected levels.
Here is my policy using a Metric Threshold condition:
Let’s have a closer look at the metric threshold.
I am monitoring the resource type Log Metrics and there the metric “Log bytes”.
An acceptable intake rate for me is 10kb/s. If hit constantly, that results in about 24.2GB of total log intake in a 28-day-month and about 26.8GB in one of those longer 31-day-months. Both values leave some good room for unforeseen issues and reaction time.
As you can see in the graph, my cluster was way beyond that threshold for quite a while. That was the bug I described earlier and which took me some time to find. With that alert in place, the same or similar bugs will fire an alert after a 1-minute grace period for log bursts.
Before I wrap this up, one word of caution: Thresholds set to low may harm your inbox! 😅 Been there, done that.
Conclusion
Stackdriver’s warning email may sound scary, but there are ways to gain control over the log intake and also be prepared for unforeseen issues by having metrics-based alerts in place. | https://danrl.com/blog/2018/stackdriver-logging/ | CC-MAIN-2019-09 | refinedweb | 872 | 63.09 |
28 April 2009 11:47 [Source: ICIS news]
SHANGHAI (ICIS news)--China’s leading chlor-alkali producer Xinjiang Zhongtai Chemical Co said its net profit plunged 90.58% year-on-year to yuan (CNY)?xml:namespace>
Average PVC prices fell by CNY1,400/tonne to around CNY6,000/tonne in the first quarter from a year ago.
Earnings were further squeezed by rising costs of feedstock carbide in the first three months, the company said.
Xinjiang Zhongtai's operating income also fell 10.89% to CNY
The company produced 115,400 tonnes of PVC resins and 86,300 tonnes of caustic soda in the March quarter.
It said it expects profits in the first six months of this year to fall 70-90% from levels in the same period last year due to declining PVC prices.
Xinjiang Zhongtai is based in China's northwestern province of Xinjiang. It has a nameplate PVC capacity of 460,000 tonnes/year and caustic soda capacity of 350,000 tonnes/year.
( | http://www.icis.com/Articles/2009/04/28/9211556/chinas-xinjiang-zhongtai-q1-profit-falls-on-pvc-price-slide.html | CC-MAIN-2015-14 | refinedweb | 167 | 63.7 |
Introduction to Java Thread Pool
A thread pool, like the name, suggests is a collection of threads that can be reused. These threads have been created previously and can also function with offering solution to overheads when there is already a collection of threads available; these can be reused and resolve the issue of thread cycles and waiting for threads to complete their task. As the thread will already be existing whenever the request arrives, it will remove the process of thread creation and, as a result, save that time and make the processing faster. In this topic, we are going to learn about Java Thread Pool.
Working of Java Thread Pool
All threads in thread pools implement the methods from java.util.concurrent. There is a thread pool that is managed by the Java thread pool. The easiest way of seeing this pool is the more threads you make use of, the less time each thread will spend upon doing the actual work. It is a way that helps in saving resources in an application where multi-threading can be used. There is a queue that is maintained by this Java thread for pools.
The worker threads keep waiting for tasks to be assigned and get them executed. To create a thread pool, ThreadPoolExecutor is used in Java. The collection of runnable threads are responsible for managing the threads in the Java Thread pool. After this, the worker threads come in picture and form queues. The thread pool monitors these queues.
The java.util.concurrent.Executors help us with the factory and support methods that are needed for managing the threads. This class is also responsible for creating the thread pool. Now your next question might be, what is this Executor? Executor provides different classes which are a part of the utility class.
The technical working of thread pool can be thought of like a pool where you have your concurrent code, which is split into tasks that can run in parallel. Then they are submitted for execution to the pool. There are task submitters, Executor Services, task queues, and in the end, the thread pool. The pattern can help you in controlling the number of threads that are present in the application. It decides its lifecycle, schedules the tasks, and keeps the tasks incoming in the work queue.
The Executorshelper class has various methods that have pre-configured thread pool instances. The Executor and ExecutorService interfaces help in working with different implementation in the pool. The code must be in the decoupled format before the actual implementation. There is another interface that is used. It is a ThreadPoolExecutor. It is an extensible thread pool implementation where many parameters can be specified, and it will result in fine-tuning. The parameters which can be used include core pool size, maximum pool size, and keepalive time. The queue can grow only up to the maximum pool size.
Example of Java Thread Pool
Let us create a pool and see how it works.
Code:
import java.util.Date;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
// The Job class will be executed in this case
class Job implements Runnable
{
private String name;
public Job(String s)
{
name = s;
}
// The task name will be populated first for the ones which were not running already, and the thread will sleep for 2s
//This entire method will be repeated 6 times
public void run()
{
try
{
for (int i = 0; i<=5; i++)
{
if (i==0)
{
Date d = new Date();
System.out.println("We are in "
+ " task name - "+ name);
//prints the task name every time the new task is started
}
else
{
System.out.println("The job "+
name +" is already running ");
// prints the job name which is already running
}
Thread.sleep(2000); // The thread is in sleep mode for 2 secs
}
System.out.println(name+" job is completes");
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}
}
public class Test
{
// Here we define the maximum threads we have
static final int MAX_Threads= 5;
public static void main(String[] args)
{
Runnable run1 = new Job("task 1");
Runnable run2 = new Job("task 2");
Runnable run3 = new Job("task 3");
//A new thread pool is created with maximum number of threads
ExecutorService newpool = Executors.newFixedThreadPool(MAX_Threads);
newpool.execute(run1);
newpool.execute(run2);
newpool.execute(run3);
newpool.shutdown();
}
}
The above code covers each step of the creation of the thread till it’s shutdown. There is a maximum thread limit fixed and created. Once this is done, there are three jobs created which execute one by one. There is also a sleep time allotted for 2 secs for each job. As the thread pool is created and all jobs work simultaneously, we can see that this entire process will run 6 times. When the if statement runs in the thread pool, it checks if the job is already running or not. If the job has not started running, it will execute the if block. If it is already running, then the else block will be run. It will display the job name and say that it is already running. The thread pool is created, and then all 3 jobs are run. Once the jobs are run, we shut down the thread pool that we have created.
Below will be the output of the given program.
Output:
It runs until all tasks are completed.
Advantages and Disadvantages of Thread Pools
We have several advantages of Thread Pool in Java. To name a few, below are the main advantages:
- It results in better and efficient performance of the CPU and program.
- As different threads are working all processes, it saves time.
- The main advantage of the thread pool is the reuse of threads that are already present. There is no need of creating new threads again and again.
- It provides real-time access, which helps in working with real-time data.
We also have some disadvantages of the thread pool, though. Below are the disadvantages:
- You cannot set the priority of tasks, and also, these cannot be tracked.
- When used more, these can be deleted automatically.
Conclusion
As a result, thread pools are an efficient way of handling the multiple tasks which we have at our hand. Java provides us with the facility of reusing the threads and making use of the existing resources.
Recommended Articles
This is a guide to Java Thread Pool. Here we discuss Java Thread Pool’s working, programming examples, and the advantages and disadvantages. You may also have a look at the following articles to learn more – | https://www.educba.com/java-thread-pool/ | CC-MAIN-2022-40 | refinedweb | 1,088 | 64.41 |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
It is already in heavy development but not yet released, and there are plugins for both Netbeans and Eclipse. Some of its functionality and the examples Chris has shared indicate it could be an alternative to Flash and processing.
Summary of some of its features:
var squares = select n*n from n in [1..100];
//another example:
var titleTracks =
select indexof track + 1 from album in albums,
track in album.tracks
where track == album.title; // yields [1,4]
var chris = Person {
name: "Chris"
children:
[Person {
name: "Dee"
},
Person {
name: "Candice"
}]
};
var answer = true;
var s = "The answer is {if answer then "Yes" else "No"}"; // s = 'The answer is Yes'
import java.lang.System;
var saying1 = "Hello World!";
var saying2 = "Goodbye Cruel World!";
do later {
System.out.println(saying1);
}
System.out.println(saying2);
import java.lang.System;
class X {
attribute nums: Number*;
}
trigger on new X {
insert [3,4] into this.nums;
}
var x = new X();
System.out.println(x.nums == [3,4]); // prints true
All methods and attributes of a class have to be declared first (like in C and C++, yay), and then the implementations are written outside the class body (like in nice).
Does that mean every time you add, rename, or change the signature of a method, you have to edit two different pieces of text? If so, can you explain why that elicits a "yay" not a "boo, hiss"?
I think that the 'yay' was sarcastic.
Aren't triggers just a special kind of method/operator overloading? Or even less in that they just add behaviour to an operation as they do in all examples?
Edit: Ah..., there are many triggers per attribute, not per class. This makes an important difference of course. In other languages we would have to define a class first, overload methods and create a member variable with an object instance. In f3 one might skip the class creation and define additional behaviours per member variable/operator using "triggers".
I'm also not sure about bind? I would expect binding a function f to an attribute a1 being updated when the value of another attribute a2 is modified. So far I have the impression that bind just creates an object reference.
Hi,
I'm not exactly sure how relevant F3 is on this forum, but I'd be happy to discuss it as well as the problem it tries to solve and any input or criticism is appreciated. I started trying to write a comment to try to put F3 into perspective, but it got too big so I posted in on my weblog here.
Hi Chris,
I like your F3 project a lot as it resembles my SuperGlue project (declarative, object-oriented, support for time-varying values) and I believe this is the future of scripting. Have you thought about writing a paper or tech report on the language? It would be nice to see more of this kind of material at ECOOP and OOPSLA.
Thanks, I see the resemblance to SuperGlue.
Here's a link to something I came across related to this subject that this forum might find interesting Monads for Incremental Computing
Thanks! I have a hard time grokking Haskell code, which is funny since I'm supposed to be a language researcher. You might want to check out FlapJax:, which is a JavaScript version of their FrTime system (functional-reactive programming on Scheme). This stuff is a bit more approachable than the Haskell-based FRP. All very related to F3, and I think we'll be seeing more things like this in the future.
Noel Welsh sent me a link this morning on a clear writeup by sigfpe describing (causal) stream based computation as comonads instead of the monadic approach of the haskell frp arrows work: .
Chris, I actually tried to send you an email, but could not find the address anywhere and guessed. I've been coding a lot on Flapjax and am very curious about your approach to objects in F3. At least based on the initial posts, it seems like a transparent reactive / data flow system, though the more recent entries make it seem to facilitate more traditional code. Any discussions of your choices and experiences with recursion/cycles, state accumulation, collections, and your object system would probably be very enlightening. Alternatively, my email is my username @cs.brown.edu .
Again, I'm curious to see what you're actually doing :)
This stuff is incredibly interesting to me and I would love to be able to use it in my day job. Luckily this has been discussed several times here:
Perhaps the best description of Functional Reactive Programming is in this paper: Embedding Dynamic Dataflow in a Call-by-Value Language.
The Haskell version of FRP is indeed not an easy read, but this is completely incomprehensible to me: The essence of Dataflow Programming (and sigfpe's writeup on comonads doesn't help).
Don Syme has an article on Imperative Reactive Programming using his F# language.
Sean's PhD dissertation on SuperGlue contains interesting information for those who wish to apply FRP concepts to some real-world problems (GUI data binding, etc.).
I think FRP or data flow is one of those things that is still getting cooked in the academic world, but goes to the heart of many practical, day-to-day problems faced by developers.
I'm very much looking forward to F3 and Scala's versions of data-flow'esque features.
I know beauty is in the eye of the beholder and all that, but god, what an eyesore. I think that nesting curly braces inside other expressions gives rise to some very ugly looking code....Not that it's terribly better in a verbose language like Java or C, but at least the curly brackets have a fairly uniform usage. Oh well, to each his own, I suppose.
Agree that bracketed lists of curly-brace delimited object literals aren't very nice. In C struct literals use curly braces. In C and Java array literals use curly braces. For F3 I followed JavaScript and used brackets for arrays. I don't have any great ideas for alternatives at the moment. I don't see any great alternative to using curly braces to delimit expressions inside string literals either. But maybe someone else does...
There are two issues. Array literals (also called array initializers) and new array constructions.
One possibility is to use square brackets "[]" to denote both of these, in the same sort of way that some variants of Lisp allow using parentheses to construct new lists (often as shorthand notation for a series of cons's).
I think the reason that Java does not do this is because for new array creations, it requires type inference (specifically, unification) to figure out the type of the list from the types of the elements inside, and an empty array literal doesn't have any information to infer the type (and cannot really be given a parameterized type because of Java's retarded generics system).
Java:
int[] x = { 0, 1 };
int[] x = new int[2] {0, 1};
f(new int[2] { 0, 1 });
int[] x = { 0, 1 };
int[] x = new int[2] {0, 1};
f(new int[2] { 0, 1 });
With sufficient type inference a language could support:
var x = [0, 1];
var x = [0, 1];
f([0, 1]);
Which would generate identical machine or byte code as the Java code given above. | http://lambda-the-ultimate.org/node/1998 | crawl-002 | refinedweb | 1,262 | 61.26 |
Background
I got the inspiration for this project while working on my bachelor thesis project internship at IBM in 2005. I was developing an application usage analyzer system which included a web front-end implementing their intranet layout. I observed that it was a bit tedious to get it implemented properly. Moreover, I noticed that I had to repeat the same patterns over and over again for each page.
I saw some "tricks" that other people did to cope with these issues, but I considered all of them workarounds -- they were basically a bunch of includes in combination with a bit of iteration to make it work, but looked overly complicated and had all kinds of issues.
Some time before my internship, I learned about the Model-view-controller architectural pattern and I was looking into applying this pattern to the web front-end I was developing.
After some searching on the web using the MVC and Java Enterprise Edition (which was the underlying technology used to implement the system) keywords, I stumbled upon the following JavaWorld article titled: 'Understanding JavaServer Pages Model 2 architecture'. Although the article was specifically about the Model 2 architecture, I considered the Model 1 variant -- also described in the same article -- good enough for what I needed.
I observed that every page of an intranet application looks quite similar to others. For example, they had the same kinds of sections, same style, same colors etc. The only major differences were the selected menu item in the menu section and the contents (such as text) that is being displayed.
I created a model of the intranet layout that basically encodes the structure of the menu section that is being displayed on all pages of the web application. Each item in the menu redirects the user to the same page which -- based on the selected menu option -- displays different contents and a different "active" link. To cite my bachelor's thesis (which was written in Dutch):
De menu instantie bevat dus de structuur van het menu en de JSP zorgt ervoor dat het menu in de juiste opmaak wordt weergegeven. Deze aanpak is gebaseerd is op het Model 1 [model1] architectuur:
which I could translate into something like:
Hence, the menu instance contains the structure of the menu and the JSP is responsible for properly displaying the menu structure. This approach is based on the Model 1 [model1] architecture.
(As a sidenote: The website I am referring to calls "JSP Model 1" an architecture, which I blindly adopted in my thesis. These days, MVC is not something I would call an architecture, but rather an architectural pattern!)
I was quite satisfied with my implementation of the web front-end and some of my coworkers liked the fact that I was capable of implementing the intranet layout completely on my own and to be able to create and modify pages so easily.
Creating a library
After my internship, I was not too satisfied with the web development work I did prior to it. I had developed several websites and web applications that I still maintained, but all of them were implemented in an ad-hoc way -- one web application had a specific aspect implemented in a better way than others. Moreover, I kept reimplementing similar patterns over and over again including layout elements. I also did not reuse code effectively apart from a bit of copying and pasting.
From that moment on, I wanted everything that I had to develop to have the same (and the best possible) quality and to reuse as much code as possible so that every project would benefit from it.
I started a new library project from scratch. In fact, it were two library projects for two different programming languages. Initially I started implementing a Java Servlet/JSP version, since I became familiar with it during my internships at IBM and I considered it to be good and interesting technology to use.
However, all my past projects were implemented in PHP and also most of the web applications I maintained were hosted at shared webhosting providers only supporting PHP. As a result, I also developed a PHP version which became the version that I actually used for most of the time.
I could not use any code from my internship. Apart from the fact that it was IBM's property, it was also too specific for IBM intranet layouts. Moreover, I needed something that was even more general and more flexible so that I could encode all the layouts that I had implemented myself in the past. However, I kept the idea of the Model-1 and Model-2 architectural patterns that I discovered in mind.
Moreover, I also studied some usability heuristics (provided by the Nielsen-Norman Group) which I tried to implement in the library:
- Visibility of system status. I tried supporting this aspect, by ensuring that the selected links in the menu section were explicitly marked as such so that users always know where they are in the navigation structure.
- The "Consistency and standards" aspect was supported by the fact that every page has the same kinds of sections with the same purposes. For example, the menu sections have the same behavior as well as the result of clicking on a link.
- I tried support "Error prevention" by automatically hiding menu links that were not accessible.
I kept evolving and improving the libraries until early 2009. The last thing I did with it was implementing my own personal homepage, which is still up and running today.
Usage
So how can these libraries be used? First, a model has to be created which captures common layout properties and the sub pages of which the application consists. In PHP, a simple application model could be defined as follows:
<?php $application = new Application( /* Title */ "Simple test website", /* CSS stylesheets */ array("default.css"), /* Sections */ array( "header" => new StaticSection("header.inc.php"), "menu" => new MenuSection(0), "submenu" => new MenuSection(1), "contents" => new ContentsSection(true) ), /* Pages */ new StaticContentPage("Home", new Contents("home.inc.php"), array( "page1" => new StaticContentPage("Page 1", new Contents("page1.inc.php"), array( "page11" => new StaticContentPage("Subpage 1.1", new Contents("page1/subpage11.inc.php")), "page12" => new StaticContentPage("Subpage 1.2", new Contents("page1/subpage12.inc.php")), "page13" => new StaticContentPage("Subpage 1.3", new Contents("page1/subpage13.inc.php")))), ... ))) );
The above code fragment specifies the following:
- The title of the entire web application is: "Simple test website", which will be visible in the title bar of the browser window for every sub page.
- Every sub page of the application uses a common stylesheet: default.css
- Every sub page has the same kinds of sections:
- The header section always displays the same (static) content which code resides in a separate PHP include (header.inc.php)
- The menu section displays a menu navigation section displaying links reachable from the entry page.
- The submenu section displays a menu navigation section displaying links reachable from the pages in the previous menu section.
- The contents section displays the actual dynamic contents (usually text) that makes the page unique based on the link that has been selected in one of the menu sections.
- The remainder of the code defines the sub pages of which the web application consists. Sub pages are organised in a tree-like structure. The first object is entry page, the entry page has zero or more sub pages. Each sub page may have sub pages on their own, and so on.
Every sub page provides their own contents to be displayed in contents section that has been defined earlier. Moreover, the menu sections automatically display links to the reachable sub pages from the current page that is being displayed.
By calling the following view function, with the application model as parameter we can display any of its sub pages:
displayRequestedPage($application); ?>
The above function generates a basic HTML page. The title of the page is composed of the application's title and the selected page title. Moreover, the sections are translated to div elements having an id attribute set to their corresponding array key. Each of these divs contains the contents of the include operations. The sub page selection is done by taking the last few path components of the URL that come after the script component.
If I create a "fancy" stylesheet, a bit of basic artwork and some actual contents for each include, something like this could appear on your screen:
Although the generated HTML by displayRequestedPage() is usually sufficient, I could also implement a custom one if I want to do more advanced stuff. I decomposed most if its aspects in sub functions that can be easily invoked from a custom function that does something different.
I have also created a Java version of the same concepts, which predates the PHP version. In the Java version, the model would look like this:
package test; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; import io.github.svanderburg.layout.model.*; import io.github.svanderburg.layout.model.page.*; import io.github.svanderburg.layout.model.page.content.*; import io.github.svanderburg.layout.model.section.*; public class IndexServlet extends io.github.svanderburg.layout.view.IndexServlet { private static final long serialVersionUID = 6641153504105482668L; private static final Application application = new Application( /* Title */ "Test website", /* CSS stylesheets */ new String[] { "default.css" }, /* Pages */ new StaticContentPage("Home", new Contents("home.jsp")) .addSubPage("page1", new StaticContentPage("Page 1", new Contents("page1.jsp")) .addSubPage("subpage11", new StaticContentPage("Subpage 1.1", new Contents("page1/subpage11.jsp"))) .addSubPage("subpage12", new StaticContentPage("Subpage 1.2", new Contents("page1/subpage12.jsp"))) .addSubPage("subpage13", new StaticContentPage("Subpage 1.3", new Contents("page1/subpage13.jsp")))) ... ) /* Sections */ .addSection("header", new StaticSection("header.jsp")) .addSection("menu", new MenuSection(0)) .addSection("submenu", new MenuSection(1)) .addSection("contents", new ContentsSection(true)); protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { dispatchLayoutView(application, req, resp); } protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { dispatchLayoutView(application, req, resp); } }
As may be observed, since Java is statically typed language, more code is needed to express the same thing. Furthermore, Java has no associative arrays in its language, so I decided to use fluent interfaces instead.
Moreover, the model is also embedded in a Java Servlet, that dispatches the requests to a JSP page (WEB-INF/index.jsp) that represents the view. This JSP page could be implemented as follows:
<%@ page
The above page takes the application model and the current page (determined by the URL to call it) as requests parameters. It invokes the index taglib (instead of a function in PHP) to compose an HTML page from it. Moreover, I have also encoded sub parts of the index page as reusable taglibs.
Other features
Besides the simple usage scenario shows earlier, the libraries support a collection of other interesting features, such as:
- Multiple content section support
- Per-page style and script includes
- Error pages
- Security handling
- Controller sections to handle GET or POST parameters. In Java, you can invoke Java Servlets to do this, making the new library technically compliant with the JSP Model-2 architectural pattern.
- Using path components as parameters
- Internationalised sub pages
Conclusion
In this blog post, I have described an old dormant project that I revived and released. I always had the intention to release it as free/open-source software in the past, but never actually did it until now.
These days, some people do not really consider me a "web guy". I was very active in this domain a long time ago, but I (sort of) put that interest into the background, although I am still very much involved with web application development today (in addition to software deployment techniques and several other interests).
This interesting oatmeal comic clearly illustrates one of the major reasons why I have put my web technology interests into the background. This talk about web technology from Zed Shaw has an overlap with my other major reason.
Today, I am not so interested anymore in making web sites for people or to make this library a killer feature, but I don't mind sharing code. The only thing I care about at this moment is to use it to please myself.
Availability
The Java (java-sblayout) as well as the PHP (php-sblayout) versions of the libraries can be obtained from my GitHub page and used under the terms and conditions of the Apache Software License version 2.0. | https://sandervanderburg.blogspot.com/2014/03/implementing-consistent-layouts-for.html | CC-MAIN-2019-22 | refinedweb | 2,069 | 52.9 |
Download presentation
Presentation is loading. Please wait.
Published byMelissa Frost Modified over 4 years ago
1
1 Applets Programming Enabling Application Delivery Via the Web
2
2 Introduction Applets are small Java programs that are embedded in Web pages. They can be transported over the Internet from one computer (web server) to another (client computers). They transform web into rich media and support the delivery of applications via the Internet.
3
3 Applet: Making Web Interactive and Application Delivery Media Hello Hello Java <app= Hello> 4 APPLET Development hello.java AT SUN.COM The Internet hello.class AT SUNS WEB SERVER 2 31 5 Create Applet tag in HTML document Accessing from Your Organisation The browser creates a new window and a new thread and then runs the code
4
4 How Applets Differ from Applications Although both the Applets and stand-alone applications are Java programs, there are certain restrictions are imposed on Applets due to security concerns: Applets dont use the main() method, but when they are load, automatically call certain methods (init, start, paint, stop, destroy). They are embedded inside a web page and executed in browsers. They cannot read from or write to the files on local computer. They cannot communicate with other servers on the network. They cannot run any programs from the local computer. They are restricted from using libraries from other languages. The above restrictions ensures that an Applet cannot do any damage to the local system.
5
5 Building Applet Code: An Example //HelloWorldApplet.java import java.applet.Applet; import java.awt.*; public class HelloWorldApplet extends Applet { public void paint(Graphics g) { g.drawString ("Hello World of Java!",25, 25); }
6
6 Embedding Applet in Web Page Hello World Applet Hi, This is My First Java Applet on the Web!
7
7 Accessing Web page (runs Applet)
8
8 Applet Life Cycle Every applet inherits a set of default behaviours from the Applet class. As a result, when an applet is loaded, it undergoes a series of changes in its state. The applet states include: Initialisation – invokes init() Running – invokes start() Display – invokes paint() Idle – invokes stop() Dead/Destroyed State – invokes destroy()
9
9 Applet States Initialisation – invokes init() – only once Invoked when applet is first loaded. Running – invokes start() – more than once For the first time, it is called automatically by the system after init() method execution. It is also invoked when applet moves from idle/stop() state to active state. For example, when we return back to the Web page after temporary visiting other pages. Display – invokes paint() - more than once It happens immediately after the applet enters into the running state. It is responsible for displaying output. Idle – invokes stop() - more than once It is invoked when the applet is stopped from running. For example, it occurs when we leave a web page. Dead/Destroyed State – invokes destroy() - only once This occurs automatically by invoking destroy() method when we quite the browser.
10
10 Applet Life Cycle Diagram Born RunningIdle Dead Begin init() start() paint() stop() start() destroy() End
11
11 Passing Parameters to Applet Hello World Applet Hi, This is My First Communicating Applet on the Web! <APPLET CODE="HelloAppletMsg.class" width=500 height=400>
12
12 Applet Program Accepting Parameters //HelloAppletMsg.java import java.applet.Applet; import java.awt.*; public class HelloAppletMsg extends Applet { String msg; public void init() { msg = getParameter("Greetings"); if( msg == null) msg = "Hello"; } public void paint(Graphics g) { g.drawString (msg,10, 100); } This is name of parameter specified in PARAM tag; This method returns the value of paramter.
13
13 HelloAppletMsg.html
14
14 What happen if we dont pass parameter? See HelloAppletMsg1.html Hello World Applet Hi, This is My First Communicating Applet on the Web! <APPLET CODE="HelloAppletMsg.class" width=500 height=400>
15
15 getParameter() returns null. Some default value may be used.
16
16 Displaying Numeric Values //SumNums.java import java.applet.Applet; import java.awt.*; public class SumNums extends Applet { public void paint(Graphics g) { int num1 = 10; int num2 = 20; int sum = num1 + num2; String str = "Sum: "+String.valueOf(sum); g.drawString (str,100, 125); }
17
17 SunNums.html Hello World Applet Sum of Numbers
18
18 Applet – Sum Numbers
19
19 Interactive Applets Applets work in a graphical environment. Therefore, applets treats inputs as text strings. We need to create an area on the screen in which use can type and edit input items. We can do this using TextField class of the applet package. When data is entered, an event is generated. This can be used to refresh the applet output based on input values.
20
20 Interactive Applet Program..(cont) //SumNumsInteractive..java import java.applet.Applet; import java.awt.*; public class SumNumsInteractive extends Applet { TextField text1, text2; public void init() { text1 = new TextField(10); text2 = new TextField(10); text1.setText("0"); text2.setText("0"); add(text1); add(text2); } public void paint(Graphics g) { int num1 = 0; int num2 = 0; int sum; String s1, s2, s3; g.drawString("Input a number in each box ", 10, 50); try { s1 = text1.getText(); num1 = Integer.parseInt(s1); s2 = text2.getText(); num2 = Integer.parseInt(s2); } catch(Exception e1) {}
21
21 Interactive Applet Program. sum = num1 + num2; String str = "THE SUM IS: "+String.valueOf(sum); g.drawString (str,100, 125); } public boolean action(Event ev, Object obj) { repaint(); return true; }
22
22 Interactive Applet Execution
23
23 Summary Applets are designed to operate in Internet and Web environment. They enable the delivery of applications via the Web. This is demonstrate by things that we learned in this lecture such as: How do applets differ from applications? Life cycles of applets How to design applets? How to execute applets? How to provide interactive inputs?
Similar presentations
© 2018 SlidePlayer.com Inc. | http://slideplayer.com/slide/677753/ | CC-MAIN-2018-43 | refinedweb | 960 | 50.33 |
Hide Forgot
Version: 4.6.1
$ openshift-install version
<your output here>
Platform:
RHV 4.4.1.10-0.1.el8ev
#Please specify the platform type: aws, libvirt, openstack or baremetal etc.
Please specify:
IPI
What happened?
When the cluster was in version 4.5.16, the cluster was stable with all cluster-operators available. In an upgrade towards 4.6.1, the cluster operator "storage" is stuck on "Updating", with "Available=false" and "Progressing=true" with the following message:
OVirtCSIDriverOperatorCRProgressing: Waiting for OVirt operator to report status
Checking "openshift-cluster-csi-drivers", the pod "ovirt-csi-driver-operator-*" is crashing in loop.
pod logs:
deployment manifest:
Note: it is also the same outcome when I turned off the operator pod "cluster-storage-operator" in "openshift-cluster-storage-operator" namespace, and switched the image of ovirt-csi-driver-operator to:
quay.io/openshift/origin-ovirt-csi-driver-operator:latest
for both of its containers.
There are no connection issues between the pod and the ovirt engine (tested with curl when the pod was in debug mode).
What did you expect to happen?
Expect to complete the upgrade successfully and have the cluster operator "storage" in "Available" status on version 4.6.1
How to reproduce it (as minimally and precisely as possible)?
Start with an OCP-over-RHV (IPI) cluster, version 4.5.16, and perform an upgrade to 4.6.1
Anything else we need to know?
This is preventing the cluster to complete an upgrade towards 4.6
It turned out that ovirt-csi-driver-node DaemonSet's pods are colliding with nmstate-handler DaemonSet's pods (part of CNV).
They are both listening to port 8080 on the host level.
Meaning, the issue is reproducing only on OCP-over-RHV clusters, version 4.6, with OpenShift Virtualization installed, at least from version 2.4.
From what I gathered from CNV network team, this port on nmstate is used for metrics and can be disabled.
Hi, we are trying to release CNAO but looks like we have some issues in the CI, it includes the fixes at kubernetes-nmstate to close port 8080.
(In reply to Benny Zlotnik from comment
Thanks for making the logs more readable. However the important thing is that the ovirt_password secret must accept any printable character. It is encrypted in base64 armor exactly to allow this. After ovirt-csi-driver reads it, it should quote it according to the destination. From what you say it seems that I cannot have a password that starts with quotes, either.
due to capacity constraints we will be revisiting this bug in the upcoming sprint | https://bugzilla.redhat.com/show_bug.cgi?id=1896320 | CC-MAIN-2021-10 | refinedweb | 438 | 58.69 |
Managing Farms and Nodes (Workflow Manager 1.0)
Updated: October 24, 2012
Service Bus is a pre-requisite for Workflow Manager. You must create and configure a Service Bus farm first and then configure a Workflow Manager farm.
The Service Bus steps that are required to create a new Workflow Manager farm are:
- Create the Service Bus farm.
- Add the machines to the Service Bus farm.
- Create the Service Bus service namespace that will be used by Workflow Manager.
- Get the Service Bus client configuration for the service namespace that will be used by Workflow Manager.
The cmdlets described in this section create or delete Workflow Manager farms and the nodes they contain.
Creating a New Workflow Manager Farm
The New-WFFarm cmdlet creates a new farm of Workflow nodes. To create a farm with auto generated certificates use the following format:
To create a farm with custom certificates use the following format:
New-WFFarm -WFFarmMgmtDBConnectionString <string> [-EncryptionCertificateThumbprint <String>] [-AdminGroup <string>] [-HTTPPort <int>] [-HTTPSPort <int>] [-InstanceMgmtDBConnectionString <string>] [-OutboundCertificateThumbprint <thumbprint>] [-ResourceMgmtDBConnectionString <string>] [-RunAsAccount <string>] [-SSLCertificateThumbprint <Thumbprint>]
If an error occurs or you are forced to reboot during farm creation, the farm management database that is created may be left in a corrupt state. In that case, when you try to join a farm the process may display an error indicating that the machine you are trying to add to the farm is not a part of any farm. When you encounter such an error, you must drop the farm management database and recreate it before you can join any new nodes to the farm.
The following table describes the options for this cmdlet.
Adding a New Node to a Farm
The Add-WFHost cmdlet adds a node to an existing farm. The Add-WFHost cmdlet has the following format:
The following table describes the options for this cmdlet.
The installation program auto generates the following certificates based on your selections when it adds the node to the farm:
Removing a Node from a Farm
The Remove-WFHost cmdlet removes a node from an existing farm.
This cmdlet has the following format:
If you want to rename a machine that belongs to a farm, you must use this cmdlet to remove it from the farm before you rename it, then add it back to the farm using the Add-WFHost cmdlet. If this cmdlet encounters a timeout error, you can still try to run the Add-WFHost cmdlet to try to add it back to the farm. If Add-WFHost succeeds, you can ignore the earlier timeout error.
The following table describes the options for this cmdlet.
When you leave a farm, any certificates that you installed on the machine remain. If you want to remove those certificates, you must remove them manually.
Workflow Manager 1.0 MSDN Community Forum
Build Date: | https://msdn.microsoft.com/library/azure/jj193483(v=azure.10).aspx | CC-MAIN-2015-35 | refinedweb | 470 | 60.55 |
How could you load a public or private key from a file, and then encrypt or decrypt data with it in Swift while using no libraries or APIs?
You could use OS X’s built-in OpenSSL to generate and encrypt or a combo of OS X and Swift.
OpenSSL commands:
In the end, the important files from an iOS standpoint are publicKey.der and privateKey.pfx. You will use publicKey.der to encrypt data, and privateKey.pfx to decrypt..
Apple Docs
Encrypting and Hashing Data
You can find examples of how to use these functions in Apple docs Certificate, Key, and Trust Services Tasks for iOS
import UIKit import CoreFoundation
Use a bridging header file for Security.h
#import <Security/Security.h> | https://codedump.io/share/8za01eOSGoOS/1/how-to-load-a-key-and-encrypt-with-rsa-swift | CC-MAIN-2016-44 | refinedweb | 123 | 67.04 |
UFDC Home | Help | RSS TABLE OF CONTENTS HIDE Front Cover Frontispiece Title Page Advertising Robinson Crusoe Back Cover Title: The adventures of Robinson Crusoe CITATION PAGE TURNER PAGE IMAGE ZOOMABLE Full Citation STANDARD VIEW MARC VIEW Permanent Link: Material Information Title: The adventures of Robinson Crusoe Physical Description: 31 p. : ill. ; 15 cm. Language: English Creator: Defoe, Daniel, 1661?-1731Campe, Joachim Heinrich, 1746-1818John Babcock and Son ( Publisher )S. Babcock & Co ( Publisher )Sidney's Press ( Printer ) Publisher: Pub. by J. Babcock & SonS. Babcock & Co. Place of Publication: New HavenCharleston Manufacturer: Sidney's Press Publication Date: 1825 Subjects Subject: Castaways -- Juvenile fiction ( lcsh )Shipwrecks -- Juvenile fiction ( lcsh )Survival after airplane accidents, shipwrecks, etc -- Juvenile fiction ( lcsh )Imaginary voyages -- 1825 ( rbgenr )Publishers' advertisements -- 1825 ( rbgenr )Robinsonades -- 1825 ( rbgenr ) Genre: Imaginary voyages ( rbgenr )Publishers' advertisements ( rbgenr )Robinsonades ( rbgenr )fiction ( marcgt ) Spatial Coverage: United States -- Connecticut -- New HavenUnited States -- South Carolina -- Charleston Notes Citation/Reference: NUC pre-1956 Citation/Reference: Brigham, C.S. Robinson Crusoe, Statement of Responsibility: with engravings. General Note: Cover title: Robinson Crusoe. General Note: This text is an abridged version of J.H. Campe's adaptation of Robinson Crusoe, except Crusoe is a native of New York. It was originally published in 1810 by Thomas Powers under the title, The New Robinson Crusoe. Cf. Brigham, C.S. Bibliography of the American editions of Robinson Crusoe to 1830. General Note: Frontispiece on inside front cover and text ends on inside back cover. General Note: Publishers' advertisement on verso of t.p. (both Babcocks) and p. 4 of cover (S. Babcock). Record Information Bibliographic ID: UF00072755 Volume ID: VID00001 Source Institution: University of Florida Holding Location: University of Florida Rights Management: All rights reserved, Board of Trustees of the University of Florida. Resource Identifier: oclc - 27606551 Table of Contents Front Cover Page 1 Frontispiece Page 2 Title Page Page 3 Advertising ROBINSON. CRUSOE. lSIDNEY'S PRESS: PUBLISHED BY J, BABCOCK & SON, AND S. BABCOCK &-CO. Chae -ston, S. C. + 1825. ^ -^ ^*^-^^--^*^-<&^'^-0 FRONTSPIZECE. The Baldwin Lbrary S iUmversity FoB Florida .11 L ADVENTURES OBI ROBINSON CRUSOE. \TI! H ELNGRAVINGS SII)NELS PRESS : P'UBLISIIHD IY J. BABCOCK SO0-N N--~Hacn; A',D S. BABCOCK &- CO. Charlcston. S235. BOOKS FOR CHZLDREBN, ORNAMENTED WITH PLATES, JUST PUBLISHED AND FOR SALE, BY J. BABCOCK & SON, AND S. BBAB3OCK & CO. WATTS' Divine Songs; Little Goody Two-Shoes; Holiday Sports; The Little Deserter; The History of Blue Beard; Robinson Crusoe; The Riddle Book; Whittington and his Cat; Pictured Alphabet; House that Jack Built; Ill-Natured Boy; Marriage of Cock Robin ; Present for Little Girls; History of Beasts; do. of Birds; Good-natured Little Boy; Death of Cock Robin. J. B. & SON are constantly publishing and add- ing to their Stock of Juvenile Books, which they ofer, at wholesale or retail, on reasonable terms. ROBZINSON CRTSOE. THOSE who are accustomed in their early days to do wrong, will with difficulty be persuaded to do right when they shall be grown to the age of maturity. The parents of Robinson left him early to the guidance of his own will, and, as he loved play bet- ter than his book, his youthful days passed without any attention being paid to the im- provement of his mind. Those hours which ought to have been spent in some useful study, were squandered away among idle boys in the street, to his own detriment. and the disgrace of his fond parents. One day, as Robinson was walking about the port of New-York, the place of his na- tivity, he met with one of his old compan- ions, whose father was master of a ship, and who was then on the point of sailing for London. The young sailor persuaded Robinson to go with him, which he did without taking leave of his parents, and thereby committed a rash and wicked ac- tion. The wind was favourable, they weighed anchor and proceeded on their voyage.- Robinson soon lost sight of the lands and 1ROIN3iON CRULSOE. nothing was left to his view but the ship in which he was sailing, the water beneath, and the sky above. The two following days, the winds and weather proved favourable; but, on the third, the heavens began to be overcast, and every thing seemed to forebode an ap- proaching storm. The air sometimes ap- peared to be on fire, and dreadful peals of thunder followed the vivid flashes of light- ning; the rain fell in torrents, and the wind blew the billows of the ocean to a tremen- dous height. One moment thie vessel ap- peared mounting to the clouds, and the next moment as if descending to the lowest re- gions : sometimes it lay on one side, and sometimes on the other. All on a sudden, crack crack went the deck. "'Heaven have mercy on us!" cried the sailors, who turned pale with terror, and lifted up their eyes to heaven. "What is the matter ?" said Robinson, who was al- ready half dead with fear. "Ah!" replied the seamen, "we are lost; a clap of light- ning has shivered our mizen mast to pie- ces, and our main mast is equally in dan- ger." We are lost!" exclaimed another voice from the inside of the ship: "we have sprung a leak, and we have already four feet water in the hold." At these words, Robinson, who was seat- ROBINSON CRUSOE. ed on the cabin floor, fell into a swoon, and entirely lost his senses. All the crew ran to the pumps, and exerted all the force they were masters of, to keep the ship from sink- ing. The captain fired signals of distress, which awakened Robinson from his swoon, but soon threw him into a worse state, when he supposed it to be the noise occa- sioned by the ship's foundering. After some time, a large boat came to their assistance; but it was with great dif- ficulty, on account of the dreadful waves, that they could get on board of it : and Robinson must have perished, had not a compassionate sailor thrown him on board of the boat. They had rowed but a short distance from the ship when they saw it sink. Fortunately, at this instant, the wind abated, otherwise the.boat, loaded as it was, must have been overwhelmed in the waves. However, after many dangers, they reach- ed the ship to which the boat belonged, and were all taken on board. Th_ ship that had received them then set sail for England, and, in a few weeks en- tered the mouth of the river Thames, and soon after anchored in the port of London. They then went on shore, happy in the idea of having escaped with their lives. Robinson amused himself for a few hours in reviewing some parts of this great city ; ROBINSON CRUSOE. but his stomach telling him he stood in need of something to eat, he went in search of the master of the vessel, who received him kindly, and made him sit down to the table with him. After dinner, the captain asked him what business brought him to London; when he replied that pleasure was his only motive, and at the same time confessed, that he had undertaken the voyage unknown to his pa- rents. When the captain heard this, he appeared much shocked, and advised him to return immediately to his native country, and at the knees of his parents to implore their forgiveness. Our imprudent adventurer then took his leave of the captain ; but, as he was going to enquire for a ship, different ideas crowd- ed on his mind. "If I now return (said he to himself,) my parents will punish nie for leaving them, and my companions will laugh at me for having seen only two or three streets in London." On reaching the quay, he found no ship ready to sail for United States; he met with the captain of a Guineaman, who very kindly invited him to take a cup of tea on board of his ship, and Robinson accepted the invitation. The consequence of this meeting was, that Robinson agreed to go to Guinea with the captain : and, at that ROBINSON CRUSOE. moment, he totally forgot his parents, friends, and country. They accordingly set sail with every ap- pearance of a pleasant passage; they had passed by Calais, cleared the channel, and got into the Atlantic Ocean, without any accident. The wind, however, now chang- ed, and was so violent, that it blew them on the coast of America. They had not sailed long on the coast, when they heard the report of cannon : and as they were at some distance from the land, they concluded they were signals of some ship in distress. They therefore steered their course towards the report of the guns, and soon discovered, by a flame at a distance, that it was a ship on fire, which soon blew up, ant nothing more was heard or seen of her. The captain, how- ever, bent his course that way, and contin- ;ed his signals, hoping he might thereby pick up some of the crew, who had proba- bly taken to their boats. It fortunately so happened, that the crew of the unfortu- nate ship, directed by the signal guns of the Guineaman, come up with her, and were all saved. The good and generous captain, having safely conveyed to Newfoundland the peo- ple he had saved, pursued his voyage to Guinea with a favourable wind, and arriv- ROBINSON CRUSOE. ed safe at Madeira, carrying with him thither the crew of another ship he had met with in the greatest distress. As the captain was obliged to stop here some time, in order to repair his ship, which had been damaged by the storm, Robinson, in a few days, began to be tired of inactivity, and wished for wings that he might, as quick as thought, fly over the whole universe. During this interval, a Portuguese ship arrived from Lisbon, bound to Brazil : and Robinson, getting acquainted with the cap- tain, heard him talk so much of gold and precious stones, that he conceived the most ardent desire to go there, and load his pockets with those valuable articles. He then informed his good friend, the captain, that he intended to sail in the Portuguese ship to Brazil, As the captain had just learned from Robinson himself that he had left his parents without their knowledge, he was very glad to get rid of him, fearing he should have no success while so impious a youth was on board. He therefore gave him leave to depart, gave him some money, and the best' advice he could. Robinson took a kind leave of his friend, went on board the Portuguese ship, and sailed for the Brazils. The voyage proved agreeable for several ROBINSON CRUSOE. days; at last, a violent storm blew from the south-east. After weathering the storm seven days, a sailor cried cut with excess of joy, that he saw land, which brought every one on deck. This joy, however, was of short du- ration; for they ran upon a bed of sand, where they remained fixed, and exposed to the furious waves, which rolled over the ship in vast bodies. All on a sudden the cry was general that the ship was filling with water. Every one instantly flew on deck, the long boat was handed out with incredible haste, and every one endeavoured who should first get in. The boat was so loaded, that it was easy to foresee, that it would never reach the shore, which was at a considerable distance. In fact a monstrous wave was seen rolling to- wards them, which buried them all in the bosom of the deep. The boat being thus overset, Robinson and the rest of the ship's company were exposed to the mercy of the ocean; but the same wave that overset them carried Cru- soe with it and threw him on the shore. He was thrown against a piece of rock with such violence, that the pain awoke him from the swoon into which terror had thrown him. He opened his eyes, and see- ing himself on land, he exerted all his ef- forts to gain the height of the shore. ROBINSON CRUSOE. When he had recovered himself, he rose to look round. Good God, what a sight! The ship, the boat, and his companions, had all disappeared nothing remained but a few planks of the ship, which the waves had thrown on shore. Himself was the only one who had escaped death. Weary and fatigued be wished to find some place where he might enjoy a little repose; but no hut was in view, nor could he find any place so secure as that of the birds, who passed their evenings in the trees. He clambered up into one, and there passed the night, having properly se- cui ed himself from falling while sleeping. In the morning he descended from the tree, in search of food, having eaten nothing the preceding day ; but his searches were vain, nothing presented itself that the human stomach could digest. He threw himself on the ground, shed a torrent of tears, and wished he had perished in the sea, rather ttiarn be left to die a miserable death by famine. He was now forming in his mind by what means he should put an end to his mis- erable existence, without waiting the tedi- ousness of dying with hunger, when he saw a sea falcon devouring a fish he had taken, and said to himself,, If God furnishes these birds ith food, he will not suffer me ROBINISON CRUSOE. to die with hunger." This idea renewed his spirits, and he exerted himself to walk along the sea shore. At last perceiving some shells lying on the sand, he ran to them, and to his inex- pressible joy found they were oysters. Though these saved him from perishing with hunger, yet he knew not where to take his nocturnal abode, secure from sav- ages and wild beasts, if such were there. His last night's lodging had been so uncom- fortable, that he dreaded repeating the ex- periment. What will it serve," said he to himself, "that I have escaped the fury of the sea, and have found something to keep me from dying with hunger, If I am at last to be devoured by wild beasts ?" Poor unfortunate wretch that I am!" exclaimed he, at the same time lifting up his trembling hands to heaven, Is it then true, that I am separated from all human beings, and that I must remain here with- out hopes of ever being taken from this desert island !" His attempts to discover a place where he might repose in safety, were for a long time ineffectual; but at last he came to a small mountain, the front of which was as perpendicular as a wall. He examined this side with great attention, and found in it a little hollow place, to which the entrance was very narrow. ROBINSON CRUSOE. As he had neither pick-axe nor chisel, with which he might easily have increased the dimensions of the hollow place, he set his head to work how to supply the want of them. He observed that there were sever- al willow trees near the spot; these he pulled up by the roots, with great difficulty, in order to plant them at the entrance of his intended cavern, and thereby make his habitation more comfortable and secure. He rose the next morning at break of day, when he hastened to the shore to appease his hunger with oysters, and then return to his labours. Having pursued a different route this morning, he, in his way to the shore, had the good fortune to meet with a tree that bore large fruit. He indeed knew not what they were, but hoped to find them good to eat, and immediately knocked down one. It was a nut of a triangular form, as large as the head of a small child. The outer bark was composed of threads, re- sembling hemp in appearance. The second bark, on the contrary, was as hard as a shell of a tortoise; and Robinson soon dis- covered that this would supply the place of a bason. The contents were a moist sub- stance, which tasted like sweet almonds, and in the midst of it, which was hollow, something like milk, of a sweet and agreea- ble flavour. This was indeed a most glo- ROBINSON CRUSOE. rious repast to the half famished Robinson. It was the cocoa-nut. His empty stomach could not be content- ed with one single nut, but he knocked down a second, which he ate with the same eagerness. His joy on this discovery filled Shis eyes with gratitude. The tree was very large, but it was the only one he saw a the place. He carried with him some oysters to serve him for his dinner, and he went cheer- fully to his labour. He had collected, on the borders of the sea, some large shells, which served him instead of a spade, and which very much accelerated his business. He soon afterwards discovered a tree, the inner bark of which formed a good substi- tute for cords or threads. He then continued his work with great assiduity, and planted tree against tree un- til he had formed a strong palisade before his intended habitation. Every night and morning he watered his little plantation from the neighboring rivulet, and for that purpose made use of the cocoa-nut shell. He soon had the pleasure to see his little plantation in a thriving condition, and very beautiful to the view. Having hitherto succeeded to his wishes, he began to think in what manner to hollow out the little cavity in the rock, so as to ROBINSON CRUSOE. make it big enough for his use. As he knew it would be in vain to attempt it with his hands alone, he set about looking for some tool that might assist him in his ope- ration. It was not long before he met with a large and sharp stone, which not only re- sembled a hatchet with a sharp edge, but had even a hole in it to receive a handle. After repeated trials, he fixed a handle to it, and gave it all the appearance of the tool so much wanted.-Searching further among the stones, he found one that an- swered the purpose of a chisel, and others that proved excellent substitutes for a mal- let. By the assistance of these tools his work was so far advanced, in the course of a few days, that he had made sufficient room to lie in comfortably. He collected a suffi- cient quantity of grass, of which he made hay by exposing it to the sun ; and of this made his bed. Robinson, in order that he might not forget the order of the days, and to know when Sunday returned, invented a new kind of Almanack. As he had neither pa- per, nor any thing else to write on, he made choice of four trees that stood close togeth- er, and whose barks were smooth. On the largest of the four trees, he every night W4. ROBINSON CHISOE. made a mark with a sharp stone, to shew that the day had passed. When seven marks had been made, he made a stroke through them all, and this was a mark for a week. Every time that he had made four marks in the second tree. hIe knew that one month had passed, for which he made one mark on the third tree. When lie had made twelve marks on the third tree, he then made one on the fourth, which denoted the year being completely finished. Necessity obliged him to make large ex- cursions into the island, in pursuit of the indispensable necessaries of life. Robinson rose in the morning with the sun, and prepared for his tour. He hung his pouch to a string, which he threw across his shoulders, put his hatchet instead of a sword, into his belt, and began his march. His first visit was to his cocoa-nut tree, on order to supply his pouch with two nuts. Having supplied himself with this excellent provision, he went in search of some oys- ters; and being supplied with these matters, to be eaten only in case of necessity, he took a hearty draft of water, and then pro- ceeded on his journey. At last he came to a brook, where he re- solved to sit down and dine. He seatte -himself under a large tree, whose spread- . ig boughs afforded a shade to a great dis-ii i; ROBINSON CRUSOE. tance, and joyfully regailed himself. But in the midst of his repast, and all on a sud- den, a distant noise terribly alarmed him. He looked around him on all sides, and at last perceived a whole troop of savage ani- mals approaching him, which had some re- semblance to our sheep, except that they had a hump on their backs, which, on that account, made them resemble little camels. These are called lamas-they are beasts of burden, and peculiar to some parts of South America. Robinson, having killed one of these crea- tures with his hatchet, threw it across his shoulders, and was carrying it home to his cavern, when, in his way thither, to his great joy, lie discovered seven or eight cit- ron trees, whose ripe fruit had fallen to the ground. He carefully collected them, and carried them home to his habitation. With a sharp stone he skinned the lama, whose flesh he so far roasted in the sun, as to make it eatable ; and some of his citrons squeezed into water afforded him an excel- lent and refreshing liquor. The skin he hung up to dry, and of this hereafter in- tended to make himself shoes. Robinson slept very soundly this night, and was angry with himself for lying so long. He was going out in order to make war on the lamas, but heaven prevented; ROBINSON CRUSOE. for he had no sooner put his head out of his cavern, than lie was obliged to return. It rained so violent that the ground was covered with water, and this accompanied with the most dreadful thunder, which broke with such violence on the rock, that it seemed to shake it to the very foundation. This so terrified poor Crusoe, who, from a want of proper education, was naturally timid and superstitious, that he ran out of his cavern, and fell down in a swoon. He remained for some time in a state of insensibility ; but, on recovering himself, found the rain, thunder, and lightning, had ceased. During the thunder storm, a flash of lightning had set fire to a large piece of wood, which had kept burning for a con- siderable time. Robinson now rejoiced to find that he had obtained some fire, and even from that very event which had before given him so much uneasiness. He im- mediately set about to keep up the fire con- stantly, and for that purpose built a kind of stone chimney, in his new habitation. He watched his fire attentively, that it might not go out, so that he could now roast the flesh of his lamas, in a manner fit for hu- man creatures to eat. Going one day to the borders of the sea to collect oysters, he could find only a few ; -2 ROBINSON CRUSOE. but, instead of them, discovered what gave him infinitely' more satisfaction. .Though he had nevereaten of them himself, he had heard that -they were wholesome and deli- cions food.-This was a fine large turtle, which weighed nearly an hundred pounds. Robinson, with some difficulty, carried the turtle home to his habitation, by the as- sistance of his hatchet penetrated the under shell, dressed a part of it for his dinner, and made of it a most sumptuous feast. As lie could not possibly eat it all at once, he was at loss how to preserve it from petre- faction. Necessity had taught him wisdom; and, as he had neither tub nor salt, he set his head to work, in what manner he should preserve the delicate food. He found the upper shell, which lie had not broken, would supply the place of a tub, and no- thing but salt was wanting. What a fool, I am !" said Robinson to himself, here is a plenty of sea-water, and that will supply the place of salt." He filled his shell with sea-water, put the remainder of the turtle in it, and it was thus preserved from putrefaction. These happy successes encouraged him to exert his genius in greater attempts. Wishing to have some living animal about him, and the lamas were the only animate iROBINSONS CRUSOE. beings except the spider, which he had seen on this island. But how he should get a pair of them alive into his possession was a great difficulty to surmount. He deter- mined to form one of the ends of his cords into a noose, and throw this over the head of the first lama that should approach him. He rose next morning early, and having furnished himself with his hatchet, provis- ions, and other things necessary, he pro- ceeded in his design of catching lamas alive. In the course of his journey he saw a pit at a distance, and advancing up to it, he found it was full of a white substance, How shall I express his joy, when, on tasting it, he found it to be excellent salt! he instantly filled his pockets with it. This discovery gave fresh spirits to Robinson, and he hastened to the spot where he hoped to trap a lama. It was not long before he ensnared a fe- male lama, which had two young ones, who, seeing their dam ensnared, came up without any appearance of fear, to Robin- son, and licked his hands, meaning there- by, perhaps, that they wished their dam to be set at liberty. Robinson then dragged the old lama to his habitation, and there the two young ones of course followed her. On his arri- val at his hut, he formed a little stall with ROBINSON ORUSOE. bricks, into which he put the lama and her young ones. It is impossible to express the joy Robinson felt on having compan- ions, even though they were not human. One day as he was sitting full of thought, the idea struck him to explore other parts of the island, as he had seen but a small part of it; he determined therefore to pro- ceed on his tour; the next morning he load- -ed one of his lamas with four days provis- ions, equipped 'himself, and having implor- ed the divine protection, set out on his jour- ney. He had reached the centre of the island when he saw the impression of human feet on the sand, at which he grew pale and mo- tionless, concluding that if there were inhabitants on the island, they could be only savages or cannibals, not less to be dreaded than the beasts of the forest. A little further he discovered a pit, in which were evident marks of a fire extinguished, and about it were scattered the hands and feet, sculls and other bones of human crea- tures, the remains of a horrible and unnat- ural repast. He returned home and put his habitation in the best state of defence, and cut a sub- terraneous passage from his house, through which he might escape in case of an attack. Some years passed without any thing ROBINSON CRUSOE. material occurring. One clear and sereln morning, he perceived the smoke rising at a distance; his fright was followed by cu- riosity, and he hasted to the top of the hill, at the foot of which was his grotto; he there clambered a high tree, from which he discovered several canoes fastened to the shore, and savages dancing round a great fire; presently two poor creatures were dragged from the canoes, one of the savages knocked one of them down, and two others fell immediately upon him to cut him to pieces and prepare for a feast. The other captive, while the savages were butchering his companion, took to flight, and ran with great swiftness near to Robin- son's habitation. Robinson descended the tree and pro- ceeded to the spot where the fugitive had concealed himself. Robinson made signs for him to follow him, which he did with evident marks of fear. In a little time the fears of the indian were removed, and he made Robinson to understand that he was willing to become his slave; for though he understood not the language of the Indian, lie was charmed with the sound of a human voice, to whiclhhe had long been a stranger. As this affair happened on a Friday, Robinson gave to his companion the name ,if FRIDAv. He gave him a skin to cover ROBINSON CRUSOE. himself with, and made him set down by him. Friday obeyed in the most respectful manner, offering a lance to Robinson, and holding the point to his own breast, in to- ken of absolute submission to his will. Robinson, ever since his arrival on this island, had experienced no felicity like the present; all his fears centered in the idea, that the savages might return in quest of their victim, and demolish his habitation. He therefore set about making his cottage as strong as possible, by throwing up en- trenchments around it, and fortifying it with all the-methods he could devise. Dur- ing this time Robinson endeavored to learn Friday something of the English language, and the man seemed no less de- sirous. In less than six months he made such progress, that he could make himself tolerably well understood. One morning as Robinson was walking towards the sea-shore, he was much pleas- ed with the sight of a ship, though at a great distance. Robinson soon perceived her to be an Amperican vessel, which was steering for the island, and soon came to anchor. Surprise, fear, and joy, seized Robinson by turns, and also his attendant, Friday. The sight of a vessel, which might take him off that island, gave him joy; but this was succeeded by surprise and fear, be- ROBINSON CRUSOE. cause he could not comprehend the motive that could bring a ship on these coasts; but supposed she must have been driven out of her course by tempestuous and contrary winds. This turneiTout as Robinson sup- posed ; they cast anchor near the island, and sent their boat on shore in search of ROBINSON CRUSOE. res h water, and were much surprised at finding a white man on an island in so des- olate a part of the globe. Robinson was quite overjoyed at the prospect he now had of once more returning lo his native home, and the great pleasure he enjoyed in the company and conversation of man, from whom he had been so long separated. Af- ter taking in a small supply of water, they set off for the ship, with Robinson and his companion. The next day they again went on shore for more water: Robinson now took from his cabin such things as he thought might be useful to-lm oni time passage; he then took a last farewell of his habitation, and the water cask being filled, they all returned on board, and the ship sailed. A favourable voyage at length brought him in sight of his native country, and the heart of Robinson was expanded with joy ; when, suddenly, a violent tempest arose which in spite of all the efforts of the sea- men drove the ship on a sand bank, and forced away the keel and part of the hold. The water rushed in with such violence, that the only chance of escaping was in the boat, in which they happily reached the shore. When he came in sight of his native city, he could not help shedding tears. He had already learned that his mother, whom ROBINSON CRUSOE. he so tenderly loved, had paid the debt of nature. On his arrival at New-York, he hastened to an inn, and thence sent a mes- senger to prepare his father for the recep- tion of his supposed lost son. The mes- senger had orders to tell the father, that a person had arrived with news from his son, who would be -with him in a few days. The supposed stranger was introduced, and after a short interview, declared himself his son.-Let my readers judge, for it is impossible to describe, how great was the tenderness of this meeting. Friday, astonished at scenes so entirely nev to him, gaped about him in silence, without being able to fix his attention on any particular object. In the mean time, the arrival of Robinson, and his surprising adventures, engrossed the conversation of all companies; every one wished to see him ant hear his history, and he was em- ployed from morning to night, in relating his adventures. FLNIS. * BOOKS AND STATIONARY. S. BABCOCK & CO. 329 KING-STREET, CHARLESTON, S. C. Have on hand and for sale a good assortment of BOOKS A.D STATj~slBtJ, Including BLANK BOOKS, of various descrip- tions, and the most useful -SCHOOL BOOKS, TOGETHER WITH POCKET BOOKS, PAINTS, LEAD PBN-" cils, Penknives, Silver and Plated pencil Cases, Pewter, Stone, Glass, aw tPock- et Inkstands, Foolscap and Letter Paper, Quills, Pens, Japan ' and Durable Ink, Wa- fers, Sealing Wax, Slates, &c. &c. And a great variety of Children's Books. Contact Us | Permissions | Preferences | Technical Aspects | Statistics | Internal | Privacy Policy © 2004 - 2010 University of Florida George A. Smathers Libraries.All rights reserved. Acceptable Use, Copyright, and Disclaimer Statement Last updated October 10, 2010 - - mvs | http://ufdc.ufl.edu/UF00072755/00001 | CC-MAIN-2015-06 | refinedweb | 5,693 | 73.61 |
Sets a hotkey that calls a user function.
HotKeySet ( "key" [, "function"] )
It is recommended to use lower-case keys/characters (e.g. "b" and not "B") when setting hotkeys to avoid errors as with some keyboard layouts upper and lower case keys may be mapped differently.
Keyboard with 102 keys as the Hungarian keyboard need to use "{OEM_102}" to capture the "í" key.
If two AutoIt scripts set the same hotkeys, you should avoid running those scripts simultaneously as the second script cannot capture the hotkey unless the first script terminates or unregisters the key prior to the second script setting the hotkey. If the scripts use GUIs, then consider using GUISetAccelerators as these keys are only active when the parent GUI is active.
A hotkey-press *typically* interrupts the active AutoIt function/statement and runs its user function until it completes or is interrupted. Exceptions are as follows:
1) If the current function is a "blocking" function, then the key-presses are buffered and execute as soon as the blocking function completes. MsgBox() and FileSelectFolder() are examples of blocking functions. Try the behavior of Shift-Alt-d in the Example.
2) If you have paused the script by clicking on the AutoIt Tray icon, any hotkeys pressed during this paused state are ignored.
The following hotkeys cannot be set:
GUISetAccelerators, Send
#include <MsgBoxConstants.au3> ; Press Esc to terminate script, Pause/Break to "pause" Global $g_bPaused = False HotKeySet("{PAUSE}", "TogglePause") HotKeySet("{ESC}", "Terminate") HotKeySet("+!d", "ShowMessage") ; Shift-Alt-d While 1 Sleep(100) WEnd Func TogglePause() $g_bPaused = Not $g_bPaused While $g_bPaused Sleep(100) ToolTip('Script is "Paused"', 0, 0) WEnd ToolTip("") EndFunc ;==>TogglePause Func Terminate() Exit EndFunc ;==>Terminate Func ShowMessage() MsgBox($MB_SYSTEMMODAL, "", "This is a message.") EndFunc ;==>ShowMessage
#include <MsgBoxConstants.au3> ; Press Esc to terminate script, Pause/Break to "pause" Global $g_bPaused = False HotKeySet("{PAUSE}", "HotKeyPressed") HotKeySet("{ESC}", "HotKeyPressed") HotKeySet("+!d", "HotKeyPressed") ; Shift-Alt-d While 1 Sleep(100) WEnd Func HotKeyPressed() Switch @HotKeyPressed ; The last hotkey pressed. Case "{PAUSE}" ; String is the {PAUSE} hotkey. $g_bPaused = Not $g_bPaused While $g_bPaused Sleep(100) ToolTip('Script is "Paused"', 0, 0) WEnd ToolTip("") Case "{ESC}" ; String is the {ESC} hotkey. Exit Case "+!d" ; String is the Shift-Alt-d hotkey. MsgBox($MB_SYSTEMMODAL, "", "This is a message.") EndSwitch EndFunc ;==>HotKeyPressed | https://www.autoitscript.com/autoit3/docs/functions/HotKeySet.htm | CC-MAIN-2017-04 | refinedweb | 376 | 55.64 |
Chatlog 2012-04-19
From Government Linked Data (GLD) Working Group Wiki
Revision as of 20:00, 19 April 2012 by Commonscribe (Talk | contribs)
See original RRSAgent log or preview nicely formatted version.
Please justify/explain all edits to this page, in your "edit summary" text.
13:51:29 <RRSAgent> RRSAgent has joined #gld 13:51:29 <RRSAgent> logging to 13:51:31 <trackbot> RRSAgent, make logs world 13:51:31 <Zakim> Zakim has joined #gld 13:51:33 <trackbot> Zakim, this will be GLD 13:51:33 <Zakim> ok, trackbot; I see T&S_GLDWG()10:00AM scheduled to start in 9 minutes 13:51:34 <trackbot> Meeting: Government Linked Data Working Group Teleconference 13:51:34 <trackbot> Date: 19 April 2012 13:56:12 <fadmaa> fadmaa has joined #gld 13:56:18 <Zakim> T&S_GLDWG()10:00AM has now started 13:56:27 <Zakim> + +3539149aaaa 13:56:35 <mhausenblas> Zakim, aaaa is me 13:56:35 <Zakim> +mhausenblas; got it 13:56:50 <luis_bermudez> luis_bermudez has joined #gld 13:58:17 <Zakim> + +1.267.481.aabb 13:58:27 <Zakim> +George_Thomas 13:58:34 <sandro> sandro has changed the topic to: Government Linked Data (GLD) WG -- 13:58:43 <boris> boris has joined #gld 13:58:52 <Zakim> +Sandro 13:58:55 <mhausenblas> Zakim, aabb is luis_bermudez 13:58:55 <Zakim> +luis_bermudez; got it 13:59:22 <George> George has joined #gld 13:59:38 <BenediktKaempgen> BenediktKaempgen has joined #gld 13:59:50 <Zakim> +davidwood 13:59:58 <bhyland> zakim, davidwood is me 13:59:58 <Zakim> +bhyland; got it 14:00:26 <Zakim> + +49.721.aacc 14:00:30 <luis_bermudez> scribe:luis_bermudez 14:00:44 <Zakim> + +1.202.566.aadd 14:00:52 <luis_bermudez> chair: Bernadette 14:00:53 <BenediktKaempgen> zakim, who's here? 14:00:53 <Zakim> On the phone I see mhausenblas, luis_bermudez, George_Thomas, Sandro, bhyland, +49.721.aacc, +1.202.566.aadd 14:00:55 <Zakim> On IRC I see BenediktKaempgen, George, boris, luis_bermudez, fadmaa, Zakim, RRSAgent, MacTed, mhausenblas, bhyland, danbri_, trackbot, sandro 14:01:07 <Zakim> + +3539149aaee 14:01:24 <BenediktKaempgen> zakim, aacc is BenediktKaempgen 14:01:24 <Zakim> +BenediktKaempgen; got it 14:01:36 <martinAlvarez> martinAlvarez has joined #gld 14:01:42 <DeirdreLee> DeirdreLee has joined #gld 14:01:42 <bhyland> zakim, aaee is fadmaa 14:01:42 <Zakim> +fadmaa; got it 14:01:47 <BenediktKaempgen> hello 14:02:02 <Mike_Pendleton> Mike_Pendleton has joined #gld 14:02:06 <bhyland> +1 14:02:17 <bhyland> Minutes: 14:02:18 <George> 14:02:23 <mhausenblas> +1 14:02:26 <bhyland> +1 14:02:34 <Mike_Pendleton> +1 14:02:35 <DaveReynolds> DaveReynolds has joined #gld 14:02:39 <Zakim> +??P22 14:02:42 <luis_bermudez> RESOLUTION: approved Minutes 14:02:42 <BenediktKaempgen> +1 14:02:46 <fadmaa> Zakim, +3539 is DeirdreLee 14:02:48 <martinAlvarez> zakim, ??p2 is me 14:02:50 <gatemezi> gatemezi has joined #gld 14:03:08 <luis_bermudez> agenda: 14:03:09 <Zakim> sorry, fadmaa, I do not recognize a party named '+3539' 14:03:11 <Zakim> +martinAlvarez; got it 14:03:23 <Zakim> +??P25 14:03:37 <gatemezi> Zakim, who's here? 14:03:39 <Zakim> On the phone I see mhausenblas, luis_bermudez, George_Thomas, Sandro, bhyland, BenediktKaempgen, +1.202.566.aadd, fadmaa, martinAlvarez, DaveReynolds (muted) 14:03:42 <Zakim> On IRC I see gatemezi, DaveReynolds, Mike_Pendleton, DeirdreLee, martinAlvarez, BenediktKaempgen, George, boris, luis_bermudez, fadmaa, Zakim, RRSAgent, MacTed, mhausenblas, 14:03:51 <Zakim> ... bhyland, danbri_, trackbot, sandro 14:03:51 <luis_bermudez> TOPIC: opportunities to liaise 14:03:54 <martinAlvarez> zakim, mute me 14:04:04 <Zakim> martinAlvarez should now be muted 14:04:09 <Zakim> +??P30 14:04:16 <GeraldSteeman> GeraldSteeman has joined #GLD 14:04:20 <boris> zakim, ??P30 is me 14:04:21 <Zakim> +boris; got it 14:04:50 <bhyland> NISO Webinar on Schema.org and Linked Data, speaker: Danbri. 25-April-2012 @ 13.00 US ET, see 14:04:59 <Zakim> + +33.4.93.00.aaff 14:05:11 <Zakim> + +1.757.604.aagg 14:05:31 <GeraldSteeman> Zakim, aagg is me. 14:05:33 <Zakim> +GeraldSteeman; got it 14:05:41 <fadmaa> q+ 14:05:56 <luis_bermudez> TOPIC: Working Draft 14:06:05 <bhyland> zakim, who is speaking? 14:06:16 <Zakim> bhyland, listening for 10 seconds I heard sound from the following: fadmaa (63%), +33.4.93.00.aaff (20%) 14:07:42 <bhyland> wow fadmaa, thanks for raising this issue re: DCAT 14:08:00 <DaveReynolds> Yes, org and data cube are both live and up to date. Though only org is in a w3c namespace. 14:08:08 <Yigal> Yigal has joined #gld 14:08:13 <luis_bermudez> ??: no possible to get ontology representation of DCAT 14:09:00 <Zakim> +??P36 14:09:12 <Yigal> zakim, ??p36 is me 14:09:12 <Zakim> +Yigal; got it 14:09:15 <luis_bermudez> ACTION: fadmaa to work on ontology and namespace 14:09:15 <trackbot> Created ACTION-66 - Work on ontology and namespace [on Fadi Maali - due 2012-04-26]. 14:09:40 <George> q? 14:09:44 <George> ack fadmaa 14:11:16 <cgueret> cgueret has joined #gld 14:12:24 <luis_bermudez> DaveReynolds: Tracked minor changes as actions. No main issues were raised. 14:13:10 <mhausenblas> 14:14:24 <luis_bermudez> mhausenblas: Collecting feedback. Need to structure it better. 14:15:01 <luis_bermudez> TOPIC: Best Practices Deliverable 14:16:06 <BenediktKaempgen> q+ 14:16:07 <luis_bermudez> bhyland: Working on it. Expect review on April. 14:18:03 <BenediktKaempgen> asking about ld cookbook 14:18:19 <luis_bermudez> sandro: Provenance - Advanced discussion on the Semantic Web coordination meeting related to proper modeling agents. 14:19:25 <luis_bermudez> bhyland: 2 weeks ago there was a related action item. Need further discussion 14:19:33 <George> q? 14:19:37 <George> ack BenediktKaempgen 14:20:55 <bhyland> Question by Benedikt referencing email to public list "Feedback on Ingredients for High Quality Linked Data section of Linked Data Cookbook 14:20:55 <bhyland> Date: April 16, 2012 10:28:20 AM EDT 14:21:37 <bhyland> @Benedikt, by all means please respond with links to paper & guidance. Please respond to public-gld-comments as I'd like to read it too :-) 14:22:08 <luis_bermudez> BenediktKaempgen: Question about responding to an email from Charles M. Heazel related to the best practice document and business rules. Is is ok to respond ? 14:22:18 <George> 14:22:42 <Zakim> + +91.80.67.84.aahh 14:22:56 <BenediktKaempgen> thanks George! 14:23:25 <George> q? 14:23:34 <Zakim> +??P9 14:24:03 <Biplav> Biplav has joined #gld 14:24:06 <bhyland> zakim, P9 is cgueret 14:24:06 <Zakim> sorry, bhyland, I do not recognize a party named 'P9' 14:24:55 <boris> zakim, ??P9 is cgueret 14:24:55 <Zakim> +cgueret; got it 14:24:56 <luis_bermudez> TOPIC: Engagement discussion 14:25:55 <luis_bermudez> George: break out groups - meeting biweekly ? 14:27:38 <George> zakim, aahh is Biplav 14:27:38 <Zakim> +Biplav; got it 14:28:47 <fadmaa> +1 to alternate weeks between vocabularies and other deliverables 14:28:54 <BenediktKaempgen> +1 to bhyland suggestion 14:28:56 <cgueret> +1 too 14:28:59 <Mike_Pendleton> +1 to Bernadette's suggestion 14:29:02 <boris> +1 too 14:29:02 <mhausenblas> +1 14:29:04 <gatemezi> +1 14:29:06 <martinAlvarez> +1 14:29:10 <luis_bermudez> bhyland: Alternate weeks and focus the discussion, while increasing enthusiasm and momentum, based on 2 types of efforts: vocabulary development and best practices cookbooks. 14:29:50 <luis_bermudez> bhyland: How to make this more effective and super fun ? 14:29:59 <fadmaa> q+ 14:30:23 <bhyland> Suggestion: Narrow the focus on the weekly meetings. 14:30:49 <fadmaa> Zakim, ack me 14:30:49 <Zakim> I see no one on the speaker queue 14:30:51 <bhyland> fadmaa: Consider doing more comprehensive coverage of a given topic rather than rushing in favor of time 14:31:52 <bhyland> q+ 14:32:47 <luis_bermudez> George: Seems that there is support for alternating deliverable topcis. Do we have support for breakout groups to get together in addition to the weekly calls. Maybe having a focused breakout call every other week. 14:33:01 <bhyland> George: Is anyone in favor of break out groups to engage in deeper a discussion, to better prepare for the bi-weekly focused calls? 14:33:33 <Biplav> +1, with a comment. Breakout, focused call is a good idea. 14:34:05 <DaveReynolds> q+ 14:34:09 <bhyland> ack 14:34:13 <DaveReynolds> ack me 14:34:21 <bhyland> ack me 14:35:15 <bhyland> DaveReynolds: Use email as the "breakout" group for discussion. Use the core time on a telecon to make actual discussions. Recommend using email more actively. 14:35:16 <bhyland> +1 14:35:25 <DaveReynolds> zakim, mute me 14:35:25 <Zakim> DaveReynolds should now be muted 14:35:35 <luis_bermudez> DaveReynolds: Breakout groups can interact via email. Calls used for consolidation. 14:36:54 <BenediktKaempgen> +1 to alternate week calls + breakout mails 14:37:17 <bhyland> +1 14:37:20 <sandro> +1 only do breakout calls when it seems necessary under the circumstances 14:37:27 <mhausenblas> +1 14:37:28 <fadmaa> +1 14:37:28 <cgueret> +1 14:37:31 <gatemezi> +1 14:37:32 <boris> +º 14:37:34 <Mike_Pendleton> +1 only do BCs when nec 14:37:38 <DaveReynolds> +1 only breakout calls if needed, not as routine 14:37:39 <boris> +1 14:37:46 <luis_bermudez> +1 DaveReynolds: Breakout groups can interact via email. Calls used for consolidation. 14:38:29 <George> q? 14:39:32 <luis_bermudez> bhyland: Speakers are welcome to participate at the GLD-WG meetings. Ideas are welcome. 14:40:07 <George> q? 14:40:16 <Biplav> q+ 14:40:43 <bhyland> Please think about what people in your broader network are producers or consumers of LOD that could speak with this group to stimulate thinking, use cases and provide input that will improve the output of this group. 14:41:28 <luis_bermudez> George: Any other ideas to increase visibility of the work with other communities? 14:41:33 <bhyland> Biplav: Do we have a small, concise deck we share for our communcations about GLD WG 14:41:36 <bhyland> q+ 14:41:52 <George> ack Biplav 14:41:53 <luis_bermudez> Biplav: Is there is a small presentation that we can use to present in meetings? 14:42:04 <George> ack bhyland 14:42:20 <Biplav> q+ 14:43:05 <George> ack Bi 14:43:08 <bhyland> zakim, ack me 14:43:08 <Zakim> I see no one on the speaker queue 14:43:20 <luis_bermudez> ACTION: bhyland to send presentation 14:43:20 <trackbot> Created ACTION-67 - Send presentation [on Bernadette Hyland - due 2012-04-26]. 14:44:40 <luis_bermudez> Biplav: Important to have a consistent message and advertising in new areas all over the world. 14:45:06 <bhyland> Biplav: Supports (unofficially) both IBM and emerging countries and it would be good to have a consistent message on the benefits, uses, etc of government use of & publication of LOD 14:45:08 <luis_bermudez> TOPIC: Announcements 14:46:16 <bhyland> Next F2F in conjunction with 29 Oct - 2 Nov 2012 in Lyon, France at the Cité Centre de Congrès de Lyon 14:46:24 <danbri> danbri has joined #gld 14:46:25 <BenediktKaempgen> most likely, I will be at EDF presenting PlanetData work in the hackathon 14:46:44 <bhyland> @Danbri, do you want to say anything about: NISO Webinar on Schema.org and Linked Data, speaker: Danbri. 25-April-2012 @ 13.00 US ET, see 14:46:51 <bhyland> Was mentioned in today's call ... 14:47:21 <danbri> just... come along if you're interested in how schema.org will fit into the Linked Data world. But i'll also post slides in a few days... 14:48:01 <BenediktKaempgen> EDF hackathon 14:48:01 <George> q? 14:48:12 <luis_bermudez> ACTION: BenediktKaempgen to put link about presentation when available 14:48:12 <trackbot> Sorry, couldn't find user - BenediktKaempgen 14:48:33 <Zakim> -GeraldSteeman 14:49:44 <luis_bermudez> ACTION: BenediktKaempgen to put link about presentation when available 14:49:44 <trackbot> Sorry, couldn't find user - BenediktKaempgen 14:49:53 <bhyland> So yes, Biplav confirms native support of RDF for DB2 is a very important announcement/commitment by IBM. 14:51:31 <BenediktKaempgen> EDF = European Data Forum 2012 14:52:36 <George> q? 14:54:13 <luis_bermudez> George: Next Thursday Focus? 14:54:19 <luis_bermudez> bhyland: Review Best Practices and community directory. 14:55:24 <George> q? 14:56:58 <DaveReynolds> bye, thanks 14:57:05 <Zakim> - +1.202.566.aadd 14:57:06 <Zakim> -Sandro 14:57:06 <Zakim> -DaveReynolds 14:57:06 <cgueret> bye 14:57:07 <Zakim> -martinAlvarez 14:57:07 <Zakim> -mhausenblas 14:57:08 <Zakim> -bhyland 14:57:09 <BenediktKaempgen> Thanks George 14:57:09 <Zakim> -fadmaa 14:57:14 <Zakim> -Yigal 14:57:15 <Zakim> -BenediktKaempgen 14:57:16 <Yigal> Yigal has left #gld 14:57:17 <Zakim> -George_Thomas 14:57:17 <martinAlvarez> martinAlvarez has left #gld 14:57:21 <gatemezi> -gatemezi 14:57:22 <Zakim> -boris 14:57:23 <Zakim> -cgueret 14:57:27 <George> luis_bermudez: are you okay to complete the minutes? 14:57:31 <luis_bermudez> rrsagent, generate minutes 14:57:31 <RRSAgent> I have made the request to generate luis_bermudez 14:58:36 <Zakim> -Biplav 14:59:25 <luis_bermudez> rrsagent, generate minutes 14:59:25 <RRSAgent> I have made the request to generate luis_bermudez 15:00:16 <gatemezi> gatemezi has joined #gld 15:02:55 <lbermude2> lbermude2 has joined #gld 15:03:24 <gatemezi> RRSAgent, set logs world-visible 15:03:38 <gatemezi> RRSAgent, generate minutes 15:03:38 <RRSAgent> I have made the request to generate gatemezi 15:04:02 <lbermude2> George - the minutes are not available at the 15:04:07 <gatemezi> RRSAgent, generate minutes 15:04:07 <RRSAgent> I have made the request to generate gatemezi 15:05:22 <Zakim> -luis_bermudez 15:05:29 <Zakim> - +33.4.93.00.aaff 15:05:30 <Zakim> T&S_GLDWG()10:00AM has ended 15:05:30 <Zakim> Attendees were +3539149aaaa, mhausenblas, +1.267.481.aabb, George_Thomas, Sandro, luis_bermudez, bhyland, +49.721.aacc, +1.202.566.aadd, +3539149aaee, BenediktKaempgen, fadmaa, 15:05:30 <Zakim> ... martinAlvarez, DaveReynolds, boris, +33.4.93.00.aaff, +1.757.604.aagg, GeraldSteeman, Yigal, +91.80.67.84.aahh, cgueret, Biplav 15:06:33 <lbermude2> Chair: George 15:06:45 <lbermude2> rrsagent, generate minutes 15:06:45 <RRSAgent> I have made the request to generate lbermude2 15:08:36 <DaveReynolds> DaveReynolds has left #gld 17:22:33 <Zakim> Zakim has left #gld # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000241 | http://www.w3.org/2011/gld/wiki/index.php?title=Chatlog_2012-04-19&oldid=2478 | CC-MAIN-2015-11 | refinedweb | 2,458 | 59.94 |
Templating
Introduction
CForms templates make it possible to define the layout for your form in a simple way. The basic principle is that you create a template file containing <ft:widget elements where you want to insert the widgets. After execution of the template, these will be replaced by the XML representation of the corresponding widgets. The ft:widget elements can thus be embedded in e.g. a HTML layout. After processing of the template tags you will not specific to one form, as it simply needs to know how to translate individual widgets to HTML, and does not have to create the complete page layout. CForms contains just such an XSLT so you don't have to write it yourself (except if you need to do heavy customisation). The image below illustrates this process.
Available implementations
There are two different mechanisms available to process the form tags:
The JXTemplate-based approach is the newer one and is more powerful as you can make use of standard JX constructs in addition to the CForms template tags.
Locating the form instance object
In the most common case, the form object is passed by the flow controller to the pipeline in the view data under a key named "CocoonFormsInstance". There are alternative ways to locate the form, though these are dependent on the template implementation (JX or the transformer).
Forms transformer element reference
The elements to which the forms transformer reacts are all in the "ft" (Forms Template) namespace, which is identified by the following URI:
These will generally be replaced by elements in the "fi" (Forms Instance) namespace, which is identified by the following URI:
A template should always consist of a ft:form-template tag which then contains the tags to insert the individual widgets. Widgets are most often inserted using the ft:widget tag, but some widgets might need specific tags. See the descriptions of the individual widgets for the appropriate template tags.
ft:widget
The ft:widget element is replaced by the forms transformer by the XML representation of a widget. Which widget is specified by the id attribute. The ft:widget element can contain a fi:styling element containing parameters to influence the styling process (the XSLT). The forms transformer will simply copy the fi:styling element over to its output.
For example:
<ft:widget <fi:styling <ft:widget/>
will be replaced by:
<fi:field [... label, validation errors, ...] <fi:styling </fi:field>
ft:widget-label
The ft:widget-label element will be replaced by the forms transformer with the label of a certain widget (specified by an id attribute). The label will not be wrapped in another element.
ft:continuation-id
The ft:continuation-id element will be replaced by the forms transformer by:
<fi:continuation-id> ID-of-the current-continuation </fi:continuation-id>
This might be useful for embedding the continuation ID in a hidden form field, for example.
Errors and Improvements? If you see any errors or potential improvements in this document please help us: View, Edit or comment on the latest development version (registration required). | http://cocoon.apache.org/2.1/userdocs/publishing/templating.html | CC-MAIN-2016-18 | refinedweb | 512 | 52.09 |
Details
Description
List comprehensions are present in C#, Clojure, Common Lisp, Erlang, Haskell, JavaScript, Boo, OCaml, Perl, Python, Scala, Scheme and other languages. They are obviously missing from Groovy.
Activity
Perhaps we need more examples or requirements. I think Groovy needs some kind of Lazy lists bundled in, but we mostly know how to do that, and if Groovy had that built-in, then I struggle to find good examples to support a list comprehension syntax. E.g., take this python example from
GROOVY-54:
[ (x,y) for x in range(5) for y in range(3) if (y+x) % (x+2) == 0 ] // python
I would expect any Java developer to be able to convert to something like this:
def result = [] for (x in 0..<5) for (y in 0..<3) if ((y + x) % (x + 2) == 0) result << [x, y]
And slightly more season Groovy folk should have no trouble giving this:
[0..<5, 0..<3].combinations().findAll{ x, y -> (y + x) % (x + 2) == 0 }
which is relatively easy to understand. And if it was backed by lazy lists ... do I need anything more?
My last comment wasn't in anyway meant as discouragement. I think this is a great area to explore. I just think we need to tease apart any tangled requirements ... which might lead to some low hanging fruit ... which might lead to some more traction.
The first Groovy example above is exactly the problem, the declaration of a data structure should look like the declaration of a data structure not a bit of imperative code that manipulates a data structure!
The second example has more merit in many ways but is still not really obvious as the declaration of a data structure. This examples handles the iterator and selector ideas, the element missing is the expression. Let me amend the target declaration to:
[ Flob ( x + y ) for x in range(5) for y in range(3) if (y+x) % (x+2) == 0 ] // python
where we assume Flob is some form of callable delivering a value (which includes object instantiation). Presumably this maps
to something like:
[0..<5, 0..<3].combinations().findAll{ x, y -> (y + x) % (x + 2) == 0 }.collect { x , y -> Flob ( x + y ) }
So the question is: why do the functional languages that allow working in this way (e.g. Miranda, Haskell, etc.) provide comprehensions as well. I would suggest two possible reasons:
0. It doesn't look like declaring a data structure.
1. It is horrendously inefficient due to all the new list instances created.
I think the argument that "if Haskell and Erlang as well as Python thinks it is an appropriate coding structure, then Groovy should do something along these lines" applies very nicely.
Discussing your two points (in reverse order).
Point 1: "horrendously inefficient"
If we implemented in terms of lazy lists, this doesn't need to be the case at all. In fact C#'s query expressions aren't far off what we have above. If I understood correctly, C#'s equivalent when converted back to a Groovy equivalent would be something like:
(0..<5).collectMany({ x -> (0..<3) }, { x, y -> [x, y] }).findAll{ x, y -> (y+x) % (x+2) == 0 }
which if using lazy lists can be made efficient in the general case - not that I think it comes into play here.
Point 0: "looking like a data structure declaration"
The LINQ form of above (again given a Groovy flavor) would be something like:
from x in 0..<5 from y in 0..<3 where (y+x) % (x+2) == 0 select [x, y]
We could currently achieve something similar using source or perhaps AST re-writing at the expense of needing a way to indicate that such re-writing was required in a particular source file.
So, do these steps get us far enough or do we need a special list comprehension syntax?! | http://jira.codehaus.org/browse/GROOVY-4105?focusedCommentId=214299&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-15 | refinedweb | 644 | 72.66 |
I consider myself something of a connoisseur of technical books. I pride myself on my small but expanding collection. Of course, an important part of maintaining a useful book collection is knowing what information your books contain, so you can find it when you need it. Though I often forget the details of a solution I read about, I remember where I saw it, and what the original problem was. My purpose in writing this piece is two-fold: to show you how I solved a particular problem, and to point you at the books I used along the way.
When I was given this task, some of the solution was already defined. There was an existing architecture with a client and server sending messages over TCP/IP. A requirement had arisen for an additional client interface for CGI processes. Since each CGI process is short-lived, creating a new socket connection in each process would be too expensive. So the idea was to route messages through a daemon that maintains a persistent connection to the server. Because the daemon can be located on the same Unix box as the CGI processes, the two can use an Interprocess Communication (IPC) mechanism with less overhead than sockets. This is where I started working from.
The first decision was to choose the exact IPC mechanism to use. Unix has several, including pipes, fifos, sockets, message queues, and shared memory. I ruled out pipes because they require a parent-child relationship between processes, which I did not have. Fifos, like pipes, are best suited for one-to-one communication, whereas I needed manyto- one. I had heard that message queues were unnecessarily complex and grandiose, so I was hesitant to use them. Shared memory, on the other hand, is basically a free-form medium, and I was pretty confident that it would be fast enough, so I started the implementation based on shared memory.[1] I'll walk you through the basics of my solution below.
Shared memory is what the name implies; it is a portion of address space that is shared between processes. Normally each process has its own address space, which is entirely separate from those of other processes. The operating system uses hardware to map a process's virtual address space onto physical memory. This ensures that no process can access memory in another process's address space; so no process can disturb the integrity of another. Shared memory is an exception to this system. When two processes attach the same shared memory segment, they are telling the operating system to relax the normal restrictions by mapping a portion of their address space onto the same memory.
The definitive book on Unix IPC is "UNIX Network Programming" by the late Richard Stevens [Stevens]. Anyone doing heavy Unix should be familiar with it. After consulting this authority, we see that getting access to shared memory is a two-step process. First, a shared memory identifier is obtained, using the shmget() system call. The segment is identified by an integer key, and has a set of permissions controlling which processes may read, write, or execute it, much like a Unix file. The IPC_CREAT and IPC_EXCL flags control creation of the segment if it doesn't exist. Once a segment is created, it is available for use by any process with appropriate permissions. After obtaining a segment identifier, a process must attach the segment into its address space using the shmat() call. The return from shmat() is the base address where the segment was attached, i.e., the address of the first byte located in the segment. The memory is now available for use just like any other memory in the process's address space. Here's an example:
#include <sys/shm.h> int main() { int id = shmget(0x1000 /*key*/, sizeof(int) /*size*/, 0600 | IPC_CREAT /*creation flags*/); void* base = shmat(id /*identifier*/, 0 /*base address*/, 0 /*flags*/); *reinterpret_cast<int*>(base) = 42; }
After the call to shmat() above, the process's address space might look as shown in Figure 1 below.
The low-level C interface of shmget()[2] and shmat(), with their untyped integer handles and bit flags are just what you get with Unix system calls. Now I have great respect for Unix and C, but sometimes I want an interface with a little more structure...err...I mean class. So we'll wrap up some of the error-prone details in a C++ interface. Our first step in building a set of C++ classes to handle shared memory will be to encapsulate a shared memory segment:
class shm_segment { public: enum CreationMode { NoCreate, CreateOk, MustCreate, CreateOwned }; shm_segment(int key, CreationMode mode, int size=0, int perm=0600); ~shm_segment(); void* get_base() const; unsigned long get_size() const; void destroy(); private: int key_; int id_; void* base_; unsigned long size_; bool owner_; }; class shm_create_exception: public std::exception { public: const char* what() const throw() { return "error creating shared memory segment"; } };
This class simply wraps creation and destruction of a shared memory segment. The CreationMode argument determines whether the segment will be created if it doesn't exist. This corresponds to the meaningful combinations of the IPC_CREAT and IPC_EXCL flags. The CreateOwned option sets our owner_flag, which we'll use to determine if the shared memory segment should be removed when the shm_segment object is destroyed. It might be possible to store a reference count and remove the segment only after all processes are finished with it, but it's probably best to have a clear ownership policy instead. The implementation of shm_segment is straight forward.
The shm_segment class makes it a little easier to get a segment, but once it is created, there is still no internal structure to the segment; it's just a raw chunk of memory. In order to add more structure, we will need a way to store pointers inside the segment. But this poses a problem. Since each process may attach the segment at a different base address, a pointer into the segment created by one process may not be valid in another process; it may point somewhere else entirely. To solve this problem, we will store offsets from the base instead of raw pointers. We'll create a simple pointer class for this purpose:
template<typename T> class shm_ptr { public: shm_ptr(); shm_ptr(T* rawptr, const shm_segment& seg); // compiler generated copy constructor and // assignment are ok T* get(const shm_segment& seg) const; void reset(T* rawptr, const shm_segment& seg); private: long offset_; }; class shm_ptr_out_of_bounds: public std::exception { public: const char* what() const throw() { return "shm_ptr cannot be created with address outside of segment"; } };
Notice that the only data member is an integer representing the offset in bytes from the base address. This offset will have the same meaning for all the processes using our shared memory segment, so it can safely be stored in shared memory. Notice also that our interface requires the user to pass the shm_segment when accessing the pointer. This is somewhat cumbersome, but it yields a general design where each process may attach multiple shared memory segments. Alternatively, shm_segment could be a singleton, which would make it easier for shm_ptr to provide the usual indirection operators, but I found this unnecessary.
Our shm_ptr gives us a safe way to store a pointer in shared memory, but we don't yet have a way to create that pointer. It would be possible to choose arbitrary locations for data and manually place it there with some pointer arithmetic, e.g.,
shm_ptr<char> buffer(reinterpret_cast<char*>(mysegment.getBase()), mysegment); shm_ptr<char> buffer2(reinterpret_cast<char*>(mysegment.getBase())+1000, mysegment);
but I think we can all agree that this is too ugly and inflexible. What we really need is a way to allocate objects in shared memory with the ease of ordinary operators new and delete. To accomplish this, we'll write a shared memory allocator class:
struct shm_allocator_header; class shm_allocator { public: explicit shm_allocator(const shm_segment& seg); void* alloc(size_t nbytes); void free(void* addr); void* get_root_object() const; void set_root_object(void* obj); const shm_segment& get_segment() const; private: const shm_segment& seg_; shm_allocator_header* header_; };
The interface provides alloc() and free() methods, as well as methods to get and set a "root object." The root object gives us a fixed access point for processes to share objects. There is also a get_segment() method to find out what shared memory segment an allocator is using.
When I was writing my first implementation, and I knew I needed an allocator, I remembered seeing one somewhere, and after flipping through a few pages, I found Chapter 4 in "Modern C++ Design," which describes small object allocations [Alexandrescu].[3]An example there gave me enough information to sketch out my initial implementation. The basic idea of an allocator is to store a control block in the memory preceding the address returned to the user. This example used:
struct MemControlBlock { bool available_; MemControlBlock* prev; MemControlBlock* next; }
Starting from a root block, you just walk down the list of blocks looking for one that's free and big enough for the requested allocation. The layout of memory looks as shown in Figure 2 above.
Incidentally, Alexandrescu mentions the many tradeoffs associated with memory allocators, and points us to Knuth's masterpiece for more details [Knuth]. For some variety, I decided to base our implementation on a section in "The C Programming Language" on implementing malloc() and free(); it is a little simpler to explain, and it gives me an opportunity to recognize a classic (and still relevant) book [KandR]. The idea for the allocator is the same, but the details have changed. Each block now stores its size and a pointer to the next block.
struct shm_allocator_block { shm_ptr<shm_allocator_block> next; size_t size; };
A header block stores a pointer to the beginning of a linked list of free blocks:
struct shm_allocator_header { shm_ptr<void> rootobj; pthread_mutex_t lock; shm_ptr<shm_allocator_block> freelist; };
The free list is a singly-linked circular list of blocks not currently in use. The list is linked in order of ascending addresses to make it easy to combine adjacent free blocks. Here is the implementation of alloc().
void* shm_allocator::alloc(size_t nbytes) { scoped_lock guard(&header_->lock); size_t nunits = (nbytes + sizeof(shm_allocator_block) - 1) / sizeof(shm_allocator_block) + 1; shm_allocator_block* prev = header_->freelist.get(seg_); shm_allocator_block* block = prev->next.get(seg_); do { if (block->size >= nunits) { if (block->size == nunits) prev->next = block->next; else { block->size -= nunits; block += block->size; block->size = nunits; } header_->freelist.reset(prev, seg_); return (void*)(block+1); } prev = block; block = block->next.get(seg_); } while(block != header_->freelist.get(seg_)); return NULL; }
The code above walks the free list looking for a block big enough to hold the requested number of bytes. If the block it finds is larger than needed, it splits it in two. Otherwise it returns it as is. If no block is found, it returns null. Size calculations are made in units equal to the size of one shm_allocator_block. This makes the pointer arithmetic a little simpler, and ensures that all memory allocated will have the same alignment as shm_allocator_block.
The implementation of free() is shown below:
void shm_allocator::free(void* addr) { scoped_lock guard(&header_->lock); shm_allocator_block* block = static_cast<shm_allocator_block*>(addr) - 1; shm_allocator_block* pos = header_->freelist.get(seg_); while (block > pos && block < pos->next.get(seg_)) { if (pos >= pos->next.get(seg_) && (block > pos || block < pos->next.get(seg_))) break; pos = pos->next.get(seg_); } //try to combine with upper block if (block + block->size == pos->next.get(seg_)) { block->size += pos->next.get(seg_)->size; block->next = pos->next.get(seg_)->next; } else block->next = pos->next; //try to combine with lower block if (pos + pos->size == block) { pos->size += block->size; pos->next = block->next; } else pos->next.reset(block, seg_); header_->freelist.reset(pos, seg_); }
The block is inserted into its correct spot in the free list (remember the list is sorted by address). Then we check for adjacent free blocks. If any are found, we combine them with the current block by just forgetting that the upper block exists and increasing the size of the lower block. So, after a few allocations and deallocations, memory might look as shown in Figure 3 below.
We'll also overload operators new and delete to work with shm_allocator. The basic details for doing this, and some pitfalls can be found in Item 36 of "Exceptional C++" [Sutter].
void* operator new(size_t s, shm_allocator& a) { return a.alloc(s); } void operator delete(void* p, shm_allocator& a) { a.free(p); }
We can overload new[] and delete[] similarly. The proper syntax for calling overloaded operator new or delete may be mysterious to you. Assuming we have already defined a class Message, we can create a new Message object in shared memory using:
Message* m = new (a) Message; // a is a shm_allocator
The above is a new expression which calls our overloaded operator new, followed by the Message constructor. Destroying the Message object is not quite as simple:
m->~Message(); operator delete(m, a);
We must first make an explicit call to the destructor and then deallocate the memory. Two steps are required because there is no way to form a delete expression which calls an overloaded operator delete. You'll notice that the lines above are equivalent to these:
m->~Message(); a.free(m);
So, one might be tempted to overload only operator new; however, it is very important that we overload operator delete as well, because if the Message constructor throws an exception our overloaded operator delete will be automatically called. If there was no operator delete to match the operator new used to allocate the memory, bad things would happen.
We now have a general purpose allocator that we can use to help us build arbitrary structures in shared memory. To demonstrate its use, we'll write our final class, a producer-consumer queue in shared memory:
template<typename T> struct shm_queue_header; template<typename T> class shm_queue { public: shm_queue(shm_allocator& a, size_t maxsize); ~shm_queue(); void push(shm_ptr<T> obj); shm_ptr<T> pop(); size_t size() const; private: const size_t MaxQueueSize; shm_queue_header<T>* header_; shm_allocator& a_; const shm_segment& seg_; // disallowed (not implemented) shm_queue(const shm_queue& copy); shm_queue& operator=(constshm_queue& rhs); };
This is a basic queue, with simple push() and pop() methods.[4]The queue is created with a shm_allocator, which it then uses to allocate nodes. A maximum size is also specified to ensure that the producer(s) don't get too far ahead of the consumer(s). The implementation is a singly-linked list.
template<typename T> struct shm_queue_node { shm_ptr<T> data; shm_ptr<shm_queue_node> next; }; template<typename T> struct shm_queue_header { size_t size; shm_ptr<shm_queue_node<T> > head; shm_ptr<shm_queue_node<T> > tail; pthread_mutex_t lock; pthread_cond_t ready_for_push; pthread_cond_t ready_for_pop; };
Now, those pthread_xxx members give me a chance to make my final recommendation: "Programming with POSIX Threads" by David Butenhof [Butenhof]. This is an excellent introduction to threaded programming, and specifically, Pthreads. "But wait a minute!" you say, "When did we introduce threads into the picture?" Well, the primitives needed to synchronize two threads (which inherently share memory) are very similar to those needed to synchronize processes using Unix shared memory. They are so similar, in fact, that Pthreads provides a way to use mutexes and condition variables in this scenario. If your system defines the _POSIX_THREAD_PROCESS_SHARED macro, you can place Pthread objects in shared memory if you set the pthread_process_shared attribute. This works on the Solaris system I tested on, but is not supported on my Linux system.[5]
For a little more demonstration of Pthreads, here's the implementation of our scoped_lock class:
#include <pthread.h> class scoped_lock { public: scoped_lock(pthread_mutex_t* mutex) { mutex_ = mutex; pthread_mutex_lock(mutex_); } ~scoped_lock() { pthread_mutex_unlock(mutex_); } void wait(pthread_cond_t* cond) { pthread_cond_wait(cond, mutex_); } private: pthread_mutex_t* mutex_; scoped_lock(const scoped_lock& copy); scoped_lock& operator=(const scoped_lock& rhs); };
And, using scoped_lock , the implementation of shm_queue::push():
template<typename T> void shm_queue<T>::push(shm_ptr<T> obj) { scoped_lock guard(&header_->lock); while (header_->size > MaxQueueSize) { guard.wait(&header_->ready_for_push); } shm_queue_node<T>* node = new (a_) shm_queue_node<T>; node->data = obj; node->next.reset(0, seg_); if (header_->head.get(seg_) != 0) { header_->head.get(seg_)->next.reset(node, seg_); } else { header_->tail.reset(node, seg_); } header_->head.reset(node, seg_); ++header_->size; pthread_cond_signal(&header_->ready_for_pop); }
Finally, we'll write a simple producer and consumer which use shm_queue:
struct Message { int sequence_num; shm_ptr<char> message; }; // main routine for producer int main(int argc, char*argv[]) { shm_segment seg(30000, shm_segment::CreateOwned, 1000000); shm_allocator a(seg); shm_queue<Message> q(a); int message_count = 0; while (true) { std::cout << "enter a message: " << std::flush; std::string line; std::getline(std::cin, line); Message* m = new (a) Message; m->sequence_num = message_count++; m->message.reset(static_cast<char*>(a.alloc(line.size()+1)), seg); strcpy(m->message.get(seg), line.c_str()); q.push(shm_ptr<Message>(m, seg)); } } // main routine for consumer int main(int argc, char*argv[]) { shm_segment seg(30000, shm_segment::NoCreate); shm_allocator a(seg); shm_queue<Message> q(a); while (true) { shm_ptr<Message> p = q.pop(); Message* m = p.get(seg); std::cout << "Message " << m->sequence_num << " " << m->message.get(seg) << "\n"; a.free(m->message.get(seg)); m->~Message(); operator delete(m, a); } }
There you have it: a way to access shared memory, allocate objects in it, and pass them on a queue. So keep up with your reading, and pay attention to the ACCU book reviews [BookRev]!
Special thanks to Satish Kalipatnapu and Thad Frogley for the valuable comments they provided on drafts of this article. Any errors that remain are mine alone.
[Stevens] W. Richard Stevens: UNIX Network Programming, Volume 2: Interprocess Communications, Prentice Hall, 1998, ISBN 0-130-81081-9.
[Alexandrescu] Andrei Alexandrescu: Modern C++ Design: Generic Programming and Design Patterns Applied, Addison-Wesley, 2000, ISBN 0-201-70431-5.
[KandR] Brian W. Kernighan and Dennis M. Ritchie: The C Programming Language, Prentice Hall, 1988, ISBN 0-131-10362-8.
[Sutter] Herb Sutter: Exceptional C++: 47 Engineering Puzzles, Programming Problems, and Solutions, Addison-Wesley, 2000, ISBN 0-201-61562-2.
[Butenhof] David R. Butenhof: Programming with POSIX Threads, Addison-Wesley, 1997, ISBN 0-201-63392-2.
[1] Actually, I also had some prior experience using shared memory, so this may just be a case of everything looking like a nail when all you have is a hammer.
[2] Some people insist that all Unix system calls can be pronounced as they are written, no matter how vowel-deficient they may be.
[3] Unfortunately, the small object allocator in Loki (the C++ library developed in [Alexandrescu]) appears to have a few unresolved implementation issues. If you choose to use it, you may learn more than you ever wanted about allocators. However, this does not detract from the value of "Modern C++ Design" itself.
[4] Item 10 in [Sutter] warns against returning objects in pop() methods. However, in this case, it is not necessary to provide separate top() and pop() methods; since we only store pointers the pop() operation cannot throw an exception while copying the return value. | https://accu.org/index.php/journals/376 | CC-MAIN-2020-29 | refinedweb | 3,162 | 53.1 |
Salesforce is a cloud-based Customer Relationship Management (CRM) platform. Its CRM applications are used for sales, service, marketing, and many more. As Salesforce is a cloud-based platform, it is easy to implement and manage it. The Salesforce applications don’t require IT, experts, to manage or even set up them. You can just simply log in and connect with your customers. Salesforce also has a vast and diverse community that helps companies with all the support required, starting from the development of an application to its customization.
It has three major advantages, and they are:
In Salesforce, we have two methods to build an application, and they are:
Classic Salesforce Application Architecture: It is a client application by which you can use it over mobiles. Salesforce uses wireless carrier networks which will connect or exchange data from users to Salesforce. All the data is stored in the mobile, and data sent to the mobile applications depend on the configuration of the mobile.
Lightning Application Architecture: It is a new approach to application development. Using Lightning, we can develop an application that supports any device like mobile, desktops, or Tabs. Lightning application is moreover a dynamic approach towards application development where a single page application will interact according to the user.
Due to its advantages, the Salesforce lightning application’s usage is more compared to Salesforce Classic. Slowly, all applications are moving from Salesforce Classic to Lightning application. As the trend has changed now, which is to have a more easy and interactive application approach, companies are giving more importance to modern and dynamic approaches of the CRM application, which are fulfilled by the lightning application. Using the Lightning application, you can get all the dynamism and easy-to-go approaches for your CRM application.
This Salesforce Tutorial will help you learn about the Lightning Component Architecture, the creation of Lightning application and Lightning Component, and many other topics related to the Lightning Experience.
Salesforce Lightning is a Component-based framework from Salesforce for design simplified processes for business users. The next big thing in Salesforce is the Lighting Component Architecture. Lightning Component Framework is one of the methods in Salesforce which is used to develop the user interface. It is used to create a dynamic web application for both mobiles and desktops.
Salesforce Lightning Architecture uses Single-page Application Architecture. Single Page Applications are web applications that use a single HTML page and update dynamically when the user interacts with the page. In this tutorial, we will follow a modern approach to build a single web application using Salesforce Lightning Component.
The below figure shows how the Lightning Component framework looks. We have three components here:
Salesforce Lightning Component is built over Aura Component, Using the Aura component enables to development of dynamic web pages with scalable lifecycle to support building apps designed for growth.
Aura Components are reusable components and represent reusable sections of the UI. Usage of Aura Components can range from a single line to a complete app. Salesforce Lightning Component is built over Aura Component. Aura Component enables to development of dynamic web pages with a scalable lifecycle to support apps designed for growth. You can find more about the Aura component from this link ithub.com/forcedotcom/aura.
Aura Component has a multi-tier partitioned component that connects the server and client. As it is developed over the Aura framework, it uses the responsive design - In this method, the lightning component uses the same codebase for all desktops, mobiles, and tablets but displays according to the screen. This tutorial will show how you can reuse the same codes for different applications’ development.
Salesforce lightning component architecture has the advantage of using components like HTML, CSS, JavaScript, and many other web-enabled codes, providing a sophisticated user interface. The lightning component is flexible, interactive, and helps to focus on the visualization of the business. The lightning component uses different technology at both server and client-side - JavaScript is used at the client-side and the apex is used at the server-side.
As the lightning in real life strikes fast and does not repeat its pattern, the Salesforce lightning design system also is quick in use and does not repeat the same design for different service users. It helps to provide customer support, product knowledge, different cases in a single environment.
Reasons to use the Lightning component are mentioned below.
Salesforce lightning design system is used in many ways, we will learn these different ways of Salesforce lightning design in this tutorial. This Salesforce developer guide will help in an organized way of developing the lightning component.
As we talk about where the Salesforce lightning component ecosystem can be used, we can get a long list. The lightning component can be deployed in the following.
But, the lighting components cannot be deployed or used on external sites.
The lightning component architecture consists of a lightning application, component, controller, helpers, and Apex controller. Lightning components are based on an open-source UI for web development. The below figure will help you understand.
The basic components of Lightning component architecture are explained below.
Controller Source
({ handleClick : function(cmp, event) { var attributeValue = cmp.get("v.text"); console.log("current text: " + attributeValue); var target = event.getSource(); cmp.set("v.text", target.get("v.label")); }})
Example: Controller.js
To create Apex Controller:
Before starting to build an application in Salesforce lightning Component creation, we need some setups. To develop the lightning application, you need an org setup which is as follows.
In this section, we will create your first Lightning Component. We can say it is a combination of Markup, JavaScript, and CSS. We will approach it stepwise.
Step 1: Open Your Console.
Go to your Salesforce domain and select Developer Console as marked in the below picture.
Link:
Step 2: Go to File-->New--> Lightning Component.
Step3: After clicking on the lightning component, give the name for your component like Myfirst_lightcomponent and submit the form, as shown below screenshot.
Naming The Lightning Component.
Step4: Once you submit, you will get two tabs opened, one is myfirst_lightcomponent component bundle and the other is myfirst_lightcomponent.cmp tab. Close the myfirst_lightcomponent tab and keep myfirst_lightcomponent.cmp opened, as shown below.
Step5: Now, you can edit with markup code, for example, you can write sample code shown below.
The below steps will help to create a new lighting application.
Step1: Go to file-->new--> Lightning application
Link:
Step2: Give the name for the application, like myfirst_lightapp, and Submit the form.
After the submission, a page will open with a new myfirst_lightapp bundle and myfirst_lightapp.app page. Close bundle page and continue with the. app page as shown in the below screenshot.
Application Page.
Step3: Now, with the heading, embed the component name in the file, as shown in the below code, and save the file. C tag in the code is the default namespace to access the lightning component.
Step4: Now, go to the lightning application component bundle and click it on preview, it will show the output of the lightning component. Below shows the output of the lightning component.
The output of the Lightning Component.
While creating the Lightning Component, we use Attributes and Expressions to make the component more dynamic. It is mostly used for holding a value and reusing it in the code. Thus, attributes and Expressions are essential aspects of the lightning component.
The attribute is like a variable where we store a value that can be used again and again in the code. Using attributes in lighting components will increase the dynamics of the code. Attributes can be added using tag
The attribute has rules for naming, and they are:
Expression: This field is actually used for calculation, using property values and other information in the markup. It is used for giving output dynamically or getting values in the component dynamically.
Expressions can be any literal values, sub-expressions, variables, and operators. The expression can only be declared inside the attribute as a field, which can be declared by tag {!v.whom}. The expression can be only case sensitive and should be maintained like that.
We have different types of attributes, and they are.
Attribute use in the Lightning component is similar to any markup language, here we define attribute under the aura component tag. You can see the attribute declaration below.
Here, we have many components which declare the Attribute, and they are:
Name: Which is the name given to an attribute.
Type: Its data type to which it belongs like string or number.
Default: If left null what value should be filled to the attribute by default choice.
Description: The Description of the attribute for which it is used.
Access: It determines whether it is a Global attribute or a Private attribute.
Required: Whether the attribute is mandatory or not.
Let us see an example code for the attribute Declaration.
Name of Unknown is {!v.FirstName}
{!v.FirstName} is {!v.Age} Years Old.
{!v.FirstName} is male? = {!v.isMale}
Now, open your Developer Console and type the above code in lightning Component, and save it by any name like my_first_attribute. Now, create a lightning application and embed your component in the lighting application.
After this, go to Developer Console and press preview, and you will be able to see your output as shown below. In the above code, we have used three attributes, name, age, and Gender.
The output of the attribute
As you observe in the code, we have used the tag {!v. Name}, this tag is the example for Expression. As we enter the value of the attribute dynamically, the Tag{!v. Name} is the function of expression. There are different types of Expressions used in the lightning component, and they are:
In the above code, we have used both value provider expression and Dynamic Output expression. The expression used with the tag is a Dynamic output expression and using it in the next tag {!v.name } is of age {!.age} makes it value provider expression.
Actions are any function or working of code that are handled by the controller on the specific operation. Like in the web browser when we click on a new tab and a new tab for browsing is created, likewise, we can have an action button in the lightning component which will perform such operations on click go.
Action: Actions are operations or any function which performs any specific task in the application.
The above-mentioned tag is a click tag which upon clicking it will perform a specific action.
Event: Any action like notification given to a user on performing an action is called an event. These events are controlled by a client-side controller which is basically written in JavaScript.
The below examples show the action created with the event in a controller.
({ myAction : function(cmp, event, helper) { // add code for the action }, anotherAction : function(cmp, event, helper) { // add code for the action } })
Each function above contains three components,
Now, to handle the action, we need the controller from the client-side. The controller is a part of the component bundle. To create a Controller, click the controller option in a component bundle. The component bundle is shown below.
Component Bundle
Once clicked on the tab, it will create the page, as shown below.
Once the controller page is shown we write down the below code, which will create the two actions.
{!v.text}
Observer the below example to create two actions with a button. Clicking on these buttons will update the text component of the attribute with specified values. Here, we have used the lightning button which is a standard lightning component.
The lightning framework has its own event system called DOM (Documented Object Model) event. Events that are mapped to lightning events as HTML tags are mapped to the lightning component. All the browsers can be mapped with Onkeys and Onclick which are then mapped to the lightning component. After that, the click button is mapped to the lightning button component to handle click action.
The below-mentioned code is for the client-side controller source which will help us to use onclick button created in the above code in the application.
{!v.text}
handle click function in the above is the action which uses event getSource(). getSource() is a built-in event of the lightning component which helps to determine from which component an event is fired. The lightning button in markup is the source component. When the button is pressed, the text is set to attribute value for the label which is declared as v.label. To execute this again, go to the previewer and click the button, and it will do the work.
Now, let us continue with the form building and inputting the data. For this topic, we will consider building a form for expense calculation. This form will create new expenses. For this, we will proceed with the steps as followed.
Step1: Pull Salesforce lightning design system (SLDS), and activate this in Lightning Component. We will get SLDS automatically inside. This SLDS is not available in the stand-alone app.
Now create a lightning application named myexpenseapp. app with the following mark-up.
extends=“Force:slds” will activate SLDS in the application, C: expenses will embed components in the application.
Step2: Now, create a component with the name expense and write down below markup
The above code is to create a grid layout. When you preview it, you will get the output as below.
As you can see it is blank, now we have to add the form fields.
Step3: Now, you have to add fields to the empty form. In the above code, we have commented on line number 18. Now replace that line with the below code. Now, without SLDS and Classes, it's just a
series of input fields.
Step4: Now, go to the Lightning application component bundle and click the preview tab. You get output as below.
Application Component Bundle
Till now, we were playing at the client-side, and we have not connected to the server yet. An event is wired to client-side controller action which can call server-side controller action. We have methods to connect to Salesforce, and you can see them below.
Apex Server-Side Controller Overview
The server-side controller is created in the apex and uses @auraenabled annotation for accessing the controller method. Only annotated methods are exposed and are bounded to apex limits. Apex controller has this server Echo action which adds a string to the value passed in.
Aura-enabled Annotation helps the lightning components to connect to the apex methods and properties. Aura-enabled annotation is used for two separate and distinct purposes. Use @AuraEnabled class static methods of Apex to Access remote controllers in the lightning component. Use @AuraEnabled instance methods and properties of Apex to serialize as data received from the server-side.
//Use @AuraEnabled to enable client- and server-side access to the method @AuraEnabled public static String serverEcho(String firstName) { return ('Hello from the server, ' + firstName); } }
Lightning application component bundle
Step1: Open your Developer Console.
Step2: Go to file-->New-->Apex Class
Apex Server-Side Controller
Step3: Enter a name to the controller.
Step4: Now, enter the method for each server-side controller. Add aura enabled so that methods get exposed.
Step5: Now, save the apex controller, after that open the component with which you want to wire the apex controller with. Now, add the controller to the component with the below tag.
Returning an Apex Server-Side Controller
After creating the server-side controller, we need to retrieve data from the server-side to the client-side. This can be done by using the return keyword in the component. The result must be serialized and should be in JSON format.
The return data type can be:
Returning Apex Object.
Example code to return the object:
public with sharing class SimpleAccountController { @AuraEnabled public static List getAccounts() { // Perform isAccessible() check here // SimpleAccount is a simple "wrapper" Apex class for transport List simpleAccounts = new List(); List accounts = [SELECT Id, Name, Phone FROM Account LIMIT 5]; for (Account acct : accounts) { simpleAccounts.add(new SimpleAccount(acct.Id, acct.Name, acct.Phone)); } return simpleAccounts; } }
Events: An event is a notification of an action that is performed in the application. Browser events are controlled by a client-side controller which uses JavaScript.
When an action is performed, the following things happen.
A client-side controller handles the event in a component. It is JavaScript that controls all functions to happen.
Now, follow the steps to create an event attached to a Component.
Step1: Create an event. Go to developer console and in a component bundle, click on Controller, and write down below code.
({ myAction : function(cmp, event, helper) { // add code for the action }, anotherAction : function(cmp, event, helper) { // add code for the action } })
Step2: Save the controller and now open the new component into which our event will be embedded.
Step3: Add this below code to attach our event to the component. Here, in the code, marked data, where we have used tag {!c.handleclick}, will wire our event into the component through a function.
{!v.text}
This will connect your event to the component.
In lightning application, we have a type of event called a component event. This event is fired from the instance of a component. This makes both events and actions fired from the same component.
We have another method where we can have our event handled by the component, and that is the Component event. Component Event is one such type of event which is defined inside the component itself which fires it. Both defining of event and firing of event happens in the same component. Below are the steps to create the Component Event.
Step1: Create a lightning component in the Developer Console.
Step2: Use type=” Component” tag for the component event, Use the below code.
Get trained and certified from Salesforce Online Course now! | https://mindmajix.com/salesforce-lightning-tutorial | CC-MAIN-2022-40 | refinedweb | 3,003 | 56.66 |
Pattern matching and guards for Python functions
Project description
function-pattern-matching (fpm for short) is a module which introduces Erlang-style multiple clause defined functions and guard sequences to Python.
This module is both Python 2 and 3 compatible.
Table of contents
Introduction
Two families of decorators are introduced:
- case: allows multiple function clause definitions and dispatches to correct one. Dispatch happens on the values of call arguments or, more generally, when call arguments’ values match specified guard definitions.
- dispatch: convenience decorator for dispatching on argument types. Equivalent to using case and guard with type checking.
- guard: allows arguments’ values filtering and raises GuardError when argument value does not pass through argument guard.
Usage example:
- All Python versions:
import function_pattern_matching as fpm @fpm.case def factorial(n=0): return 1 @fpm.case @fpm.guard(fpm.is_int & fpm.gt(0)) def factorial(n): return n * factorial(n - 1)
- Python 3 only:
import function_pattern_matching as fpm @fpm.case def factorial(n=0): return 1 @fpm.case @fpm.guard def factorial(n: fpm.is_int & fpm.gt(0)): # Guards specified as annotations return n * factorial(n - 1)
Of course that’s a poor implementation of factorial, but illustrates the idea in a simple way.
Note: This module does not aim to be used on production scale or in a large sensitive application (but I’d be happy if someone decided to use it in his/her project). I think of it more as a fun project which shows how flexible Python can be (and as a good training for myself).
I’m aware that it’s somewhat against duck typing and EAFP (easier to ask for forgiveness than for permission) philosophy employed by the language, but obviously there are some cases when preliminary checks are useful and make code (and life) much simpler.
Installation
function-pattern-matching can be installed with pip:
$ pip install function-pattern-matching
Module will be available as function_pattern_matching. It is recommended to import as fpm.
Usage
Guards
With guard decorator it is possible to filter function arguments upon call. When argument value does not pass through specified guard, then GuardError is raised.
When global setting strict_guard_definitions is set True (the default value), then only GuardFunc instances can be used in guard definitions. If it’s set to False, then any callable is allowed, but it is not recommended, as guard behaviour may be unexpected (RuntimeWarning is emitted), e.g. combining regular callables will not work.
GuardFunc objects can be negated with ~ and combined together with &, | and ^ logical operators. Note however, that xor isn’t very useful here.
Note: It is not possible to put guards on varying arguments (*args, **kwargs).
List of provided guard functions
Every following function returns/is a callable which takes only one parameter - the call argument that is to be checked.
- _ - Catch-all. Returns True for any input. Actually, this can take any number of arguments.
- eq(val) - checks if input is equal to val
- ne(val) - checks if input is not equal to val
- lt(val) - checks if input is less than val
- le(val) - checks if input is less or equal to val
- gt(val) - checks if input is greater than val
- ge(val) - checks if input is greater or equal to val
- Is(val) - checks if input is val (uses is operator)
- Isnot(val) - checks if input is not val (uses is not operator)
- isoftype(_type) - checks if input is instance of _type (uses isintance function)
- isiterable - checks if input is iterable
- eTrue - checks if input evaluates to True (converts input to bool)
- eFalse - checks if input evaluates to False (converts input to bool)
- In(val) - checks if input is in val (uses in operator)
- notIn(val) - checks if input is not in val (uses not in operator)
Custom guards
Although it is not advised (at least for simple checks), you can create your own guards:
- by using makeguard decorator on your test function.
- by writing a function that returns a GuardFunc object initialised with a test function.
Note that a test function must have only one positional argument.
Examples:
# use decorator @fpm.makeguard def is_not_zero_nor_None(inp): return inp != 0 and inp is not None # return GuardFunc object def is_not_val_nor_specified_thing(val, thing): return GuardFunc(lambda inp: inp != val and inp is not thing) # equivalent to (fpm.ne(0) & fpm.Isnot(None)) | (fpm.ne(1) & fpm.Isnot(some_object)) @fpm.guard(is_not_zero_nor_None | is_not_val_nor_specified_thing(1, some_object)) def guarded(argument): pass
The above two are very similar, but the second one allows creating function which takes multiple arguments to construct actual guard.
Note: It is not recommended to create your own guard functions. In most cases combinations of the ones shipped with fpm should be all you need.
Define guards for function arguments
There are two ways of defining guards:
As decorator arguments
positionally: guards order will match decoratee’s (the function that is to be decorated) arguments order.
@fpm.guard(fpm.isoftype(int) & fpm.ge(0), fpm.isiterable) def func(number, iterable): pass
as keyword arguments: e.g. guard under name a will guard decoratee’s argument named a.
@fpm.guard( number = fpm.isoftype(int) & fpm.ge(0), iterable = fpm.isiterable ) def func(number, iterable): pass
As annotations (Python 3 only)
@fpm.guard def func( number: fpm.isoftype(int) & fpm.ge(0), iterable: fpm.isiterable ): # this is NOT an emoticon pass
If you try to declare guards using both methods at once, then annotations get ignored and are left untouched.
Relguards
Relguard is a kind of guard that checks relations between arguments (and/or external variables). fpm implements them as functions (wrapped in RelGuard object) whose arguments are a subset of decoratee’s arguments (no arguments is fine too).
Define relguard
There are a few ways of defining a relguard.
Using guard with the first (and only) positional non-keyword argument of type RelGuard:
@fpm.guard( fpm.relguard(lambda a, c: a == c), # converts lambda to RelGuard object in-place a = fpm.isoftype(int) & fpm.eTrue, b = fpm.Isnot(None) ) def func(a, b, c): pass
Using guard with the return annotation holding a RelGuard object (Python 3 only):
@fpm.guard def func(a, b, c) -> fpm.relguard(lambda a, b, c: a != b and b < c): pass
Using rguard with a regular callable as the first (and only) positional non-keyword argument.
@fpm.rguard( lambda a, c: a == c, # rguard will try converting this to RelGuard object a = fpm.isoftype(int) & fpm.eTrue, b = fpm.Isnot(None) ) def func(a, b, c): pass
Using raguard with a regular callable as the return annotation.
@fpm.raguard def func(a, b, c) -> lambda a, b, c: a != b and b < c: # raguard will try converting lambda to RelGuard object pass
As you can see, when using guard you have to manually convert functions to RelGuard objects with relguard method. By using rguard or raguard decorators you don’t need to do it by yourself, and you get a bit cleaner definition.
Multiple function clauses
With case decorator you are able to define multiple clauses of the same function.
When such a function is called with some arguments, then the first matching clause will be executed. Matching clause will be the one that didn’t raise a GuardError when called with given arguments.
Note: using case or dispatch (discussed later) disables default functionality of default argument values. Functions with varying arguments (*args, **kwargs) and keyword-only arguments (py3-only) are not supported.
Example:
@fpm.case def func(a=0): print("zero!") @fpm.case def func(a=1): print("one!") @fpm.case @fpm.guard(fpm.gt(9000)) def func(a): print("IT'S OVER 9000!!!") @fpm.case def func(a): print("some var:", a) # catch-all clause >>> func(0) 'zero!' >>> func(1) 'one!' >>> func(9000.1) "IT'S OVER 9000!!!" >>> func(1337) 'some var: 1337'
If no clause matches, then MatchError is raised. The example shown above has a catch-all clause, so MatchError will never occur.
Different arities (argument count) are allowed and are dispatched separetely.
Example:
@fpm.case def func(a=1, b=1, c): return 1 @fpm.case def func(a, b, c): return 2 @fpm.case def func(a=1, b=1, c, d): return 3 @fpm.case def func(a, b, c, d): return 4 >>> func(1, 1, 'any') 1 >>> func(1, 0, 0.5) 2 >>> func(1, 1, '', '') 3 >>> func(1, 0, 0, '') 4
As you can see, clause order matters only for same-arity clauses. 4-arg catch-all does not affect any 3-arg definition.
Define multi-claused functions
There are three ways of defining a pattern for a function clause:
Specify exact values as decorator arguments (positional and/or keyword)
@fpm.case(1, 2, 3) def func(a, b, c): pass @fpm.case(1, fpm._, 0) def func(a, b, c): pass @fpm.case(b=10) def func(a, b, c): pass
Specify exact values as default arguments
@fpm.case def func(a=0): pass @fpm.case def func(a=10): pass @fpm.case def func(a=fpm._, b=3): pass
Specify guards for clause to match
@fpm.case @fpm.guard(fpm.eq(0) & ~fpm.isoftype(float)) def func(a): pass @fpm.case @fpm.guard(fpm.gt(0)) def func(a): pass @fpm.case @fpm.guard(fpm.Is(None)) def func(a): pass
dispatch decorator
dispatch decorator is similar to case, but it lets you to define argument types to match against. You can specify types either as decorator arguments or default values (or as guards, of course, but it makes using dispatch pointless).
Example:
@fpm.dispatch(int, int) def func(a, b): print("integers") @fpm.dispatch def func(a=float, b=float): print("floats") >>> func(1, 1) 'integers' >>> func(1.0, 1.0) 'floats'
Examples (the useful ones)
Still working on this section!
Ensure that an argument is a list of strings. Prevent feeding string accidentally, which can cause some headache, since both are iterables.
Option 1: do not allow strings
# thanks to creshal from HN for suggestion lookup = { "foo": 1, "bar": 2, "baz": 3 } @fpm.guard def getSetFromDict( dict_, # let it throw TypeError if not a dict. Will be more descriptive than a GuardError. keys: ~fpm.isoftype(str) ): "Returns a subset of elements of dict_" ret_set = set() for key in keys: try: ret_set.add(dict_[key]) except KeyError: pass return ret_set getSetFromDict(lookup, ['foo', 'baz', 'not-in-lookup']) # will return two-element set getSetFromDict(lookup, 'foo') # raises GuardError, but would return empty set without guard!
Similar solutions
- singledispatch from functools
- pyfpm
- patmatch
-
- (by Guido van Rossum, BDFL)
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/function-pattern-matching/ | CC-MAIN-2019-04 | refinedweb | 1,781 | 56.86 |
I'm trying to understand a python code, a specific line of the code has troubled me a bit:
mean = np.average(data[:,index])
data
[:,index]
data = np.genfromtxt(args.inputfile)
def doBlocking(data,index):
ndata = data.shape[0]
ncols = data.shape[1]-1
#things unimportant
mean = np.average(data[:,index])
#more unimportance
In this case
data is a two dimensional
numpy.array.
Numpy supports slicing similar to that of
Matlab
In [1]: import numpy as np In [2]: data = np.arange(15) In [3]: data Out[3]: array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]) In [4]: data = data.reshape([5,3]) In [5]: data Out[5]: array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [12, 13, 14]]) In [6]: data[:, 1] Out[6]: array([ 1, 4, 7, 10, 13])
As you can see it selects the second column
Your code above will get the mean of column
index. It basically says "Compute the mean for data in every line, and column
index" | https://codedump.io/share/TiezPN9kMwtV/1/npaverage-format-option | CC-MAIN-2017-09 | refinedweb | 177 | 73.07 |
Natural Language Processing on iOS with Turi Create
In this Natural Language Processing tutorial, you’ll learn how to train a Core ML model from scratch, then use that model within an iOS Application
Version
- Swift 4, iOS 11, Xcode 9
Natural Language Processing, or NLP, is the discipline of taking unstructured text and discerning some characteristics about it. To help you do this, Apple’s operating systems provide functions to understand text using the following techniques:
- Language identification
- Lemmatization (identification of the root form of a word)
- Named entity recognition (proper names of people, places and organizations)
- Parts of speech identification
- Tokenization
In this tutorial, you’ll use tokenization and a custom machine learning model, or ML model, to identify the author of a given poem, or at least the poet the poem most closely emulates.
Note: This tutorial assumes you’re already familiar with the basics of iOS development and Swift. If you’re new to either of these topics, check out our iOS Development and Swift Language tutorials.
Getting Started
You may have already run into NLP in apps where sections of text are automatically turned into links or tags, or when text is automatically analyzed for emotional charge (called “sentiment” in the biz). NLP is widely used by Apple for data detectors, keyboard auto-suggest and Siri suggestions. Apps with search capabilities often use NLP to find related information and to efficiently transform input text into a canonical and, therefore indexable, form.
The app you’ll be building for this tutorial will take a poem and compare its text against a Core ML model that’s trained with the words from poems of famous, i.e. public-domain, authors. To build the Core ML model, you’ll be using Turi Create, an open-source Python project from Apple that creates and trains ML models.
Turi Create won’t make you a machine learning expert because it hides almost all the internal workings and mathematics involved. On the plus side, it means you don’t have to be a machine learning expert to use machine learning algorithms in your app!
App Overview
The first thing you need to do is download the starter app. You can find the Download Materials link at the top or bottom of this tutorial.
Inside the downloaded materials, you’ll find two project folders, a JSON file, and a Core ML model. Don’t worry about the JSON and ML model files; you’ll use those a bit later. Open the KeatsOrYeats-starter folder and fire up the KeatsOrYeats.xcodeproj inside.
Once running, copy and paste your favorite Yeats poem. Here is an example, .
Press Return to run the analysis. At the top, you’ll see the app’s results, indicating “P. Laureate” wrote the poem! This prediction has a 50% confidence and a 10% confidence the poem matches the works of “P. Inglorious”.
This is obviously not correct, but that’s because the results are hard-coded and there’s no actual analysis.
Download Every Known Poem
Sometimes it’s useful to start developing an app using a simple, brute-force approach. The first-order solution to author identification is to get a copy of every known poem, or at least known poems by a set list of poets. That way, the app can do a simple string compare and see if the poem matches any of the authors. As Robert Burns once said, “Easy peasy.”
Nice try, but there are two major problems. First, poems (especially older ones) don’t always have canonical formatting (like line breaks, spacing and punctuation) so it’s hard to do a blind string compare. Second, your full-featured app should identify which author the entered poem most resembles, even if that isn’t a poem known to the app or not a work by that author.
There’s got to be a better way… And there is! Machine learning lets you create models of text you can then use to classify never-before-seen text into a known category.
Intro to Machine Learning: Text-Style
There are many different algorithms covered under the umbrella of machine learning. This CGPGrey video gives an excellent layman’s introduction.
The main takeaway is the resulting model is a mathematical black box that takes input text, transforms that input, and the result will be a decision or, in this a case, a probability that the text matches a given author. Inside that box is a series of weighted values that compute that probability. These weights are “discovered” (refined) over a series of epochs where the weights are adjusted to reduce the overall error.
The simplest model is a linear regression, which fits a line to a series of points. You may be familiar with the old equation y = mx + b. In this case, you have a series of known x & y’s, and the training of the model is to figure out the “m” (the weights) and “b”.
In a standard training scenario, there will be a guess for m & b, an error computed and, then over successive epochs, those get nudged closer and closer to find a value that minimizes the error. When presented with a never-before-seen “x”, the model can predict what the “y” value will be. Here is an in-depth article on how it works with Turi Create.
Of course, real-world models are far more complicated and take into account many different input variables.
Bag of Words
Machine learning inspects and analyzes an input’s features. Features in this context are the important or salient values about the input or, mathematically speaking, the independent variables in the computation. From the download materials, go ahead and open corpus.json, which will be the input file for training the model. Inside, you’ll see an array of JSON objects. Take a look at the first item:
{ "title": "When You Are Old", "author": "William Butler Yeats", "text": "When you are old and grey and full of sleep,\nAnd nodding by the fire, take down this book,\nAnd slowly read, and dream of the soft look\nYour eyes had once, and of their shadows deep;\nHow many loved your moments of glad grace,\nAnd loved your beauty with love false or true,\nBut one man loved the pilgrim Soul in you,\nAnd loved the sorrows of your changing face;\nAnd bending down beside the glowing bars,\nMurmur, a little sadly, how Love fled\nAnd paced upon the mountains overhead\nAnd hid his face amid a crowd of stars." }
In this case, a single “input” has three columns:
title,
author and
text. The text column will be the only feature for the model, and title is not taken into account. The author is the class the model is tasked with computing, which is sometimes called the label or dependent variable.
If the whole text is used as the input, then the model basically becomes the naïve straight-up comparison discussed above. Instead, specific aspects of the text have to be fed into the model. The default way of handling text is as a bag of words, or BOW. Imagine breaking up all the text into its individual words and throwing them into a bag so they lose their context, ordering and sentence structure. This way, the only dimension that’s retained is the frequency of the collection of words.
In other words, the BOW is a map of words to word counts.
For this tutorial, each poem gets transformed into a BOW, with the assumption that one author will use similar words across different poems, and that other authors will tend toward different word choices.
Each word then becomes a dimension for optimizing the model. In this paltry example of 518 poems, there are 24,939 different words used.
The Logistic Classifier
Turi Create will make a logistic classifier for this type of analysis, which actually works a little differently than a linear regression.
To oversimplify a bit: instead of interpolating a single value, a logistic classifier will compute a probability (from 0 to 1) for each class by multiplying how much each word contributes to that class by the number of times that word appears, ultimately adding all of that up across all of the words.
Take the first line of that first Yeats poem: “When you are old and grey and full of sleep”
And the first line of the first Keats poem: “Happy is England! I could be content”
If these two lines were the total input, each of these words contribute wholly to their author. This is because there are no overlapping words. If the Keats line was, instead, “Happy are England”, then the word “are” would contribute 50/50 for each author.
Word Keats Yeats ------------------- And 0 1 Are 0 1 Be 1 0 Could 1 0 Content 1 0 England 1 0 Grey 0 1 Happy 1 0 I 1 0 Is 1 0 Full 0 1 Of 0 1 Old 0 1 Sleep 0 1 When 0 1 You 0 1
Now if you take the poem you saw earlier, “On Being Asked for a War Poem”, as the input, only one word — I — appears in the training list, so the model would predict that Keats wrote the poem at 100% and that Yeats wrote the poem at 0%.
Hopefully this illustrates why a large data set is required to accurately train models!
Using Turi Create
Core ML is iOS’s machine learning engine, supporting multiple types of models based on different machine learning SDKs like scikit and keras. Apple’s open-source library, Turi Create, reduces the overhead in learning how to use these libraries, and handles choosing the best type of model for a given task. This is done either by having a pre-chosen model type for the activity or by running several models against each other to see which performs best.
Turi Create is app-specific, rather than model-specific. This means you specify the type of problem you want to solve, rather than choosing the type of model you want to use. This way, it can choose the right model for the job.
Like most machine learning tools, the ones that are compatible with Core ML are written in Python. To get started, very little understanding of Python is necessary. Having said that, knowing Python is useful if you want to expand how you train models or customize the input data, or if you run into trouble.
Setting Up Python
The following instructions assume you already have Python installed, which is likely if you have a Mac running the latest Xcode.
Run the following command in Terminal to check if you have Python installed already:
python -V
If Python is installed, you’ll see its version number. If it isn’t, you’ll need to follow these instructions to download and install Python.
You’ll also need pip installed on your machine, which comes with the Python installation. Run the following command to make sure it’s installed:
which pip
If the result isn’t for a folder ending in
/bin/pip, you’ll need to install it from.
Finally, it’s suggested to use virutalenv to install Turi Create. This isn’t generally part of the default Mac setup, but it can be installed from the Terminal by using:
pip install virtualenv
If you get any permission errors, preface the command with the sudo command.
sudo pip install virtualenv
If you get any SSL errors, you’ll need to add the
--trusted-host command line option.
pip install --trusted-host pypi.python.org virtualenv
Virtualenv is a tool for creating virtual Python environments. This means you can install a series of tools and libraries in isolation in a named environment. With virtual environments, you can build and run an app with a known set of dependencies, and then go and create a separate environment for a new app that has a different set of tools, possibly with versions that would otherwise conflict with the first environment.
From an iOS perspective, think of it as being able to have an environment with Xcode 8.2, Cocoapods 1.0 and Fastlane 2.4 to build one app, and then be able to launch another environment with Xcode 9.1, Cocoapods 1.2 and Fastlane 2.7 to build another app, without those two conflicting. This is just one more reminder of the sophistication of open-source developer tools with large communities.
Installing Turi Create
With Python in hand, for the first step, you’ll create a new virtual environment in which to install Turi Create.
Open a Terminal window, and
cd into the directory where you downloaded this tutorial’s materials. For reference, corpus.json should be in the current folder before continuing.
From there, enter the following command:
virtualenv venv
This creates a new virtual environment named
venv in your project directory.
When you have completed that, activate the environment:
source venv/bin/activate
When there is an active environment, you’ll see a
(venv) prepended to the terminal prompt. If you need to get out of the virtual environment, run the
deactivate command.
Finally, make sure the environment is still activated and install Turi Create:
pip install -U turicreate
If you have any issues with installation, you can run a more explicit install command:
python2.7 -m pip install turicreate
This installs the latest version of the Turi Create library, along with all its dependencies. Now it’s time to actually start using Python!
Using Turi Create to train a model
First, in a new Terminal window with the virtual environment active and launch Python in the same directory as your corpus.json file:
python
You can also use a more interactive environment like iPython, which provides better history and tab-completion features, but that’s outside the scope of this tutorial.
Next, run the following command:
import turicreate as tc
This will import the Turi Create module and make it accessible from the symbol
tc.
Next, load the JSON data:
data = tc.SFrame.read_json('corpus.json', orient='records')
This will load the data from the JSON file into a
SFrame, which is the data container for Turi Create. Its data is organized in columns like a spreadsheet and has powerful functions for manipulation. This is important for massaging data to get the best input for training a model. It’s also optimized for loading from disk storage, which is important for large data sets that can easily overwhelm RAM.
Type in
data to see what you pulled out. The generated output shows the size and data types contained within, as well as the first few rows of data.
<bound method SFrame.explore of Columns: author str text str title str Rows: 518 Data: +----------------------+-------------------------------+ | author | text | +----------------------+-------------------------------+ | William Butler Yeats | When you are old and grey ... | | William Butler Yeats | Had I the heavens' embroid... | | William Butler Yeats | Were you but lying cold an... | | William Butler Yeats | Wine comes in at the mouth... | | William Butler Yeats | That crazed girl improvisi... | | William Butler Yeats | Turning and turning in the... | | William Butler Yeats | I made my song a coat\nCov... | | William Butler Yeats | I will arise and go now, a... | | William Butler Yeats | I think it better that in ... | | John Keats | Happy is England! I could ... | +----------------------+-------------------------------+ +-------------------------------+ | title | +-------------------------------+ | When You Are Old | | He Wishes For The Cloths O... | | He Wishes His Beloved Were... | | A Drinking Song | | A Crazed Girl | | The Second Coming | | A coat | | The Lake Isle Of Innisfree | | On being asked for a War Poem | | Happy Is England! I Could ... | +-------------------------------+ [518 rows x 3 columns] Note: Only the head of the SFrame is printed. You can use print_rows(num_rows=m, num_columns=n) to print more rows and columns.>
Now that you have the data, for the next step, you’ll create a model by running:
model = tc.text_classifier.create(data, 'author', features=['text'])
This creates a text classifier given the loaded data, specifying the
author to be the class labels, and the
text column to be the input variable. To build a more accurate classifier, you can compute and then provide additional features such as meter, line length and rhyme scheme.
This command creates the model and trains it on
data. It will reserve about 5% of the rows as a validation set. This means that 95% of the data is for training, and then the remaining data will be used to test the accuracy of the trained model.
Due to the poor quality of the training data (that is, there are a large number of words for a only a handful of examples per author), if the training fails or gets terminated before the maximum 10 iterations are complete, just re-run the command. The training is not deterministic, so trying again might lead to a different result, depending on the starting values for the coefficients.
Finally, run this command to export the model in the Core ML format:
model.export_coreml('Poets.mlmodel')
Voilà! With four lines of Python, you’ve built and trained an ML model ready to use from an iOS app.
Using Core ML
Now that you have a Core ML model, for the next step, you’ll use it in the app.
Import the Model
Core ML lets you use a pre-trained model in your app to make predictions or perform classifications on user input. To use the model, drag the generated Poets.mlmodel into the project navigator. If you skipped the model-generation section of this tutorial, or had trouble creating the model, you can use the one included at the root of the project zip (Download Materials link at top or bottom of the tutorial).
Xcode automatically parses the model file and shows you the important information in the editor panel.
The first section, Machine Learning Model, tells you about the model’s metadata, which Turi Create automatically created for you when generating the model.
The most important line here is the Type. This tells you what kind of model it is. In this case it’s a Pipeline Classifier. A classifier means that it takes the input and tries to assign a label to it. In this case, that is an “author best match”. The pipeline part means that the model is a series of mathematical transforms used on the input data to calculate the class probabilities.
The next section, Model Class shows the generated Swift class to be used inside the app. This class is the code wrapper to the model, and it’s covered in the next step of the tutorial.
The third section, Model Evaluation Parameters describes the inputs and outputs of the model.
Here, there is one input,
text, which is a dictionary of string keys (individual words) to double values (the number of times that word appears in the input poem).
There are also two outputs. The first,
author, is the most likely match for the poem’s author. The other output,
authorProbability, is the percent confidence of a match for each known author.
You’ll see that, for some inputs, even though there is only one “best match”, that match itself might have a very small probability, or there might be two or three matches that are all reasonably close.
Now, click on the arrow next to Poets in the Model Class section. This will open Poets.swift, an automatically generated Swift file. This contains a series of classes that form a convenience wrapper for accessing the model. In particular, it has a simple initializer, a
prediction(text:) function that does the actual evaluation by the model, and two classes that wrap the input and output so that you can use standard Swift values in the calling code, instead of worrying about the Core ML data types.
NSLinguisticTagger
Before you can use the model, you need the input text, which is from a free-form text box, which you’ll need to convert to something that’s compatible with
PoetsInput. Even though Turi Create handles creating the BOW (Bag of Words) from the
SFrame training input, Core ML does not yet have that capability built in. That means you need to transform the text into a dictionary of word counts manually.
You could write a function that takes the input text, splits it at the spaces, trims punctuation and then counts the remainder. Or, even better, use a context-aware text processing API:
NSLinguisticTagger.
NSLinguisticTagger is the Cocoa SDK for processing natural language. As of iOS 11, its functionality is backed by its own Core ML model, which is much more complicated than the one shown here.
It’s hard making sure a character-parsing algorithm is smart enough to work around all the edge cases in a language — apostrophe and hyphen punctuation, for example. Even though this app just covers poets from America and the United Kingdom writing in English, there’s no reason the model couldn’t also have poems written in other languages. Introducing parsing for multiple languages, especially non-Roman character languages, can get very difficult very quickly. Fortunately, you can leverage
NSLinguisticTagger to simplify this.
In PoemViewController.swift add the following helper function to the
private extension:
func wordCounts(text: String) -> [String: Double] { // 1 var bagOfWords: [String: Double] = [:] // 2 let tagger = NSLinguisticTagger(tagSchemes: [.tokenType], options: 0) // 3 let range = NSRange(text.startIndex..., in: text) // 4 let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace] // 5 tagger.string = text // 6 tagger.enumerateTags(in: range, unit: .word, scheme: .tokenType, options: options) { _, tokenRange, _ in let word = (text as NSString).substring(with: tokenRange) bagOfWords[word, default: 0] += 1 } return bagOfWords }
The output of the function is a count of each word as it appears in the input string, but let’s break down each step:
- Initializes your bag of words dictionary.
- Creates a
NSLinguisticTaggerset up to tag all the tokens (words, punctuation, whitespace) in a string.
- The tagger operates over a
NSRange, so you create a range for the whole string.
- Set the options to skip punctuation and whitespace when tagging the string.
- Set the tagger string to the text parameter.
- Applies the block to all the found tags in the string for each word. This parameter combination identifies all the words in the string, then increments a dictionary value for the word, which works as the dictionary key.
Using the model
With the word counts in hand, they can now be fed into the model. Replace the contents of
analyze(text:) with the following:
func analyze(text: String) { // 1 let counts = wordCounts(text: text) // 2 let model = Poets() // 3 do { // 4 let prediction = try model.prediction(text: counts) updateWithPrediction(poet: prediction.author, probabilities: prediction.authorProbability) } catch { // 5 print(error) } }
This function:
- Initializes a variable to hold the output of
wordCounts(text:).
- Creates an instance of the Core ML model.
- Wraps the prediction logic in a
do/
catchblock because it can throw an error.
- Passes the parsed text to the
prediction(text:)function that runs the model.
- Logs an error if one exists.
Build and run, then enter a poem and let the model do its magic!
The result is great, but you can chalk that one up to good training! Another poem may not have the desired results. For example, this Joyce Kilmer classic does not.
In this case, the model leans heavily towards Emily Dickinson since there are far more of her poems in the training set than any other author. This is the downside to machine learning — the results are only as good as the data used to train the models.
Where To Go From Here?
You can get the KeatsOrYeats-final project from the Download Materials link at the top or bottom of this tutorial.
If you are feeling adventurous and want to take things further, you could easily build on this tutorial by designing your own text classifier. If you have a large data set with known labels, such as reviews and ratings, genres or filters, it would make a good fit. You can also build more accurate models by feeding them more data or providing multiple columns in the
features input to
classifier.create(). Good candidates would be a poem’s title or style.
Another way to get more accurate predictions is to clean up the input data. Unfortunately, there aren’t a lot of options available to the
text_classifier, but you can use the logic classifier directly. That way, you can provide a massaged input that eliminates common words or uses an n-gram (pair of words rather than a single word) for a more accurate analysis. Turi Create also has a number of helper functions for this purpose available.
You can also learn more about Core ML and machine learning with these other tutorials: Beginning Machine Learning with scikit-learn and Beginning Machine Learning with Keras & Core ML.
Hopefully, you’re interest in all things NLP and machine learning has been piqued! If you’re looking to connect with other like-minded developers, or just want to share something cool, feel free to join the discussion in the forum below! | https://www.raywenderlich.com/5213-natural-language-processing-on-ios-with-turi-create | CC-MAIN-2020-16 | refinedweb | 4,186 | 61.77 |
As the world becomes ever more data-driven, the basic theory of hypothesis testing is being used by more people and in more contexts than ever before. This epansion has, however, come with a cost. The dominant Neyman-Pearson hypothesis testing framework is subtle and easy to unknowingly misuse. In this post, we’ll explore the common scenario where we would like to monitor the status of an ongoing experiment and stop the experiment early if an effect becomes apparent.
There are many situations in which it is advantageous to monitor the status of an experiment and terminate it early if the conclusion seems apparent. In business, experiments cost money, both in terms of the actual cost of data collection and in terms of the opportunity cost of waiting for an experiment to reach a set number of samples before acting on its outcome, which may have been apparent much earlier. In medicine, it may be unethical to continue an experimental treatment which appears to have a detrimental effect, or to deny the obviously better experimental treatment to the control group until the predetermined sample size is reached.
While these reasons for continuous monitoring and early termination of certain experiments are quite compelling, if this method is applied naively, it can lead to wildly incorrect analyses. Below, we illustrate the perils of the naive approach to sequential testing (as this sort of procedure is known) and show how to perform a correct analysis of a fairly simple, yet illustrative introductory sequential experiment.
A brief historical digression may be informative. The frequentist approach to hypothesis testing was pioneered just after the turn of the 20th century in England in order to analyze agricultural experiments. According to Armitage, one of the pioneers of sequential experiment design:
[t]he classical theory of experimental design deals predominantly with experiments of predetermined size, presumably because the pioneers of the subject, particularly R. A. Fisher, worked in agricultural research, where the outcome of a field trial is not available until long after the experiment has been designed and started. It is interesting to speculate how differently statistical theory might have evolved if Fisher had been employed in medical or industrial research.1
In this post, we will analyze the following hypothesis test. Assume the data are drawn from the normal distribution, \(N(\theta, 1)\) with unknown mean \(\theta\) and known variance \(\sigma^2 = 1\). We wish to test the simple null hypothesis that \(\theta = 0\) versus the simple alternative hypothesis that \(\theta = 1\) at the \(\alpha = 0.05\) level. By the Neyman-Pearson lemma, the most powerful test of this hypothesis rejects the null hypothesis when the likelihood ratio exceeds some critical value. In our normal case, it is well-known that this criterion is equivalent to \(\sum_{i = 1}^n X_i > \sqrt{n} z_{0.05}\) where \(z_{0.05} \approx 1.64\) is the \(1 - 0.05 = 0.95\) quantile of the standard normal distribution.
To model the ongoing monitoring of this experiment, we define a random variable \(N = \min \{n \geq 1 | \sum_{i = 1}^n X_i > \sqrt{n} z_{0.05}\}\), called the stopping time. The random variable \(N\) is the first time that the test statistic exceeds the critical value, and the naive approach to sequential testing would reject the null hypothesis after \(N\) samples (when \(N < \infty\), of course). At first, this procedure may seem reasonable, because when the alternative hypothesis that \(\theta = 1\) is true, \(N < \infty\) almost surely by the strong law of large numbers. The first inkling that something is amiss with this procedure is the surprising fact that it is also true that \(N < \infty\) almost surely under the null hypothesis that \(\theta = 0\). (See Keener2 for a proof.) More informally, if we sample long enough, this procedure will almost always reject the null hypothesis, even when it is true.
To illustrate this problem more quantitatively, consider the following simulation. Suppose we decide ahead of time that we will stop the experiment when the critical value for the current sample size is exceeded, in which case we reject the null hypothesis, or we have collected one thousand samples without exceeding the critical value at any point, in which case we accept the null hypothesis.
Here we use a Monte Carlo method to approximate the level of this naive sequential test. First we generate ten thousand simulations of such an experiment, assuming the null hypothesis that the data are \(N(0, 1)\) is true.
from __future__ import division import numpy as np from scipy import stats n = 100 N = 10000 samples = stats.norm.rvs(size=(N, n))
Here each row of
samples corresponds to a simulation and each column to a sample.
Now we calculate the proportion of these simulated experiments that would have been stopped before one thousand samples, incorrectly rejecting the null hypothesis.
alpha = 0.05 z_alpha = stats.norm.isf(alpha) cumsums = samples.cumsum(axis=1) ns = np.arange(1, n + 1) np.any(cumsums > np.sqrt(ns) * z_alpha, axis=1).sum() / N
0.30909999999999999
Here, each row of
cumsums corresponds to a simulation and each column to the value of the test statistic \(\sum_{i = 1}^k X_i\) after \(k\) observations.
We see that the actual level of this test is an order of magnitude larger than the desired level of \(\alpha = 0.05\). To check that our method is reasonable, we see that if we always collect one thousand samples, we achieve a simulated level quite close to \(\alpha = 0.05\).
(cumsums[:, -1] > np.sqrt(n) * z_alpha).sum() / N
0.051700000000000003
This simulation is compelling evidence that the naive approach to sequential testing is not correct.
Fortunately, the basic framework for the correct analysis of sequential experiments was worked out during World War II to more efficiently test lots of ammunition (among other applications). In 1947, Wald introduced the sequential probability ratio test (SPRT), which produces a correct analysis of the experiment we have been considering.
Let
\[ \begin{align*} \Lambda (x_1, \ldots, x_n) & = \frac{L(1; x_1, \ldots, x_n)}{L(0; x_1, \ldots, x_n)} \end{align*} \]
be the likelihood ratio corresponding to our two hypotheses. The SPRT uses two thresholds, \(0 < a < 1 < b\), and continues sampling whenever \(a < \Lambda (x_1, \ldots, x_n) < b\). When \(\Lambda (x_1, \ldots, x_n) \leq a\), we accept the null hypothesis, and when \(b \leq \Lambda (x_1, \ldots, x_n)\), we reject the null hypothesis. We choose \(A\) and \(B\) by fixing the approximate level of the test, \(\alpha\), and the approximate power of the test, \(1 - \beta\). With these quantities chosen, we use
\[ \begin{align*} a & = \frac{\beta}{1 - \alpha}, \textrm{and} \\ b & = \frac{1 - \beta}{\alpha}. \end{align*} \]
For our hypothesis test \(\alpha = 0.05\). The power of the naive test after \(n\) samples is
power = 1 - stats.norm.sf(z_alpha - 1 / np.sqrt(n)) beta = 1 - power power
0.93880916378665569
Which gives the following values for \(a\) and \(b\):
a = beta / (1 - alpha) b = (1 - beta) / alpha a, b
(0.06441140654036244, 18.776183275733114)
For our example, it will be benificial to rewrite the SPRT in terms of the log- likelihood ratio,
\[ \begin{align*} \log a & < \log \Lambda (x_1, \ldots, x_n) < \log b. \end{align*} \]
It is easy to show that \(\log \Lambda (x_1, \ldots, x_n) = \frac{n}{2} - \sum_{i = 1}^n X_i\), so the SPRT in this case reduces to
\[ \begin{align*} \frac{n}{2} - \log b & < \sum_{i = 1}^n X_i < \frac{n}{2} - \log a. \end{align*} \]
The logarithms of \(a\) and \(b\) are
np.log((a, b))
array([-2.74246454, 2.93258922])
We verify that this test is indeed of approximate level \(\alpha = 0.05\) using the simulations from our previous Monte Carlo analysis.
np.any(cumsums >= ns / 2 - np.log(a), axis=1).sum() / N
0.036299999999999999
The following plot shows the rejection boundaries for both the naive sequential test and the SPRT along with the density of our Monte Carlo samples.
From this diagram we can easily see that a significant number of samples fall above the rejection boundary for the naive test (the blue curve) but below the rejection boundary for the SPRT (the red line). These samples explain the large discrepancy between the desired level of the test (\(\alpha = 0.05\)) and the simulated level of the naive test. We also see that very few samples exceed the rejection boundary for the SPRT, leading it to have level smaller than \(\alpha = 0.05\).
It is important to note that we have barely scratched the surface of the vast field of sequential experiment design and analysis. We have not even attempted to give more than a cursory introduction to the SPRT, one of the most basic ideas in this field. One property of this test that bears mentioning is its optimality, in the following sense. Just as the Neyman-Pearson lemma shows that the likelihood ratio test is the most powerful test of a simple hypothesis at a fixed level, the Wald-Wolfowitz theorem shows that the SPRT is the sequential test that minimizes the expected stopping time under both the null and alternative hypotheses for a fixed level and power. For more details on this theorem, and the general theory of sequential experiments, consult Bartroff et al.3.
This post is available as an IPython notebook here.
Discussion on Hacker News
Armitage, P. (1993). Interim analyses in clinical trials. In Multiple Comparisons, Selection, and Applications in Biometry, (Ed., F.M. Hoppe), New York: Marcel Dekker, 391–402.↩
Keener, Robert W., Theoretical statistics, Topics for a core course. Springer Texts in Statistics. Springer, New York, 2010.↩
Bartroff, Jay; Lai, Tze Leung; Shih, Mei-Chiung, Sequential experimentation in clinical trials. Design and analysis. Springer Series in Statistics. Springer, New York, 2013↩ | http://austinrochford.com/posts/2014-01-01-intro-to-sequential-testing.html | CC-MAIN-2017-13 | refinedweb | 1,621 | 54.83 |
On Fri, 2005-01-14 at 19:58, Serge E. Hallyn wrote: > There is a bigger problem with the current loginuid assumptions. The > loginuid is stored on the audit_context. The audit_context is only > created when auditing has been enabled using auditctl, and an auditable > action has occurred. > > Either we need to change the behavior to always create an audit_context > (with state=AUDIT_DISABLED) so long as AUDIT_SYSCALL is enabled, or we > need to move loginuid directly into the task_struct. I'm not sure I follow. First, the current code seems to always set up an audit context by default unless the task is explicitly marked non-auditable. Second, even if an audit context does not exist, you can easily check for a null audit context and just return (uid_t)-1 in that case. The loginuid serves no purpose for non-auditable tasks, and it seems wasteful to put it into the task struct. -- Stephen Smalley <sds epoch ncsc mil> National Security Agency | https://www.redhat.com/archives/linux-audit/2005-January/msg00168.html | CC-MAIN-2017-51 | refinedweb | 161 | 64.41 |
How I Have A Mobile & Desktop Site With Django
Posted: | More posts about blog css desktop django google html javascript js linux middleware mobile open-source optimization programming python templates tips web
Part of the latest version of my site included the deployment of a mobile-friendly site. Up until recently, I hadn't even attempted to create a mobile site because I thought it would take more time than it was worth. I wanted something beyond just using CSS to hide certain elements on the page. I wanted to be able to break down the content of my site into its most basic pieces and only include what was necessary. Also, I wanted to figure it out on my own (instead of reusing wheels other people had invented before me--horrible, I know).
With these requirements, I was afraid it would require more resources than I could spare on my shared Web host. My initial impression was that I would have to leverage the django.contrib.sites framework in a fashion that would essentially require two distinct instances of my site running in RAM. Despite these feelings, I decided to embark on a mission to create a mobile-friendly site while still offering a full desktop-friendly site. It was surprisingly simple. This may not be the best way to do it, but it sure works for me, and I'm very satisfied. So satisfied, in fact, that I am going to share my solution with all of my Django-loving friends.
The first step is to add a couple of new settings to your settings.py file:
import os DIRNAME = os.path.abspath(os.path.dirname(__file__)) TEMPLATE_DIRS = ( os.path.join(DIRNAME, 'templates'), ) MOBILE_TEMPLATE_DIRS = ( os.path.join(DIRNAME, 'templates', 'mobile'), ) DESKTOP_TEMPLATE_DIRS = ( os.path.join(DIRNAME, 'templates', 'desktop'), )
For those of you not used to seeing that os.path.join stuff, it's just a (very efficient) way to make your Django project more portable between different computers and even operating systems. The new variables are MOBILE_TEMPLATE_DIRS and DESKTOP_TEMPLATE_DIRS, and their respective meanings should be fairly obvious. Basically, this tells Django that it can look for templates in your_django_project/templates, your_django_project/templates/mobile, and your_django_project/templates/desktop.
Next, we need to install a middleware that takes care of determining which directory Django should pay attention to when rendering pages, between mobile and desktop. You can put this into your_django_project/middleware.py:
from django.conf import settings class MobileTemplatesMiddleware(object): """Determines which set of templates to use for a mobile site""" ORIG_TEMPLATE_DIRS = settings.TEMPLATE_DIRS def process_request(self, request): # sets are used here, you can use other logic if you have an older version of Python MOBILE_SUBDOMAINS = set(['m', 'mobile']) domain = set(request.META.get('HTTP_HOST', '').split('.')) if len(MOBILE_SUBDOMAINS & domain): settings.TEMPLATE_DIRS = settings.MOBILE_TEMPLATE_DIRS + self.ORIG_TEMPLATE_DIRS else: settings.TEMPLATE_DIRS = settings.DESKTOP_TEMPLATE_DIRS + self.ORIG_TEMPLATE_DIRS
Now you need to install the new middleware. Back in your settings.py, find the MIDDLEWARE_CLASSES variable, and insert a line like the following:
'your_django_project.middleware.MobileTemplatesMiddleware',
Finally, if you already have a base.html template in your your_django_project/templates directory, rename it to something else, such as site_base.html. Now create two new directories: your_django_project/templates/mobile and your_django_project/templates/desktop. In both of those directories, create a new base.html template that extends site_base.html.
Example site_base.html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns=""> <head> <title>{% block base_title %}Code Koala{% endblock %} - {% block title %}Welcome{% endblock %}</title> <link href="{{ MEDIA_URL }}css/common.css" rel="stylesheet" type="text/css" media="screen" /> {% block extra-head %}{% endblock %} </head> <body> <div id="page-wrapper"> {% block header %} <div id="logo"> <h1><a href="/">Code Koala</a></h1> </div> <div id="header"> <div id="menu"> <ul> <li><a href="/" class="first">Home</a></li> <li><a href="/blog/">Blog</a></li> <li><a href="/about/">About</a></li> <li><a href="/contact/">Contact</a></li> </ul> </div> </div> {% endblock %} <div id="page"> <div id="content"> {% block content %}{% endblock %} </div> <div id="sidebar"> {% block sidebar %} Stuff {% endblock %} </div> </div> <div id="footer"> {% block footer %} Footer stuff {% endblock %} </div> </div> </body> </html>
Example desktop/base.html
{% extends 'site_base.html' %} {% block extra-head %} <!-- stylesheets --> <link href="{{ MEDIA_URL }}css/desktop.css" rel="stylesheet" type="text/css" media="screen" /> <!-- JavaScripts --> <script type="text/javascript" src="{{ MEDIA_URL }}js/jquery.js"></script> {% endblock %}
Example mobile/base.html
{% extends 'site_base.html' %} {% block extra-head %} <!-- stylesheets --> <link href="{{ MEDIA_URL }}css/mobile.css" rel="stylesheet" type="text/css" media="screen" /> {% endblock %} {% block sidebar %}{% endblock %}
Please forgive me if the HTML or whatever is incorrect--I butchered the actual templates I use on Code Koala for the examples. There are some neat things you can do in your pages to make them more mobile friendly, such as including something like <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;" /> in your <head> tag. This is supposed to tell your visitor's browser to not scale your pages to make it all fit on the screen. You can find a lot of other such tips elsewhere on the interwebs, and I'm sure they'll be better explained elsewhere too. You can also find scripts to handle redirecting your visitors to a mobile site and whatnot. Google is your friend.
As for the Django side of things, that should be just about it. If you have other templates you want to customize based on the version of your site that visitors are viewing, simply add those templates to the your_django_project/templates/mobile or your_django_project/templates/desktop directories as necessary. For example, if you have an application called blog, and you want to override the entry_detail.html template for the mobile site, so it doesn't pull in a bunch of unnecessary information to save bandwidth, you could save your modified copy in your_django_project/templates/mobile/blog/entry_detail.html.
With this setup, all you have to do is point your main domain and a subdomain such as m.yourdomain.com to the same Django application, and the middleware will take care of the "heavy lifting". No need for an additional instance of your Django project just for the mobile site. No hackish hiding of elements using CSS. If you find this article useful and decide to use these techniques on your site, please let me know how it works in your environment and if you ran into any snags so I can update the information! | http://www.codekoala.com:4000/posts/how-i-have-mobile-desktop-site-django/ | CC-MAIN-2014-10 | refinedweb | 1,075 | 56.96 |
I've set up this instance of the Conduit Application:
It uses:
- Cache 2015.2, running on Ubuntu Linux and Node.js v6.10, on an AWS EC2 instance
- QEWD with 2 child processes
- qewd-conduit RealWorld Conduit back-end
- The React/Redux version of the front-end for Conduit, with its data served up via REST calls to the qewd-conduit back-end
Note: no changes were needed for the application to run with Cache.
Hi David
Glad to hear your success in getting it working for you.
There's a right way and a somewhat dodgy way to do what you want to do.
The right way is to have separate instances of QEWD, each connected to a particular namespace and listening on a different port. You could probably proxy them via some URL re-writing (eg with nginx at the front-end)
The dodgy way which I think should work is to have a function wrapper around $zu(5) (or equivalent) to change namespace, and make a call to this in each of your back-end handler functions. If you do this you need to make sure that you switch back to the original namespace before your finished() call and return from your function. If an unexpected error occurs, you need to realise that your worker process could end up stuck in the wrong namespace.
Your namespace-switching function would need to be callable from all your Cache namespaces - use routine mapping for this.
Doing this namespace switching will be at your own risk - see how it goes
BTW for these type of situations where you want to do the same thing before and after every handler function, you might find the latest feature (beforeHandler and afterHandler) described here very useful:...
Rob | https://community.intersystems.com/user/27171/comments?page=8 | CC-MAIN-2020-45 | refinedweb | 295 | 62.61 |
kstars
#include <ksearthshadow.h>
Detailed Description
A class that manages the calculation of the earths shadow (in moon distance) as a 'virtual' skyobject.
KSMoon is responsible for coordinating this object. While a rather unusual measure, this method ensures that unnecessary calculations are avoided.
- Version
- 1.0
Definition at line 38 of file ksearthshadow.h.
Member Enumeration Documentation
The ECLIPSE_TYPE enum describes the quality of an eclipse.
Definition at line 56 of file ksearthshadow.h.
Constructor & Destructor Documentation
- Parameters
-
- Note
- The three parameters must be supplied to avoid initialization order problems. This class may be generalized to any three bodies if it becomes necessary in the future. This class is relatively cheap, so it's save to create new instances instead of reusing an existing one.
Definition at line 21 of file ksearthshadow.cpp.
Member Function Documentation
Updates umbra and penumbra radius from the positions of the three bodies.
Definition at line 82 of file ksearthshadow.cpp.
angSize
- Returns
- the angular size (penumbra) in arc minutes
Reimplemented from KSPlanetBase.
Definition at line 120 of file ksearthshadow.h.
find the object's current geocentric equatorial coordinates (RA and Dec) This function is pure virtual; it must be overloaded by subclasses.
This function is private; it is called by the public function findPosition() which also includes the figure-of-the-earth correction, localizeCoords().
- Parameters
-
- Returns
- true if position was successfully calculated.
Implements KSPlanetBase.
Definition at line 56 of file ksearthshadow.cpp.
Computes the visual magnitude for the major planets.
- Parameters
-
Implements KSPlanetBase.
Definition at line 127 of file ksearthshadow.h.
Determine the phase of the planet.
Reimplemented from KSPlanetBase.
Definition at line 132 of file ksearthshadow.h.
eclipse
- Returns
- The eclipse type.
- See also
- KSEarthShadow::ECLIPSE_TYPE
Definition at line 37 of file ksearthshadow.cpp.
- Returns
- The angular radius of the penumbra.
Definition at line 111 of file ksearthshadow.h.
- Returns
- The angular radius of the umbra.
Definition at line 103 of file ksearthshadow.h.
isInEclipse - a slim version of getEclipseType()
- Returns
- Whether the earth shadow eclipses the moon.
Definition at line 31 of file ksearthshadow.cpp.
Implements KSPlanetBase.
Definition at line 128 of file ksearthshadow.h.
The earths shadow on the moon appears only at new moon so calculating it on other occasions is rather pointless.
- Returns
- whether to update the shadow or not
Definition at line 26 of file ksearthshadow.cpp.
Update the Coordinates of the shadow.
In truth it finds the sun and calls KSEarthShadow::updateCoords(const KSSun *)
Reimplemented from SkyPoint.
Definition at line 63 of file ksearthshadow.cpp.
Update the RA/DEC of the shadow.
Definition at line 69 of file ksearthshadow.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Sat May 9 2020 05:05:05 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/extragear-api/edu-apidocs/kstars/html/classKSEarthShadow.html | CC-MAIN-2020-29 | refinedweb | 479 | 52.76 |
It looks like you're new here. If you want to get involved, click one of these buttons!
import fontlab as fl
import math
for glyph in fl.CurrentFont().glyphs:
for layer in fl.flGlyph(glyph).layers:
for contour in layer.getContours():
bbox = contour.boundingBox()
dist = lambda pt: math.sqrt(
(bbox.x() - pt.x) ** 2 + (bbox.y() - pt.y) ** 2
)
nodes = [
(index, node)
for index, node in enumerate(contour.nodes())
if node.type == "on"
]
start_index = sorted(
[(index, dist(node)) for index, node in nodes],
key=lambda _: _[1],
)[0][0]
if start_index != 0:
contour.setStartPoint(start_index)
fl.CurrentFont().update()
Either way, if anyone can tell me if there is a python for typography course I would be thankful.
I have become highly dependent on TypeRig for a number of things.
From the FL7 release notes:
“The Scripts menu has a Update / Install Scripts command. Click it to install some useful Python packages as well as TypeRig, a monumental Python extension for FontLab 7 developed independently by Vassil Kateliev. TypeRig brings some example Python scripts as well as TypeRig GUI, a large panel that lets you perform various batch operations. When you run Update / Install Scripts for the first time, you will need to restart FontLab 7 two or three times. Run the command once every few weeks to update TypeRig. Note: This functionality is experimental.” | https://typedrawers.com/discussion/3529/help-with-quick-python-script | CC-MAIN-2020-29 | refinedweb | 226 | 69.18 |
C# - Switch Statement
A switch statement allows a variable to be tested for equality against a list of values. Each value is called a case, and the variable being switched on is checked for each switch:
using System; namespace DecisionMaking { class Program { static void Main(string[] args) { /* local variable definition */ char grade = 'B'; switch (grade) { case 'A': Console.WriteLine("Excellent!"); break; case 'B': case 'C': Console.WriteLine("Well done"); break; case 'D': Console.WriteLine("You passed"); break; case 'F': Console.WriteLine("Better try again"); break; default: Console.WriteLine("Invalid grade"); break; } Console.WriteLine("Your grade is {0}", grade); Console.ReadLine(); } } }
When the above code is compiled and executed, it produces the following result:
Well done Your grade is B | http://www.tutorialspoint.com/csharp/switch_statement_in_csharp.htm | CC-MAIN-2014-15 | refinedweb | 119 | 59.6 |
by
kirupa | 10 October 2016
The last major React-related topic we are going to look at is less about React and more about setting up your development environment to build a React app. Up until now, we've been building our React apps by including a few script files:
Hi Kirupa! I came across something new when trying to complete the final step of this tutorial:
Luckily it's an easy fix with simply modifying the webpack.config.js file by adding "-loader" to the end of babel:
module: { loaders: [{ include: DEV, loader: "babel-loader", }] } };
PS. Thanks for this tutorial too - I would have been so lost in the woods without it!
Thanks for pointing this out, grady! I have updated the article with the updated value. I swear it worked when I wrote it haha!
@kirupa thats an amazing tutorial i am newbie and wanted to know that I have followed the steps...how will it be running in localhost? there is no web-dev-server here so how localhost will be running?
Unless your app is doing something that absolutely needs a server, you can do a lot using just the file system. That is why I originally didn't mention anything about that.
With that said, I should update the article to mention using a web server, though. It is good practice. Thanks for letting me know. I'll look into getting this revised soon
Hi, I'm getting this error when I try to run webpack.EDIT: I solved this. I had missed the step adding the babel presets to the package.json
quillfish:MyTottalyAwesomeApp dbergum$ ./node_modules/.bin/webpackHash: ecc64a2da2ebdaf303d5Version: webpack 2.3.3Time: 279ms Asset Size Chunks Chunk NamesmyCode.js 3.49 kB 0 [emitted] main [0] ./dev/index.jsx 862 bytes {0} [built] [failed] [1 error]
ERROR in ./dev/index.jsxModule build failed: SyntaxError: Unexpected token (7:12)
5 | render: function() { 6 | return (
7 |
Hello, {this.props.greetTarget}!
| ^ 8 | ); 9 | } 10 | });
7 |
Hello, {this.props.greetTarget}!
@kirupa Thanks for the great tutorial, i am waiting for "how to setup web server for this index.html" please update this tutorial for the next steps.
Hey - images seem to be down. Did the images location change? the links seem to be going to ".comimages/"
Tyler - what browser are you using? They work fine for me. The image location shouldn't have changed.
chrome!
oh - it seems that the original post shows the images, but clicking 'view full post' does not show them. It seems that the images that work have a missing "/react/" from the file path!
I never even noticed that button! These posts get auto-generated when content on the main kirupa.com site is posted, and the URL for showing the full version is broken. That's why images don't load. I'll fix that shortly. Thanks for pointing that out | https://forum.kirupa.com/t/setting-up-your-react-dev-environment-easily-kirupa-com/636205 | CC-MAIN-2017-17 | refinedweb | 480 | 69.28 |
The following forum(s) have migrated to Microsoft Q&A:
All English Windows Desktop Development forums!
Visit Microsoft Q&A to post new questions.
I am noticing that remote shares can have multiple different forms when converted to the nt namespace path. For example, it can be:
\\lanmanredirector\;z:00000\<share>
\\MUP\;z:0000\<share>
\\lanmanredirector\<share>
etc.
Is there a way a website, or some tool that I can use that dumps all the various formats for remote paths? I am writing an application that requires I normalize all these paths so that I can do lookups on the nt namespace paths. The problem I
am having is that they seem to change from OS to OS. I am using QueryDosDevice to translate from win32 namespace to nt namespace.
Thanks
Hello Jon,
Thanks for your post.
According to your description, this issue is more related to Windows Desktop SDK Forum. This thread will be moved there, your understanding will be appreciated.
Damon Zheng [MSFT]
MSDN Community Support | Feedback to us | https://social.msdn.microsoft.com/Forums/windowsdesktop/en-US/0d181d8d-2891-42f8-ad4f-1620fc0471fd/nt-namespace-path-for-remote-shares?forum=windowssdk | CC-MAIN-2020-40 | refinedweb | 171 | 66.33 |
Can anyone give me some kind of hint of what I am doing wrong here?
I am trying to find the median integer in the file. I have set it up the way my teacher asked us to, which was to make 2 passes through the file entitled "Section51.dat". The first pass (loop) is supposed to figure out how many numbers are in the file. This works correctly.
But the second loop is supposed to get the number in the middle of the file. For me it is just dividing the count in half which means I am obviously not doing something correct.
I also can't figure out how to do this in a case when you have an even set of integers in "Section51.dat" because then you will obviously need to average the two numbers in the middle of the file to get the median.
I have not had very much experience (or luck) with while loops yet, and I can't figure out how I am supposed to get the median integer. Can someone give me some guidance?
Code:#include <iostream> #include <fstream> using namespace std; int main() { ifstream WOW; WOW.open("Section51.dat"); int count = 0, aNumber; while (WOW >> aNumber) { count++; } int median; if (count / 2.0 == 0) { median = (count / 2.0) + ((count / 2.0) + 1) / 2.0; } else { median = count / 2.0; } while (WOW >> aNumber && count << median) { count++; median--; } cout << aNumber; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/101113-find-median.html | CC-MAIN-2015-32 | refinedweb | 240 | 75.61 |
This tutorial shows you how to build a Single Document Interface (SDI) application with NetBeans. It presumes that you know what an SDI user interface is and that you want to build one with NetBeans.
You can find more about SDI interfaces here: additional SDI information.
In this tutorial you will build a NetBeans project with a main window and two child windows that display traffic camera pictures that update several times per minute. The traffic cam images are from the Minnesota Department of Transportation Web site. You can see that site here: MDOT Traffic Cameras .
If you try to run the application you won't see much, but try running it anyway. To compile and run the application click on the Run menu then click on Run Main Project. (Or click on the tool bar button for running the main project.)
You should see a dialog box telling you that there is no main application class to compile and run. Recall that earlier you told NetBeans not to create a main class. SDIAppGui.SDIAppMainWindow should be highlighted as a suggested class to create and compile. Click the OK button. NetBeans will chug away compiling the application and in a moment a blank window will display. That's the application thus far. Close the window.
Figure 3. Creating a JFrame Form
What you put on your main application window is up to you, but for this tutorial you will need to put a few things on it to open other windows.
All you have at this point is a JFrame with nothing on it. You will add a JPanel onto the JFrame to have a container to hold stuff. You will also add a menu to invoke the secondary or child windows that you will create shortly.
Before you start adding components to the JFrame you might want to make the main application window open in the center of the screen and give the application a title in the main window title bar. To center the main application window do the following:
Your application will now open in the center of the screen. Run the application to test it.
To give your application a title, right click on the JFrame again and re-open its properties dialog box. Click the properties tab. For the title property type: Traffic Cameras. Close the dialog box and rerun the application. You should see the title in the window's title bar when you run the application.
Rename the JFrame to: frameMain. To do this, reopen the frame's properties dialog box. Click the Properties tab and scroll down to the name property. Type frameMain for the name property. Close the dialog box.
Now add Swing components to the JFrame. The NetBeans IDE will have to be displaying the Design Editor window as shown in Figure 2. The Palette window is in the upper right of the IDE. You can add components by dragging them to the JFrame, or by clicking on a component in the palette, then clicking on the JFrame.
Add a menu to the main application window:
Add a JPanel:
Add a border to the JPanel:
Now that you have your main application window you can create the child windows. For this tutorial you will create two windows. In a real application you would likely have numerous child windows, but for this tutorial two child windows will suffice to demonstrate an SDI user interface.
Your child windows will be JDialog components. You will be adding two JDialog components to your main application to create the child windows. Go to the NetBeans Design Editor window.
Repeat steps 1 through 6 above for the second child window, but in step 4 name the window cam2Win, and in step 6 change the title to Camera 2.
The JLabel components are containers for the traffic camera images.
Your child windows are ready to be used by the main application window. Now you will add Java code to open the child windows and to display the traffic images.
Rather than typing the code in the steps below, copy the code from this Web page and paste it into your application source at the appropriate places.
import java.awt.Image; import java.awt.Toolkit; import java.awt.event.ActionEvent; import java.net.MalformedURLException; import java.net.URL; import javax.swing.AbstractAction; import javax.swing.Action; import javax.swing.ImageIcon; import javax.swing.Timer;
On the main menu you need to add code to quit the application and code to open the child windows. Click on the design button so that the design editor window shows.
private void menuMainQuitMouseClicked(java.awt.event.MouseEvent evt) { // TODO add your handling code here: }
private void menuMainCam1MouseClicked(java.awt.event.MouseEvent evt) { // TODO add your handling code here: }
cam1Win.setBounds(0,0,400,400); cam1Win.setVisible(true);The first statement sets the location of the child window to the top left of the screen with a width and height of 400 pixels. The second statement displays the window.
private void menuMainCam2MouseClicked(java.awt.event.MouseEvent evt) { // TODO add your handling code here: }
cam2Win.setBounds(400,0,400,400); cam2Win.setVisible(true);Note that this second window has a setBounds statement that positions it 400 pixels from the left so that it does not overlap the first window.
Now the main application menu items have code to quit the application and to open the child windows. Run the program and click on the menus to see that they work. Notice that with the child windows open, when you close the main application window, the child windows also close.
private void firstChildWinWindowActivated(java.awt.event.WindowEvent evt) { // TODO add your handling code here: }
Action updateImage1 = new AbstractAction() { public void actionPerformed(ActionEvent e) { URL u = null; lblImage1.setIcon(null); try { u = new URL(""); Image img = Toolkit.getDefaultToolkit().createImage(u); ImageIcon pic = new ImageIcon(img); lblImage1.setIcon(pic); } catch (MalformedURLException mue){ mue.printStackTrace(); } } }; new Timer(7000, updateImage1).start();This code retrieves a highway camera image from the MNDOT web site and creates an image object for it. Then the code assigns the image to the JLabel (lblImage1.setIcon(pic)) icon property. An action even is wrapped around the image retrieval so that a Swing timer can repeatedly call the code to refresh the image. The last statement in this code, new Timer(7000, updateImage1).start();, starts a Swing timer that refreshes the image about four times a minute.
private void secondChildWinWindowActivated(java.awt.event.WindowEvent evt) { // TODO add your handling code here: }
Action updateImage2 = new AbstractAction() { public void actionPerformed(ActionEvent e) { //System.out.println("object doesn't exist" ); URL u = null; lblImage2.setIcon(null); try { u = new URL(""); Image img = Toolkit.getDefaultToolkit().createImage(u); ImageIcon pic = new ImageIcon(img); lblImage2.setIcon(pic); } catch (MalformedURLException mue){ mue.printStackTrace(); } } }; new Timer(7000, updateImage2).start();This code is slightly different from that in the first child window. The action event name, the image URL camera number, and the JLabel name that contains the image are different. The MNDOT traffic cameras do go down occasionally, so if, when you run the application, a child window does not display an image within a couple minutes, go to the MNDOT site and choose a camera. Then use that camera number in place of the 624 or 848 in the above code.
That's it. You now have an SDI application that does something (Figure 8). A couple notes before you run the application. When you run the application it will take a minute for the images to show in the child windows. It takes a minute or so for the timers to start, get the images from the URL and display the image. Also, because you have a timer or two running, when you try to close the application, it will not respond immediately.
Figure 8
This tutorial showed you how to create a Java SDI application using NetBeans. You saw how to create a NetBeans project and an application window, and how to create child windows that are invoked from the main application window. | http://wiki.netbeans.org/SDIAppNetBeans | crawl-002 | refinedweb | 1,343 | 67.15 |
User:Reece/User/MessageArchive
Message Archive
This page will serve as an archive of my talk page messages. Please do not alter this page..
-. For site news, see the Bulletin board. It might be an idea if you add this page to your "watchlist" so that you can see when any new information is posted there.
Good luck! --Herby talk thyme 19:43, 29 December 2006 (UTC)
Todays work!
Very helpful indeed - thanks --Herby talk thyme 18:21, 30 December 2006 (UTC)
Bugs
I love the look of the bugs page - Wikijunior has such quality to it. I run a "bot" that can do things like change or add categories, check for typos and layout etc. I'd be happy to help - regards --Herby talk thyme 17:16, 1 January 2007 (UTC)
- If/when you work out a category structure let me have the details and I'll happily place them on the pages in a semi auto way - regards --Herby talk thyme 17:54, 1 January 2007 (UTC)
- Just let me know & I'll do it as soon as I can - I have a "soft" spot for Wikijunior's quality and presentation --Herby talk thyme 18:01, 1 January 2007 (UTC)
Visual Basic Classic
Please don't just summarily delete images that have no copyright info. It would be more efficient to notify the author first by adding an entry to the talk page of the author so that he or she can make an attempt to rectify the situation. Then if nothing happens after a week or so of course carry on and delete. --kwhitefoot 11:02, 2 January 2007 (UTC)
- I've responded to this directly --Herby talk thyme 11:07, 2 January 2007 (UTC)
Hi
As I have been in touch with you about some bits I hope you won't mind me approaching you about something specific. While vandalism is not the issue here that it is on Wikipedia we do still have a level of vandalism that needs tackling often by Admins as there are fewer users here than Wikipedia. One of the tools that allows us to tackle vandals more easily is Checkuser. This means that we can see the IP address a registered user has vandalised from and see if this may have happened before (it often has). To get these rights required 25 affirmative votes which on a small wiki is not easy. A hard working Admin is looking for just two more votes to get this. If you would like to look at this then go here Wikibooks:Requests_for_adminship#SBJohnny_.28talk_.7C_email_.7C_contribs_.7C_logs. 29 and you are welcome to vote. I must stress that you are under no pressure at all to do so and if you decide not to that is fine. I do hope you do not mind me contacting you in this way - thanks and regards --Herby talk thyme 18:07, 6 January 2007 (UTC)
- And thanks for responding - I felt it was a bit cheeky but I hoped you would say no if you felt it - cheers --Herby talk thyme 12:10, 7 January 2007 (UTC)
Bots
OK - this is someone who "uses" things rather than necessarily understands them! In practice there is far more on Wikipedia about bots too (I'm from there originally). The one I use is w:AWB. There is quite a variety of things it can do from very basic typos and formatting, through adding or taking away categories through to quite advanced "search & replace".
On WP the use of semi auto bot type software is quite carefully controlled, here far less so (AWB can be run auto but I generally review changes before accepting them and run on an ordinary user account that would have to be the way). However I have a bot account user:Herbys bot so I can run auto if I want (a bot account has to be approved). I know there are folks who have "written their own" (Whiteknight for example) but tha is beyond me.
Enough waffle I guess - better if you have particular queries to get back to me again (& you are welcome to do so). I intend running the bot later today on basic cleanup stuff. Regards --Herby talk thyme 15:29, 7 January 2007 (UTC)
Wikijunior New Title Votes
Are you aware that you voted in an old Wikijunior new title vote? The page you voted on was for the last few months of 2006. Now you should vote on: Wikijunior:New Book of the Quarter/2007-1st Quarter Vote Xania
talk 16:44, 7 January 2007 (UTC)
Er, it seems that that wasn't the correct page either. That vote finished on 31/Dec 2006. I'll try to find the link for the current vote. Xania
talk 16:47, 7 January 2007 (UTC)
Try this page: Wikijunior:New Book of the Quarter/2007-2nd Quarter Vote
Re: My Bot
My "bot" essentially consists of a set of library modules, written in Perl. I have all the basic functionality written to edit pages (download text, upload changes, etc), and I'm working to expand on that functionality with a set of higher-level modules and client programs. At the moment, most of my programs are semi-automated, but the ability is there to make programs that are fully-automated as well. I haven't done any development on it at all recently, but I am going to be working on it more in the future. I hope this answers your questions --Whiteknight (talk) (projects) 23:29, 8 January 2007 (UTC)
Very odd indeed
When I'm bored I look into that - think you must have got caught up in some vandalism. Should be ok now. Cheers --Herby talk thyme 18:09, 13 January 2007 (UTC)
Images
Bit late for me but if you don't get a good answer you'll get something from me tomorrow - regards --Herby talk thyme 19:50, 17 January 2007 (UTC)
- Further to my bit in the Staff lounge this may "illuminate" things a little more -[1]. If you have queries let me know (I probably asked the same things a fairly short time ago <g>) --Herby talk thyme 13:17, 18 January 2007 (UTC)
Layout of New Bugs Main Page
Hey Urbane User I just checked out the new userpage, and it looks very great in my opinion, I like the new layout it's nice and neat, and easier to look around in (especially the contents), so great job on it! Avenue of Covina 04:52, 20 January 2007 (UTC)
Sig
Not totally sure I understand you. For the sig (I can almost do it blindfold - better than my normal typing!) if I don't type it I use the button on the editor (if you have .js enabled). While you could template it you would have to type something to get the template (I'm guessing you have the sig set up in "preferences"). Get back to me won't you (adopted or otherwise! - btw I see timezones as another issue with adoption - there are some folk I never see and some I always see?) --Herby talk thyme 10:21, 21 January 2007 (UTC)
- OK - to me "Welcome" consist of a copy and paste of {{subst:welcome}} and --~~~~ which I find fairly automatic. I confess I frequently use a text scratch pad which will have stuff like that it (used to have spedy deleted with frequently used reasons but I deleted then now!). Equally I often do a batch at a time - for example take 500 on RC, exclude anons ([2]) and cast me eye over for red talk pages (sneaky if that put something on themselves!). Then using tabbed browsing open them all up for a copy & paste session?? Let me know - cheers --Herby talk thyme 10:32, 21 January 2007 (UTC)
Language project Freestyle
Welcome on board! I have recently set up the basic pages and templates for the project. You can access its main page following this link: Freestyle.
Generally, there are two ways of creating new articles for this project: either you write an article from scratch or you adapt an actual text from English Wikinews. All kinds of topics are welcome, e. g. breaking news, politics, gossip, texts about the English language and its dialects... choose what you thinkg might be interesting for a foreign reader.
Try to adapt your style of writing to the grade (beginner, mediocre, advanced) you write for. Here are some criteria that might be helpful:
- Beginner: short (100-120 words), easy grammar (present, straight sentences, no conditional), mostly day-by-day vocabulary. Suggested topics: breaking news, gossip, ...
- Mediocre: medium (150-200 words), mediocre grammar (present, past, future, easy conditional sentences). Suggested topics: usual daily press talk, ...
- Advanced: long (150-400 words), everything is allowed. Suggested topics: politics, art, literature, ...
I suggest we proceed thus: choose a topic, write an article and post it to the site, replacing TEXTTITLE by the headline of your article. Please provide the sources from which you have compiled the text. If possible, also provide a suitable picture from Wikimedia Commons. Let me know when you have finished, then I will see to the formatting and annotating. Later on, I can show you how to do the basic formatting (headline, picture, date, sources) yourself. If you have questions, contact me at any time. --Bitbert 14:03, 21 January 2007 (UTC)
Vandalism
Wikibooks:Vandalism in progress or Wikibooks:Administrators' noticeboard are both fine - thanks --Herby talk thyme 14:22, 26 January 2007 (UTC)
- OK - dealt with but always feel free to revert stuff yourself - look at the history and edit the one before the vandalism. Equally look at Wikibooks:Template messages/User talk namespace to find the messages you vcan places to warn people against vandalism - works quite often - thanks --Herby talk thyme 14:45, 26 January 2007 (UTC)
Please notify users whose images you tag
I noticed that you tagged Willem Bol's image Image:At class waves.jpg. Thank you for catching that! May I please ask, however, that you notify users whose images you tag. It is very helpful to them; and many times, they get your notice and appropriately tag the image right away! Just use {{subst:image copyright|Image:Image_name.ext}} ~~~~ Thank you! Iamunknown 22:55, 1 February 2007 (UTC)
Image delinking and Freeware
I wanted to ask if you could please stop deleting red image links from pages, especially in this book. We have several people who are working hard on the image problem, tagging images, tracking down users, and sorting through years of backlog. I know that you have participated in this project some as well, so I hope you can understand the importance of this request.
Once we get the image situation figured out, you can go back to business as usual. Thank you. --Whiteknight (talk) (projects) 02:27, 2 February 2007 (UTC)
- No worries. Everybody wants to help out, but sometimes different people "help" in contradictory ways. I know that you are a good helper, and I don't want to see your hard work get undone by our other good helpers. --Whiteknight (talk) (projects) 15:35, 2 February 2007 (UTC)
AdnanSa
Thank you for nice words :) I shell try develop wikibooks on bosnian language, especialy books for kids, so I don't think I'll contribute much on this wiki, but who knows :) Maybe one day. AdnanSa 16:03, 13 February 2007 (UTC)
I have started a discussion about this at Wikibooks:Staff_lounge#Changes_to_main_page where it suggests Wikibooks:Nearly_complete as a main page. RobinH 13:38, 18 February 2007 (UTC)
I agree with your comments on the staff lounge. The change to the Main Page that has turned it into a Library Catalogue looks like a retrograde step. It needs an admin to change it to something more relevant (See Main Page/test) RobinH 11:45, 20 February 2007 (UTC)
NoEditSection
Hi... this tag makes the blue "edit" link above the header sections on a page disappear. (I have no idea why it's used, but that's what it does :). --SB_Johnny | talk 20:52, 8 March 2007 (UTC)
- Hi - apologies for the delay in the reply - I've been away for a few days. You seem to have had an answer to the query and I can only confirm what Johnny said - it prevents someone adding a section to a page via a top "+" tab. Cheers --Herby talk thyme 12:08, 13 March 2007 (UTC)
Alphabetizing
Thanks. Makes things easier to find. ;P --xixtas talk 21:05, 22 March 2007 (UTC)
Page deletion
I know that we can track down blanked pages, but it does help out if you can leave the {{delete}} tags on the pages if you think they need to be deleted. I tracked down some pages you blanked, but had earlier marked them for speedy delete.
Anyway, thank you for trying to help out Wikibooks in your own way, and this sort of page cleanup is certainly helpful and important. Keep up the effort here! --Rob Horning 10:22, 23 March 2007 (UTC)
Popups
OK - you would be best using Firefox as far as I know and some folk don't get on with it. Me - I really would not be without it. [3] gets you to the info. Not an admin tool but is great for many tasks - regards --Herby talk thyme 09:28, 30 March 2007 (UTC)
Usernames
No problem - well I say that - it is a 'crat thing - they can "change" usernames so I guess a posting to Whiteknight would be best - cheers --Herby talk thyme 19:25, 13 April 2007 (UTC)
- I can change your username if you want, just let me know what the new user name should be. In the database, all users are identified by a unique ID number, and all your contributions are attributed to that number, not to any particular user name. To that effect, when your name is changed, your entire contribution history will be updated to use your new name and not your old one. Besides the signatures you've posted in discussions, there will be almost no record at all that this username ever existed. --Whiteknight (talk) (projects) 19:33, 13 April 2007 (UTC)
Done. You are now User:Urbane. All the pages in your user and user talk namespace have been moved to your new username, and the old pages have been redirected to this new namespace. You can use your new username along with your old password to login to wikibooks from now on. On a side note, you may way to register a new account at User:Urbane User just to prevent other people from taking it and breaking those redirects. --Whiteknight (talk) (projects) 19:46, 13 April 2007 (UTC)
What is Wikijunior
What is Wikijunior" is proposed as a policy and has been included on the Wikibooks Policies and Guidelines Voting page. Please take time to comment and express your opinion in the referendum. --xixtas talk 02:25, 2 May 2007 (UTC)
Wikijunior Study (Moved from User_talk:Urbane_User
Wikijunior BOTQ
World War II is the winner. I am nervous about launching a book with so little support. (Particularly so because I myself have little interest in working on it.) But the system is what it is. This is one of the reasons I'm lobbying for changing this so that we just launch books whenever there are enough people who have expressed support, (and we don't launch books when there is not enough support.) We probably need to see if we can get some support from outside Wikipedia World War II talk might be a good place to ask for help. --xixtas talk 13:36, 1 July 2007 (UTC)
Wikijunior:World War II
You are listed as either an interested participant or at one time voted for "World War II" for Wikijunior New Book of the Quarter. This message is to let you know that we are starting work on World War II because it was selected for 3rd quarter 2007. --xixtas talk 23:42, 11 July 2007 (UTC)
Wiki research, 30 June 2007
Copyvio
For god's sake man, that is a copy-paste from a GCSE textbook widely used by British kids!! even if you didn't use it, that text is widely circulated!! --JohnBambenek 14:00, 14 July 2007 (UTC)
NP
There would have been a time when I would have not handled it well at all - we live and learn! Good to have you around --Herby talk thyme 15:28, 14 July 2007 (UTC)
Fhsst!
Hi - pretty much out of time for today. However I'd probably say the same thing - Staff lounge would be where I'd go with it - I'll take a look there when I'm on next - cheers --Herby talk thyme 18:16, 31 July 2007 (UTC)
- Maybe we could come up with some kind of "group project"? Every day, if each user fixed one image from the FHSST book (found or created a free alternative, or proved that the image was already under a free license), with enough people working on it the problem would be solved pretty quickly. All I've really got is inkscape and MSpaint, but anything that I can recreate I will try to do soon. --Whiteknight (Page) (Talk) 18:33, 31 July 2007 (UTC)
If you missed my reply on my talk page. Sure you can copy User:Darklama/Main Page. --darklama 20:55, 1 August 2007 (UTC)
Re:Page Listings
There is a list at Wikibooks:Alphabetical classification/All Books, which basically using DPL to transcludes the pages from the various alphabetical categories. This will only list books that have been properly tagged with {{Alphabetical}} (which isn't all of them). Unfortunately, I have yet to find a way to only list root-level pages automatically (such as by a special page). --Whiteknight (Page) (Talk) 21:38, 20 August 2007 (UTC)
Questions Regarding Pre-university courses
Hi Urbane, I've started doing a little writing for the Primary School Mathematics course, which seems much more like a book than a course.
I've noticed three courses in the Pre-university courses section of the Wikiversity:School of Mathematics page
Basic Mathematics (numbers, arithmetic, fractions, decimals)
Primary School Mathematics (numbers, arithmetic, pre-algebra)
High School Mathematics (algebra, trigonometry, pre-calculus, calculus)
The Primary School Mathematics course specifies on the first page that it is "not for students but for parents and teachers". I understood this to mean that the reader should have some competency in math and so I created and started writing what I thought the content of the Primary School Mathematics pages should be in an "Overview" chapter. However, I then noticed that the High School Mathematics course (in the same category of courses, is intended to actually teach the various subjects of mathematics.
I see a problem here. Actually, I'm not sure that the Primary School Mathematics "course" is really a course at all. I see it more as an aid to teachers and parents in understanding how the new math is taught.
It makes very little sense to me that the course should be to teach Primary school mathematics. Surely any adult (with very, very few exceptions) that is able to parse its pages has these basic math skills, so I am treating the book as more of a pedagogical treatise for the layman and/or the teacher whose specialty is not mathematics. Am I going about this wrong?
If you are unable to help me with these questions, can you please point me in the right direction? Thanks Leightwing 21:43, 26 August 2007 (UTC)
Using Wikibooks
I hope you dont mind, but i'm going to just go crazy and work on this book until my fingers bleed. It's the kind of help resource that we desperately need around here, and it's long over-due. I'm trying not to screw up any of your work too badly. Once I get all the groundwork laid, I want to post a message on the staff lounge and try to get everybody here at wikibooks to donate at least a little bit of information. --Whiteknight (Page) (Talk) 00:21, 30 August 2007 (UTC)
Why are you categorizing?
The Wikimanual of Gardening chapters are already categorized in subcategories... I've tried to keep the top category (Category:A Wikimanual of Gardening empty so that new pages could be put there for later sorting. --SB_Johnny | PA! 12:11, 30 August 2007 (UTC)
(BTW: there are many hunderds of pages in that book, hence the subcat structure) :). --SB_Johnny | PA! 12:15, 30 August 2007 (UTC)
- NP :). Thanks for the copyediting too though! I'll be back to work on that book come November or so... the v:Bloom Clock takes up most of my wikitime these days :). --SB_Johnny | PA! 13:07, 30 August 2007 (UTC)
Reverts
Thanks for the reverts, sorry I wasn't around a few minutes earlier to block that guy sooner. --Whiteknight (Page) (Talk) 16:33, 31 August 2007 (UTC)
- I've temporarily protected this page from anonymous editing because of the vandalism. It expires today at 17:46, or I can unprotect it for you earlier if you like. – Mike.lifeguard | talk 17:02, 31 August 2007 (UTC)
Please use subst:
Please remember to substitute for Template:no license. Otherwise it will always say "to be deleted on or after" and then some date a week from whenever you view it. It is only static when you subst: it. It's also supposed to give a big "You forgot to subst: me; please change it" message if it isn't substituted, but I think that's broken because it keeps showing up fine.
Also, if you use {{subst:nld}} it will automatically enter the date for you - it's a lot faster! Thanks for all the image tagging you do. – Mike.lifeguard | talk 02:47, 1 September 2007 (UTC)
- Both templates seem to subst: fine for me. Here's one instance: User:Mike.lifeguard/Sandbox and I tried it on Image:5percent-bar.png, which worked fine. Same goes to substituting {{no license}}. I'm not sure how your browser could be affecting it though. {{subst:nld}} is exactly what you should be using. If it's still not working, you might try asking someone who worked on the template. – Mike.lifeguard | talk 14:28, 1 September 2007 (UTC)
- Actually, it looks like you used only one curly brace on either end. To subst: a template you still need two. Hope that fixes it! – Mike.lifeguard | talk 14:37, 1 September 2007 (UTC)
broken redirects
I remember deleting some stuff for you a while back - archives of your talk page, I think. It may have broken some redirects. Or they may be totally unrelated. But it involves stuff in your User Talk: space, so do you mind sorting them out?
User talk:Urbane User/MessageArchiveDecember2006 (edit) (delete) → User talk:Urbane/MessageArchiveDecember2006 User talk:Urbane User/MessageArchiveFebruary2007 (edit) (delete) → User talk:Urbane/MessageArchiveFebruary2007 User talk:Urbane User/MessageArchiveJanuary2007 (edit) (delete) → User talk:Urbane/MessageArchiveJanuary2007 If anything needs to be deleted, you can tag them, or let me know and I'll do it. Thanks! – Mike.lifeguard | talk 13:18, 5 September 2007 (UTC)
- Figured it out. I have gone temporarily insane. The redirects have been deleted. – Mike.lifeguard | talk 13:20, 5 September 2007 (UTC)
- Ha, no worries. We all go a little bit insane sometimes. Urbane (Talk) (Contributions) 17:21, 5 September 2007 (UTC)
Your RFA
Will all positive support, you are now a sysop. Enjoy. -withinfocus 01:09, 8 September 2007 (UTC)
- Thanks, I will use my new tools cautiously. Thanks to everyone who voted in support. Urbane (Talk) (Contributions) 09:42, 8 September 2007 (UTC)
I saw your question on withinfocus's talk page. This might be worth discussing in the staff lounge. I don't think including the GFDL license tag on the page would be enough. While the text is required to be released under the GFDL, images are not. I believe while it would be a pain to do, you or someone else would need to verify that the PDF file is actually based on a book here, that the GFDL is properly included within the file, that all images used are released under a free image license and include the license information within the file, and all attribution/required information by each license is properly included. I think it would be necessary to delete it if all that is not meet. Perhaps a new template is needed for including on pages within the Image namespace for pdf files that have had all that verified. --darklama 11:15, 8 September 2007 (UTC)
- Yeah, I'm not the best person to ask about images and licensing issues. Best to go with darklama's comments. -withinfocus 23:27, 9 September 2007 (UTC)
Deletion Question
Its me again, seeing another question you've asked. When I'm deleting due to a vfd decision, I try to delete all modules, talk pages, specialized templates only used by it, redirects and empty categories specific to it. So if the templates are orphaned, and were specific to that Final Fantasy book go ahead and delete them. --darklama 12:55, 9 September 2007 (UTC)
Deletions
Yes - play some more with the delete button. Someone (me maybe) missed them when doing the others (be careful when deleting entire "books" that you get everything, I've slipped up before). And congrats on the admin bit - cheers --Herby talk thyme 13:17, 9 September 2007 (UTC)
Image:FLYbarmutant.png copyright
I noticed your msg on JWSchmid's talk page concerning this image. While it did include {{nld}} on the page, if you looked further down, at the description the person included information that it was released under the GFDL. I've replaced nld with {{GFDL-self}} and removed the copyright notice from JWSchmid's talk page.
If you find any other images which describe what terms they releasing under, such as GFDL, just go ahead and rm the nld template if there is one and add the appropriate license. --darklama 21:59, 11 September 2007 (UTC)
Left you a reply on my talk page. --darklama 19:13, 12 September 2007 (UTC)
Info
Hi - kept meaning to get back to you today but got delayed! Pages such as Talk:Java Programming/wiki/Talk:Java Programming/Threads and Runnables/w/index.php this which you (rightly) deleted are created by spambots (anything with an index.php or a trailing /). They probably come from open proxies, however a quick way to deal with them and leave a form of warning is to block them (I've never seen good edits on any ip like that). I'll always block for a week at least, if I can find more info (with this maybe) then longer. Some block for longer anyway but at least then if you look at the block log when you find one you can see if they are returning. Let me know if I can help - cheers --Herby talk thyme 17:49, 15 September 2007 (UTC)
- No problem - just operating on auto pilot when I do that sort of thing! Keep at it and thanks for the wishes, it will be good to get away - cheers --Herby talk thyme 16:02, 16 September 2007 (UTC)
Untagged / Unlicensed Images
I don't know if you've noticed yet, since it depends some on your browser's cache, your cache preferences on Wikibooks, and what skin your using, but there may be two links called "untagged" and "unlicensed" that show up now on user pages, that you may find useful in your quest to deal with images and other files missing copyright information and license tags. The untagged link shows any files uploaded by the user that haven't been tagged, not even with {{nld}}. The second shows uploads by that user which have been tagged with {{nld}}. --darklama 15:58, 16 September 2007 (UTC)
Re:Merging Edit Histories
Yes, i have heard of merging edit histories, although I cant remember whether I've ever had the pleasure of doing it. I have, however, written up some basic instructions for doing it at Using Wikibooks/The Wikibooks Administrator/Advanced Administration. I hope this should answer your questions, but if you need more information let me know and I will try to fill you in. --Whiteknight (Page) (Talk) 21:34, 1 October 2007 (UTC)
ETD Guide Violation
Hello. I am new to Wikibooks so I am unsure what I need to do to make everything alright with my page. I am a student at Virginia Tech who is working on a project for class in accordance with one of the authors, Dr. Fox. Please let me know what I need to do so my page is not deleted.
- I've now acted on the information provided and tagged it as a copyvio with a termination date of about a week. You or User:Urbane or anyone that has time and disposition could attempt to get the permission from UNESCO (I don't perceive that as improbable, since it falls on the dictates of the organization to enable such uses, I have provided information on were to look for it on the VfD page). Good luck it would be great if we can save the work. --Panic 23:30, 5 October 2007 (UTC)
ETD Guide Cvio Question
I have a couple questions about the how to correctly get copyright permissions. Myself and Dr. Edward Fox are getting in contact with UNESCO and am positive that it won't be a problem. However, being as there are multiple authors in this piece of work, is permission from UNESCO enough or do we need permission from all authors as well?
Also, being as this is something that we would like to have continually updated over the years but also keep the integrity of the work in the ETD guide, is there any way to lock editing of this ETD guide wiki to only certain users? Please let me know as soon as you get the chance. Thank you. --User:Kmill56
- Only release from the copyright holder is required. If UNESCO holds the copyright, then they can license it however they want.
- There is no way to restrict who can edit the book - that would be antithetical to the concept of wikis. If you want to control editing, this is not the place for it. "If you don't want your writing to be edited mercilessly and redistributed at will, then don't submit it here." – Mike.lifeguard | talk 22:26, 8 October 2007 (UTC) (Sorry to barge in like that - I though I was editing a more public page. *is embarassed*) – Mike.lifeguard | talk 22:27, 8 October 2007 (UTC)
UNESCO grants copyright permission for ETD Guide on 23 Oct 2007
I just received the following email, which grants use copyright permission from UNESCO. I can forward it to some email address if you prefer. Many thanks, Ed Fox
Original Message-----
From: Plathe, Axel [4] Sent: Tuesday, October 23, 2007 2:35 AM To: Fox, Edward Cc: Moxley, Joseph; Nisbet, Miriam; Shawki, Tarek; Denissova, Natalia Subject: RE: Resending: Written permission please - RSVP ASAP
Dear Ed,
Nice to hearing from you.
It is with pleasure that UNESCO grants permission to publish the Guide on Wikibooks.
Best wishes
Axel _________________ Axel Plathe Chief, Executive Office Communication and Information Sector UNESCO 1, rue Miollis 75732 Paris cedex 15 Tel.: + 33 (0)1 45 68 44 67 Cell: + 33 (0)6 21 80 64 59 Fax: + 33 (0)1 45 68 55 81
Original Message-----
From: Fox, Edward [5] Sent: Tuesday, October 23, 2007 1:58 AM To: Plathe, Axel Cc: Moxley, Joseph; Fox, Edward Subject: Resending: Written permission please - RSVP ASAP Importance: High
Hi!
I'm resending the msg below to be sure it reaches you.
Please see It will look bad for UNESCO and NDLTD etc. if we cannot get the label removed:
POSSIBLE COPYRIGHT VIOLATION
Can you please send me a written permission statement allowing NDLTD to make available the Guide through wikibooks (and possibly have parts in wikiversity and wikipedia, to promote interest and visibility)?
It is important that:
- students working on this now be able to proceed, since there is now
only about a month left in our semester,
- we have this as a way to update and keep the Guide current, on into
the future, to support ongoing growth of ETD activities.
We thank you for your time and assistance!
Sincerely, Ed Fox (Executive Director, NDLTD)
PS We hope you are well!
Original Message-----
From: Fox, Edward Sent: Monday, October 08, 2007 12:34 PM To: 'a.plathe@UNESCO.ORG' Cc: Fox, Edward; 'Moxley, Joseph' Subject: Written permission please
Hi!
How are you?
It has been a while since we have been in touch.
First, I should thank you again for all your support with regard to ETD activities worldwide!
Second, I should ask if there are other things going on at UNESCO that might relate that we should discuss?
Third, I wonder if you can provide a written permission statement allowing the making of an updated version of the ETD Guide accessible through wikibooks (see). I have some people ready to work on this now, but the wikibooks people won't allow us to proceed without written permission.
We look forward to your comments. Many thanks, Ed
Professor Edward A. Fox, Ph.D. Department of Computer Science, 2050 Torgersen Hall, M/C 0106 Virginia Tech, Blacksburg, VA 24061 USA Ph: +1-540-231-5113, Cell +1-540-230-6266 Email: fox@vt.edu, FAX: +1-540-231-6075 WWW: Chair, IEEE TCDL, Executive Director, NDLTD, Director, Digital Library Research
Laboratory,
The Guide for Electronic Theses and Dissertations
Are the licensing issues for this taken care of? I thought you had received an email, but it required verification? – Mike.lifeguard | talk 21:11, 8 November 2007 (UTC)
from m:User talk:Sj
Hi S.
- I would recommend that they clearly state the new license on their website, preferably with a scan of a signed document that can be uploaded there and on wikibooks itself. They could also email the same image to the permissions mailing list. 15:53, 28 October 2007 (UTC)
- No problem. It's great to find authors willing to freely license their works, and we should facilitate it as smoothly as we can. +sj | help with translation |+ 01:50, 13 November 2007 (UTC)
Message received 20-Nov from User:Nputhika
Hi Urbane,
I removed so much information on the learning theories webpage because I posted on a wrong page. I was supposed to post information on the WebQuests in ESL/EFL contexts page, not on the main page.)
US History
As I told Neoptolemus here, a simple merge would suffice quite well. Thoughts? Laleena (talk) 15:05, 23 February 2008 (UTC)
Wikijunior Europe
I have seen that you have worked on a lot of wikijunior books so i was hoping you could help me with a problem. Croatia is marked as 75 percent done but i have seen more works that have more information then it does. Do you think i should move them up to 75 percent done or move it down to 50 percent done. I would appreciate it if you could check it out and see what i should do. Thanks.--Xxagile (talk) 23:44, 27 July 2008 (UTC)
Recent vandalism
Thanks for catching that... that was annoying. Chazz (talk) 22:41, 17 November 2008 (UTC)
Yes, thanks a lot for sorting the recent vandalism out for me too. HYanWong (talk) 13:17, 18 November 2008 (UTC)
TwinkleSpeedy
If you recall, you had encountered an obscure bug with this script that I couldn't figure out. I just got the same error & managed to fix it. Do you mind trying it out again & let me know whether it's working for you? — Mike.lifeguard | talk 14:54, 18 January 2009 (UTC)
Archiving
Just wondering if there's a reason for this revert you made? — Mike.lifeguard | talk 22:59, 3 February 2009 (UTC) | http://en.wikibooks.org/wiki/User:Reece/User/MessageArchive | CC-MAIN-2014-52 | refinedweb | 6,057 | 68.3 |
.
Before exploring the .NET Framework, we first have to understand the issues/pain areas which developers have faced in other technologies -
Under .NET Framework, many of these problems have been addressed and resolved.
Microsoft .NET Framework provides a huge no. of benefits compared with the legacy languages -.
There are a number of namespaces and types available under various class libraries in .NET framework which can be found here. The Common Type System for language integration works as follows -..).
Compare C# and VB.NET
A detailed comparison can be found over here..
What is an extender class?
An extender class allows you to extend the functionality of an existing control. It is used in Windows forms applications to add properties to controls.
A demonstration of extender classes can be found over here.
What is inheritance?
Inheritance represents the relationship between two classes where one type derives functionality from a second type and then extends it by adding new methods, properties, events, fields and constants.
C# support two types of inheritance:
· Implementation inheritance
· Interface inheritance.
Source: Exfors.
How do you prevent a class from being inherited?
In VB.NET you use the NotInheritable modifier to prevent programmers from using the class as a base class. In C#, use the sealed keyword.
When should you use inheritance?
Read this.
Explain Different Types of Constructors in C#?
There are four different types of constructors you can write in a class -
1. Default Constructor
2. Parameterized Constructor
3. Copy Constructor
4. Static Constructor
Read more about it at.
Can you use multiple inheritance in .NET?
.NET supports only single inheritance. However the purpose is accomplished using multiple interfaces..
What is an Interface?
An interface is a standard or contract that contains only the signatures of methods or events. The implementation is done in the class that inherits from this interface. Interfaces are primarily used to set a common standard or contract.
When should you use abstract class vs interface or What is the difference between an abstract class and interface?
I would suggest you to read this. There is a good comparison given over here..
What is business logic?
It is the functionality which handles the exchange of information between database and a user interface.
What is a component?
Component is a group of logically related classes and methods. A component is a class that implements the IComponent interface or uses a class that implements IComponent interface.
What is a control?
A control is a component that provides user-interface (UI) capabilities.
What are the differences between a control and a component?
The differences can be studied overhere.
What are design patterns?
Design patterns are common solutions to common design problems.
What is a connection pool?
A connection pool is a ‘collection of connections’ which are shared between the clients requesting one. Once the connection is closed, it returns back to the pool. This allows the connections to be reused.
What is a flat file?
A flat file is the name given to text, which can be read or written only sequentially..
What is an Assembly? Explain different types of Assemblies?.
What is the global assembly cache (GAC)?
GAC is a machine-wide cache of assemblies that allows .NET applications to share libraries. GAC solves some of the problems associated with dll’s (DLL Hell).
What is a stack? What is a heap? Give the differences between the two?
Stack is a place in the memory where value types are stored. Heap is a place in the memory where the reference types are stored.
Check this link for the differences.
What is instrumentation?
It is the ability to monitor an application so that information about the application’s progress, performance and status can be captured and reported.
What is code review?
The process of examining the source code generally through a peer, to verify it against best practices.
What is logging?
Logging is the process of persisting information about the status of an application.
What are mock-ups?
Mock-ups are a set of designs in the form of screens, diagrams, snapshots etc., that helps verify the design and acquire feedback about the application’s requirements and use cases, at an early stage of the design process.
What is a Form?
A form is a representation of any window displayed in your application. Form can be used to create standard, borderless, floating, modal windows.
What is a multiple-document interface(MDI)?
A user interface container that enables a user to work with more than one document at a time. E.g. Microsoft Excel.
What is a single-document interface (SDI) ?
A user interface that is created to manage graphical user interfaces and controls into single windows. E.g. Microsoft Word
What is BLOB ?
A BLOB (binary large object) is a large item such as an image or an exe represented in binary form.
What is ClickOnce?
ClickOnce is a new deployment technology that allows you to create and publish self-updating applications that can be installed and run with minimal user interaction.
What is object role modeling (ORM) ?
It is a logical model for designing and querying database models. There are various ORM tools in the market like CaseTalk, Microsoft Visio for Enterprise Architects, Infagon etc.
What is a private assembly?
A private assembly is local to the installation directory of an application and is used only by that application.
What is a shared assembly?
A shared assembly is kept in the global assembly cache (GAC) and can be used by one or more applications on a machine..
Where do custom controls reside?
In the global assembly cache (GAC).
What is a third-party control ?
A third-party control is one that is not created by the owners of a project. They are usually used to save time and resources and reuse the functionality developed by others (third-party).
What is a binary formatter?
Binary formatter is used to serialize and deserialize an object in binary format.
What is Boxing/Unboxing?
Boxing is used to convert value types to object.
E.g. int x = 1;
object obj = x ;
Unboxing is used to convert the object back to the value type.
E.g. int y = (int)obj;
Boxing/unboxing is quiet an expensive operation..
What is a digital signature?
A digital signature is an electronic signature used to verify/gurantee the identity of the individual who is sending the message...
What is globalization?
Globalization is the process of customizing applications that support multiple cultures and regions.
What is localization?
Localization is the process of customizing applications that support a given culture and regions.”. Quoted from here.
I hope you liked these questions and I thank you for viewing them. I thank Pravin Dabade for contributing some of the .NET Interview question and answers. | https://www.dotnetcurry.com/dotnetinterview/70/dotnet-interview-questions-answers-beginners | CC-MAIN-2022-27 | refinedweb | 1,127 | 61.63 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
autopano-sift is a graphics module that is used for the stitching of digital
photos. It is mono based.
Reproducible: Always
Steps to Reproduce:
1.
2.
3.
Created an attachment (id=46547) [edit]
ebuild for autopano-sift
Mostly "hotheads" ebuild. See
in forums. Version 2.1 of autopano-sift. Should be under media-gfx.
Ick, fix that global scope export please. Should be in pkg_setup...
1) Use the mono eclass to avoid this MONO_SHARED_DIR problem.
2) As much as possible, *.exe should avoid going into /usr/bin. *.dll *definitely* shouldn't be there. Please see how muine or blam handle the placement of their files.
Created an attachment (id=46558) [edit]
new release of 2.1 ebuild
New ebuild of autopano-sift:
21 Dec 2004; seddes <destraub2002@yahoo.com>
autopano-sift-2.1-r1.ebuild, autopano-sift : Changed
MONO_SHARED_DIR to importing the mono eclass, changed install
directory to /usr/lib/autopano-sift, added shell script
"autopano-sift" that is installed in bin (to be placed in "files"
in the ebuild directory)
Created an attachment (id=46560) [edit]
shell script for /usr/bin to execute mono & autopanog.exe
shell script to be placed in bin. For ebuild, should be placed in files
directory.
Created an attachment (id=46561) [edit]
ChangeLog
ChangeLog
Created an attachment (id=46680) [edit]
r2 of autopano-sift-2.1
r2 of autopano-sift-2.1 ebuild. Updates LICENSE, installs docs.
Just a small note for some people to save you some time. If you have the nptl
USE flag enabled, just disable it for autopano-sift. If you use nptl, it will
require gcc-3.4 for mono, which is required by autopano-sift.
Created an attachment (id=47597) [edit]
autopano-sift-2.1-r3.ebuild
Got rid of some madness in the SRC_URI
Download the script manually and put in the ./files dir relative to the ebuild.
This is an amazing program. You have to see it to believe it.
Tested fine for me, works great.
Created an attachment (id=48723) [edit]
autopano-sift-2.2.ebuild
Updated autopano-sift ebuild.
-version bump to 2.2
-changed COPYING file to LICENSE
-tested
Hello panorama enthusiasts,
I am proud to announce a new version of autopano-sift, version 2.2. It adds a
refining step for the keypoints found in which a small part of the original
sized image is extracted for high precision matching (think "auto-refine" of
hugin on steroids ;-) . For the best matching results possible.
As always its available from
However, I have not gotten around to include the feature into the Windows GUI,
sorry (if somebody wants to help with that, patches are welcome). The
refinement is only available on the command line and from the Gtk# GUI.
Here is the CHANGES.txt:
autopano-sift 2.2, libsift 1.7
2005/01/15
+ Add best quality refinement of control point matches. To enable this, run
autopano.exe with "--refine". Note that the original image files need to
be available. Further information available in the manpage. (So far its
not available in the native Windows GUI).
+ Add options to autopanog, the Gtk# GUI.
+ Update documentation.
Enjoy,
Sebastian Nowozin <nowozin@cs.tu-berlin.de>
Created an attachment (id=48727) [edit]
autopano-sift-2.2.ebuild
Added new dependancy: libgdiplus
Should this be assigned to graphics?
CCing them, in case they have more time than I for things like this.
David: until mono is more pervasive, most mono consuming apps get assigned to dotnet herd, as we're used to the problems involved with these apps.
If i find time this week, i'll get this guy into portage.
Just to confirm, 2.2 worked fine for me. Stitched 5 photos with autopano-sift,
hugin, and enblend.
Great package.
All worked fine except the new refinement step which sometimes failed with:
Refining keypoints
...
Unhandled Exception: System.Exception: BUG: less than three neighbours!
in <0x0036f> MultiMatch:BuildGlobalMatchList ()
in <0x000f2> MultiMatch:TwoPatchMatch (System.Collections.ArrayList,int,int,System.Collections.ArrayList,int,int,bool)
in <0x00836> Autopano:RefineKeypoints (System.Collections.ArrayList,bool,bool)
in <0x00a92> Autopano:Main (string[])
This could be worked around by either increasing the --maxmatches number or using the --refine-by-mean option.
Created an attachment (id=48994) [edit]
generic script to run autopano-sift utilities
Runs mono on autopano-sift binaries with arguments quoted.
For ebuild to follow, should be placed in files/autopano.
Created an attachment (id=48996) [edit]
autopano-sift-2.2-r1.ebuild
Modified to
Install autopano-complete.sh.
Make utilities easier to run with symlinks in /usr/bin to generic script.
Not install ICSharpCode.SharpZipLib.dll as this is provided by mono.
Not install autopano-win32.exe.
Install the binaries compiled from source instead of the prebuilt binaries.
I changed the gui to be run with autopanog instead of autopano-sift because I
thought it was more consistent with the man page.
The 2.2-r1 ebuild creates /usr/bin/autopanog,
but this is a symlink pointing to /usr/bin/autopano
that is not installed by the ebuild!
$ etcat -f =autopano-sift-2.2-r1|grep -i bin
/usr/bin
/usr/bin/generatekeys
/usr/bin/autopano-complete.sh
/usr/bin/autopanog
$ ls -al /usr/bin/autopanog
lrwxrwxrwx 1 root root 8 Feb 25 09:29 /usr/bin/autopanog -> autopano
the solution is to create /usr/bin/autopano with the "generic script to run autopano-sift utilities" attachment,
but i think the ebuild should do that.
bye
Comment #21, there is another file called "generic script to run autopano-sift
utilities" in the list of attachments that you need to download and put in
/usr/local/portage/.../autopano-sift/files. It is the script you describe. I
also missed this the first time and the ebuild doesn't die if it is missing
(which is a bug I think). Since its only a 1 liner, I think the ebuild should
just create the appropriate script rather then putting another file in the
portage tree.
Created an attachment (id=52341) [edit]
autopano-sift-2.2-r1.ebuild
I agree with Comment #22 that the ebuild should create the script to run mono
on autopano-sift utilities (if we need one) rather than have another file in
portage for everyone to sync. Modified the ebuild to do this.
mono dep should change to dev-lang instead of dev-dotnet.
Created an attachment (id=54724) [edit]
autopano-sift-2.2-r1.ebuild
changed dev-dotnet/mono to dev-lang/mono
I am trying to use this ebuild (2.2-r1) on an amd64 system. and I am getting an
error during compile. I have mono 1.1.5, gtk-sharp 1.0.8, glade-sharp 1.0.8,
libgdiplus 1.1.5 installed. Any ideas? There is my emerge output:
snip
>>> Source unpacked.
mcs /debug /unsafe /target:library /out:libsift.dll \
ImageMap.cs KDTree.cs ScaleSpace.cs SimpleMatrix.cs ImageMatchModel.cs
RANSAC.cs Transform.cs LoweDetector.cs GaussianConvolution.cs KeypointXML.cs
MatchKeys.cs BondBall.cs AreaFilter.cs /pkg:gtk-sharp /r:System.Drawing
/r:ICSharpCode.SharpZipLib
KDTree.cs(448) warning CS0219: The variable 'nearest' is assigned but its
valueis never used
KDTree.cs(573) warning CS0219: The variable 'nearest' is assigned but its
valueis never used
BondBall.cs(241) warning CS0219: The variable 'unknownFile' is assigned but
itsvalue is never used
Compilation succeeded - 3 warning(s)
make -C util all
make[1]: Entering directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util'
mcs /debug /unsafe /out:autopano.exe Autopano.cs \
DrawingPrimitives.cs BasicImagingInterface.cs GUIImage-Drawingone.exe ShowOnetwo.exe ShowTwo:generatekeys.exe GenerateKeysv generatekeys.exe generatekeys-gtk.exe
make -C . systemdrawing=yes generatekeys.exe
make[2]: Entering directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util'
mcs /debug /unsafe /out:generatekeys.exe GenerateKeys.cs \
DrawingPrimitives.cs BasicImagingInterface.cs GUIImage-Drawing.cs
/r:../libsift.dll /r:System.Drawing
DrawingPrimitives.cs(66) warning CS0219: The variable 'sign' is assigned but
its value is never used
Compilation succeeded - 1 warning(s)
make[2]: Leaving directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util'
mv generatekeys.exe generatekeys-sd.exe
mv generatekeys-gtk.exe generatekeys.exe
make -C autopanog all
make[2]: Entering directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util/autopanog'
mcs /debug /unsafe -out:autopanog.exe -main:Autopanog autopanog.cs \
-resource:autopanog.glade \
../BasicImagingInterface.cs ../GUIImage-Drawing.cs
../DrawingPrimitives.cs ../Autopano.cs /pkg:gtk-sharp /pkg:glade-sharp
/resource:image-bottom-left.png /resource:image-bottom-right.png
/resource:image-vanilla.png /r:System.Drawing.dll /r:../../libsift.dll
autopanog.cs(89) warning CS0219: The variable 'test' is assigned but its value
is never used
autopanog.cs(130) warning CS0219: The variable 'test' is assigned but its
valueis never used
autopanog.cs(167) warning CS0219: The variable 'test' is assigned but its
valueis never used
../DrawingPrimitives.cs(66) warning CS0219: The variable 'sign' is assigned
butits value is never used
Compilation succeeded - 4 warning(s)
make[2]: Leaving directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util/autopanog'
make -C autopano-win32 all
make[2]: Entering directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util/autopano-win32'
mcs -main:autopano_sift_winGUI.Form1 \
-r:System.dll -r:System.Data.dll -r:System.Drawing.dll \
-r:System.Windows.Forms.dll -r:System.Xml.dll \
-r:ICSharpCode.SharpZipLib.dll \
-r:../../libsift.dll \
-target:winexe -out:autopano-win32.exe \
-resource:Form1.resx,autopano_sift_winGUI.Form1.resx
-resource:FormProcess.resx,autopano_sift_winGUI.FormProcess.resx \
AssemblyInfo.cs ../Autopano.cs ../BasicImagingInterface.cs Form1.cs
FormProcess.cs ../GUIImage-Drawing.cs
Form1.cs(43) error CS0234: The type or namespace name `ErrorProvider' could
notbe found in namespace `System.Windows.Forms'
Compilation failed: 1 error(s), 0 warnings
make[2]: *** [autopano-win32.exe] Error 1
make[2]: Leaving directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util/autopano-win32'
make[1]: *** [all] Error 2
make[1]: Leaving directory
`/var/tmp/portage/autopano-sift-2.2-r1/work/autopano-sift-2.2/src/util'
make: *** [utils] Error 2
!!! ERROR: media-gfx/autopano-sift-2.2-r1 failed.
!!! Function src_compile, Line 25, Exitcode 2
!!! compiling failed
!!! If you need support, post the topmost build error, NOT this status message.
Same thing here (and amd64 too). But the error is related to a "autopano-win32"
thing the makefile wants to build... Since gentoo is a linux system, I removed
the offending lines from the makefile and voila, autopanog is available on
amd64 :)
I added to src_compile():
sed -i "s/^.*win32.*$//" util/Makefile
(after the cd ${S}/src line).
Created an attachment (id=55779) [edit]
autopano-sift-2.3.ebuild
Please test and then add to portage.
I've tested this version and it works for me.
At last, this new version is usable with a screen resolution smaller than
1280x1024!
As for the "autopano-win32" compile problem on amd64, i think it doesn't appear
on x86 with the help of dev-dotnet/winelib (since the error is related to
Windows.Forms). But with the makefile tweak it runs quite well.
(From update of attachment 52341 [edit])
autopano-sift-2.3 resolves the "less than three neighbours" issue mentioned in
Comment #18 (for at least one case that failed with 2.2).
The amd64 compile error happens when trying to build the byte code. The
simplest way around this is to simply copy the byte code from the bin directory
in the autopano-sift tarball to someplace in the path. After all this byte
code is the same no matter where it is being run. Both 2.2 and 2.3 have worked
fine after doing this. I have not tried installing dev-dotnet/winelib so I do
not know if this works on amd64. Perahps some other amd64 person could try the
ebuild after installing dev-dotnet/winelib.
There are a number of amd64 users on the hugin list that have the whole suite
of tools running in 64bit mode including at least 2 Gentoo users. This
includes autopano-sift, hugin, nona, PTOptimizer and enblend. The only thing
that does not work in 64 bit mode is PTStitcher which is only available as a 32
bit binary. I have it running in 32 bit mode. The biggest hang up getting
autopano-sift running is getting the correct versions of mono and related
installed. I have mono-1.1.5, libgdiplus-1.1.5, gtk-sharp-1.0.8 and
glade-sharp-1.0.8. All of these are masked for amd64 or hard masked for all
archs but appear to work fine. At the time I installed these they were the
latest versions. Versions of mono <1.1 do not correctly support the amd64.
When emerging autopano-sift-2.3, I got the same error as in Comment 26.
Using:
mono 1.1.6-r1
libgdiplus-1.1.5
gtk-sharp-1.0.8
glade-sharp-1.0.8
(I'm on x86)
Then I followed the directions in Comment 27, and it worked! Thanks a lot!
Added to the tree. Thanks everyone.
There is one glitch in the current ebuild:
autopano-sift requires an unstable version of libgdiplus to downscale images
for processing.
libgdiplus 1.1.11 worked fine for me. However the currently stable version
1.1.8 made generatekeys crash here.
Hope you don't mind Mark (please revbump if you think it's necessary):
13 Feb 2006; Marcelo Goes <>
autopano-sift-2.4.ebuild:
Depend on >=libgdiplus-1.1.11. Thanks to Hans Oischinger <hans dot
oischinger at arcor dot de> in bug 75192. | http://bugs.gentoo.org/75192 | crawl-002 | refinedweb | 2,316 | 61.02 |
Let
- The following figure shows you the process to create a DataSet file.
- To add a DataSet file, click on Solution Explorer -> Right Click on Project -> click on Add new Item and then it will show you the following screen:
- Enter the Datset file name. Click on the ok button.
- It will ask for confirmation to put that file in the App_Code folder. Just click yes and that file will opened in the screen as a blank screen.
- Now we will add one blank datatable to that mydataset.xsd.
- Right-click in the area of the file and select Add -> Datatable.
- It will add one DataTable1 to the screen.
- The following Figure 5 shows how to add a datatable to the mydataset.XSD file.
- Now datatable1 is added to XSD file.
- Now we will add a data column to the datatable1 as per figure 6.
- Remember, whatever columns we add here will be shown on the report.
- So add the columns you want to display in your reports one by one here.
- Always remember to give the same name for the column and data type of column which is the same as the database, otherwise you will get an error for field and data type mismatch.
- To set property for the columns the same as the database.
- The following figure will show you how to set the property for the data columns.
- The default data type for all the columns is string.
- To change the data type manually right-click on the datacolumn in the datatable and select property.
- From the property window, select the appropriate datatype from the DataType Dropdown for the selected datacolumn.
- XSD file creation has been done.
- Now we will move on to create the Crystal Reports design.
Creation of Crystal report design
- Click on the Solution Explorer -> Right click on the project name and select Crystal Reports.
- Name it as you choose and click the add button.
- After clicking on the add button a .rpt file will be added to the solution.
- It will ask for the report creation type of how you want to create the report.
- Click the ok button to proceed.
- Under Data Sources, expand ADO.NET Datasets and select Table and add to the selected table portion located at the right side of the window using the > button. Click on Next.
- Select the columns that you want to show in the report.
- Now click on the Finish button and it will show the next screen.
- Once the report file is added, you can see the Field Explorer on the left side of the screen.
- Expand Database Fields, under that you will be able to find the Datatable that we have created earlier.
- Just expand it and drag one by one each field from the Field Explorer to the rpt file the under detail section.
- Now the report design part is over.
- Now we have to fetch the data from the database and bind it to the dataset and then Show that dataset to the report viewer.
Crystal report Viewer
- First Drag a CrystalReportViewer control on the aspx page from the Reporting Section of the tool box.
- Add a command Button.
- Configure the CrystalReportViewer and create a link with Crystal Reports.
- Select the Crystal Reports source from the right side of the control.
The following is the final code for reports (Default.aspx).
Code
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
public partial class _Default : System.Web.UI.Page
{. | http://asp-net-controls.blogspot.com/p/crystal-reports.html | CC-MAIN-2019-39 | refinedweb | 608 | 77.23 |
Robotic hotel staff: A byte to eat
Gulliver...
As the architect of the I-Corps programme,..:
Silicon Valley was born in an era of applied experimentation driven by scientists and engineers. Its methods have been based on a culture of taking risks – and not strictly speaking developed on pure research – in getting products to market through discovery, iteration and execution. Silicon Valley’s entrepreneurial ethos for start-ups was in treating failures as an experience.
The blending of Venture Capital and technological entrepreneurship has been one of the great business innovations of the past half-century. It provided private funds for untested and unproven technology: but, while most entrepreneurial investments failed, the returns for those that did succeed was so great and lucrative that they easily made up for those projects that failed. The cultural tolerance for failure and experimentation, coupled with a financial structure which balanced risk with return (sometimes obscenely), allowed this system to flourish in technological clusters.
This system has been far from perfect. The emergence of I-Corps presents an approach that will appear alien to the scientist. Entrepreneurship concerns business driven models through a business and accounting perspective – income and expenditure statements, balance sheets, revenue and costing models and 5-year forecasts. But removing the barrier to commercialising the best of scientific research requires linking far more closely the scientific method of experimentation with the starting of a company.
In commercialising university innovations NSF aims to teach scientists and engineers that starting a company should be seen as another research project through an iterative process of hypotheses testing and experimentation.
This programme is a paradigm shift not just for scientists and engineers in every science university in America, but is radical in the way we might bring to fruition discoveries out of the university lab. If this program works it will change how we connect basic research to the business world. And that could lead to a better chance of start-ups surviving and becoming profitable entities with all the economic benefits of job creation.
"From day one, the mantra is “get out of the lab”. Participating academics have to make countless cold calls to potential customers—something few research scientists and engineers have ever done in their professional lives and most initially find awkward."
While I don't doubt that many startups fail because of a lack of experience and focus in marketing and business development, is it really efficient to get the technical brains out of the lab or away from their keyboards to make uncomfortable calls? Surely investment money would be better spent hiring someone skilled to do that for them. I would imagine many of the startups that make it are a successful partnership between the academic person or team and a business counterpart.
The point of "getting out of the lab" is not for marketing or business development. It's an exercise in product development. In order to build something that people will buy you have to have a direct understanding of what they need. This is particularly important when you are developing innovative products that you have no proof will sell, as opposed to selling into an established product category with something simpler, or more luxurious, or with a few more features than they other guys.
Blank makes great sense, so of course it's taken years for him to be heard.
No customer, no business.
Interesting program. It would be nice to see a similar initiative for biotechnology - it seems most of the I-Corps projects are centered around digital technology (with a few medical devices thrown in).
Start-ups that aim to develop novel therapeutic agents face some unique challenges. Perhaps foremost is the time and capital required to develop a product that will generate sales revenue. This is typically on the order of 10-15 years and $1bn per product. For this reason biotech start-ups seldom end up marketing their own products; there is usually a hand-off to pharma at some point, either through acquisition or license, after a lengthy period of voracious capital consumption and zero revenue.
In addition, the risk profile of most early-stage biotech programs is not favourable. At the point where new technologies emerge from university labs, there is generally a 95-98% chance of failure. As a result, the main value-generating activities are those that de-risk the nascent technology and make it more palatable to prospective partners. Carrying out these studies as quickly and cheaply as possible is usually the prime directive of the management team. Marketing and BD are less central in this kind of start-up, as the uncertainty is not about the market but whether the technology will work or be differentiated from the standard of care.
There are serious problems with the translation of biotechnology from universities to the commercial sphere - the current system is not working well, and the result is an absence of new medicines for serious diseases. New approaches like the I-Corps program to assist biotech start-ups are critical to address these problems. Perhaps I'm old-fashioned, but I still think developing an effective treatment for cancer or Alzheimer's disease is more important than the next gizmo or killer app.
Nice endeavour, but beware of hasty generalizations. Had Facebook focused early on monetarizing its content, they may not have been so successful. More generally, the new mobile paradigm (Instagram...) seems to imply looking for scale before looking for revenue.
In other words, invent something useful, and money will somehow come, which I find a healthy way of considering business.
One key aspect that could be helpful is to abolish the private tax on innovation levied by large businesses, also known as "Intellectual Property Rights".
These mercantilist tollbooths make innovation much more costly than it should be, and levy a private tax on innovation.
Best to abolish them as quickly as possible.
This sounds promising but I have one or two reservations:
1. It sounds rather cumbersome and formal. Natural entrepreneurs tend to want to just get on with it and the process may be off-putting.
2. I think any involvement by venture capitalists should be minimized. All too often, the founder of an enterprise tends to be squeezed out of his own business or has to waste time fighting an often losing battle with appointed managers who arrogantly assume more market or executive knowledge than they actually have.
A start-up is, by its very nature, vulnerable to unhedged risks - law suits, patent issues and so on. These can destroy a potentially great company - even mighty Apple has to spend vast sums of money on these issues. I would propose a cooperative of start-up companies, with a shared resource for compliance, quality assurance and other specializations, plus a core of internal consultants, would make a lot of sense and be more attractive to the ambitious young entrepreneur. Cross-holding of ownership would reduce the impact of failure, competition to join the cooperative would increase the probability of success, and there would be a vested interest of all members to pool their experience to aid each business in its development.
A fond user of the "Business Model Generator" tool, that happily couple with "Scenario Planning" and System Dynamics mapping, I would stress the mportance of getting Univeristy people out in your company, in the field, and for months on end.
Actually I would give up on any academic who is older than forty and to that age only done research in a lab. I have dealt with many Uni, and with the sole exception of Strachlyde in Glasgow where it seems most Profs actually come froma practitioner background, on average academics struggle with the idea of industrializing a product. Ease of use ? Reliability ? Ergonomics ? things that are second nature to most people in industry leave a merk in these Profs.
Fair cop on what went on in the past, but if anything, Business Schools are now overcompensating for the misdiagnoses of the past.
We Profs now get it: Start-ups are NOT just smaller versions of established businesses. Guys like Alex, Steve, Eric Ries, Sean Ellis , Vivek Wadhwa, are now the hottest tickets in Business Schoolandia.
What I do is send people to YouTube videos of these guys speaking and then the class becomes an exercise is applying what they recommend to the actual real life start-up the MBA students plan on launching after graduation.
In addition, the vogue is no longer to do Spin offs but actually start companies.
Well, I suppose that depends where you went to business school. That title is a bit out there and fundamentally wrong. As an entrepreneur with an MBA, I have to say that my studies at Kellogg were quite useful in my development of a start-up business. Understanding the basic parameters of running a successful business, big or small, and trying to replicate them in your business (things like size of market, pricing, margins, competitive advantage and, wait for it, inimitability, my favorite strategy course word) is the fundamental task that any business person should undertake; start-up or established. I could have not gotten that experience on my own, and my time in B school really elevated my understanding of those principles.
The title of this article should be Stanford Creates a Business Incubator To Maximize Chance of Success.
If a venture is 'little more than untested hypotheses' perhaps it should stay in the lab until there is a product.
We have to struggle to comprehend how Internet based business projects involve products, though of course they do. Many seem to be tricks to turn the supposed customers into products. Then we begin to realize that the real customers are the advertisers. Other startups involve a product that is only a service on the Internet that mimmicks a service that utilized more traditional merchandising methods.
Thus, it seems the backdrop for the discussion in business schools that we are hearing about in the present article is the world of merchandising, and the so-called 'product' of the referenced start-up is somewhat vapid. All such vapid start-ups depend on an underlying existence of real products, most of which are not at all innovative. And the innovative aspects of the 'I-Corps' curriculum seem to be rather hard to find.
We might anticipate though that the whole I-Corps thing will turn out to be just a business school attempt to make marketing the dominate function in the start-up process, even more than it is now. And we can expect less and less innovation to emerge from the lab.
Marketing is an important function when it focuses on a real product, not when it pretends to be the basis of creating the real product.
Explore an alternate view with very clear example of real innovation, that can be seen on youtube by typing on their search line: Miastrada Dragon This shows a product just emerging and very much ready for marketing activities. With this as the basis of discussion there could be very meaningful exploration of customer needs. It might even transcend the customer's stated desire for more immigrants; replacing it with constructive development of farm uses for this new tractor.
It is not impossible that business school marketing could have contributed to the development of this product. Simply observing a problem is something that can be done by a variety of persons. Working out the details involved contiued awareness of customer needs, though it is hard to see why "I-Corps" could have helped.
Very good article. This proves what the money spend on Research and Developments can provide for all people of the world especially scientist. The Schaeffler corporation is second for its spending in R&D. This will help to make the whole world a better place. | http://www.economist.com/node/21559734?sort=3 | CC-MAIN-2014-35 | refinedweb | 1,980 | 50.57 |
On Sat, 2008-01-12 at 16:41 +0100, Dieter Schicker wrote:
> Hi,
>
> I got two questions:
>
> 1) Is there a recommended way for integrating javascript with forrest. I
> know that I can insert <script> tags and that they get interpreted. But
> I'm not happy with the fact that the document is no longer valid then.
The "normal" way to integrate jscript into forrest would be via a custom
skin since the script tags makes only sense in html. However, this is
quite cumbersome.
There is the dispatcher in the whiteboard that solves this problem very
nice since you add this script tags to the final output via a custom
contract. However details about the dispatcher we will answer on the dev
list.
>
> 2) Is it possible to "merge" other namespaces into a document V2.0
> namespace?
You can define your own dtd, yes that is possible.
You can as well provide your own implementation via the locationmap:
<!-- ================================== -->
<!-- Forrest files -->
<!-- ================================== -->
<match pattern="forrest.schema.document-v20">
<location src="{YOUR_LOCATION}/document-v20.dtd" />
</match>
salu2
>
> Many thanks in advance & cheers
> Dieter
>
--
Thorsten Scherler thorsten.at.apache.org
Open Source Java consulting, training and solutions | http://mail-archives.apache.org/mod_mbox/forrest-user/200801.mbox/%3C1200353462.12575.13.camel@cartman%3E | CC-MAIN-2016-36 | refinedweb | 194 | 66.03 |
Using Modules in VB.NET
I came across this comment in a the newsgroups today ...
I personally recommend against modules, which are a pre-OO [object Oriented] feature. Instead,
go ahead and create a Public class with Public Shared subs and/or Functions.
... and thought it would be a good time to clear up this popular misconception. Modules in VB.NET are indeed object-oriented. The VB.NET compiles a module as a class with a private constructor and sets all of the methods and fields to be shared. There's about as much sense in avoiding Modules in VB (for the sake that they don't seem OO) as there is in avoiding members of the Microsoft.VisualBasic namespace (because they're not in the System namespace). And I can assure you that there is not a whole lot of sense in that.
As a Visual Basic programmer, if you refuse to use Modules, Left(), Right(), etc. because you feel they are too BASIC-ish, then senslessly not utilizing the full power of VB. The namespace is not there just for backwards compatability reasons - it's part of the whole Basic philosophy that simple, common operations should be as easy and straightforward as possible. Left(myString,4) is a bit easier to read and use than If myString.Length > 4 Then myString.SubString(0,4) Else myString. And of course, the builtin function does the exact same thing in the exact same time (minus the function calling, which is pretty insignificant anyway). | http://weblogs.asp.net/alex_papadimoulis/158295 | CC-MAIN-2014-41 | refinedweb | 253 | 66.64 |
This page discusses the benefits of replacing the current print statement with an equivalent builtin. The write and writeln functions presented below do everything the print statement does without requiring any hacking of the grammar, and also make a number of things significantly easier.
Guido has made it clear he wants to get rid of the print statement in Python3.0. This page considers why we would want to go that way, and how we can actually get there. It should be turned into a PEP eventually.
FYI: Python 3.0 has been released with a print function, and Python 2.6 has from __future__ import print_function to enable this on a per-module basis. Further discussion here is therefore quite futile. GvR.
Benefits of using a function instead of a statement
- Extended call syntax provides better interaction with sequences
Keyword argument sep allows item separator to be changed easily and obviously
Keyword argument linesep could optionally allow line separator to be changed easily and obviously
Keyword argument stream allows easy and obvious redirection
- The builtin can be replaced for application wide customisation (e.g. per-thread logging)
- Interacts well with PEP 309's partial function application, and the rest of Python's ability to handle functions
BDFL comments:
- don't waste your time on sequence printing
- I'm not excited about sep and linesep keyword args
- add to benefits: easier transition to other function/method calls
- if it were me, I'd use 'to=' or 'file=' rather than 'stream=' (too long)
Guido's own arguments
print "x = " + str(x) + ", y = " + str(x) + ", z = " + str(z)
print "x = %s, y = %s, z = %s" % (x, y, z)).
Getting there from here
The example implementation below shows that creating a function with the desired behaviour is quite straightforward. However, calling the builtin print is a problem due to the fact that print is a reserved word in Python 2.x. Since the print statement will be around until Py3K allows us to break backwards compatibility, devising a transition plan that lets programmers 'get ready early' for the Py3K transition becomes a significant challenge.
If, on the other hand, the builtin has a different name, it is quite feasible to introduce it during the 2.x series. In PEP 3000, it is suggested that the print statement be replaced by two builtins: write and writeln. These names are used in the example below. By using alternative names, and providing the builtins in the 2.x series, it is possible to 'future-proof' code against the removal of the print statement in Py3k.
This technique of having two printing operations is not uncommon - Java has both print and println methods, and C# has Write and WriteLine. The main problem with the approach is that the writeln form will actually be more commonly used, but has the longer, less obvious name of the two proposed functions. This perception of relative use is based on a comparison of relative usage levels of the two current forms of the print statement (i.e., with and without the trailing comma) by some of the developers on python-dev.
Some other names for the builtins which have been suggested are:
print - excellent name, but causes transition problems as described above
println - avoids the transition problems, reflects default behaviour of adding a line, matches Java method name
printline - alternative to println, that avoids the somewhat cryptic abbreviation
writeline - alternative to writeln that avoids the somewhat cryptic abbreviation
say - short alternative to println invented in Perl 6 (which uses print for no-newline output)
out - not a verb, and converting to it may be problematic due to shadowing by variable names
output - nice symmetry with input, but using the term as a verb is not typical
prnt - easily edited into print later on
write - decent name, but confusing when compared to write() method
display - Can be a verb or not. Idea from Scheme.
Maybe file-objects should have write()- and writeln()-methods similar to the built-in functions? -- TS
[lwickjr: don't they already have something similar?]
Sample implementation
This is a Python 2.4 compatible sample implementation of the approach currently in PEP 3000. This version of writeln doesn't provide a linesep keyword argument in order to keep things simple. Some other variations are covered further down this Wiki page.
1 def write(*args, **kwds): 2 """Functional replacement for the print statement 3 4 This function does NOT automatically append a line separator (use writeln for that) 5 """ 6 7 # Nothing to do if no positional arguments 8 if not args: 9 return 10 11 def parse_kwds(sep=" ", stream=sys.stdout): 12 """ Helper function to parse keyword arguments """ 13 return sep, stream 14 sep, stream = parse_kwds(**kwds) 15 16 # Perform the print operation without building the whole string 17 stream.write(str(args[0])) 18 for arg in args[1:]: 19 stream.write(sep) 20 stream.write(str(arg)) 21 22 def writeln(*args, **kwds): 23 """Functional replacement for the print statement 24 25 >>> writeln(1, 2, 3) 26 1 2 3 27 >>> writeln(1, 2, 3, sep='') 28 123 29 >>> writeln(1, 2, 3, sep=', ') 30 1, 2, 3 31 >>> import sys 32 >>> writeln(1, 2, 3, stream=sys.stderr) 33 1 2 3 34 >>> writeln(*range(10)) 35 0 1 2 3 4 5 6 7 8 9 36 >>> writeln(*(x*x for x in range(10))) 37 0 1 4 9 16 25 36 49 64 81 38 """ 39 # Perform the print operation without building the whole string 40 write(*args, **kwds) 41 write("\n", **kwds)
Code comparisons
These are some comparisons of current print statements with the equivalent code using the builtins write and writeln.
1 # Standard printing 2 print 1, 2, 3 3 writeln(1, 2, 3) 4 5 # Printing without any spaces 6 print "%d%d%d" % (1, 2, 3) 7 writeln(1, 2, 3, sep='') 8 9 # Print as comma separated list 10 print "%d, %d, %d" % (1, 2, 3) 11 writeln(1, 2, 3, sep=', ') 12 13 # Print without a trailing newline 14 print 1, 2, 3, 15 write(1, 2, 3) 16 17 # Print to a different stream 18 print >> sys.stderr, 1, 2, 3 19 writeln(1, 2, 3, stream=sys.stderr) 20 21 # Print a simple sequence 22 print " ".join(map(str, range(10))) 23 writeln(*range(10)) 24 25 # Print a generator expression 26 print " ".join(str(x*x) for x in range(10)) 27 writeln(*(x*x for x in range(10)))
Newline / No-newline
Another possibility to deal with the newline / no-newline cases would be to have a single function which would take an extra keyword argument "linesep" or "end" (or perhaps some slight magic: an empty string as the last argument), so to print without newline, you would do
The default case should be to insert a newline.
I quite like the single function idea (early versions of this Wiki page used only a single function), but giving it a good name is challenging. The version without the keyword argument is a definite non-starter, though, as there is far too much risk of quirky behaviour when printing a string variable which just happens to contain the empty string. - Nick Coghlan
BDFL comments: I definitely am not keen on the single function with keyword args. IMO all you need is a companion function that inserts no separator and no newline; the desired separators are then easily given explicitly. Oh, and you will never get away with using the final empty string to mean "no newline". This would be very confusing for someone who printed a variable like so: print("The value is:", x) when the variable happens to be empty.
[lwickjr: I quite agree. Ugly, but explicit is better than implicit. Function with NO seperator and NO newline: +5 How about def printFunc(*args): print "".join(map(str, args))]
Iterating Iterables
Another potentially interesting improvement could be for the function to iterate all iterables, in order to be able to use generator expressions without having to use the star syntax and to avoid the creation of a temporary sequence. This would allow:
This behaviour could be optionally triggered by a keyword argument "iter". Another possibility would be to always do the iteration and to force the caller to str() the generator if he wants to print it without iteration (happens rarely).
Nailing down this kind of behaviour is trickier than one might think. The python-dev discussion of the Python 2.5 candidate library function itertools.walk goes over some of the potential problems. We've survived without fancy iterator handling in the print statement - let's avoid adding anything we don't have a demonstrated need for (the extended call syntax stuff comes 'for free' with the conversion to using a function). - Nick Coghlan
BDFL comments: bah. implicitly exhausting iterables has side effects, which is a bad idea for a print function. It would not be a good idea if commenting out a print() call changes the behavior of the program.
[lwickjr: How about this? Define repr(iterator) to return "<iteratorData>" and str(iterator)}} to return something like {{{" ".join([i for i in iterator])? -5]
Another Strawman
Here's my own strawman implementation of write() and writef() using semantics I think are pretty useful. I'll post to python-dev about the details. - Barry Warsaw
1 import sys 2 from string import Template 3 4 class Separator: 5 def __init__(self, sep): 6 self.sep = sep 7 8 SPACE = Separator(' ') 9 EMPTY = Separator('') 10 11 12 def writef(fmt, *args, **kws): 13 if 'to' in kws: 14 to = kws.get('to') 15 del kws['to'] 16 else: 17 to = sys.stdout 18 if 'nl' in kws: 19 nl = kws.get('nl') 20 del kws['nl'] 21 if nl is True: 22 nl = '\n' 23 elif nl is False: 24 nl = '' 25 else: 26 nl = '\n' 27 if isinstance(fmt, Template): 28 if args: 29 raise TypeError('invalid positional arguments') 30 s = fmt.substitute(kws) 31 else: 32 if kws: 33 raise TypeError('invalid keyword arguments') 34 s = fmt % args 35 to.write(s) 36 to.write(nl) 37 38 39 def write(*args, **kws): 40 if 'to' in kws: 41 to = kws.get('to') 42 del kws['to'] 43 else: 44 to = sys.stdout 45 if 'nl' in kws: 46 nl = kws.get('nl') 47 del kws['nl'] 48 if nl is True: 49 nl = '\n' 50 elif nl is False: 51 nl = '' 52 else: 53 nl = '\n' 54 if 'sep' in kws: 55 sep = kws.get('sep') 56 del kws['sep'] 57 else: 58 sep = ' ' 59 if kws: 60 raise TypeError('invalid keyword arguments') 61 it = iter(args) 62 # Suppress leading separator, but consume all Separator instances 63 for s in it: 64 if isinstance(s, Separator): 65 sep = args[0].sep # Should this be s.sep? 66 else: 67 # Don't write a leading separator 68 to.write(str(s)) 69 break 70 for s in it: 71 if isinstance(s, Separator): 72 sep = s.sep 73 else: 74 to.write(sep) 75 to.write(str(s)) 76 to.write(nl) 77 78 79 obj = object() 80 refs = sys.getrefcount(obj) 81 82 write('obj:', obj, 'refs:', refs) 83 write(Separator(': '), 'obj', obj, 84 Separator(', '), 'refs', 85 Separator(': '), refs, 86 nl=False) 87 write() 88 89 writef('obj: %s, refs: %s', obj, refs) 90 writef(Template('obj: $obj, refs: $refs, obj: $obj'), 91 obj=obj, refs=refs, 92 to=sys.stderr, 93 nl=False) 94 write()
For the code comparisons shown earlier, simply put write where writeln is currently used, and add the keyword argument nl=False for the no trailing newline case. I quite like this approach. - Nick Coghlan
BDFL comments: I like the write/writef parallel; would like it even more if it was print/printf. But please drop the Separator thing. The use case isn't common enough to burden people with the possibility. Also, we need to spend more time researching the formatting language. (See a post in python-dev by Steven Bethard: "string formatting options and removing basestring.__mod__".
[lwickjr: def printf(format, *args): print(format(format, *args))? This definition will actually work with print either a statement or function. Further, formatting and printing are seperate concepts and should not be tightly coupled.]
Another variant - `format` builtin
Barry's writef builtin cuts down a little on the typing, but is somewhat inflexible in that it only supports string % or string.Template formatting when printing directly to a stream. It also causes problems by preventing the use of to or nl as keywords in the format string. A separate format builtin would deal with both of those problems, at the expense of some extra typing when using it. Such a builtin would also help with avoiding some of the tuple related quirks of the string mod operator, as well as making it easy to write code that supports both types of string formatting. The version below is based on Barry's, but eliminates the Separator concept, and replaces writef with format - Nick Coghlan
1 import sys 2 from string import Template 3 4 # Real implementation would avoid blocking use of 'fmt' 5 # as an element of the formatting string 6 def format(fmt, *args, **kws): 7 if isinstance(fmt, Template): 8 if args: 9 raise TypeError('invalid positional arguments') 10 s = fmt.substitute(kws) 11 else: 12 if kws: 13 s = fmt % kws 14 else: 15 s = fmt % args 16 return s 17 18 19 def write(*args, **kws): 20 if 'to' in kws: 21 to = kws.get('to') 22 del kws['to'] 23 else: 24 to = sys.stdout 25 if 'nl' in kws: 26 nl = kws.get('nl') 27 del kws['nl'] 28 if nl is True: 29 nl = '\n' 30 elif nl is False: 31 nl = '' 32 else: 33 nl = '\n' 34 if 'sep' in kws: 35 sep = kws.get('sep') 36 del kws['sep'] 37 else: 38 sep = ' ' 39 if kws: 40 raise TypeError('invalid keyword arguments') 41 for s in args[:1]: 42 to.write(str(s)) 43 for s in args[1:]: 44 to.write(sep) 45 to.write(str(s)) 46 to.write(nl) 47 48 49 obj = object() 50 refs = sys.getrefcount(obj) 51 52 write('obj:', obj, 'refs:', refs) 53 write('obj:', obj, 'refs:', refs, sep=', ', nl=False) 54 write() 55 56 write(format('obj: %s, refs: %s', obj, refs)) 57 write(format('obj: %(obj)s, refs: %(refs)s', obj=obj, refs=refs)) 58 write(format(Template('obj: $obj, refs: $refs, obj: $obj'), 59 obj=obj, refs=refs), 60 to=sys.stderr, 61 nl=False) 62 write()
Displaying iterators
I'm looking into an approach which adds explicit support for displaying iterators into the string mod operator. The intent is that "%''j" % (my_seq,) will become roughly equivalent to ''.join(map(str, my_seq)). - Nick Coghlan
SF Patch #1281573 for anyone who wants to play with it. Only strings are supported so far (no Unicode), but it illustrates the concept quite well.
BDFL comments: again, please don't do this.
[lwickjr: I prefer that repr() and str() be the Official Pythonic Way to decide which representation gets written. How about def printFunc(*args): print "".join(map(str, args)) and def writeFunc(*args): print "".join(map(repr, args))?]
Scrap C-Style Formatting
What's one more strawman, right?
My approach is tailor-made for gettext (although I'm no expert in gettext usage). Keywords become the default and positionals disappear completely.
>>> print('x = {x}, y = {y}, z = {z}', x=x, y=y, z=z)
There's some redundancy in the the keyword arguments (unfortunately), but it helps insulate the format string from the code that uses it. It removes the problems of separator vs no separator It allows it to be self-documenting for the gettext translators, with no problems in reordering or reformatting. You could even give extra arguments that aren't always used (but they wouldn't be self-documenting I suppose).
Further options are using locals():
>>> print('x = {x}, y = {y}, z = {z}', **locals())
but only if you don't mind exposing them (debatable). If you need something besides %s (the default) then go as follows:
>>> print('x = {r:x}, y = {f9.8:y}, z = {i:z}', x=x, y=y, z=z)
Or maybe even something that allows arbitrary arguments to be passed to the formatter. - Adam Olsen
Another idea
String formatting with %* is a bad idea, imho. Since python is anyway dynamic by nature, why not add built-in string evaluation, as in boo. for example:
x = "lucy" write("i love ${x}")
or
x = 7 write("the answer is ${x * 2}")
if strings (or a special flavor of string, say one marked with backticks*) allowed evaluation of expressions, code will never look like
print "x = ", x, "y = ", y
but rather
write("x = ${x}, y = ${y}")
which is much more readable and easier to maintain. imagine working with 20 '%s' in a single string! it's a disaster. even using the silly %(name) is bad, since you then have to fill a huge dict after your string.
(*) backticks: yes, backticks mean repr(), but did anyone ever hear of them? [lwickjr: I use them regularly.] i think they are depricated anyway. [lwickjr: Why?] adding a new built-in type, evalstr ("evaluating string"), marked by backticks, is very simple and almost completely backwards compatible. and it works not only in the context of printing output.
write(`hello ${os.getuid()}, the time now is ${time.asctime()}, and you are running on ${os.name}`)
true, it doesnt solve the write/writeln "problem", and i must admit that print as a statement is a pretty useful feature (no parenthesis hassle), but adding evalstrings will make long format string possible and maintainable. plus, it gets us rid of the ugly writef or printf proposals.
Yet Another Formatting Alternative
There's a few goals for any formatting scheme
- Inline naming (not off to the right somewhere)
Expression arguments (obj.attr is common)
- Gettext swapout
- No repetition of names
No explicit call to locals()
Our existing % formatting can do 2,3,4,5, but if you want 1 you instead get 1,3,4. My previous suggestion handles 1, but fails 4 miserably, as well as 5. My new suggestion handles all 5 goals simultaneously.
The full syntax is:
$"filler {name:expr formatter arguments} filler"
Most of that is optional. The most common way to use it would be:
$"filler {x} filler" # Equivalent to... $"filler {x:x str} filler" # Equivalent to... FormatString("filler {x str} filler", {'x': x})
A FormatString instance does not immediately evaluate. Instead, it waits until its __str__ method is called, at which point the above example becomes:
"filler %s filler" % (str(fs.args['x']),)
Because of the lazy evaluation it is possible to use it for gettext.
def _(fs): return FormatString(localizedstrings(fs.format), fs.args)
Further options:
$"x = {:3}, y = {:42}" # Names are ommited so numbers (positions) will be generated for them $"f = {:1/3 float 10.5}" # "f = %10.5f" % (1/3) $foo # Parse error! $ is a string prefix, NOT an operator
Looking this over, the weakest link seems to be in the formatter aspects. It needs a way to specify an expression that happens after the initial evaluation, but after gettext has had a chance to replace the format string. Unfortunately I'm out of ideas so I'll leave it be.
- Adam Olsen
Extend String.Template?
The basic idea would be to incorporate the functionality of the existing string.Template module as a built-in. The format prefix characters are stolen directly from Perl, which makes them both lightweight and familiar.
However, the current API is too cumbersome for "hello world" use. So we need to streamline it a bit.
The first part is getting rid of the need to explictly instantiate the String.Template() object. I suggest a new "string decorator", similar to "r" and "u", which indicates that the string is a template string. Lets assume that the prefix is "t" for "template" for now:
print t"Hello, $user!".substitute( user="Tim" )
The 't' prefix should be usable in conjunction with the 'r', 'u', triple-quote and other string variations.
However, its still too wordy. We need to get rid of the .substitute(). One thought (which may not be workable) is to detect when the template is being coerced into a string by overloading __str__, which then automagically calls substitude using the local scope as the dictionary. (The hard part is getting the scope - because the coercion to string might be happening inside the print function. Perhaps the string can capture the scope pointer upon construction.)
Ideally, what we would then have is something similar to the Perl syntax:
print t"Hello, $user!"
- Talin | https://wiki.python.org/moin/PrintAsFunction?action=diff&rev1=2&rev2=54 | CC-MAIN-2022-27 | refinedweb | 3,470 | 61.87 |
The AJAX Control Toolkit is an incredibly popular set of controls that enable you to easily add JavaScript functionality to an ASP.NET application. The AJAX Control Toolkit has consistently been one of the top three most popular downloads from CodePlex since the birth of CodePlex (see).
Lately, we’ve been thinking hard about methods of improving the quality of the AJAX Control Toolkit controls. We want to improve the quality of the AJAX Control Toolkit controls so that they match the very high standards of quality of the official ASP.NET framework controls such as the GridView and TextBox controls.
In discussions of quality, the issue of testing immediately comes up. Right now, the AJAX Control Toolkit solution includes a test project named ToolkitTests. (You can see this project if you download the Source version of the AJAX Control Toolkit from CodePlex). This project contains a set of automated functional tests for each control in the AJAX Control Toolkit (see Figure 1). For example, there is a file named Calendar.aspx which contains a set of tests for the Calendar control.
Figure 1 – Tests in the ToolkitTests project
You can run all of these tests by requesting the Default.aspx page from the ToolkitTests project. The Default.aspx page contains a test runner. If you click the Select All link, and then click the Run Tests button, then all of the tests are run (see Figure 2).
Figure 2 – Running tests
Having a set of tests like this that can be run automatically is invaluable. It means that we can modify existing AJAX Control Toolkit controls and know whether we have broken existing functionality. We run these tests in a standard set of browsers to ensure that the AJAX Control Toolkit controls are cross-browser compatible.
In order to improve the quality of the AJAX Control Toolkit, we plan to significantly improve the quality of the tests that accompany the Toolkit. However, the AJAX Control Toolkit currently uses a proprietary functional test framework. The test framework was invented for the AJAX Control Toolkit and it is used nowhere else.
For this reason, we decided to investigate using alternative functional test frameworks. We had several specific criteria for selecting a functional test framework:
1) The functional test framework needs to be an open-source project.
2) The functional test framework needs to work with all major browsers including Internet Explorer, FireFox, Safari, Opera, and Chrome.
3) The functional test framework needs to support simulating complex browser interactions.
4) The functional test framework needs to be easy for the Quality Assurance team at Microsoft to use.
In order to meet all of these requirements, we decided to use the Lightweight Test Automation Framework as the testing framework for the AJAX Control Toolkit moving forward.
The Lightweight Test Automation Framework
The Lightweight Test Automation Framework (LTAF) is an open source functional test framework available from CodePlex at:
LTAF is a lightweight version of the very same testing framework that the ASP.NET Quality Assurance team uses to test the standard ASP.NET controls. The framework is compatible with all major web browsers and the framework enables you to simulate complex JavaScript behaviors.
An important consideration for the ASP.NET AJAX team is that the Microsoft QA team is already very familiar with this framework. Therefore, any member of the Microsoft QA team can easily test the AJAX Control Toolkit using this framework.
Our plan is to include all of the LTAF tests used by the QA team with the download of the AJAX Control Toolkit. That way, if you modify the Toolkit, then you can run the LTAF tests to check whether or not your modifications have broken existing controls.
In the following sections, I provide you with a brief introduction to using LTAF.
Downloading and Installing LTAF
After downloading the Microsoft.Web.Testing.Lightweight.zip file from the CodePlex website, you need to remember to unblock the downloaded ZIP. Right-click the file and click the Unblock button (see Figure 3).
Figure 3 – Unblocking the download file
Next, unzip the file into a new folder and double-click the Microsoft.Web.Testing.Lightweight.sln file to open up the LTAF solution. Build the project by selecting the menu option Build, Build Solution.
The LTAF solution contains the following projects:
· SampleWebSite – Contains sample LTAF tests
· FunctionalTestsWebSite – Contains BVT tests written using LTAF
· Microsoft.Web.Testing.Lightweight – Contains the source code for LTAF
· Microsoft.Web.Testing.Lightweight.UnitTests – Contains unit tests for LTAF
The fastest way to try out LTAF is to run the tests in the SampleWebSite project.
Right-click the Default.aspx page contained in the Test folder and select the menu option View In Browser. The page in Figure 4 appears.
Figure 4 – The LTAF Test Runner
Click the link labeled All Test Cases to select all of the tests and click the Run Tests button to run the tests. All the tests that pass are highlighted with green and a checkmark (see Figure 5).
Figure 5 – LTAF test results
So you might be wondering where the functional tests executed by the test runner are defined. You can find the tests in the App_CodeTest folder. For example, the Test folder includes a class named UserManagementTests.cs that contains a set of tests for the Login.aspx page. An abridged version of this class is contained in Listing 1.
Listing 1 – UserManagementTests.cs [C#]
using System; using Microsoft.Web.Testing.Light; [WebTestClass] public class UserManagementTests { [WebTestMethod] public void SignInAndSignOut() { // Navigate to the login page HtmlPage page = new HtmlPage("Login.aspx"); // Fill Login control user/password and click login button page.Elements.Find("UserName").SetText("ValidUser"); page.Elements.Find("Password").SetText("foo"); page.Elements.Find("LoginButton").Click(WaitFor.Postback); // Verify content of the Home page Assert.AreEqual("Welcome back ValidUser!", page.Elements.Find("LoginName1").GetInnerText()); // Click the logout tab page.Elements.Find("tab-signout").Click(WaitFor.Postback); // Verify login tab now exists. Assert.IsTrue(page.Elements.Exists("tab-login")); } }
The class in Listing 1 is decorated with the WebTestClass attribute and the functional test method is decorated with the WebTestMethod attribute.
The SignInAndSignOut() method performs the following tests:
1) It verifies that entering the user name ValidUser and password foo into the Login form on the Login.aspx page results in a page that displays the message “Welcome back ValidUser!”.
2) It verifies that clicking the Logout tab causes the Login tab to appear.
Notice how the HtmlPage class and HtmlElement class are used to interact with a web page. For example, the HtmlPage.Elements.Find() method is used to retrieve a particular DOM element. The HtmlElement.Click() method is used to simulate a button click.
Using LTAF in a New or Existing ASP.NET Website
If you want to use LTAF with a new or existing ASP.NET Website then you need to do two things:
1) Add a reference to the Microsoft.Web.Testing.Lightweight.dll assembly
2) Add a folder named Test that contains the test runner (the Default.aspx and DriverPage.aspx files)
For example, imagine that you have created an ASP.NET website that contains the ASP.NET page in Listing 2. This page contains a single button. Whenever you click the button, the number displayed by the Label control is incremented by 1 (see Figure 6).
Listing 2 – Counter.aspx
<%@ Page <script runat="server"> protected void btnAdd_Click(object sender, EventArgs e) { var newValue = int.Parse(lblCounter.Text) + 1; lblCounter.Text = newValue.ToString(); } </script> <html xmlns=""> <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:Label <asp:Button </div> </form> </body> </html>
Figure 6 – The Counters.aspx page
If you want to test this page using LTAF then you first need to add a reference to the Microsoft.Web.Testing.Lightweight.dll assembly. Select the menu option Website, Add Reference. Click the Browse tab and browse to the folder where you downloaded LTAF (see Figure 7).
Figure 7 – Adding a reference to LTAF
Next, we need to add the Test folder (that contains the Default.aspx and DriverPage.aspx files) into our new website. Using Windows Explorer, navigate to the directory where you downloaded LTAF. You can drag the Test folder from Window Explorer into the Solution Explorer window in your website project (or copy and paste the folder). After you add this folder, your Solution Explorer window should look like Figure 8.
Figure 8 – Solution Explorer with Test folder
Now, we are ready to write some tests. Add a new class to your website named CounterTests.cs (Visual Studio will prompt you to add the class to your App_Code folder). Enter the code in Listing 3.
Listing 3 – CounterTests.cs
using Microsoft.Web.Testing.Light; [WebTestClass] public class CounterTests { [WebTestMethod] public void ButtonIncrementsByOne() { // Get the Counters.aspx page HtmlPage page = new HtmlPage("/Counter.aspx"); // Click the btnAdd Button page.Elements.Find("btnAdd").Click(WaitFor.Postback); // Get the value of the Counters Label var labelValue = page.Elements.Find("lblCounter").GetInnerText(); // Verify that the Label has the value 1 Assert.AreEqual("1", labelValue); } }
The test class in Listing 3 contains one test method named ButtonIncrementsByOne(). This test verifies that clicking the button increments the value displayed in the Label by one.
You can run the test by opening the Test/Default.aspx page. After you open this page, you can select the ButtonIncrementsByOne test, and click the Run Tests button to execute the test. When the test runs successfully, you get the page in Figure 9.
Figure 9 – Results of running test
Conclusion
The Lightweight Testing Automation Framework (LTAF) is a very powerful framework for building functional tests. It is (a lightweight version) of the testing framework used by the ASP.NET Quality Assurance team and it is the testing framework used to build tests for the ASP.NET framework.
You can use this test framework to build functional tests for your ASP.NET applications. Simply download LTAF from CodePlex and start writing your functional tests in either C# or VB.NET.
We plan to use LTAF for the AJAX Control Toolkit because (1) LTAF works with all modern browsers including Internet Explorer, Firefox, Safari, Opera, and Chrome (2) it is open-source and (3) because LTAF is based on the same testing framework used internally by the ASP.NET QA team, we expect the QA team to continue to invest in improving LTAF over time.
Ajax Control Toolkit do give us a powerful toolkit, and it do save us lots of effort
Hi Stephen,
Do you know when the final version of AJAX 4 will be released? Will it coincide with the release of Visual Studio 2010? That along with .Net 4 as well?
Hope you been enjoying your new position with Microsoft 🙂
Ajax Control Toolkit do give us a powerful toolkit, and it do save us lots of effort
+p+p
How suitable – or not suitable – is LTAF for ASP.NET MVC websites? If it is, what considerations, if any, need to be made?
@oskar — great question! LTAF should work fine with MVC. I haven’t tried it – but my understanding is that the ASP.NET QA team uses (a modified version) of LTAF to do all of their testing of ASP.NET MVC and ASP.NET Web Forms.
@Mukhtar — Thanks for your interest in Microsoft AJAX + ACT. We are planning to publish a roadmap soon.
FYI to anyone interested – the AJAX Control Toolkit already has a ton of tests written against this framework. They’re not included in the main download, but you’ll see them in the Testing.Client project if you get the source code for any of the checkins. Here’s part of the test suite for the Accordion control to peruse if you’re interested: ajaxcontroltoolkit.codeplex.com/…/56007#346578
He and his wow gold aids now withdrew behind the curtain: the other party, whichwow gold was headed by wow goldColonel Dent, sat down on the crescent of chairs. One of the gentlemen, Mr. Eshton, wow goldobserving me, seemedwow gold to propose that I should wow be asked to join them;wow gold but Lady Ingram wow goldinstantlywow gold negativedwow gold the notion.cheap aion kina. I am happy with him.
I love this ajax, but I found this control are not cooperating with nested master page very well. But it is still a great control.
####################################
IT界最強技術王, 名字擬似:edmund yau
Ajax is the way to go. Thanks for the great control..
The best .NET Training.
This Luna game with luna online gold is a colorful one, the graphics could seem a little childish, but do not be mistaken, the game use luna gold to play LUNA Online is advanced as a lot of new MMORPGs that you can find out where can make more luna money. Something I did not like in this game was the movement, the character moves slow and you need to buy luna gold to go through long distances sometimes, making you buy a lot of luna online money to change the Teleport scrolls just to go to the cities..
[url=]2moons dil[/url]
[url=]2moons gold[/url]
[url=]2moon dil[/url]
[url=]buy 2moons dil[/url]
[url=]cheap 2moons gold[/url]
good site
Great Post! Very useful information is given. Thanks for sharing.
How nice, you have Pluralsight comment spamming your blog. Brilliant.
I do have an actual question which is whether the LTAF tests can be run headless, as in from an automated build (CCNet, TFS, TeamCity, etc), or if they only work in the browser?
Thanks!
Ajax is the way to go. Brilliant tips
Thank you. I guess AJAX is the future of .NET integration.
Thanks for sharing the great tips. I always wondered how that could be achieve until i read your article.
I will feed your RSS after this useful article. Thanks a lot.
When a very miserly man nicknamed the “stingy ghost” died and went to hell,Ultra Tall Ugg Boots the Yama King reproached him, saying, “You stingy ghost! When you were alive,Sundance II Ugg Boots you clung hard to everything and wouldn’t give to anyone. Even when you saw others in poverty and misery,Classic Tall Ugg Boots you refused to offer them help. Also, you didn’t take good care of your parents, relatives or friends and let them suffer and starve.Classic Cardy Ugg Boots For your evil karma, you’ll be dumped into a pot of boiling oil.
Thank you very
The controls are slow, ugly, amateurish, and ridiculous. A mutually exclusive check box? That’s what radio buttons are for. Opacity for a drop shadow that isn’t opaque? It changes the shadow color. WTF? Rounded corners that aren’t round? I don’t go around the net slamming peoples work but you guys have to be kidding. If you want to see a real toolkit check out EXTJS. I’m not affiliated with them but you see how it’s supposed to be done.
yeast infection cure
bookmarking demon
burn the fat feed muscle
chopper tattoo
commission blueprint
earth4energy
fapturbo
fat loss 4 idiots
government records
hemorrhoids cure
magic of making up
make money taking surveys
maverick money makers
micro niche finder
panic away
registry easy
reverse phone detective
satellite tv for pc
truth about abs
Poker online sign up poker account for free promotion.
free no deposit needed have similar poker starting capital for $50 bankroll.
Over fifty sponsor dollars with free bankroll for start play.
Oris watches
Panerai watches
Rado watches
Rolex watches
Longines watches
Concord watches
Jackson Nurse: I Did Not Give Michael Propofol8.27Cwow power leveling,wow gold,buy wow gold
[url=]wedding invitations[/url]
[url=]wedding invitation[/url]
While surfing, I came across wedding invitations this site which caught my attention. All of the invitation cards on this site were hand written.wedding invitation I found out that the cards were made with 100% recycled paper. All I had to do was make a choice of the type of card I wanted and place my order.
wedding invitations
wedding invitation .
Thanks for sharing this article
Pretty good post.Thanks for sharing this article
I heard from my friends that the rohan crone is perfect and they tell me that the rohan gold is necessary, it takes less rohan online crone and rohan online gold, besides, they also tell me so many places to rohan money.
M&A.
スパ is wonderful.
新宿 整体 is wonderful.
過払い is wonderful.
マカ is wonderful.
格安航空券 国内 is wonderful.
ブライダルエステ is wonderful.
苗木 is wonderful.
歌手オ.
看護師 転職 is wonderful.
水道工事 is wonderful.
商標登録 is wonderful.
電話代行 サービス.
会社設立 横浜 is wonderful.
身長を伸ばす is wonderful.
温泉 ホテル is wonderful.
川口市 不動産 is wonderful.
川口市 不動産 is wonderful.
エアコン 工事 is wonderful.
ウェディングボード is wonderful.
結婚式2次会 is wonderful.
土地活用 is wonderful.
インプラント 横浜 is wonderful.
ゴルフ会員権 is wonderful.
埼玉県 不動産 is wonderful.
ベビーサイン is wonderful.
遺言 作成
医療ベット 買取
wowgolds.org offer warcraft gold,cheap wow gold and world of warcraft gold service.You can come and have a look!Thank you! .StillNFL Jerseys Shop, producing hard candy Cheap NFL Jerseysdoes not involve gettingCheap MVP Jerseys the right recipe NFL MVP Jerseysbecause there are Buy christian louboutin shoesa various of components christian louboutin on salethat need to bechristian louboutin online dealt beforelouboutin shoe sale preparing themBuy christian louboutin Store.
Replica Watches
Rolex Watches
Replica Handbags
Replica Omega Watches
Cartier Replica Watches
Replica LV Handbags
It will allow players to create characters with cheap Dragonica Gold destined for great deeds and glory on the field of battle. Sharpen your blade and prepare to unleash your inner mutation with your Dragonica Crone and the age of reckoning has begun and war is everywhere. Invade enemy lands to gain Dragonica money, besiege imposing fortresses. Dragonica Gold is that the War is everywhere in Dragonica Online. Wield devastating magic and deadly weapons can exchange to buy Dragonica Gold..
As a new player , you may need some game guides or information to enhance yourself.
fast gold world ofis one of the hardest theme for every class at the beginning . You must have a good way to manage your fast wow gold.If y .
Here is the [url=” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>” title=”” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>“>]aoc gold[/url]site,[url=” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>” title=”” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>“>]conan gold[/url]is the regular site. we supply [url=” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>” title=”” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>“>]age of conan gold[/url].Many players like to buy [url=” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>” title=”” title=”“>
m/product/Age_of_Conan_gold.html” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>“>]cheap aoc gold[/url]here. They are regular customers. Regular customers buy [url=” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>” title=”” title=”“>” title=”” title=”“>“>” title=”” title=”“>” title=”” title=”“>“>“>“>]aoc money[/url] in low price..
interesting article. Wondered how you do it.
Thanks for sharing this article
thanks for sharing!
maybe it is a powerful toolkit ,but i am not good at computer,so it is a bit of difficult for me^……
Excelwatch
As a new player , you may need some game guides or information to enhance yourself.
Rose zuly buy Arua ROSE zuly at our website . Go to the related page and check the detailed information . Once you have any question , you can connect our customer service at any time .
My friends tell me so many websites to buy last chaos gold, nowadays, the lastchaos gold is perfect, both astchaos money and cheap lastchaos gold can we buy last chaos gold .
maple mesos in the game you may or may not notice. I know vitality is very tempting in mesos. You may be feeling intelligence in cheap mesos. You are going to be getting crab pincers which like , depending on which server you are on. maple story mesos can help you get a high level in short time.
Great article.
Loouis Vuitton Christian Louboutin Jimmy Choo Manolo Blahnik discount louboutin
Louis Vuitton Bags 08- Monogram watercolor Louboutin Sale louboutin
Classic Christian Louboutin NewStyleChristianLouboutinChristian Louboutin Shoes
Christian Louboutin Pumps Christian Louboutin Evening
I have one suggestion for you.Sharpen your blade and prepare to unleash your inner mutation with your Dragonica Crone and the age of reckoning has begun and war is everywhere. Invade enemy lands to gain Dragonica money, besiege imposing fortresses.
Villa Rentals and Italy Villas and Back Tattoos
and Caribbean Tour
After[url=]sex toys[/url]series,a[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=2]sex shop[/url]of,board[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=4]adult toys[/url]determined,company[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=5]adult shop[/url]past,Yahoo’s[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=13]sexy lingerie[/url]week,meetings[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=9]vibrator[/url]person,decision[url=blog.xn--gmqt35bl2a657i.tw/article_cat.php?id=9]adult products[/url]below,any[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=15]strap on[/url]share,over[url=blog.xn--gmqt35bl2a657i.tw/article_cat.php?id=7]adultshop[/url]advantage,could[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=10]dildo[/url]offer,the[url=]Malaysia sex toys[/url]regulators,trying[url=]Singapore sex toys[/url]digging,massively[url=blog.xn--gmqt35bl2a657i.tw/article.php?id=64]sex toy[/url]$31,that[url=blog.xn--gmqt35bl2a657i.tw/category.php?id=22]Condom[/url]said,takeover[url=blog.xn--gmqt35bl2a657i.tw/article_cat.php?id=8]Paradise sex toys shop[/url]battle,wooden[url=sg.88db.com/…/?PostID=1063085#]Paradise Sex Toys Adult Shop Singapore Malaysia[/url]stand,word delivery,stand,word[url=singapore.yalwa.sg/…/…-Singapore-Malaysia.html]Paradise Sex Toys Adult Shop Singapore Malaysia[/url]delivery,committed to[url=…/Paradise-Sex-Toys-Adult-Shop...]Paradise Sex Toys Adult Shop Singapore Malaysia[/url]certain,school[url=]Sex Toys Shop Singapore Malaysia[/url]products,Buy[url=udn168tw.en.ec21.com/…Shop–3847995_3847996.html]Paradise Sex Toys Adult Shop Singapore Malaysia[/url]Now
FR It’s lucky to know this, if it is really true. Companies tend not to realize when they create security holes from day-to-day operation.
2332 It’s lucky to know this, if it is really true. Companies tend not to realize when they create security holes from day-to-day operation.
If you like using rohan crone to play the game which needs use rohan gold, you can borrow rohan online crone from friends. Or you can buy rohan online gold, and than you use rohan money to continue the game.
If some player wants to buy the requirement, he must have to buy aion kinahh first by himself. aion kinah has helped me finish many things. Because when some one player gains a little cheap aion online kinah, he needs to hand in a leader of this team. They have the capability of jumping over obstacles that are higher than their height aion gold, and have the strength to handle weapons which you can gain from a little cheap aion kinah that are bigger than themselves to make up for their small physiques.
For a new [url=” title=”“>” title=”” title=”“>“>” title=”“>” title=”” title=”“>” title=”” title=”“>“>” title=”“>“>” title=”“>” title=”” title=”“>“>” title=”“>]Asda Story gold[/url] player has just entered the game spend. First of all, realize the [url=” title=”“>” title=”” title=”“>“>” title=”“>” title=”” title=”“>” title=”” title=”“>“>” title=”“>“>” title=”“>” title=”” title=”“>“>” title=”“>]Asda Story money[/url] is useful to the each new player. You only play with others and [url=” title=”“>” title=”” title=”“>“>” title=”“>” title=”” title=”“>” title=”” title=”“>“>” title=”“>“>” title=”“>” title=”
y_gold.html” title=”“>“>” title=”“>]buy Asda Story Gold[/url] can get happy from the game. After learned at the game world, you may found the [url=” title=”“>” title=”” title=”“>“>” title=”“>” title=”” title=”“>” title=”” title=”“>“>” title=”“>“>” title=”“>” title=”” title=”“>“>” title=”“>]cheap Asda Story gold[/url] and the most basic survival skills.
latale online gold, I believe everyone is familiar with the game. For the latale gold, I want to say a few points. You must be able to attract firepower with latale money. Originally playing specter in the cheap latale gold is a stimulating live. Important point is that buy latale online gold which CT can hear breathing PS. .
I was just out of Morocco, with my thief title, wondering what I would do in between leveling up and saving Lineage 2 Adena for a Stiletto. That would be my ultimate goal of L2 Adena. The power Lineage 2 Gold is holding anything and everything. buy Lineage 2 Adena which I have thought many ways to gain gives me more momentums to continue to play again and again. There it was my purpose to gain much more L2 Gold.
Do you know the flyff penya, in the game you need the flyff money. It can help you increase your level. My friends always asked me how to buy flyff penya, and I do not know he spend how much money to buy the flyff gold, when I see him in order to play the game and search which the place can buy the cheap penya. I am happy with him. | http://stephenwalther.com/archive/2009/07/04/improving-the-ajax-control-toolkit-with-the-lightweight-test-automation | CC-MAIN-2017-43 | refinedweb | 4,208 | 57.57 |
A Python package for finite difference numerical derivatives and partial differential equations in any number of dimensions.
pip install findiff
findiff works in any number of dimensions. But for the sake of demonstration, suppose you
want to differentiate a four-dimensional function given on a 4D array
f with coordinates
x, y, z, u.
For
, where x denotes the 0-th axis, we can write
# define operator d_dx = FinDiff(0, dx) # apply operator df_dx = d_dx(f) # df_dx is now an array of the same shape as f containing the partial derivative
The partial derivative
, where z means the 2nd axis, is
d_dz = FinDiff(2, dz) df_dz = d_dz(f)
Higher derivatives like
or
can be defined like this:
# the derivative order is the third argument d2_dx2 = FinDiff(0, dx, 2) d2f_dx2 = d2_dx2(f) d4_dy4 = FinDiff(1, dy, 4) d4f_dy4 = d4_dy4(f)
Mixed partial derivatives like
or
d2_dxdz = FinDiff((0, dx), (2, dz)) d2_dxdz(f) d3_dx2dz = FinDiff((0, dx, 2), (2, dz))
Linear combinations of differential operators like
can be written as
from numpy import meshgrid, sin X, Y, Z, U = meshgrid(x, y, z, u, indexing="ij") diff_op = Coef(2*X) * FinDiff((0, dz, 2), (2, dz, 1)) + Coef(3*sin(Y)*Z**2) * FinDiff((0, dx, 1), (1, dy, 2))
Chaining differential operators is also possible, e.g.
diff_op = (FinDiff(0, dx) - FinDiff(1, dy)) * (FinDiff(0, dx) + FinDiff(1, dy)) # is equivalent to diff_op2 = FinDiff(0, dx, 2) - FinDiff(1, dy, 2)
Standard operators from vector calculus like gradient, divergence and curl are also available, for example:
grad = Gradient(h=[dx, dy, dz, du]) grad_f = grad(f)
More examples can be found here and in this blog.
The package can work with any number of dimensions, the generalization of usage is straight forward. The only limit is memory and CPU speed.
When constructing an instance of
FinDiff, you can request the desired accuracy
order by setting the keyword argument
acc.
d2_dx2 = findiff.FinDiff(0, dy, 2, acc=4) d2f_dx2 = d2_dx2(f)
If not specified, second order accuracy will be taken by default.
Sometimes you may want to have the raw finite difference coefficients.
These can be obtained for any derivative and accuracy order
using
findiff.coefficients(deriv, acc). For instance,
import findiff coefs = findiff.coefficients(deriv=2, acc=2)
gives
{ 'backward': {'coefficients': array([-1., 4., -5., 2.]), 'offsets': array([-3, -2, -1, 0])}, 'center': {'coefficients': array([ 1., -2., 1.]), 'offsets': array([-1, 0, 1])}, 'forward': {'coefficients': array([ 2., -5., 4., -1.]), 'offsets': array([0, 1, 2, 3])} }
FinDiff operators will use central coefficients whenever possible and switch to backward or forward coefficients if not enough points are available on either side.
If you need exact values instead of floating point numbers, you can request a symbolic solution, e.g.
import findiff coefs = findiff.coefficients(deriv=3, acc=4, symbolic=True)
gives
{'backward': {'coefficients': [15/8, -13, 307/8, -62, 461/8, -29, 49/8], 'offsets': [-6, -5, -4, -3, -2, -1, 0]}, 'center': {'coefficients': [1/8, -1, 13/8, 0, -13/8, 1, -1/8], 'offsets': [-3, -2, -1, 0, 1, 2, 3]}, 'forward': {'coefficients': [-49/8, 29, -461/8, 62, -307/8, 13, -15/8], 'offsets': [0, 1, 2, 3, 4, 5, 6]}}
If you want to specify the detailed offsets instead of the accuracy order, you can do this by setting the offset keyword argument:
import findiff coefs = findiff.coefficients(deriv=2, offsets=[-2, 1, 0, 2, 3, 4, 7], symbolic=True)
The resulting accuracy order is computed and part of the output:
{'coefficients': [187/1620, -122/27, 9/7, 103/20, -13/5, 31/54, -19/2835], 'offsets': [-2, 1, 0, 2, 3, 4, 7], 'accuracy': 5}
For a given FinDiff differential operator, you can get the matrix representation
using the
matrix(shape) method, e.g.
x = [np.linspace(0, 6, 7)] d2_dx2 = FinDiff(0, x[1]-x[0], 2) u = x**2 mat = d2_dx2.matrix(u.shape) # this method returns a scipy sparse matrix print(mat.toarray())
has the output
[[ 2. -5. 4. -1. 0. 0. 0.] [ 1. -2. 1. 0. 0. 0. 0.] [ 0. 1. -2. 1. 0. 0. 0.] [ 0. 0. 1. -2. 1. 0. 0.] [ 0. 0. 0. 1. -2. 1. 0.] [ 0. 0. 0. 0. 1. -2. 1.] [ 0. 0. 0. -1. 4. -5. 2.]]
The same works for differential operators in higher dimensions. Of course, you can use this matrix to perform the differentiation manually by matrix-vector multiplication:
d2u_dx2 = mat.dot(u.reshape(-1))
Examples using the matrix representation like solving the Schrödinger equation can be found in this blog.
You can also take a look at the finite difference stencils, e.g. for a 2D grid:
import numpy as np from findiff import FinDiff x, y = np.linspace(0, 1, 100) X, Y = np.meshgrid(x, y, indexing='ij') u = X**3 + Y**3 laplace_2d = FinDiff(0, x[1]-x[0], 2) + FinDiff(1, y[1]-y[0], 2) stencil = laplace_2d.stencil(u.shape) print(stencil)
yields the following output
('L', 'L'): {(0, 0): 4.0, (1, 0): -5.0, (2, 0): 4.0, (3, 0): -1.0, (0, 1): -5.0, (0, 2): 4.0, (0, 3): -1.0} ('L', 'C'): {(0, 0): 0.0, (1, 0): -5.0, (2, 0): 4.0, (3, 0): -1.0, (0, -1): 1.0, (0, 1): 1.0} ('L', 'H'): {(0, 0): 4.0, (1, 0): -5.0, (2, 0): 4.0, (3, 0): -1.0, (0, -3): -1.0, (0, -2): 4.0, (0, -1): -5.0} ('C', 'L'): {(-1, 0): 1.0, (0, 0): 0.0, (1, 0): 1.0, (0, 1): -5.0, (0, 2): 4.0, (0, 3): -1.0} ('C', 'C'): {(-1, 0): 1.0, (0, 0): -4.0, (1, 0): 1.0, (0, -1): 1.0, (0, 1): 1.0} ('C', 'H'): {(-1, 0): 1.0, (0, 0): 0.0, (1, 0): 1.0, (0, -3): -1.0, (0, -2): 4.0, (0, -1): -5.0} ('H', 'L'): {(-3, 0): -1.0, (-2, 0): 4.0, (-1, 0): -5.0, (0, 0): 4.0, (0, 1): -5.0, (0, 2): 4.0, (0, 3): -1.0} ('H', 'C'): {(-3, 0): -1.0, (-2, 0): 4.0, (-1, 0): -5.0, (0, 0): 0.0, (0, -1): 1.0, (0, 1): 1.0} ('H', 'H'): {(-3, 0): -1.0, (-2, 0): 4.0, (-1, 0): -5.0, (0, 0): 4.0, (0, -3): -1.0, (0, -2): 4.0, (0, -1): -5.0}
This is a dictionary with the characteristic points as keys and the stencils as values. The 2D grid has 3**2 = 9 "characteristic points", so it has 9 stencils.
'L' stands for 'lowest index' (which is 0), 'H' for 'highest index' (which is the number of points on the given axis minus 1) and 'C' for 'center', i.e. a grid point not at the boundary of the axis.
In 2D the characteristic points are center points ('C', 'C'), corner points: ('L', 'L'), ('L', 'H'), ('H', 'L'), ('H', 'H') and edge-points (all others). For N > 2 dimensions the characteristic points are 3**N analogous tuples with N indices each.
Each stencil is a dictionary itself with the index offsets as keys and the finite difference coefficients as values.
In order to apply the stencil manually, you can use
lap_u = stencil.apply_all(u)
which iterates over all grid points, selects the right right stencil and applies it.
findiff can be used to easily formulate and solve partial differential equation problems
where L is a general linear differential operator.
In order to obtain a unique solution, Dirichlet, Neumann or more general boundary conditions can be applied.
Find the solution of
subject to the (Dirichlet) boundary conditions
from findiff import FinDiff, Id, PDE shape = (300, ) t = numpy.linspace(0, 10, shape[0]) dt = t[1]-t[0] L = FinDiff(0, dt, 2) - FinDiff(0, dt, 1) + Coef(5) * Id() f = numpy.cos(2*t) bc = BoundaryConditions(shape) bc[0] = 0 bc[-1] = 1 pde = PDE(L, f, bc) u = pde.solve()
Result:
A plate with temperature profile given on one edge and zero heat flux across the other edges, i.e.
with Dirichlet boundary condition
and Neumann boundary conditions
shape = (100, 100) x, y = np.linspace(0, 1, shape[0]), np.linspace(0, 1, shape[1]) dx, dy = x[1]-x[0], y[1]-y[0] X, Y = np.meshgrid(x, y, indexing='ij') L = FinDiff(0, dx, 2) + FinDiff(1, dy, 2) f = np.zeros(shape) bc = BoundaryConditions(shape) bc[1,:] = FinDiff(0, dx, 1), 0 # Neumann BC bc[-1,:] = 300. - 200*Y # Dirichlet BC bc[:, 0] = 300. # Dirichlet BC bc[1:-1, -1] = FinDiff(1, dy, 1), 0 # Neumann BC pde = PDE(L, f, bc) u = pde.solve()
Result:
You can find the documentation of the code including examples of application at. | https://openbase.com/python/findiff | CC-MAIN-2021-39 | refinedweb | 1,470 | 68.47 |
There within a dataframe column or extract the dates from the text.
Here are the pandas functions that accepts regular expression:
Create Dataframe
First create a dataframe if you want to follow the below examples and understand how regex works with these pandas function
import pandas as pd df = pd.read_csv('./world-happiness-report-2019.csv')
Download Data Link: Kaggle-World-Happiness-Report-2019
Pandas extract
Extract the first 5 characters of each country using ^(start of the String) and {5} (for 5 characters) and create a new column first_five_letter
import numpy as np df['first_five_Letter']=df['Country (region)'].str.extract(r'(^w{5})') df.head()
Pandas Count
First we are counting the countries starting with character ‘F’. It returns two elements but not france because the character ‘f’ here is in lower case. you can add both Upper and Lower case by using [Ff]
S=pd.Series(['Finland','Colombia','Florida','Japan','Puerto Rico','Russia','france']) S[S.str.count(r'(^F.*)')==1]
Output:
0 Finland 2 Florida dtype: object
OR
We can use sum() function to find the total elements matching the pattern.
# Total items starting with F S.str.count(r'(^F.*)').sum()
Output: 2
In our Original dataframe we are finding all the Country that starts with Character ‘P’ and ‘p’ (both lower and upper case). Basically we are filtering all the rows which return count > 0.
df[df['Country (region)'].str.count('^[pP].*')>0]
Pandas Match
match () function is equivalent to python’s re.match() and returns a boolean value. We are finding all the countries in pandas series starting with character ‘P’ (Upper case) .
# Get countries starting with letter P S=pd.Series(['Finland','Colombia','Florida','Japan','Puerto Rico','Russia','france']) S[S.str.match(r'(^P.*)')==True]
Output: 4 Puerto Rico
Running the same match() method and filtering by Boolean value True we get all the Countries starting with ‘P’ in the original dataframe
df[df['Country (region)'].str.match('^P.*')== True]
Pandas Replace
Replaces all the occurence of matched pattern in the string. We want to remove the dash(-) followed by number in the below pandas series object. The regex checks for a dash(-) followed by a numeric digit (represented by d) and replace that with an empty string and the inplace parameter set as True will update the existing series. The output is list of countres without the dash and number.
# Remove the dash(-) followed by number from all countries in the Series S=pd.Series(['Finland-1','Colombia-2','Florida-3','Japan-4','Puerto Rico-5','Russia-6','france-7']) S.replace('(-d)','',regex=True, inplace = True)
Output:
0 Finland 1 Colombia 2 Florida 3 Japan 4 Puerto Rico 5 Russia 6 france
Pandas Findall
It calls re.findall() and find all occurence of matching patterns. We are creating a new list of countries which starts with character ‘F’ and ‘f’ from the Series. The list comprehension checks for all the returned value > 0 and creates a list matching the patterns.
S=pd.Series(['Finland','Colombia','Florida','Japan','Puerto Rico','Russia','france']) [itm[0] for itm in S.str.findall('^[Ff].*') if len(itm)>0]
**Output:** ['Finland', 'Florida', 'france']
Pandas Contains
It uses re.search() and returns a boolean value. In the below regex we are looking for all the countries starting with character ‘F’ (using start with metacharacter ^) in the pandas series object. The result shows True for all countries start with character ‘F’ and False which doesn’t.
S=pd.Series(['Finland','Colombia','Florida','Japan','Puerto Rico','Russia','france']) S.str.contains('^F.*')
0 True 1 False 2 True 3 False 4 False 5 False 6 False
In our original dataframe we will filter all the countries starting with character ‘I’ . We just need to filter all the True values that is returned by contains() function.
df[df['Country (region)'].str.contains('^I.*')==True]
Pandas Split
This is equivalent to str.split() and accepts regex, if no regex passed then the default is \s (for whitespace). Here we are splitting the text on white space and expands set as True splits that into 3 different columns
You can also specify the param n to Limit number of splits in output
s = pd.Series(["StatueofLiberty built-on 28-Oct-1886"]) s.str.split(r"s", n=-1,expand=True)
Pandas rsplit
it is equivalent to str.rsplit() and the only difference with split() function is that it splits the string from end.
Conclusion
We have seen how regexp can be used effectively with some the Pandas functions and can help to extract, match the patterns in the Series or a Dataframe. Especially when you are working with the Text data then Regex is a powerful tool for data extraction, Cleaning and validation. | https://kanoki.org/2019/11/12/how-to-use-regex-in-pandas/ | CC-MAIN-2021-04 | refinedweb | 792 | 56.05 |
On Thu, 16 Jan 2003 22:28:05 +0100, Ype Kingma <ykingma at accessforall.nl> wrote: >Afanasiy wrote: > >> Some. > >How do you know it is thrown? Upon execution of the Python script, the standard exception printout occurs at this location, denoting exactly what the error is and halting execution of the script. Traceback (most recent call last): File "C:\test.py", line 21, in ? for line, expecting in lines: TypeError: function takes exactly 0 arguments (1 given) Where, given my code it should have shown (and continued execution) : ('Foo() takes exactly 0 arguments (1 given)',) >The try/except below should catch a TypeError. >However, I'd prefer to control the namespace in which exec >is executing. You are not doing this now, so exec executes >in the namespace it happens to be in, ie. your module's >namespace. This might cause unexpected results, but >I would not expect from the code you posted. > >Adding namespace gives: > Adding a namespace as you've shown results in the same problem. At first I was confused by the print repr(namespace) printing the large text of __builtins__ every loop iteration, but I have commented that out now. The namespace suggestion is good for separation, but the TypeError is still not being caught... HOWEVER, look above, it says the error is on line 21, which is : for line, expecting in lines: Perhaps I've done something wrong there? (Weird) | https://mail.python.org/pipermail/python-list/2003-January/207861.html | CC-MAIN-2019-18 | refinedweb | 235 | 64.81 |
In my program, I am asking for the user to input a string, then pass that string to the getLength function, and then display the functions return value. However, if I did cin before referencing the getLength function, it would tell me invalid conversion. Even if I changed the parameter type to match, I am still getting the same error messages.
Now in the getLength function, I know I need to return some value... what I have in there now I know doesn't work, it's just showing that I understand I need to return some value back to the main function.
Can someone please point me in the right direction?
Thank you =)
#include <iostream> using namespace std; int *getLength(int); int main() { int townLength; int *ptr; cout << "Enter your towns name.\n"; getLength(townLength); ptr = getLength(townLength); cout << "Your town has" << ptr << "characters" << endl; delete [] ptr; ptr = 0; system ("PAUSE"); return 0; } //********************************************* // Function nameLength * // This function counts the number of * // characters in a string. * // * //********************************************* int *getLength(int) { string townName; int *ptr; cin >>townName; //Read as a string ptr = townName.size(); return ptr; } | http://www.dreamincode.net/forums/topic/164791-pass-value-from-one-function-to-another/ | CC-MAIN-2018-17 | refinedweb | 184 | 70.53 |
Grails Mock Logging
May 24, 2018
In Grails 3 applications, logging is handled by the Logback logging framework. Grails artifacts are configured with logging out of the box. The developer simply invokes
log.info("log whatever") and it works. But how do we create unit tests that assert that logs occur with the appropriate level?
This blog will highlight some mock logging techniques and when to use them.
Default Grails Logging in Unit Tests
First let's get the obvious out of the way for clarity. If you are not concerned about testing how logs operate in your application, there is no reason to worry about mocking loggers in Grails.
Since the logger is already present in Grails controllers, services, and domain objects there is no need to mock the logger when running unit tests. When a unit test is run with Grail Testing Support, the logger will execute just as in does in production mode. In this scenario, we can say the logger as already "mocked" for the purpose of focusing on the Class Under Test. This is distinct from the scenario of actually verifying logging occurs, which we will dive into next.
Verify Logging with Tests
What if we want to assert that a certain log occurs with a particular log level? For example, let's say we want to test that that the method below prints friendly advice based on the
age parameter.
package mock.logging class AgeService { void offerAgeAdvice(int age) { if (age < 0) { log.error ("You cannot be $age years old!") } else if (age in 0..<30) { log.info ("You are a young and vibrant :) Age $age") log.info ("Live life to the fullest") } else if (30 <= age) { log.warn ("It's all downhill from here, sorry. Age $age") } } }
In this scenario we need to mock the logger and capture the message passed in. Then we can assert that the message is correct and also assert the appropriate log level was used in each scenario. Conceptually, this is pretty easy. But in practice not so much.
Why Not Use Spock?
One would think that we should simply mock out the logger with
def mockLogger = Mock(Logger), then set the service in the unit test with
service.log = mockLogger. We could proceed to check the arguments passed in and also the number of times
service.log is called with spock programming. However, in Grails we run into a few basic problems while trying to mock with native spock or even Groovy MockFor.
Logback is the default framework and the Logger is
final. We cannot mock a final class. Furthermore, the injected
logproperty in Grails artefacts is a read-only property and cannot be set. These two fundamental problems prohibit spock mocks from being effective in most mock logging situations in Grails.
Use Mockito to Verify Log Events
The Mockito Library can be used to verify logs take place. With Mockito, we can create a mock
Appender class, attach it to the logger, and then use an
ArgumentCaptor to capture the invocations sent to the logger.
build.gradle
dependencies { ... testCompile "org.mockito:mockito-core:2.+" }
Spock test with Mockito
void "verify logging with mockito appender"() { when: "we attach a mocked appender to the logger" Appender mockedAppender = Mockito.mock(Appender) Logger logger = LoggerFactory.getLogger("mock.logging.AgeService") logger.addAppender(mockedAppender) service.offerAgeAdvice(22) ArgumentCaptor<Appender> argumentCaptor = ArgumentCaptor.forClass(Appender) Mockito.verify(mockedAppender, Mockito.times(2)).doAppend(argumentCaptor.capture()) logger.detachAppender(mockedAppender) then: "we capture the arguments and verify log statements occurred" argumentCaptor.getAllValues().size() == 2 List<LoggingEvent> loggingEvents = argumentCaptor.getAllValues() loggingEvents[0].getMessage() == "You are a young and vibrant :) Age 22" loggingEvents[0].getLevel() == Level.INFO loggingEvents[1].getMessage() == "Live life to the fullest" loggingEvents[1].getLevel() == Level.INFO }
Use Slf4j Test to Verify Log Events
Slf4j Test is a test implementation of Slf4j that stores log messages in memory and provides messages for retrieving them. This works nicely to substitute for the real implementation in the test environment of Grails.
In
build.gradle, we first need to depend on the jar, and then exclude the real implementation from the test environment. It is quite simple to check the logging events.
build.gradle
dependencies { ... testCompile 'uk.org.lidalia:slf4j-test:1.1.0' } configurations { testCompile.exclude group: 'ch.qos.logback', module: 'logback-classic' }
Spock test with Slf4j Test
void "verify logging with slf4j-test"() { when: TestLogger logger = TestLoggerFactory.getTestLogger("logging.AgeService") service.offerAgeAdvice(-1) ImmutableList<LoggingEvent> loggingEvents = logger.getLoggingEvents() then: loggingEvents.size() == 1 loggingEvents[0].message == "You cannot be -1 years old!" loggingEvents[0].level == uk.org.lidalia.slf4jext.Level.ERROR }
Use Spock Mocks With a Declared Logger
Spock mocks can be used on one particular case: with your own declared logger variable. If defined as non-final, you can use spock mocks to verify log calls. This approach is straightforward, but has the drawback that each class will have repeated code and does not rely on Grails conventions.
For example, use a
LoggerFactory to define a logger in a class.
package mock.logging import org.slf4j.Logger import org.slf4j.LoggerFactory /** * Non grails groovy class with a declared Slf4j logging object */ class DeclaredSlf4jService { private static Logger log = LoggerFactory.getLogger(DeclaredSlf4jService) void logSomething() { println "*********** log in the class ******" + log.dump() log.info("Live life to the fullest") }
Then a spock Specification with
Mock() follows the normal spock conventions to verify invocations and parameters.
package mock.logging import org.slf4j.Logger import spock.lang.Specification class DeclaredSlf4jServiceSpec extends Specification { DeclaredSlf4jService declaredSlf4jService = new DeclaredSlf4jService() def mockLogger = Mock(Logger) def setup() { declaredSlf4jService.log = mockLogger } def cleanup() { } void "test mock with spock on declared logger"() { when: declaredSlf4jService.logSomething() then: 1 * mockLogger.info("Live life to the fullest") } }
Sample Code
Sample code of Grails mocking logging can be found here. I hope these examples can help you decide on the best approach for your project. | https://grails.org/blog/2018-05-24.html | CC-MAIN-2020-40 | refinedweb | 972 | 50.33 |
Talk:Key:is in
Contents
is_in is pointless. Spatial Indexing
Key:is_in is a pointless waste of database space. It's a poor solution to spatial indexing, a problem which is already solved by... spatially indexed databases.
Can someone link to a mailing list discussion? I know it has been discussed at length. Also this page needs to be modified to express the pointlessness of the tag so that maybe one day people will stop using it and we can delete it from the database.
-- Harry Wood 18:56, 4 February 2010 (UTC)
-,, --Pieren 11:47, 12 February 2010 (UTC)
is_in is needed
Ppl. who say it was pointless (or even delete it!) should provide a working ersatz for navit on my 8 GB android w/o data flatrate. As apk-download-link, not via the market please. And all those addr:postcode, addr:city, addr:province, addr:district, addr:subdistrict, addr:hamlet and addr:country are also redundant and a waste of database space, why not delete them too? That'll free hundrets of megabytes! --Themroc 21:52, 21 May 2011 (BST)
- Provide-a-working-navit-as-a-apk-download-thankyou-very-much is a cheeky suggestion of course. On the whole it's up to developers to figure out sensible solutions which make their software work ...without asking mappers to fill our database with unnecessary cruft!
- is_in tags will only ever be a half solution. Why do developers want a half solution anyway? Perhaps they are hoping that is_in will be filled in everywhere. There is a better way that would achieve FULL worldwide coverage, would entail zero wasted effort by mappers, and zero extra wasteful tags: Do some spatial calculations.
- I appreciate that technically it's not all that easy. Working with administration ways and relations is a real pain at the moment. We definitely need to provide some demonstrations, and reusable code. You wouldn't want to do these calculations on-device. It'll work better as augmented planet downloads. For example a planet download with all is_in tags added by calculation would be feasible. The nominatim Pre-Indexed Data Service is arguably a step in the right direction (spatial indexing smartness, all automatically generated from planet data) In fact I can imagine that nominatim polygons could be used to set is_in tags (for an augmented planet download, not to put into the main database)
- These things will take some time. It'll probably happen quicker if people stop being distracted by half-solutions which waste lot of time and effort of the mapping community. We're not there yet, and I'm not advocating purging is_in tags and creating a short term problem for developers who are trying to use it, but please help move things in the right direction.
- -- Harry Wood 12:51, 14 August 2011 (BST)
How we stop people from using this tag
I think that the best way to stop people from using this tag, is to make sure that none of the is_in tags is actually being considered in the different seach engines, redering engines, gps-map-generators etc. Please update the list below if you know for a fact that each function does not use is_in:
* Nominatim - there is some references to it in the code. * Mapnik - unknown * Osmarender - unknown * mapgen.pl - there is some references to in in mapgen.pl * mkgmap (map for garmin) - uses is_in tags when mkgmap is run with the --location_autofill switch * Navit only works with is_in tags, can't work with boundaries.
- Why don't we provide a script to easily add
is_intags to an OSM file? In that way, tools that use
is_incan preprocess the data, and the users would stop with adding is_in tags.--Sanderd17 13:46, 18 December 2010 (UTC)
- No, I'm sorry. That would be a very, very, wrong thing to do. If you make up a piece of code that could do that, you should rather add that code to the tools that would otherwise need is_in. Any piece of information that can be calculated should be just that; calculated. The OSM database is not some place one should cache the results of such calculations. --Gorm 23:04, 20 December 2010 (UTC)
- I believe he meant to use the resulting osm file only locally, say extract -> is_in adder -> navit, i.e. without any intention of uploading it back. Would cover the few cases where the otherwise valuable software can not be easily extended to handle boundaries, and where the area being processed does contain valid boundaries - many parts of the world still don't have anything else besides the admin_level=2 boundaries. If all the boundary data is to be gathered from the extract being processed, some tools can not identify where something "is in" within their current pipelines, when they process the extract by iterating through the file in one go; the file would have to be read through once before the normal operation to construct the boundaries. (I don't know enough about Navit to say if it's a good example here).
- If such a utility existed, the coder should investigate and validate ways to malform the data enough to make the api reject it, if someone were to try to (edit and) upload the resulting file; maybe dropping the version attribute would do it, it's not AFAIK used by any data consumers. Boundaries are better, but if they are not available, at early stages of mapping a new place the only option is to identify where each feature near the border belongs to; the mapper doesn't yet know that for the next feature (house, road, ..) so he can't draw a boundary there - not yet. Alv 06:55, 21 December 2010 (UTC)
- I think that, if you say on the wiki that is_in tags can easily be added, no user will upload it. You have to work a while on OSM before you can download a dataset, execute such a script on it and upload it back. I think against that time, a user will know that it's not a good thing to do. But still, it's not easy to switch from boundaries (in all forms: relations, closed ways and some boundaries under construction) to is_in tags. And you need some kind of "is_in" information for performance. If you look at the navit trunk [1], you see that they worked on boundary support until about 7 months ago. I haven't contacted martin-s, but since he's still active in the project, I guess he stopped the boundary support because it was too difficult. --Sanderd17 08:02, 11 January 2011 (UTC)
Clean up
- Shouldn't this page be changed (change all "isin" tags with "is_in") and removed from the proposed features set? Seems that it is approved (it's on the map features page but I can't find any final approval). -- Cimm 8 Nov 2006
- Secondly the "place=country name=denmark isin=europe,scandanavia" should be removed, it's a bad example (wrong tag, europe should be after scandinavia) and Scandinavia is ambigous here. As far as I know it's not a political entity but a collection of countries like "the Baltic states or the Benelux". Do we add these kind of classifications as well? -- Cimm 8 Nov 2006
- The name tag is used in the examples... when do we need to use the name tag and when the place_name tag or do we use both? -- Cimm 8 Nov 2006
- As I understand it place_name would only be used to identify a particular feature as being near a particular place, and as such is probably depreciated by the "is_in" tag.Dmgroom 13:26, 8 November 2006 (UTC)
- I have an issue with the correct spelling for this tag: All the examples given use a colon as a separator between values, but the FAQ says that multiple values for a tag should be separated by semicolons. Which is correct?--Guenter 20:51, 8 November 2008 (UTC)
- I agree with Guenter. All other tags have the mutiple values separated by semicolons (;) and the FAQ says "the semicolon is the only accepted character". More than that, editors and tools handling data merge values in the same manner and have been doing so for quite some time. This is easily visible in Potlatch if one merges two ways with different values for the same tag. I have looked over the data in Romania and is_in is the ONLY tag that uses commas as separators while for other tags the semicolon is used consistently. EddyP Fri, 17 Jul 2009 10:32:35 +0000
- Not imposing a certain order or at least recommending one makes the information useless. The same goes for the meaning of that information, if there's no agreement on what the tag means, then ALL the information associated with that tag is useless. The first examples say that both "USA, CA, San Francisco" and "San Fancisco, CA, USA" are valid, how about we change that so we make sure smaller entities are first, larger ones going last. Also the attempt to include Canberra into some abstract group is useless and would probably better done with another dedicated tag (or a relation?). EddyP Fri, 17 Jul 2009 10:41
is_in
- What to add as a value? Only political entities or more general stuff as well? Do we add provinces, countries, continents, political unions, geographical collections (like Scandinavia or the Benelux)? Is Europe the political Europe or the geographical Europe? -- Cimm 8 Nov 2006
How about some examples - say whether you think each one is a good idea to mark. Add some more examples too.
Referring to Dmgrooms remark about "the" England node. It's good question, how can you know where to find these kind of nodes and if they exsist or not. Maybe we need some API to look for all nodes in this bounding box with these tags? Or they could have another node color in JSOM (maybe some plugin or an extension for the mappaint plugin), this last idea would only work on city level off course? -- Cimm 9 Nov 2006
I'd say: Use only ISO 3166 2-letter-codes for toplevel. If someone really needs to know which continent "cn" belongs to, it can easily be found out per software (there are only 249 of them). --Themroc 21:26, 21 May 2011 (BST)
Localization
This tag should be localizated, as the place names are localizated. For example, for Madrid, in Spain, the tag should be "is_in=Comunity of Madrid, Spain, Europe", and "is_in:es=Comunidad de Madrid, España, Europa".
isin namespace
I want to bring up a previous suggestion I saw (on osm-talk, I think) to use a namespace, e.g. isin:county=Shropshire, isin:country=United Kingdom. Exactly what namespaces are used is, of course, debatable.
The reason is I want to use a kind of place selection on my website to show stuff in a particular area. Currently is_in defines a set of tags so it's not so easy to derive from this countries, counties, etc, on its own. I've got round this partially in an import script I've written where it finds countries using isin:country then associates places with countries where they have that country name in their is_in tag.
Another method I can use is to simply provide tags and by selecting more tags one can narrow down the number of places. More 'web2.0' than dropdown lists but not quite as easy to program. Still, it doesn't solve the problem if people do want to list counties in England, for example...
BTW, I've got a script ready for importing places into MySQL from the planet dump every week. I'm currently not doing anything with it (haven't programmed anything that needs the data just yet) but I can provide dumps in various formats if anyone wants them (probably not doable weekly but I'll have a go).
Higgy 19:12, 5 November 2007 (UTC)
is_in:country - what value?
I am currently implementing support for "is_in:country" and "is_in" in an algorithm to determine the country a way is in for Traveling Salesman in order to determine what national traffic-regulations to use. (It is a shortcut to avoid loading the border-polygon and to make ways explicitely tagged to be in a given country overrule an inprecide border-polygon).
What should I compare the tagged value against? Currently I compare first against the english name of the country (case-insensitive), then against the ISO3166 (uppercase, case-sensitive). The later may give false positives in the is_in -tag (e.g. a city-name containing a valid ISO-country-code as a substring.) What about localized names? If so, what localizations and what about countries with more then one language and more then one alphabet or more then one way to romanize(translate into latin characters) the name?
--MarcusWolschon 08:30, 26 March 2009 (UTC)
is_in:city
I need to find out if a given way is inside a city/town/village. Are there any suggestions on how to do this and avoid handling "Europe", "Europa", the english or local name of the country/state/geographic area like a city? --MarcusWolschon 13:29, 27 March 2009 (UTC) | http://wiki.openstreetmap.org/wiki/Talk:Proposed_features/Placename_hierachies | CC-MAIN-2017-09 | refinedweb | 2,223 | 69.41 |
How to: Create a Console Application
Last modified: March 19, 2010
Applies to: SharePoint Foundation 2010
This programming task describes how to create a console application in Microsoft Visual Studio 2010 that displays the number of lists within a site collection.
Users must be administrators on the computer where a console application is executed in order to run the application in the context of Microsoft SharePoint Foundation.
To create a console application in Visual Studio
On the File menu in Microsoft Visual Studio, point to New and then click Project..
In Solution Explorer, right-click the References node, and then click Add Reference on the shortcut menu.
On the .NET tab of the Add Reference dialog box, select Microsoft.SharePoint, and then click OK.
In Solution Explorer, right-click the console application and click Properties. In the Project property page, select Application and set the target framework to .NET Framework 3.5, and then select Build and set the platform target to x64.
In the default .vb or .cs file, add a using directive for the Microsoft.SharePoint namespace, as follows.
Add the following code to the Main method in the .vb or .cs file.
Overloads Sub Main(args() As String) Using siteCollection As New SPSite("") Dim sites As SPWebCollection = siteCollection.AllWebs Dim site As SPWeb For Each site In sites Try Dim lists As SPListCollection = site.Lists Console.WriteLine("Site: {0} Lists: {1}", site.Name, lists.Count.ToString()) Finally If site IsNot Nothing Then site.Dispose() End If End Try Next site End Using Console.Write("Press ENTER to continue") Console.ReadLine() End Sub 'Main
static void Main(string[] args) { using (SPSite siteCollection = new SPSite("")) { SPWebCollection sites = siteCollection.AllWebs; foreach (SPWeb site in sites) { try { SPListCollection lists = site.Lists; Console.WriteLine("Site: {0} Lists: {1}", site.Name, lists.Count.ToString()); } finally { if (site != null) site.Dispose(); } } } Console.Write("Press ENTER to continue"); Console.ReadLine(); }
Click Start on the Debug menu or press F5 to run the example. | https://msdn.microsoft.com/en-us/library/ms438026.aspx | CC-MAIN-2015-22 | refinedweb | 328 | 59.8 |
Hi Berin,
I am not Ola Berg who you are responding to, but I'm interested by all
those different
points of view, so ...
>Keep in mind the Configuration abstraction in Avalon has been
>around since 1999. It is largely unchanged (with the exception
>of the package name) since then.
>
>Why doom yourselves to repeat the effort?
>
I promise that is it what I try to avoid ... I've looked at the (very)
few systems
available (== opensource) that would allow me to do what I need to and none
satisfied me (or I didn't find a way to do what I need with them).
Maybe I need more info/understanding of how other systems work.
You probably can tell me how to do with Avalon.
So, let's take a simple configuration file (let's say I CAN'T change it,
not the layout at
least). This configuration file would have 3 fields : the line number,
a String field representing, say, the nickname of an object, another
String field, say,
the classname of an object. This would result as a configuration file
that looks like :
#LINENUMBER ........ NICKNAME ........ CLASSNAME (the dots stand for tabs)
1 ...... config ...... org.foo.bar.Configuration
2 ...... field ...... org.foo.bar.Field
3 ...... entry ...... org.foo.bar.Entry
This is a very dumb example, but other configuration files could at
least LOOK the same,
and some of the ones I deal with do.
What would I need to handle this with Avalon ?
How could I access the values it contains ?
From what I see in Avalon's Configuration API, I would use
getValueAsInteger() for the first,
and simply getValue() for the others ... This is what in commons'
Configuration, it's simple
and I understand that. I get I would get the Field's value by a
getField(String name) equivalent (getChild() ?)
Now, for the layout ?
Would I need to extend DefaultConfigurationBuilder to handle the config
file ?
What if another file has another layout ?
What if I have to deal with 10 files with 10 different layouts ?
This is a very technical info that I'm missing ... I don't care if the
package is from this or that other
package, as long as it does what I want it to do ... If I find something
else that helps me to do it in a
way that is satisfying, no problem, I'll use that. I just didn't find
that system till now.
If you could tell me technically how I could handle the above file, I
may simply use Avalon's system
instead of trying to build my own one.
I hope that it is somehow separated from Avalon (because my app doesn't
want to be component-oriented) ?
>The point is that I think you are going down the path of
>more work than necessary for this. I haven't needed anything
>more than an object receiving a Configuration object to do
>its work.
>
I would love to avoid some of the work !
Yep, an object receiving a Configuration is good (at least it's what
Avalon does).
>If you like a utility, and use it, do you really care where
>it comes from?
>
No.
>I would like to see how your quite complex config system would
>be better than the current simplistic approach used in Avalon.
>
>Simplicity makes things easier to use and understand. Complexity
>kills.
>
Agreed at 100%.
My system (I don't know about Ola's) uses simple beans and Betwixt to
map the
descriptor to the Object Model. I think most people can cope with that.
The XML descriptor (not the configuration, the description of the
configuration)
uses simple structures, a la Ant, and no namespace, but properties (like
${foo.bar}),
and that's about all. The reading / writing of the actual config files
is meant to be
hidden from the user.
>I am trying to trace a bug in Visual C++ STL which, purely syntactically
>speaking, should work. The problem is all the black magic that goes
>on under the scenes that is causing an Access Violation Exception
>where one should never exist.
>
>I have refactored a number of systems that where too complex when
>they started out, forcing the user to know too much. I like simplicity
>because I don't have to waste 12 hours (my last Friday) trying to
>chase down a bug that by all other means should not exist.
>
This is why I want to avoid letting the user to define his own elements,
field types,
etc. The core will be the simplest possible (I don't want to get lost
myself !).
>I should not have to debug Microsoft code to fix my system. I have a
>feeling that where you are starting will force users to debug commons
>code to fix their system when by all other means it should work.
>
>It's a slippery slope. Perhaps its uncertainty on my part, I haven't
>seen your work so I can't say anything one way or the other about it.
>My humble suggestion to you is to start simple, and add on from there.
>
I start simple, and hopefully will stay simple. I just hate when I have
to spend hours
understanding things that should be simple. My ideal goal is to have
people understand
it just by reading the XML descriptor.
Thanks,
Stéphane
_________________________________________________________
Do You Yahoo!?
Get your free @yahoo.com address at
--
To unsubscribe, e-mail: <mailto:commons-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:commons-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/commons-dev/200206.mbox/%3C3D0DE04B.1050602@yahoo.fr%3E | CC-MAIN-2018-47 | refinedweb | 926 | 73.37 |
Android: Initialize View child class object from a XML file
Posted by Dimitri | Nov 16th, 2011 | Filed under Programming
This tutorial shows how to inherit from the View class to create your own customized View element, and how to initialize an object from this child class with parameters defined at a layout XML file. There are two major advantages in this approach over initializing the extended View object at the Activity.
The first is that the Java part of your code remains clean, without a lot of member variable assignments to set the View’s basic parameters. The second and most important, is the possibility to instantly preview the changes at the Graphical Layout tool in Eclipse without the need to execute the application. This means that the space this customized View element takes on the screen can be immediately be seen and adjusted, if necessary.
An example Eclipse project is available at the end of the post. This has been tested both on an emulator and a real device both running Android 2.0.
So, let’s start by creating a class that inherits from the View class, like this one:
package fortyonepost.com.cvxml; import android.content.Context; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import android.util.AttributeSet; import android.view.View; public class CustomView extends View { //variables to hold the custom attribute values private String redText; private int value; //a paint object to render the redText private Paint paint; public CustomView(Context context, AttributeSet attrs) { super(context, attrs); //initialize the paint object paint = new Paint(); //set the color to red paint.setColor(Color.RED); //initialize the redText String with the attribute with the same name at the XML file redText = attrs.getAttributeValue(null, "redText"); //initialize the value integer with the attribute with the same name at the XML file value = attrs.getAttributeIntValue(null, "value", 0); } @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); //render the redText at the specified position canvas.drawText(redText, 10, 100, paint); //render the integer variable value canvas.drawText("Value: " + Integer.toString(value), 10, 140, paint); } }
This simple View child has three member variables: a String and an integer that will later be used to hold the custom values defined at the XML layout file. And also, the Paint object is being initialized. This object is responsible for setting the render settings in the Canvas. For this example, it will set the text to red (lines 13 through 17).
Next, there’s the CustomView constructor, which is inherited from the View class and is the is the most critical part of this code (lines 19 through 32). It’s necessary to implement one of the View constructors that takes an AttributeSet object as a parameter, otherwise it won’t be possible to inflate the CustomView from a XML file. In Eclipse, it’s a very easy task to accomplish, just right click anywhere inside the class and select Source->Generate Constructors from Superclass. A list of constructors from the View class to be implement will be shown; select the one that takes the Content and the AttributeSet objects as parameters.
In the next dialog, select one of the constructors that takes the AttributeSet object as a parameter.
The constructor also initializes the Paint object, sets its color to red, obtains custom fields from the layout XML file and stores then into the redText and value variables (lines 24 through 31). This custom fields are just attributes that can be added to the XML file and they hold just about any type of data we want. However, it’s wise to use only boolean, integer, float or string values as attributes, since methods to obtain these types of data from the layout XML are already available at objects from the AttributeSet class. In the above example, just take a look at lines 29 and 31, where, respectively a string and an integer are being obtained from the XML file and stored into member variables.
There’s no reason to define an extended View class if it doesn’t display anything, so that’s the reason why this code also overrides the onDraw() method, which is responsible for the rendering. In this example, the string and the integer obtained from the custom XML attributes are being rendered on the screen using the Paint object.
That’s about it for the View class child. As for the XML, all that’s required is to define the CustomView parameters, like this:
<?xml version="1.0" encoding="utf-8"?> <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <fortyonepost.com.cvxml.CustomView android: </LinearLayout>
The only thing one needs to watch out for is not to forget to include the namespace and the package name and not just the class name. The custom XML attributes are being defined at lines 13 and 14.
Finally, an object of the CustomView class must be created at an Activity. It must also inflate the CustomView with the parameters defined at the XML file, like this:
package fortyonepost.com.cvxml; import android.app.Activity; import android.os.Bundle; public class CustomViewXMLActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); } }
And that’s all there is to it. When executed, the above example application outputs the following:
Example project screenshot.
Downloads
Be the first to leave a comment! | http://www.41post.com/4477/programming/android-initialize-view-child-class-object-from-a-xml-file | CC-MAIN-2020-16 | refinedweb | 906 | 54.12 |
Detailed Description
This namespace contains functions to handle crashes.
It allows you to set a crash handler function that will be called when your application crashes and also provides a default crash handler that implements the following functionality:
- Launches the KDE crash display application (DrKonqi) to let the user report the bug and/or debug it.
- Calls an emergency save function that you can set with setEmergencySaveFunction() to attempt to save the application's data.
- Autorestarts your application.
- Note
- All the above features are optional and you need to enable them explicitly. By default, the defaultCrashHandler() will not do anything. However, if you are using KApplication, it will by default enable launching DrKonqi on crashes, unless the –nocrashhandler argument was passed on the command line or the environment variable KDE_DEBUG is set to any value.
Typedef Documentation
Enumeration Type Documentation
Function Documentation
Returns the installed crash handler.
- Returns
- the crash handler
Definition at line 362 of file kcrash.cpp.
The default crash handler.
Do not call this function directly. Instead, use setCrashHandler() to set it as your application's crash handler.
- Parameters
-
- Note
- If you implement your own crash handler, you will have to call this function from your implementation if you want to use the features of this namespace.
Definition at line 381 of file kcrash.cpp.
Returns the currently set emergency save function.
- Returns
- the emergency save function
Definition at line 163 of file kcrash.cpp.
This does nothing if $KDE_DEBUG is set.
Call this in your main() to ensure that the crash handler is always launched.
- Since
- 5.15
Definition at line 125 of file kcrash.cpp.
Returns true if DrKonqi is set to be launched from the crash handler or false otherwise.
- Since
- 4.5
Definition at line 263 of file kcrash.cpp.
Install a function to be called when a crash occurs.
A crash occurs when one of the following signals is caught: SIGSEGV, SIGBUS, SIGFPE, SIGILL, SIGABRT.
- Parameters
-
- Note
- If you use setDrKonqiEnabled(true), setEmergencySaveFunction(myfunc) or setFlags(AutoRestart), you do not need to call this function explicitly. The default crash handler is automatically installed by those functions if needed. However, if you set a custom crash handler, those functions will not change it.
Definition at line 307 of file kcrash.cpp.
Enables or disables launching DrKonqi from the crash handler.
By default, launching DrKonqi is enabled when QCoreApplication is created. To disable it:
- Note
- It is the crash handler's responsibility to launch DrKonqi. Therefore, if no crash handler is set, this method also installs the default crash handler to ensure that DrKonqi will be launched.
- Since
- 4.5
Definition at line 234 of file kcrash.cpp.
Installs a function which should try to save the application's data.
- Note
- It is the crash handler's responsibility to call this function. Therefore, if no crash handler is set, the default crash handler is installed to ensure the save function will be called.
- Parameters
-
Definition at line 149 of file kcrash.cpp.
Set options to determine how the default crash handler should behave.
- Parameters
-
Definition at line 193 of file kcrash.cpp.
Documentation copyright © 1996-2019 The KDE developers.
Generated on Mon Mar 18 2019 22:40:56 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/frameworks/kcrash/html/namespaceKCrash.html | CC-MAIN-2019-13 | refinedweb | 553 | 58.48 |
Hi everyone. Im very much new to this forum and would really appreciate to have help all around.
Im doing a project with C language to control a radio using a remote control. I generally have 5 buttons on the remote of which 1 of the buttons need to have a single press (mute function) and a long press (turn ON/OFF) as well.
The problem that I am currently facing is that although I've tried to do several ways, such as for loops, while loops, if else etc.. but non works.
It seems that it always goes into the else if loop instead of if and/or else, even though the counter is set at the correct number.[ATTACH]12369[ATTACH]12369Code:void construct(void) { #if 1 //assuming all variables are properly declared in unsigned char { if ((key > 205) && (key < 235)) //button 1 { count++; if ((count > 1) && (count <= 51)) //short press { d1 = 0xXX; d2 = 0xYY; } else if ((count > 51) && (count <=255)) //long press { d1 = 0xXX; d2 = 0xYY; } else { } } else if ((key > 155) && (key < 173)) //button 2 { d1 = 0xXX; d2 = 0xYY; } else if ((key > 80) && (key < 141)) //button 3 { d1 = 0xXX; d2 = 0xYY; } else if ((key > 47) && (key < 78)) //button 4 { d1 = 0xXX; d2 = 0xYY; } else if ((key >= 0) && (key < 15)) //button 5 { d1 = 0xXX; d2 = 0xYY; } else { } } #endif }
Could anyone please help and see if there is anything that I missed?
gumasumnida. | https://cboard.cprogramming.com/c-programming/153282-one-button-have-two-values.html | CC-MAIN-2017-09 | refinedweb | 234 | 64.27 |
Syntax Definitions
Sublime Text can use both .sublime-syntax and .tmLanguage files for syntax highlighting. This document describes .sublime-syntax files.
Overview
Sublime Syntax files are YAML files with a small header, followed by a list of contexts. Each context has a list of patterns that describe how to highlight text in that context, and how to change the current text.
Here's a small example of a syntax file designed to highlight C.
%YAML 1.2 --- name: C file_extensions: [c, h] scope: source.c contexts: main: - match: \b(if|else|for|while)\b scope: keyword.control.c
At its core, a syntax definition assigns scopes (e.g.,
keyword.control.c) to areas of the text. These scopes are used by color schemes to highlight the text.
This syntax file contains one context,
main, that matches the words
[if, else, for, while], and assigns them the scope
keyword.control.c. The context name
main special: every syntax must define a main context, as it will be used at the start of the file.
The match key is a regex, supporting features from the Oniguruma regex engine. In the above example,
\b is used to ensure only word boundaries are matched, to ensure that words such as
elsewhere are not considered keywords.
Note that due to the YAML syntax, tab characters are not allowed within .sublime-syntax files.
Header
The allowed keys in the header area are:
- name
This defines the name shown for the syntax in the menu. It's optional, and will be derived from the file name if not used.
- file_extensions
A list of strings, defining file extensions this syntax should be used for. Extensions listed here will be shown in file dialog dropdowns on some operating systems.
If a file does not have a basename, e.g. .gitignore, the entirety of the filename including the leading
.should be specified.
- hidden_file_extensions 4075
A list of strings, also defining file extensions this syntax should be used for. These extensions are not listed in file dialogs.
- first_line_match
When a file is opened without a recognized extension, the first line of the file contents will be tested against this regex, to see if the syntax should be applied.
- scope
The default scope assigned to all text in the file
- version 4075
An integer, either
1or
2, that controls backwards compatibility. New syntaxes should target
2, as it fixes some inconsistencies in how scopes are applied.
- extends 4075
A string of a base syntax this syntax should inherit from. The base syntax must be specified using its package path, e.g. Packages/JavaScript/JavaScript.sublime-syntax. See Inheritance for an overview of syntax inheritance.
- hidden
Hidden syntax definitions won't be shown in the menu, but can still be assigned by plugins, or included by other syntax definitions.
Contexts
For most languages, you'll need more than one context. For example, in C, we don't want a
for word in the middle of a string to be highlighted as a keyword. Here's an example of how to handle this:
%YAML 1.2 --- name: C file_extensions: [c, h] scope: source.c contexts: main: - match: \b(if|else|for|while)\b scope: keyword.control.c - match: '"' push: string string: - meta_scope: string.quoted.double.c - match: \\. scope: constant.character.escape.c - match: '"' pop: true
A second pattern has been added to the main context that matches a double quote character (note that
'"' is used for this, as a standalone quote would be a YAML syntax error), and pushes a new context,
string, onto the context stack. This means the rest of the file will be processing using the string context, and not the main context, until the string context is popped off the stack.
The string context introduces a new key: meta_scope. This will assign the
string.quoted.double.c scope to all text while the
string context is on the stack.
While editing in Sublime Text, you can check what scopes have been applied to the text under the caret by pressing control+shift+p (Mac) or ctrl+alt+shift+p (Windows/Linux).
The
string context has two patterns: the first matches a backslash character followed by any other, and the second matches a quote character. Note that the last pattern specifies an action when an unescaped quote is encountered, the string context will be popped off the context stack, returning to assigning scopes using the main context.
When a context has multiple patterns, the leftmost one will be found. When multiple patterns match at the same position, the first defined pattern will be selected.
Meta Patterns
- meta_scope
This assigns the given scope to all text within this context, including the patterns that push the context onto the stack and pop it off.
- meta_content_scope
As above, but does not apply to the text that triggers the context (e.g., in the above string example, the content scope would not get applied to the quote characters).
- meta_include_prototype
Used to stop the current context from automatically including the
prototypecontext.
- clear_scopes
This setting allows removing scope names from the current stack. It can be an integer, or the value
trueto remove all scope names. It is applied before meta_scope and meta_content_scope. This is typically only used when one syntax is embedding another.
- meta_prepend 4075
A boolean, controlling context name conflict resolution during inheritance. If this is specified, the rules in this context will be inserted before any existing rules from a context with the same name in an ancestor syntax definition.
- meta_append 4075
A boolean, controlling context name conflict resolution during inheritance. If this is specified, the rules in this context will be inserted after to any existing rules from a context with the same name in an ancestor syntax definition.
Meta patterns must be listed first in the context, before any match or include patterns.
Match Patterns
A match pattern can include the following keys:
- match
The regex used to match against the text. YAML allows many strings to be written without quotes, which can help make the regex clearer, but it's important to understand when you need to quote the regex. If your regex includes the characters
#,
:,
-,
{,
[or
>then you likely need to quote it. Regexes are only ever run against a single line of text at a time.
- scope
The scope assigned to the matched text.
- captures
A mapping of numbers to scope, assigning scopes to captured portions of the match regex. See below for an example.
- push
The contexts to push onto the stack. This may be either a single context name, a list of context names, or an inline, anonymous context.
- pop
Pops contexts off the stack. The value
truewill pop a single context.
An integer greater than zero will pop the corresponding number of contexts. 4050
The pop key can be combined with push, set, embed and branch. When combined, the specified number of contexts will be popped off of the stack before the other action is performed. For push, embed and branch actions, the pop treats the match as if it were a lookahead, which means the match will not receive the meta_scope of the contexts that are popped. 4075
- set
Accepts the same arguments as push, but will first pop this context off, and then push the given context(s) onto the stack.
Any match will receive the meta_scope of the context being popped and the context being pushed.
- embed 3153
Accepts the name of a single context to push into. While similar to push, it pops out of any number of nested contexts as soon as the escape pattern is found. This makes it an ideal tool for embedding one syntax within another.
- escape
This key is required if embed is used, and is a regex used to exit from the embedded context. Any backreferences in this pattern will refer to capture groups in the match regex.
- embed_scope
A scope assigned to all text matched after the match and before the escape. Similar in concept to meta_content_scope.
- escape_captures
A mapping of capture groups to scope names, for the escape pattern. Use capture group
0to apply a scope to the entire escape match.
- branch 4050
Accepts the names of two or more contexts, which are attempted in order. If a fail action is encountered, the highlighting of the file will be restarted at the character where the branch occured, and the next context will be attempted.
- branch_point
This is the unique identifier for the branch and is specified when a match uses the fail action.
The branch action allows for handling syntax constructs that are ambiguous, and also allows handling constructs that span multiple lines.
For ideal performance, the contexts should be listed in the order of how likely they are to be accepted. Note: because highlighting with branches requires reprocessing an entire branch upon each change to the document, the highlighting engine will not rewind more than 128 lines when a fail occurs.
- fail 4050
Accepts the name of a branch_point to rewind to and retry the next context of. If a fail action specifies a branch_point that was never pushed on the stack, or has already been popped off of the stack, it will have no effect.
The following keys control behavior that is exclusive, and only one can be specified per match pattern:
- push
- pop <4075
- set
- embed 3153
- branch 4050
- fail 4050
Match Examples
A basic match assigning a single scope to the entire match:
- match: \w+ scope: variable.parameter.c++
Assigning different scopes to the regex capture groups:
- match: ^\\s*(#)\\s*\\b(include)\\b captures: 1: meta.preprocessor.c++ 2: keyword.control.include.c++
Pushing into another context named
function-parameters:
- match: \b\w+(?=\() scope: entity.name.function.c++ push: function-parameters
Popping out of a context:
- match: \) scope: punctuation.section.parens.end.c++ pop: true
Popping out of the current context and pushing into another:
- match: \} scope: punctuation.section.block.end.c++ set: file-global
Embedding another syntax 3153:
- match: (```)(js|javascript) captures: 1: punctuation.section.code.begin.markdown 2: constant.other.markdown embed: scope:source.js embed_scope: meta.embedded.js.markdown escape: ^``` escape_captures: 0: punctuation.section.code.end.markdown
Using branch to attempt one highlighting, with the ability to fallback to another. 4050:
expression: - match: (?=\() branch_point: open_parens branch: - paren_group - arrow_function paren_group: - match: \( scope: punctuation.section.parens.begin.js push: - include: expressions - match: \) scope: punctuation.section.parens.begin.js set: - match: => fail: open_parens - match: (?=\S) pop: 2 arrow_function: - match: \( scope: punctuation.section.parens.begin.js push: - match: \w+ scope: variable.parameter.js - match: ',' scope: punctuation.separator.comma.js - match: \) scope: punctuation.section.parens.begin.js set: - match: => scope: storage.type.function.arrow.js push: arrow_function_body
Using pop with another action. 4075:
paragraph: - match: '(```)(py|python)' captures: 1: punctuation.definition.code.begin.md 2: constant.other.language-name.md pop: 1 embed: scope:source.python embed_scope: source.python.embedded escape: ^``` escape_captures: 0: punctuation.definition.code.end.md
Include Patterns
Frequently it's convenient to include the contents of one context within another. For example, you may define several different contexts for parsing the C language, and almost all of them can include comments. Rather than copying the relevant match patterns into each of these contexts, you can include them:
expr: - include: comments - match: \b[0-9]+\b scope: constant.numeric.c ...
Here, all the match patterns and include patterns defined in the comments context will be pulled in. They'll be inserted at the position of the include pattern, so you can still control the pattern order. Any meta patterns defined in the comments context will be ignored.
Including an External Prototype 4075
When including a context from another syntax, it may be desirable to also include any applicable prototype from that syntax. By default, an include pattern does not include such a prototype. If the key/value pair
apply_prototype: true is added to the include pattern, the context does not specify
meta_include_prototype: false and the other syntax has a
prototype context, then those patterns will also be included.
tags: - include: scope:source.html.basic apply_prototype: true
Prototype Context
With elements such as comments, it's so common to include them that it's simpler to make them included automatically in every context, and just list the exceptions instead. You can do this by creating a context named
prototype, it will be included automatically at the top of every other context, unless the context contains the meta_include_prototype key. For example:
prototype: - include: comments string: - meta_include_prototype: false ...
In C, a
/* inside a string does not start a comment, so the string context indicates that the prototype should not be included.
Including Other Files
Sublime Syntax files support the notion of one syntax definition including another. For example, HTML can contain embedded JavaScript. Here's an example of a basic syntax defintion for HTML that does so:
scope: text.html contexts: main: - match: <script> push: Packages/JavaScript/JavaScript.sublime-syntax with_prototype: - match: (?=</script>) pop: true - match: "<" scope: punctuation.definition.tag.begin - match: ">" scope: punctuation.definition.tag.end
Note the first rule above. It indicates that when we encounter a
<script> tag, the main context within JavaScript.sublime-syntax should be pushed onto the context stack. It also defines another key, with_prototype. This contains a list of patterns that will be inserted into every context defined within JavaScript.sublime-syntax. Note that with_prototype is conceptually similar to the
prototype context, however it will be always be inserted into every referenced context irrespective of their meta_include_prototype key.
In this case, the pattern that's inserted will pop off the current context while the next text is a
</script> tag. Note that it doesn't actually match the
</script> tag, it's just using a lookahead assertion, which plays two key roles here: It both allows the HTML rules to match against the end tag, highlighting it as-per normal, and it will ensure that all the JavaScript contexts will get popped off. The context stack may be in the middle of a JavaScript string, for example, but when the
</script> is encountered, both the JavaScript string and main contexts will get popped off.
Note that while Sublime Text supports both .sublime-syntax and .tmLanguage files, it's not possible to include a .tmLanguage file within a .sublime-syntax one.
Another common scenario is a templating language including HTML. Here's an example of that, this time with a subset of Jinja:
scope: text.jinja contexts: main: - match: "" push: "Packages/HTML/HTML.sublime-syntax" with_prototype: - match: "{{" push: expr expr: - match: "}}" pop: true - match: \b(if|else)\b scope: keyword.control
This is quite different from the HTML-embedding-JavaScript example, because templating languages tend to operate from the inside out: by default, it needs to act as HTML, only escaping to the underlying templating language on certain expressions.
In the example above, we can see it operates in HTML mode by default: the main context includes a single pattern that always matches, consuming no text, just including the HTML syntax.
Where the HTML syntax is included, the Jinja syntax directives (
{{ ... }}) are included via the with_prototype key, and thus get injected into every context in the HTML syntax (and JavaScript, by transitivity).
Variables
It's not uncommon for several regexes to have parts in common. To avoid repetitious typing, you can use variables:
variables: ident: '[A-Za-z_][A-Za-z_0-9]*' contexts: main: - match: '\b{{ident}}\b' scope: keyword.control
Variables must be defined at the top level of the .sublime-syntax file, and are referenced within regxes via
{{varname}}. Variables may themselves include other variables. Note that any text that doesn't match
{{[A-Za-z0-9_]+}} won't be considered as a variable, so regexes can still include literal
{{ characers, for example.
Inheritance 4075
In situations where a syntax is a slight variant of another, with some additions or changes, inheritance is a useful tool.
When inheriting a syntax, the key extends is used with a value containing the packages path to the parent syntax. The packages path will start with Packages/ and will contain the package name and syntax filename. For example:
%YAML 1.2 --- name: C++ file_extensions: [cc, cpp] scope: source.c++ extends: Packages/C++/C.sublime-syntax
A syntax using inheritance will inherit the variables and contexts values from its parent syntax. All other top-level keys, such as file_extensions and scope will not be inherited.
Variables
When extending a syntax, the variables key is merged with the parent syntax. Variables with the same name will override previous values.
Variable substitution is performed after all variable values have been realized. Thus, an extending syntax may change a variable from a parent syntax, and all usage of the variable in the parent contexts will use the overridden value.
Contexts
The contexts in an extending syntax will be a combination of the contexts from the parent syntax, and all those defined under the contexts key.
Contexts with the same name will override contexts from the parent syntax. To change the behavior when a context name is duplicated, two options are available. These meta key must be specified in the extending syntax:
- meta_prepend: true
all of the patterns in the extending syntax will be inserted before those in the parent syntax.
- meta_append: true
all of the patterns in the extending syntax will be inserted after those in the parent syntax.
Multiple Inheritance 4086
When a syntax is derived from a combination of two other syntaxes, multiple inheritance may be used. This allows the key extends to be a list of packages paths to two or more parent syntaxes. The parent syntaxes will be processed in order, from top to bottom, and must be derived from the same base.
Two examples of multiple inheritance in the default syntaxes are:
- Objective-C++: extends C++ and Objective-C, both which extend C
- TSX: extends JSX and TypeScript, both which extend JavaScript
Limitations
A syntax may extend a syntax that itself extends another syntax. There are no enforced limits on extending, other than that all syntaxes must share the same version.
Selected Examples
Bracket Balancing
This example highlights closing brackets without a corresponding open bracket:
name: C scope: source.c contexts: main: - match: \( push: brackets - match: \) scope: invalid.illegal.stray-bracket-end brackets: - match: \) pop: true - include: main
Sequential Contexts
This example will highlight a C style for statement containing too many semicolons:
for_stmt: - match: \( set: for_stmt_expr1 for_stmt_expr1: - match: ";" set: for_stmt_expr2 - match: \) pop: true - include: expr for_stmt_expr2: - match: ";" set: for_stmt_expr3 - match: \) pop: true - include: expr for_stmt_expr3: - match: \) pop: true - match: ";" scope: invalid.illegal.stray-semi-colon - include: expr
Advanced Stack Usage
In C, symbols are often defined with the
typedef keyword. So that Goto Definition can pick these up, the symbols should have the
entity.name.type scope attached to them.
Doing this can be a little tricky, as while typedefs are sometimes simple, they can get quite complex:
typedef int coordinate_t; typedef struct { int x; int y; } point_t;
To recognise these, after matching the typedef keyword, two contexts will be pushed onto the stack: the first will recognise a typename, and then pop off, while the second will recognise the introduced name for the type:
main: - match: \btypedef\b scope: keyword.control.c set: [typedef_after_typename, typename] typename: - match: \bstruct\b set: - match: "{" set: - match: "}" pop: true - match: \b[A-Za-z_][A-Za-z_0-9]*\b pop: true typedef_after_typename: - match: \b[A-Za-z_][A-Za-z_0-9]*\b scope: entity.name.type pop: true
In the above example,
typename is a reusable context, that will read in a typename and pop itself off the stack when it's done. It can be used in any context where a type needs to be consumed, such as within a typedef, or as a function argument.
The
main context uses a match pattern that pushes two contexts on the stack, with the rightmost context in the list becoming the topmost context on the stack. Once the
typename context has popped itself off, the
typedef_after_typename context will be at the top of the stack.
Also note above the use of anonymous contexts for brevity within the
typename context.
PHP Heredocs
This example shows how to match against Heredocs in PHP. The match pattern in the main context captures the heredoc identifier, and the corresponding pop pattern in the heredoc context refers to this captured text with the
\1 symbol:
name: PHP scope: source.php contexts: main: - match: <<<([A-Za-z][A-Za-z0-9_]*) push: heredoc heredoc: - meta_scope: string.unquoted.heredoc - match: ^\1; pop: true
Testing
When building a syntax definition, rather than manually checking scopes with the show_scope_name command, you can define a syntax test file that will do the checking for you:
// SYNTAX TEST "Packages/C/C.sublime-syntax" #pragma once // <- source.c meta.preprocessor.c++ // <- keyword.control.import // foo // ^ source.c comment.line // <- punctuation.definition.comment /* foo */ // ^ source.c comment.block // <- punctuation.definition.comment.begin // ^ punctuation.definition.comment.end #include "stdio.h" // <- meta.preprocessor.include.c++ // ^ meta string punctuation.definition.string.begin // ^ meta string punctuation.definition.string.end int square(int x) // <- storage.type // ^ meta.function entity.name.function // ^ storage.type { return x * x; // ^^^^^^ keyword.control } "Hello, World! // not a comment"; // ^ string.quoted.double // ^ string.quoted.double - comment
To make one, follow these rules
- Ensure the file name starts with syntax_test_.
- Ensure the file is saved somewhere within the Packages directory: next to the corresponding .sublime-syntax file is a good choice.
- Ensure the first line of the file starts with:
<comment_token> SYNTAX TEST "<syntax_file>". Note that the syntax file can either be a .sublime-syntax or .tmLanguage file.
Once the above conditions are met, running the build command with a syntax test or syntax definition file selected will run all the Syntax Tests, and show the results in an output panel. Next Result (F4) can be used to navigate to the first failing test.
Each test in the syntax test file must first start the comment token (established on the first line, it doesn't actually have to be a comment according to the syntax), and then either a
^ or
<- token.
The two types of tests are:
- Caret:
^this will test the following selector against the scope on the most recent non-test line. It will test it at the same column the
^is in. Consecutive
^s will test each column against the selector.
- Arrow:
<-this will test the following selector against the scope on the most recent non-test line. It will test it at the same column as the comment character is in.
Compatibility
When the syntax highlighting engine of Sublime Text requires changes that will break existing syntaxes, these modifications or bug fixes are gated behind the version key.
Currently there exist two versions: 1 and 2. The absense of the version key indicates version 1.
Version 1
The following is a list of bugs and behavior preserved in version 1 that have been fixed or changed in version 2. This list is primarily useful when understanding what to look for when updating the version of syntax.
embed_scopeStacks with
scopeof Other Syntax
Description
When embedding a the
maincontext from another syntax, the embed_scope will be combined with the scope of the other syntax. In version 2 syntaxes, the scope of the other syntax will only be included if embed_scope is not specified.
Syntax 1
Syntax 2
scope: source.lang contexts: paragraph: - match: \( scope: punctuation.section.group.begin embed: scope:source.other embed_scope: source.other.embedded escape: \) escape_captures: 0: punctuation.section.group.end
Text
scope: source.other contexts: main: - match: '[a-zA-Z0-9_]+' scope: identifier
'abc'
Result
The text
abcwill get the scope
source.other.embedded source.other identifierin version 1 syntaxes. In version 2 syntaxes, it will get
source.other.embedded identifier.
- Match Pattern with
setand
meta_content_scope
Description
When performing a set action on a match, the matched text will get the meta_content_scope of the context being popped, even though pop actions don’t, and a set is the equivalent of a pop then push.
Syntax
Text
scope: source.lang contexts: function: - meta_content_scope: meta.function - match: '[a-zA-Z0-9_]+' scope: variable.function - match: \( scope: punctuation.section.group.begin set: function-params function-params: - meta_scope: meta.function.params - match: \) scope: punctuation.section.group.end pop: true
abc()
Result
The text
(should get the scope
meta.function.params punctuation.section.group.begin. Instead it gets the incorrect scope
meta.function meta.function.params punctuation.section.group.begin.
- Match Pattern with
setand Target with
clear_scopes
Description
If a set action has a target with a clear_scopes value, scopes will not be cleared properly.
Syntax
Text
scope: source.lang contexts: main: - match: \bdef\b scope: keyword push: - function - function-name function: - meta_scope: meta.function function-name: - match: '[a-zA-Z0-9_]+' scope: variable.function - match: \( scope: punctuation.section.group.begin set: function-params function-params: - meta_scope: meta.function.params - clear_scopes: 1 - match: \) scope: punctuation.section.group.end pop: 2
def abc()
Result
The text
(should get the scope
meta.function.params punctuation.section.group.begin. Instead it gets the incorrect scope
meta.function meta.function.params punctuation.section.group.begin.
- Embed Escape Match and Meta Scopes
Description
The text matched by the escape pattern of an embed action will not get the meta_scope or meta_content_scope of the context that contains it.
Syntax
Text
scope: source.lang contexts: context1: - meta_scope: meta.group - meta_content_scope: meta.content - match: \' scope: punctuation.begin embed: embed escape: \' escape_captures: 0: punctuation.end embed: - match: '[a-z]+' scope: word
'abc'
Result
The second
'should get the scope
meta.group meta.content punctuation.end. Instead it gets the incorrect scope
punctuation.end.
- Multiple Target Push Actions with
clear_scopes
Description
If multiple contexts are pushed at once, and more than one context specifies clear_scopes with a value greater than
1, the resulting scopes are incorrect.
Syntax
Text
scope: source.lang contexts: main: - meta_content_scope: meta.main - match: '[a-zA-Z0-9]+\b' scope: identifier push: - context2 - context3 context2: - meta_scope: meta.ctx2 - clear_scopes: 1 context3: - meta_scope: meta.ctx3 - clear_scopes: 1 - match: \n pop: true
abc 1
Result
The clear_scopes values of all target contexts are added up and applied before applying the meta_scope and meta_content_scope of any targets. Thus, the text
abcwill be scoped
meta.ctx2 meta.ctx3 identifier, instead of the correct scope of
source.lang meta.ctx3 identifier.
- Regex Capture Group Order
Description
If an lower-numbered capture group matches text that occurs after text matched by a higher-numbered capture group, the lower-numbered capture group will not have its capture scope applied.
Syntax
Text
scope: source.lang contexts: main: - match: '(?:(x)|(y))+' captures: 1: identifier.x 2: identifier.y
yx
Result
The text
yis matched by capture group 2, and the text
xis matched by capture group 1.
xwill not get scoped
indentifier.xsince it occurs after the match from capture group 2. | https://www.sublimetext.com/docs/syntax.html | CC-MAIN-2021-17 | refinedweb | 4,460 | 57.27 |
#include <deal.II/lac/sparse_matrix_ez.h>
Sparse matrix without sparsity pattern.
Instead of using a pre-assembled sparsity pattern, this matrix builds the pattern on the fly. Filling the matrix may consume more time than for
SparseMatrix, since large memory movements may be involved when new matrix elements are inserted somewhere in the middle of the matrix and no currently unused memory locations are available for the row into which the new entry is to be inserted. To help optimize things, an expected row-length may be provided to the constructor, as well as an increment size for rows.
This class uses a storage structure that, similar to the usual sparse matrix format, only stores non-zero elements. These are stored in a single data array for the entire matrix, and are ordered by row and, within each row, by column number. A separate array describes where in the long data array each row starts and how long it is.
Due to this structure, gaps may occur between rows. Whenever a new entry must be created, an attempt is made to use the gap in its row. If no gap is left, the row must be extended and all subsequent rows must be shifted backwards. This is a very expensive operation and explains the inefficiency of this data structure and why it is useful to pre-allocate a sparsity pattern as the SparseMatrix class does.
This is where the optimization parameters, provided to the constructor or to the reinit() functions come in.
default_row_length is the number of entries that will be allocated for each row on initialization (the actual length of the rows is still zero). This means, that
default_row_length entries can be added to this row without shifting other rows. If fewer entries are added, the additional memory will of course be wasted.
If the space for a row is not sufficient, then it is enlarged by
default_increment entries. This way, subsequent rows are not shifted by single entries very often.
Finally, the
default_reserve allocates extra space at the end of the data array. This space is used whenever any row must be enlarged. It is important because otherwise not only the following rows must be moved, but in fact all rows after allocating sufficiently much space for the entire data array.
Suggested settings:
default_row_length should be the length of a typical row, for instance the size of the stencil in regular parts of the grid. Then,
default_increment may be the expected amount of entries added to the row by having one hanging node. This way, a good compromise between memory consumption and speed should be achieved.
default_reserve should then be an estimate for the number of hanging nodes times
default_increment.
Letting
default_increment be zero causes an exception whenever a row overflows.
If the rows are expected to be filled more or less from first to last, using a
default_row_length of zero may not be such a bad idea.
Definition at line 104 of file sparse_matrix_ez.h.
Declare type for container size.
Definition at line 110 of file sparse_matrix_ez.h.
Type of matrix entries. This alias is analogous to
value_type in the standard library containers.
Definition at line 292 of file sparse_matrix_ez.h.
Constructor. Initializes an empty matrix of dimension zero times zero.
Dummy copy constructor. This is here for use in containers. It may only be called for empty objects.
If you really want to copy a whole matrix, you can do so by using the
copy_from function.
Constructor. Generates a matrix of the given size, ready to be filled. The optional parameters
default_row_length and
default_increment allow for preallocating memory. Providing these properly is essential for an efficient assembling of the matrix.
Destructor. Free all memory.
Pseudo operator only copying empty objects.
This operator assigns a scalar to a matrix. Since this does usually not make much sense (should we set all matrix entries to this value? Only the nonzero entries of the sparsity pattern?), this operation is only allowed if the actual value to be assigned is zero. This operator only exists to allow for the obvious notation
matrix=0, which sets all elements of the matrix to zero, but keep the sparsity pattern previously used.
Reinitialize the sparse matrix to the dimensions provided. The matrix will have no entries at this point. The optional parameters
default_row_length,
default_increment and
reserve allow for preallocating memory. Providing these properly is essential for an efficient assembling of the matrix.
Release all memory and return to a state just like after having called the default constructor. It also forgets its sparsity pattern.
Return whether the object is empty. It is empty if both dimensions are zero.
Return the dimension of the codomain (or range) space. Note that the matrix is of dimension \(m \times n\).
Definition at line 1092 of file sparse_matrix_ez.h.
Return the dimension of the domain space. Note that the matrix is of dimension \(m \times n\).
Definition at line 1100 of file sparse_matrix_ez.h.
Return the number of entries in a specific row.
Return the number of nonzero elements of this matrix.
Determine an estimate for the memory consumption (in bytes) of this object.
Print statistics. If
full is
true, prints a histogram of all existing row lengths and allocated row lengths. Otherwise, just the relation of allocated and used entries is shown.
Definition at line 1573 of file sparse_matrix_ez.h.
Compute numbers of entries.
In the first three arguments, this function returns the number of entries used, allocated and reserved by this matrix.
If the final argument is true, the number of entries in each line is printed as well.
Set the element
(i,j) to
value.
If
value is not a finite number an exception is thrown.
The optional parameter
elide_zero_values can be used to specify whether zero values should be added anyway or these should be filtered away and only non-zero data is added. The default value is
true, i.e., zero values won't be added into the matrix.
If this function sets the value of an element that does not yet exist, then it allocates an entry for it. (Unless
elide_zero_values is
true as mentioned above.)
Definition at line 1237 of file sparse_matrix_ez.h.
Add
value to the element
(i,j).
If this function adds to the value of an element that does not yet exist, then it allocates an entry for it.
The function filters out zeroes automatically, i.e., it does not create new entries when adding zero to a matrix element for which no entry currently exists.
Definition at line 1264 of file sparse_matrix_ez.h.
Add all elements given in a FullMatrix<double> into sparse matrix locations given by
indices. In other words, this function adds the elements in
full_matrix to the respective entries in calling matrix, using the local-to-global indexing specified by
indices for both the rows and the columns of the matrix. This function assumes a quadratic sparse matrix and a quadratic full_matrix, the usual situation in FE calculations. 1285 of file sparse_matrix_ez.h.
Same function as before, but now including the possibility to use rectangular full_matrices and different local-to-global indexing on rows and columns, respectively.
Definition at line 1301 of file sparse_matrix_ez.h.
Set several elements in the specified row of the matrix with column indices as given by
col_indices to the respective value.18 of file sparse_matrix_ez.h.
Add an array of values given by
values in the given global matrix row at columns specified by col_indices in the sparse matrix.34 of file sparse_matrix_ez.h.
Copy the matrix given as argument into the current object.
Copying matrices is an expensive operation that we do not want to happen by accident through compiler generated code for
operator=. (This would happen, for example, if one accidentally declared a function argument of the current type by value rather than by reference.) The functionality of copying matrices is implemented in this member function instead. All copy operations of objects of this type therefore require an explicit function call.
The source matrix may be a matrix of arbitrary type, as long as its data type is convertible to the data type of this matrix.
The optional parameter
elide_zero_values can be used to specify whether zero values should be added anyway or these should be filtered away and only non-zero data is added. The default value is
true, i.e., zero values won't be added into the matrix.
The function returns a reference to
this.
Definition at line 1409 of file sparse_matrix_ez.h.
Add
matrix scaled by
factor to this matrix.
The source matrix may be a matrix of arbitrary type, as long as its data type is convertible to the data type of this matrix and it has the standard
const_iterator.
Definition at line 1432 of file sparse_matrix_ez.h.
Return the value of the entry (i,j). This may be an expensive operation and you should always take care where to call this function. In order to avoid abuse, this function throws an exception if the required element does not exist in the matrix.
In case you want a function that returns zero instead (for entries that are not in the sparsity pattern of the matrix), use the
el function.
Definition at line 1363 of file sparse_matrix_ez.h.
Return the value of the entry (i,j). Returns zero for all non-existing entries.
Definition at line 1351 of file sparse_matrix_ez.h.
Matrix-vector multiplication: let \(dst = M*src\) with \(M\) being this matrix.
Matrix-vector multiplication: let \(dst = M^T*src\) with \(M\) being this matrix. This function does the same as
vmult but takes the transposed matrix.
Adding Matrix-vector multiplication. Add \(M*src\) on \(dst\) with \(M\) being this matrix.
Adding Matrix-vector multiplication. Add \(M^T*src\) to \(dst\) with \(M\) being this matrix. This function does the same as
vmult_add but takes the transposed matrix.
Frobenius-norm of the matrix.
Apply the Jacobi preconditioner, which multiplies every element of the
src vector by the inverse of the respective diagonal element and multiplies the result with the damping factor
omega.
Apply SSOR preconditioning to
src.
Apply SOR preconditioning matrix to
src. The result of this method is \(dst = (om D - L)^{-1} src\).
Apply transpose SOR preconditioning matrix to
src. The result of this method is \(dst = (om D - U)^{-1} src\).
Add the matrix
A conjugated by
B, that is, \(B A B^T\) to this object. If the parameter
transpose is true, compute \(B^T A B\).
This function requires that
B has a
const_iterator traversing all matrix entries and that
A has a function
el(i,j) for access to a specific entry.
Definition at line 1459 of file sparse_matrix_ez.h.
Iterator starting at the first existing entry.
Definition at line 1375 of file sparse_matrix_ez.h.
Final iterator.
Definition at line 1383 of file sparse_matrix_ez.h.
Iterator starting at the first entry of row
r. If this row is empty, the result is
end(r), which does NOT point into row
r.
Definition at line 1390 of file sparse_matrix_ez.h.
Final iterator of row
r. The result may be different from
end()!
Definition at line 1399 of file sparse_matrix_ez.h.
Print the matrix to the given stream, using the format
(line,col) value, i.e. one nonzero entry of the matrix per line.
Print the matrix in the usual format, i.e. as a matrix and not as a list of nonzero elements. For better readability, elements not in the matrix are displayed as empty space, while matrix elements which are explicitly set to zero are displayed as such.
The parameters allow for a flexible setting of the output format:
precision and
scientific are used to determine the number format, where
scientific =
false means fixed point notation. A zero entry for
width makes the function compute a width, but it may be changed to a positive value, if output is crude.
Additionally, a character for an empty value may be specified.
Finally, the whole matrix can be multiplied with a common denominator to produce more readable output, even integers.
This function may produce large amounts of output if applied to a large matrix!
Write the data of this object in binary mode to a file.
Note that this binary format is platform dependent.
Read data that has previously been written by
block_write.
The object is resized on this operation, and all previous contents are lost.
A primitive form of error checking is performed which will recognize the bluntest attempts to interpret some data as a vector stored bitwise to a file, but not more.
Find an entry and return a const pointer. Return a zero-pointer if the entry does not exist.
Definition at line 1130 of file sparse_matrix_ez.h.
Find an entry and return a writable pointer. Return a zero-pointer if the entry does not exist.
Definition at line 1108 of file sparse_matrix_ez.h.
Find an entry or generate it.
Definition at line 1139 of file sparse_matrix_ez.h.
Version of
vmult which only performs its actions on the region defined by
[begin_row,end_row). This function is called by
vmult in the case of enabled multithreading.
Version of
matrix_norm_square which only performs its actions on the region defined by
[begin_row,end_row). This function is called by
matrix_norm_square in the case of enabled multithreading.
Version of
matrix_scalar_product which only performs its actions on the region defined by
[begin_row,end_row). This function is called by
matrix_scalar_product in the case of enabled multithreading..
Number of columns. This is used to check vector dimensions only.
Definition at line 885 of file sparse_matrix_ez.h.
Info structure for each row.
Definition at line 890 of file sparse_matrix_ez.h.
Data storage.
Definition at line 895 of file sparse_matrix_ez.h.
Increment when a row grows.
Definition at line 900 of file sparse_matrix_ez.h.
Remember the user provided default row length.
Definition at line 905 of file sparse_matrix_ez.h. | https://dealii.org/developer/doxygen/deal.II/classSparseMatrixEZ.html | CC-MAIN-2021-10 | refinedweb | 2,315 | 58.99 |
could look like this:
All the code is in this GitHub repo for you to follow along.
Development Environment Setup
Start by getting your machine ready to run Ruby code. You will need Bundler to manage the dependencies for this code; if you have used Ruby before you probably installed it already, but if you haven’t just type
gem install bundler on the command line (doesn’t matter in what directory since it installs it globally on your machine).
You will now need to install SQLite and ImageMagick. How you do this depends a lot on what OS and package manager you’re running; on Mac brew you can use
brew install sqlite and
brew install imagemagick. If you’re running a Ubuntu, you can run
sudo apt-get install sqlite and
sudo apt-get install imagemagick libmagickwand-dev or the equivalent for your distro.
Create a project folder wherever you prefer on your computer and navigate to it. We will now refer to it as the project root folder. Now create a file called “Gemfile” in the root folder and include all these dependencies we need in it:
# Gemfile source "" gem 'sequel' gem 'sqlite3' gem 'twilio-ruby', '~> 5.2.2' gem 'imgur-api' gem 'rmagick' gem 'dotenv' gem 'whenever'
No need to worry about what each of them does now as we will cover them as we go. Save the file and then run
bundle install on the command line in your project directory to install them.
Our developer environment is now set up so let’s move on to setting up our database. Create a new
database.rb in the root folder. First we are going to require Sequel and initiate a database:
# database.rb require 'sequel' DB = Sequel.connect('sqlite://dfs.db')
Using SQLite as a database allows us to avoid any extra configuration steps we’d need with others like PostgreSQL.
Let’s wrap the database setup in a module so we can then run everything from a main script:
module Database def self.setup unless DB.table_exists?(:players) DB.create_table :players do primary_key :id String :name Integer :salary String :team String :matchup String :position Float :ppg end end end end
This will create the Players table the first time we run our program. The table will be used to store the player’s name, salary, team, opposing team, position and average points per game. You can find the finished
database.rb file here. First down, 1st & 10 at the Getting Data section. (For those who may not follow football, that means we’re close to the next part.)
Getting Data
Create a file named
csv_analysis.rb where we’ll store the code which takes care of downloading, cleaning up the file and then iterating over the results:
# csv_analysis.rb require 'csv' require 'open-uri' module CsvAnalysis def self.scrape(url) File.open("./data.csv", "wb") do |saved_file| open(url, "rb") do |read_file| saved_file.write(read_file.read) end end end end
By using the standard
csv and
open-uri libraries, we are able to create a new
data.csv file and copy all the information provided by the URL we pass as an argument to it.
Now we can obtain the URL to the CSV. Open the contest you want to track in your DFS website and then simply do right click -> Copy Link Address on the Export To CSV button on DraftKings or FanDuel.
This is where it’s placed in DraftKings
We now have two different pieces of code, so it’s a good time to create a module that will bring them all together named
Tracker, which will live in a
tracker.rb file. It will store our URL as a constant and it will give us a simple interface to run all of our scripts at once.
# tracker.rb URL = '' require './csv_analysis' require './database' module Tracker def self.run Database.setup CsvAnalysis.scrape(URL) end end Tracker.run
Where the value of
URL is the link you obtained following the steps above. Next, run
ruby tracker.rb in your terminal. This will create a new data.csv file in your directory with all the information you needed! If you see some error, double-check that the URL you entered is correct and actually points to a CSV file.
Data is only as useful as the information you can extract from it, which isn’t much at the moment. To help us query it in a easier way, we are gonna store the information in our SQLite database we setup before. We are gonna add a new method to the
CsvAnalysis file under the
scrape method we defined earlier:
def self.scrape(url) ... end def self.format_for_database players = [] CSV.foreach("./data.csv", headers: true) do |row| players << { position: row[0], name: row[1], salary: row[2], matchup: row[3], ppg: row[4], team: row[5] } end players end
This will get all the data in the CSV and return the players as an array of Ruby hashes, with the same keys as the database table so that we can quickly insert them. To make sure everything is still working as expected, we can add this to our Tracker and print the result to screen like so:
module Tracker def self.run Database.setup CsvAnalysis.scrape(URL) puts CsvAnalysis.format_for_database end end
Now you can run
ruby tracker.rb again and all the player data should be printed on your terminal window.
What we need to do now is a simple if statement: if the player isn’t in the database, we add them to it. If they already exist, check their salary. If they’re different from the one we recorded before, add them to an array. If not, move to the next player by using
next. This is done by lines 15 through 20 of the following code:
# tracker.rb module Tracker def self.run Database.setup CsvAnalysis.scrape(URL) players_table = DB[:players] price_changes = [] CsvAnalysis.format_for_database.each do |player| # Make sure we check both name and team to avoid fake alerts for players with the same name existing_player = DB[:players].where(name: player[:name], team: player[:team]).first if existing_player next unless existing_player[:salary] != player[:salary].to_i price_changes << existing_player DB[:players].where(id: existing_player[:id]).update(salary: player[:salary]) else players_table.insert(player) end end end end
Let’s run
ruby tracker.rb again and take a moment to check that everything was stored correctly in the database. We will also modify a player’s price to make sure that our code is working correctly. Go to the root folder of your project if you aren’t there already and run
sequel sqlite://dfs.db from your terminal. This will open a IRB session with the database already loaded on
DB. Now we can run
DB[:players].first to find the first player in our database; cool! Let’s change their salary to 0 now with
DB[:players].first.update(salary: 0). Now close the session with
quit and run the Tracker again. If you repeat the steps above, you should see that the price was again updated to the one in the CSV and isn’t 0. That means everything is working fine!
Now all we need to do is put all this data on an image and send it to our phone number.
Obtaining your API keys
It’s now time to obtain a Imgur key and a Twilio credentials.
For Imgur, go to their Add Client page and fill out all the fields. Since we don’t need to provide OAuth logins, you can select “Anonymous Usage without User Authorization” and leave the callback URL field empty. Write down your client key somewhere.
To make sure everything worked properly, let’s give it a quick go! Go to your terminal and current directory and run
irb; it will launch an interactive Ruby shell. Now require the imgur library with
require ‘imgur’ and do:
client = Imgur.new(your_imgur_client_id) image = Imgur::LocalImage.new('path/to/local/image', title: 'Awesome photo') uploaded_image = client.upload(image) puts uploaded_image.link
If your client ID is working fine, this will print the URL of the image you just uploaded on Imgur. That’s pretty cool!
Let’s head to the Twilio Console now to obtain our Twilio credentials. After logging in and setting up your account you will be shown your
Account SID and
Auth token. Write both of those down along with the phone number Twilio provided to you, you will need all of these information later. To make sure your account is working, head to the Twilio Build dashboard and send a test SMS to your phone number. As a Giants fan, I usually send “Wide open Tyree, HE MAKES THE CATCH!” to remind me of better times, but it’s up to you.
Sending MMS
Since we will be dealing with API keys, we will now setup Dotenv: it’s always a good practice to keep secret keys out of your codebase. Create a new
.env file in the root of your project and add the keys we obtained previously just like this:
# .env IMGUR_CLIENT_KEY=XXXXXXXX TWILIO_ACCOUNT_SID=ACxxxxxxxx TWILIO_AUTH_TOKEN=yyyyyyyyyyy YOUR_PHONE_NUMBER=1231231234 TWILIO_PHONE_NUMBER=1231231234
We will now create a new
notification.rb file which is where we will get to use most of our dependencies.
# notification.rb require 'RMagick' require 'dotenv/load' require 'twilio-ruby' require 'imgur' ACCOUNT_SID = ENV['TWILIO_ACCOUNT_SID'].freeze AUTH_TOKEN = ENV['TWILIO_AUTH_TOKEN'].freeze TWILIO_PHONE_NUMBER = ENV['TWILIO_PHONE_NUMBER'].freeze YOUR_PHONE_NUMBER = ENV['YOUR_PHONE_NUMBER'].freeze IMGUR_CLIENT_KEY = ENV['IMGUR_CLIENT_KEY'].freeze module Notification IMAGE_PATH = './updates.png'.freeze def self.mms(players) players.map! { |player| "#{player[:name]} (#{player[:team]}) now costs $#{player[:salary]}" } generate_image(players) client = Twilio::REST::Client.new ACCOUNT_SID, AUTH_TOKEN client.messages.create( from: TWILIO_PHONE_NUMBER, to: YOUR_PHONE_NUMBER, body: 'Your DFS price updates are in!', media_url: upload_image ) end def self.generate_image(infos) canvas = Magick::Image.new(480, 60 + infos.count * 30) {self.background_color = 'white'} gc = Magick::Draw.new gc.pointsize(20) # Sets the fontsize infos.each_with_index do |info, index| gc.text(10, 60 + index * 30, info.center(11)) end gc.draw(canvas) canvas.write(IMAGE_PATH) end def self.upload_image client = Imgur.new(IMGUR_CLIENT_KEY) image = Imgur::LocalImage.new(IMAGE_PATH, title: 'My DFS update') up = client.upload(image) puts up.link up.link end end
So what does all this code do? First, we will need to format the player information. We use the format “PLAYER_NAME (TEAM) now costs SALARY”, but feel free to change it as you wish. We then use those strings to generate the image with RMagick. The API isn’t really user-friendly so it can be hard to play around with; if you want to give it a shot, I suggest you read its official docs. The first two arguments of
Magick::Image.new are the width and height of the file. To calculate the height we look at how many lines we have to print and multiply that by thirty pixels; the font size is twenty pixels so thirty pixels per line will give us a nice spacing. You can also pass the background color as an additional option. Next we will iterate over the player strings to add them to the image.
After the image is created, we upload it on Imgur.
upload_image will return the link to it. I’ve added a
puts up.link statement so that you can find the URL in your logs in case you want to share the image, post it on a forum, etc.
Now we can finally send this over to us via MMS by using the Twilio API. We just need to add it to the tracker like so:
# tracker.rb require ‘./notification’ module Tracker def self.run ... else players_table.insert(player) end end Notification.mms(price_changes) unless price_changes.empty? end end
Go long!
As the last step, we will setup crontab to run this script every x-amount of time, two hours should be frequent enough to get an edge on the rest of the players. If you use Windows, this won’t work, so you will need to find an alternative solution (Windows Task Scheduler should do the trick). Create a new folder
config and then put
schedule.rb inside of it. Now you just need to specify at what time interval you want to run this:
set :output, "./cron_log.log" every 2.hours do # You can do 1.day, 5.minutes, etc command "bundle exec ruby tracker.rb" end
After you are done, simply save the file and run
whenever —update-crontab in your terminal. This will run the tracker after every interval as long as your computer is awake. If you plan on not having your computer running most of the time, you should look into hosting this on a free Heroku Dyno and use the Heroku Scheduler to periodically run the tracker. Another option could be running the code inside a DigitalOcean droplet rather than your local machine.
Conclusion
That’s it! Whenever a salary changes you will receive that image via MMS to your phone number so you can always be up-to-the-minute on your fantasy football teams!
If you encounter any issues, feel free to leave a comment, open an issue in the GitHub repo or reach out to me directly @FanaHOVA on Twitter or via email at fana@alessiofanelli.com. Always happy to help or just talk sports! | https://www.twilio.com/blog/2017/09/daily-fantasy-football-salary-tracker-ruby-twilio-mms.html | CC-MAIN-2020-05 | refinedweb | 2,227 | 67.35 |
RCS Server Installation
As a prerequisite, download Intel SCS package and Intel SCS Add-on package from the following links. Copy these packages on the Configuration Manager server where Out of band management will be configured.
Intel Setup and Configuration Software (Intel SCS)
Intel SCS Add-on 2.1 for Microsoft System Center Configuration Manager
Prerequisites
1. Installation of Microsoft SQL Server 2008 R2 Native Client which is a prerequisite of RCS Server. Click [Next].
2. Select [I accept the terms in the license agreement] and click [Next].
3. Click on [Next] and [Next]
4. Click on [Install].
5. Click on [Finish].
Installation procedure
1. Run “Intel SCS Installer” and click [Next].
2. Select [I accept the terms of the license agreement] and click on [Next].
3. Select [Remote Configuration Service], [Database Mode] and [Console] then click [Next].
4. Select as a [Username] [Network Service] then click [Next].
5. Type the SQL Server name in [SQL Server] field, in the [Database Name] field, keep [IntelSCS]. Select [Windows Authentication] as the authentication method for the database. Click on [Next].
※In our lab, SQL Server is located on the SCCM server.
6. Click on [Create Database].
7. Click on [Close].
Post Installation tasks
We have to grant permissions on SCS database.
1. Run “Microsoft SQL Server Management Studio”, then from [ServerName]-[Security]-[Logins] right-click on ”NT AUTHORITY\NETWORK SERVICE” and click on [Properties].
2. Click on [User Mapping], check [Intel SCS] and add [db_datareader] and [db_datawriter] rights. Click on [OK].
And then we export the encryption key.
1. Run “Intel SCS Console”, click on [Tools].
2. Click on [Tools]-[Settings].
3. Click on the arrow [>].
4. Click on [Storage] tab and click on [Export: button.
5. Select the export path and click on [Save].
6. Type a password twice and click [OK].
Definition of “Digest Master Password”
1. Run “Intel SCS Console” then click on [Tools].
2. Click on [Tools]-[Settings].
3. Select the [Security Settings]tab, then click on [Set].
4. Specify a password twice and click [OK].
Adding the AMT Provisioning certificate to the Network Service account.
1. From the Intel SCS Source folder, we are going to use “RCSUtil.exe”.
2. Run a command prompt as an adminitrator.
3. Run the following command :
cd “D:\Temp\Intel OOB\IntelSCS\Utils”
4. Run the following command :
RCSutils.exe /Certificate Add c:\Temp\AMTProvisioningCert.pfx Password01
※This is the certificate export in part 1
5. Run the following command :
net stop rcsserver && net start rcsserver
6. Run the following command which export information about certificate in a file.
RCSUtils.exe /certificate view /RCSuser NetworkService /log file C:\rcsout.txt
7. Verifiy that the certificate has been correctly imported.
Granting permissions on RCS Server to CM_AMT account.
1. Run a command prompt as an administrator and run the following command :
RCSutils.exe /Permissions Add MS\CM_AMT /RCSnamespace RCS Editor
Is it possible to just use the system account (i.e NetworkService) and not Granting permissions on RCS Server to CM_AMT account? What is the benefit of using a separate account, or is it a hard requirement?
In my actual configuraiton (described in this serie of articles), the "Intel AMT Remote Configuration Task Sequence" deployed by SCCM is run under CM_AMT account (as configured in part 1). You can always verify that account by editing the Intel AMT remote configuration task sequence and see the "Run this step as the following account" field.
The last command I describe in that Part 2 article is to grant permission to CM_AMT to connect to RCS Server remotely and get the AMT profile. If you used the SCCM Primary site server account in Part 1 instead of CM_AMT, then you will need to grand permissions on RCS Server to that primary site server.
Using Network Service account as RCS Service account is what Intel recommends for security reason. | https://blogs.msdn.microsoft.com/beanexpert/2015/05/26/intel-scs-add-on-2-1-and-sc2012-r2-configmgr-integration-rcs-database-mode-part-2/ | CC-MAIN-2017-51 | refinedweb | 643 | 60.51 |
.
(At first it looked like there was also a simple stack overflow, since read() is called on a stack buffer with user-supplied length which can be negative. The libc and/or kernel of this server does not like very large size arguments to read() though, and just exits 🙁 )
Unfortunately, the on-stack buffers are quite small which makes it a lot harder to find an appropriate gadget to turn the arbitrary write into ROP stack execution.
In the end we used ebp control in combination with a “mov esp, ebp; pop ebp; ret” gadget to pivot the stack. As an added bonus this gadget contains a read() on the socket with the buffer and length arguments loaded relative to ebp, so the pivot gadget was also used to load the second stage ROP payload.
This second stage dumps the GOT to find the offset of libc, since the machine has aslr enabled (though we could have bruteforced it, it was not much more work to do it cleanly at this point). Since the address of read() we found always ended in …c90 and this matched with the libc from one of the other game boxes we had a shell on we assumed the libc was the same as on the other box, which allowed easy calculation of the address of system() from the leaked GOT pointers.
This is then used to construct a third-stage ROP payload which is read into the correct place by the second stage payload. The third-stage payload then finally spawns a shell.
The full exploit follows. The various blobs are annotated with the address of the function that receives them and checks their contents.
from struct import pack,unpack from time import sleep import sys import socket # util funcs def dwords(*s): return pack("I"*len(s), *s) def rop(*s): return dwords(*reversed(s)) # read ROP stacks vertically, high addresses at the top def out(d): s.send(d) # gadgets write = 0x80483c4 read = 0x80483f4 pop_3_ret = 0x80495b6 pop_11_ret = 0x80495b2 exit = 0x8048444 ports = (8080,8181,8282,8383,8484,8585,8686,8787,8888,8989) for port in ports: #what = ('localhost',4444) what = ('61.42.25.25',port) print "[i] connecting to %s:%d" % what try: s = socket.create_connection(what, 2) except: continue print "[+] connected" # read by function at 0x804928D out(dwords(202,0,1,0xac,0x9a,0,0,0x10001,0x54534e49,4)) out("AAAA") # read by 4 calls to function at 0x8048DD9, with *0x804A8B8 = 1,3,4,5 out(dwords(8, 1, 1, 0xDFE1ABCC-1, 1, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) out("A" * 30 + "\x00") out(dwords(8, 1, 1, 0xDFE1ABCC-1, 3, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) out("A" * 30 + "\x00") out(dwords(8, 1, 1, 0xDFE1ABCC-1, 4, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) out("A" * 30 + "\x00") out(dwords(8, 1, 1, 0xDFE1ABCC-1, 5, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) out("A" * 30 + "\x00") # read by 3 calls to function at 0x8048B13, with *0x804A8B8 = 1,0,2 out(dwords((26 << 16) | 203, (2 << 16) | 219, 25, 6, 1, 202, -858993460 & 0xffffffff, 1, 4)) out("A" * 4) out(dwords((26 << 16) | 203, (2 << 16) | 219, 25, 6, 1, 202, -858993460 & 0xffffffff, 0, 4)) out("A" * 4) out(dwords((26 << 16) | 203, (2 << 16) | 219, 25, 6, 1, 202, -858993460 & 0xffffffff, 2, 4)) out("A" * 4) # read by 4 calls to function at 0x804882B, with *0x804A8B8 = 3,0,0,2 # this function contains a signedness check error, which allows writing an arbitrary dword at an offset # in the range [INT_MIN .. 31] * 4 from address 0x804a8e0. # strlen@got = pop_11_ret (trigger for first-stage rop later) out(dwords(pop_11_ret, 0, -68 & 0xffffffff, -68& 0xffffffff, 4, 4, 1, 65535, -65536 & 0xffffffff, 4, (225 << 16) | 82, 3, 4)) out("AAAA") # stage 1 rop (ebp=0x804a8e0) args: *(ebp-10) = 0x804a890 (fake buffer) out(dwords(0x804a890, 0, -4& 0xffffffff, -4& 0xffffffff, 4, 4, 1, 65535, -65536 & 0xffffffff, 4, (225 << 16) | 82, 0, 4)) out("AAAA") # stage 1 rop (ebp=0x804a8e0) args: *(*(ebp-10)+0x30) = 0x400 (length field for read() in fake buffer) out(dwords(0x400, 0, -8 & 0xffffffff, -8 & 0xffffffff, 4, 4, 1, 65535, -65536 & 0xffffffff, 4, (225 << 16) | 82, 0, 4)) out("AAAA") # unused dummy write out(dwords(1, 0, 1, 1, 4, 4, 1, 65535, -65536 & 0xffffffff, 4, (225 << 16) | 82, 2, 4)) out("AAAA") # read the program output, so as not to interfere with the ROP payload q = '' s.settimeout(5) while True: try: t = s.recv(1024) except: break #print repr(t) print "[+] recv %d bytes" % len(t) if not t: break q += t if len(q) == 788: break print "[+] read a total of %d bytes of output before start of ROP" % len(q) # read by 4 calls to function at 0x8048DD9 with *0x804A8B8 = 5,4,3,1 # (the ROP payload is triggered during the first call, the last three are never executed) # this is where the ROP payload is read & triggered. # the "31" indicates 31 bytes of data will follow, which is the maximum. # with the pop_11_ret gadget this gives only three and a half dword to actually ROP with. out(dwords(8, 1, 1, 0xDFE1ABCC-1, 5, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) # the pop_11_ret gadget which is called in place of strlen will pop the following stuff from our buffer: # ebx, esi, edi, ebp, eip # use this to set ebp to 0x804a8e0 and return to 0x8048a48, which does: # read(0, ebp-0x31, *(*(ebp-0x10) + 0x30)); mov esp, ebp; pop ebp; ret # the length argument will resolve to 0x400 thanks to the two arbitrary writes above out("A" + dwords(0xdeadbeef, 0xdeadbeef, 0xdeadbeef, 0x804a8e0, 0x8048a48, 0xdeadbeef, 0xdeadbeef) + "AA") print "[+] first stage ROP payload sent" # second-stage rop. at entry, esp points to 0x804a8e0 + 4 and ebp was loaded from 0x804a8e0. # this ROP payload sends the GOT to the client as an ASLR offset leak, and receives the third stage ROP payload. x = rop( 0x100, 0x804a8e0 + 4 + 5*4 + 5*4, # directly after this ROP stack 1, pop_3_ret, read, 0x100, 0x804a7b0, # GOT 1, pop_3_ret, write, ) # 0x31 A's to get to 0x804a8e0, then BBBB which will end up in ebp and then the second-stage ROP stack. out("A"*0x31 + "BBBB" + x) print "[+] second stage ROP sent" got = s.recv(1024) print "[+] read %d bytes of GOT data" % len(got) addr_of_read = unpack('I',got[24:24+4])[0] delta = 603872 #delta = (0xda130 - 0x3d170) # localhost test addr_of_system = addr_of_read - delta print "[i] assuming delta (read - system) = %d" % delta print "[+] read = %#0x" % addr_of_read print "[+] system = %#0x" % addr_of_system x = rop( 0x804a8e0 + 4 + 5*4 + 5*4 + 3*4, # directly after this ROP stack exit, addr_of_system ) print "[+] third stage ROP sent, spawning shell" print # this is the last ROP stage, so just put the argument string directly behind it out(x + "/bin/bash -p -i 2>&1\0") # ensure what follows is received seperately from the ROP data sleep(1) out("id\n") # connect stdio to socket until either EOF's. use low-level calls to bypass stdin buffering. # also change the tty to character mode so we can have line editing and tab completion. import termios, tty, select, os old_settings = termios.tcgetattr(0) try: tty.setcbreak(0) c = True while c: for i in select.select([0, s.fileno()], [], [], 0)[0]: c = os.read(i, 1024) if not c: break os.write({0:s.fileno(),s.fileno():1}[i], c) except KeyboardInterrupt: pass finally: termios.tcsetattr(0, termios.TCSADRAIN, old_settings) # don't send the data for the last three calls to the function at 0x8048DD9, # the ROP payload took control so they were never executed break out(dwords(8, 1, 1, 0xDFE1ABCC-1, 4, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) out("B" * 30 + "\x00") out(dwords(8, 1, 1, 0xDFE1ABCC-1, 3, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) out("B" * 30 + "\x00") out(dwords(8, 1, 1, 0xDFE1ABCC-1, 1, 1, 255, 0xffffffff, 102, 0x756c, 255, 96, 1, 2147483647, 0x9c, 31)) out("B" * 30 + "\x00")
And to show it works:
user@box:~$ python deth2.py [i] connecting to 61.42.25.25:8080 [+] connected [+] recv 168 bytes [+] recv 68 bytes [+] recv 552 bytes [+] read a total of 788 bytes of output before start of ROP [+] first stage ROP payload sent [+] second stage ROP sent [+] read 256 bytes of GOT data [i] assuming delta (read - system) = 603872 [+] read = 0x5b5c90 [+] system = 0x5225b0 [+] third stage ROP sent, spawning shell bash: no job control in this shell bash-4.1$ id uid=500(dethstarr) gid=500(dethstarr) groups=500(dethstarr) bash-4.1$ ls -al total 106 dr-xr-xr-x. 21 root root 4096 Jun 8 04:49 . dr-xr-xr-x. 21 root root 4096 Jun 8 04:49 .. -rw-r--r-- 1 root root 0 Jun 8 04:49 .autofsck -rw-r--r-- 1 root root 0 Jun 8 04:49 .autorelabel dr-xr-xr-x. 2 root root 4096 Dec 15 23:49 bin dr-xr-xr-x. 5 root root 1024 Jan 16 16:07 boot drwxr-xr-x 19 root root 3720 Jun 8 04:49 dev drwxr-xr-x. 100 root root 12288 Jun 9 16:28 etc drwxr-xr-x. 3 root root 4096 Jun 8 04:16 home dr-xr-xr-x. 18 root root 12288 Dec 15 23:44 lib drwx------. 2 root root 16384 Dec 15 23:34 lost+found drwxr-xr-x. 2 root root 4096 Sep 23 2011 media drwxr-xr-x. 3 root root 4096 Jan 16 16:07 mnt drwxr-xr-x. 2 root root 4096 Sep 23 2011 opt dr-xr-xr-x 100 root root 0 Jun 8 04:49 proc dr-xr-x---. 2 root root 4096 Jun 10 06:19 root dr-xr-xr-x. 2 root root 12288 Dec 15 23:49 sbin drwxr-xr-x. 3 root root 4096 Dec 15 23:52 selinux drwxr-xr-x. 2 root root 4096 Sep 23 2011 srv drwxr-xr-x 13 root root 0 Jun 8 04:49 sys drwxrwxrwt. 4 root root 4096 Jun 10 15:01 tmp drwxr-xr-x. 12 root root 4096 Dec 15 23:35 usr drwxr-xr-x. 21 root root 4096 Dec 15 23:49 var bash-4.1$ avahi-autoipd:x:170:170:Avahi IPv4LL Stack:/var/lib/avahi-autoipd:/sbin/nologin vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin rtkit:x:499:496:RealtimeKit:/proc:/sbin/nologin rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin pulse:x:498:495:PulseAudio System Daemon:/var/run/pulse:/sbin/nologin haldaemon:x:68:68:HAL daemon:/:/sbin/nologin avahi:x:70:70:Avahi mDNS/DNS-SD Stack:/var/run/avahi-daemon:/sbin/nologin saslauth:x:497:76:"Saslauthd user":/var/empty/saslauth:/sbin/nologin postfix:x:89:89::/var/spool/postfix:/sbin/nologin apache:x:48:48:Apache:/var/www:/sbin/nologin ntp:x:38:38::/etc/ntp:/sbin/nologin rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin gdm:x:42:42::/var/lib/gdm:/sbin/nologin sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin tcpdump:x:72:72::/:/sbin/nologin dethstarr:x:500:500::/home/dethstarr:/bin/bash bash-4.1$ find /home /home /home/dethstarr /home/dethstarr/key bash-4.1$ cat /home/dethstarr/key 397d179d920423eafcb923cfe14ebb75 bash-4.1$ exit exit | https://eindbazen.net/2012/06/secuinside-2012-dethstarr/ | CC-MAIN-2018-26 | refinedweb | 1,927 | 64.04 |
URDF Error No valid hardware interface element found in joint
EDIT -------------------------------------------------------------------
I think maybe there is an issue with the way that I have created my joints and the use of the hardwareInterface tags.
In my macros.xacro file I create my joints and link the transmission separately (I repeat this twice for the front and back wheels):
<joint name="${lr}_front_wheel_hinge" type="continuous"> <parent link="chassis"/> <child link="${lr}_front_wheel"/> <origin xyz="${+wheelPos-chassisLength+2*wheelRadius} ${tY*wheelWidth/2+tY*chassisWidth/2} ${wheelRadius}" rpy="0 0 0" /> <axis xyz="0 1 0" rpy="0 0 0" /> <limit effort="100" velocity="100"/> <joint_properties damping="0.0" friction="0.0"/> </joint>
<transmission name="${lr}_front_trans"> <type>transmission_interface/SimpleTransmission</type> <joint name="${lr}_front_wheel_hinge" /> <actuator name="${lr}_front_Motor"> <hardwareInterface>EffortJointInterface</hardwareInterface> <mechanicalReduction>10</mechanicalReduction> </actuator> </transmission>
Doing it this way gives the errors mentioned above, but my model in Gazebo appears.
If I try to merge both of these blocks so that there is just one joint tag wrapped by the transmission tags then I get the following error and my model does not appear in Gazebo:
[ERROR] [1473672367.041892175]: Failed to find root link: Two root links found: [footprint] and [left_back_wheel]
I don't understand why I get this error because I have a joint between my chassis base link and the world in my Jaguar4x4.xacro file :
<link name="footprint" /> <joint name="base_joint" type="fixed"> <parent link="footprint"/> <child link="chassis"/> </joint>
I now get a number of errors when trying to combine the joint and transmission blocks, so I imagine that this is not the best way to go?
Edit 2 -------------------------------------------------------------------------
Here is my jaguar4x4_control/config/jaguar4x4_control.yaml file ... (more)
Try <transmission name="${lr}_front_trans"> <type>transmission_interface/SimpleTransmission</type> <joint name="${lr}_front_wheel_hinge"> <hardwareinterface>EffortJointInterface</hardwareinterface> /> <actuator name="${lr}_front_Motor"> <hardwareinterface>EffortJointInterface</hardwareinterface> <mechanicalreduction>10</mechanicalreduction> </actuator> </transmission> Could you give the config file for your gazebo ros controller?
@GuillaumeB sorry for the slow response. I tried the code but I get the same errors. I have attached my config file
Your config file seems correct. have you : <gazebo> <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so"> <robotnamespace>/robot_gazebo</robotnamespace> </plugin> </gazebo> in your urdf? you really need to specify the hardwareinterface twice in the <transmission>. One time for the <joint> and one time for the <actuator>
@GuillaumeB I do have that code in my URDF. I tried adding the additional hardwareInterface but that didn't work. I really thought it had worked but it was just another model that was still appearing in my gazebo. I still get the controller spawner warning and an error stating that a process had died
I have now created a new model using a single .world file. (I followed this tutorial mostly). I can use the turtlesim teleop key node and I can use twist commands. I just now need to work out how to publish twist command via scripts and process images etc. Do you think it is worth me giving up on the URDF one do you see any future issues with this single file approach?
Using only one file is not 'modular' but if you just need something to work as soon as possible use this. If you are still interrested in using your URDF and gazebo_ros_controller please see -> I found that it was a pretty complete tutorial
@GuillaumeB Thanks for the help, that was actually the tutorial I used. However, as I mentioned in the original post I couldn't actually run the code supplied by that tutorial, I received the same errors as the one that I have on my own code. It'd be good if I could get the URDF working eventually, but I think for now it is probably best to make some progress so I will use the single file for now. Thanks again for the help.
Were there any solutions for this problem? I recently got the same kind of error and cannot figure out what the problem is: [link text]() | https://answers.gazebosim.org/question/14238/urdf-error-no-valid-hardware-interface-element-found-in-joint/?sort=votes | CC-MAIN-2022-40 | refinedweb | 667 | 51.38 |
Edition 0.10
Permission is granted to copy, distribute and/or modify this document under the terms of
the GNU Free Documentation License, Version 1.1 or any later version published by the
Free Software Foundation; with the Invariant Sections being "Free Software Needs Free
Documentation" and "GNU Lesser.
Cover art for the Free Software Foundation’s printed edition by Etienne Suvasa.
i
Short Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Error Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Virtual Memory Allocation And Paging . . . . . . . . . . . . . . . 33
4 Character Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5 String and Array Utilities . . . . . . . . . . . . . . . . . . . . . . . . . 79
6 Character Set Handling . . . . . . . . . . . . . . . . . . . . . . . . . 119
7 Locales and Internationalization . . . . . . . . . . . . . . . . . . . . 163
8 Message Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
9 Searching and Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
10 Pattern Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
11 Input/Output Overview . . . . . . . . . . . . . . . . . . . . . . . . . 239
12 Input/Output on Streams . . . . . . . . . . . . . . . . . . . . . . . . 245
13 Low-Level Input/Output . . . . . . . . . . . . . . . . . . . . . . . . . 319
14 File System Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
15 Pipes and FIFOs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
16 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
17 Low-Level Terminal Interface . . . . . . . . . . . . . . . . . . . . . . 465
18 Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
19 Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
20 Arithmetic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
21 Date and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
22 Resource Usage And Limitation . . . . . . . . . . . . . . . . . . . . 605
23 Non-Local Exits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
24 Signal Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
25 The Basic Program/System Interface . . . . . . . . . . . . . . . . 683
26 Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
27 Job Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
28 System Databases and Name Service Switch . . . . . . . . . . . 761
29 Users and Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
30 System Management . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
31 System Configuration Parameters . . . . . . . . . . . . . . . . . . . 815
32 DES Encryption and Password Handling . . . . . . . . . . . . . . 837
33 Debugging support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
A C Language Facilities in the Library . . . . . . . . . . . . . . . . . 849
B Summary of Library Facilities . . . . . . . . . . . . . . . . . . . . . 867
ii The GNU C Library
Table of Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Standards and Portability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 ISO C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 POSIX (The Portable Operating System Interface)
................................................. 2
1.2.3 Berkeley Unix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.4 SVID (The System V Interface Description) . . . . . . 3
1.2.5 XPG (The X/Open Portability Guide) . . . . . . . . . . . 3
1.3 Using the Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.1 Header Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Macro Definitions of Functions. . . . . . . . . . . . . . . . . . . 5
1.3.3 Reserved Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.4 Feature Test Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Roadmap to the Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Error Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 Checking for Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Error Messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.3
Allocation Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.2.3.1 How to install the tracing functionality . . 48
3.2.3.2 Example program excerpts . . . . . . . . . . . . . 48
3.2.3.3 Some more or less clever ideas . . . . . . . . . 49
3.2.3.4 Interpreting the traces . . . . . . . . . . . . . . . . . 50
3.2.4 Obstacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.2.4.1 Creating Obstacks . . . . . . . . . . . . . . . . . . . . 52
3.2.4.2 Preparing for Using Obstacks . . . . . . . . . . 52
3.2.4.3 Allocation in an Obstack . . . . . . . . . . . . . . 53
3.2.4.4 Freeing Objects in an Obstack . . . . . . . . . 54
3.2.4.5 Obstack Functions and Macros . . . . . . . . . 55
3.2.4.6 Growing Objects . . . . . . . . . . . . . . . . . . . . . . 55
3.2.4.7 Extra Fast Growing Objects . . . . . . . . . . . 57
3.2.4.8 Status of an Obstack . . . . . . . . . . . . . . . . . . 59
3.2.4.9 Alignment of Data in Obstacks . . . . . . . . . 59
3.2.4.10 Obstack Chunks . . . . . . . . . . . . . . . . . . . . . 60
3.2.4.11 Summary of Obstack Functions . . . . . . . 60
3.2.5 Automatic Storage with Variable Size . . . . . . . . . . . 62
3.2.5.1 alloca Example . . . . . . . . . . . . . . . . . . . . . . 62
3.2.5.2 Advantages of alloca . . . . . . . . . . . . . . . . . 63
3.2.5.3 Disadvantages of alloca. . . . . . . . . . . . . . . 63
3.2.5.4 GNU C Variable-Size Arrays . . . . . . . . . . . 64
3.3 Resizing the Data Segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.4 Locking Pages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4.1 Why Lock Pages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4.2 Locked Memory Details . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4.3 Functions To Lock And Unlock Pages . . . . . . . . . . . 66
4 Character Handling . . . . . . . . . . . . . . . . . . . . . . . 69
4.1 Classification of Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.2 Case Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.3 Character class determination for wide characters . . . . . . . . . 72
4.4 Notes on using the wide character classes . . . . . . . . . . . . . . . . 75
4.5 Mapping of wide characters.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
16 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
16.1 Socket Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
16.2 Communication Styles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
16.3 Socket Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
16.3.1 Address Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
16.3.2 Setting the Address of a Socket . . . . . . . . . . . . . . . 421
16.3.3 Reading the Address of a Socket . . . . . . . . . . . . . . 421
16.4 Interface Naming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
16.5 The Local Namespace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
16.5.1 Local Namespace Concepts . . . . . . . . . . . . . . . . . . . 423
16.5.2 Details of Local Namespace . . . . . . . . . . . . . . . . . . 423
16.5.3 Example of Local-Namespace Sockets . . . . . . . . . 424
16.6 The Internet Namespace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
16.6.1 Internet Socket Address Formats. . . . . . . . . . . . . . 426
16.6.2 Host Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
16.6.2.1 Internet Host Addresses . . . . . . . . . . . . . 427
16.6.2.2 Host Address Data Type . . . . . . . . . . . . 428
16.6.2.3 Host Address Functions . . . . . . . . . . . . . 429
16.6.2.4 Host Names . . . . . . . . . . . . . . . . . . . . . . . . 431
16.6.3 Internet Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
16.6.4 The Services Database . . . . . . . . . . . . . . . . . . . . . . . 435
16.6.5 Byte Order Conversion . . . . . . . . . . . . . . . . . . . . . . . 436
16.6.6 Protocols Database . . . . . . . . . . . . . . . . . . . . . . . . . . 437
16.6.7 Internet Socket Example . . . . . . . . . . . . . . . . . . . . . 439
16.7 Other Namespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
16.8 Opening and Closing Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . 440
16.8.1 Creating a Socket. . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
16.8.2 Closing a Socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
16.8.3 Socket Pairs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
16.9 Using Sockets with Connections . . . . . . . . . . . . . . . . . . . . . . . 442
16.9.1 Making a Connection . . . . . . . . . . . . . . . . . . . . . . . . 442
16.9.2 Listening for Connections . . . . . . . . . . . . . . . . . . . . 444
16.9.3 Accepting Connections . . . . . . . . . . . . . . . . . . . . . . . 444
16.9.4 Who is Connected to Me? . . . . . . . . . . . . . . . . . . . . 445
16.9.5 Transferring Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
16.9.5.1 Sending Data . . . . . . . . . . . . . . . . . . . . . . . 446
16.9.5.2 Receiving Data . . . . . . . . . . . . . . . . . . . . . 447
16.9.5.3 Socket Data Options . . . . . . . . . . . . . . . . 448
16.9.6 Byte Stream Socket Example . . . . . . . . . . . . . . . . . 448
16.9.7 Byte Stream Connection Server Example . . . . . . 449
16.9.8 Out-of-Band Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
16.10 Datagram Socket Operations . . . . . . . . . . . . . . . . . . . . . . . . . 455
xi
18 Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
18.1 Overview of Syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
18.2 Submitting Syslog Messages. . . . . . . . . . . . . . . . . . . . . . . . . . . 494
18.2.1 openlog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
18.2.2 syslog, vsyslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
18.2.3 closelog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
18.2.4 setlogmask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
18.2.5 Syslog Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
xii The GNU C Library
19 Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
19.1 Predefined Mathematical Constants . . . . . . . . . . . . . . . . . . . 501
19.2 Trigonometric Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
19.3 Inverse Trigonometric Functions . . . . . . . . . . . . . . . . . . . . . . . 504
19.4 Exponentiation and Logarithms . . . . . . . . . . . . . . . . . . . . . . . 505
19.5 Hyperbolic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
19.6 Special Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
19.7 Known Maximum Errors in Math Functions . . . . . . . . . . . . 513
19.8 Pseudo-Random Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
19.8.1 ISO C Random Number Functions . . . . . . . . . . . . 531
19.8.2 BSD Random Number Functions . . . . . . . . . . . . . 532
19.8.3 SVID Random Number Function . . . . . . . . . . . . . 534
19.9 Is Fast Code or Small Code preferred? . . . . . . . . . . . . . . . . . 538
26 Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
26.1 Running a Command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 729
26.2 Process Creation Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
26.3 Process Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
26.4 Creating a Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
26.5 Executing a File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
26.6 Process Completion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
26.7 Process Completion Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
26.8 BSD Process Wait Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 738
26.9 Process Creation Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
1 Introduction.
See Appendix B [Summary of Library Facilities], page 867, for an alphabetical list of the
functions and other symbols provided by the library. This list also states which standards
each function or symbol comes from.
1.2.1 ISO C
The GNU C library is compatible with the C standard adopted by the American Na-
tional Section 1.3.4 [Feature Test Macros], page 7, for
information on how to do this.
Being able to restrict the library to include only ISO C features is important because
ISO C puts limitations on what names can be defined by the library implementation, and
the GNU extensions don’t fit these limitations. See Section 1.3.3 [Reserved Names], page 5,.
All other library names are reserved if your program explicitly includes the header file that
defines or declares them. There are several reasons for these restrictions:
•.
• It avoids the possibility of a user accidentally redefining a library function that is called
by other library functions. If redefinition were allowed, those other functions would not
work properly.
• It allows the compiler to do whatever special optimizations it pleases on calls to these
functions, without the possibility that they may have been redefined by the user. Some
library facilities, such as those for dealing with variadic arguments (see Section A.2
[Variadic Functions], page 850) and non-local exits (see Chapter 23 [Non-Local Exits],
page 625), actually require a considerable amount of cooperation on the part of the C
compiler, and with respect to the implementation, it might be easier for the compiler
to treat these as built-in parts of the language.
In addition to the names documented in this manual, reserved names include all external
identifiers (global functions and variables) that begin with an underscore (‘_’) and all iden-
tifiers.
• Names beginning with a capital ‘E’ followed a digit or uppercase letter may be used for
additional error code names. See Chapter 2 [Error Reporting], page 15.
• Names that begin with either ‘is’ or ‘to’ followed by a lowercase letter may be used
for additional character testing and conversion functions. See Chapter 4 [Character
Handling], page 69.
• Names that begin with ‘LC_’ followed by an uppercase letter may be used for additional
macros specifying locale attributes. See Chapter 7 [Locales and Internationalization],
page 163.
• Names of all existing mathematics functions (see Chapter 19 [Mathematics], page 501)
suffixed with ‘f’ or ‘l’ are reserved for corresponding functions that operate on float
and long double arguments, respectively.
• Names that begin with ‘SIG’ followed by an uppercase letter are reserved for additional
signal names. See Section 24.2 [Standard Signals], page 637.
• Names that begin with ‘SIG_’ followed by an uppercase letter are reserved for additional
signal actions. See Section 24.3.1 [Basic Signal Handling], page 646.
• Names beginning with ‘str’, ‘mem’, or ‘wcs’ followed by a lowercase letter are reserved
for additional string and array functions. See Chapter 5 [String and Array Utilities],
page 79.
Chapter 1: Introduction 7
• Names that end with ‘_t’ are reserved for additional type names.
In addition, some individual header files reserve names beyond those that they actually
define. You only need to worry about these restrictions if your program includes that
particular header file.
• The header file ‘dirent.h’ reserves names prefixed with ‘d_’.
• The header file ‘fcntl.h’ reserves names prefixed with ‘l_’, ‘F_’, ‘O_’, and ‘S_’.
• The header file ‘grp.h’ reserves names prefixed with ‘gr_’.
• The header file ‘limits.h’ reserves names suffixed with ‘_MAX’.
• The header file ‘pwd.h’ reserves names prefixed with ‘pw_’.
• The header file ‘signal.h’ reserves names prefixed with ‘sa_’ and ‘SA_’.
• The header file ‘sys/stat.h’ reserves names prefixed with ‘st_’ and ‘S_’.
• The header file ‘sys/times.h’ reserves names prefixed with ‘tms_’.
• The header file ‘termios.h’ reserves names prefixed with ‘c_’, ‘V’, ‘I’, ‘O’, and ‘TC’;
and names prefixed with ‘B’ followed by a digit.
REENTRANT Macro
THREAD SAFE Macro.
Chapter 1: Introduction 11
• Chapter 19 [Mathematics], page 501, contains information about the math library func-
tions. These include things like random-number generators and remainder functions on
integers as well as the usual trigonometric and exponential functions on floating-point
numbers.
• Chapter 20 [Low-Level Arithmetic Functions], page 539, describes functions for simple
arithmetic, analysis of floating-point values, and reading numbers from strings.
• Chapter 9 [Searching and Sorting], page 209, contains information about functions for
searching and sorting arrays. You can use these functions on any kind of array by
providing an appropriate comparison function.
• Chapter 10 [Pattern Matching], page 219, presents functions for matching regular ex-
pressions and shell file name patterns, and for expanding words as the shell does.
• Chapter 21 [Date and Time], page 571, describes functions for measuring both calendar
time and CPU time, as well as functions for setting alarms and timers.
• Chapter 6 [Character Set Handling], page 119, contains information about manipulating
characters and strings using character sets larger than will fit in the usual char data
type.
• Chapter 7 [Locales and Internationalization], page 163, describes how selecting a par-
ticular country or language affects the behavior of the library. For example, the locale
affects collation sequences for strings and how monetary values are formatted.
• Chapter 23 [Non-Local Exits], page 625, contains descriptions of the setjmp and
longjmp functions. These functions provide a facility for goto-like jumps which can
jump from one function to another.
• Chapter 24 [Signal Handling], page 635, tells you all about signals—what they are, how
to establish a handler that is called when a particular kind of signal is delivered, and
how to prevent signals from arriving during critical sections of your program.
• Chapter 25 [The Basic Program/System Interface], page 683, tells how your programs
can access their command-line arguments and environment variables.
• Chapter 26 [Processes], page 729, contains information about how to start new processes
and run programs.
• Chapter 27 [Job Control], page 741, describes functions for manipulating process groups
and the controlling terminal. This material is probably only of interest if you are writing
a shell or other program which handles job control specially.
• Chapter 28 [System Databases and Name Service Switch], page 761, describes the ser-
vices which are available for looking up names in the system databases, how to deter-
mine which service is used for which database, and how these services are implemented
so that contributors can design their own services.
• Section 29.13 [User Database], page 789, and Section 29.14 [Group Database], page 792,
tell you how to access the system user and group databases.
• Chapter 30 [System Management], page 799, describes functions for controlling and
getting information about the hardware and software configuration your program is
executing under.
• Chapter 31 [System Configuration Parameters], page 815, tells you how you can get
information about various operating system limits. Most of these parameters are pro-
vided for compatibility with POSIX.
Chapter 1: Introduction 13
• Appendix B [Summary of Library Facilities], page 867, gives a summary of all the
functions, variables, and macros in the library, with complete data types and function
prototypes, and says what standard or system each is derived from.
• Appendix D [Library Maintenance], page 979, explains how to build and install the
GNU C library on your system, how to report any bugs you might find, and how to
add new functions or port the library to a new system.
If you already know the name of the facility you are interested in, you can look it up
in Appendix B [Summary of Library Facilities], page 867..
14 The GNU C Library
Chapter 2: Error Reporting 15
2 Error Reporting ‘errno.h’ to use this facility.
All the error codes have symbolic names; they are macros defined in ‘errno.h’. The
names start with ‘E’ and an upper-case letter or digit; you should consider names of this
form to be reserved names. See Section 1.3.3 [Reserved Names], page 5..
On non-GNU systems, almost any system call can return EFAULT if it is given an invalid
pointer as an argument. Since this could only happen as a result of a bug in your program,
and since it will not happen on the GNU system,.
int ED Macro
The experienced user will know what is wrong.
strerror and perror produce the exact same message for any given error code; the
precise text varies from system to system. On the GNU system,.
28 The GNU C Library,
Chapter 2: Error Reporting 29
requires error messages to be preceded by the program name and programs which read
some input files should should provide information about the input file name and the line
number in case an error is encountered while reading the file. For these occasions there are
two functions available which are widely used throughout the GNU project. These functions
are declared in ‘error.h’.
void error (int status, int errnum, const char *format, ...) Function elimintates.
void error at line (int status, int errnum, const char *fname, Function
unsigned int lineno, const char *format, ...)
The error_at_line function is very similar to the error function. The only dif-
ference an.
30 The GNU C Library;
if (error_message_count != 0)
error (EXIT_FAILURE, 0, "%u errors found", error_message_count);
Chapter 2: Error Reporting 31
}
error and error_at_line are clearly the functions of choice and enable the programmer
to write applications which follow the GNU coding standard. The GNU libc additionally
contains functions which are used in BSD for the same purpose. These functions are declared
in ‘err.h’. It is generally advised to not use these functions. They are included only for
compatibility.
process is mainly just a matter of making sure that the same byte of memory isn’t used to
store two different things.
Processes allocate memory in two major ways: by exec and programmatically. Actually,
forking is a third way, but it’s not very interesting. See Section 26.4 [Creating a Process],
page 731. Section 3.2.1
[Memory Allocation in C Programs], page 35).
Once that program begins to execute, it uses programmatic allocation to gain additional
memory. In a C program with the GNU C library, there are two kinds of programmatic
allocation: automatic and dynamic. See Section 3.2.1 [Memory Allocation in C Programs],
page 35.
•
The text segment contains a program’s instructions and literals and static constants.
It is allocated by exec and stays the same size for the life of the virtual address space.
• The data segment is working storage for the program. It can be preallocated and
preloaded by exec and the process can extend or shrink it by calling functions as
described in See Section 3.3 [Resizing the Data Segment], page 64. Its lower end is
fixed.
• The stack segment contains a program stack. It grows as the stack grows, but doesn’t
shrink when the stack shrinks.
Chapter 3: Virtual Memory Allocation And Paging 35;
}
The contents of the block are undefined; you must initialize it yourself (or use calloc
instead; see Section 3.2.2.5 [Allocating Cleared Space], page 39). Normally you would cast
the value as a pointer to the kind of object that you want to store in the block. Here
we show an example of doing so, and of initializing the space with zeros using the library
function memset (see Section 5.4 [Copying and Concatenation], page 83):;
...
Chapter 3: Virtual Memory Allocation And Paging 37)
{
register.
But in general, it is not guaranteed that calloc calls malloc internally. Therefore, if an
application provides its own malloc/realloc/free outside the C library, it should always
define calloc, too.
The abortfn argument is the function to call when an inconsistency is found. If you
supply a null pointer, then mcheck uses a default function which prints a message
and calls abort (see Section 25.6.4 [Aborting a Program], page 726). ‘ implemen-
tation might might call free, so protect it too. */
printf ("freed pointer %p\n", ptr);
/* Restore our own hooks */
46 The GNU C Library
__malloc_hook = my_malloc_hook;
__free_hook = my_free_hook;
}
main ()
{
...
}
The mcheck function (see Section 3.2.2.9 [Heap Consistency Checking], page 41) works
by installing such hooks.).
static void
enable (int sig)
{
mtrace ();
signal (SIGUSR1, enable);
}
static void
disable (int sig)
{
50 The GNU C Library
muntrace ();
signal (SIGUSR2, disable);
}
int
main (int argc, char *argv[])
{
...
...
}.
3.2.4 Obstacks.
52 The GNU C Library);
For example, here is a function that allocates a copy of a string str in a specific obstack,
which is in the variable string_obstack:
struct obstack string_obstack;
char *
54 The GNU C Library
Note that if object is a null pointer, the result is an uninitialized obstack. To free all
memory in an obstack but leave it valid for further allocation, call obstack_free with the
address of the first object allocated on the obstack:
Chapter 3: Virtual Memory Allocation And Paging 55.
void obstack grow (struct obstack *obstack-ptr, void *data, int Function
size)
To add a block of initialized space, use obstack_grow, which is the growing-object
analogue of obstack_copy. It adds size bytes of data to the growing object, copying
the contents from data.
void obstack grow0 (struct obstack *obstack-ptr, void *data, int Function
size)
This is the growing-object analogue of obstack_copy0. It adds size bytes copied from
data, followed by an additional null character.
void obstack ptr grow (struct obstack *obstack-ptr, void *data) Function
Adding the value of a pointer one can use the function obstack_ptr_grow. It adds
sizeof (void *) bytes containing the value of data.
void obstack int grow (struct obstack *obstack-ptr, int data) Function
A single value of type int can be added by using the obstack_int_grow function. It
adds sizeof (int) bytes to the growing object and initializes them with the value
of data.
This function can return a null pointer under the same conditions as obstack_alloc
(see Section 3.2.4.3 [Allocation in an Obstack], page 53)..
While you know there is room, you can use these fast growth functions for adding data
to a growing object:
void obstack ptr grow fast (struct obstack *obstack-ptr, void Function
*data)
The function obstack_ptr_grow_fast adds sizeof (void *) bytes containing the
value of data to the growing object in obstack obstack-ptr.
void obstack int grow fast (struct obstack *obstack-ptr, int Function
data)
The function obstack_int_grow_fast adds sizeof (int) bytes containing the value
of data to the growing object in obstack obstack-ptr.
void obstack blank fast (struct obstack *obstack-ptr, int size) Function++);
}
}
}
Chapter 3: Virtual Memory Allocation And Paging 59.
60 The GNU C Library.
• Some non-GNU systems fail to support alloca, so it is less portable. However, a slower
emulation of alloca written in C is available for use on systems with this deficiency.
ENOMEM The request would cause the data segment to overlap another segment or
exceed the process’ data storage limit. Section 26.4
[Creating a Process], page 731. Section 22.2 [Limiting Resource
Usage], page 607..
When the function fails, it does not affect the lock status of any pages..
When the process is in MCL_FUTURE mode because it successfully executed this func-
tion and specified MCL_CURRENT, any system call by the process that requires space
be added to its virtual address space fails with errno = ENOMEM if locking the addi-
tional.
4 Character Handling
Programs that work with characters and strings often need to classify a character—is it
alphabetic, is it a digit, is it whitespace, and so on—and perform case conversion operations
on characters. The functions in the header file ‘ctype Section 7.3 [Categories of Activities that Locales Affect], page 164.)
The ISO C standard specifies two different sets of functions. The one set works on char
type characters, the other one on wchar_t wide characters (see Section 6.1 [Introduction to
Extended Characters], page 119).
The GNU C library also provides a function which is not defined in the ISO C standard
but which is available as a version for single byte characters as well. Section 6.3.3 [Converting Single Characters], page 126,.
For the generally available mappings, the ISO C standard defines convenient shortcuts
so that it is not necessary to call wctrans for them.
The same warnings given in the last section for the use of the wide character classification
functions apply here. It is not possible to simply cast a char type value to a wint_t and
use it as an argument to towctrans calls.
78 The GNU C Library
Chapter 5: String and Array Utilities 79 informa-
tion on encodings see Section 6.1 [Introduction to Extended Characters], page 119). Section 6.1 [Introduction to Extended
Characters], page 119)..
strnlen (string, 5)
⇒ 5
This function is a GNU extension and is declared in ‘string.h’.
void * memcpy (void *restrict to, const void *restrict from, Function
size_t size)));
84 The GNU C Library
void * mempcpy (void *restrict to, const void *restrict from, Function
size_t size).
wchar_t *
wmempcpy (wchar_t *restrict wto, const wchar_t *restrict wfrom,
size_t size)
{
return (wchar_t *) mempcpy (wto, wfrom, size * sizeof (wchar_t));
}
This function is a GNU extension.
void * memmove (void *to, const void *from, size_t size) Function.
void * memccpy (void *restrict to, const void *restrict from, Function
int c, size_t size).
char * strcpy (char *restrict to, const char *restrict from) Function
This copies characters from the string from (up to and including the terminating null
character) into the string to. Like memcpy, this function has undefined results if the
strings overlap. The return value is the value of to.
char * strncpy (char *restrict to, const char *restrict from, Function
size_t size)
This function is similar to strcpy but always copies exactly size characters into to.
If the length of from is more than size, then strncpy copies just the first size charac-
ters..
If malloc cannot allocate space for the new string, strdup returns a null pointer.
Otherwise it returns a pointer to the new string.
char * stpcpy (char *restrict to, const char *restrict from) Function ‘string.h’.
88 The GNU C Library
char * stpncpy (char *restrict to, const char *restrict from, Function
size_t size)
This function is similar to stpcpy but copies always exactly size characters into to.
If the length of from is more then ‘string>
int
main (void)
{
char *wr_path = strdupa (path);
char *cp = strtok (wr_path, ":");
char * strcat (char *restrict to, const char *restrict from) Function)
{
90 The GNU C Library
va_end (ap);
if (result != NULL)
{
result[0] = ’\0’;);
char *wp;
if (allocated != NULL)
{
char *newp;
wp = result;
for (s = str; s != NULL; s = va_arg (ap, const char *))
{
size_t len = strlen (s);
}
wp = newp + (wp - result);.
char * strncat (char *restrict to, const char *restrict from, Function
size_t size))
{
to[strlen (to) + size] = ’\0’;
strncpy (to + strlen (to), from, size);
return to;
}
The behavior of strncat is undefined if the strings overlap.
Chapter 5: String and Array Utilities 93
main ()
{
strncpy (buffer, "hello", SIZE);
puts (buffer);
strncat (buffer, ", world", SIZE - strlen (buffer) - 1);
puts (buffer);
}
The output produced by this program looks like:
hello
hello, wo
void bcopy (const void *from, void *to, size_t size) Function
This is a partially obsolete alternative for memmove, derived from BSD. Note that it
is not quite equivalent to memmove, because the arguments are not in the same order
and there is no return value.
int memcmp (const void *a1, const void *a2, size_t size) Function.
int wmemcmp (const wchar_t *a1, const wchar_t *a2, size_t Function
size) require-
ments,:
Chapter 5: String and Array Utilities 95
struct foo
{
unsigned char tag;
union
{
double f;
long i;
char *p;
} value;
};
you are better off writing a specialized comparison function to compare struct foo objects
instead of comparing them with memcmp.
int strncmp (const char *s1, const char *s2, size_t size) Function
This function is the similar to strcmp, except that no more than size wide characters
are compared. In other words, if the two strings are the same in their first size wide
characters, the return value is zero.
int wcsncmp (const wchar_t *ws1, const wchar_t *ws2, size_t Function
size)
This function is the similar to wcscmp, except that no more than size wide characters
are compared. In other words, if the two strings are the same in their first size wide
characters, the return value is zero.
int strncasecmp (const char *s1, const char *s2, size_t n) Function
This function is like strncmp, except that differences in case are ignored. Like
strcasecmp, it is locale dependent how uppercase and lowercase characters are re-
lated.
strncasecmp is a GNU extension.
int wcsncasecmp (const wchar_t *ws1, const wchar_t *s2, size_t Function
n)
This function is like wcsncmp, except that differences in case are ignored. Like
wcscasecmp, it is locale dependent how uppercase and lowercase characters are re-
lated.. */
int bcmp (const void *a1, const void *a2, size_t size) Function
This is an obsolete alias for memcmp, derived from BSD..
int
compare_elements (char **p1, char **p2)
{
return strcoll (*p1, *p2);
}
void
sort_strings (char **array, int nstrings)
{
/* Sort temp_array by comparing the strings. */
qsort (array, nstrings,
sizeof (char *), compare_elements);
}
size_t strxfrm (char *restrict to, const char *restrict from, Function
size_t size) Section 5.4 [Copying
and Concatenation], page 83.
Chapter 5: String and Array Utilities 99; };
int
compare_elements (struct sorter *p1, struct sorter *p2)
100 The GNU C Library
{
return strcmp (p1->transformed, p2->transformed);
}
void
sort_strings_fast (char **array, int nstrings)
{
struct sorter temp_array[nstrings];
int i;
temp_array[i].input = array[i];
/* Transform array[i]. */
transformed_length = strxfrm (transformed, array[i], length);
temp_array[i].transformed = transformed;
}
nstrings, compare_elements);.
wcsstr is an depricated alias for wcsstr. This is the name originally used in the
X/Open Portability Guide before the Amendment 1 to ISO C90 was published..
Note that “character” is here used in the sense of byte. In a string using a multibyte
character encoding (abstract) character consisting of more than one byte are not
treated as an entity. Each byte is treated separately. The function is not locale-
dependent.
memory. See Section 24.2.1 [Program Error Signals], page 637. Even if the operation of
strtok or wcstok would not require a modification of the string (e.g., if there is exactly
one token) the string can (and in the GNU. See Section 24.4.6 [Signal Handling
and Nonreentrant Functions], page 659, for a discussion of where and why reentrancy is
important.
Here is a simple example showing the use of strtok.
#include <string.h>
#include <stddef.h>
...
...
done by the user. Successive calls to strsep move the pointer along the tokens sep-
arated>
...
...
int
main (int argc, char *argv[])
{
110 The GNU C Library
if (argc < 2)
{
fprintf (stderr, "Usage %s <arg>\n", prog);
exit (1);
}
...
}
Portability Note: This function may produce different results on different systems.
int
main (int argc, char *argv[])
{
char *prog;
char *path = strdupa (argv[0]);
if (argc < 2)
{
fprintf (stderr, "Usage %s <arg>\n", prog);
exit (1);
}
...
5.9 strfry ‘string.h’.
n = (n << 8) | *in++;
if (--len > 0)
n = (n << 8) | *in;
}
memcpy (cp, l64a (htonl (n)), 6);
cp += 6;
}
*cp = ’\0’;
return out;
}
It is strange that the library does not provide the complete functionality needed but
so be it.
To decode data produced with l64a the following function should be used.
The l64a and a64l functions use a base 64 encoding, in which each character of an
encoded string represents six bits of an input word. These symbols are used for the base 64
digits:
0 1 2 3 4 5 6 7
0 . / 0 1 2 3 4 5
8 6 7 8 9 A B C D
16 E F G H I J K L
24 M N O P Q R S T
32 U V W X Y Z a b
40 c d e f g h i j
48 k l m n o p q r
56 s t u v w x y z
This encoding scheme is not standard. There are some other encoding methods which
are much more widely used (UU encoding, MIME encoding). Generally, it is better to use
one of these encodings.
error_t argz create (char *const argv[], char **argz, size_t Function
*argz len)
The argz_create function converts the Unix-style argument vector argv (a vector
of pointers to normal C strings, terminated by (char *)0; see Section 25.1 [Program
Arguments], page 683) into an argz vector with the same elements, which is returned
in argz and argz len.
error_t argz create sep (const char *string, int sep, char **argz, Function
size_t *argz len)
The argz_create_sep function converts the null-terminated string string into an argz
vector (returned in argz and argz len) by splitting it into elements at every occurrence
of the character sep.
size_t argz count (const char *argz, size_t arg len) Function
Returns the number of elements in the argz vector argz and argz len.
void argz extract (char *argz, size_t argz len, char **argv) Function Section 26.5
[Executing a File], page 732).
void argz stringify (char *argz, size_t len, int sep) Function
The argz_stringify converts argz into a normal string with the elements separated
by the character sep, by replacing each ’\0’ inside argz (except the last one, which
terminates the string) with sep. This is handy for printing argz in a readable manner.
Chapter 5: String and Array Utilities 115
error_t argz add (char **argz, size_t *argz len, const char *str) Function
The argz_add function adds the string str to the end of the argz vector *argz, and
updates *argz and *argz len accordingly.
error_t argz add sep (char **argz, size_t *argz len, const char Function
*str, int delim)
The argz_add_sep function is similar to argz_add, but str is split into separate ele-
ments in the result at occurrences of the character delim. This is useful, for instance,
for adding the components of a Unix search path to an argz vector, by using a value
of ’:’ for delim.
error_t argz append (char **argz, size_t *argz len, const char Function
*buf, size_t buf len)
The argz_append function appends buf len bytes starting at buf to the argz vector
*argz, reallocating *argz to accommodate it, and adding buf len to *argz len.
error_t argz delete (char **argz, size_t *argz len, char *entry) Function.
error_t argz insert (char **argz, size_t *argz len, char *before, Function
const char *entry).
char * argz next (char *argz, size_t argz len, const char *entry) Function;
116 The GNU C Library
Note that the latter depends on argz having a value of 0 if it is empty (rather than a
pointer to an empty block of memory); this invariant is maintained for argz vectors
created by the functions here.
char * envz entry (const char *envz, size_t envz len, const char Function
*name)
The envz_entry function finds the entry in envz with the name name, and returns a
pointer to the whole entry—that is, the argz element which begins with name followed
by a ’=’ character. If there is no entry with that name, 0 is returned.
char * envz get (const char *envz, size_t envz len, const char Function
*name)
The envz_get function finds the entry in envz with the name name (like envz_entry),
and returns a pointer to the value portion of that entry (following the ’=’). If there
is no entry with that name (or only a null entry), 0 is returned.
error_t envz add (char **envz, size_t *envz len, const char Function
*name, const char *value)).
Chapter 5: String and Array Utilities 117
error_t envz merge (char **envz, size_t *envz len, const char Function
*envz2, size_t envz2 len, int override).
As there are for the char data type macros are available for specifying the minimum
and maximum value representable in an object of type wchar_t.
These internal representations present problems when it comes to storing and transmit-
tal. Because each single wide character consists of more than one byte, they are effected by
byte-ordering. Thus, machines with different endianesses would see different values when
accessing the same data. This byte ordering concern also applies for communication pro-
tocols that are all byte-based and, thereforet operation system does not understand
EBCDIC directly the parameters-to-system calls have to be converted first anyhow.
• The simplest character sets are single-byte character sets. There can be only up to
256 characters (for 8 bit character sets), which is not sufficient to cover all languages
but might be sufficient to handle a specific text. Handling of a 8 bit character sets is
simple. This is not true for other kinds presented later, and therefore, the application
one uses might require the use of 8 bit character sets.
• The ISO 2022 standard defines a mechanism for extended character sets where one
character can be represented by more than one byte. This is achieved by associating a
state with the text. Characters that can be used to change the state can be embedded
in the text. Each byte in the text might have a different interpretation in each state.
122 The GNU C Library
The state might even influence whether a given byte stands for a character on its own
or whether it has to be combined with some more bytes. operations.
• Early attempts to fix 8 bit character sets for other languages using the Roman alphabet
lead to character sets like ISO 6937. Here bytes representing characters like the acute
accent do not produce output themselves: one has to combine them with other charac-
ters to get the desired result. For example, the byte sequence.
• Instead of converting the Unicode or ISO 10646 text used internally, it is often also
sufficient to simply use an encoding different than UCS-2/UCS-4. The Unicode and
ISO 10646 standards even specify such an encoding: UTF-8. This encoding is able to
represent all of ISO 10646 31 bits in a byte string of length one to six.
Chapter 6: Character Set Handling 123
type encodes ISO 10646 characters. If this symbol is not defined one should avoid making
assumptions about the wide character representation. If the programmer uses only the
functions provided by the C library to handle wide character strings there should be no
compatibility problems with other.
Despite the limitation that the single byte value always is it is necessary btowc.
There also is a function for the conversion in the other direction.
Chapter 6: Character Set Handling 127
There are more general functions to convert single character from multibyte representa-
tion to wide characters and vice versa. These functions pose no limit on the length of the
multibyte representation and they also do not require it to be in the initial state.));
128 The GNU C Library
The attentive reader now will note that mbrlen can be implemented as
Chapter 6: Character Set Handling 129;
Chapter 6: Character Set Handling 131
}
Chapter 6: Character Set Handling 133
running text is not always an adequate solution and, therefore, should never be used in
generally used code.
The generic conversion interface (see Section 6.5 [Generic Charset Conversion], page 140);
while (!eof)
{
ssize_t nread;
ssize_t nwrite;
char *inp = buffer;
wchar_t outbuf[BUFSIZ];
wchar_t *outp = outbuf;
return 1;
}
guaranteed that no library function changes the state of any of these functions). For the
above reasons it is highly requested that the functions described in the previous section be
used in place of non-reentrant conversion functions..
wchar_t *
mbstowcs_alloc (const char *string)
{
size_t size = strlen (string) + 1;
wchar_t *buf = xmalloc (size * sizeof (wchar_t));
• Scan the string one character at a time, in order. Do not “back up” and rescan
characters already scanned, and do not intersperse the processing of different strings.
Here is an example of using mblen following these rules:
void
scan_string (char *s)
{
int length = strlen (s);.
• If neither the source nor the destination character set is the character set used for
wchar_t representation, there is at least a two-step process necessary to convert a
text using the functions above. One would have to select the source character set as
the multibyte encoding, convert the text into a wchar_t text, con-
straints.
iconv_t iconv open (const char *tocode, const char *fromcode) Function.
142 The GNU C Library
The iconv implementation can associate large data structure with the handle returned
by iconv_open. Therefore, it is crucial to free all the resources once all conversions are
carried out and the conversion is not needed anymore.
The iconv_close function was introduced together with the rest of the iconv func-
tions in XPG2 and is declared in ‘iconv.h’.
The standard defines only one actual conversion function. This has, therefore, the most
general interface: it allows conversion from one buffer to another. Conversion from a file to
a buffer, vice versa, or even file to file can be implemented on top of it.
Chapter 6: Character Set Handling 143
size_t iconv (iconv_t cd, char **inbuf, size_t *inbytesleft, char Function
**outbuf, size_t *outbytesleft)
The iconv function converts the text in the input buffer according to the rules associ-
ated.
144 The GNU C Library
‘icon.
return -1;
}
Chapter 6: Character Set Handling 145;
}
}
}
146 The GNU C Library
if (iconv_close (cd) != 0)
perror ("iconv_close");
This solution is problematic as it requires a great deal of effort to apply to all char-
acter sets (potentially an infinite set). The differences in the structure of the different
character sets is so large that many different variants of the table-processing functions
must be developed. In addition, the generic nature of these functions make them slower
than specifically implemented functions.
• The C library only contains a framework that can dynamically load object files and
execute the conversion functions contained therein.
This solution provides much more flexibility. The C library itself contains only very lit-
tle conver-
sion
148 The GNU C Library
destination character set is ISO 10646. The existing set of conversions is simply meant to
cover all conversions that might be of interest.
All currently available conversions use the triangulation method above, making conver-
sion.
The set of available conversions form a directed graph with weighted edges. The weights
on the edges are the costs specified in the ‘g ‘g ‘gconv-modules’ file, however, specifies that the new conversion modules can
perform this conversion with only the cost of 1.
A mysterious item about the ‘g ‘g effectedess, the internal representation consistently is named INTERNAL
even on big-endian systems where the representations are identical.
Chapter 6: Character Set Handling 151 ‘g ‘gconv.h’.
gconv_fct __fct
gconv_init_fct __init_fct
gconv_end_fct __end_fct
These elements contain pointers to the functions in the loadable module.
The interface will be explained below.
152 The GNU C Library pur-
pose.
int __is_last
This element is nonzero if this conversion step is the last one. This infor-
mation func-
tions-
log, Sec-
tion 6.3.2 [Representing the state of the conversion], page 124)..
154 The GNU C Library
int
gconv_init (struct __gconv_step *step)
{
/* Determine which direction. */
struct iso2022jp_data *new_data;
enum direction dir = illegal_dir;
enum variant var = illegal_var;
int result;
result = __GCONV_NOCONV;
if (dir != illegal_dir)
{
new_data = (struct iso2022jp_data *)
156 The GNU C Library conver-
sion
Chapter 6: Character Set Handling 157 hap-
pen if the ‘gconv-modules’ file has errors.
__GCONV_NOMEM
Memory required to store additional information could not be allocated.
The function called before the module is unloaded is significantly easier. It often has
nothing at all to do; in which case it can be left out completely.
The most important function is the conversion function itself, which can get quite com-
plicated for complex character sets. But since this is not of interest here, we will only
describe a possible skeleton for the conversion function.)
Chapter 6: Character Set Handling 159 conver-
sion,
160 The GNU C Library
/*;
available. */
data->__outbuf = outbuf;
break;
}
if (result != __GCONV_EMPTY_INPUT)
{
if (outerr != outbuf)
{
/* Reset the input buffer pointer. We
document here the complex case. */
size_t nstatus;
return status;
}
This information should be sufficient to write new modules. Anybody doing so should
also take a look at the available source code in the GNU C library sources. It contains
many examples of working and optimized modules.
Chapter 7: Locales and Internationalization 163
LC_ALL This is not an environment variable; Section 8.2.1.6 [User influence on gettext], page 203.
When you read the current locale for category LC_ALL, the value encodes the entire
combination of selected locales for all categories. In this case, the value is not just a
single locale name. In fact, we don’t make any promises about what it looks like. But environ-
ment variable and use its value to select the locale for category.
If a nonempty string is given for locale, then the locale of that name is used if possible.;
Portability Note: Some ISO C systems may define additional locale categories, and
future versions of the library will do so. For portability, assume that any symbol beginning
with ‘LC_’ might be defined in ‘locale.h’.).
Chapter 7: Locales and Internationalization 171 corre-
sponds.
Chapter 7: Locales and Internationalization 173.
174 The GNU C Library.
Chapter 7: Locales and Internationalization 175
FRAC_DIGITS
The same as the value returned by localeconv in the frac_digits ele-
ment ele-
ment of the struct lconv.
N_SIGN_POSN
The same as the value returned by localeconv in the n_sign_posn ele-
ment.
176 The GNU C Library.
An example of nl_langinfo usage is a function which has to print a given date and
time in a locale-specific way. At first one might think that, since strftime internally uses
the locale information, writing something like the following is enough:
Chapter 7: Locales and Internationalization 177.
ssize_t strfmon (char *s, size_t maxsize, const char *format, Function
...):
•.
178 The GNU C Library
‘+’, ‘(’ At most one of these flags can be used. They select which format
to represent the sign of a currency amount. By default, and if ‘+’ is
given, the locale equivalent of +/− is used. If ‘(’ is given, negative
amounts are enclosed in parentheses. The exact format is determined
by the values of the LC_MONETARY category of the locale selected at
program runtime.
‘!’ The output will not contain the currency symbol.
‘-’ Section 7.6.1.1 [Generic Numeric Formatting Parameters], page 168).
If the exact representation needs more digits than given by the field width, the dis-
played value is rounded. If the number of fractional digits is selected to be zero, no
decimal point is printed.
As a GNU extension, the strfmon implementation in the GNU libc allows an optional
‘L’ next as a format modifier. If this modifier is given, the argument is expected to
be a long double instead of a double value.
Finally, the last component is a format specifier. There are three specifiers defined:
‘i’ Use the locale’s rules for formatting an international currency value.
‘n’ Use the locale’s rules for formatting a national currency value.
‘%’
Chapter 7: Locales and Internationalization 179 −1 @"
180 The GNU C Library.
8 Message Translation in separate files
which are loaded at runtime depending on the language selection of the user.
The GNU C Library provides two different sets of functions to support message trans-
lation.
• locate the external data file with the appropriate translations.
• load the data and make it possible to address the messages
• map a given key to the translated message
The two approaches mainly differ in the implementation of this last step. The design
decisions made for this influences the whole rest. be-
low.
Chapter 8: Message Translation 185 Section 25.4.2 [Standard Environment Variables], page 720). Which
variables are examined is decided by the flag parameter of catopen. If the value is
NL_CAT_LOCALE (which is defined in ‘nl
EBADF The catalog does not exist.
ENOMSG.
186 The GNU C Library
char * catgets (nl_catd catalog desc, int set, int message, const Function
char *string)
• the string parameters should contain reasonable text (this also helps to under-
stand the program seems otherwise there would be no hint on the string which
is expected to be returned.
• all string arguments should be written in the same language. Section 8.1.4 [How to use the
catgets interface], page 189).
− a number. In this case the value of this number determines the set to which the
following messages are added.
− an identifier consisting of alphanumeric characters plus the underscore character.
In this case the set get automatically a number assigned. This value is one added
to the largest set number which so far appeared.
How to use the symbolic names is explained in section Section 8.1.4 [How to use
the catgets interface], page 189.
It is an error if a symbol name appears more than once. All following messages
are placed in a set with this number.
• If a line contains as the first non-whitespace characters the sequence $delset followed
by a whitespace character an additional argument is required to follow. This argument
can either be:
− a number. In this case the value of this number determines the set which will be
deleted.
− an identifier consisting of alphanumeric characters plus the underscore character.
This symbolic identifier must match a name for a set which previously was defined.
It is an error if the name is unknown.
In both cases all messages in the specified set will be removed. They will not appear in
the output. But if this set is later again selected with a $set command again messages
could be added and these messages will appear in the output.
• If a line contains after leading whitespaces the sequence ).
• Any other line must start with a number or an alphanumeric identifier (with the under-
score character included). The following characters (starting after the first whitespace
character) will form the string which gets associated with the currently selected set and
the message number represented by the number and identifier respectively. Section 8.1.4 [How to use the catgets interface], page 189).
There is one limitation with the identifier: it must not be Set. The reason will be
explained below.
188 The GNU C Library:
• Lines 1 and 9 are comments since they start with $ followed by a whitespace.
• The quoting character is set to ". Otherwise the quotes in the message definition would
have to be left away and in this case the message with the identifier two would loose
its leading whitespace.
• Mixing numbered messages with message having symbolic names is no problem and
the numbering happens automatically.
‘hello.msg’ and the program source file ‘hello.c’):
‘gettext’ manual (see section “GNU gettext utilities” in Native Language Support Library
and Tools). We will only give a short overview.
Though the catgets functions are available by default on more systems the gettext
interface is at least as portable as the former. The GNU ‘gettext’ package can be used
wherever the functions are not available..
char * dcgettext (const char *domainname, const char *msgid, int Function
category)
The dcgettext adds another argument to those which dgettext takes. This argu-
ment ‘locale.
Chapter 8: Message Translation 195.
This has the consequence that programs without language catalogs can display the cor-
rect.
char * dngettext (const char *domain, const char *msgid1, const Function
char *msgid2, unsigned long int n).
char * dcngettext (const char *domain, const char *msgid1, const Function
char *msgid2, unsigned long int n, int category).
Artificial Esperanto
Two forms, singular used for zero and one
Exceptional case in the language family. The header entry would be:
Plural-Forms: nplurals=2; plural=n>1;
Languages with this property include:
Romanic family
Three forms, special cases for one and two
The header entry would be:
Plural-Forms: nplurals=3; plural=n==1 ? 0 : n==2 ? 1 : 2;
Languages with this property include:
Celtic Gaeilge
Three forms, special cases for numbers ending in 1 and 2, 3, 4, except those ending in
1[1-4]
The header entry would look like this:
Plural-Forms: nplurals=3; \
plural=n%100/10==1 ? 2 : n%10==1 ? 0 : (n+9)%10>3 ? 2 : 1;
Languages with this property include:
Slavic family
Czech, Russian
Three forms, special cases for 1 and 2, 3, 4
The header entry would look like this:
Plural-Forms: nplurals=3; \
plural=(n==1) ? 1 : (n>=2 && n<=4) ? 2 : 0;
Languages with this property include:
Slavic family
Slovak
Three forms, special case for one and some numbers ending in 2, 3, or 4
The header entry would look like this:
Plural-Forms: nplurals=3; \
plural=n==1 ? 0 : \
n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2;
(Continuation in the next line is possible.)
Languages with this property include:
Slavic family
Polish
Four forms, special case for one and all numbers ending in 2, 3, or 4
The header entry would look like this:
Plural-Forms: nplurals=4; \
plural=n==1 ? 0 : n%10==2 ? 1 : n%10==3 || n%10==4 ? 2 : 3;
Languages with this property include:
Chapter 8: Message Translation 201
Slavic family
Slovenian
As a consequence many people say that the gettext approach is wrong and instead
catgets should be used which indeed does not have this problem. But there is a very
simple and powerful method to handle these kind of problems with the gettext functions. unam-
biguous.!
Chapter 8: Message Translation 203
If one now consistently uses the enlengthened string form and replaces the gettext calls
with calls to sgettext (this is normally limited to very few places in the GUI implementa-
tion).
• They are easy to write and therefore can be provided by the project they are used in.
This is not an answer by itself and must be seen together with the second part which
is:
• There is no way the C library can contain a version which can work everywhere. The
problem is the selection of the character to separate the prefix from the actual string
in the enlenghtened string. The examples above used | ‘iso.
The default value for dir name is system specific. It is computed from the value given
as the prefix while configuring the C library. This value normally is ‘/usr’ or ‘/’. For the
former the complete dir name is:
/usr/share/locale
We can use ‘/usr/share’ since the ‘.mo’ files containing the message catalogs are sys-
tem:
Chapter 8: Message Translation 205
{ ‘ be-
cause are two more or less standardized forms:
X/Open Format
language[_territory[.codeset]][@modifier]
CEN Format (European Community Standard)
language[_territory][+audience][+special][,[sponsor][_revision]]
The functions will automatically recognize which format is used. Less specific locale
names will be stripped of in the order of the following list:
1. revision
2. sponsor
3. special
4. codeset
5. normalized codeset
6. territory
7. audience/modifier
206 The GNU C Library
From the last entry one can see that the meaning of the modifier field in the X/Open
format and the audience format have the same meaning. Beside one can see that the
language field for obvious reasons never will be dropped.:
1. Remove all characters beside numbers and letters.
2. Fold letters to lowercase.
3. If the same only contains digits prepend the string ‘/usr/share/locale/locale.alias’ (replace ‘..
208 The GNU C Library
Chapter 9: Searching and Sorting 209
void * lfind (const void *key, void *base, size_t *nmemb, size_t Function
size, comparison_fn_t compar)
The lfind function is returned.
The mean runtime of this function is *nmemb/2. This function should only be used
elements often get added to or deleted from the array in which case it might not be
useful to sort the array before searching.
210 The GNU C Library
void * lsearch (const void *key, void *base, size_t *nmemb, Function
size_t size, comparison_fn_t compar)’.
void * bsearch (const void *key, const void *array, size_t count, Function
size_t size, comparison_fn_t compare)
The bsearch function Section 9.1 [Defining the Comparison
Function], page 209):
{.
struct critter
{
const char *name;
const char *species;
};
int
critter_cmp (const struct critter *c1, const struct critter *c2)
{
return strcmp (c1->name, c2->name);
}
void
print_critter (const struct critter *c)
{
printf ("%s, the %s\n", c->name, c->species);
};
Chapter 9: Searching and Sorting 213
Entries of the hashing table and keys for the search are defined using this type:.
int hsearch r (ENTRY item, ACTION action, ENTRY **retval, struct Function
hsearch_data *htab)
The hsearch_r function was called with an so far unknown key
and action set to ENTER.
ESRCH The action parameter is FIND and no corresponding element is found in
the table. function is returned.
void * tfind (const void *key, void *const *rootp, comparison_fn_t Function
compar)
The tfind function is similar to the tsearch function..
10 Pattern Matching
The GNU C Library provides pattern matching facilities for two kinds of patterns: reg-
ular expressions and file-name wildcards. The library also provides a facility for expanding
variable and command references and parsing text into words in the way the shell does.
int fnmatch (const char *pattern, const char *string, int flags) Function
This function tests whether the string string matches the pattern pattern. It re-
turns cannot experience an “error”—it always returns an
answer for whether the match succeeds. However, other implementations of fnmatch
might sometimes report “errors”. They would do so recom-
mend.
220 The GNU C Library
‘ksh’. The patterns are written in the form explained in the following table
where pattern-list is a | separated list of patterns.
?(pattern-list)
The pattern matches if zero or one occurrences of any of the pat-
terns in the pattern-list allow matching the input string.
*(pattern-list)
The pattern matches if zero or more occurrences of any of the pat-
terns in the pattern-list allow matching the input string.
+(pattern-list)
The pattern matches if one or more occurrences of any of the pat-
terns.
10.2 Globbing ‘glob.h’.
For use in the glob64 function ‘glob.h’ contains another definition for a very similar
type. glob64_t differs from glob_t only in the types of the members gl_readdir, gl_stat,
and gl_lstat.
int glob (const char *pattern, int flags, int (*errfunc) (const char Function
10.2.2 [Flags for Globbing], page 224,
There was an error opening a directory, and you used the flag GLOB_ERR
or your specified errfunc returned a nonzero value. See below.
In the event of an error, glob stores information in *vector-ptr about all the matches
it has found so far.
It is important to notice that the glob function will not fail if it encounters directories
or files which cannot be handled without the LFS interfaces. The implementation of
glob is supposed to use these functions internally. This at least is the assumptions
made by the Unix standard. The GNU extension of allowing the user to provide
own directory handling and stat functions complicates things a bit. If these callback
functions are used and a large file or directory is encountered glob can fail.
224 The GNU C Library
int glob64 (const char *pattern, int flags, int (*errfunc) (const Function
char *filename, int error-code), glob64_t *vector-ptr)
The glob64 function was added as part of the Large File Summit extensions but
is not part of the original LFS proposal. The reason for this is simple: it is not
necessary. The necessity for a glob64 function is added by the extensions of the
GNU glob implementation which allows the user to provide own directory handling
and stat functions. The readdir and stat functions do depend on the choice of _
FILE_OFFSET_BITS since the definition of the types struct dirent and struct stat
will change depending on the choice.
Beside this difference the glob64 works just like glob in all aspects.
This function is a GNU extension._NOSORT
Don’t sort the file names; return them in no particular order. (In practice, the
order will depend on the order of the entries in the directory.) The only reason
not to sort is to save time._PERIOD
The . character (period) is treated special. It cannot be matched by wildcards.
See Section 10.1 [Wildcard Matching], page 219, FNM_PERIOD.
GLOB_MAGCHAR
The GLOB_MAGCHAR value is not to be given to glob in the flags parameter.
Instead, glob sets this bit in the gl flags element of the glob t structure provided
as the result if the pattern used for matching contains any wildcard character.
GLOB_ALTDIRFUNC
Instead of the using the using the normal functions for accessing the filesys-
tem the glob implementation uses the user-supplied functions specified in the
structure pointed to by pglob parameter. For more information about the func-
tions refer to the sections about directory handling see Section 14.2 [Accessing
Directories], page 371, and Section 14.9.2 [Reading the Attributes of a File],
page 392.
GLOB_BRACE
If this flag is given the handling of braces in the pattern is changed. It is now
required that braces appear correctly grouped. I.e., for each opening brace there
226 The GNU C Library
be determined for other reasons this leads to an error. glob will return GLOB_
NOMATCH instead of using the pattern itself as the name.
This functionality is equivalent to what is available in C-shells if nonomatch
flag is not set.
GLOB_ONLYDIR
If this flag is used the globbing function takes this as a hint that the caller is
only interested in directories matching the pattern. If the information about
the type of the file is easily available non-directories will be rejected but no
extra work will be done to determine the information for each file. I.e., the
caller must still be able to filter directories out..
After you create a regex_t object, you can compile a regular expression into it by calling
regcomp.
int regcomp (regex_t *compiled, const char *pattern, int cflags) Function 10.3.2 [Flags for POSIX Regular Ex-
pressions], page 229.
If you use the flag REG_NOSUB, then regcomp omits from the compiled regular expres-
sion Section 10.3.6
[POSIX Regexp Matching Cleanup], page 232.).
Chapter 10: Pattern Matching 229
REG_ECOLLATE
The regular expression referred to an invalid collating element (one not de-
fined in the current locale for string collation). See Section 7.3 [Categories of
Activities that Locales Affect], page 164...
230 The GNU C Library.
You should always free the space in a regex_t structure with regfree before using the
structure to compile another regular expression.
When regcomp or regexec reports an error, you can use the function regerror to turn
it into an error message string..
wordfree (&result);
return status;
}
11 Input/Output Overview:
• Chapter 12 [Input/Output on Streams], page 245, which covers the high-level functions
that operate on streams, including formatted input and output.
• Chapter 13 [Low-Level Input/Output], page 319, which covers the basic I/O and control
functions on file descriptors.
• Chapter 14 [File System Interface], page 369, which covers functions for operating on
directories and for manipulating file attributes such as access modes and ownership.
• Chapter 15 [Pipes and FIFOs], page 411, which includes information on the basic
interprocess communication facilities.
• Chapter 16 [Sockets], page 417, which covers a more complicated interprocess commu-
nication facility with support for networking.
• Chapter 17 [Low-Level Terminal Interface], page 465, which covers functions for chang-
ing how input and output to terminals or other serial devices are processed.
use file descriptors if your program needs to do input or output in special modes, such as
nonblocking (or polled) input (see Section 13.14 [File Status Flags], page 357).
Streams provide a higher-level interface, layered on top of the primitive file descriptor
facilities. The stream interface treats all kinds of files pretty much alike—the sole exception
being the three styles of buffering that you can choose (see Section 12.20 [Stream Buffering],
page 303).
The main advantage of using the stream interface is that the set of functions for per-
forming de-
scriptor Section 12.14 [Formatted Input],
page 287) and formatted output functions (see Section 12.12 [Formatted Output], page 264)..
11.2.1 Directories Chapter 14
[File System Interface], page 369.
242 The GNU C Library Section 31.6 [Limits on File System Capacity], page 828.
In the GNU system, Section 14.5 [Symbolic Links], page 383. Section 14.5 [Symbolic Links], page 383.
12 Input/Output on Streams
This chapter describes the functions for creating streams and performing input and
output operations on them. As discussed in Chapter 11 [Input/Output Overview], page 239,
a stream is a fairly abstract, high-level concept representing a communications channel to
a file, device, or process.
12 ‘stdio.h’.
FILE objects are allocated and managed internally by the input/output library functions.
Don’t try to create your own objects of type FILE; let the library do it. Your programs
should deal only with pointers to these objects (that is, FILE * values) rather than the
objects themselves.
In the GNU system, you can specify what files or processes correspond to these streams
using the pipe and redirection facilities provided by the shell. (The primitives shells use to
implement these facilities are described in Chapter 14 [File System Interface], page 369.) Section 12.3 [Opening Streams], page 246.
The three streams stdin, stdout, and stderr are not unoriented at program start (see
Section 12.6 [Streams in Internationalized Applications], page 253).
As you can see, ‘+’ requests a stream that can do both input and output. The ISO
standard says that when using such a stream, you must call fflush (see Section 12.20
[Stream Buffering], page 303) or a file positioning function such as fseek (see Sec-
tion 12.18 [File Positioning], page 299) when switching from reading to writing or vice
versa. Otherwise, internal buffers might not be emptied properly. The GNU C library
does not have this limitation; you can do arbitrary reading and writing operations on
a stream in whatever order.
Additional characters may appear after these to specify flags for the call. Always
put the mode (‘r’, ‘w+’, etc.) first; that is the only part you are guaranteed will be
understood by all systems.
The GNU C library defines one additional character for use in opentype: the character
‘x’ insists on creating a new file—if a file filename already exists, fopen fails rather
than opening it. If you use ‘x’ you are guaranteed that you will not clobber an existing
file. This is equivalent to the O_EXCL option to the open function (see Section 13.1
[Opening and Closing Files], page 319).
The character ‘b’ in opentype has a standard meaning; it requests a binary stream
rather than a text stream. But this makes no difference in POSIX systems (including
the GNU system). If both ‘+’ and ‘b’ are specified, they can appear in either order.
See Section 12.17 [Text and Binary Streams], page 298.
If the opentype string contains the sequence ,ccs=STRING then STRING is taken
as the name of a coded character set and fopen will mark the stream as wide-oriented
which appropriate conversion functions in place to convert from and to the charac-
ter set STRING is place. Any other stream is opened initially unoriented and the
orientation is decided with the first file operation. If the first operation is a wide char-
acter.
You can have multiple streams (or file descriptors) pointing to the same file open at the
same time. If you do only input, this works straightforwardly, but you must be careful if any
output streams are included. See Section 13.5 [Dangers of Mixing Streams and Descriptors],
page 330. This is equally true whether the streams are in one program (not usual) or in
several programs (which can easily happen). It may be advantageous to use the file locking
facilities to avoid simultaneous access. See Section 13.15 [File Locks], page 362..
FILE * freopen (const char *filename, const char *opentype, FILE Function
.
freopen has traditionally been used to connect a standard stream such as stdin with
a file of your own choice. This is useful in programs in which use of a standard stream
for certain purposes is hard-coded. In the GNU C library, you can simply close the
standard streams and open new ones with fopen. But other systems lack this ability,
so using freopen is more portable.
When the sources are compiling with _FILE_OFFSET_BITS == 64 on a 32 bit machine
this function is in fact freopen64 since the LFS interface replaces transparently the
old interface.
separately. Solaris introduced a few functions to get this information from the stream
descriptor and these functions are also available in the GNU C library.
For slightly different kind of problems there are two more functions. They provide even
finer-grained information.
writes the remaining buffered output, it might get an error because the disk is full.
Even if you know the buffer is empty, errors can still occur when closing a file if you
are using NFS.
The function fclose is declared in ‘stdio.h’.
To close all streams currently available the GNU C Library provides another function.
If the main function to your program returns, or if you call the exit function (see Sec-
tion 25.6.1 [Normal Termination], page 724), all open streams are automatically closed prop-
erly. If your program terminates in any other manner, such as by calling the abort function
(see Section 25.6.4 [Aborting a Program], page 726) or from a fatal signal (see Chapter 24
[Signal Handling], page 635), open streams might not be closed properly. Buffered output
might not be flushed and files may be incomplete. For more information on buffering of
streams, see Section 12.20 [Stream Buffering], page 303.);
}
252 The GNU C Library
Chapter 12: Input/Output on Streams 253.
FSETLOCKING_INTERNAL
The stream stream will from now on use the default internal locking.
Every stream operation with exception of the _unlocked variants will
implicitly lock the stream.
FSETLOCKING_BYCALLER
After the __fsetlocking function returns the user is responsible for lock-
ing ‘stdio_ext.h’.
This function is especially useful when program code has to be used which is written
without knowledge about the _unlocked functions (or if the programmer was too lazy to
use them). re-
striction: a stream can be used either for wide operations or for normal operations. Once
it is decided there is no way back. Only a call to freopen or freopen64 can reset the
orientation. The orientation can be decided in three ways:
• If any of the normal character functions is used (this includes the fread and fwrite
functions) the stream is marked as not wide oriented.
• If any of the wide character functions is used the stream is marked as wide oriented.
• The fwide function.
the result of getc and friends, and check for EOF after the call; once you’ve verified that
the result is not EOF, you can be sure that it will fit in a ‘char’ variable without loss of
information.
ssize_t getdelim (char **lineptr, size_t *n, int delimiter, FILE Function
);
}
Chapter 12: Input/Output on Streams 261
char * fgets unlocked (char *s, int count, FILE *stream) Function
The fgets_unlocked function is equivalent to the fgets function except that it does
not implicitly lock the stream.
This function is a GNU extension.
12.10 Unreading, fseeko or rewind;
see Section 12.18 [File Positioning], page 299) is called, any pending pushed-back
characters are discarded.
Unreading a character on a stream that is at end of file clears the end-of-file indicator
for the stream, because it makes the character of input available. After you read that
character, trying to read again will encounter end of file.
Here is an example showing the use of getc and ungetc to skip over whitespace charac-
ters. When this function reaches a non-whitespace character, it unreads that character to
be seen again on the next read operation on the stream.
#include <stdio.h>
#include <ctype.h>
void
skip_whitespace (FILE *stream)
{
int c;
do
/* No need to check for EOF because it is not
isspace, and ungetc ignores EOF. */
c = getc (stream);
while (isspace (c));
ungetc (c, stream);
}
memory—not just character or string objects—can be written to a binary file, and mean-
ing ‘stdio.h’.
size_t fread (void *data, size_t size, size_t count, FILE *stream) Function.
size_t fread unlocked (void *data, size_t size, size_t count, Function
FILE *stream)
The fread_unlocked function is equivalent to the fread function except that it does
not implicitly lock the stream.
This function is a GNU extension.
size_t fwrite (const void *data, size_t size, size_t count, FILE Function
*stream)
This function writes up to count objects of size size from the array data, to the
stream stream. The return value is normally count, if the call succeeds. Any other
value indicates some sort of error, such as running out of space.
size_t fwrite unlocked (const void *data, size_t size, size_t Function
count, FILE *stream)
The fwrite_unlocked function is equivalent to the fwrite function except that it
does not implicitly lock the stream.
This function is a GNU extension.
NL ARGMAX Macro
The value of ARGMAX is the maximum value allowed for the specification of an
positional parameter in a printf call. The actual value in effect at runtime
can be retrieved by using sysconf using the _SC_NL_ARGMAX parameter see Sec-
tion 31.4.1 [Definition of sysconf], page 818..
• Zero or more flag characters that modify the normal behavior of the conversion speci-
fication.
• An optional decimal integer specifying the minimum field width. If the normal conver-
sion produces fewer characters than this, the field is padded with spaces to the specified
width. This is a minimum value; if the normal conversion produces more characters
than this, the field is not truncated. Normally, the output is right-justified within the
field.
You can also specify a field width of ‘*’. This means that the next argument in the
argument list (before the actual value to be printed) is used as the field width. The
value must be an int. If the value is negative, this means to set the ‘-’ flag (see below)
and to use the absolute value as the field width.
• An optional precision to specify the number of digits to be written for the numeric
conversions. If the precision is specified, it consists of a period (‘.’) followed optionally
by a decimal integer (which defaults to zero if omitted).
You can also specify a precision of ‘*’. This means that the next argument in the
argument list (before the actual value to be printed) is used as the precision. The value
must be an int, and is ignored if it is negative. If you specify ‘*’ for both the field
width and precision, the field width argument precedes the precision argument. Other
C library versions may not recognize this syntax.
Chapter 12: Input/Output on Streams 267
• An optional type modifier character, which is used to specify the data type of the
corresponding argument if it differs from the default type. (For example, the integer
conversions assume a type of int, but you can specify ‘h’, ‘l’, or ‘L’ for other integer
types.)
• A character that specifies the conversion to be applied.
The exact options that are permitted and how they are interpreted vary between the
different conversion specifiers. See the descriptions of the individual conversions for infor-
mation section “Declaring Attributes of Functions”
in Using GNU CC, for more information.
‘%C’ This is an alias for ‘%lc’ which is supported for compatibility with the Unix
standard.
‘%s’ Print a string. See Section 12.12.6 [Other Output Conversions], page 272.
‘%S’ This is an alias for ‘%ls’ which is supported for compatibility with the Unix
standard.
‘%p’ Print the value of a pointer. See Section 12.12.6 [Other Output Conversions],
page 272.
‘%n’ Get the number of characters printed so far. See Section 12.12.6 [Other Output
Conversions], page 272. Note that this conversion specification never produces
any output.
‘%m’ Print the string corresponding to the value of errno. (This is a GNU extension.)
See Section 12.12.6 [Other Output Conversions], page 272.
‘%%’ Print a literal ‘%’ character. See Section 12.12.6 [Other Output Conversions],
page 272..
‘’’ Separate the digits into groups as specified by the locale specified for the LC_
NUMERIC category; see Section 7.6.1.1 [Generic Numeric Formatting Parame-
ters], page 168. This flag is a GNU extension.
‘0’:
‘hh’ Specifies that the argument is a signed char or unsigned char, as appropri-
ate. A char argument is converted to an int or unsigned int by the default
argument promotions anyway, but the ‘h’ modifier says to convert it back to a
char again.
This modifier was introduced in ISO C99.
‘h’ Specifies that the argument is a short int or unsigned short int, as appro-
priate. A short argument is converted to an int or unsigned int by the
default argument promotions anyway, but the ‘h’ modifier says to convert it
back to a short again.
‘j’ Specifies that the argument is a intmax_t or uintmax_t, as appropriate.
This modifier was introduced in ISO C99.
‘l’ Specifies that the argument is a long int or unsigned long int, as appropri-
ate..
‘L’
‘ll’
‘q’.
‘t’ Specifies that the argument is a ptrdiff_t.
This modifier was introduced in ISO C99.
‘z’
‘Z’ Specifies that the argument is a size_t.
270 The GNU C Library:
‘-’ Left-justify the result in the field. Normally the result is right-justified.
‘+’ Always include a plus or minus sign in the result.
‘’ If the result doesn’t start with a plus or minus sign, prefix it with a space
instead. Since the ‘+’ flag ensures that the result includes a sign, this flag is
ignored if you supply both of them.
‘#’ Specifies that the result should always include a decimal point, even if no digits
follow it. For the ‘%g’ and ‘%G’ conversions, this also forces trailing zeros after
the decimal point to be left in place where they would otherwise be removed.
‘’’ Separate the digits of the integer part of the result into groups as specified by
the locale specified for the LC_NUMERIC category; see Section 7.6.1.1 [Generic
Numeric Formatting Parameters], page 168. This flag is a GNU extension.
‘0’:
‘L’ An uppercase ‘L’ specifies that the argument is a|
272 The GNU C Library
In the GNU system, argu-
ment, and no flags, field width, precision, or type modifiers are permitted.
int swprintf (wchar_t *s, size_t size, const wchar_t *template, Function
...)
This is like generated for the given input, excluding
the trailing null. If not all output fits into the provided buffer a negative value is
returned. You should try again with a bigger output string. Note: this is different
from how snprintf handles this situation..
int snprintf (char *s, size_t size, const char *template, ...) Function:
Chapter 12: Input/Output on Streams 275;
}printf does not
destroy the argument list of your function, merely the particular pointer that you passed
to it.
GNU C does not have such restrictions. You can safely continue to fetch arguments from
a va_list pointer after passing it to vprintf, and va_end is a no-op. (Note, however, that
subsequent va_arg calls will fetch the same arguments which vprintf previously used.)
Prototypes for these functions are declared in ‘stdio.h’.
int vfprintf (FILE *stream, const char *template, va_list ap) Function
This is the equivalent of fprintf with the variable argument list specified directly as
for vprintf.
int vfwprintf (FILE *stream, const wchar_t *template, va_list ap) Function
This is the equivalent of fwprintf with the variable argument list specified directly
as for vwprintf.
int vsprintf (char *s, const char *template, va_list ap) Function
This is the equivalent of sprintf with the variable argument list specified directly as
for vprintf.
int vswprintf (wchar_t *s, size_t size, const wchar_t *template, Function
va_list ap)
This is the equivalent of swprintf with the variable argument list specified directly
as for vwprintf.
int vsnprintf (char *s, size_t size, const char *template, va_list Function
ap)
This is the equivalent of snprintf with the variable argument list specified directly
as for vprintf.
int vasprintf (char **ptr, const char *template, va_list ap) Function
The vasprintf function is the equivalent of asprintf with the variable argument
list specified directly as for vprintf.
278 The GNU C Library
void
eprintf (const char *template, ...)
{
va_list ap;
extern char *program_invocation_short_name;
size_t parse printf format (const char *template, size_t n, int Function
*argtypes)
This function returns information about the number and types of arguments expected
by the printf template string template. The information is stored in the array
Chapter 12: Input/Output on Streams 279
argtypes; each element of this array describes one argument. This information is
encoded using the various ‘PA_’ macros, listed below.
The arguments, allocate a
bigger array and call parse_printf_format again.
The argument types are encoded as a combination of a basic type and modifier flag bits.
Here are symbolic constants that represent the basic types; they stand for integer values.
PA_CHAR This specifies that the base type is int, cast to char.
PA_STRING
This specifies that the base type is char *, a null-terminated string.
PA_POINTER
This specifies that the base type is void *, an arbitrary pointer.
PA_DOUBLE
This specifies that the base type is double.
PA_LAST You can define additional base types for your own programs as offsets from
PA_LAST. For example, if you have data types ‘foo’ and ‘bar’ with their own
specialized printf conversions, you could define encodings for these types as:
#define PA_FOO PA_LAST
#define PA_BAR (PA_LAST + 1)
Here are the flag bits that modify a basic type. They are combined with the code for
the basic type using inclusive-or.
PA_FLAG_PTR
If this bit is set, it indicates that the encoded type is a pointer to the base
type, rather than an immediate value. For example, ‘PA_INT|PA_FLAG_PTR’
represents the type ‘int *’.
PA_FLAG_SHORT
If this bit is set, it indicates that the base type is modified with short. (This
corresponds to the ‘h’ type modifier.)
280 The GNU C Library
PA_FLAG_LONG
If this bit is set, it indicates that the base type is modified with long. (This
corresponds to the ‘l’ type modifier.)
PA_FLAG_LONG_LONG
If this bit is set, it indicates that the base type is modified with long long.
(This corresponds to the ‘L’ type modifier.)
PA_FLAG_LONG_DOUBLE
This is a synonym for PA_FLAG_LONG_LONG, used by convention with a base
type of PA_DOUBLE to indicate a type of long double.
int
validate_args (char *format, int nargs, OBJECT *args)
{
int *argtypes;
int nwanted; | https://www.scribd.com/document/42995944/Glibc-Manual-2-2-5 | CC-MAIN-2019-35 | refinedweb | 13,500 | 65.52 |
Sync services is evolved pretty rapidly and taken all my time as a result. But in the last few days I got a chance to update the demos to work against the beta 2 runtime. The changes are not many but are important to understand. Here is a rundown of what I had to change in the code:
Namespace Changes:
- Added namespace (Microsoft.Synchronization) to host the most basic type that are common to upper layers
- Change the name of Microsoft.Synchronization.Data.Client to Microsoft.Synchronization.Data.SqlServerCe
SyncAgent Changes:
- Added Configuration property to host SyncTable and SyncParameter collections
- Rename ClientSyncProvider and ServerSyncProvider properties to LocalProvider and RemoteProvider, respectively.
ServerProvider Changes:
- The sync commands on the provider interface (SelectNewAnchorCommand and SelectClientIdCommand) require output parameters instead of return values. This is need for extensibility as you will see with batching.
- Added ServerSyncProviderProxy type to save you from writing code on the client for n-tier scenario. The proxy will accept a web reference and will take care of loading the correct methods for you.
Well, that’s really not everything. There are few tweaks and bug fixes here and there. The best thing is to give it a try and share your experience with me and others through the forums. You are really helping us make the runtime rocks!. | https://blogs.msdn.microsoft.com/synchronizer/2007/08/17/updates-updates-updates/ | CC-MAIN-2017-09 | refinedweb | 219 | 65.73 |
pandas2ri.py2ri use way too much RAM
I'm trying to setup rpy2 on my computer.
My config:
- Windows 10, 16Go RAM
- Anaconda: 4.4.0 with python 3.6
- An independant R version: 3.4.1
Python packages:
- rpy2: 2.8.6
- pandas: 0.20.3
First thing i do is to load my data set (adult from UCI repo):
import pandas as pd original_data = pd.read_csv( "", names=[ "Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Martial Status", "Occupation", "Relationship", "Race", "Sex", "Capital Gain", "Capital Loss", "Hours per week", "Country", "Target"], sep=r'\s*,\s*', engine='python', na_values="?") original_data.head() import sys sys.getsizeof(original_data) >> 34605590
Now that i have imported my data set that is not that big... Now i start R interface as shown in rpy2 documentation
from rpy2.robjects import r, pandas2ri pandas2ri.activate()
Control R memory limit (result in Mo)
import rpy2.robjects as robjects r('memory.limit()') >> array([ 2047.])
2Go i should be safe... I try to pass my data set to R.
r_dataframe = pandas2ri.py2ri(original_data) --------------------------------------------------------------------------- RRuntimeError Traceback (most recent call last) <ipython-input-17-5921ca574db2> in <module>() ----> 1 r_dataframe = pandas2ri.py2ri(original_data) C:\Users\touleem\AppData\Local\Continuum\Anaconda3\lib\functools.py in wrapper(*args, **kw) 801 802 def wrapper(*args, **kw): --> 803 return dispatch(args[0].__class__)(*args, **kw) 804 805 registry[object] = func C:\Users\touleem\AppData\Local\Continuum\Anaconda3\lib\site-packages\rpy2\robjects\pandas2ri.py in py2ri_pandasdataframe(obj) 58 od[name] = StrVector(values) 59 ---> 60 return DataFrame(od) 61 62 @py2ri.register(PandasIndex) C:\Users\touleem\AppData\Local\Continuum\Anaconda3\lib\site-packages\rpy2\robjects\vectors.py in __init__(self, obj) 956 " of type VECSXP") 957 --> 958 df = baseenv_ri.get("data.frame").rcall(tuple(kv), globalenv_ri) 959 super(DataFrame, self).__init__(df) 960 RRuntimeError: Error: cannot allocate vector of size 254 Kb
Please note that before failling, my RAM usage exploded (from 5Go to 16Go :O).
Why does it take so much RAM?
PS: i found a way around using numpy2ri.py2ri and feeding after complementary informations to R to re-build a data.frame.
Thanks for your help
That seems odd. May be R is making unnecessary copies when running
df = baseenv_ri.get("data.frame").rcall(tuple(kv), globalenv_ri). Did you try using R to read the CSV ?
Hello Laurent,
I was able to do utils.read_csv ok.
I actually tried to find this problem, and reduced it to a specific scenario in my case:
I have a two column pandas dataframe. first column is a dtype int64, second column is a dtype object that contains str. That str column has an entry that is missing, which results in it being a float of type NaN (while all the other entries are python str filled in properly).
Feeding this dataframe into pandas2ri.py2ri results in the memory leak and crash as described in this issue. This only happens with some minimum number of rows, not sure the number, but if it is a small enough number of rows it is ok.
I fixed my problem by converting NaN in my dtype object column with dataframe.fillna to be blank string
Thanks. If you have a self-sufficient example to reproduce the issue, this is always helpful.
In the meantime, here are initial notes.
In my experience, arrays of strings in
numpy(
pandasis relying on
numpyfor arrays) can be of
dtype
"<U[0-9]+", or
object, or
"S"( ): :
Having a
Nonein the sequence changes the default type to
object:
Now how is
rpy2handling this ?
Not perfect, as one might expect an
NAinstead of the string
"None", but no float in sight.
Or may be you meat that missing values your Python/
pandasvector are encoded as a float (value
NaN) ?
Still no float.
Now trying the numpy converter:
The error is currently intended as the
dtypeof the
numpyarray is
objectand I was unsure about what the conversion of a Python
Noneto R should be (as R's NA values do have a type). Now that there is more empirical experience with the Python-R bridge , it would probably make sense to convert to the type of the symbol NA (that is boolean). Still no float though.
From this is I am only seeing to possible path to the behavior experienced:
Noneitem is raising an exception while something at the C level is not handled properly (could be with
rpy2code, or with the Cython part of
pandas) .
LANG=en_US.UTF-8).
Here is my minimal reproduction code:
I tried with len of 7500, and it still crashes. I notice though that it returns from the function call, but in another core on my machine is spiked and eats the memory. So, I suspect it is some internals code that is doing garbage cleanup, due to the delayed nature of it occuring after it returns the result
I only had a little bit of time to investigate this further and is seems that:
This seems to be caused by hard-to-reconcile differences between and R vectors and Python arrays.
Without this
rscis a Python
list, which is something converted to an R
list, and R's constructor
data.frame()is considering a
listto mean columns, as shown below:
Now, when combining this with your column
numthis will result in an attempt to create a 100,000 x 100,000 data table (see example below), which might exceed the RAM of your machine (if my back-of-envelope calculation is correct, on a 64 bit architecture your example would require at least 120 GB of RAM).
I am unsure about how this would be best handled on the
rpy2side as mapping Python
listobjects to R
listobjects seems the most natural and the R constructor
data.frame()does accept lists (although the combination leads to what might be an unintuitive behavior).
Revision 194937839721 (branch
default, which means future release 3.0.0 as the time of writing), is proposing a fix: pandas
Seriesof dtype
"O"are converted to R arrays of strings while issuing a warning about it.
How do people feel about it, especially the part about warnings ? On one hand I like the warnings (as a silent conversion might create other issues for users wondering while their arrays of Python objects are turned into strings), but I am unsure about whether the warnings might turn to be obnoxious in practice.
As no concern about the proposed fixed was expressed I have already backported it to rpy2-2.9.x (revision 50f81309de07). If no problem is discovered during the next week or so, this will be included in rpy2-2.9.2 (with or without warnings depending on the feedback).
Excellent job! Looks good, warnings or not.
Thank you Ben for reporting it (I was very mad as my computed kept crashing while converting a simple csv-created table to R and it was hard to debug) and thank you Laurent for the quick fix! I can confirm that it works in a simple case, though the problem persist if
pandas2ri.activate()is invoked.
This one works (warnings are emitted, data frame is created as expected):
Output:
But this one does not:
Output:
PS. I do not like the warnings, these are very mhm... verbose. Could we have a specific subclass (e.g. RPy2Warning or something) so one can easily suppress these warnings? Or use logging.warning instead of warnings.warn?
I think that the second call might have to be:
I would need a better case against warnings to consider removing them. Python/numpy/pandas columns/arrays where items can be arbitrary "objects, which is not feasible in R data frames.
The current practical approach is to cast such Python arrays to string arrays in R, as arrays of strings are often arrays of "objects" as far as pandas/numpy are concerned. However, doing so silently can cause rather hard-to-find issues. The trade-off here is to be provide immediate convenience at the cost of verbosity. The warnings can be silenced either by writing one's own additional conversion rules (see example in the doc) or by writing one's own "pre-conversion" function. The suggestion to have typed warnings to facilitate filtering is quite reasonable though (now tracked as issue #445).
Otherwise the conversion systems has a number of design flaws and should be rewritten (...some day - this has been mentioned for some time now, but the time / resources to do it never materialized). In the meantime, using
localconverter()(see it demonstrated in the documentation here can provide quite a bit of flexibility for customization while retaining control (custom conversion limited to a block).
PS: Your code does not appear to work the way advertised here: the object
df_pandashas shape
(2, 1)here. | https://bitbucket.org/rpy2/rpy2/issues/421/pandas2ripy2ri-use-way-too-much-ram | CC-MAIN-2018-43 | refinedweb | 1,454 | 64.2 |
On Sep 2, 2008, at 8:34 AM, Ramin wrote: > instance Monad Query where > return (initState, someRecord) = Query (initState, someRecord) > {- code for (>>=) -} > GHC gives an error, "Expected kind `* -> *', but `Scanlist_ctrl' has > kind `* -> * -> *' ". I believe you understand the problem with the above code, judging from your attempt to fix it below. > If I try this: > instance Monad (Query state) where > return (initState, someRecord) = Query (initState, someRecord) > {- code for (>>=) -} > GHC give an error, "Occurs check: cannot construct the infinite > type: a = (s, a) when trying to generalise the type inferred for > `return' ". The problem is your type for the return function. The way you have written it, it would be `return :: (state, rec) -> Query state rec`. Perhaps it would be easier to see the problem if we defined `type M = Query MyState`. Then you have `return :: (MyState, rec) -> M rec`. Compare this to the type it must be unified with: `return :: a -> m a`. The two 'a's don't match! The type you are after is actually `return :: rec -> M rec` or `return :: rec -> Query state rec`. I hope this helps lead you in the right direction. I'm not giving you the solution because it sounds like you want to solve this for yourself and learn from it. - Jake McArthur | http://www.haskell.org/pipermail/haskell-cafe/2008-September/046950.html | CC-MAIN-2014-41 | refinedweb | 210 | 79.4 |
.
This topic contains the following sections..
A name is specified in XAML by setting a value for one of two nearly equivalent attributes, Name or x:Name. (For details on the distinction between Name and x:Name, see Remarks in Name.) If you attempt to define the same name twice within any.
Names in XAML namescopes enable scripting code to reference the objects that were initially defined in XAML. The internal result of parsing XAML is to create a set of objects that retain some or all of the relationships these objects had in the XAML syntax. These relationships are maintained as specific object properties of the created objects, or are exposed to utility methods in the Silverlight programming model.
The most typical use of a name in a XAML namescope for Silverlight version 2 is as a direct reference to an object instance, which is enabled by the markup compile pass combined with a generated InitializeComponent method in the partial class templates.
You can also use the utility method FindName yourself at run time to return a reference to objects that were defined with a name in the XAML markup. CLR Silverlight managed project after compilation. You can also see the fields and InitializeComponent method as members of your resulting assemblies if you reflect over them or otherwise examine their MSIL contents.
XAML can be also be used as the string input for the XamlReader..::.Load method, which acts analogously to the initial XAML source parse operation. Load creates a new disconnected tree of Silverlight (such as specifying a new ImageBrush for a Fill property value). by current Application. You can get the visual root (the root XAML element, also known as the content source) from the current application in one line of code with the call Application.Current.RootVisual.(String).
Templates in Silverlight.. Also, there are conventions that control authors should follow in order to name parts and template parts and report these as templatable .
Using the JavaScript API, there are some technical differences in how XAML namescopes are created and used. Also, an XML namespace is not required in the root of the XAML; the Silverlight client XML namespace is inferred.
The primary technical difference at a programming model level is that a name in the XAML namescope can no longer be treated directly as an object in the JavaScript scripting calls. The name must be considered to be conceptually just a string at this stage. To generate a true JavaScript object reference to an object defined in the XAML, you must pass the Name string from the original XAML to the FindName utility method called from JavaScript.
FindName works from almost any scope and will search the entire run-time set of objects; you are not limited to searching in particular parent-child axes. However, there are ways to define multiple XAML namescopes within a Silverlight-based application using the JavaScript API.
In the JavaScript API, there is no true constructor syntax available in scripting for Silverlight objects. Instead, you define objects in XAML fragments, which are supplied as the input parameter for the CreateFromXaml method. The object elements that are defined as part of this XAML fragment can also have values defined for Name or x:Name.
CreateFromXaml has an important XAML namescope behavior difference as compared to the managed XamlReader..::.Load. Using the default (one-parameter) signature of CreateFromXaml, a preliminary XAML namescope is created, based on the root of the provided XAML. This preliminary XAML namescope evaluates any defined names in the provided XAML for uniqueness. If names in the provided XAML are not internally unique at this point, CreateFromXaml throws an error. If names in the provided XAML collide with names that are already in the primary Silverlight namescope, no errors occur immediately. When the CreateFromXaml method is called, the created object tree that is returned is disconnected. As with the managed XamlReader..::.Load usage,. This is because rather than maintaining the XAML namescopes discretely, by default the behavior in the JavaScript API is to attempt to merge the XAML namescopes whenever two formerly separate object trees are also creates a preliminary XAML namescope that evaluates any defined names in the provided XAML for uniqueness. The difference in the behavior for the createNameScope=true case is that the disconnected object tree is now flagged to not attempt to merge its XAML namescope with the main application XAML namescope when the object trees are joined. Generally, this behavior resembles the discrete XAML namescope considerations mentioned in the previous sections for the managed API. To get to objects that are defined in a different XAML namescope, you can use several techniques in the JavaScript API (these techniques parallel managed equivalent techniques):
Walk the entire tree in discrete steps with GetParent and/or collection properties.
If you are calling from a discrete XAML namescope and want the main application namescope, you main application XAML namescope and want an object within a discrete XAML namescope, the best thing to do is to plan ahead in your code and retain a reference to the object that was returned by CreateFromXaml and then added to the object tree. This object is now a valid object for calling FindName within the created discrete XAML namescope.
The FindName method defined on the Silverlight plug-in object does not entirely work around the disconnected XAML namescope problems of CreateFromXaml described in this topic; its XAML namescope is always the root XAML namescope.
Inline XAML is a technique where XAML can be defined in the HTML that hosts the Silverlight plug-in rather than in a separate XAML file. Because of its position in the browser DOM, inline XAML constitutes a deliberate choice to use the JavaScript API, even if a managed-code capable version is specified during Silverlight initialization. (Without a discrete XAML file that is known to a managed project file, which specifies x:Class at its root, there is no way for an inline XAML solution to know how to connect the code-behind and XAML, and no specific instruction to compile and connect the managed code.) XAML namescopes in the JavaScript API act identically, whether XAML content is defined as inline XAML or as a file that is entirely XAML. The XAML namescope is defined on the basis of the root of the XAML portion, as if it were the root of a XAML file. | http://msdn.microsoft.com/en-us/library/cc189026(VS.95).aspx | crawl-002 | refinedweb | 1,067 | 57.3 |
In this article, I am going to explore the ViewState. ViewState is one thing that I always like to use. It makes life easier. As we all know, Viewstate is one way of maintaining the state in web applications.
As we know, a web page is created every time when a page is posted to the server. This means the values that the user entered in the webpage are going to vanish after the postback or submit. To get rid of this problem, the ASP.NET framework provides a way to maintain these values by virtue of ViewState. When we enable the viewstate of any control, the value is going to be maintained on every postback or server roundtrip.
But how are these values maintained? It doesn't come free. ViewState uses a hidden variable that resides on the page to store control values. This means that if a page has lots of controls with viewstate enabled, the page size would become heavy, in several kilobytes; i.e., it will take longer to download the page. And also, on every postback, all the data is posted to the server, increasing the network traffic as well.
In new era applications, we use lots of heavy controls like gridviews etc., on our pages which makes the page size exponentially heavy. It is always recommended to use View State judiciously, and some programmers try to avoid using it because of the performance overhead.
Here, I am going to discuss how we can reduce the performance overhead caused by View State.
As we all know, the Web is a stateless medium; i.e., states are not maintained between requests by default. A web page is created every time a page is posted to the server. The ASP.NET framework provides us several ways to maintain the state. These are:
Here, we going to discuss one of the client state management techniques: ViewState. We can persist data during postbacks with the help of ViewState. Why do we call ViewState as client-side? Because the data is stored on the page, and if we move from one page to another, the data would be lost.
So where does the ViewState gets stored? The ViewState is stored on the page in a hidden field named __viewstate.
__viewstate
I will not discuss the basics of ViewState in detail; for details, have a look at a very good article here on CodeProject: Beginner's Guide To View State [^].
In new era applications, we generally have lots of rich and heavy controls on our page, and also provide lots of functionality on the page with the help of latest technologies like AJAX etc. To accomplish our tasks, we use ViewState a lot, but as we know, it doesn't come free - it has a performance overhead.
The ViewState is stored on the page in the form of a hidden variable. Its always advised to use the ViewState as little as possible. We also have other ways to reduce the performance overhead. Here, I am going to discuss the following ways:
We will see all these with an example.
First, let's see a normal scenario!!
Here, I have a webpage with a GridView which shows a lot of data (16 rows). I have enabled ViewState so that I don't need to repopulate the whole grid on every postback. But what am I paying for that? Let's see the example.
GridView
First, let's see the page size on postback with enableviewstate=true, which is true by default.
enableviewstate=true
Here we can see the request size is 4.9 K. Also, we can see the ViewState of the page by viewing the View Source. Let's see it:
You can see the __viewstate variable on the screen and the amount of data it has. If we have several controls like this, imagine your page size will be.
But the benefit is, we dont need to hit the database to fetch the data and bind it on every postback.
Now, let's see what happens when we disable the viewstate; i.e., set enableviewstate=false.
enableviewstate=false
and there is no data on the page because I loaded the data for the first time on page load. On subsequent postbacks, I don't bind it. That's why when I click on PostBack, the data gets lost, but the page size is here is just 773 B. Also, let's examine the View Source of the page:
We see the __viewstate variable, but with very less data. Although we have disabled viewstate, ASP.NET uses viewstate for some data to maintain the page state, but this is very little and not going to cause any performance overhead.
Now we can imagine the overhead of viewstate. You can check your application (if you are using viewstate) and see the page size is increased by the viewstate. Now we will discuss one way to reduce the page size.
We can compress the viewstate to reduce the page size. By compressing the viewstate, we can reduce the viewstate size by more than 30%. But here, the question arises, when do we compress and decompress the viewstate? For that, we have to dig into the page life cycle. As we know, viewstate is used by ASP.NET to populate controls. So we should compress the viewstate after all the changes are done in the viewstate, and save it after compression, and we have to decompress the viewstate just before it is used by ASP.NET to load all the controls from the viewstate. Let's jump to the page life cycle and see how we can fit our requirement.
As we can see, there are two methods. SaveViewState is responsible for collecting the view state information for all of the controls in the control hierarchy and persist it in the __viewstate hidden form field. The view state is serialized to the hidden form field in the SavePageStateToPersistenceMedium() method during the save view state stage, and is deserialized by the Page class' LoadPageStateFromPersistenceMedium() method in the load view state stage. In these methods, we can compress and decompress the viewstate. Let's take a pictorial view.
SaveViewState
SavePageStateToPersistenceMedium()
Page
LoadPageStateFromPersistenceMedium()
here, we need to override the methods SavePageStateToPersistenceMedium() for compressing the viewstate, and SavePageStateToPersistenceMedium() for decompressing the viewstate. I am going to use GZip for compression (provided by .NET). And this is available in the namespace System.IO.Compression. Let's jump to the code.
System.IO.Compression
I have a class CompressViewState that is inherited from System.Web.UI.Page. I have overridden the above two methods. I also made two private methods: for compressing a byte stream, and for decompressing it. Let's have a look at the compressing one:
CompressViewState
System.Web.UI.Page
/// This Method takes the byte stream as parameter
/// and return a compressed bytestream.
/// For compression it uses GZipStream
private byte[] Compress(byte[] b)
{
MemoryStream ms = new MemoryStream();
GZipStream zs = new GZipStream(ms, CompressionMode.Compress, true);
zs.Write(b, 0, b.Length);
zs.Close();
return ms.ToArray();
}
As you can see, the Compress method takes a byte array as a parameter and returns compressed data in byte array form. In the method, I have used GZipStream for compressing the data. Now, let's look at the decompressing one:
Compress
GZipStream
/// This method takes the compressed byte stream as parameter
/// and return a decompressed bytestream.
private byte[] Decompress(byte[] b)
{
MemoryStream ms = new MemoryStream();
GZipStream zs = new GZipStream(new MemoryStream(b),
CompressionMode.Decompress, true);
byte[] buffer = new byte[4096];
int size;
while (true)
{
size = zs.Read(buffer, 0, buffer.Length);
if (size > 0)
ms.Write(buffer, 0, size);
else break;
}
zs.Close();
return ms.ToArray();
}
As you can see, this method takes compressed data as a parameter and returns decompressed data.
Let's now look at the overridden methods. First, LoadPageStateFromPersistenceMedium:
LoadPageStateFromPersistenceMedium
protected override object LoadPageStateFromPersistenceMedium()
{
System.Web.UI.PageStatePersister pageStatePersister1 = this.PageStatePersister;
pageStatePersister1.Load();
String vState = pageStatePersister1.ViewState.ToString();
byte[] pBytes = System.Convert.FromBase64String(vState);
pBytes = Decompress(pBytes);
LosFormatter mFormat = new LosFormatter();
Object ViewState = mFormat.Deserialize(System.Convert.ToBase64String(pBytes));
return new Pair(pageStatePersister1.ControlState, ViewState);
}
As you can see, in this method, we have taken the viewstate in a string variable from PageStatePersister, decompressed it (as it is compressed in the Save viewstate stage; we'll discuss after this), and deserialized it in an Object, and returned the new Pair.
PageStatePersister
Object
new Pair
Now moving to SavePageStateToPersistenceMedium.
SavePageStateToPersistenceMedium
protected override void SavePageStateToPersistenceMedium(Object pViewState)
{
Pair pair1;
System.Web.UI.PageStatePersister pageStatePersister1 = this.PageStatePersister;
Object ViewState;
if (pViewState is Pair)
{
pair1 = ((Pair)pViewState);
pageStatePersister1.ControlState = pair1.First;
ViewState = pair1.Second;
}
else
{
ViewState = pViewState;
}
LosFormatter mFormat = new LosFormatter();
StringWriter mWriter = new StringWriter();
mFormat.Serialize(mWriter, ViewState);
String mViewStateStr = mWriter.ToString();
byte[] pBytes = System.Convert.FromBase64String(mViewStateStr);
pBytes = Compress(pBytes);
String vStateStr = System.Convert.ToBase64String(pBytes);
pageStatePersister1.ViewState = vStateStr;
pageStatePersister1.Save();
}
Here, we read the viewstate information from pageStatePersister and then serialize it, and finally, after compressing it, save it in a page persister.
pageStatePersister
This is all about compressing and decompressing the viewstate. I have put the class in the App_Code folder so that it is available to the entire application. And wherever we need to use it, we should inherit it from CompressViewState. Thus, we can reduce the viewstate size significantly and make our applications perform much better .
Now, let's move to another methodology: saving the viewstate into the file system.
This methodology should be used only if you have enough data in the viewstate; else, you will need more time in compressing and decompressing the viewstate between requests without much gain.
I am going to explain two approaches here:
This way, we can reduce the viewstate size to zero. There is no performance overhead on a round trip to the server. Still, there is a bit of overhead in reading and saving the viewstate to the file system. We'll discuss this later.
Now in this methodology, we need to use two methods:
Now we will see how to save the viewstate to the file system. But how will we save the ViewState? What would be the name of the file? Does it have to be unique for every user? Do we need multiple files for the same user session.
There are a lot of questions. One answer, we don't need multiple files for the same user at the same time because ViewState is confined to every page, and when the user moves to another page, the previous page's ViewState gets ripped off. At a time, we just need to save the current page's ViewState.
As session ID is unique for every user, why should we not use the session ID in the file to make it unique, and whenever we'll need to get the file, we'll just pick it by using the session ID.
Let's jump to the code.
Here, I have a file name PersistViewStateToFileSystem that is inherited from System.web.UI.Page and resides in the App_code folder so that it is available through the whole application. Here, we have a property which returns the full path of the ViewState file. I used the session ID as a name of the file, and .vs as the extension. We also have a property that returns the path of the ViewState file:
PersistViewStateToFileSystem
System.web.UI.Page
public string ViewStateFilePath
{
get
{
if (Session["viewstateFilPath"] == null)
{
string folderName = Path.Combine(Request.PhysicalApplicationPath,
"PersistedViewState");
string fileName = Session.SessionID + "-" +
Path.GetFileNameWithoutExtension(Request.Path).Replace("/", "-") + ".vs";
string filepath = Path.Combine(folderName, fileName);
Session["viewstateFilPath"] = filepath;
}
return Session["viewstateFilPath"].ToString();
}
}
Again, here we have implemented both the functions.
LoadPageStateFromPersistenceMedium loads the file from the ViewState file, if it exists. Let's have a look at the code for LoadPageStateFromPersistenceMedium:);
}
}
and SavePageStateToPersistenceMedium saves the file in the file system instead of pagepersister. Moving to the code for LoadPageStateFromPersistenceMedium:
pagepersister
protected override void SavePageStateToPersistenceMedium(object state)
{
LosFormatter los = new LosFormatter();
StringWriter sw = new StringWriter();
los.Serialize(sw, state);
StreamWriter w = File.CreateText(ViewStateFilePath);
w.Write(sw.ToString());
w.Close();
sw.Close();
}
We need to inherit our page from the class PersistViewStateToFileSystem to use it.
What is the best time to remove the ViewState file from the server? Our requirement is to remove the file at session end. So we can use the Application level Session_End (that is defined in the Global.asax file) to remove the file from the web server. I save the ViewState file path in the session, and access the file path from there. Let's see the code for Session_End.
Session_End
void Session_End(object sender, EventArgs e)
{
//Deleting the viewstate file
string filePath = Session["viewstateFilPath"] as string;
if (!string.IsNullOrEmpty(filePath) && File.Exists(filePath))
{
File.Delete(filePath);
}
}
You can get the full sample application in the attachment.
Now, let's move to our example. Let's see the request size.
It's just 2.5 K. We can see that our page size has reduced dramatically. Let's now see the View Source of the page:
Cool! There is no data in the ViewState variable.
InProc
First, I would say thanks to all who shared their valuable feedback. I am adding a section because of them. As the session ID approach has many limitations as mentioned in the Points to Remember section, I have added the following new approach.
We can use a GUID instead of session ID to uniquely identify the ViewState file. And to get the GUID, we will use a hidden field to store the GUID so that we can get the file name as and when we require. So now here, we only need to change the View State file path property, and need to have a hidden field on every page. The hidden field on the page can be as:
<asp:hiddenfield
So actually, in my demo application, I have added a key isSessionId in the app.config section of web.config; if it set true, then we are using the Session ID approach, else the hidden field approach. Let's look at the property code:
isSessionId
app.config
public string ViewStateFilePath
{
get
{
bool isSessionId = Convert.ToBoolean(
System.Configuration.ConfigurationManager.AppSettings["isSessionId"]);
string folderName = Path.Combine(Request.PhysicalApplicationPath,
"PersistedViewState");
string fileName = string.Empty;
string filepath = string.Empty;
if (!isSessionId)
{
HiddenField hfVSFileName = null;
string VSFileName = "";
// Get the HiddenField Key from the page
hfVSFileName = FindControl(this, "hfVSFileName") as HiddenField;
// Get the HiddenField value from the page
string hfVSFileNameVal = GetValue(hfVSFileName.UniqueID.ToString());
if (!string.IsNullOrEmpty(hfVSFileNameVal))
{
VSFileName = hfVSFileNameVal;
}
if (!Page.IsPostBack)
{
VSFileName = GenerateGUID();
hfVSFileName.Value = VSFileName;
//Removing files from Server
RemoveFilesfromServer();
}
fileName = VSFileName + "-" +
Path.GetFileNameWithoutExtension(
Request.Path).Replace("/", "-") + ".vs";
filepath = Path.Combine(folderName, fileName);
return filepath;
}
else
{
if (Session["viewstateFilPath"] == null)
{
fileName = Session.SessionID + "-" +
Path.GetFileNameWithoutExtension(
Request.Path).Replace("/", "-") + ".vs";
filepath = Path.Combine(folderName, fileName);
Session["viewstateFilPath"] = filepath;
}
return Session["viewstateFilPath"].ToString();
}
}
}
In the hidden field approach, I am creating a GUID for uniqueness of the filename, and also, it is created only once on page load.
I have made a change regarding removing files from the server. Here, I am removing files from the server which are older than 3 days:
private void RemoveFilesfromServer()
{
try
{
string folderName = Path.Combine(Request.PhysicalApplicationPath,
"PersistedViewState");
DirectoryInfo _Directory = new DirectoryInfo(folderName);
FileInfo[] files = _Directory.GetFiles();
DateTime threshold = DateTime.Now.AddDays(-3);
foreach (FileInfo file in files)
{
if (file.CreationTime <= threshold)
file.Delete();
}
}
catch (Exception ex)
{
throw new ApplicationException("Removing Files from Server");
}
}
I am calling this method once on every new page request. You can call this method based on your design. We can also make the number of days configurable.
At last, I just want to say, since we generally use lots of ViewState controls on our page, at least we should use one of the discussed approaches.
Feedback is the key for me. I would request to you all to share your feedback and give me suggestions which would encourage and help in writing more.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
LouisDupont wrote:not having ViewState would be better but usually that is not an option for old projects and for productivity purposes
LouisDupont wrote:Finally, it would be appreciable to have Links from those who said about JQuery or comming new technologies to achieve ViewState reductions or elimination..
tec-goblin wrote:You're fighting the symptoms and not the cause, and every solution is a bit complicated without much to gain
tec-goblin wrote:You're fighting the symptoms and not the cause
Pete Mourfield wrote:First, in ASP.NET 4 the ViewStateMode attribute has been added. I would submit that a good practice to use would be to set the ViewStateMode='Disabled' at the @Page/web.config level, and only re-enable ViewState for this controls that need it. This greatly reduces the total amount of ViewState.
Pete Mourfield wrote:Last, your approach to compress the ViewState on the server is interesting, but I wonder how much affect the compression would have on the CPU performance of the server, particularly when the server is under load.
Session.SessionID + "-" + Path.GetFileNameWithoutExtension(Request.Path).Replace("/", "-") + ".vs";
<pages enableViewState="false">
...
</pages>
TimMerksem wrote:All given solutions are bad, the only way to have a better viewstate is by reducing what you put in
TimMerksem wrote:Session End is not guaranteed to work.
Session.SessionID + "-" + Path.GetFileNameWithoutExtension(Request.Path).Replace("/", "-") + ".vs";what happens when I open 2 IE 7 browsers and visit the same page without passing the -nomerge option to IE?
TimMerksem wrote:De/Compressing requires a lot of memory and cpu, basically for achieving almost nothing since most browsers already GZIP the response!
TimMerksem wrote:Just use viewstate for what is meant, saving state of controls. Don't put any more data in it! Disable it for all your pages and controls by default when starting the project using the web.config:
...
and then start to enable it per control and per page
Hitesh.D.Patel wrote:But techniques are not so good for new era applications.
Asken wrote:Well written and clearly explained but to me a bit outdated.
Asken wrote:Better would be to use modern web techniques and not use viewstate at all. I think the ASP.Net implementation using viewstate is evil by design and have a tendency to "blow up" in your face. Especially when doing RAD development with drag'n drop user controls.
Asken wrote:As I see it current ASP.Net client-side will probably be replaced by various scripting technologies which there are already alot of. They normally reduces the application payload to 10% or less saving heaps of bandwidth and read/write overhead. Even MS is starting to realize this and are including jQuery in Studio 2010 web solutions.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/101888/ViewState-Various-ways-to-reduce-performance-overh?fid=1583515&df=90&mpp=50&sort=Position&spc=Relaxed&select=3565833&noise=1&prof=True&view=Quick | CC-MAIN-2014-52 | refinedweb | 3,161 | 58.28 |
Test 1 from Fall 2016
Note, you may only refer to this site during the test. Do not look at other notes or code or Google, etc. Total points is 30, each question is 10 points.
Create a bitbucket repo called
csci221-test1. Create a directory for each problem, i.e.,
prob1,
prob2,
prob3. Be sure to add me as a “reader” to your repository. The test answers are due by Fri Oct 7, 11:59pm.
Problem 1
Update the linked list code from class, on londo in
/home/jeckroth/csci221/class-examples/2016-09-21/, so that the list can store any kind of value in any node. In other words, the list should be able to store an integer in one node, a double in the next, a tree in the third node, etc. Note, the only way to do this in C++ is to use
void* pointers for the values so that each node points to its value rather than storing the value directly.
Write a
main.cpp file and
main function that tests this functionality (adding an integer, double, and tree object to the same linked list). Also retrieve the integer and double values from the list (via their pointers) and print the values (not the pointers), and retrieve the tree’s pointer from the list and print its root value. Hint, you cannot store the ints or doubles directly, you can only store a pointer to them; also, you’ll need a lot of type casting.
Be sure there are no memory leaks, including in your
main.cpp.
Problem 2
Starting with our existing tree code, on londo in
/home/jeckroth/csci221/class-examples/2016-09-30/, write two functions:
Write a function that finds the max depth of the tree. The max depth is the longest number of steps from the root to a leaf. The depth of a tree with just one root node and no children is 1. Add two tests for this function in
main.cpp. Be sure there are no memory leaks (the starting code has no leaks).
Write a function that counts and returns the number of nodes in the tree. Add two more tests to
main.cppto test this function. Be sure there are no memory leaks.
Problem 3
Write a “Vector” class that stores
double values in an array (do not use a linked list or anything else but an array for the underlying storage). Create these class functions, plus any others you need:
- a default constructor,
Vector()
- a constructor that specifies the initial size of the vector,
Vector(int n)
- a set-value function,
void set(int idx, double val), that sets the position
idxin the vector to the val given; if the position is larger than the current size of the array, grow the array to accomodate
- a get-value function,
double get(int idx) const, that returns the value at position
idx; if the position is invalid (too large or negative), just return anything you want (the behavior is undefined)
- a function that returns the array size,
int size() const, which may have grown since it was first constructed due to the
setfunction
Create a proper
.cpp and
.h separation of code for the class. Use the
main.cpp file below to test your class. You should have no memory leaks, and the main function should print correct values (it’s self-explanatory). Also write a Makefile, with a
clean target as well as the usual targets.
#include <iostream> using namespace std; #include "vector.h" int main() { Vector v; v.set(3, 10.7); v.set(4, 11.7); cout << "v.get(3) == " << v.get(3) << endl; cout << "v.get(4) == " << v.get(4) << endl; cout << "v.size() should be >= 5: " << v.size() << endl; Vector *pv = new Vector(20); pv->set(0, 88.8); pv->set(19, 99.9); cout << "pv->get(0) == " << pv->get(0) << endl; cout << "pv->get(19) == " << pv->get(19) << endl; cout << "pv->size() should be >= 20: " << pv->size() << endl; delete pv; Vector *pv2 = new Vector; for(int i = 0; i < 10; i++) { pv2->set(i, 2.2); } for(int i = 0; i < 10; i++) { cout << "pv2->get(" << i << ") == " << pv2->get(i) << endl; } delete pv2; } | http://csci221.artifice.cc/guide/test-1-fall2016.html | CC-MAIN-2017-30 | refinedweb | 705 | 81.63 |
Building Flappy Bird #6 – Randomization & Ground
Right now, our game is a bit too easy. It goes on forever, but it’s always exactly the same.
What we want to do next is add some variation to the seaweed. To accomplish this, we’ll have our game pick a randomized Y value for the position.
Since we already move our seaweed when it gets to -15 on the X axis, we can make do the randomization at that time.
To do the randomization, we’ll just call into the Random function Unity provides us.
float randomYPosition = UnityEngine.Random.Range(-3, 3);
UnityEngine.Random.Range will return a value between the first and second numbers passed in. For this example, we’re passing negative 3 and positive 3, so we’ll get a number somewhere between there.
Change your “MoveLeft”) { float randomYPosition = UnityEngine.Random.Range(-3, 3); transform.position = new Vector3(15, randomYPosition, 0); } } }
Give the game a play now.
Notice how the seaweed Y position is changing just enough to add some difficulty to our game.
Cheats (Bugs we should fix)
If you’ve been playing all these times I said to Play, you’ve probably noticed a few issues.
For example, if you fall down without hitting a seaweed, you just fall, there’s no ground. The same goes for flying too high, you can go above the seaweed and just hang out there safely.
Open the “Fish” script and modify the Update() method to match this
//); } if (transform.position.y > 6f || transform.position.y < -6f) { Application.LoadLevel(0); } }
If you play again, you’ll see that when the fish drops below -6 on the Y axis, the fish dies and the level re-loads.
The same happens if you click fast enough and bring your above positive 6 on the Y axis.
Real Ground
Let’s add some real ground now. In Flappy Bird, we have a simple striped ground (mario ground). For our game, we have some dirt.
To do this, we’re actually going to add a quad to our scene. The quad is located under 3D assets, but it does exactly what we need for our 2D game.
Remember you can mix and match 2D/3D in your games.
Rename the new quad to “Ground”
Adjust the position to [0, -4.8, 0]. Y is -4.8
Set the X value of Scale to 20.
Your Game View should now look like this
Materials
A quad is not a sprite, so we don’t have a Sprite Renderer.
What we have instead is a Mesh Renderer.
What we need to do is change the Material of our renderer. The art package you downloaded in part 1 has a materials folder with a material named “Ground”.
Drag that “Ground” material and drop it onto the “Ground” Game Object in the Hierarchy.
You could also drag the material onto the area that says “Element 0” in the screenshot above.
Since the quad is a 3D object, when we added it, there was a 3D collider attached to it. That collider is a new type that we haven’t used before called a MeshCollider.
We’re building a 2D game though, so we need to remove that 3D collider.
Then add a BoxCollider2D to our “ground”.
Your BoxCollider2D should have a nice rectangular green outline.
When we hit the ground, we want it to do the same thing the seaweed does, so let’s reuse one of our old scripts.
Add the “SeaweedCollisionHandler” script to the “ground”
Get ready to play
Now think for a second.
What do you expect to happen?
..
..
Go ahead and hit play to see if you were right.
What’s going on?
Right now, it probably seems think things have gotten worse.
Your fish is sliding along the ground. The seaweed is sliding along the ground. And your fish isn’t dying until he slides in.
If you remember from part 2, we forgot to check the IsTrigger checkbox.
Go ahead and check it now, then give it another try.
Your fish should now be dying the moment it touches the ground.
Animating the Ground
The last thing we need to do is get our ground animating. Previously, when we wanted things to move, we applied a Translate on their rigidbody.
For the ground though, we’re doing something different. We created the ground as a Quad for a specific reason. We want to animate the texture on it, without actually moving the transform.
To do that, we’ll need to create a new script. Create one now named “GroundScroller”.
Open the “GroundScroller” script and edit it to match this
using UnityEngine; public class GroundScroller : MonoBehaviour { [SerializeField] private float _scrollSpeed = 5f; // Update is called once per frame void Update() { // Get the current offset Vector2 currentTextureOffset = this.GetComponent<Renderer>().material.GetTextureOffset("_MainTex"); // Determine the amount to scroll this frame float distanceToScrollLeft = Time.deltaTime * _scrollSpeed; // Calculate the new offset (Add current + distance) float newTextureOffset_X = currentTextureOffset.x + distanceToScrollLeft; // Create a new Vector2 with the updated offset currentTextureOffset = new Vector2(newTextureOffset_X, currentTextureOffset.y); // Set the offset to our new value this.GetComponent<Renderer>().material.SetTextureOffset("_MainTex", currentTextureOffset); } }
Attach the “GroundScroller” script to the Ground GameObject
Try playing again and watch the ground scroll along with the seaweed!
Well, maybe it’s not quote scrolling “along” with the seaweed. It’s going a bit too fast and looks pretty strange.
Luckily, if you were paying attention to the code, we’ve added a variable to adjust the scroll speed.
If you try to adjust the speed while playing, it’s going to keep resetting. This is because of how we’re handling death. When we do a LoadLevel to reload the scene, all of those gameobjects are being re-loaded and re-created.
We have a couple of options for finding the correct speed.
- Change our death system to not use LoadLevel
- Stop the game, adjust the value, hit play, rinse, repeat (until we get the right value).
- Disable the fish and adjust while the game plays.
Personally, I prefer the easy way, so let’s go with option c.
- Stop playing.
- Now disable the Fish.
- Start Playing again
- Adjust the speed until find a reasonable value. (I ended up with 1.5)
- Memorize that number
- Stop playing
- Re-enter that number.
Now play one more time and enjoy what you’ve made. See how far you can get.
Next Up
Great work so far. Our game has come together and is functional and fun.
In the next part, we’ll make our game a little fancier with some props and add a unique scoring system. | https://unity3d.college/2015/11/17/unity3d-intro-building-flappy-bird-part-6/ | CC-MAIN-2020-34 | refinedweb | 1,108 | 76.11 |
Repeat
body while the condition
cond is true.
Aliases:
tf.while_loop( cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None, maximum_iterations=None, return_same_structure=False )
cond is a callable returning a boolean scalar tensor.
body is a callable
returning a (possibly nested) tuple, namedtuple or list of tensors of the same
arity (length and structure) and types as
loop_vars.
loop_vars is a
(possibly nested) tuple, namedtuple or list of tensors that is passed to both
cond and
body.
cond and
body both take as many arguments as there are
loop_vars.
In addition to regular Tensors or IndexedSlices, the body may accept and return TensorArray objects. The flows of the TensorArray objects will be appropriately forwarded between loops and during gradient calculations.
Note that
while_loop calls
cond and
body exactly once (inside the
call to
while_loop, and not at all during
Session.run()).
while_loop
stitches together the graph fragments created during the
cond and
body
calls with some additional graph nodes to create the graph flow that
repeats
body until
cond returns false.
For correctness,
tf.while_loop() strictly enforces shape invariants for
the loop variables. A shape invariant is a (possibly partial) shape that
is unchanged across the iterations of the loop. An error will be raised
if the shape of a loop variable after an iteration is determined to be more
general than or incompatible with its shape invariant. For example, a shape
of [11, None] is more general than a shape of [11, 17], and [11, 21] is not
compatible with [11, 17]. By default (if the argument
shape_invariants is
not specified), it is assumed that the initial shape of each tensor in
loop_vars is the same in every iteration. The
shape_invariants argument
allows the caller to specify a less specific shape invariant for each loop
variable, which is needed if the shape varies between iterations. The
tf.Tensor.set_shape
function may also be used in the
body function to indicate that
the output loop variable has a particular shape. The shape invariant for
SparseTensor and IndexedSlices are treated specially as follows:
a) If a loop variable is a SparseTensor, the shape invariant must be TensorShape([r]) where r is the rank of the dense tensor represented by the sparse tensor. It means the shapes of the three tensors of the SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here is the shape of the SparseTensor.dense_shape property. It must be the shape of a vector.
b) If a loop variable is an IndexedSlices, the shape invariant must be a shape invariant of the values tensor of the IndexedSlices. It means the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], [shape.ndims]).
while_loop implements non-strict semantics, enabling multiple iterations
to run in parallel. The maximum number of parallel iterations can be
controlled by
parallel_iterations, which gives users some control over
memory consumption and execution order. For correct programs,
while_loop
should return the same result for any parallel_iterations > 0.
For training, TensorFlow stores the tensors that are produced in the forward inference and are needed in back propagation. These tensors are a main source of memory consumption and often cause OOM errors when training on GPUs. When the flag swap_memory is true, we swap out these tensors from GPU to CPU. This for example allows us to train RNN models with very long sequences and large batches.
Args:
cond: A callable that represents the termination condition of the loop.
body: A callable that represents the loop body.
loop_vars: A (possibly nested) tuple, namedtuple or list of numpy array,
Tensor, and
TensorArrayobjects.
shape_invariants: The shape invariants for the loop variables.
parallel_iterations: The number of iterations allowed to run in parallel. It must be a positive integer.
back_prop: Whether backprop is enabled for this while loop.
swap_memory: Whether GPU-CPU memory swap is enabled for this loop.
name: Optional name prefix for the returned tensors.
maximum_iterations: Optional maximum number of iterations of the while loop to run. If provided, the
condoutput is AND-ed with an additional condition ensuring the number of iterations executed is no greater than
maximum_iterations.
return_same_structure: If True, output has same structure as
loop_vars. If eager execution is enabled, this is ignored (and always treated as True).
Returns:
The output tensors for the loop variables after the loop.
If
return_same_structure is True, the return value has the same
structure as
loop_vars.
If
return_same_structure is False, the return value is a Tensor,
TensorArray or IndexedSlice if the length of
loop_vars is 1, or a list
otherwise.
Raises:
TypeError: if
condor
bodyis not callable.
ValueError: if
loop_varsis empty.
Example:
i = tf.constant(0) c = lambda i: tf.less(i, 10) b = lambda i: tf.add(i, 1) r = tf.while_loop(c, b, [i])
Example with nesting and a namedtuple:
import collections Pair = collections.namedtuple('Pair', 'j, k') ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2))) c = lambda i, p: i < 10 b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k))) ijk_final = tf.while_loop(c, b, ijk_0)
Example using shape_invariants:
i0 = tf.constant(0) m0 = tf.ones([2, 2]) c = lambda i, m: i < 10 b = lambda i, m: [i+1, tf.concat([m, m], axis=0)] tf.while_loop( c, b, loop_vars=[i0, m0], shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])
Example which demonstrates non-strict semantics: In the following
example, the final value of the counter
i does not depend on
x. So
the
while_loop can increment the counter parallel to updates of
x.
However, because the loop counter at one loop iteration depends
on the value at the previous iteration, the loop counter itself cannot
be incremented in parallel. Hence if we just want the final value of the
counter (which we print on the line
print(sess.run(i))), then
x will never be incremented, but the counter will be updated on a
single thread. Conversely, if we want the value of the output (which we
print on the line
print(sess.run(out).shape)), then the counter may be
incremented on its own thread, while
x can be incremented in
parallel on a separate thread. In the extreme case, it is conceivable
that the thread incrementing the counter runs until completion before
x is incremented even a single time. The only thing that can never
happen is that the thread updating
x can never get ahead of the
counter thread because the thread incrementing
x depends on the value
of the counter.
import tensorflow as tf n = 10000 x = tf.constant(list(range(n))) c = lambda i, x: i < n b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1, [i], "x:")) i, out = tf.while_loop(c, b, (0, x)) with tf.compat.v1.Session() as sess: print(sess.run(i)) # prints [0] ... [9999] # The following line may increment the counter and x in parallel. # The counter thread may get ahead of the other thread, but not the # other way around. So you may see things like # [9996] x:[9987] # meaning that the counter thread is on iteration 9996, # while the other thread is on iteration 9987 print(sess.run(out).shape) | https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/while_loop?hl=zh-cn | CC-MAIN-2020-29 | refinedweb | 1,201 | 57.37 |
Definition of Ruby Objects
Ruby is a purely object-oriented language, everything in Ruby is an object because Ruby supports everything like encapsulation, inheritance, operator overloading, and polymorphism, etc, to create an object in Ruby we can use the new keyword, we can create an unlimited object from any class and each object can access the attributes of the class (like a method of class and constantly defined inside the class), we can access an attribute by object using object created by class name, most important uses of the object is they are very portable as they can contain any data like method and variable (obj.methodname and object.variableName) also each object points to the same memory location for accessing method and variable without allocating new memory.
How to Create Objects in Ruby?
To create an Object in Ruby we can use the new keyword with the name of the class like ABC.new.See the below example of the Object creation in Ruby. Let me explain the below example.
- We have defined a class with class name ABC, We can see that we have used class keyword before the class name(ABC). We will use this class name to create objects using a new keyword.
- Inside the class, we have given some comment sections, here in place of comments we can also put some methods and constants.
- Inside the class we have used a method initialize method, it is a predefined method that does the initialization.
- Finally, we are creating an object from the class ClassName with the help of a new
- When we create objects from the class at that time the class will also initialize the method automatically and do the work of initialization of variables. Here Object contains all the variables and methods of the ABC class.
Please see the syntax below for the class creation in Ruby.
class ABC
#Do the initialisation here
def initialize(param1, param2)
# Here we are initialising the variables .
@param1 = param1
@param2 = param2
end
#Write here your methods and constants
end
#Create object from class ClassName
A1 =ABC.new(param1, param2)#Creation Of Object A1
A2 =ABC.new(param1, param2)#Creation Of Object A2
A3 =ABC.new(param1, param2)#Creation Of Object A3
A4 =ABC.new(param1, param2)#Creation Of Object A4
How do Objects Work in Ruby?
If you have seen any civil engineer designing or drawing some architecture for the buildings for architecture for bridges. Once a civil engineer draws any design that design can be used to create many buildings. Because that drawing contains everything which we need to create any buildings or bridges. In the same way objects in Ruby play the role of buildings and all the classes play the role of paper of drawing. Every time we create an object and the object contains the properties of classes. We can explain the working of class in Ruby can be explained with the help of the flowchart below.
- Here we have one class with the name ABC and this class contains a method and a constant.
- Here the new keyword will notify the Ruby compiler about instruction for the creation of an object from class name ABC.
- Once Ruby compiler reads the new keyword it will give access or more technically say associate the memory of the methods and variables of the class attributes.
- Now we are creating many objects from the class ABC like object A1, object A2, object A3, and object A4.
- Each object created from this class contains the same method and constant inside it. This means at any time if needed these objects can access the method M and variable C.
See the below flowchart of Ruby’s class for better understanding.
Examples to Implement Objects in Ruby
Below is an example for the object creation and it’s uses in Ruby. The aim of this program is to display the personal information. We can explain the below example in the following steps.
- First, we have created a class with the name Person.
- Next, we have created a Ruby inbuilt method initialize which will be used as the constructor inside the Ruby class to initialize basic things.
- We wrote a method display_person inside the class which will display the data for the attributes passed to it.
- Next, we are creating an object from the class Person with the help of a new keyword and calling method display_person with required parameters to display the person data.
Example #1
Please see the below example of code along with a screen of output.
Code:
class Person
# Method for initialisation inside the class
def initialize()
# Initialising work
puts "Initialisation for the class Person will be done here"
end
def display_person(name,age,sex)
puts "The name of the person is #{name} and age is #{age} also he is a #{sex}"
end
end
# Creating an objects and passing parameters for initialization
personObjec1 = Person.new()
personObject2 = Person.new()
personObjec1.display_person("Ranjan", 31, "male")
personObjec1.display_person("Manisha", 28, "female")
Output:
Example #2
Below is an example of the object creation and it’s uses in Ruby. The aim of this program is to display personal information. We can explain the below example in the following steps.
- First, we have created a class with the name Person as with help of Person class we will create objects using a new keyword.
- Next, we have created a Ruby inbuilt method initialize which will be used as the constructor inside the Ruby class to initialize user details for use in other methods.
- We wrote a method display_person inside the class which will display the data for the attributes passed to initialize at the time of object creation.
- Next, we are creating an object from the class Person and calling method display_person with required no parameters as the parameters will be used which we have passed at the time of initialization.
Please see the below example of code along with a screen of output.
Code:
class Person
# Method for initialisation inside the class
def initialize(name ,age ,sex )
# Initialising
@name = name
@age = age
@sex =sex
end
def display_person()
puts "The name of the person is #{@name} and age is #{@age} also he is a #{@sex}"
end
end
# Creating an objects and passing parameters for initialization
personObjec1 = Person.new("Ranjan", 31, "male")
personObjec2 = Person.new("Manisha", 28, "female")
personObjec1.display_person()
personObjec2.display_person()
Output:
Recommended Articles
This is a guide to Ruby Objects. Here we also discuss the definition and how to create objects in ruby along with different examples and its code implementation. You may also have a look at the following articles to learn more – | https://www.educba.com/ruby-objects/?source=leftnav | CC-MAIN-2022-40 | refinedweb | 1,103 | 61.26 |
In September our local C++ User Group started a “new year” of meetings after a little break in August. I had a pleasure to give a talk about string operations in C++17.
Here are the slides and additional comments.
The Talk
For my book I wrote a lot of content about
string_view,
std::searcher and
std::to_chars,
std::from_chars and I wanted to make a short summary of those features.
In the talk I included some of my benchmarks and notes that were covered in my blog, for example:
- Preprocessing Phase for C++17’s Searchers
- Speeding up Pattern Searches with Boyer-Moore Algorithm from C++17
- Speeding Up string_view String Split Implementation
- Performance of std::string_view vs std::string from C++17
- plus the new content about conversion routines.
Most of the time we spent discussing
string_view as this feature might have a bigger impact on your code.
std::searcher and low-level conversion routines are quite specialized so they won’t be used that often as views.
For example, during the discussion, we shared the experience using string views. One case is that when you refactor some existing code you’ll often find a situation where you can use views through chains of function calls, but then, at some point, you’re stuck as you have to perform conversion to
string anyway.
Another thing, that was brought by Andrzej Krzemieński (from Andrzej’s C++ Blog). While
string_view is supposed not to allocate any extra memory, you should be still prepared for memory allocations for exceptions.
Have a look at this code:
#include <iostream> #include <stdexcept> #include <string_view> void* operator new(std::size_t n) { std::cout << "new() " << n << " bytes\n"; return malloc(n); } int main() { std::string_view str_view("abcdef"); try { for (std::size_t i = 0; true; ++i) std::cout << i << ": " << str_view.at(i) << '\n'; } catch (const std::out_of_range& e) { std::cout << "Whooops. Index is out of range.\n"; std::cout << e.what() << '\n'; } }
The code uses
str_view.at(i) that can throw when trying to access an index out of range. When an exception is created, you’ll see some memory allocation - for the message string.
It’s probably not super often to use
at, but it’s an interesting observation.
The Slides
Summary
The talk was my third presentation for the Cracow User Group. It’s an amazing experience, and I hope to be able to deliver more good stuff in the future :)
What is your experience with string views, searchers and low-level conversion routines? Have you played with the new features? | https://www.bfilipek.com/2018/10/strings17talk.html | CC-MAIN-2019-13 | refinedweb | 426 | 70.63 |
#include <stdio.h> #include <assert.h> #define default assert(0); default #define case assert(0); case #define fallthrough if(0) int main() { int i; while(-1 != (i = getchar())) { switch(i - '0') { case 0: printf("zero"); break; /* zero */ case 1: printf("one"); break; /* one */ case 2: printf("two"); break; /* two */ case 3: printf("three"); fallthrough /* threefour */ case 4: printf("four"); break; /* four */ case 5: printf("five"); fallthrough /* fivesixseven */ case 6: printf("six"); fallthrough /* sixseven */ case 7: printf("seven"); break; /* seven */ case 8: printf("eight"); /* ERROR */ case 9: printf("nine"); break; /* nine */ default: printf("\n"); } } return 0; }
case; {statements}; break; case;An IDE that warned you about missing breaks would be nice. The break statement in a C/C++ case statement seems only to provide for "code reuse" without function calls. The two uses I have seen made use of this are:
select on x ..case 'foo': ....blah_one() ..case 'bar','star','far': // note multiple match items ....blah_two() ..case 'zeek','Greek': ....blah_three() ..otherwise: ....blah() end select(Dots used due to TabMunging) I like that approach. It's more intutive and less error-prone than the way C/Java requires a break statement. If you have something really complex such that this is not sufficient, then if/elseif/else is often the preferred way.
public static void main(String[] args) { switch(args.length) { case 4: handleFourthArgument(args[3]); case 3: handleThirdArgument(args[2]); case 2: handleSecondArgument(args[1]); case 1: handleFirstArgument(args[0]); break; default: // print warning on unexpected arguments }where the "handleXXXArgument" calls are typically some kind of in-line code.
if (args.length == 4) {handleFourthArgument(args[3]);} if (args.length >= 3) {handleThirdArgument(args[2]);} if (args.length >= 2) {handleSecondArgument(args[1]);} if (args.length >= 1) {handleFirstArgument(args[0]);}However, this is very slow! It means there are 4 comparisons. The case statement of above has only one comparison and one jump to an address calculated from the result of this comparison; It may not matter with 4 comparisons at the beginning of a program, but if you have 200 and the code is executed 800 times every second, then YES, it does matter!
public static void main(String[] args) { switch(args.length) { case 4: handleFourthArgument(args[3]); handleThirdArgument(args[2]); handleSecondArgument(args[1]); handleFirstArgument(args[0]); break; case 3: handleThirdArgument(args[2]); handleSecondArgument(args[1]); handleFirstArgument(args[0]); break; case 2: handleSecondArgument(args[1]); handleFirstArgument(args[0]); break; case 1: handleFirstArgument(args[0]); break; default: // print warning on unexpected arguments break; }But this is very redundant. I strongly prefer the series of if's over this one. This supports an arbitrary number of arguments:
ArgumentHandlers[] theArgumentHandlers = loadArgumentHandlers(); if (args.length > theArgumentHandlers.length) throw new TooManyArgumentsException(); for (int index=0; index<args.length; index++) { theArgumentHandlers[index].handleArgument(args[index]); }It is also non-obvious to the vast majority of developers. It perhaps could be argued that it is PrematureAbstraction. Fine. How's this, then?
for(int index = 0; index < args.length; index++) { // switch (index) {//Oh god, I just made a FOR-CASE structure. Forgive me gods of programming, this was the solution of least kludge... case 4: // handles this case only if args.length >= 4 HandleFourthArgument?(args[index]); break; //etc. } }Besides, this is really a bad example; if the handling of the arguments is similar, I'd rather write
void handleNthArgument(int argumentNum, string argumentVal) { ... }
while (something) { if (something_else) do_something(); else break; // while loop will exit. }#2:
while (something) { // Admittedly, switch wouldn't be used for a true/false comparison. switch (something_else) { case true: do_something(); case false: break; // switch will break, but what about breaking the while loop? } }So, we're to use if() instead of switch() if we wanted to break out of a loop directly from the case block? It seems that as if() and switch() are so similar, they should both have the ability to break out of the enclosing loop. (If switch was deemed necessary, the following solution could always be put in place: #3:
while (something) { bool bShouldBreak = false; switch (something_else) { case true: do_something(); case false: bShouldBreak = true; } if (bShouldBreak) break; ... }) Or am I missing something? David Grant Does this count as something? #4:
while (something && something_else) { do_something(); }It covers the simple true/false situation. Some languages (Ada, Modula?) have labeled break/continue statements that let you jump out multiple levels. That handles heterogenous do_somethings(). Ofcourse, SwitchStatement is a CodeSmell, so only the true/false case matters. I don't know about Ada or Modula, but Java and Perl both have labeled loops that let you specify which look you want to break or continue. A Java (1.5) example:
// find a Person in an Organization, given a personSought and a List<Organization> orgLoop: for (Organization org : orgs) { List<Person> people = org.getRoster(); personLoop: for (Person person : people) { if (person.equals(personSought)) { doSomething(person); break orgLoop; } } }Those Java guys sure did choose a funny keyword for "goto" ;) -- dc No need here for the label on the inner loop, but I prefer all of them in a nested set to have names, if any of them do. -- DavidConrad This page was originally about case/switch statements, not loops. I don't mind them for loops, but I do for case/switch.
switch(a) { // Example Sally-4 case 1,2,3 {...} case 4 {...} case 5,6 {...} ... otherwise {...} }I can dig it. We could keep the old syntax in for backward compatibility. Problem solved! Now we can all go home :-) Maybe we could use the keyword "when" to distinquish between the old-style. This would even allow mixing for backward compatibility. Use "when" for the non-break-needing multi-set version of a sub-block, and "case" for the old kind. However, "case" fall-thru's (no-break) still would not be valid if the next sub-block is a "when". Fall-thru's would only work on adjacent "case" sub-blocks. This is so that one is not lured into a false sense of security because they are using the new-fangled one. (Generally it would be considered poor style to mix, but possible.) But, that may all be too confusing for programmers. Perhaps just use different keywords in place of "switch" and "case" for the new style to distinguish from the old style (which would still be supported). Suggestion:
select(a) { when 1,2,3 {...} when 4 {...} when 5,6 {...} ... otherwise {...} }
#define SWITCHCASE(thisOne, isThatOne) switch ( thisOne ) { case (isThatOne) : { #define THRUCASE(isThatOne) } case (isThatOne) : { #define THRUDEFAULT } default : { #define CASE(isThatOne) } break; case (isThatOne) : { #define DEFAULT } break; default : { #define SWITCHOFF }}Example 1:
SWITCHCASE ( nBody , .......... 0 ) return nSun CASE ... ( 5 ) THRUCASE ( 6 ) THRUCASE ( 7 ) THRUCASE ( 8 ) return nGiantGaseous DEFAULT ...... return nDwarfRocky SWITCHOFFExample 2:
SWITCHCASE( getchar() , ...... 'y' ) THRUCASE ( 'Y' ) ........... return ID_YES CASE ( 'a' ) THRUCASE ( 'A' ) ........... return ID_ABORT CASE ( 'n' ) THRUCASE ( 'N' ) THRUDEFAULT return ID_NO SWITCHOFFSome pros:
- fallthrough behaviour must be explicitly chosenSome cons:
- ProgrammingDialect? : departing from familiar syntax might make maintenance harder. | http://c2.com/cgi-bin/wiki?IsBreakStatementArchaic | CC-MAIN-2014-15 | refinedweb | 1,140 | 57.77 |
Comment on Tutorial - How to Send SMS using Java Program (full code sample included) By Emiley J.
Comment Added by : criz
Comment Added at : 2007-10-16 21:06:30
Comment on Tutorial : How to Send SMS using Java Program (full code sample included) By Emiley J.
Hello,
Can you please send me the complete code to "RECEIVE" and "SEND" SMS Messages. public class Overload2 {
View Tutorial By: ITMIT at 2011-06-05 19:16:46
2. w0w amazing
View Tutorial By: elvin at 2009-01-05 01:00:13
3. I have loaded allthe 5 java files.But how to imple
View Tutorial By: Thennarasu at 2011-05-20 02:30:28
4. please send me basic programs in java with explana
View Tutorial By: Rekha at 2013-04-14 03:55:51
5. Thank you for your simple example which will help
View Tutorial By: eekld at 2012-01-04 15:46:48
6. i need How to connect with database connection in
View Tutorial By: Renjit at 2010-01-04 01:54:25
7. Following line
if (strcmp(dp->
View Tutorial By: Sebastiaan Jaarsma at 2012-12-24 19:28:43
8. wehhh... so so so difficult
View Tutorial By: sam at 2013-02-28 01:55:41
9. its good
View Tutorial By: sumanth at 2010-04-10 20:10:00
10. please show many codes in c++....thankZz
View Tutorial By: Maybelline Hope Bongolan at 2008-08-20 17:50:51 | http://java-samples.com/showcomment.php?commentid=8809 | CC-MAIN-2018-09 | refinedweb | 246 | 66.44 |
MAIL REFERENCE MANUAL Kurt Shoens Revised by Craig Leres and Mark Andrews Version 5.5 April 9, 2016 1. Introduction Mail provides a simple and friendly environment for sending and receiving mail. It divides incoming mail into its constituent messages and allows the user to deal with them in any order. In addition, it provides a set of ed-like commands for manipulating messages and send- ing mail. Mail offers the user simple editing capabilities to ease the composition of outgoing messages, as well as providing the ability to define and send to names which address groups of users. Finally, Mail is able to send and receive messages across such networks as the ARPANET, UUCP, and Berkeley network. This document describes how to use the Mail program to send and receive messages. The reader is not assumed to be familiar with other message handling systems, but should be familiar with the UNIX[1] shell, the text editor, and some of the common UNIX commands. "The UNIX Programmer's Manual," "An Introduction to Csh," and "Text Editing with Ex and Vi" can be consulted for more information on these topics. A word of explanation is in order here concerning the name Mail: the original UNIX mail program was known as /bin/mail. The BSD mail program was called Mail to differentiate it from the older mail ____________________ [1] UNIX is a trademark of Bell Laboratories. USD:7-2 Mail Reference Manual program. /bin/mail is not included in OpenBSD so there is no ambiguity and the BSD mail program is installed as /usr/bin/mail; /usr/bin/Mail is simply a link for backwards compatibility. To further confuse the issue, a second link was retained for compatibility with SystemV sys- tems, mailx. In this document, we use the original name, `Mail', to refer to any of these. Here is how messages are handled: the mail system accepts incom- ing messages for you from other people and collects them in a file, called your system mailbox. When you log in, the system notifies you if there are any messages waiting in your system mailbox. If you are a csh user, you will be notified when new mail arrives if you inform the shell of the location of your mailbox. On OpenBSD, your system mailbox is located in the directory /var/mail in a file with your login name. If your login name is "sam," then you can make csh notify you of new mail by including the following line in your .cshrc file: set mail=/var/mail/sam When you read your mail using Mail, it reads your system mailbox and separates that file into the individual messages that have been sent to you. You can then read, reply to, delete, or save these messages. Each message is marked with its author and the date they sent it. 2. Common usage The Mail command has two distinct usages, according to whether one wants to send or receive mail. Sending mail is simple: to send a message to a user whose login name is, say, "root," use the shell com- mand: % Mail root then type your message. When you reach the end of the message, type an EOT (Control-D) at the beginning of a line, which will cause Mail to echo "EOT" and return you to the Shell. When the user you sent mail to next logs in, he will receive the message: You have mail. to alert him to the existence of your message. If, while you are composing the message you decide that you do not wish to send it after all, you can abort the letter with a <Control-C>. Typing a single <Control-C> causes Mail to print (Interrupt -- one more to kill letter) Typing a second <Control-C> causes Mail to save your partial letter on the file "dead.letter" in your home directory and abort the letter. Once you have sent mail to someone, there is no way to undo the act, so be careful. Mail Reference Manual USD:7-3 The message your recipient reads will consist of the message you typed, preceded by a line telling who sent the message (your login name) and the date and time it was sent. If you want to send the same message to several other people, you can list their login names on the command line. Thus, % Mail sam bob john Tuition fees are due next Friday. Don't forget!! <Control-D> EOT % will send the reminder to sam, bob, and john. If, when you log in, you see the message, You have mail. you can read the mail by typing simply: % Mail Mail will respond by typing its version number and date and then list- ing the messages you have waiting. Then it will type a prompt and await your command. The messages are assigned numbers starting with 1 -- you refer to the messages with these numbers. Mail keeps track of which messages are new (have been sent since you last read your mail) and read (have been read by you). New messages have an N next to them in the header listing and old, but unread messages have a U next to them. Mail keeps track of new/old and read/unread messages by putting a header field called "Status" into your messages. To look at a specific message, use the type command, which may be abbreviated to simply t. For example, if you had the following mes- sages: N 1 root Wed Sep 21 09:21 "Tuition fees" N 2 sam Tue Sep 20 22:55 you could examine the first message by giving the command: type 1 which might cause Mail to respond with, for example: Message 1: From root Wed Sep 21 09:21:45 1978 Subject: Tuition fees Status: R Tuition fees are due next Wednesday. Don't forget!! USD:7-4 Mail Reference Manual Many Mail commands that operate on messages take a message number as an argument like the type command. For these commands, there is a notion of a current message. When you enter the Mail program, the current message is initially the first by simply typing a newline. As a special case, you can type a newline as your first command to Mail to type the first message. If, after reading a message, you wish to immediately send a reply, you can do so with the reply command. Reply, like type, takes a message number as an argument. Mail then begins a message addressed to the user who sent you the message. You may then type in your letter in reply, followed by a <Control-D> at the beginning of a line, as before. Mail will type EOT, then type the ampersand prompt to indicate its readiness to accept another command. In our example, if, after typing the first message, you wished to reply to it, you might give the command: reply Mail responds by typing: To: root Subject: Re: Tuition fees and waiting for you to enter your letter. You are now in the message collection mode described at the beginning of this section and Mail will gather up your message up to a <Control-D>. Note that it copies the subject header from the original message. This is useful in that correspondence about a particular matter will tend to retain the same subject heading, making it easy to recognize. If there are other header fields in the message, the information found will also be used. For example, if the letter had a "To:" header listing several reci- pients, Mail would arrange to send your reply to the same people as well. Similarly, if the original message contained a "Cc:" (carbon copies to) field, Mail would send your reply to those users, too. Mail is careful, though, not too send the message to you, even if you appear in the "To:" or "Cc:" field, unless you ask to be included explicitly. See section 4 for more details. Mail Reference Manual USD:7-5 After typing in your letter, the dialog with Mail might look like the following: reply To: root Subject: Tuition fees Thanks for the reminder EOT & The reply command is especially useful for sustaining extended conversations over the message system, with other "listening" users receiving copies of the conversation. The reply command can be abbre- viated to r. Sometimes you will receive a message that has been sent to several mes- sage directly with the mail command, which takes as arguments the names of the recipients you wish to send to. For example, to send a message to "frank," you would do: mail frank This is to confirm our meeting next Friday at 4. EOT & The mail command can be abbreviated to m. Normally, each message you receive is saved in the file mbox in your login directory at the time you leave Mail. Often, however, you will not want to save a particular message you have received because it is only of passing interest. To avoid saving a message in mbox you can delete it using the delete command. In our example, delete 1 will prevent Mail from saving message 1 (from root) in mbox. In addi- tion to not saving deleted messages, Mail will not let you type them, either. The effect is to make the message disappear altogether, along with its number. The delete command can be abbreviated to simply d. Many features of Mail can be tailored to your liking with the set command. The set command has two forms, depending on whether you are setting a binary option or a valued option. Binary options are either on or off. For example, the "ask" option informs Mail that each time you send a message, you want it to prompt you for a subject header, to USD:7-6 Mail Reference Manual be included in the message. To set the "ask" option, you would type set ask Another useful Mail option is "hold." Unless told otherwise, Mail moves the messages from your system mailbox to the file mbox in your home directory when you leave Mail. If you want Mail to keep your letters in the system mailbox instead, you can set the "hold" option. Valued options are values which Mail uses to adapt to your tastes. For example, the "SHELL" option tells Mail which shell you like to use, and is specified by set SHELL=/bin/csh for example. Note that no spaces are allowed in "SHELL=/bin/csh." A complete list of the Mail options appears in section 5. Another important valued option is "crt." If you use a fast video terminal, you will find that when you print long messages, they fly by too quickly for you to read them. With the "crt" option, you can make Mail print any message larger than a given number of lines by sending it through a paging program. This program is specified by the valued option PAGER. If PAGER is not set, a default paginator is used. For example, most CRT users with 24-line screens should do: set crt=24 to paginate messages that will not fit on their screens. In the default state, more (default paginator) prints a screenful of informa- tion, then types ``byte XXX'', where `XXX' represents the number of bytes paginated. Type a space to see the next screenful. Another adaptation to user needs that Mail provides is that of aliases. An alias is simply a name which stands for one or more real user names. Mail sent to an alias is really sent to the list of real users associated with it. For example, an alias can be defined for the members of a project, so that you can send mail to the whole pro- ject by sending mail to just a single name. The alias command in Mail defines an alias. Suppose that the users in a project are named Sam, Sally, Steve, and Susan. To define an alias called "project" for them, you would use the Mail command: alias project sam sally steve susan The alias command can also be used to provide a convenient name for someone whose user name is inconvenient. For example, if a user named "Bob Anderson" had the login name "anderson,"" you might want to use: alias bob anderson so that you could send mail to the shorter name, "bob." Mail Reference Manual USD:7-7 While the alias and set commands allow you to customize Mail, they have the drawback that they must be retyped each time you enter Mail. To make them more convenient to use, Mail always looks for two files when it is invoked. It first reads a system wide file "/etc/mail.rc," then a user specific file, ".mailrc," which is found in the user's home directory. The system wide file is maintained by the system administrator and contains set commands that are applicable to all users of the system. The ".mailrc" file is usually used by each user to set options the way he likes and define individual aliases. For example, my .mailrc file looks like this: set ask nosave SHELL=/bin/csh As you can see, it is possible to set many options in the same set command. The "nosave" option is described in section 5. Mail aliasing is implemented at the system-wide level by the mail delivery system sendmail. These aliases are stored in the file /etc/mail/aliases and are accessible to all users of the system. The lines in /etc/mail/aliases are of the form: alias: name<1>, name<2>, name<3> where alias is the mailing list name and the name<i> are the members of the list. Long lists can be continued onto the next line by start- ing the next line with a space or tab. Remember that you must execute the command newaliases (as superuser) after editing /etc/mail/aliases since the delivery system uses an indexed file created by newaliases. We have seen that Mail can be invoked with command line arguments which are people to send the message to, or with no arguments to read mail. Specifying the -f flag on the command line causes Mail to read messages from a file other than your system mailbox. For example, if you have a collection of messages in the file "letters" you can use Mail to read them with: % Mail -f letters You can use all the Mail commands described in this document to exam- ine, modify, or delete messages from your "letters" file, which will be rewritten when you leave Mail with the quit command described below. Since mail that you read is saved in the file mbox in your home directory by default, you can read mbox in your home directory by using simply % Mail -f Normally, messages that you examine using the type command are saved in the file "mbox" in your home directory if you leave Mail with the quit command described below. If you wish to retain a message in USD:7-8 Mail Reference Manual your system mailbox you can use the preserve command to tell Mail to leave it there. The preserve command accepts a list of message numbers, just like type and may be abbreviated to pre. Messages in your system mailbox that you do not examine are nor- mally retained in your system mailbox automatically. If you wish to have such a message saved in mbox without reading it, you may use the mbox command to have them so saved. For example, mbox 2 in our example would cause the second message (from sam) to be saved in mbox when the quit command is executed. Mbox is also the way to direct messages to your mbox file if you have set the "hold" option described above. Mbox can be abbreviated to mb. When you have perused all the messages of interest, you can leave Mail with the quit command, which saves the messages you have typed but not deleted in the file mbox in your login directory. Deleted messages are discarded irretrievably, and messages left untouched are preserved in your system mailbox so that you will see them the next time you type: % Mail The quit command can be abbreviated to simply q. If you wish for some reason to leave Mail quickly without alter- ing either your system mailbox or mbox, you can type the x command (short for exit), which will immediately return you to the Shell without changing anything. If, instead, you want to execute a Shell command without leaving Mail, you can type the command preceded by an exclamation point, just as in the text editor. Thus, for instance: !date will print the current date without leaving Mail. Finally, the help command is available to print out a brief sum- mary of the Mail commands, using only the single character command abbreviations. 3. Maintaining folders Mail includes a simple facility for maintaining groups of mes- sages together in folders. This section describes this facility. To use the folder facility, you must tell Mail where you wish to keep your folders. Each folder of messages will be a single file. For convenience, all of your folders are kept in a single directory of your choosing. To tell Mail where your folder directory is, put a Mail Reference Manual USD:7-9 line of the form set folder=letters in your .mailrc file. If, as in the example above, your folder direc- tory does not begin with a `/,' Mail will assume that your folder directory is to be found starting from your home directory. Thus, if your home directory is /home/person the above example told Mail to find your folder directory in /home/person/letters. Anywhere a file name is expected, you can use a folder name, pre- ceded automatically removed from your system mailbox. In order to make a copy of a message in a folder without causing that message to be removed from your system mailbox, use the copy com- mand, which is identical in all other respects to the save command. For example, copy +classwork copies the current message into the classwork folder and leaves a copy in your system mailbox. The folder command can be used to direct Mail to the contents of a different folder. For example, folder +classwork directs Mail. To start Mail reading one of your folders, you can use the -f option described in section 2. For example: % Mail -f +classwork will cause Mail to read your classwork folder without looking at your system mailbox. USD:7-10 Mail Reference Manual 4. More about sending mail 4.1. Tilde escapes While typing in a message to be sent to others, it is often use- ful to be able to invoke the text editor on the partial message, print the message, execute a shell command, or do some other auxiliary func- tion. Mail provides these capabilities through tilde escapes, which consist of a tilde (~) at the beginning of a line, followed by a sin- gle character which indicates the function to be performed. For exam- ple, to print the text of the message so far, use: ~p which will print a line of dashes, the recipients of your message, and the text of the message so far. Since Mail requires two consecutive <Control-C>'s to abort a letter, you can use a single <Control-C> to abort the output of ~p or any other ~ escape without killing your letter. If you are dissatisfied with the message as it stands, you can invoke the text editor on it using the escape ~e which causes the message to be copied into a temporary file and an instance of the editor to be spawned. After modifying the message to your satisfaction, write it out and quit the editor. Mail will respond by typing (continue) after which you may continue typing text which will be appended to your message, or type <Control-D> to end the message. A standard text editor is provided by Mail. You can override this default by setting the valued option "EDITOR" to something else. For example, you might prefer: set EDITOR=/bin/ed Many systems offer a screen editor as an alternative to the stan- dard text editor, such as the vi editor from UC Berkeley, or mg, an emacs-like editor. To use the screen, or visual editor, on your current message, you can use the escape, ~v ~v works like ~e, except that the screen editor is invoked instead. A default screen editor is defined by Mail. If it does not suit you, you can set the valued option "VISUAL" to the path name of a different editor. Mail Reference Manual USD:7-11 It is often useful to be able to include the contents of some file in your message; the escape ~r filename is provided for this purpose, and causes the named file to be appended to your current message. Mail complains if the file doesn't exist or can't be read. If the read is successful, the number of lines and characters appended to your message is printed, after which you may continue appending text. The filename may contain shell metacharac- ters like * and ? which are expanded according to the conventions of your shell. As a special case of ~r, the escape ~d reads in the file "dead.letter" in your home directory. This is often useful since Mail copies the text of your message there when you abort a message with <Control-C>. To save the current text of your message on a file you may use the ~w filename escape. Mail will print out the number of lines and characters written to the file, after which you may continue appending text to your mes- sage. Shell metacharacters may be used in the filename, as in ~r and are expanded with the conventions of your shell. If you are sending mail from within Mail's command mode you can read a message sent to you into the message you are constructing with the escape: ~m 4 which will read message 4 into the current message, shifted right by one tab stop. You can name any non-deleted message, or list of mes- sages. Messages can also be forwarded without shifting by a tab stop with ~f. This is the usual way to forward a message. If, in the process of composing a message, you decide to add additional people to the list of message recipients, you can do so with the escape ~t name1 name2 ... You may name as few or many additional recipients as you wish. Note that the users originally on the recipient list will still receive the message; you cannot remove someone from the recipient list with ~t. USD:7-12 Mail Reference Manual If you wish, you can associate a subject with your message by using the escape ~s Arbitrary string of text which replaces any previous subject with "Arbitrary string of text." The subject, if given, is sent near the top of the message prefixed with "Subject:" You can see what the message will look like by using ~p. For political reasons, one occasionally prefers to list certain people as recipients of carbon copies of a message rather than direct recipients. The escape ~c name1 name2 ... adds the named people to the "Cc:" list, similar to ~t. Again, you can execute ~p to see what the message will look like. The escape ~b name1 name2 ... adds the named people to the "Cc:" list, but does not make the names visible in the "Cc:" line ("blind" carbon copy). The recipients of the message together constitute the "To:" field, the subject the "Subject:" field, and the carbon copies the "Cc:" field. If you wish to edit these in ways impossible with the ~t, ~s, ~c and ~b escapes, you can use the escape ~h which prints "To:" followed by the current list of recipients and leaves the cursor (or printhead) at the end of the line. If you type in ordinary characters, they are appended to the end of the current list of recipients. You can also use your erase character to erase back into the list of recipients, or your kill character to erase them altogether. Thus, for example, if your erase and kill characters are the standard (on printing terminals) <Control-H> and <Control-U> keys, ~h To: root kurt^H^H^H^Hbill would change the initial recipients "root kurt" to "root bill." When you type a newline, Mail advances to the "Subject:" field, where the same rules apply. Another newline brings you to the "Cc:" field, which may be edited in the same fashion. Another newline brings you to the "Bcc:" ("blind" carbon copy) field, which follows the same rules as the "Cc:" field. Another newline leaves you appending text to the end of your message. You can use ~p to print the current text of the header fields and the body of the message. Mail Reference Manual USD:7-13 To effect a temporary escape to the shell, the escape ~!command is used, which executes command and returns you to mailing mode without altering the text of your message. If you wish, instead, to filter the body of your message through a shell command, then you can use ~|command which pipes your message through the command and uses the output as the new text of your message. If the command produces no output, Mail assumes that something is amiss and retains the old version of your message. A frequently-used filter is the command fmt, designed to format outgoing mail. To effect a temporary escape to Mail command mode instead, you can use the ~:Mail command escape. This is especially useful for retyping the message you are replying to, using, for example: ~:t It is also useful for setting options and modifying aliases. If you wish abort the current message, you can use the escape ~q This will terminate the current message and return you to the shell (or Mail if you were using the mail command). If the save option is set, the message will be copied to the file "dead.letter" in your home directory. If you wish (for some reason) to send a message that contains a line beginning with a tilde, you must double it. Thus, for example, ~~This line begins with a tilde. sends the line ~This line begins with a tilde. Finally, the escape ~? prints out a brief summary of the available tilde escapes. USD:7-14 Mail Reference Manual On some terminals (particularly ones with no lower case) tilde's are difficult to type. Mail allows you to change the escape character with the "escape" option. For example, I set set escape=] and use a right bracket instead of a tilde. If I ever need to send a line beginning with right bracket, I double it, just as for ~. Chang- ing the escape character removes the special meaning of ~. 4.2. Network access This section describes how to send mail to people on other machines. Recall that sending to a plain login name sends mail to that person on your machine. If your machine is directly (or sometimes, even, indirectly) connected to the Internet, you can send messages to people on the Internet using a name of the form name@host.domain where name is the login name of the person you're trying to reach, host is the name of the machine on the Internet, and domain is the higher-level scope within which the hostname is known, e.g. EDU (for educational institutions), COM (for commercial entities), GOV (for governmental agencies), ARPA for many other things, BITNET or CSNET for those networks. If your recipient logs in on a machine connected to yours by UUCP (the Bell Laboratories supplied network that communicates over tele- phone lines), sending mail can be a bit more complicated. You must know the list of machines through which your message must travel to arrive at his site. So, if his machine is directly connected to yours, you can send mail to him using the syntax: host!name where, again, host is the name of the machine and name is the login name. If your message must go through an intermediary machine first, you must use the syntax: intermediary!host!name and so on. It is actually a feature of UUCP that the map of all the systems in the network is not known anywhere (except where people decide to write it down for convenience). Talk to your system administrator about good ways to get places; the uuname command will tell you systems whose names are recognized, but not which ones are frequently called or well-connected. When you use the reply command to respond to a letter, there is a problem of figuring out the names of the users in the "To:" and "Cc:" lists relative to the current machine. If the original letter was sent to you by someone on the local machine, then this problem does not Mail Reference Manual USD:7-15 exist, but if the message came from a remote machine, the problem must be dealt with. Mail uses a heuristic to build the correct name for each user relative to the local machine. So, when you reply to remote mail, the names in the "To:" and "Cc:" lists may change somewhat. 4.3. Special recipients As described previously, you can send mail to either user names or alias names. It is also possible to send messages directly to files or to programs, using special conventions. If a recipient name has a `/' in it or begins with a `+', it is assumed to be the path name of a file into which to send the message. If the file already exists, the message is appended to the end of the file. If you want to name a file in your current directory (ie, one for which a `/' would not usually be needed) you can precede the name with `./' So, to send mail to the file "memo" in the current directory, you can give the command: % Mail ./memo If the name begins with a `+,' it is expanded into the full path name of the folder name in your folder directory. This ability to send mail to files can be used for a variety of purposes, such as maintaining a journal and keeping a record of mail sent to a certain group of users. The second example can be done automatically by including the full pathname of the record file in the alias command for the group. Using our previous alias example, you might give the command: alias project sam sally steve susan /usr/project/mail_record Then, all mail sent to "project" would be saved on the file "/usr/project/mail_record" as well as being sent to the members of the project. This file can be examined using Mail -f. It is sometimes useful to send mail directly to a program, for example one might write a project billboard program and want to access it using Mail. To send messages to the billboard program, one can send mail to the special name `|billboard' for example. Mail treats reci- pient names that begin with a `|' as a program to send the mail to. An alias can be set up to reference a `|' prefaced name if desired. Caveats: the shell treats `|' specially, so it must be quoted on the command line. Also, the `| program' must be presented as a single argument to mail. The safest course is to surround the entire name with double quotes. This also applies to usage in the alias command. For example, if we wanted to alias `rmsgs' to `rmsgs -s' we would need to say: alias rmsgs "| rmsgs -s" USD:7-16 Mail Reference Manual 5. Additional features This section describes some additional commands useful for read- ing your mail, setting options, and handling lists of messages. 5.1. Message lists Several Mail commands accept a list of messages as an argument. Along with type and delete, described in section 2, there is the from command, which prints the message headers associated with the message list passed to it. The from command is particularly useful in conjunc- tion with some of the message list features described below. A message list consists of a list of message numbers, ranges, and names, separated by spaces or tabs. Message numbers may be either decimal numbers, which directly specify messages, or one of the spe- cial characters "^", ".", or "$" to specify the first relevant, current, or last relevant message, respectively. Relevant here means, for most commands "not deleted" and "deleted" for the undelete com- mand. A range of messages consists of two message numbers (of the form described in the previous paragraph) separated by a dash. Thus, to print the first four messages, use type 1-4 and to print all the messages from the current message to the last message, use type .-$ A name is a user name. The user names given in the message list are collected together and each message selected by other means is checked to make sure it was sent by one of the named users. If the message consists entirely of user names, then every message sent by one of those users that is relevant (in the sense described earlier) is selected. Thus, to print every message sent to you by "root," do type root As a shorthand notation, you can specify simply "*" to get every relevant (same sense) message. Thus, type * prints all undeleted messages, delete * deletes all undeleted messages, and Mail Reference Manual USD:7-17 undelete * undeletes all deleted messages. You can search for the presence of a word in subject lines with /. For example, to print the headers of all messages that contain the word "PASCAL," do: from /pascal Note that subject searching ignores upper/lower case differences. 5.2. List of commands This section describes all the Mail commands available when receiving mail. - The - command goes to the previous message and prints it. The - command may be given a decimal number n as an argument, in which case the nth previous message is gone to and printed. ? Prints a brief summary of commands. ! Used to preface a command to be executed by the shell. Print Like print, but also print out ignored header fields. See also print, ignore, and retain. Print can be abbreviated to P. Reply or Respond Note the capital R in the name. Frame a reply to one or more mes- sages. The reply (or replies if you are using this on multiple messages) will be sent ONLY to the person who sent you the mes- sage (respectively, the set of people who sent the messages you are replying avail- able to you through the mail command. The Reply command is espe- cially useful for replying to messages that were sent to enormous distribution groups when you really just want to send a message to the originator. Use it often. Reply (and Respond) can be abbreviated to R. Type Identical to the Print command. Type can be abbreviated to T. alias Define a name to stand for a set of other names. This is used when you want to send messages to a certain group of people and want to avoid retyping their names. For example USD:7-18 Mail Reference Manual alias project john sue willie kathryn creates an alias project which expands to the four people John, Sue, Willie, and Kathryn. If no arguments are given, all currently-defined aliases are printed. If one argument is given, that alias is printed (if it exists). Alias can be abbreviated to a. alternates If you have accounts on several machines, you may find it con- venient to use /etc/mail/aliases on all the machines except one to direct your mail to a single account. The alternates command is used to inform Mail that each of these other addresses is really you. Alternates takes a list of user names and remembers that they are all actually you. When you reply to messages that were sent to one of these alternate names, Mail will not bother to send a copy of the message to this other address (which would simply be directed back to you by the alias mechanism). If alter- nates is given no argument, it lists the current set of alternate names. Alternates is usually used in the .mailrc file. Alternates can be abbreviated to alt. chdir The chdir command allows you to change your current directory. Chdir takes a single argument, which is taken to be the pathname of the directory to change to. If no argument is given, chdir changes to your home directory. Chdir can be abbreviated to c. copy The copy command does the same thing that save does, except that it does not mark the messages it is used on for deletion when you quit. Copy can be abbreviated to co. delete Deletes a list of messages. Deleted messages can be reclaimed with the undelete command. Delete can be abbreviated to d. dp or dt These commands delete the current message and print the next mes- sage. They are useful for quickly reading and disposing of mail. If there is no next message, Mail says ``No more messages.'' edit To edit individual messages using the text editor, the edit com- mand is provided. The edit command takes a list of messages as described under the type command and processes each by writing it into the file Messagex where x is the message number being edited and executing the text editor on it. When you have edited the message to your satisfaction, write the message out and quit, upon which Mail will read the message back and remove the file. Edit can be abbreviated to e. Mail Reference Manual USD:7-19 else Marks the end of the then-part of an if statement and the begin- ning of the part to take effect if the condition of the if state- ment is false. endif Marks the end of an if statement. exit or xit Leave Mail without updating the system mailbox or the file you were reading. Thus, if you accidentally delete several messages, you can use exit to avoid scrambling your mailbox. Exit can be abbreviated to ex or x. file The same as folder. File can be abbreviated to fi. folders List the names of the folders in your folder directory. folder The folder command switches to a new mail file or folder. With no arguments, it tells you which file you are currently reading. If you give it an argument, it will write out changes (such as deletions) you have made in the current file and read the new file. Some special conventions are recognized for the name: Name Meaning ___________________________________________ # Previous file read % Your system mailbox %name Name's system mailbox & Your ~/mbox file +folder A file in your folder directory Folder can be abbreviated to fo. from The from command takes a list of messages and prints out the header lines for each one; hence from joe is the easy way to display all the message headers from "joe." From can be abbreviated to f. headers When you start up Mail to read your mail, it lists the message headers that you have. These headers tell you who each message is from, when they were received, how many lines and characters each USD:7-20 Mail Reference Manual message is, and the "Subject:" header field of each message, if present. In addition, Mail tags the message header of each mes- sage that has been the object of the preserve command with a "P." Messages that have been saved or written are flagged with a "*." Finally, deleted messages are not printed at all. If you wish to reprint the current list of message headers, you can do so with the headers command. The headers command (and thus the initial header listing) only lists the first so many message headers. The number of headers listed depends on the speed of your terminal. Mail maintains a notion of the current "window" into your mes- sages for the purposes of printing headers. Use the z command to move forward a window, and z- to move back a window. You can move Mail's notion of the current window directly to a particular mes- sage by using, for example, headers 40 to move Mail's attention to the messages around message 40. Headers can be abbreviated to h. help Print a brief and usually out of date help message about the com- mands in Mail. The man page for mail is usually more up-to-date than either the help message or this manual. It is also a synonym for ?. hold Arrange to hold a list of messages in the system mailbox, instead of moving them to the file mbox in your home directory. If you set the binary option hold, this will happen by default. It does not override the delete command. Hold can be abbreviated to ho. if Commands in your ".mailrc" file can be executed conditionally depending on whether you are sending or receiving mail with the if command. For example, you can do: if receive commands... endif An else form is also available: if send commands... else commands... endif Note that the only allowed conditions are receive and send. ignore N.B.: Ignore has been superseded by retain. Add the list of header fields named to the ignore list. Header Mail Reference Manual USD:7-21 fields in the ignore list are not printed on your terminal when you print a message. This allows you to suppress printing of certain machine-generated header fields, such as Via which are not usually of interest. The Type and Print commands can be used to print a message in its entirety, including List the valid Mail commands. List can be abbreviated to l. mail Send mail to one or more people. If you have the ask option set, Mail will prompt you for a subject to your message. Then you can type in your message, using tilde escapes as described in section 4 to edit, print, or modify your message. To signal your satis- faction with the message and send it, type <Control-D> at the beginning of a line, or a . alone on a line if you set the option dot. To abort the message, type two interrupt characters (Control-C by default) in a row or use the ~q escape. The mail command can be abbreviated to m. mbox Indicate that a list of messages be sent to mbox in your home directory when you quit. This is the default action for messages if you do not have the hold option set. more Takes a message list and invokes the pager on that list. next or + The next command goes to the next message and types it. If given a message list, next goes to the first such message and types it. Thus, next root goes to the next message sent by "root" and types it. The next command can be abbreviated to simply a newline, which means that one can go to and type a message by simply giving its message number or one of the magic characters "^" "." or "$". Thus, . prints the current message and USD:7-22 Mail Reference Manual 4 prints message 4, as described previously. Next can be abbrevi- ated to n. preserve Same as hold. Cause a list of messages to be held in your system mailbox when you quit. Preserve can be abbreviated to pre. print Print the specified messages. If the crt variable is set, mes- sages longer than the number of lines it indicates are paged through the command specified by the PAGER variable. The print command can be abbreviated to p. quit Terminates the session, saving all undeleted, unsaved and unwrit- ten messages in the user's mbox file in their login directory (messages marked as having been read), preserving all messages marked with hold or preserve or never referenced in their system mailbox. Any messages that were deleted, saved, written, or saved to mbox are removed from command. Quit can be abbrevi- ated to q. reply or respond Frame a reply to a single message. The reply will be sent to the person who sent you the message (to which you are replying), plus all the people who received the original message, except available to you through the mail command. The reply (and respond) command can be abbrevi- ated to r. It is often useful to be able to save messages on related topics in a file. The save command gives you the ability to do this. Mail Reference Manual USD:7-23 The save command takes as an argument a list of message numbers, followed by the name of the file in which to save the messages. The messages are appended to the named file, thus allowing one to keep several messages in the file, stored in the order they were put there. The filename in quotes, followed by the line count and character count is echoed on the user's terminal. An example of the save command relative to our running example is: s 1 2 tuitionmail Saved messages are not automatically saved in mbox at quit time, nor are they selected by the next command described above, unless explicitly specified. Save can be abbreviated to s. Set an option or give an option a value. Used to customize Mail. Section 5.3 contains a list of the options. Options can be binary, in which case they are on or off, or valued. To set a binary option option on, do set option To give the valued option option the value value, do set option=value There must be no space before or after the ``='' sign. If no arguments are given, all variable values are printed. Several options can be specified in a single set command. Set can be abbreviated to se. shell The shell command allows you to escape to the shell. Shell invokes an interactive shell and allows you to type commands to it. When you leave the shell, you will return to Mail. The shell used is a default assumed by Mail; you can override this default by setting the valued option "SHELL," eg: set SHELL=/bin/csh Shell can be abbreviated to sh. USD:7-24 Mail Reference Manual size Takes a message list and prints out the size in characters of each message. source The source command reads mail commands from a file. It is useful when you are trying to fix your ".mailrc" file and you need to re-read it. Source can be abbreviated to so. top The top command takes a message list and prints the first five lines of each addressed message. If you wish, you can change the number of lines that top prints out by setting the valued option "toplines." On a CRT terminal, set toplines=10 might be preferred. Top can be abbreviated to to. type Same as print. Takes a message list and types out each message on the terminal. The type command can be abbreviated to t. unalias Takes a list of names defined by alias commands and discards the remembered groups of users. The group names no longer have any significance. undelete Takes a message list and marks each message as not being deleted. Undelete can be abbreviated to u. unread Takes a message list and marks each message as not having been read. Unread can be abbreviated to U. unset Takes a list of option names and discards their remembered values; the inverse of set. visual It is often useful to be able to invoke one of two editors, based on the type of terminal one is using. To invoke a display oriented editor, you can use the visual command. The operation of the visual command is otherwise identical to that of the edit command. Both the edit and visual commands assume some default text edi- tors. The default for "EDITOR" is /usr/bin/ex. The default for "VISUAL" is /usr/bin/vi. These default editors can be overridden by the valued options "EDITOR" and "VISUAL" for the standard and screen editors. You might want to do: Mail Reference Manual USD:7-25 set EDITOR=/bin/ed VISUAL=/usr/bin/mg Visual can be abbreviated to v. write The save command always writes the entire message, including the headers, into the file. If you want to write just the message itself, you can use the write command. The write command has the same syntax as the save command, and can be abbreviated to simply w. Thus, we could write the second message by doing: w 2 file.c As suggested by this example, the write command is useful for such tasks as sending and receiving source program text over the message system. The filename in quotes, followed by additional file information, is echoed on the user's terminal. z Mail presents message headers in windowfuls as described under the headers command. You can move Mail's attention forward to the next window by giving the z+ command. Analogously, you can move to the previous window with: z- 5.3. Custom options Throughout this manual, we have seen examples of binary and valued options. This section describes each of the options in alpha- betical order, including some that you have not seen yet. To avoid confusion, please note that the options are either all lower case letters or all upper case letters. When I start a sentence such as: "Ask" causes Mail to prompt you for a subject header, I am only capi- talizing "ask" as a courtesy to English. EDITOR The valued option "EDITOR" defines the pathname of the text edi- tor to be used in the edit command and ~e escape. If not defined, /usr/bin/ex is used. LISTER Pathname of the directory lister to use in the folders command. Default is /bin/ls. MBOX The name of the mbox file. It can be the name of a folder. The default is ``mbox'' in the user's home directory. USD:7-26 Mail Reference Manual PAGER Pathname of the program to use for paginating output when it exceeds crt lines. A default paginator is used if this option is not defined. SHELL The valued option "SHELL" gives the path name of your shell. This shell is used for the ! command and ~! escape. In addition, this shell expands file names with shell metacharacters like * and ? in them. VISUAL The valued option "VISUAL" defines the pathname of the screen editor to be used in the visual command and ~v escape. If not defined, /usr/bin/vi is used. append The "append" option is binary and causes messages saved in mbox to be appended to the end rather than prepended. Normally, Mail will put messages in mbox in the same order that the system puts messages in your system mailbox. By setting "append," you are requesting that mbox be appended to regardless. It is in any event quicker to append. ask "Ask" is a binary option which causes Mail to prompt you for the subject of each message you send. If you respond with simply a newline, no subject field will be sent. askbcc "Askbcc" is a binary option which causes you to be prompted for additional blind carbon copy recipients at the end of each mes- sage. Responding with a newline shows your satisfaction with the current list. askcc "Askcc" is a binary option which causes you to be prompted for additional carbon copy recipients at the end of each message. Responding with a newline shows your satisfaction with the current list. autoinc Causes new mail to be automatically incorporated when it arrives. Setting this is similar to issuing the inc command at each prompt, except that the current message is not reset when new mail arrives. autoprint "Autoprint" is a binary option which causes the delete command to behave like dp -- thus, after deleting a message, the next one will be typed automatically. This is useful when quickly scan- ning and deleting messages in your mailbox. Mail Reference Manual USD:7-27 crt The valued option is used as a threshold to determine how long a message must be before PAGER is used to read it. debug The binary option "debug" causes debugging information to be displayed. Use of this option is the same as using the -d command line flag. dot "Dot" is a binary option which, if set, causes Mail to interpret a period alone on a line as the terminator of the message you are sending. escape To allow you to change the escape character used when sending mail, you can set the valued option "escape." Only the first character of the "escape" option is used, and it must be doubled if it is to appear as the first character of a line of your mes- sage. If you change your escape character, then ~ loses all its special meaning, and need no longer be doubled at the beginning of a line. folder The name of the directory to use for storing folders of messages. If this name begins with a `/' Mail considers it to be an abso- lute pathname; otherwise, the folder directory is found relative to your home directory. hold The binary option "hold" causes messages that have been read but not manually dealt with to be held in the system mailbox. This prevents such messages from being automatically swept into your mbox file. ignore The binary option "ignore" causes <Control-C> characters from your terminal to be ignored and echoed as @'s while you are send- ing mail. <Control-C> characters retain their original meaning in Mail command mode. Setting the "ignore" option is equivalent to supplying the -i flag on the command line as described in sec- tion 6. ignoreeof An option related to "dot" is "ignoreeof", which makes Mail refuse to accept a <Control-D> as the end of a message. "Ignoreeof" also applies to Mail command mode. indentprefix String used by the ~m tilde escape for indenting messages, in place of the normal tab character (`^I'). Be sure to quote the value if it contains spaces or tabs. USD:7-28 Mail Reference Manual keep The "keep" option causes Mail to truncate your system mailbox instead of deleting it when it is empty. This is useful if you elect to protect your mailbox, which you would do with the shell command: chmod 600 /var/mail/yourname where yourname is your login name. If you do not do this, anyone can probably read your mail, although people usually don't. keepsave When you save a message, Mail usually discards it when you quit. To retain all saved messages, set the "keepsave" option. metoo When sending mail to an alias, Mail makes sure that if you are included in the alias, that mail will not be sent to you. This is useful if a single alias is being used by all members of the group. If however, you wish to receive a copy of all the mes- sages you send to the alias, you can set the binary option "metoo." noheader The binary option "noheader" suppresses the printing of the ver- sion and headers when Mail is first invoked. Setting this option is the same as using -N on the command line. nosave Normally, when you abort a message with two <Control-C>'s, Mail copies the partial letter to the file "dead.letter" in your home directory. Setting the binary option "nosave" prevents this. Replyall Reverses the sense of reply and Reply commands. quiet The binary option "quiet" suppresses the printing of the version when Mail is first invoked, as well as printing the for example "Message 4:" from the type command. record If you love to keep records, then the valued option "record" can be set to the name of a file to save your outgoing mail. Each new message you send is appended to the end of the file. screen When Mail initially prints the message headers, it determines the number to print by looking at the speed of your terminal. The faster your terminal, the more it prints. The valued option "screen" overrides this calculation and specifies how many mes- sage headers you want printed. This number is also used for scrolling with the z command. Mail Reference Manual USD:7-29 messages containing the substring `y' in the ``To'', ``Cc'', or ``Bcc'' header fields. The check for ``to'' is case sensitive, so that ``/To:y'' can be used to limit the search for `y' to just the ``To:'' field. sendmail To use an alternate mail delivery system, set the "sendmail" option to the full pathname of the program to use. Note: this is not for everyone! Most people should use the default delivery system. toplines The valued option "toplines" defines the number of lines that the "top" command will print out instead of the default five lines. verbose The binary option "verbose" causes Mail to invoke sendmail with the -v flag, which causes it to go into verbose mode and announce expansion of aliases, etc. Setting the "verbose" option is equivalent to invoking Mail with the -v flag as described in sec- tion 6. 6. Command line options This section describes command line options for Mail and what they are used for. -b list Send blind carbon copies to list. -c list Send carbon copies to list of users. List should be a comma separated list of names. -f file Show the messages in file instead of your system mailbox. If file is omitted, Mail reads mbox in your home directory. -I Forces mail to run in interactive mode, even when input is not a terminal. In particular, the special ~ command character, used when sending mail, is only available interactively. -i Ignore tty interrupt signals. This is particularly useful when using mail on noisy phone lines. -N Suppress the initial printing of headers. USD:7-30 Mail Reference Manual -n Inhibit reading of /etc/mail.rc upon startup. -s string Used for sending mail. String is used as the subject of the mes- sage being composed. If string contains blanks, you must sur- round it with quote marks. -u name Read names's mail instead of your own. Unwitting others often neglect to protect their mailboxes, but discretion is advised. Essentially, -u user is a shorthand way of doing -f /var/mail/user. -v Use the -v flag when invoking sendmail. This feature may also be enabled by setting the the option "verbose". The following command line flags are also recognized, but are intended for use by programs invoking Mail and not for people. -d Turn on debugging information. Not of general interest. -T file Arrange to print on file the contents of the article-id fields of all messages that were either read or deleted. -T is for the readnews program and should NOT be used for reading your mail. 7. Format of messages This section describes the format of messages. Messages begin with a from line, which consists of the word "From" followed by a user name, followed by anything, followed by a date in the format returned by the ctime library routine described in section 3 of the Unix Programmer's Manual. A possible ctime format date is: Tue Dec 1 10:58:23 1981 The ctime date may be optionally followed by a single space and a time zone indication, which should be three capital letters, such as PDT. Following the from line are zero or more header field lines. Each header field line is of the form: name: information Name can be anything, but only certain header fields are recognized as having any meaning. The recognized header fields are: article-id, bcc, cc, from, reply-to, sender, subject, and. Mail Reference Manual USD:7-31 If any headers are present, they must be followed by a blank line. The part that follows is called the charac- ters. Then, one can send a 16-bit binary number as three characters. These characters should be packed into lines, preferably lines about 70 characters long as long lines are transmitted more efficiently. The message delivery system always adds a blank line to the end of each message. This blank line must not be deleted. The UUCP message delivery system sometimes adds a blank line to the end of a message each time it is forwarded through a machine. It should be noted that some network transport protocols enforce limits to the lengths of messages. 8. Glossary This section contains the definitions of a few phrases peculiar to Mail. alias An alternative name for a person or list of people. flag An option, given on the command line of Mail, prefaced with a -. For example, -f is a flag. header field At the beginning of a message, a line which contains information that is part of the structure of the message. Popular header fields include to, cc, and subject. mail A collection of messages. Often used in the phrase, "Have you read your mail?" mailbox The place where your mail is stored, typically in the directory /var/mail. message A single letter from someone, initially stored in your mailbox. message list A string used in Mail command mode to describe a sequence of mes- sages. USD:7-32 Mail Reference Manual option A piece of special purpose information used to tailor Mail to your taste. Options are specified with the set command. 9. Summary of commands, options, and escapes This section gives a quick summary of the Mail commands, binary and valued options, and tilde escapes. The following table describes the commands: Command Description _________________________________________________________________________ + Same as next - Back up to previous message ? Print brief summary of Mail commands ! Single command escape to shell Print Type message with ignored fields Reply Reply to author of message only Respond Same as Reply Type Type message with ignored fields alias Define an alias as a set of user names alternates List other names you are known by chdir Change working directory, home by default copy Copy a message to a file or folder delete Delete a list of messages dp Same as dt dt Delete current message, type next message edit Edit a list of messages else Start of else part of conditional; see if endif End of conditional statement; see if exit Leave mail without changing anything file Interrogate/change current mail file folder Same as file folders List the folders in your folder directory from List headers of a list of messages headers List current window of messages help Same as ? hold Same as preserve if Conditional execution of Mail commands ignore Set/examine list of ignored header fields inc Incorporate new messages list List valid Mail commands mail Send mail to specified names mbox Arrange to save a list of messages in mbox more Invoke pager on message list next Go to next message and type it preserve Arrange to leave list of messages in system mailbox print Print messages quit Leave Mail; update system mailbox, mbox as appropriate reply Compose a reply to a message Mail Reference Manual USD:7-33 respond Same as reply retain Supersedes ignore save Append messages, headers included, on a file saveignore List of headers to ignore when using the save command saveretain List of headers to retain when using the save command set Set binary or valued options shell Invoke an interactive shell size Prints out size of message list source Read mail commands from a file top Print first so many (5 by default) lines of list of messages type Same as print unalias Remove alias undelete Undelete list of messages unread Marks list of messages as not been read unset Undo the operation of a set visual Invoke visual editor on a list of messages write Append messages to a file, don't include headers xit Same as exit z Scroll to next/previous screenful of headers USD:7-34 Mail Reference Manual The following table describes the options. Each option is shown as being either a binary or valued option. Option Type Description ___________________________________________________________________________ EDITOR valued Pathname of editor for ~e and edit LISTER valued Pathname of directory lister MBOX valued Pathname of the mbox file PAGER valued Pathname of pager for Print, print, Type and type SHELL valued Pathname of shell for shell, ~! and ! VISUAL valued Pathname of screen editor for ~v, visual append binary Always append messages to end of mbox ask binary Prompt user for Subject: field when sending askbcc binary Prompt user for additional BCc's at end of message askcc binary Prompt user for additional Cc's at end of message autoinc binary Automatically incorporate new mail autoprint binary Print next message after delete crt valued Minimum number of lines before using PAGER debug binary Print out debugging information dot binary Accept . alone on line to terminate message input escape valued Escape character to be used instead of ~ folder valued Directory to store folders in hold binary Hold messages in system mailbox by default ignore binary Ignore <Control-C> while sending mail ignoreeof binary Don't terminate letters/command input with ^D indentprefix valued String used for indenting messages keep binary Don't unlink system mailbox when empty keepsave binary Don't delete saved messages by default metoo binary Include sending user in aliases noheader binary Suppress initial printing of version and headers nosave binary Don't save partial letter in dead.letter Replyall binary Reverses the sense of the [Rr]eply commands quiet binary Suppress printing of Mail version/message numbers record valued File to save all outgoing mail in screen valued Size of window of message headers for z, etc. searchheaders binary Search string for message headers sendmail valued Choose alternate mail delivery system toplines valued Number of lines to print in top verbose binary Invoke sendmail with the -v flag Mail Reference Manual USD:7-35 The following table summarizes the tilde escapes available while sending mail. Escape Arguments Description _______________________________________________________________ ~! command Execute shell command ~b name ... Add names to "blind" Cc: list ~c name ... Add names to Cc: field ~d Read dead.letter into message ~e Invoke text editor on partial message ~f messages Read named messages ~F messages Same as ~f, but includes all headers ~h Edit the header fields ~m messages Read named messages, right shift by tab ~M messages Same as ~m, but includes all headers ~p Print message entered so far ~q Abort entry of letter; like <Control-C> ~r filename Read file into message ~s string Set Subject: field to string ~t name ... Add names to To: field ~v Invoke screen editor on message ~w filename Write message on file ~| command Pipe message through command ~: Mail command Execute a Mail command ~~ string Quote a ~ in front of string The following table shows the command line flags that Mail accepts: Flag Description ____________________________________________________________ -b list Send blind carbon copies to list. -c list Send carbon copies to list -d Turn on debugging -f [name] Show messages in name or ~/mbox -I Force Mail to run in interactive mode -i Ignore tty interrupt signals -N Suppress the initial printing of headers -n Inhibit reading of /etc/mail.rc -s subject Use subject as subject in outgoing mail -T file Article-id's of read/deleted messages to file -u user Read user's mail instead of your own -v Invoke sendmail with the -v flag Notes: -d and -T are not for human. | http://www.mirbsd.org/htman/i386/manUSD/07.mail.htm | CC-MAIN-2016-50 | refinedweb | 11,289 | 69.72 |
1441/how-can-find-out-the-index-element-from-row-and-column-python
Hello @Khanhh ,
Use panda to find the value of ...READ MORE
You probably want to use np.ravel_multi_index:
import numpy as ...READ MORE
Hi, good question. If you are considering ...READ MORE
You can use the exponentiation operator or ...READ MORE
When I am using os.listdir I am ...READ MORE
Delete the List and its element:
We have ...READ MORE
I am trying to count all the ...READ MORE
down voteacceptedFor windows:
you could use winsound.SND_ASYNC to play them ...READ MORE
There are several options. Here is a ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/1441/how-can-find-out-the-index-element-from-row-and-column-python?show=1442 | CC-MAIN-2020-50 | refinedweb | 112 | 73.34 |
When people think of C# 3.0 and Linq, they commonly think of queries and databases. The phenomenal work of the Linq to SQL guys provides ample reason to think of it this way; nevertheless, C# 3.0 and Linq are really much much more. I have discussed a number of things that can be done with lambdas, expression trees, and queries and will continue to do so but I want to pause and discuss a little gem that is often overlooked in C# 3.0. This new language feature has fundamentally changed both the way that I work in C# and my view of the world. I've been using it a lot without ever drawing attention explicitly to it. At least one reader noticed it and the possibilities it opens up and at least a couple of readers want an expanded version of it without even knowing it.
So what is the feature? It's extension methods.
At first glance they don't look very special. I mean really, all they are is one extra token in the definition of a static method inside of a static class.
static class Foo { public static Bar Baz(this Qux Quux) { ...
static class Foo {
public static Bar Baz(this Qux Quux) { ...
But as is usually the case, it's the semantics that are more interesting than the particular syntax.
The first argument of an extension method (the argument marked with this) is the implicit receiver of the method. The extension method appears to be an instance method on the receiver but it is not. Therefore, it cannot access private or protected members of the receiver.
For example, let's say that I detested the fact that the framework doesn't have a ToInt method defined on string. Now, I can just provide my own:
public static int ToInt(this string s){ return int.Parse(s);}
public static int ToInt(this string s){ return int.Parse(s);}
And I can then call it as:
"5".ToInt()
"5".ToInt()
The compiler transforms the call into:
ToInt("5")
ToInt("5")
Notice how it turns it outside out. So if I have three extension methods A, B, and C
x.A().B().C()
x.A().B().C()
The calls get turned into
C(B(A(x)))
C(B(A(x)))
While all of this explains how extension methods work, it doesn't explain why they are so cool.
A few months back, I was reading various online content related to C# 3.0. I wanted to get a feel for what customers were feeling and incorporate it as much as possible into the product. In the process, I came across an interesting post, Why learning Haskell/Python makes you a worse programmer. The author argues that learning a language like Python or Haskell can make things more difficult for you if your day job is programming in a language like C#.
I sympathize with what the author has to say and have had to spend enough time programming in languages that I didn't like that I think that I understand the pain.
That said, I hope that the author (and others who feel like him) will be pleasantly surprised by C# 3.0. For example, let's look at his example of painful programming:
"I have a list of Foo objects, each having a Description() method that returns a string. I need to concatenate all the non-empty descriptions, inserting newlines between them."
"I have a list of Foo objects, each having a Description() method that returns a string. I need to concatenate all the non-empty descriptions, inserting newlines between them."
In Python, he says that he would write:
"\n".join(foo.description() for foo in mylist if foo.description() != "")
"\n".join(foo.description() for foo in mylist if foo.description() != "")
In Haskell, his solution looks like:
concat $ List.intersperse "\n" $ filter (/= "") $ map description mylist
concat $ List.intersperse "\n" $ filter (/= "") $ map description mylist
These both look like reasonable code and I rather like them. Fortunately, you can express them in C# 3.0. Here is the code that looks like the Python solution.
"\n".Join(from x in mylist where x.Description != "" select x.Description)
"\n".Join(from x in mylist where x.Description != "" select x.Description)
And here is the code that is closer to his Haskell solution:
mylist.Where(x => x.Description != "").Select(x => x.Description).Intersperse("\n").Concat();
mylist.Where(x => x.Description != "").Select(x => x.Description).Intersperse("\n").Concat();
At this point, some will protest that there is no Join instance method on string and there is no Intersperse defined on IEnumerable<T>. And for that matter, how can you define a method on an interface in the first place? Of course, extension methods are the answer to all of these questions.; }}
It is as if these methods were defined on the receiver to begin with. At this point the realization sets in: a whole new mode of development has been opened up.
Typically for a given problem, a programmer is accustomed to building up a solution until it finally meets the requirements. Now, it is possible to extend the world to meet the solution instead of solely just building up until we get to it. That library doesn't provide what you need, just extend the library to meet your needs.
I find myself switching between the two modes frequently: building up some functionality here and extending some there. In fact, these days I find that I often start with extension methods and then when certain patterns begin to emerge then I factor those into classes.
It also makes some interesting styles of programming easier. I am sure it has some name, but since I don't know what it is I'll call it data interface programming. First we declare an immutable interface that includes only data elements.
interface ICustomer{ string Name { get; } int ID { get; }}
interface ICustomer{ string Name { get; } int ID { get; }}
Then, we declare an inaccessible implementation of ICustomer that allows customers to be created through a factory that only exposes the immutable version.
class Factory{ class Customer : ICustomer { public string Name { get; set; } public int ID { get; set; } } public static ICustomer CreateCustomer(int id, string name) { return new Customer { ID = id, Name = name }; }}
class Factory{ class Customer : ICustomer { public string Name { get; set; } public int ID { get; set; } }
public static ICustomer CreateCustomer(int id, string name) { return new Customer { ID = id, Name = name }; }}
Then we can declare behavior through extension methods.
public static string GetAlias(this ICustomer customer){ return customer.Name + customer.ID.ToString();}
public static string GetAlias(this ICustomer customer){ return customer.Name + customer.ID.ToString();}
And finally, we can use the behavior.
var customer = Factory.CreateCustomer(4, "wes");Console.WriteLine(customer.GetAlias());
var customer = Factory.CreateCustomer(4, "wes");Console.WriteLine(customer.GetAlias());
All of this may seem like a round about way to declare an immutable abstract base class with various derived classes. But there is a fundamental difference, the interface and behavior can change depending upon which extension methods are in scope. So one part of the program or system can treat them one way and another can have an entirely different view of things.
Of course, what I really want to be able to do (and we don't do it yet) is something like:
var customer = new ICustomer { ID = 4, Name = "wes" };Console.WriteLine(customer.GetAlias());
var customer = new ICustomer { ID = 4, Name = "wes" };Console.WriteLine(customer.GetAlias());
And then I skip the whole Factory thing all together. The customer is immutable and the definition of the type is short and sweet. All of the work of done by the compiler which incidentally doesn't need the factory because it can name mangle the implementation class and provide customized constructors automatically. But I digress, hopefully we can do something like that in the future.
Of course extension methods don't make the traditional techniques inapplicable, they are still as useful as ever. As with all design considerations, there are trade-offs involved. Care must be taken to manage extension methods so that chaos doesn't ensue, but when they are used appropriately they are fantastically useful.
As I have been writing C# code, I have accumulated a library of useful extension methods and I encourage you to do the same thing so that the ideas that you think roll naturally off of your fingertips.
If you would like to receive an email when updates are made to this post, please register here
RSS
I think I'm missing the point. How does this approach improve on traditional approaches to polymorphism. An example would be helpful.
Awesome! You know what would be cool....a website where you can share extension methods.
I bet you'd see a ton of stuff for math, image manipulations, string & regex.
Extension methods are really quite nice. It's interesting that you used Join() as an example, since that's the first extension method I wrote (and probably the one that's proved most broadly useful thus far). Here's a version that skips the array translation:
static string Join<T>(this IEnumerable<T> value,
string separator,
Converter<T, string> converter) {
StringBuilder joined = new StringBuilder(128);
IEnumerator<T> enumerator = value.GetEnumerator();
if (enumerator.MoveNext()) {
for (;;) {
joined.Append(converter(enumerator.Current));
if (enumerator.MoveNext()) {
joined.Append(separator);
} else {
break;
}
}
}
return joined.ToString();
}
This particular overload also contains a third parameter (second in extension method syntax) which I've found to really expand the utility of the method. For example, in Web projects you often have to encode data, e.g.:
Response.Write(listOfStrings.Join(", ", HttpUtility.HtmlEncode));
It's also an efficient way of joining things of the wrong type:
Response.Write(new int[] { 1, 2, 3 }.Join(", ", Convert.ToString));
Or the _really_ wrong type (thanks lambda syntax!):
Response.Write(GetUserObjects().Join(", ", u => HttpUtility.HtmlEncode(user.FullName));
Here are a few other Ruby inspired methods:
Jafar:
Thank you for asking for clarification. I don't think that extension methods necessarily improve upon traditional approaches so much as they provide an alternative with different tradeoffs (another tool in the toolbox).
So in that light, I can name a number of ways that they are very useful.
1. They are incredibly light weight to define compared with more traditional techniques
2. They allow classes to be extended after the fact even if the implementation is not accessible to the developer
3. They allow different parts of the same program to have different view of the behavior of various types
4. They allow behavior to be defined over interfaces
There are other reasons too. If I have time later on then I'll post some explicit examples.
Sushant:
Long time no see. You can post your code on
Derek:
I love your definition of Join and your post on extension methods. Thank you for sharing it.
Surely this also means that you can write more static side-effect free code but treat it like an instance method.
Having originally started programming in imperative/ OO languages and moving rapidly towards functional programming I have recently come unstuck about whether to make methods instance methods or static.
This answers the problem in one step - make them static - ergo side-effect free and stateless - and treat them like an instance method - more easily readable dot notation.
Me too. The whole static vs. instance thing is causing me no end of pain as I try and fold (sic) the functional stuff into my OO background.
I'd really appreciate any guidance from folks who've been using C# 3 long enough to have written some medium to large code bases in it already.
Mark:
I agree. Well put.
Tom:
That is a great question to ask and I hope that people can comment on that. One piece of guidance that I can give is that I am sure that you are a wonderful software developer. So I encourage you to try it out on less critical apps first and get your groove so to speak. Figure out what works and what doesn't for you. I know that as I use C# 3.0 more and more, my style is constantly evolving.
Besides what I have already said, one thing that is always a little strange (even with strategy pattern of whatever) is the expression of non-trivial algorithms in OO code. I am finding that many times they fit more naturally as extension methods. But I absolutely love them as a toolbox and as a quick prototyping way of working.
I've been a Boo fanatic for quite a while, its a python-inspired CLR language, and has had extension methods for a while. I love them so much!!! I have a stdlib of string and IEnumerable extensions (see: map) that I use everywhere.
Of course, fsharp came out a little while latter, and they have almost all of the IEnumerables extensions I'd been so dilligently re-creating.
Extension methods also allow for some other unusual programming styles. I read a paper called "First-class relationships in an object-oriented language", in whch the authors propose a new language that supports their thesis. Using extension methods, no such language is needed.
See for more details.
The one thing about extension methods that worries me is the packaging.
They are static members of a class (possibly a static class), that exists in a namespace. Importing that namespace then imports all the extension methods in all the classes in the namespace. OK, so whats the role of the classes in which the extension methods are defined?
Now, namepaces are pretty fuzzy things to start off with. They arent restricted to belonging to any particulat assembly, file or folder. Any class in any assembly can potentially belong to any namespace.
The end result is that you have a very fuzzy way of packing up bundles of extension methods for use - one in which you never really know where you extension methods are coming from - maybe today they come from your known assemblies, but tomorrow they are coming from somewhere unexpected.
Id rather have the classes containing extension methods have to be explicitly named as imported for extension - you cant have duplicate classes in a namespace, so at least you get some warning if there are collisions.
-I fully agree about your use of extension methods, but i guess that as they give more flexibility to the developer, decipline is needed. One can go after extension methods, to find himself fast in a mess of packages and classes. Having said that, i pretty like your implementation, i think extension methods can be most handy for expressions builders, and fluent interfaces. i mean as most readers said, a side-effect free helper fuctions that can be used in fluent-interface style.
- i really moaned a lot about anonymous classes, and i would really love to see them in c# 3.0 (i dont want to wait for another version).
- I posted on my blog about extension methods (as a reflection of Martin Fowler's Fluent Interfaces post)
" OBSEV:: Fluent Interface and c# 3.0 Extension Methods : The flexibility of dynamic typing with the powerfull AutoCompletion "
i guess it worth to be read
- i like statically typed laguages, extension methods came to offer me some flexibility i really lacked before.
by the way, it might be a good idea, to have something like a namespace interface, where we can switch implementations for extension methods, kind of AOP :p . anyway thats an idea to suggest to the research guys :)
Damien:
That is an awesome post. I hadn't thought of pulling cyclical dependencies out an using extension methods to manage them. Very nice indeed.
I agree about the packaging thing. Use them with caution. Possibly include each set of related extension methods in their own namespace. We are thinking about things (post orcas) that would extend and improve the situation.
Sadek:
Interesting stuff.
Post-Orcas it will no doubt be too late to fix the extension-method packaging problem. Its a wart, and better to fix it now before its set in stone. Too many warts accumulating in c# as it is.
Thanks for the link Wes.
BTW I've tried walking a tree with extension methods and LINQ and wondered if you could see a better way... wouldn't surprise me!
Hi Wes,
I've been playing around a lot with both C# 3 and C# 2, trying to push the whole functional thing as far as makes sense in each respective iteration of the langauge (I mean no ones suggesting throwing out the OO baby with the bath water right). In so doing I'm finding that in terms of supporting functional composition (or 'pipelining') it seems as if it's may be better to trend towards returning an empty rather than null collection. So empty means 'nothing' and null is not used in favor of an exception being throw.
Given that the cost of a managed new is on average considerably lower than an unmanged malloc I find I'm less nervous about writing code like this than I might be in say C++. Added to which these empty collections when spun up during a pipeline usually have very short life spans since they're almost always either return values or parameters (occasionally locals), but not member variables.
I was wondering what your thoughts where on the subject. Clearly like anything it could be taken too far - for example if some strange custom container had a computationally expensive default constructor (hard to imagine really in the general case).
I'm aware that the empty vs. null debate is not a new one - I'm just interested to know if writing in a FP style favors one approach over another. Perhaps some of the other FP vets could offer up their experience on the subject?
Kind regards,
tom
Alex:
I like it. You'll want to check out my Linq perf post when I finish it for some details that are related to your implementation.
I agree. Do not throw the OO baby out!
I like the idea of returning empty collections especially since all empty collections are created equal (of a given type). So you really don't even need to new them up very often (though they are relatively cheap as you indicate).
Personally, I really like removing null as much as possible (without doing it for its own sake). So one class that I often use is IOptional<T> which is similar to Nullable<T> but for reference types.
It would be nice if we could define static functions in a namespace without having to have a useless wrapping class.
Perhaps every namespace should have a hidden static class that holds the free functions in that namespace. Imprting the namespace would be equivalent to importing the functions defined in that hidden class.
You could still package up functions in a static class, but that static class would need to be imported explicitly for those functions to be available as free functions or extension methods.
By free function, I mean a function that can be called without a qualifying class prepended.
C# has been a class based language since the get-go so it's hard to imagine it changing to the degree that we'd be able to write free functions in it.
One alternative might be a 'with' ('using' is overloaded enough) style keyword that would open up a static class into the current scope and allow you to omit the '<ClassName>.' from a function invocation. Of course you can alway point a Func<...> at a member function and then reference using just the variable name - which gets you closer to what you want today (i.e. it works in both C# 2 and 3).
As long as c# had static methods, its class-based nature was merely a figleaf. I mean, Math.Sin(x) is merely more verbose than Sin(x) - no other benefit accrues.
Perhaps the namespace scope resolution operator :: can be applied to the using directive, e.g.
using System::Math;
using System.Linq::Query.*;
using Foo.Bar::Baz.Boink;
The first directive would import the Math class from the System namespace. The second directive would import the methods of the Query class as free functions or extension methods. The third directive would import the Boink() method of the Baz class of the Foo.Bar namespace.
Syntactic sugar. Removes nothing, adds precision and consiseness simultaneously.
All greed? Ok, then, dev team, do your stuff.
Tom & Damien:
We have considered it several times and it is certainly a possibility for the post Orcas timeframe. I would love some way to do this.
Why not just:
public static IEnumerable<T> Intersperse<T>(this IEnumerable<T> sequence, T value)
{
yield return sequence.First();
foreach (var item in sequence.Skip(1))
{
yield return value;
yield return item;
?
That works very well except if sequence doesn't contain anything. Which can be solved if you add an if statement before yielding the first item.
Last week the Microsoft MVP's converged on Redmond from all corners of the globe. It was a great occaission
This is a good explanation of how to write an extension method. One thing: you can write the above code without intersperce and join, still in an FP style:
mylist.Where(x => x.Description != "").Select(x => x.Description).Aggregate("", (s, i) => s + i + "\n");
If you don't like all of the short lived string objects on the heap, this works too:
mylist.Where(x => x.Description != "").Select(x => x.Description).Aggregate(new StringBuilder(), (s, i) => s.Append(i).Append("\n"), s => s.ToString());
Estava a ler uma mensagem de mais um guru da Microsoft, o Wes Dyer . Nela, ele apresentava uma aplicação
This would be closer to the Haskell version:
mylist.Select(x => x.Description).Where(d => d != "").Intersperse("\n").Concat(); | http://blogs.msdn.com/wesdyer/archive/2007/03/09/extending-the-world.aspx | crawl-002 | refinedweb | 3,662 | 64 |
More information on ToolValidators can be found here.
Moving on let's get to an example. Say you want to populate a string field with a predefined list of object or what is known as a value list to the GP world. Simply right click your script and click on the Validation Tab. Next click on edit, and you are ready to begin.
By default, IDLE should open, unless you customized your system to use pythonwin. You should see something like this:
Since you want the value like to show up when the GUI is fired, use the initializeParameters(self) method.
Now just create a simple search cursor, and append in the values from the table you want to force the user to choose like such:
def initializeParameters(self):
"""Refine the properties of a tool's parameters. This method is
called when the tool is opened."""
import os, sys, arcgisscripting as ARC
self.GP = ARC.create(9.3)
self.rows = self.GP.searchcursor(r"
\CountryCodes.dbf")
self.row = self.rows.next()
self.holder = []
while self.row:
self.holder.append(self.row.getvalue('CC'))
self.row = self.rows.next()
self.holder.append('*')
self.holder.sort()
self.params[1].Filter.List = self.holder
return
Now you have a drop down menu from which a user is forced to select from. The best thing is that it's dynamic, so when the table changes so does the selection.
Enjoy | https://anothergisblog.blogspot.in/2009/06/ | CC-MAIN-2018-09 | refinedweb | 236 | 55.95 |
On Tue, Jan 05 2010 at 9:57pm -0500, Mike Snitzer <snitzer redhat com> wrote: > On Tue, Jan 05 2010 at 9:23pm -0500, > Martin K. Petersen <martin petersen oracle com> wrote: > > > >>>>> "Alasdair" == Alasdair G Kergon <agk redhat com> writes: > > > > Alasdair> extern int blk_stack_limits(struct queue_limits *t, struct > > Alasdair> queue_limits *b, > > Alasdair> sector_t offset); > > > > Alasdair> This function is asking for the offset to be supplied as > > Alasdair> sector_t i.e. in units of sectors, but this patch uses bytes. > > Alasdair> Please either change that to sectors as per the prototype, or > > Alasdair> if it really does want bytes, fix the prototype to make that > > Alasdair> clear. > > > > It is sector_t because we don't have an existing type that fits the bill > > (i.e. >= sector_t and dependent on whether LBD is on or not). We're > > trying to move away from counting in sectors because the notion is > > confusing in the light of the logical vs. physical block size, device > > alignment reporting, etc. > > > > So maybe something like this? > > > > > > block: Introduce blk_off_t > > > > There are several places we want to communicate alignment and offsets in > > bytes to avoid confusion with regards to underlying physical and logical > > block sizes. Introduce blk_off_t for block device byte offsets. > > > > Signed-off-by: Martin K. Petersen <martin petersen oracle com> > ... > > diff --git a/include/linux/types.h b/include/linux/types.h > > index c42724f..729f87a 100644 > > --- a/include/linux/types.h > > +++ b/include/linux/types.h > > @@ -134,9 +134,11 @@ typedef __s64 int64_t; > > #ifdef CONFIG_LBDAF > > typedef u64 sector_t; > > typedef u64 blkcnt_t; > > +typedef u64 blk_off_t; > > #else > > typedef unsigned long sector_t; > > typedef unsigned long blkcnt_t; > > +typedef unsigned long blk_off_t; > > #endif > > > > /* > > After looking closer there seems to be various type inconsistencies in > the alignment_offset and discard_alignment related routines (returning > 'int' in places, etc). > > The following patch is what I found; I have no problem with switching > from 'unsigned long' to blk_off_t for LBD though. > > Martin, would like to carry forward with this? Have I gone overboard > with this patch? I missed fixing disk_stack_limits()'s 'offset' though... Mike | https://www.redhat.com/archives/dm-devel/2010-January/msg00016.html | CC-MAIN-2017-17 | refinedweb | 336 | 56.66 |
Work at SourceForge, help us to make it a better place! We have an immediate need for a Support Technician in our San Francisco or Denver office.
You can subscribe to this list here.
Showing
6
results of 6
At 03:18 PM 23/06/2004 -0400, Matt Feifarek wrote:
.
I'll send the code to you privately.
>As far as maxlength, you can send maxLength into the constructor for
>TextInputField. If you want to use just TextField, you can set the
>attributes yourself (it's not techincally a valid attribute in xhtml, I think).
I specified:
newForm.addField(Fields.TextField('mname',[Validators.MaxLength(25)],label="Middle
Name"))
It made no difference in the width of the field that was rendered. I
specified MaxLength(6) for Salutation and both fields were the same width,
and no different than when I had not specified any MaxLength.
>The "right way" to do it is to use CSS though.
I do not know of a way to specify the width of a given text entry field in
a form with CSS. When you think about it, it makes sense as CSS operates on
classes of things, not individual instances of things.
Regards,
Clifford Ilkay
Dinamis Corporation
3266 Yonge Street, Suite 1419
Toronto, Ontario
Canada M4N 3P6
Tel: 416-410-3326.
As far as maxlength, you can send maxLength into the constructor for
TextInputField. If you want to use just TextField, you can set the
attributes yourself (it's not techincally a valid attribute in xhtml, I
think).
The "right way" to do it is to use CSS though.
Regards...
I would recommend that you use the database module provided for your
text protection. So it a user enter something like
if you let the pgSQL
First Name = aaron'; truncate USERS;
then that will be thier first name.
in the pgsql its something like
cursor.execute('''insert into USERS VALUES(%s)''' % (userinput))
will automaticlly escape the string
On the other hand if you want to test the users input and tell them they
are bad users then try something like
if userinput <> PgSQL.pgquotestring(userinput):
raise 'Bad Input'
Please test this though, I used is many years ago and the code was just
made up while I wait for webware to restart....
-Aaron
Shayne ONeill wrote:
>Try slashing and unslashing. Theres PHP functions for this, but I really
>dont know for py
>
>On Tue, 22 Jun 2004, Matt Feifarek wrote:
>
>
>
>>Marc Saric wrote:
>>
>>
>>
>>>Although this is only for Intranet-use, I would like to add a Validator,
>>>which prevents SQL-injection on Db-queries.
>>>
>>>Has anyone tried to write one or an advice, where to look or how to
>>>tackle this problem?
>>>
>>>
>>Hello, I'm the author of FormKit.
>>
>>We've never done this specifically, but I expect that it's just a matter
>>of inspecting a string and looking for nasty bits. Do some googling to
>>see what the standards are for that.
>>
>>In any case, converting a string is easy to do in a validator. Look in
>>some of the examples to see how a validator works. Maybe look at
>>FormKit.Validators.Year as a starter.
>>
>>You can put whatever code you like into the _validate method, or if it's
>>a matter of converting the string into something else (escaping it, say)
>>you can use _convert.
>>
>>Good luck.
>>
>>
>>
>>-------------------------------------------------------
>>This SF.Net email sponsored by Black Hat Briefings & Training.
>>Attend Black Hat Briefings & Training, Las Vegas July 24-29 -
>>digital self defense, top technical experts, no vendor pitches,
>>unmatched networking opportunities. Visit
>>_______________________________________________
>>Webware-discuss mailing list
>>Webware-discuss@...
>>
>>
>>
>>
>
>
>-------------------------------------------------------
>This SF.Net email sponsored by Black Hat Briefings & Training.
>Attend Black Hat Briefings & Training, Las Vegas July 24-29 -
>digital self defense, top technical experts, no vendor pitches,
>unmatched networking opportunities. Visit
>_______________________________________________
>Webware-discuss mailing list
>Webware-discuss@...
>
>
>
--
-Aaron
If the car industry behaved like the computer industry
over the last 30 years, a Rolls-Royce would cost $5,
get 300 miles per gallon, and blow up once a year
killing all passengers inside.
Hi Matt,
At 02:14 PM 22/06/2004 -0400, Matt Feifarek wrote:
>CLIFFORD ILKAY wrote:
>
>>... in the writeContent() method
>>
>> if self.form.fname.error():
>> self.writeln('''\t<tr><td
>>Please enter a first name</td>''')
>> self.writeln('''\t<td
>>%s</td></tr>''' % self.form.fname.tag())
>> else:
>> self.writeln('''\t<tr><td>%s</td>''' %
>> self.form.fname.label())
>> self.writeln('''\t<td>%s</td></tr>''' %
>> self.form.fname.tag())
>>
>> self.writeln('''\t<tr><td>%s</td>''' %
>> self.form.mname.label())
>> self.writeln('''\t<td>%s</td></tr>''' %
>> self.form.mname.tag())
>>
>> if self.form.lname.error():
>> self.writeln('''\t<tr><td
>>Please enter a last name</td>''')
>> self.writeln('''\t<td
>>%s</td></tr>''' % self.form.lname.tag())
>> else:
>> self.writeln('''\t<tr><td>%s</td>''' %
>> self.form.lname.label())
>> self.writeln('''\t<td>%s</td></tr>''' %
>> self.form.lname.tag())
>Some comments:
>
>In FK there is already stuff for error formatting. For example, if there's
>an error, the input tag will contain class="error" so you can mark that as
>"red" or something (look in the examples).
I saw that but I wanted to customize the layout of the incomplete form,
i.e. I wanted the error string beside the fields, not above them, and I
wanted more specific error messages. Having said that, it can get out of
hand very quickly if you have a couple of error conditions you are testing
for. You have to have a bunch of tests just to output the correct error
string. I cannot think of a better way of doing it though in order to
provide useful feedback to the user.
>Also, it's easier on your typing if you make a short-name reference to the
>form object in your local name space, like "f = self.form". Then you can
>just do f.lname.tag().
Select, middle button click is quite fast too:) As you saw below, I used an
alias for self.form in newForm. I prefer names that convey meaning even to
those who may not be familiar with Python, or even programming for that matter.
>But there's nothing "wrong" with what you're doing; there's just already
>some plumbing for some of it.
>
>>This seems to work though I have a couple of questions about it.
>>
>>1. When I click on the Submit button, if I do not enter a Middle Name,
>>the form will not POST successfully. That is strange considering that
>>Middle Name is not mandatory. If I enter anything in Middle Name, the
>>form will POST. The only difference I can see between Middle Name and the
>>other fields is that it is not mandatory and as such, I do not test for
>>an error condition. How can I get the form to POST with a blank Middle
>>Name field?
>
>Hmm. That shouldn't happen. It's possible that it's getting confused by
>the lack of a Validator set declaration in the constructor of your field. IE,
>
> newForm.addField(Fields.TextField('mname',label="Middle Name"))
>
>may have to be:
>
> newForm.addField(Fields.TextField('mname', [ ], label="Middle Name"))
>
>If that's the case, that's a bad bug on our parts. Send me your actual
>code and I'll take a look..
>>2. I am using SQLObject to map the PostgreSQL tables to Python objects. I
>>have that part prototyped and working. Once I get past the hurdle above,
>>I have to transfer the contents of the form fields above to the Person
>>object attributes. According to the Class Reference, I think I should be
>>using self.form.value(fieldName) to do this. e.g.
>>
>>if self.form.isSuccessful():
>> newPerson = Person(
>> firstName = self.form.value(fname),
>> middleName = self.form.value(mname),
>> lastName = self.form.value(lname),
>> )
>>
>>... where Person is a SQLObject class corresponding to a PostgreSQL
>>table. Am I on the right track?
>
>That will certainly work. Alternatively, you can poke at each field:
>
> newPerson = Person(
> firstName = self.form.fname.value(),
> middleName = self.form.mname.value(),
> [etc]
> )
>
>Or, better yet, just get the whole values bundle as a dictionary:
>
>values = self.form.values()
>
>newPerson = Person( firstName=values['fname'], middleName=values['mname']
>) etc...
The first or second option is probably more readily understood by someone
who may be new to Python.
Regards,
Clifford Ilkay
Dinamis Corporation
3266 Yonge Street, Suite 1419
Toronto, Ontario
Canada M4N 3P6
Tel: 416-410-3326
Hello!
I'm trying to create a web application with cheetah and webware, but I
have some difficulty to start (indeed I didn't success to access to
examples given in cheetah tutoriel).
I made an application directory to develop my web application using
Wabware/bin/MakeAppWorkDir.py script. Then I turn on the server with
./AppServer.
When I try to load the URL:
I get the error:
Traceback (most recent call last): File
"/home/sand/local/lib/python/Webware/WebKit/ThreadedAppServer.py", line
293, in threadloop rh.handleRequest() File
"/home/sand/local/lib/python/Webware/WebKit/ThreadedAppServer.py", line
501, in handleRequest dict_length = loads(chunk)ValueError: bad
marshal data
Any idea?
Thanks
sandrine
Hi again guys!
thanks for help Greg.
Now I think it´s not a good idea to pass the page name throught
request, so I done another approach. I made the RestrictModel as I
said...
What is the idea of RestrictModel. If the user is not logged in, I show
a login form, else I check if the user permission is enough, if the
user permission is not enough I show a error message, else, I show the
page the user is trying to get in (RestrictModel in this case).
I can create a subclass of the restrictModel and this servlet will have
a login form implemented and will have a login validation too. All I
have to do is to define which user permission is able to access the
page...
See the code above, it´s coded using brazilian portuguese, but I think
you can understand:
class ModeloRestrito(Modelo):
def _validarRequest(self):
login=self.request().value('_login_', None)
senha=self.request().value('_senha_', None)
if login is None and senha is None:
return # caso em que não tentei fazer login
if not login:
self.erro.append('Você precisa especificar um login!')
if not senha:
self.erro.append('Você precisa especificar uma senha!')
if self.erro:
return
try:
usuario=Usuario.byLogin(login)
except SQLObjectNotFound:
self.erro.append('O login fornecido não existe!')
return
if not usuario.senha==senha:
self.erro.append('A senha fornecida está incorreta!')
else:
self.session().setValue('_userLoggedIn_', usuario.id)
def doesUserHasPermission(self):
usuario=Usuario.get(self.userLoggedIn())
for i in usuario.niveis_permissao:
if i.id in self.userPermission():
return True
return False
def userPermission(self):
return [1]
def writeLoginErrorMessage(self):
self.writeln('<p class="erro">Você precisa estar logado para
acessar esta seção</p>')
def writePermissionErrorMessage(self):
self.writeln('<p class="erro">Seu login não dá permissão de acesso
a esta seção!</p>')
def writeLoginForm(self):
self.writeln('''\
<form method="post" action="%s">
<table>
<tr>
<td>Login:</td>
<td><input type="text" name="_login_" value="%s"></td>
</tr>
<tr>
<td>Senha:</td>
<td><input type="password" name="_senha_"></td>
</tr>
</table>
<p><input type="submit" value="Enviar Dados"></p>
</form>''' % (self.__class__.__name__, self.request().value('_login_',
'')))
def _writeContent(self):
if not self.userLoggedIn():
self.writeLoginErrorMessage()
self.writeLoginForm()
else:
if self.doesUserHasPermission():
Modelo._writeContent(self)
else:
self.writePermissionErrorMessage()
The model is too long to be shown here, but it is based on page, I
change some methods like writeContent( I call _writeContent before),
and the _respond method (I call _validarRequest before writeHTML and
call validarRequest inside writeHTML)...
If someone wanna see this working:
the login michel and pass teste can acess all pages, the user teste
with pass teste can acess only the "Pagina Restrita 2"...
Sorry the big message, and sorry again about the poor english :)
=====
--
Michel Thadeu Sabchuk
Curitiba/PR
______________________________________________________________________
Yahoo! Mail - agora com 100MB de espaço, anti-spam e antivírus grátis! | http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200406&viewday=23 | CC-MAIN-2014-35 | refinedweb | 1,995 | 58.48 |
Proposal from RechargEast Magazine
Proposal from RechargEast Magazine
- Dear colleagues,We address to you with an unusual request. RechargEast and ReFortunEast Magazines have taken a mass campaign, aiming to create associations in the countries of CEE. One of the main aims of the RFE Seminars is to convince the representatives of the country, where RFE Seminars takes place that it is high time to make an association. In this regard, we are looking for ways to motivate all recharging companies to unite and form an association. The first RFE Seminars took place in < ?:namespace prefix = st1
. With many efforts, we have succeeded to convince 22 companies to form BIATI (Bulgarian Association of Toner and ink). The second RFE Seminars is in Sofia, Bulgaria in April, 16-17, 2005. You are more than welcome to attend it. You can get more details on Bucharest, Romania
What will you tell if you enlist among the companies, interested not only in their own fortune, but also in the fortune of the whole industry? At the moment, we make a list of the biggest companies producers and distributors, which are ready to give 5 % discount to companies, members of the re-industry’ association of CEE. We have already 2 big companies in the list and we’ll be more than happy to include you too. What will you get from charity gesture?
1/ Public recognition and advertisement, as the list of benefactors will be published in every issue of RechargEast and ReFortunEast magazines, together with links to your web-sites.< ?:namespace prefix = o
You’ll see that companies will look for you for the discount given of 5 %. It will give you a competitive advantage before your competitors. We need you at the moment. In 20 years you’ll have the excellent opportunity to prove that you are worth to be in the list of the biggest producers and distributors!
Dr. Aydan Bekirov,
Publisher | http://tonernews.com/forums/topic/webcontent-archived-8495/ | CC-MAIN-2017-22 | refinedweb | 320 | 63.19 |
The Bastrop Advertiser
bastropadvertiser.com
Texas’ Oldest Weekly Newspaper Since March 1, 1853
Volume 159, Number 85
INSIDE
THURSDAY, NOVEmBER 22, 2012
Semi-Weekly Since Sept. 5, 1977 An edition of the
$1.00
COUNTY
Love an animal today BY SARAH ACOSTA Staff Writer
HAPPY THANKSGIVING
From the staff of the The Bastrop Advertiser
STARS
Maybe you didn’t know there are stars among us. —Page A4
SPORTS
Basketball season under way.
The Bastrop County Animal Shelter is the last hope for many cats and dogs that have been abandoned and are seeking the love and comfort of a new home. A volunteer-based, nonprofit organization, Urgent Animals at Bastrop County Animal Shelter Inc., has made it their mission to raise awareness and funds, and assist in the rescue, rehabilitation and adoption process of the animals at the local shelter. “Our main vision is to See COUNTY, page A7
—Page B1
Urgent Animals volunteer Amy Baxter, left, and president Lorraine Joy take Green Bean and Becka out of their kennels at the shelter for some play time.
Friday, Nov. 23 and Saturday, Nov. 24
n.
WEATHER THURSDAY FORECAST HI: 76 LO: 55
Statewide testing and accountability measures and a statewide school finance lawsuit have led two Bastrop County school districts to become involved in legal proceedings challenging the fairness of the related laws. The Bastrop school district board voted at its regular meeting Tuesday to join a statewide coalition of school districts which initiated proceedings on Nov. 1 aimed at overturning the annual Adequate Yearly Progress ratings issued under the federal No Child Left Behind law. The case, organized by the Texas Association See SCHOOL, page A5
RETAIL
Buc-ee’s dazzles BY TERRY HAGERTY Assistant Editor
Saturday, Dec. 1
n Smithville’s Festival of Lights. The air is festive throughout the day with live music featuring local choirs and local bands at the gazebo, plenty of food, drinks, fellowship and fun with a Comin’ Around the Mountain Christmas theme. Early morning run/walk to a theatrical play after the lighted parade. Visit website for more information at smithvilletx.org. For a complete calendar, see page A2
School districts join court battles BY ERIN GREEN Staff Writer
Falling into Arbor Day
CALENDAR
LEGAL
The group of Camp Swift soldiers who came into the new Buc-ee’s store during its opening last week seemed to respond immediately to the immensity of the place. Erin Green/Bastrop Advertiser
Mulching is fun for the fourth-graders of Mina Elementary School, who crowd around the newly-planted American Elm tree city parks superintendent Jason Alfaro helped plant. BY ERIN GREEN Staff Writer Arbor Day is usually celebrated in the spring, but that’s no reason why it can’t be celebrated in November. The fourth-graders of Mina Elementary School were on hand along with several City of Bastrop officials and Texas A&M Forest Service officials, includ-
ing forester Jim Rooni, at Fisherman’s Park Friday for the fall celebration of Arbor Day. Bastrop Mayor Terry Orr read a proclamation naming Friday Arbor Day in the city. “Let’s cherish trees and recognize their importance in our lives,” he said, concluding his remarks to the kids. Welcoming the students
“We’re still hiring.”
— John Taylor, general manager
— and their teachers and principal Martha Werner to the park — Rooni spoke about the importance of trees and asked the children whether they like notebook paper, eating apples, living in homes made of wood and similar questions showing the diversity of how trees are important in everyday life.
Several of them pumped their arms enthusiastically in the air only footsteps into the 56,000 square-foot facility near the intersection of Texas 95 and Texas 71– now Bucee’s second largest after its New Braunfels store.
See ARBOR, page A7
See RETAIL, page A6
Mostly sunny
FIRE INDEX News Community Sports Classifieds
Page Page Page Page
16 pages, two sections Newsroom (512) 321-2557 © The Bastrop Advertiser
We Recycle
A2 A3 B1 B5
As cold, dry weather sets in, danger mounts BY TERRY HAGERTY Assistant Editor The thick black column of smoke spiraled hundreds of feet up into the air and could be seen from miles around Bastrop late on Wednesday morning, Nov. 14. “Given the way the smoke looked, we were responding to the possibility of a house fire,” Bastrop Fire Department assistant chief Rod Stradling said of flames that could have easily spread, had there been a stronger wind. But fortunately, a house was not on fire and the wind was less than four mph. The smoke was billowing from a huge pile of items that are illegal to burn, under rules
“There are still a lot of places that have thick fuel around the county.”
— Rod Stradling, assistant fire chief
from the Texas Commission on Environmental Quality, Stradling pointed out. “That pile contained couches, mattresses, clothing and a micro-wave,” Stradling said. The 30-feet-wide pile had substantial flames going when sheriff’s deputies and firemen arrived. Stradling See FIRE, page A7
Terry Hagerty/Bastrop Advertiser
Rod Stradling, assistant chief with the Bastrop Fire Department, monitors a burn near a residence north of downtown last week.
Page A2 • The Bastrop Advertiser
Thursday, November 22, 2012
THROUGH ERIN’S EYES
Giving thanks this Thanksgiving Forget the turkey. To me, Thanksgiving dinner is all about the sides — the mashed potatoes with gravy, the sweet potato casserole with the ooey, gooey melted marshmallows on top, the green bean casserole, the young peas and, last but not least, the cranberry sauce and for dessert, pumpkin pie. The cranberry sauce was always my favorite, but I don’t mean this to be about the dinner many of us are sitting down to today. This is about the reasons we have to be thankful. It’s a tough time right now, with the economy in doubt and many fears about the
ERIN
GREEN “fiscal cliff,” unemployment, jobs, gas prices and all of the numerous other things that keep many of us awake far into the night. But we still have much to be thankful for. We, individually and we as a nation. As much as the nation is divided and as many problems face us as a nation, there are still many things to be thankful for. We’re a free country. We have the ability to
vote for our leaders, to praise them or criticize them as we want. We can choose them as we want. But there’s so much more beyond that. We live in a time that offers so much knowledge to be consumed in so many different ways. Medical science is making enormous strides in fighting many of the worst diseases. We can fight diseases and cure things that many years ago would have easily killed or maimed. Moved far away from friends and family? It’s easier to stay in touch with our loved ones than ever. Even the poorest and most down-and-
out among us have far more than their most down-and-out counterparts in other countries of the world. Although there are many who still have a long way to go to recover from the wildfire of more than a year ago, Bastrop as a city has come a long way to find its way back from the biggest wildfire in the state’s history. Personally, I have a great deal to be thankful for, as well. Many people, including myself, have been doing the “30 days of thankfulness” on Facebook. The major things — family and friends, for instance — are easy to note, but it doesn’t make those things any
less real. In fact, stretching the season of thanksgiving throughout the month makes us think much more deeply about all of those things in life we can be thankful for — family and friends, pets, jobs, beating challenges or diseases. But for me, Thanksgiving is about more than just being thankful. It’s about thankfulness, togetherness with those we love but it’s also about family and traditions. My family had plenty of Thanksgiving traditions. I always, always watch the big Thanksgiving parade in New York City. I have yet to
miss it even once. And Thanksgiving dinner was always hosted by us for ourselves, my grandparents and my aunts, uncle and cousins on my dad’s side. I also loved Thanksgiving night, when we had what, to me as a child, was the best part — the Lighting of the Great Tree at the old Rich’s Department Store in downtown Atlanta. They’d light the tree when the soloist hit the highest note of “O Holy Night.” We’d have our tree up and decorated as well and my dad would crouch down to light ours at the same time. I loved it. See? Yet another thing to be thankful for. Happy Thanksgiving.
BCSO
BEARING SENIORITIS
Won’t be thinking 2 arrested, charged with assault about pesky doors BY ERIN GREEN Staff Writer
The school seems to have an issue with doors. Here at Bastrop High School, some doors are unlocked, others aren’t and some aren’t even con- Melody sidered doors. One certain Funderburgh set of doors in particular that happen to be 20ft from my 3rd and 5th me, there with my family. period class is always a I’ll never need a therapist gamble. When the door is with a family like mine, unlocked, it’s the quickest they are great, and you way for me to get from the can’t hide anything from 700 hall through the main those women, but I’ve nevbuilding by the bistro, to er wanted to. My two male my next class, but every cousins are just as loving, morning, someone flips but often spend more time a coin or spins a wheel to playing video games than decide which doors will be talking about feelings. accessible and which will If I said I was only exnot. I was late to class, but, cited to see my family I oh well. My teacher under- would be lying. The food stood that the locked doors is out of this world, but had thwarted yet another nothing beats the monkey student. bread. I’ve heard of some Regardless of the people using cinnamon doors, there isn’t much and sugar on their monthat can beat my day to- key bread but not in my day. It’s Friday and I have house. Our Thanksgiving this whole next week off monkey bread is soaked for Thanksgiving. I will in garlic and butter, and, have work a few days this as you chew each bite, you next week, but it doesn’t can literally feel your fat matter. My grandmother, cells jumping for joy. It’s who began emailing ideas my favorite Thanksgiving for this year’s menu al- dish and the only other most a month ago, makes time of year it’s made is almost everything from Christmas. I think this scratch. There is really year I’ll watch them make nothing like watching the it so I can just buy myself women in my family get to- the ingredients and make gether on a holiday. They it whenever I want. They all seem to flow around say it wouldn’t be as speeach other in the kitchen cial if we had it all the like a system of cogs and time. I guess that may wheels. There is always be true, but more imporsomething to talk about tantly, we would be very and the house is usually out of shape. Either way filled with laughter. It’s al- this is going to be a good ways been a safe place for Thanksgiving.
The Bastrop Advertiser Texas’ Oldest Weekly Newspaper Since March 1, 1853 Semi-Weekly Since Sept. 5, 1977
Alcohol was an alleged contributing factor in two recent arrests for assault carried out by the Bastrop County Sheriff’s Office. Sheriff’s deputies were called out to Cynthia Drive about 10:30 p.m. on Nov. 9 in response to an argument in which Esteban Garza and two friends had allegedly been drinking.
Garza left the scene when the deputies arrived, but returned later in the evening to collect his belongings from the house. Deputies charged him with assault causing bodily injury and family violence. Garza was released from jail Nov. 12 on a $5,000 surety bond. Deputies also reported to the 400 block of FM 2336 on Nov. 10 regarding an argument between a couple sit-
ting in a truck. According to the report, the woman, Dorothy King, had allegedly been drinking and deputies found her with blood on her hands and saw obvious injuries to the face of the man sitting in the truck. Deputies charged the woman, Dorothy King, 43, with assault causing bodily injury and family violence. She was released from jail on Nov. 11 on a $3,000 surety bond.
ticipated in a previous Tree of Angels, your special angel will be available in the church foyer in alphabetical order by last name. Please bring a picture of your loved one to place on the memorable table in the church entryway. For more information, contact the Bastrop County Sheriff’s Office at 512-549-5100 or by email at lisa.jackson@ co.bastrop.tx.us.
street markets, Holiday homes tour, Christmasthemed shows at various locales and much more. Check lostpineschristmasbastroptx. com for complete information.
LOOKING AHEAD Friday, Nov. 23 and Saturday, Nov. 24
■.
Saturday, Dec. 1
■ Smithville’s Festival of Lights. The air is festive throughout the day with live music featuring local choirs and local bands at the gazebo, plenty of food, drinks, fellowship and fun with a Comin’ Around the Mountain
STAFF Editor: Cyndi Wright, ext. 12 (cwright@bastropadvertiser.com)
Assistant Editor: Terry Hagerty, ext. 22 (thagerty@bastropadvertiser.com)
For missed papers call (800) 445-9898
Staff Writer: Erin Green, ext 21
Devoted to the welfare of the people of Bastrop County. Published 104 times a year on Thursdays and Saturdays by Austin Community Newspapers, a division of Cox Newspapers, at:
Sports Editor: Chris Dukes, ext. 17
The Bastrop Advertiser (USPS045-020), P.O. Box 459/908 Water St., Bastrop, TX 78602 Phone: 321-2557, Fax: 321-1680 Office hours: 8 a.m. to 5 p.m. Monday-Friday
(classifieds@bastropadvertiser.com)
SUBSCRIPTION RATES $52.00 per year or $37.44 per six months delivered in county, $62.40 per year or $43.16 per six months delivered out of county, and $74.88 per year delivered out of state (all are payable in advance). Periodic postage paid at Bastrop, Texas 78602. POSTMASTER: Send address changes to: The Bastrop Advertiser, P.O. Box 459, Bastrop, Texas 78602-0459
According to Arielle Keegan, police records specialist with the sheriff’s office, Garza, 40, and one of the friends had reportedly gotten into an argument which had escalated. According to the report, Garza had allegedly physically assaulted one of the friends and had taken her cell phone away from her when she tried to call for help. The report indicates
(egreen@bastropadvertiser.com)
(cdukes@bastropadvertiser.com)
Classifieds, Subscriptions: Ginny Pickering, ext. 11 Retail Advertising:
communityadvertising@statesman.com
Advertising Deadlines for Thursday: 4:00 p.m. Fridays for the following Thursday Advertising Deadlines for Saturday: 4:00 p.m. Wednesdays for the following Saturday Deadlines subject to change for designated holidays
The entire content of The Bastrop Advertiser is protected under Federal Copyright Act. Reproduction of any portion of any issue will not be permitted without express permission of The Bastrop Advertiser.
Christmas theme. Early morning run/walk to a theatrical play after the lighted parade. Visit website for more information at smithvilletx. org. ■ 6 p.m. – Festival of Trees, Family Crisis Center annual fundraiser at Hyatt Regency Lost Pines Resort. For tickets and information about sponsorship, donations, volunteering and more, contact the center at 512-321-7760.
Sunday, Dec. 2
■ 6 p.m. – 14th annual Tree of Angels dedication ceremony at Calvary Episcopal Church, 603 Spring St., Bastrop. The public is invited to bring an angel ornament to place on the tree in honor or in memory of victims and survivors of violent crime. If you have par-
Thursday, Dec. 6 through Sunday, Dec. 9
■ Lost Pines Christmas events kick off with an open house at city hall, 1311 Chestnut St. Events through the weekend include special shopping and wine tasting, River of Lights nightly on the Colorado River Walk, lots of live music, lighted Christmas parade,
ADVERTISING ACCEPTABILITY: The Bastrop Advertiser reserves the right to reject or edit any advertisement submitted for publication, in its sole discretion. We will not knowingly accept any Bastrop Advertiser is not responsible for errors or omissions in advertisements, for failure to publish in a timely manner, or for any damages caused thereby. The sole remedy for failure to publish. RECYCLING: The Bastrop Advertiser is recyclable, and we encourage you to recycle your newspaper when you are finished with it. The city of Bastrop has curbside recycling (321-3941), and Recycling Options is located at 217 Pershing Blvd., around the corner from the former County Recycling Station. For information, call 303-6665.
Saturday, Dec. 8
■ 9 a.m. – Pine Street Market Days ■ noon-5 p.m. – Holiday Homes Tour, featuring the historic Ancient Oaks Plantation which has been restored to museum quality and open to the public for this event only. Plus, enjoy other festive homes and an old county church. Tickets: $15 per person. For advance tickets, contact Bastrop County Historical Society Museum, 702 Main Street, 512-303-0057 or BastropHolidayHomesTour.com.
SUBMITTING A NEWS OR SPORTS STORY: Submit information by email or in writing and delivered in person or sent by mail. Be as concise as possible. Please include a contact name and telephone number. Photos may be submitted by email or in person. If you have any questions, call editor Cyndi Wright or assistant editor Terry Hagerty for news items or sports editor Chris Dukes for sports items. CORRECTIONS: The Bastrop Advertiser makes every effort to provide fair and accurate information. Should we make an error, please contact the news department. It is our policy to correct significant errors of fact. OBITUARIES: Obituary announcements are published free of charge but are subject to editing (for “as is” obituary announcements, contact the ads department). Photos may also be submitted. Obituary announcements are usually provided by the funeral home or family. For questions, call editor Cyndi Wright or assistant editor Terry Hagerty. WEDDINGS, ENGAGEMENTS, ANNIVERSARIES, AND BIRTHS: Wedding, anniversary, engagement and birth announcements are paid and have a set format. Contact Ginny Pickering in the classifieds department for rates. HOW TO PLACE A CLASSIFIED OR DISPLAY AD: The best way to place a classified or display ad is in person during regular business hours or by telephone. You may also use the classified ad form in the classified section, and you may also send ad copy by fax. Rates and special offers for classified ads are listed in the classified ads section of each issue. Media kits are available for advertisers, which also include information about our other Austin-area newspapers. If you have any questions, call Ginny Pickering for classified ads or Debbie Denny for retail ads. BACK ISSUES: Back issues of The Bastrop Advertiser are kept for about a year and are available at our office while supplies last. There are also bound copies of certain years at our office and copies available on microfilm at the Bastrop Public Library. If you are looking for a specific article or advertisement, you are welcome to browse through our back issues during regular business hours.
Member: Bastrop Chamber of Commerce, National Newspaper Association, South Texas Press Association, Texas Press Association
Thursday, November 22, 2012
The Bastrop Advertiser • Page A3
MeDICINe
SHS grad on forefront of staph research By Terry HagerTy Assistant Editor A Smithville High School graduate is at the forefront of one of the more perplexing medical situations – people who go into healthcare facilities and pick up unexpected illnesses there. Sometime those illnesses, referred to as healthcare associated infections, or HAIs, can be fatal. Rodney E. Rohde, a 1985 SHS graduate who received his master and doctorate degrees from Texas State University, is certified as a specialist in virology, microbiology, and pathology from the American Society for Clinical Pathologists. He has written extensively about staphylococcus aureus, commonly referred to as “staph,” – a group of bacteria that can cause a number of diseases as a result of infection of various tissues of the body. Staph-related illness can range from mild and requiring no treatment to severe and potentially fatal, according to medical experts. The illness usually starts with a localized staph skin infection, but the staph bacteria manufacture a toxin that affects skin all over the body. “Staph is a rapidly growing threat in the realm of antibioticresistant organisms found in the healthcare arena and general public,” Rohde said. “Any room or place, including a hospital or other clinical setting (nursing home, dialy-
Contributed photo
Smithville High School graduate Rodney Rohde, shown taking cultures in a lab, is an expert on healthcare associated infections. sis, etc.) can look and smell clean. However, it may not be ‘microbial clean.’” Rohde explained that some bacteria and other microbes are not killed or inhibited by all detergents, bleach or chemical disinfectants. He cited an example. “For instance, one could completely clean a room with disinfectants and get rid of most major microbes. However, if there was a prior patient with a ‘clostridium difficile’
infection, the room is probably still contaminated with endospores,” he said. He termed clostridium “a nasty bacterial, diarrheal agent that produces very resistant endospores that are resistant to most disinfectants. While most types of HAIs are declining, clostridium remains at historically high levels, causing diarrhea linked to 14,000 American deaths each year, according to the Center for Disease Control.
What the public can do
Rohde detailed some of the preventive measures the public can take. “The public should
educate themselves about healthcare associated infections,” he said. “We all do major research when it comes to purchasing a car, a new phone or childcare choices. Why would you not educate yourself about your personal health choices?” He said most states currently have an HAI reporting mandate from the federal government and individuals can ask for information about HAI rates or adverse events prior to treatment at a healthcare facility. “Many states are now required to list this information on public websites (including individual hospitals) for viewing and transparency,” Rohde said. “However, as I’ve told many of the people that contact me about staph – it’s still your job to be proactive (and ask questions) with your physician and healthcare team. Remember,
it’s your health!” Rohde explained how healthcare facilities are combating HAIs. “For the most part, hospitals and healthcare facilities are doing everything they can to reduce or eliminate HAIs and adverse events,” Rohde said. For example, some hospitals now require staph pre-admission cultures be taken, in which the person is evaluated for staph colonization before admitted into the facility, he said. “If positive, that person can be quarantined to a certain floor or wing so as to not transmit MRSA to other patients, especially high risk, immune-compromised patients,” Rohde said. “Healthcare facilities are also redoubling efforts in environmental decontamination with respect to proper microbial cleaning.”
Page A4 • The Bastrop Advertiser
Thursday, November 22, 2012
Moore About bAStroP
Celebrities among us as Bastrop becomes film-friendlier I have told you about the recent plethora of television pilots and commercials that have been filmed in Bastrop, following the films “Bernie” and “When Angels Sing.” What you may not know is that a wellknown British-born director, a well-respected author and short film producer/director, and the award-winning owner of a first-class film studio are our neighbors. Last week the home of Judi and Tommy Hoover was the setting for a lovely gathering of friends old and new, and included some highly regarded national and international filmmakers. One of those celebrities was Tommy Warren, owner of Spiderwood Studios on Hwy. 969 near Utley. Tommy won two Telly awards for his animated short film “Flight of Magic.” He was honored in nonbroadcast productions with the 2012 Silver Award for Use of Animation and the 2012 Bronze Award for Children’s Audience. The Telly award is the premier award honoring the finest video and film productions, outstanding TV commercials and programs, as well as web commercials, videos and films and receives over 11,000 international entries annually. He has also won a prestigious Award of Excellence in Animation from The Accolade Competition for the film. By receiving an Accolade, Spiderwood Studios joins the ranks of other high-profile winners of this internationallyrespected award. Spiderwood is also doing a great deal of work in
has been writing wellrespected novels and short stories ever since. In 2007, Amber Quill Press reprinted a 1995 literary novel Banks DEBBiE wrote entitled “The Moore Turtle’s Voice.” The mooreaboutbastrop@yahoo.com novel was the winner of the 1995 Austin Book 3-D. Award. Director Peter MackSince founding Upenzie and his wife, start in 2000, Carolyn Sally, were also there. began writing scripts Once a leading advertis- and producing short ing agency copywriter, films, including “BasPeter produced and trop: The First 175 directed over 200 com- Years,” which won Best mercials all over the Documentary at a 2007 world, winning several South Texas film fesawards along the way. tival. Her moving film Since turning his atten- “The Fire” tells the story tion to film making, he of the horrendous Bashas had several features trop wildfires of 2011. distributed globally. She brings in experts Mackenzie recent- to teach screenwriting, ly received the Capra production and postAward at the second an- production. Her love of nual Life Fest Film Fes- the craft is immense, tival in Los Angeles for and many aspiring film his film “Doonby.” This makers have benefitted is the premiere festival from her dedication. dedicated to showcasAlso joining in the ing films that highlight fun were Kay and the intrinsic value of Conor McAnally, Joe life and the human ex- Grady Tuck, Lori Maperience. Named after drid, Mike Foster, and legendary filmmaker Lavonne Roberts and Frank Capra, the Fes- her beautiful daughter, tival’s Capra Award for Alessandra. It was a terBest Film was present- rific evening and I aped to Peter’s film, which preciate the opportunity was mainly shot in and to visit with all of these around Smithville. The extremely interesting film’s lead, John Sch- people. Thank you, Judi neider, was joined by and Tommy. co-star Joe Estevez and producer Mark Joseph Speaking of films in accepting the award. The Bastrop Film No stranger to in- Commission will screen ternational recogni- the independent film tion, Carolyn Banks, “America’s Parking Lot” the founder of Upstart, on Nov. 26. Your BFC Bastrop Community has partnered with the Access Television and Texas Independent the Annabelle Resource Film Network to proCenter, was there for mote independent films the fun. Carolyn’s first made by Texas directors national publication and producers. One of was her short story the films we screened in “Idyll,” which appeared January won an Acadin “Voyages”, a literary emy Award in its catmagazine, in 1968. She egory.
This film follows Cy and Tiger, two diehard fans of the Dallas Cowboys and leaders of the legendary Gate 6 tailgate party, as they spend their last season with the team at historic Texas Stadium. When the Cowboys move 20 miles west to Arlington, the shifting politics and economics of major league sports threaten to dissolve the friendships and traditions these blue-collar tailgaters have built over 20 years, while Cy and Tiger scramble to preserve their place in “America’s Parking Lot.” The film premiered this year at South by Southwest. Director Jonny Mars will be with us that evening to answer your questions. Mars is a producer, actor and writer. He recently produced the Texas Meets Tillamook campaign, which filmed a segment in Bastrop and has starred in award-winning shorts. This marks his directorial debut. It all takes place on Monday, Nov. 26 at the Annabelle Resource Center, the ARC, located at 1508 Cypress St. Doors open at 6:30 p.m., and the film begins at 7 p.m. Admission is $3 and $1 for students. Refreshments will be available.
business news
High Cotton Gifts and Antiques, owned by Pam and Frank Ferguson, has been so successful that they need more space. They are now in the process of moving to 922 Main St. I’m glad they are staying downtown. Welcome to two new businesses, Veranda Jewelry, Gifts and Home
Décor and Lark Jewelry. Veranda is located at 813 Main St. Lark Jewelry and Gifts has opened in The Crossing near The Bastrop Brewhouse. Speaking of the Brewhouse, they now have a Bourbon Bar in one of the smaller buildings on the property with all types of bourbon for your tasting enjoyment. Saturday is “Small Business Saturday.” We are encouraged to shop local and thereby support our independent shops. Downtown Bastrop has some truly unique businesses which offer one-of-a-kind items that make great Christmas presents and are reasonably priced. In honor of Small Business Saturday in Downtown Bastrop, there will be a cash mob at noon at the Outdoor Market Place Stage at 921 Main St. The name of a business will be pulled out of a hat, and then those gathered will “mob” the store with business. We agree to spend at least $5. There will be chances to win great prizes that have been donated by the some of the businesses. Afterwards, there will be live music by Max Rios of Austin and plenty of time to continue shopping and dining. Sounds like fun to me.
Pickin’ Party fun
Last Saturday afternoon, Jo and Dallas Albers hosted “Pickin’ Party” on their porch. There were guitars, a stand-up bass, a violin, a mandolin and more and really good singin’. The pickin’ and singin’ began after we were treated to delicious barbecue and chili
with all the fixin’s. Sirius radio fans know our host as Dallas Wayne, host of Willie’s Roadhouse and Outlaw Country. He is also a member of the musical group Heybale that plays on Sunday evenings at the Continental Club in Austin. Dallas appeared in Bastrop recently as the master of ceremonies during our Veterans Day ceremony. The conversations were lively and interesting and there was lots of laughter in addition to terrific music. While some of the attendees preferred to remain anonymous, local folks enjoying the fun included Kristi Koch, Ellen Moore and Bob Hoover, Paula and Jim Clark, Kelley and Herb Goldsmith and Judi and Tommy Hoover.
thanksgiving
My niece and her family are here from Baton Rouge to share the day along with members of my Bastrop family. We will be enjoying fried turkey and my nephew’s “pastalaya.” You have probably heard of jambalaya. He substitutes pasta for the rice. You can be jealous. I hope you have a wonderful Thanksgiving Day.
until next week
That’s all for now. Until next week, be good to yourself. Let me hear from you by email at mooreaboutbastrop@yahoo.com so I can share the fun things that are going on in and around Bastrop. Remember: “I fear the day that technology will surpass our human interaction. The world will have a generation of idiots.” Albert Einstein
All About CedAr Creek
Helping out the less fortunate comes natural to Cedar Creekers It never ceases to amaze me that we have such wonderful people that live in Cedar Creek. Cedar Creek United Methodist Church put out a call to the congregation that we needed to gather food for the less fortunate families in the area. Last week, folks from the church delivered Thanksgiving to 40 families in the area. I don’t know about anyone else but I would say that is definitely doing
Vicki Lyn
JAMeS
cedarcreek50@yahoo.com God’s work. And that’s because there were people who donated and helped to make it a success. Speaking of CCUMC, they are a busy bunch of
people. Coming up on Dec. 1 they will be hosting a Christmas Cantada. I will give you more on that at a later date. Also be watching for the news on the Community Christmas Dinner on Dec. 16. Be watching for updates and further info.
Scholarships
I told you a few weeks ago that a little birdie told me that “someone” was going to be giving out scholarships. Well, here it is. Bastrop Chapter #64, Order of the Eastern Star will be hosting two fundraisers to raise funds for scholarships for graduating seniors of Cedar Creek High School, Bastrop High School and Elgin High School. See CedAr Creek, page A5
Thursday, November 22, 2012
The Bastrop Advertiser • Page A5
CEDAR CREEK from page A4 The first fundraiser will be a garage sale at Mina Masonic Lodge, 601 Main St. in Bastrop, starting at 8 a.m. on Saturday, Dec. 1. As the date gets closer, I will give you info on the other fund raiser that will be taking place.
Turkey stuffing
Yes, Dora, I do still have the recipe that I put out last year for the turkey stuffing made with popcorn. It took me a little bit to find it and figure out what you were talking about, so here it is. This recipe also includes the use of
popcorn as a stuffing ingredient - imagine that. When I found this recipe, I thought it was perfect for people like me, who just are not sure how to tell when a turkey is thoroughly cooked, but not dried out. Give this a try: 8 - 15 lb. turkey, 1 cup melted butter, 1 cup stuffing, 1 cup unpopped popcorn, salt/ pepper to taste. Preheat oven to 350 degrees. Brush turkey well with melted butter, salt and pepper. Fill cavity with stuffing and popcorn. Place in baking pan, making sure the neck end is
What a weekend it was for any of us in the southwest part of the county and those in the southeast part of Travis. One man commented that “I have never seen so many cops in one place before.” I totally agree with him. I live in the far SW cor-
ner of the county, just a stone’s throw from the county line of Bastrop and Travis. Going down 21 to church on Sunday, I counted four DPS troopers and six county cars by the time I got to 535 (Pearce Lane). My husband works part time at Lake Travis and getting home Saturday was a nightmare, but he finally made it driving the back roads. The skies were full of private planes and helicopters flying people in and out of the track. One driver commented that he had never met such friendly people as he found in
Bastrop. I haven’t verified this yet, but rumor has it that one driver liked it so well he went out and bought a 500acre ranch in the area. I think F1 is here to stay and will be back next year. Oh and kudos to our deputies who did a fine job of keeping the traffic moving and preventing any snarls. Good job! Well, folks, today is Thanksgiving and as you spend time with family and friends, think of just one thing that you are thankful for and share it. So many won’t be with loved ones today be-
cause of war, miles or anger so they will miss out on so much. I would like to share something my mother taught me years ago. She said we don’t get to pick our families, God does that for us. They may not be perfect and they make mistakes, but we need to love them regardless and be thankful that we have them. So many have none and it’s a lonesome time for them. No one is perfect! So ‘til next time, please be safe and God bless. Email with your news at cedar creek50@yahoo.com.
suing the state, arguing that the Legislature failed to meet its constitutional obligations. As reported in the Austin AmericanStatesman, the battle is over whether Texas is providing more than 1,000 ethically and economically diverse school districts what they need to ensure students are receiving the “general diffusion of knowledge” needed to succeed in the 21st century, as constitutionally required. The trial, which began Oct. 22 in a Travis County courtroom before District Judge John Dietz, is expected to take three months. The plaintiffs, which include the coalition Smithville is part of, also includes the Mexican-American Legal Defense and Education Fund, the ThompsonHorton and the Haynes & Boone groups, and the Charter School Group. An intervenor group, TREE, is led by Kent Grusendorf, the former chairman of the Texas House Public Education Committee, but does not represent any school districts. Taken as a whole, the plaintiffs comprise about two-thirds of Texas’ school districts, which educate threequarters of the state’s 5 million students. “It looks to me like
this is the granddaddy of all these cases,” Dietz said in a preliminary hearing over the summer. “We’ve got every topic that’s ever been discussed.” The groups sued the state over a variety of possible violations, many of which are overlapping, but the concept of “equity” in school finance emerged as a key issue — do poor school districts get the same amount of money to educate its students as the richer ones do? According to the Statesman, the suit involves questions of equity dating back to a 1984 suit by the Edgewood school district arguing that huge disparities in spending had emerged due to disparities in property tax bases in communities. That was followed in 1989 by a ruling by the Texas Supreme Court that “children who live in poor districts and children who live in rich districts must be afforded a substantially equal opportunity to have access to educational funds.” The court later explained that “citizens who were willing to shoulder similar tax burdens should have similar access to revenue for education.” The Statesman re-
ported that the state eventually settled on a system in which it poured more money into schools, obligated the state to supplement local taxes in poorer districts and required property-wealthy districts to share some of their riches with the poorer ones in what is called the Robin Hood provision. But, some districts argued before the Supreme Court in 2005, inequities surfaced again. The court didn’t uphold the claim of inequality, but instead found the system effectively enacted a statewide property tax by imposing caps on local property taxes — something the constitution explicitly prohibits. To remedy this, the Statesman reported, the Legislature reduced local school property tax rates by one-third and dedicated more state money to schools to replace the local money in a special legislative session in 2006. Legislators then temporarily froze districts at the amount of per-student funding they were spending at the time with the understanding they would find a long-term solution the following year. But, the current suit says, the freeze was never lifted, mak-
ing the situation tougher for poorer districts. Rock McNulty, superintendent of Smithville schools, said the district chose to become part of the suit because the system’s students deserve a fairer funding system. “The state, over the past six years, backed away from their portion of funding the education of children in Texas,” McNulty said in a written statement. “The current financial system is flawed and is short-changing our children. Over the last two years, the state has cut funding by over $1.5 million to Smithville and yet standards and unfunded mandates continue to be placed on school districts. Although local taxes have increased and residents are investing directly to their local schools, the state continues to keep backing away from their obligations to contribute to the education of our children. … The millions of dollars a year that the state is spending on testing and an accountability system that even the educators who face parents and the public every day find difficult to understand is not getting to the classrooms where the learning takes place.”
toward the front of the oven, not the back. After about four hours, listen for the popping sounds. When the turkey blows the oven door open and the bird flies across the room, it’s done. And you thought I didn’t know how to cook. Enjoy!
F1 gone again
SCHOOL from page A1 of Community Schools, will be heard before the State Office of Administrative Hearings. By joining the suit involving more than 600 school districts across Texas, Bastrop asserts that every AYP rating issued since the implementation of the federal system in 2003 is invalid. The challenge also asserts the AYP ratings constitute an unlawful, costly and destructive federal intrusion into local school operations and that the Texas Education Agency, in its efforts to comply with the federal mandates, acted without authority from the Legislature and denied school districts due process. Bastrop Superintendent Steve Murray said the district, the Bastrop Area Chamber of Commerce board and the Bastrop city council have joined in the protest against such testing and associated accountability by passing an anti-high stakes testing resolution to combat the “overbearing nature of such testing” for students and teachers. This nature also includes the increasingly high standards students — and every subgroup of students consisting of at least 25 students — must pass in order for the school to make
AYP each year. “This unrealistic performance standard has already amounted to more than half of Texas school campuses receiving a failing grade and over 70 percent of all districts in the state,” Murray said. “If not repealed … virtually every school and district in Texas and across the nation will be categorized as failing by 2014. (The Bastrop school district and board) believes in appropriate assessment and accountability for our students, staff and schools, but not the inordinate and unyielding approach to such taken by Texas and the federal government. We applaud our board for taking a stand with parents against time- and focus-robbing measures of this nature.” While the Bastrop district is joining this group, the Smithville school district’s involvement in the statewide school finance lawsuit has already reached the courtroom. The Smithville district is one of the 439 school district represented by the Texas Taxpayer and Student Fairness Coalition, representing a total of some 1.3 million schoolchildren throughout the state. The coalition joined five other groups in
����������������������������
Page A6 • The Bastrop Advertiser
Thursday, November 22, 2012
RETAIL from page A1 The soldiers wore smiles that seemed to communicate something akin to, “look at all this neat merchandise.” They also received vigorous applause from other customers. General manager John Taylor, who resides in Bastrop, was positioned near the center of the store. He was both intently focused on seeing that everything was in its proper place and that staff was operating smoothly, but he also made time to greet customers. One of the things Taylor said he wanted to show a reporter right away were the expansive restrooms. “It’s one thing that we hang our hats on when it comes to Bucee’s,” said Taylor of the spacious restrooms. “And we have individual hand-sanitizers at each station.” There’s a total of 71 stations among the two restrooms. And there are two fulltime attendants, each one immediately entering a just-va-
Tasty treats are made on site at Buc-ee’s. Terry Hagerty/Bastrop Advertiser
The main entrance to Buc-ee’s fronts Texas 95, near the intersection with Texas 71. The parking lot has 655 spaces.
The nearly 60,000 square feet of floor space in Buc-ee’s wows the crowd on its opening day. cated stall to ensure it is left tidy, Taylor pointed out. A reporter joked
there’s almost enough room to conduct a tennis match in the mammoth restrooms.
“That’s right,” Taylor responded, and then added there are 655 parking spaces, 24 gas pumps, 20 cash registers, 200 employees – and they are still hiring. “Just go to our website,,” Taylor said for those seeking work at the Bastrop location. Cruising around the store, a shopper can see there’s everything from fresh-made Black Angus-brisket sandwiches to camping/fishing gear, Longhorn and Aggieemblazoned clothes, beer, fruits and vegetables – and oh, yes, those cute little Buc-ee-beaver stuffed animals. Food services manager Hank Ledwig, who said he had come from the Wharton Buc-ee’s to help out during the opening week, heralded the non-stop work ethic
Need some gas in a hurry? Buc-ee’s has 24 gas pumps.
Soldiers training at Camp Swift were greeted with applause when they entered the store during its opening day on Nov. 14. at the 24/7 Buc-ee’s. “About the only time I know of that we closed a Buc-ee’s was during a hurricane that came through the Houston and Lake Jackson area a couple of years ago,” Ledwig said. “The sheriff’s department came by and said it’s time to close and
get out – and we did.” But that was history. Ledwig invited a reporter to sample some freshly cooked brisket. “Is that good or what?” he inquired. Nearby, a row of Buc-ee’s stuffed beavers seemed to be smiling proudly.
Thursday, November 22, 2012
The Bastrop Advertiser • Page A7
ARBOR from page A1 “Do you like trees? Give a yell,” Rooni said as the kids shouted in response. “I don’t want to spend a lot of time on it, but we can all agree trees are a good thing for everybody, they’re very important and they help us live better lives.” In talking to the kids about Arbor Day, he noted it’s a day celebrated not only in Bastrop, across Texas and the country, but in foreign countries as well. It’s also held on different days throughout the year. But, he said, to really understand the importance of Arbor Day, it’s important to understand the life of J. Sterling Morton, the man who founded Arbor Day in 1885. Morton, a newspaperman and politician originally from New York, had moved to Nebraska and found very few trees, Rooni told the kids. So Morton had trees planted — up to 1 million of them — and
a holiday in April 1872. “One person planting one seedling can
make a big, big difference,” Rooni said. To help celebrate
the importance of trees, the fourth-graders of Mina wrote es-
says on why trees are important and one winner from each class was presented with an award in the shape of a slice of a tree. The winners were Kaitlyn Pascoe, Taylor Hulsey, Elijah Youngblood and Bryson Brimhall. The students also got the chance to watch and assist with the mulching as city parks staff planted an American Elm tree near the pavilion at the park. As Jason Alfaro, parks superintendent, and his crew worked to prepare to plant the tree — the American Elm, as well as several other seedlings of pecan and Bur Oak trees — the kids crowded around in a circle to watch Alfaro and to help mulch the tree before concluding the ceremony. “Who knew mulching was this fun?” remarked Rooni with a smile as the students jockeyed for position to help.
increase the live outcome and promote education about the shelter throughout the community,” said co-founder Jeremy Parks. “We want as many people to know that there is a shelter here that needs help from the community in order to assist these animals.” At the age of 14, while living in Louisiana, Parks found Lorraine Joy of Bastrop on Facebook and realized that they shared a common goal - helping animals in need. Urgent Animals has been working with the Bastrop County Animal Shelter since late 2010 and has attracted more than 35 volunteers, who dedicate their time to the animals’ needs during their stay at the shelter. Joy, who is the cofounder and current president of Urgent Animal said, “I knew I wanted to do something to help because I felt bad that there were so many animals needing a good home.” Urgent Animals is always looking for animalSarah Acosta/Bastrop Advertiser responsible, Urgent Animals co-founder Jeremy Parks visits his feline friend loving people in the community who are Hondo in the Cattery at the Bastrop County Animal Shelter.
willing to join their team in the hopes of improving the animals’ quality of life. “Urgent Animals is one of our biggest volunteer groups for the shelter, along with Friends of Bastrop County Animal Shelter,” said Bastrop County Animal Services director Diana Mollaghan. “They have at least two people each week helping with conditioning and training of the animals by walking them and providing them with the love and care that they need.” Many of the animals that arrive at the shelter are depressed and confused about their whereabouts. With the help of these volunteers, the animals are able to reacquaint themselves with loving, human interaction which will allow them a better opportunity for adoption in the future. “I spend 40 to 50 hours, six days a week at the shelter because I feel it is my duty to help these animals in any way that I can,” said Urgent Animals volunteer Amy Baxter. “I really enjoy playing and sometimes just
cuddling with the dogs all day. It gives me the satisfaction that I am making a difference in their lives.” Volunteering with Urgent Animals does not require any specific hourly commitment and volunteers are only asked to do what they can with the spare time they may have. Duties may include walking the dogs, cleaning kennels, bathing animals and, most importantly, socializing with the animals. Urgent Animals is currently in the process of obtaining their
501(c) (3) status which will allow them taxdeductible donations in the near future. If you are interested in raising money for the organization or volunteering your time, visit their Facebook page titled, Urgent Animals At Bastrop County Animal Shelter, Inc. Bastrop County Animal Services has additional opportunities for volunteers as the animals are always in need of human interaction. If you are interested in participating, email Mollaghan at diane.mollaghan@ co.bastrop.tx.us.
Erin Green/Bastrop Advertiser
The fourth-graders of Mina Elementary School watch as Jason Alfaro, city parks superintendent, prepares to plant an American Elm tree at Fisherman’s Park Friday during the city’s fall celebration of Arbor Day. that day to celebrate the planting of trees was first celebrated as
COUNTY from page A1
FIRE from page A1 and a second firefighter were able to douse the flames without further reinforcement. Stradling cautioned that the illegal burning of plastics can give off a potentially lethal gas – hydrogen cyanide. He said firefighters have encountered people burning such materials recently. “And, you are only supposed to burn trimmings from your own yard, not your neighbor’s,” Stradling said. Firefighters responded to a second control burn that went awry last week on Kleinert Street in Lake Bastrop Estates. “Some people were clearing their property, doing some burning when some embers got over to unburned weeds,” Stradling said. He said it involved less than half an acre, but he still called out three brush trucks as a precaution. Stradling also discussed the continuing danger of outside fires being “fueled” during the colder months. “There are still a lot
of places that have thick fuel around the county,” Stradling said, using the common term among firefighters for mainly ground-level vegetation that contributes to fires spreading rapidly under the right conditions. “A freeze or cold northern winds will kill off the vegetation, which makes it dry out and makes it easier to burn,”
Stradling said. He explained that vegetation that hasn’t died still has moisture coming up from its roots, which can make it burn more slowly. However,
those conditions can easily be offset during a spate of searing temperatures and high winds, such as the period that preceded the disastrous 2011 Labor Day fires.
Page A8 • The Bastrop Advertiser
Thursday, November 22, 2012
SPORTS
The Bastrop Advertiser • bastropadvertiser.com
THURSDAY, NOVEMBER 22, 2012 • SECTION B
BASKETBALL • SMITHVILLE
File Photos
Valerie Harmon and Smithville are off to a 4-0 start with four wins of 4A teams. They are attempting to make the playoffs for the 20th-straight time in 2012-13.
The tradition continues Smithville girls want to add more hardware
Trina Deyo and her teammates have their sights set on the district title t h i s season.
BY SETH STARKEY Contributing Writer To an outsider, a 3A girls’ basketball team opening its season with four impressive—and often lopsided—wins over 4-A schools would be considered a surprise. There’s nothing surprising about it if you know the Smithville Lady Tigers.
Smithville has already rattled off victories against Lehman, Travis, Manor, and Cedar Creek. With an average winning margin of 37 points, each win is yet another impressive showcase of the school’s tradition. “We’ve always tried to play a tougher schedule,” Coach Robin Ramsay said after Friday’s 68-45 win at Cedar Creek. “We have a tradition we’re proud of and we expect to win. We know we’ll take our knocks along
the way, but that’s okay. It’ll help us prepare for later in the season.” Ramsay isn’t just talking about the end of the regular season. Smithville is aiming for its 20th straight playoff berth, a streak that started before any of the current Lady Tigers were even born. Coach Ramsay refuses to let the players feel the pressure associated with matching past See BASKETBALL, page B3
FOOTBALL • TRIBE CONSOLIDATED
CHRIS DUKES • COMMENTARY
Wildcats hush critics with big win
CHRIS
DUKES I will be the first to admit to everyone, I expected this to be a season wrap-up column on Elgin football, and to be worrying exclusively about basketball from here on out. Boy, did the Wildcats prove me, and just about the entire rest of the state, wrong Friday night. As a result, See DUKES, page B3
Tribe Consolidated is just one win away from a berth in the TCAL State Championship Game after a win in the quarterfinals last week.
File photo
Tribe sets sights on state title BY CHRIS DUKES Sports Editor
Photo by Jamie Harms
The Wildcats raced out to a 21-0 lead and never looked back in their upset of Brenham. The win was part of a clean sweep from District 17-4A Friday night.
Tribe Consolidated rolled over its first round playoff opponent Friday night with a 70-20 blowout win over Cornerstone Christian Academy. The Warriors put up 36 points in the first quarter, and by halftime they led 49-20. “We were able to put up points in the second half and end the game,” coach Billy Helm said.
Tribe came out of the break firing on all cylinders, and scored the next 21 points while its defense came up with three crucial stops to invoke 6-man football’s 45-point rule. The win earns the Warriors a spot in the TCAL semifinals, which will be played as part of a quadruple-header of football on Nov. 24. The first game will start at 11 p.m., and it will run through the final contest with a 4:30 p.m. start time. The game will be against a tough
Cleburne Christian Academy team. “The only teams left out there are all very, very good,” head coach Billy Helm said. “We are going to have our work cut out for us if we want to move on.” The two teams have played three common opponents this year. Tribe won all three of its games against Allen Academy, Cornerstone Christian and Fort Bend by the 45-poitn See TRIBEL, page B3
Page B2 • The Bastrop Advertiser
Thursday, November 22, 2012
Coaches, please send in all scores either by email at cdukes@bastropadvertiser.com, fax at 512-321-1680 or phone at 512-321-2557 17-4A plAyoff scores
Manor ....................................................47 Huntsville ...............................................21
The Woodlands high school, conrore, Texas
Brenham .................................................30 Elgin ......................................................44
Tribe consolidated (9-2) vs. cleburne christian Academy (9-2) noon, Brenham sports complex, Brenham, Texas
Georgetown .............................................44 Magnolia West .........................................18
Bastrop senior Golf Tournament at lost pines Golf course, oct. 31
Montomery ...............................................8 Connally .................................................33
first place -- 61 Jed oliver, Maynard clark, carl fontenot second place -- 62 lindon Mackey, dean farley, Ken Zimostrad, Ken Moreland, leroy herauf Third place -- 63 Ken Watts, Alex Graham, carol Graham, Ken Green closest to the hole No. 3 david Tessmer; No. 7 skipper Juarez; No. 12 Alex Graham; No. 16 esther Wright; No. 17 stan haddox
26-3A plAyoff scores El Campo ................................................35 La Grange ...............................................21 Cuero .....................................................14 Bellville...................................................34 Sweeney..................................................13 Yoakum....................................................6
plAyoff schedule elgin (7-4) vs. Barber’s hill (8-3) 1:30 p.m.
pine forest Golf club hole in one
Gene rampy made a hole in one on hole No. 7 at pine forest Golf club on oct. 31 he used a 9-iron from 138 yards. The event was witnesses by Gerald cato,
Bobby Nauert, and Jerry dobson. ELGiN 44, BrENHaM 30 BHS 7 14 7 2 – 30 EHS 28 7 3 6 – 44
ToTAl offeNse
BHS – 257 EHS – 517 1st quarter E: richard robinson 44 yard pass from Te’rel simmons (raul pena Kick) E: de’Trean simmons 75 yard punt return (raul pena Kick) E: Te’rel simmons 4-yard run (raul pena Kick) B: 4-yard pass from to Walter Thomas to Kwadareus chapel (haiden lane Kick) E: colin snell 28-yard pass from T’rell simmons (raul pena Kick) 2nd B: K.d. leakes 5-yard run (haiden lane Kick) E: colin snell 26 yard pass from Te’rel simmons(raul pena Kick) 3rd E: raul pena 31 yard field goal B: Kwadareus chapel 14 yard pass from darrion Johnson (haiden lane Kick)
4th E: de’Trean simmons 40-yard run (Kick failed) B: saftey rUSHiNG BHS – 28-79-1 K.d. leakes 14-38, darrion Johnson 9-32, Walter Thomas 4-9 EHS – 36-252-2 chris Walker 4-21, Te’rel simmons 21-131, da’Trean simmons 11-100-2 PaSSiNG BHS – Walter Thomas 4-11-0-2INT, robert stark 6-12-118-0-INT, darrion Johnson 1-12TD EHS –Te’rel simmons 7-17-265-3Td rECEiViNG Bhs – Kwadareus chapel 5-103 Td, Jacob sparks 1-21, courtland sutton 1-12, K.d. leakes 1-12, Malik Wilson 1-10, EHS –colin snell 3-98-2, da’Trean simmons 2-41, richard robinson 1-26-1, Abraham Zepeda 1-8 FirST DOWNS BHS – 9 EHS – 11 PENaLiTES BHS – 4-35 EHS – 3-20
�������������� ������������������������������������������
Thursday, November 22, 2012
The Bastrop Advertiser • Page B3
DUKES from page B1
Quarterback Denton Cooper was second-team all-district in his first season under center for the Tigers.
File Photos
16 Tigers named all-district The Tigers were well-represented in the 2012 26-3A All District Team, which was announced Monday morning. Sixteen players were named either first or second team alldistrict from Smithville. Adrian James was on the first team as a guard. James was the anchor of the Tiger offensive line this season, and helped make room for the Smithville running back rotation that piled up 1,939 yards this season. Center Ryan Smith was named second-team all district. On the defensive side, Kegan Bledsoe was named the first-team defensive tackle. He, along with teammate and fellow first-team pick Konnor Hurta, were able to shut down opposing offenses for most of the season. Their pressure was one of the reasons Jeremey Kadlecek was able to make so many plays in the secondary. Kadlecek was honored for his contributions by receiving first-team status at
File Photo
Smithville’s Gray Morris was one of 16 Tigers named to the 26-3A All District Team Monday morning. cornerback. Kalil McCathern earned first-team nod at the safety spot. With safeties Gray Morris and Jimmie Gonzales on the second team, all four of starting members in the Tiger secondary were all-district. The dangerous Mor-
ris also earned second-team return specialist. In his first season under center, Denton Cooper was the second team quarterback. Cooper passed for 566 yards and three scores, and rushed for 250 yards and three touchdowns. Team rushing leader DaAaron Jackson was also named first team. He carried the ball 115 times for 656 yards and four scores. Fullback Jake Dawson was a blocking specialist who finally got the recognition he deserved with a second-team nod as well. Tight end Bryce Helmcamp earned his spot by catching six passes for 130 yards and a score. Helmcamp and Cooper both got second—team honors on defense as well at linebacker. The suprilitives went to La Grange’s Logan Vinklarek (district MVP) and Kolby Kolek (defensive MVP). Gonzales’ Cecil Johnson was offensive MVP and Yoakum’s Tre’Vontae Hights was newcomer of the year.
BASKETBALL from page B1 successes. “We don’t want to focus on those kinds of things. We want to focus on one game at a time. But it’ll be on the kids’ minds come January when we get into the heart of district play.” Still, it’s safe to say the players are wellaware how important it is to make the playoffs. Senior forward Julia Kubicek talked about extending the streak. “It’s definitely a big goal. We know the tradition here. The first goal is to be district champs.” On Friday night Kubicek scored six points and brought down four rebounds to help her team win its fourth straight contest. Friday’s game was a testament to Smithville’s balance on both sides of the court. The players pass like they have been gifted with ESP, and the defense seems air-tight. All ten players saw time
on the floor in Friday’s game and all ten players contributed. Kubicek shared the secret of Smithville’s chemistry. “We’ve been playing together since we were younger. We know what each other is thinking, where each other is going to be. We’re all friends—on and off the court.” Not only have they been playing hoops together for years, but Kubicek and senior Trina Deyo have dreamed of wearing the Smithville uniform since childhood. Kubicek recalls watching her aunt play for the Lady Tigers. Deyo learned of Smithville’s tradition first hand watching her older sister play. Smithville girls’ basketball is a family affair, as well as the pride of the town. “Making [the playoffs] is not just for ourselves,” Deyo said. “It’s for the whole community.”
All About Chemistry
With seniors like Deyo and Kubicek— among others—Coach Ramsay saw signs of great potential as the team transitioned through the offseason. “Everybody had a bad taste in their mouths after last season,” Ramsay said. “We made the playoffs…but we still weren’t satisfied. The kids that returned came back hungry. We didn’t have to talk about much when the season started.” It is evident the pride Ramsay has in her team’s ability to move the ball and get everyone involved. She knows that type of oncourt chemistry doesn’t just happen. “I’ve seen the most outstanding chemistry. Everybody’s on the same page. At practice, in the locker room, in the cafeteria… it doesn’t matter.” “This year it’s all about team unity,” Kubicek said. “It’s really important to this group of seniors to continue
the tradition and pass it down.” The seniors intend on passing down all facets of the tradition— the playoff streak, the winning tradition, the sense of community. For the Lady Tigers the aspect of teamwork is paramount. “There’s no one leader on our team,” Deyo said. “We’re all leaders and we each have to push the others forward.” The Smithville coaches and players are happy to start the season strong, but don’t confuse their happiness with contentment. The Lady Tigers know they have a lot left to accomplish. “Coach has been telling us since we started that we should never peak,” Deyo said. “That’s how we focus. We gotta keep moving forward.”
they will be one of the select few high school football teams in Texas playing after Thanksgiving Day. The game even gained statewide recognition. Dave Campbell’s Texas Football managing editor Travis Stewart called Elgin’s win over Brenham one of the biggest playoff upsets of the weekend. Elgin’s upset came as the result of fearlessness from the entire team, a product of great leadership from coach Wade Griffin and quarterback Te’Rel Simmons. I wrote before the game that if Elgin wanted to keep things close, they would need a spectacular performance from both the Simmons family members, I certainly didn’t see a blowout happening the other way. Elgin raced out in front and never looked back, and as a result, the downhill Cub ground game that wore out opponents all year never had the time to get going as Brenham spent the whole night playing catch-up. Te’Rel Simmons found a streaking Richard Robinson on a 44 yard pass, and the Wildcats led the rest of the way. After a defensive stop, De’Trean Simmons weaved right through the Cub coverage to add a 75-yard touchdown of his own, and before Brenham could think about it, they were down 21-0. Overall Te’Rel completed only six passes on the night, but they went for 257 yards. That’s an average of 42.8 yards per completion – absolutely astonishing when you think about it. Both the Simmons brothers finished with over 100 yards rushing. With a big lead, the other side of the ball came up big against Elgin, too. With a chance to get all the momentum back to start the second half, the Cub offense couldn’t muster
a first down on its first possession and had to give the ball back to the Wildcats, who turned it into three points. From there, the high-octane Elgin offense shifted into cruise control, playing keep-away and limiting the number of chances Brenham had to come back. This game was just part of a definite trend. The rest of the high school football world found out something Bastrop, Cedar Creek and Elgin already knew this weekend – District 17-4A is very good. Whether it was district champion Georgetown manhandling Magnolia West, Manor speeding around Huntsville, Connally thumping Montgomery or Elgin shocking Brenham, the Central Texas region humbled its eastern counterparts. Overall, District 174A outscored 18-4A 16777, or by exactly 100 points. Although it doesn’t do much more than offer a warm, fuzzy feeling to the likes of Bastrop and Cedar Creek, it’s still nice to know that some of the tough losses both schools suffered were to superior competition. The offenses of 174A all scored at least 33 points in their wins, and all of them showcased the outstanding speed in the district. Whether it was De’Trean Simmons outrunning the Brenham Cub defense, Connally’s LaDaedrix Payton steamrolling through the Montgomery defense or Georgetown’s Jake Hubenak dismantling the Magnolia West secondary, 17-4A’s athletes were on full display. Can Elgin and the rest of 17-4A keep the trend going? We will all find out when the ‘Cats take on Barber’s Hill on Saturday at 1:30 p.m. in Conroe. I know one thing for sure, I’m not starting any season wrap-ups for next Thursday’s paper.
TIRBE from page B1 mercy rule, while Cleburne went 2-1 in those same contests, defeating Cornerstone 89-77, Fort Bend 84-40 and losing to Allen 74-53. The two teams have near-identical rushing statistics, Tribe averages 245 yards on the game, while Cleburne gets 244. Cleburne has the slight edge in passing at 203-188 per game, and scoring at 64.9 points per contest compared to the Warriors’ 62.2. Both teams rely on their quarterback to hurt their opponents
with both his feet and arm. Cleburne’s Aaron Midkiff averages 367 total yards per game, while Tribe’s Colton Richter is at 216.9 yards per contest. Tribe comes into the game ranked No. 2 in the state in its division, while Cleburne is right behind at No. 3. A win would put the Warriors just one game away from their ultimate goal – a state title. The game is schedule for a noon start at the Brenham Sports Complex.
Page B4 • The Bastrop Advertiser
Thursday, November 22, 2012
Terry Hagerty/Bastrop Advertiser
The Annual Thanksgiving Service at Bastrop’s First Assembly of God Church Sunday evening was an occasion for prayer and song. It was sponsored by the Bastrop Christian Ministerial Alliance.
Congregations come together to give thanks, pray
Audience members raise their arms in praise of the Lord during spiritual singing at the Annual Thanksgiving Service.
Pastor Grady Chandler, with the Pentecostals of Bastrop, delivered an inspiring message to the audience.
Pastor Mike Vega, with Word of Life Church, told about his church recovering in the aftermath of the Bastrop County Complex Fire.
Darla Graf leads the singing during the Annual Thanksgiving Service on Sunday night.
One of several spiritual verses was displayed for the audience during the Thanksgiving service. An audience members follows along with readings from the Bible.
Thursday, November 22, 2012
The Bastrop Advertiser • Page B5
REGULAR MEETINGS Mondays
n noon – The Healing River Alliance, a networking resource and support system which provides awareness of alternative practices for the betterment of health and the environment in the Bastrop area, meets every Monday (except holidays) at 1202 Pecan St. (corner of Farm) in Bastrop. Call Cy Labat at 512970-3705. n 6:15 p.m. – Take Off Pounds Sensibly (TOPS) meets every Monday at Kerr Community Center, 1308 Walnut St. in Bastrop. More information about TOPS may be found at. n 7 p.m. – The Bastrop County Republican Party meets the fourth Monday each month. Locations vary; call Albert Ellison at (512) 796-4560. n 7 p.m. - Jail Mail Art meets every Monday. The program supports children and families affected by incarceration. Call (512) 581-8085 for more information. n 7:30 p.m. – Mina Lodge meets the fourth Monday each month. Visiting Masonic members are welcome. Call 303-0191. n 7:30 p.m. – The Order of the Eastern Star meets at the Mina Masonic Lodge at 601 Main St. the second Monday each month. Visiting members welcome. Call 321-4872.
Tuesdays
n 7:30 a.m. – Bastrop Networking Group meets every Tuesday morning at the Texas Grill. Call Derrick Formby (512) 321-5525. n 8 a.m. – The Downtown Business Alliance meets first and third Thursdays at Bastrop Public Library, 1100 Church St. Call Drusilla Rogers at 3213777. n 9:30 a.m. – The Bastrop AARP Chapter meets every third Tuesday at the Bastrop Senior Center, 1008 Water St. Social begins at 9:30 a.m.; general meeting at 10 a.m. Call Stella at 303-4136. n 10 a.m. – The MOMS Club of Bastrop, an activity and support group for athome moms in the local area, meets the second Tuesday of each month at the Kerr Community Center. Contact President Cara Jones-Stockdale at mcbastrop@ gmail.com or call (512) 772-1233. n noon – The Rotary Club of Bastrop County meets for lunch every Tuesday at Cedars Mediterranean Grill, 904 College St. No reservation required. Call Tommie Marlar at (512) 3322665 or call 332-2159. n 2 p.m. – Alzheimer’s support group meets the first Tuesday of every month at Mon Ami Personal Care Homes at 1242 Texas 71 West, next to Prince of Peace Lutheran church. Call Dawn Smith at (512) 985-5916. n 5 p.m. – Diabetes Support Group meets the second Tuesday of each month at First United Methodist Church fellowship hall, 1201 Main St. in Bastrop. n 6:30 - 8:30 p.m. – The Bastrop Folk Dance Troupe holds practices each Wednesday at Aqua Water, 415 Old Austin Hwy, Bastrop. Studies include belly dancing and flamenco dancing. Family environment; all are welcome. Call Sharon 284-0271 or Colleen 293-3895. n 6:30 p.m. – Lost
Pines Artisans Alliance meets every third Tuesday in the Mary Nichols Art Gallery, 301 Burleson, Smithville. Call 3604347. n 6:30 p.m. The Lost Pines Garden Club meets the second Tuesday each month from September - May at the Bastrop Public Library, 1100 Church St. Meetings open to public; light refreshments. Call Norman Jones at 985-5741. n 6:30 p.m. – Outdoor Women, a group that provides women the opportunity to learn and experience outdoor activities in natural surroundings meets the third Tuesday of every month at Bastrop Public Library. n 7 p.m. – Bastrop Fine Arts Guild meets the first Tuesday of each month, except December, in the Aqua Water building, 415 Old Austin Hwy. Call Jeanette Condray 576-1100. n 7 p.m. – Does someone’s drinking bother you? If so, there is a solution. Learn how. Smithville Al-Anon Family Group meets Tuesday evenings at 7 p.m. at the First Presbyterian Church fellowship hall at 300 Burleson in Smithville. For more information, call Lori at (512) 237-4790. n 7:30 p.m. – Bastrop County Audubon Society meets every 3rd Tuesday in the community room of the First National Bank, 489 Texas 71 W, Bastrop. Call (512) 2812762. n 7:30 p.m. – VFW Ladies Auxiliary Post 2527 meets every second Tuesday of the month at 1503 FM 20 in Bastrop. Call Sophie Gregory at (512) 7504515..
Wednesdays
n 8 a.m. – Edward Jones coffee club meets the first Wednesday each month at the office of Charles J. Mazac, 3851 Hwy 71 East, Bastrop (log cabin near Lakeside Hospital). The club is an informal gathering discussing current events, the economy and investing ideas. RSVP to Stephanie at 321-5525 or just show up. n 11 a.m. - Take Off Pounds Sensibly (TOPS) meets every Wednesday at Greater Texas Federal Credit Union, 115 Hunters Crossing Blvd. in Bastrop. More information about TOPS may be found at n noon – The Lost Pines Lions Club meets the second and fourth Wednesday of each month at The Texas Grill, 101 Texas 71. Call Milli at (512) 985-6671 for more information. n 12:30 p.m. – A group of bridge players meet every Wednesday afternoon at the Bastrop Senior Center on Water Street. Anyone is welcome to become a member; call 704-7741 or e-mail petemcdaniel1@gmail.com. n 6:30 p.m. – Friends of the Bastrop County Animal Shelter meets every Wednesday at First National Bank conference room, 489 Texas 71 West. Call 321-1415 for more information. n 6:30 p.m. – Overeaters Anonymous meets every Wednesday at Cedar Creek United Methodist Church. Contact Glenda at 303-1393 for more information. n 7 p.m. – The Bas-
trop Philosophical Society meets the last Wednesday each month to discuss issues of weight and importance. Meets at the Upstart Media Arts Center at 1800 Linda St. Attendance is free. n 7 p.m. – Popcorn LIVE!, a Bastrop-area movie club, meets the third Wednesday of every month at the Upstart Media Arts Center (UMAC) at 1800 Linda St. behind Bastrop High School. Free; all movie-lovers welcome; light refreshments. As with book clubs, there is discussion afterward. Call 303-1531. n 7 p.m. - Bastrop County Taxpayers Association meets the fourth Wednesday of every month at First National Bank Community Room, 489 Texas 71 West. If you are concerned about runaway spending and high taxes, come join this group.
Thursdays
n 8 a.m. – The Bastrop County Referral Network meets every Thursday morning, at Deli Depot, 1006 Main St. Call 722-4236 or 303-5762. n noon - 1 p.m. – The Cedar Creek Rotary Club meets each Thursday at the Hyatt Lost Pines. Call Shawna Elliott at (512) 9881546. n 2 p.m. – Parkinson’s Support Group meeting is held the fourth Thursday of every month at Argent Court Assisted Living Facility at 508 Old Austin Highway. n 5:30 p.m. – Diabetes Support Group sponsored by Smithville Regional Hospital, on the 2nd Thursday of each month in the Education Building of the Bastrop Christian Church, 1106 Church St., (next to the Bastrop Public Library). Call Cyndi or Shirley at (512) 360-2002.
n 6-8 p.m. - Giggles and Screams Comedy Writers meets every Thursday at the UMAC, 1800 Linda St., Bastrop. This group is part of Upstart, a nonprofit media arts organization that runs Bastrop Community Access Television. Giggles and Screams is led by writer Carolyn Banks, is free and is open to pros as well as wannabes. The group watches two episodes of an established sitcom every week, then writers talk about their own work. n 7 p.m. – Bastrop Area Cruisers Car Club meets the 1st Thursday each month at the Hyatt Lost Pines. Buffet dinner at 6 p.m. for $10. Open to members or any car enthusiasts. Call (512) 237-2306. n 7:30 p.m. – Bastrop County Bass Club meets the fourth Thursday of every month in the upstairs
meeting room of First National Bank, next to Bealls in Bastrop. Contact president Jim Deiso at jim.deiso@ bastropbassclub.com or call (512) 239-9112.
Fridays
n 7:15 a.m. – The Bastrop Area Breakfast Club meets the 2nd Friday of each month for breakfast, speakers and networking at Cedars Mediterranean Grill, 904 College St. Cost is $10, which includes breakfast. RSVP to Patty Webb at bastropabc@ yahoo.com or call (512) 415-6321.
Miscellaneous
n La Leche League of Bastrop, a nonprofit organization offering support and information to pregnant and breastfeeding mothers, meets twice monthly. Contact Cara JonesStockdale for details at LLLBastrop@gmail. com or (512) 772-1233.
��������������
���������������������� ��������������
��������������������
�������������������������������� ��������������
�����������������������
���������������������������
����������
����������������
��������������������
Fleetwood, Clayton and Tierra Verde homes. Starting as low as $33,900 delivered with A/C connected 3/2. See if we have a program that fits your budget. Fayette Country Homes 800-369-6888 Open till 6pm 7days a week. ( RBI 32896 )
WORKERS NEEDED for Electrical Substation Project in East Austin near Manor. Workers needed for concrete construction, underground electrical conduit duct bank, precast concrete manholes & steel erection. $12.00 to $18.00 per hour depending on abilities. 10 to 20 hrs. overtime weekly. Drug free workplace. Contact Douglas @ 830-629-5808.
HOUSE FOR RENT 3/Bedroom, 1/Bath 308 Byrne St.,Smithville, TX. Call 512-237-2108 for appt. No pets.
CITATION BY PUBLICATION CAUSE NO. 11573 THE COUNTY OF BASTROP, TEXAS PLAINTIFF VS. PLEASANT CHAPEL CHURCH ET AL DEFENDANT IN THE DISTRICT COURT 335TH JUDICIAL DISTRICT BASTROP COUNTY, TEXAS THE STATE OF TEXAS COUNTY OF BASTROP In the name and by the authority of the State of Texas, Notice is hereby given as follows: TO: Pleasant Chapel Church aka Pleasant Chappel A. M. E. Church of Walnut Creek, if living, and if any or all of the above named defendants be dead, the unknown heirs of each or all of said above named persons who may be dead, and the unknown heirs of the unknown heirs of said above named persons, and the unknown owner or owners of the hereinafter described land, and the executors, administrators, guardians, legal representatives, lega-tees and devisees of the above named persons, and any and all other persons, including adverse claimants, the un-known stockholders of any defunct cor-porations, their successors, heirs and assigns, owning or having or claiming any legal or equitable interest in or lien upon the following described property, delinquent to Plaintiff herein, for taxes, all of said property being located in said County and State, to-wit: Tract: 1 Account Number: R93957 Property Description: 1.0000 acre, more or less, Abstract 339, Samuel Wolfenberger Survey, Bastrop County, Texas Approximate Property Address: 601 Pleasant Chapel Road 78612 Deed Reference: Volume 144, Page 288, Deed Records, Bastrop County, Texas Assessed Name: PLEASANT CHAPEL CHURCH Which said property is delinquent to Plaintiff for taxes in the following amount: $4,049.47, exclusive of interest, penalties and costs, and there is included in this suit in addition to the taxes, all said interest, penalties and costs thereon, allowed by law up to and including the date of judgment herein. You are hereby notified that suit has been brought by the The County of Bastrop, Texas, Plaintiff, against the above named persons, as Defendants, by petition filed on August 24, 2012, in a certain suit styled The County of Bastrop, Texas vs. Pleasant Chapel Church et al, for collection of taxes on said property and that said suit is now pending in the District Court of Bastrop County, Texas 335th Judicial District, and the file number of said suit is 11573, that the names of all taxing units which assessors, shall take notice that claims not only for any taxes which were delinquent on said property at the time this suit was filed but all tax-es becoming delinquent thereon at any time thereafter up to the day of judg-ment, including all interest, penalties and costs allowed by law thereon, may, upon request, therefore on the first Mon-day after the expiration of forty-two (42) days from and after the date of issuance of this citation as set out below, said appearance and answer date being the 10th day of December, 2012, (which is the return day of such citation), before the honorable District Court of Bastrop. This citation is issued and given under my hand and seal of said Court in the City of Bastrop, Bastrop, Texas, this the 26th day of October, 2012. SARAH LOUCKS, Clerk of the District Court Bastrop County, Texas, 335th Judicial District PO Box 770 Bastrop, Texas 78602 (512) 332-7244 Fax: (512) 332-7249 Stacy Ott, Deputy
Tax Return Pre-Approval for 2013. Program for 3,4,5 bedroom doublewide. Programs starting with 575 Credit Score or Higher. Lets get started today. Fayette Country Homes Schulenburg 800-369-6888 Open Sundays 1-6. ( RBI 32896)
������������
HIGHWAY 71 1 Bedroom, 1 Bath with study, $650/mo. 2BR/1BA, $650/mo. Call 512-321-3333. 2BR/2BA WEST OF BASTROP. Clean, modern, CA/CH, storage, washer/dryer room, sunroom, carport. $800 +deposit. 321-4964.
Do Not Wait. Let us see what you need to do to purchase a home. 2013 Refunds just around the corner. Get Pre Approved, Select your home from a large selection. Single, Doubles, New or Used. Fayette Country Homes off Interstate 10 and Hwy 77. 979-743-6192 Call for more information. (RBI 32896)
FRIDAY 23RD & SATURDAY 24TH. Baby things, toys, jewelry, furniture & more. 5066 FM 535.
����������������������������
HUGE MULTI-FAMILY YARD SALE & Ranch sale, 11/24, 7am-2pm, FM 535 in Cedar Creek, 512-922-3377.
HOMES FOR RENT from $500.00/mo. 2BR, 3BR & 4BR. Direct TV/Internet $35/mo. Lots $300/mo. RV Lots available. 512-308-1030.
����������
����������������������
��������������������������
SHELLED PECANS $9.00 lb. Maurice Lowden 100 Maynard, Bastrop. 303-0703. Tell A Friend
OFFICE/WAREHOUSE SPACE Hwy. 71. 30 to 50 cents per foot. 1500 ft. & up. Between Bastrop & Smithville. Rick 512-845-4487.
55”MITSUBISHI TV with speakers, HD 1080p, great picture. $300. 512-360-4614 if no answer leave message.
��������������������
SERTA MATTRESS COMPANY IN LOCKHART, TEXAS HAS IMMEDIATE OPENINGS FOR SEWING MACHINE MECHANICS. Please send resumes to hr_manfgjobs@yahoo.com or fax to 512-398-2264. MARKET RESEARCH PARTICIPANTS wanted. Need market research participants to evaluate local establishments. Apply free: Shop.BestMark.com or call 800-969-8477
City of Elgin
Communications Operator see elgintx.com
Want to work on a ranch? If so we need you! Down Home Ranch in Elgin (between Taylor and Elgin) needs part-time kitchen assistant/instructor. Hours are morning to early afternoon. We are also looking for weekend residential assistants. This is a part-time Friday at 4:30 pm to Sunday at 5 pm sleep-over position working with persons with mild to moderate intellectual disabilities. Must pass extensive background checks and have excellent driving record. Fax inquiries to 512-856-0256 or bring/mail to 20250 FM 619, Elgin, TX, 78621. Please visit our website for more information and to fill out an employment application. HANDYMAN/HELPER Part-time. Yardwork, fencing, house repair. Must speak English. Ideal for retired person. Call Dave 512-360-4894.
FIVE 2012 JEEP WRANGLER Sahara tires, less than 1k miles. $700 for all! Michael 512-665-0789.
�������������������� 12 FOOT LIVESTOCK TRAILER, tandem axle, fresh paint. $1,000. OBO. 512-321-3499.
��������������������
BY OWNER - 2 HOUSES: 2600sqft & 4900sqft on 5 acre lot in La Reata Subdivision wildlife preserve in Smithville Tx. Off Route 304. Restricted community. $750,000. 512-517-7722 LOW DOWN 3BR/2BA, 1056 sq ft. Only $62,800 w/$800 dn 104 Bunte St, Smithville Call Mr Smith 855-847-6806 LOW DOWN 3BR/2.5BA 1583 sq ft. Only $85,800 w/$1600 dn 606 Washington St., Smithville Call Mr. Smith 855-847-6806
�������������������������������� A SINGLE PARENT & 1ST TIME BUYER Program call to qualify for 3, 4 and 5 bedroom homes with land. 512-295-7803 rbi #34010 3 ACRES W/ BIG DOUBLEWIDE $900 per month p/i with $5000 Down 360 months 4.25% int Close drive to Austin. Call 512-295-7803 rbi 34010 LOVE YOUR KIDS? Give them Award Winning Schools w/ 4/2 home on nice large lot in the country will do in house financing. Call Paul 512-240-3232 16x76 3 Bed 2 Bath ONLY $14,900! Financing Available! Call 512-385-2077 3 BEDROOM 2BATH MODULAR HOME ON ACRE Only $621 (PI) per month! Will Finance! Call 512-385-2077 BANK FORECLOSURE RESOURCE CENTER! Homes Set Up on Land! Take Over Payments! Call for available listings! Call 512-385-2077
��������������������� 61+ ACRES, 3 Large Stock Tanks, Cross Fenced, Great View. South of Elgin. 512-940-5200.
��������������������� A FURNISHED SINGLEWIDE $18,000 delivered w/ in house Financing available rbi 34010 Call 512-295-7803 20 ACRES W/ BARN, Fencing and 4/2 mobile home. $9,000 down. $1300 per month p/i 3.75% int 360mo. Call 512-240-3232
����������
����������
NOTICE TO SEEK PROPOSALS The Dime Box Independent School District is seeking proposals for contracting custodial services for the 2012-2013 school year beginning on February 4, 2013 with a twelve months contract. Download the RFP document by clicking on the link below or contact Stephanie Kieschnick in the Superintendent's office (979) 884-2324 to have one mailed. Proposals must be mailed or hand delivered by Wednesday, December 12, 2012 by 2:00 PM at the Dime Box ISD Superintendent's Office, P.O. Drawer 157, 1079 Stephen F. Austin Blvd., Dime Box, Texas 77853. Proposals will be considered at the December 20, 2012 meeting of the Dime Box ISD Board of Trustees. Interviews will be conducted at the January 17, 2013 DBISD Board of Trustees meeting. Proposals will not be accepted electronically or by fax.
The Planning and Zoning Commission will hold a Public Hearing on December 4, 2012 at 6:00 p.m. in the Council Chambers of City Hall to discuss a minor plat. A recommendation will be given at the City Council meeting by the Planning and Zoning Commission. The City Council will hold a public hearing to discuss and seek action on December 10, 2012 at 7:00 p.m. for the following location: 1655 NE Loop 230 Webb Addition, Lot 22-C, Acres 1.3290 Owner - Ray & Kim Lerche NOTICE OF PUBLIC HEARING Notice is hereby given that the Board of Directors of the Lost Pines Groundwater Conservation District is considering its 2013 Budget and will conduct a public hearing and receive public testimony pertaining to the proposed budget at its meeting on December 19, 2012, beginning at 7:00 o'clock P.M., at Giddings City Hall, 118 E. Richmond St., Giddings, TX 78942. Those persons who wish to review the budget may do so by accessing the District's website or, if they wish to receive a hard copy of the proposed 2013 budget in the mail, they should make their request by writing the District at the address below. Questions may be directed to the District by contacting the Assistant Secretary at 512-360-5088 or this email address, peggy_campion@yahoo.com. Those persons who wish to provide written comments about the budget for the Board to consider should forward their comments to this email address or mailing them to the District at P.O. Box 1027, Smithville, Texas 78957, prior to December 14, 2012. Date: November 15, 2012 Peggy Campion Assistant Secretary CITATION BY PUBLICATION STATE OF TEXAS COUNTY OF BASTROP Cause No. 10,267; Estate of Benjamin D. Carroll, Deceased, County Court at Law, Bastrop County, Texas The alleged heir(s) at law in the above-numbered and entitled estate have filed an Application Determine Heirship in this estate on the 6th day of November, 2012, requesting that the Court determine who the heirs and only heirs of BENJAMIN D. CARROLL, Court Clerk on or before the above-noted date and time. GIVEN UNDER MY HAND AND THE SEAL OF SAID COURT at office in Bastrop, Texas, this 6th day of November, 2012. ROSE PIETSCH Clerk of the Bastrop County Court By: Mindy Supak, Deputy P.O. Box 577, Bastrop, Texas 78602 PUBLIC HEARING NOTICE The Bastrop County Commissioners Court will hold a public hearing, pursuant to Section 257.022 of the Texas Transportation Code, to consider abolishing Bastrop County Road District #3. The hearing will be held at 9:30 a.m., Monday, November 26, 2012 in the Commissioners Courtroom, Second Floor, 804 Pecan St. Bastrop, Texas.
�������������������������� ��������������������������������
The Planning and Zoning Commission will hold a Public Hearing on December 4, 2012 at 6:00 p.m. in the Council Chambers of City Hall to discuss a variance for setbacks and 30% coverage. A recommendation will be given at the City Council meeting by the Planning and Zoning Commission. The City Council will hold a public hearing to discuss and seek action on December 10, 2012 at 7:00 p.m. for the following location: 503 Cleveland Street Original Townsite, Blk 3, Lot 11 Owner - Bob & Linda Estus
��������������
Page B6 • The Bastrop Advertiser
Thursday, November 22, 2012
Guerra takes Upstart reins SUDOKU Upstart recently announced that, effective immediately, Colin Guerra has been named its executive director and will take over all of the nonprofit media arts organization’s daily operations. Guerra is a wellknown videographer, audio expert and musician and recently ran for the Texas Legislature. He co-produced the Upstart documentary, “The Fire.” Prior to this appointment, he served as Upstart’s operations manager, primarily running Bastrop Community Access Television for the City of Bastrop and the web site. Many events are held at the Upstart Media Arts Center, which is just off highway 95, just a short distance east of Bastrop’s historic downtown. The Upstart campus also includes the nearly 7,000 sq. ft. Annabelle Resource Center for movies, television and
Contributed photo
Collin Guerra has been named executive director of Upstart. new media, a place filmmakers can use to build sets, store props and costumes or even shoot. Upstart also offers courses in writing, video production, sound, lighting and other aspects of the business. In addition to classes and filming local meetings and events,
Upstart provides the local community with the opportunity to make television shows and movies via the public access television station. Programming includes Bastrop News Update, City Talk, City Council, County Commissioners and School Board meetings, but also narrative shorts and videos on features that tourists might enjoy. Former executive director Carolyn Banks will serve on the Upstart board and as creative consultant. She will also be Upstart’s liaison with arts and cultural organizations in the city.
Send us your entertainment news The Bastrop Advertiser (512) 321-2557 news@ bastropadvertiser.com or to the The Smithville Times CALL 237-4655 news@smithvilletimes. com
EVENING TV LISTINGS
11/22/12
Like puzzles? Then you’ll love sudoku. This mindbending puzzle will have you hooked from the moment you square off, so sharpen your pencil and put your sudoku savvy to the test.
Here’s how it works
S u d o k u.
NOVEMBER 23 - 27, 2012
FRIDAY EVENING 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 23, 2012 6PM 6:30 7PM 7:30 8PM 8:30 9PM 9:30 10PM 10:30 11PM 11:30 Fox 7 News Edge at Nine Big Bang Simpsons Simpsons TMZ (N) ’ Kitchen Nightmares Å Fringe (N) ’ Å 2 FOX _ TMZ (N) ’ Big Bang Last Man Malibu News Football Nightline (N) J. Kimmel Shark Tank ’ Å (:01) 20/20 ’ Å 3 ABC 8 KVUE News Ent Guys-Kids Grimm ’ Å KXAN News Tonight Show w/Jay Leno Jimmy Fallon Go On Å Dateline NBC (N) ’ Å 4 NBC D KXAN News Wheel Frosty Frosty Hoops & SpongeBob! Blue Bloods ’ Å K-Eye News Late Show W/Letterman Ferguson 5 CBS J K-Eye News Two Men Washington Need Charlie Rose PBS NewsHour (N) Å Moyers & Company Å Great Performances: Andrea Bocelli Live in Central Park ’ Å 9 PBS 2 KXAN News Engagement The Office The Office 30 Rock ’ 30 Rock ’ The Happy Elf ’ Å 12 MNT V Family Feud Family Feud Grandma Got Run Over Law Order: CI Monk ’ Å Monk ’ Å Frasier ’ Football Cold Case Files ’ Å 23 CW % EP Daily (N) Frasier ’ To Be Announced To Be Announced To Be Announced To Be Announced To Be Announced To Be Announced 60 A&E Terminator 2 ›› “Constantine” (2005, Fantasy) Keanu Reeves, Rachel Weisz. ‘R’ Comic Men ››› “Terminator 2: Judgment Day” The Walking Dead Å 63 AMC (5:00) “Friday After Next” ››› “Coming to America” (1988) Eddie Murphy, Arsenio Hall. Eddie Murphy: One Night Only The Wendy Williams Show 73 BET Reba Å Reba Å The 46th Annual CMA Awards Honoring country music industry members. ’ Å Crossroads (N) ’ Behind the Music ’ Å 70 CMT Erin Burnett OutFront (N) Anderson Cooper 360 (N) Piers Morgan Tonight (N) Anderson Cooper 360 Erin Burnett OutFront Piers Morgan Tonight 46 CNN Harold (:16) › “Half Baked” (1998) Dave Chappelle. Å (:17) ›› “Jackass: The Movie” (2002, Comedy) Å (:19) ›› “Jackass: Number Two” (2006) 59 COM Dog Gravity Falls A.N.T. Farm Good-Charlie A.N.T. Farm Gravity Falls Shake It Up! Jessie Å Jessie Å A.N.T. Farm (N) ’ Å 42 DISN Jessie Å Flying Wild Alaska Å Gold Rush The Dirt (N) ’ Gold Rush (N) ’ Å Jungle Gold (N) ’ Å Gold Rush ’ Å Jungle Gold ’ Å 34 DSC College Football Arizona State at Arizona. (N) (Live) 52 ESPN College Football Teams TBA. (N) (Live) Fresh Prince Fresh Prince (5:30) ›› “Nanny McPhee” (2005) ›› “Nanny McPhee Returns” (2010) Emma Thompson. Premiere. The 700 Club ’ Å 37 FAM Diners, Drive Diners, Drive Diners, Drive Diners, Drive 32 FOOD Diners, Drive Diners, Drive Diners, Drive Diners, Drive Diners, Drive Diners, Drive My. Diners Health UEFA Mag. Spurs Live NBA Basketball San Antonio Spurs at Indiana Pacers. (Live) Spurs Live Spurs In. Football Sportsday High School (N) 54 FX SW Jingle & Bell Jingle “It’s Christmas, Carol!” (2012) Carrie Fisher. Å “The Most Wonderful Time of the Year” (2008) Å 39 HALL Jingle & Bell Jingle Cnt. Cars Cnt. Cars Pickers Love- 1880’s How the Cajun Pawn (:01) American Pickers American Pickers Å American Pickers Å 61 HIST Psych “Christmas Joy” ’ “12 Wishes of Christmas” (2011) Elisa Donovan. ’ ››› “The Natural” (1984, Drama) Robert Redford, Robert Duvall. Premiere. ’ 38 ION “The Wedding Planner” My Life Is a Lifetime Movie (:01) “Pretty Woman” ››› “Pretty Woman” (1990, Romance-Comedy) Richard Gere, Julia Roberts. 26 LIFE Doomsday Bugged Out Doomsday Bugged Out Doomsday Preppers Doomsday Bugged Out Doomsday Bugged Out Doomsday Preppers 51 NGC Kung Fu My Wife My Wife George George Friends ’ Friends ’ Hollywood Heights Å 41 NICK Figure It Out Victorious Kung Fu ›› “Star Wars: Episode II -- Attack of the Clones” (2002) ’ 69 SPIKE ›› “Star Wars: Episode I -- The Phantom Menace” (1999) Liam Neeson, Ewan McGregor. ’ Live & Let WWE Friday Night SmackDown! (N) ’ Å ›› “Quantum of Solace” (2008) Daniel Craig, Olga Kurylenko. Å 58 SYFY (4:00) “Casino Royale” (5:00) “Dial M for Murder” ›››› “My Fair Lady” (1964, Musical) Audrey Hepburn, Rex Harrison. Å ››› “Camelot” (1967, Musical) Richard Harris. Å 64 TCM Say Yes Say Yes Say Yes Say Yes Brides-Hills Brides-Hills Say Yes Say Yes Brides-Hills Brides-Hills Along for the Bride Å 35 TLC Obsessed The Mentalist ’ Å ›› “Kiss the Girls” (1997) Morgan Freeman, Ashley Judd. Å ›› “Disturbia” (2007, Suspense) Shia LaBeouf. Å 67 TNT Andy Griffith The Andy Griffith Show Andy Griffith Raymond Raymond Cleveland Divorced Raymond Raymond King King 40 TVL (4:00) Weather Center Live Twist of Fate Twist of Fate Plane Xtr. Plane Xtr. Twist of Fate Twist of Fate Plane Xtr. Plane Xtr. Weather Center Live Å 45 TWC (5:00) ›› “Bad Boys II” (2003) Martin Lawrence. Å ›› “Fast & Furious” (2009, Action) Vin Diesel. Å ›› “Bad Boys II” (2003, Action) Martin Lawrence. Å 66 USA VH1 Special ’ VH1 Special ’ VH1 Special ’ VH1 Special ’ VH1 Special ’ VH1 Special ’ 71 VH1 How I Met How I Met How I Met How I Met WGN News at Nine (N) ’ Funniest Home Videos Engagement Engagement 21 WGN-A Funniest Home Videos Better Worse Better Worse Better Worse Better Worse This C’mas 65 WTBS (5:00) “Four Christmases” ›› “Fred Claus” (2007, Comedy) Vince Vaughn, Paul Giamatti. Å “Dazed and Confused” ‘R’ ››› “Open Range” (2003, Western) Robert Duvall. ’ ‘R’ Å Billy Mad (:20) ›› “Batman Returns” (1992) Michael Keaton. ’ 760 ENC Mr. Poppers 24/7 REAL Sports Gumbel Littl Fock “Crossfire Hurricane” (2012) ’ ‘NR’ Å (:45) ›› “The Hangover Part II” (2011) ’ ‘R’ Å 16 HBO 24/7 Boardwalk Empire Å Treme ’ Å ››› “The American” ‘R’ 702 HBO2 X-Men: First ›› “Cowboys & Aliens” (2011) Daniel Craig. ‘PG-13’ Man Moon ››› “Crazy, Stupid, Love.” (2011) Steve Carell. Å “Endure” (2010) Devon Sawa. ’ ‘R’ Å 703 HBOS (:10) ››› “Field of Dreams” (1989) Kevin Costner. ’ (4:35) “Bridesmaids” (2011) (6:50) ›› “Fast Five” (2011) Vin Diesel. ‘PG-13’ Å Skin to Max Hunted “Polyhedrus” ’ Emmanuelle Hunted “Polyhedrus” ’ 715 MAX “A Very Harold & Kumar 3D Christmas” (:20) ›› “Elektra” (2005) ‘PG-13’ Å Hunted “Ambassadors” ’ ›› “Due Date” (2010) ’ ‘R’ Å 716 MMAX Very 3D Joan Rivers: Don’t Start Jim Rome on Showtime › “I Don’t Know How She Does It” Å 730 SHOW Perfect Sc (:45) ››› “50/50” (2011) Joseph Gordon-Levitt. ‘R’
SATURDAY EVENING 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 24, 2012 6PM 6:30 7PM 7:30 8PM 8:30 9PM 9:30 10PM 10:30 11PM 11:30 Fox 7 News Edge at Nine MasterChef Å (DVS) 30 Seconds 2 FOX _ College Football Teams TBA. (N Subject to Blackout) ’ (Live) Å News Inside Edit. ’70s Show College Football Notre Dame at USC. (N) (Live) Å 3 ABC 8 KVUE News Tailgaters KXAN News (:29) Saturday Night Live ’ Å ›› “Indiana Jones and the Kingdom of the Crystal Skull” (2008) Harrison Ford. 4 NBC D KXAN News Wheel Made in Jersey “Camelot” NCIS “Devil’s Triangle” ’ K-Eye News CSI: Miami ’ Å Ring, Honor 48 Hours (N) ’ Å 5 CBS J K-Eye News Two Men Motown: Big Hits and More Austin City Limits Å The British Beat (My Music) ’ Å 9 PBS 2 Doo Wop Discoveries (My Music) ’ Å KXAN News Engagement The Office King Friends ’ The Closer “You Are Here” The Closer “Batter Up” Seinfeld ’ Old Christine 12 MNT V Friends ’ Daryl House Wrestlers TXRD Lonestar Rollergirls Leverage Å 23 CW % EP Daily (N) EP Daily (N) Star Wars: The Clone Wars Law & Order “Slave” ’ Storage Storage Storage Storage Storage Storage Storage Storage To Be Announced Storage Storage 60 A&E (4:30) ›› “Big Jake” ››› “Appaloosa” (2008, Western) Ed Harris. Premiere. ‘R’ Å ››› “Appaloosa” (2008, Western) Ed Harris. ‘R’ Å 63 AMC (4:30) “The Best Man” Å ››› “Barbershop 2: Back in Business” (2004) Ice Cube. Å “Dysfunctional Friends” (2011, Comedy) Stacey Dash. Å 73 BET Redneck Island Redneck Island (N) Chainsaw Big Texas Redneck Island Chainsaw Big Texas Reba Å Reba Å 70 CMT CNN Newsroom (N) Piers Morgan Tonight CNN Newsroom (N) Piers Morgan Tonight CNN Presents Å CNN Presents Å 46 CNN Tosh.0 Tosh.0 It’s Always Sunny in Phila. Kyle Kinane It’s Always Sunny in Phila. (4:58) ›› “Office Space” (6:59) Tosh.0 Tosh.0 59 COM Dog Austin & Ally Gravity Falls Good-Charlie Jessie Å Austin & Ally Jessie Å 42 DISN A.N.T. Farm ›› “Alice in Wonderland” (2010) Johnny Depp. ‘PG’ Outlaw Empires ’ Å Outlaw Empires ’ Å Outlaw Empires ’ Å Alaska Marshals ’ Å Outlaw Empires ’ Å Alaska Marshals ’ Å 34 DSC (:45) College Football Teams TBA. (N) (Live) (:45) SportsCenter (N) (Live) Å SportsCenter (N) Å 52 ESPN Score (5:00) “Home Alone 4” “Unaccompanied Minors” ››› “Home Alone” (1990, Comedy) Macaulay Culkin. ›› “Richie Rich” (1994, Comedy) Macaulay Culkin. 37 FAM My. Diners My. Diners 32 FOOD Diners, Drive Diners, Drive My. Diners My. Diners My. Diners My. Diners My. Diners My. Diners Iron Chef America Mavs Insider Mavs Live NBA Basketball Los Angeles Lakers at Dallas Mavericks. (Live) Mavs Live Mavs Insider Big 12 Live (N) (Live) 54 FX SW C’boys “The Wishing Tree” (2012, Drama) Jason Gedrick. Å “Matchmaker Santa” Å 39 HALL (5:00) “Christmas Magic” “Naughty or Nice” (2012) Hilarie Burton. Premiere. Å Built America Pawn Stars Pawn Stars Pawn Stars Pawn Stars Mankind The Story Mankind The Story of All of Us “Empires” Å 61 HIST Law Order: CI Law Order: CI House Life of a young boy. House “Sleeping Dogs Lie” House “Clueless” Å House “Safe” ’ Å 38 ION “The Christmas Hope” “March Sisters” “The March Sisters at Christmas” (2012) Premiere. Å “Holiday Spin” (2012, Drama) Ralph Macchio. Å 26 LIFE Alaska State Troopers Alaska State Troopers Doomsday Preppers Doomsday Preppers Doomsday Preppers Doomsday Preppers 51 NGC Victorious Victorious Yes, Dear ’ Yes, Dear ’ Friends ’ Friends ’ Friends ’ Friends ’ 41 NICK Victorious Victorious Victorious Marvin ›››› “Star Wars IV: A New Hope” (1977) Mark Hamill, Harrison Ford. ’ ›› “Reign of Fire” (2002, Fantasy) Christian Bale. ’ 69 SPIKE “Star Wars: Ep. III” “Dungeons & Dragons: The Book of Vile Darkness” “Dungeons & Dragons: Wrath of the Dragon God” Å 58 SYFY “Dungeons & Dragons: Wrath of the Dragon God” Å (4:30) ››› “Gypsy” Å ››› “Jezebel” (1938, Drama) Bette Davis. Å (DVS) ›››› “Ben-Hur” (1959, Historical Drama) Charlton Heston, Jack Hawkins. 64 TCM 20/20 on TLC ’ Å 20/20 on TLC ’ Å 20/20 on TLC (N) ’ Å 20/20 on TLC (N) ’ Å 20/20 on TLC ’ Å 20/20 on TLC ’ Å 35 TLC (4:45) ›› “Kiss the Girls” ›› “Angels & Demons” (2009, Suspense) Tom Hanks, Ewan McGregor. Å (DVS) ›› “The Da Vinci Code” (2006) Tom Hanks. Å 67 TNT Divorced Divorced Divorced Divorced Divorced Divorced Divorced Divorced Raymond Raymond King King 40 TVL Weather Center Live (N) Hurricane Hurricane Twist of Fate Twist of Fate Weather Center Live (N) Hurricane Hurricane Twist of Fate Twist of Fate 45 TWC Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU 66 USA T.I. and Tiny Marry-Game › “Double Take” (2001) Eddie Griffin. Premiere. ’ Basketball Wives LA ’ Basketball Wives LA ’ Basketball Wives LA ’ 71 VH1 Funniest Home Videos NBA Basketball Chicago Bulls at Milwaukee Bucks. (N) (Live) News/Nine Bones ’ Å 21 WGN-A Funniest Home Videos King Big Bang Big Bang Big Bang Big Bang Wedding Band (N) Wedding Band ›› “Shrek the Third” 65 WTBS King (4:50) “Air Force One” ‘R’ ››› “The Lord of the Rings: The Return of the King” (2003) Elijah Wood. ’ ‘PG-13’ Å ›› “Under Siege 2: Dark Territory” ‘R’ 760 ENC Harry Potter (:45) ›› “Red Tails” (2012) Cuba Gooding Jr. ‘PG-13’ Å 24/7 Boxing Andre Berto vs. Robert Guerrero, Welterweights. (N) Å 16 HBO (10:55) “Pulp Fiction” ‘R’ Treme ’ Å Treme ’ Å Game of Thrones ’ Å True Blood “Sunset” Å 702 HBO2 (5:25) ››› “The Girl” ’ Little State-Grc. › “Little Fockers” (2010) ‘PG-13’ Å › “Envy” (2004) Ben Stiller. ‘PG-13’ Å 703 HBOS (5:50) ›› “Hereafter” (2010) Matt Damon. ‘PG-13’ Å Contagion (:20) ›› “Resident Evil: Apocalypse” (:35) Hunted “Polyhedrus” Skin to Max Hunted “Polyhedrus” ’ “Philly Kid” (2012) Wes Chatham. ‘R’ Å 715 MAX Die Hard ‘R’ Strike Back ’ Å ››› “Bridesmaids” (2011) Kristen Wiig. ’ ‘NR’ Å (:10) Sex Games Cancun Feature 1 Å 716 MMAX “Die Hard-Veng.” Homeland “I’ll Fly Away” War Horse ›› “Faster” (2010) Dwayne Johnson. ’ ‘R’ Å ›› “Red State” (2011) Michael Parks. Dexter Å 730 SHOW Homeland
SUNDAY EVENING 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 25, 2012 6PM 6:30 7PM 7:30 8PM 8:30 9PM 9:30 10PM 10:30 11PM 11:30 Burgers Family Guy Cleveland Fox 7 News Edge at Nine Big Bang Simpsons Burn Notice Å 2 FOX _ NFL Football The OT (N) Simpsons (:35) Entertainment Tonight (:35) Extra Once Upon a Time (N) ’ Revenge “Lineage” (N) ’ (:01) 666 Park Avenue (N) News 3 ABC 8 Funniest Home Videos KXAN News GameNight Raymond (:20) NFL Football Green Bay Packers at New York Giants. (N) ’ (Live) Å 4 NBC D Football Night in America (N) Å K-Eye News KEYE Sports Two Men Texas Music 60 Minutes (N) ’ Å The Amazing Race (N) ’ The Good Wife (N) Å The Mentalist ’ Å 5 CBS J Incredible Health-Joel Downton Abbey Revisited (N) Å Downton Abbey Revisited Å Austin City Limits Å 9 PBS 2 Happy Holidays KXAN News Engagement The Office King Seinfeld ’ Old Christine 12 MNT V First Family Box Office ››› “Hot Shots! Part Deux” (1993) Charlie Sheen. Star Wars: The Clone Wars Hollywood Whacked Private Practice ’ Å Private Practice ’ Å Castle ’ Å 23 CW % Law & Order “Girlfriends” Storage Storage Storage Storage Storage Storage Storage Storage Storage Storage Storage Storage 60 A&E (5:00) “Land of the Dead” The Walking Dead Å The Walking Dead (N) (:01) The Walking Dead Talking Dead Comic Men The Walking Dead Å 63 AMC Soul T. Awards Red Carpet 2012 Soul Train Awards Honoring music’s most soulful artists. (N) Apollo Live Don’t Sleep! Don’t Sleep! Don’t Sleep! 73 BET Facing ›› “Fireproof” (2008, Drama) Kirk Cameron, Erin Bethea. Premiere. (:45) ›› “Fireproof” (2008, Drama) Kirk Cameron, Erin Bethea, Alex Kendrick. 70 CMT CNN Newsroom (N) Piers Morgan Tonight CNN Newsroom (N) Piers Morgan Tonight CNN Presents Å CNN Presents Å 46 CNN (5:00) ››› “Mean Girls” ›› “The House Bunny” (2008) Anna Faris. Premiere. (:02) ›› “The House Bunny” (2008) Anna Faris. Å (:03) ››› “Bad Santa” 59 COM Dog Gravity Falls A.N.T. Farm Good-Charlie Dog Shake It Up! Jessie Å A.N.T. Farm Wizards Wizards Jessie Å 42 DISN Phineas Magic Magic MythBusters ’ Å MythBusters ’ Å MythBusters ’ Å MythBusters ’ Å MythBusters ’ Å 34 DSC Countdown 30 for 30 SportsCenter SportsCenter (N) (Live) Å SportsCenter 52 ESPN World/Poker SportsCenter (N) Å (5:00) “Home Alone” (1990) “Home Alone: The Holiday Heist” (2012) Premiere. “Home Alone: The Holiday Heist” (2012) Debi Mazar Joel Osteen Kerry Shook 37 FAM Sugar Dome (N) The Next Iron Chef Chopped (N) Iron Chef America The Next Iron Chef 32 FOOD The Next Iron Chef Texas A&M No-Huddle (N) College Basketball San Diego State at USC. (N) (Live) GameTime Spotlight 54 FX SW Texas A&M Football Show Inside Sooner Football “Annie Claus Is Coming to Town” (2011) Maria Thayer. “A Princess for Christmas” 39 HALL “Moonlight and Mistletoe” “Hitched for the Holidays” (2012) Joey Lawrence. Å Pawn Stars Pawn Stars Pawn Stars Pawn Stars Pawn Stars Pawn Stars Outback Hunters (N) Å To Be Announced Pawn Stars Pawn Stars 61 HIST “A Golden Christmas 2: The Second Tail” (2011) ’ ››› “Golden Christmas 3” (2012) Premiere. ’ “12 Wishes of Christmas” (2011) Elisa Donovan. ’ 38 ION “Love at the Christmas Table” (2012) Danica McKellar. “Liz & Dick” (2012) Lindsay Lohan. Premiere. Å (:01) “Love at the Christmas Table” (2012) Å 26 LIFE American Escort Inside Underground Poker Drugs, Inc. (N) Alaska State Troopers (N) Inside Underground Poker Drugs, Inc. 51 NGC Yes, Dear ’ Friends ’ Friends ’ Friends ’ Friends ’ 41 NICK SpongeBob SpongeBob See Dad Run ›› “Hotel for Dogs” (2009) Emma Roberts. ’ Å ›››› “Star Wars V: The Empire Strikes Back” (1980) Mark Hamill. ’ ›› “Red Dawn” (1984, Action) Patrick Swayze. ’ 69 SPIKE “Star Wars IV” (5:30) ›› “National Treasure: Book of Secrets” (2007) ››› “Indiana Jones and the Temple of Doom” (1984) Å ››› “Groundhog Day” (1993) 58 SYFY “The Lemon Drop Kid” Movie ›› “The Golden Voyage of Sinbad” (1974) Å ›› “Sinbad and the Eye of the Tiger” (1977) Å 64 TCM Extreme Extreme Extreme Cougar Wives (N) Sister Wives ’ Å Sister Wives ’ Å Sister Wives (N) ’ Å Extreme Cougar Wives ’ 35 TLC (4:00) “Angels & Demons” ››› “A Time to Kill” (1996) Sandra Bullock, Samuel L. Jackson. Å (DVS) ›› “Hide and Seek” (2005) Robert De Niro. Premiere. 67 TNT Cleveland Cleveland Cleveland Cleveland Cleveland Cleveland Cleveland Divorced Raymond Raymond King King 40 TVL Weather Center Live (N) Coast Guard Alaska Coast Guard Florida Coast Guard Alaska Coast Guard Florida Weather Center Live Å 45 TWC NCIS A girl is kidnapped. NCIS “Broken Arrow” ’ NCIS “Recruited” ’ Å NCIS ’ Å (DVS) ›› “Shutter Island” (2010) Leonardo DiCaprio. Å 66 USA (5:00) Saturday Night Live Saturday Night Live in the 2000s: Time and Again ’ Marry-Game T.I. and Tiny Couples Therapy ’ Basketball Wives LA ’ 71 VH1 How I Met How I Met How I Met How I Met News/Nine Replay 30 Rock ’ 30 Rock ’ Engagement Engagement 21 WGN-A Bloopers! ’ How I Met Wedding Band Just Married ››› “Ice Age” (2002) Premiere. (:45) ››› “Ice Age” (2002) Voices of Ray Romano. 65 WTBS (5:00) “Shrek the Third” (5:00) “Shanghai Knights” ›› “Robin Hood: Prince of Thieves” (1991) Kevin Costner. ‘PG-13’ ››› “Analyze This” (1999) ’ ‘R’ Å (:15) › “Bulletproof” ‘R’ 760 ENC (:05) ›› “Knight and Day” (2010) Tom Cruise. ‘PG-13’ Boardwalk Empire (N) ’ Treme “Tipitina” ’ Å (:15) Boardwalk Empire ’ (:15) Treme “Tipitina” ’ 16 HBO Tower Heist ›› “Hall Pass” (2011) Owen Wilson. ’ ‘R’ Å 24/7 Usual ›› “Tower Heist” (2011) Ben Stiller. ’ 702 HBO2 Boxing Martha › “Dream House” (2011) Daniel Craig. ››› “Collateral” (2004) Tom Cruise. ’ ‘R’ Å 703 HBOS (5:45) ››› “The Terminal” (2004) Tom Hanks. Å (5:05) ›› “Unknown” ’ ››› “Die Hard 2” (1990) Bruce Willis. ’ ‘R’ Å ›› “Fast Five” (2011) Vin Diesel. ’ ‘PG-13’ Å “Sexy Wives Sindrome” ’ 715 MAX Girl’s Guide (:10) ››› “Contagion” (2011) Marion Cotillard. Å Hunted “Polyhedrus” ’ › “This Means War” (2012) ‘PG-13’ Å 716 MMAX (3:45) ›››› “Titanic” Homeland “I’ll Fly Away” Dexter “Helter Skelter” (N) Homeland “Two Hats” (N) Dexter “Helter Skelter” ’ Homeland “Two Hats” ’ 730 SHOW Dexter “Argentina” Å
MONDAY EVENING 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 26, 2012 6PM 6:30 7PM 7:30 8PM 8:30 9PM 9:30 10PM 10:30 11PM 11:30 Fox 7 News Edge at Nine Big Bang Simpsons Simpsons TMZ (N) ’ Bones ’ Å The Mob Doctor ’ Å 2 FOX _ TMZ (N) ’ Big Bang Dancing With the Stars Extreme Makeover: Home Extreme Makeover: Home News Nightline (N) Jimmy Kimmel Live Å 3 ABC 8 KVUE News Ent KXAN News Tonight Show w/Jay Leno Jimmy Fallon The Voice The top 8 artists perform. (N) ’ (Live) Å (:01) Revolution (N) Å 4 NBC D KXAN News Wheel K-Eye News Two Men How I Met Partners (N) 2 Broke Girls Mike & Molly K-Eye News Late Show W/Letterman Ferguson Hawaii Five-0 (N) ’ Å 5 CBS J Doo Wop Discoveries (My Music) ’ Å Motown: Big Hits and More (My Music) ’ Å Charlie Rose (N) ’ Å 9 PBS 2 PBS NewsHour (N) Å KXAN News Engagement The Office The Office 30 Rock ’ 30 Rock ’ Gossip Girl (N) ’ Å 12 MNT V Family Feud Family Feud 90210 Liam is threatened. Law & Order: SVU Law & Order: SVU Law Order: CI Frasier Cash Cab Cold Case Files ’ Å 23 CW % EP Daily (N) Frasier ’ Intervention Å Intervention Å Intervention Å Intervention Å (:01) Intervention Å (:01) Intervention Å 60 A&E (4:30) ››› “Top Gun” ››› “A Few Good Men” (1992, Drama) Tom Cruise, Jack Nicholson. ‘R’ Å ›› “Behind Enemy Lines” (2001) Owen Wilson. Å 63 AMC 106 & Park: Top 10 Live Don’t Sleep! Soul Man The Wendy Williams Show New York Undercover ’ New York Undercover ’ New York Undercover ’ 73 BET Big Texas Big Texas Big Texas Chainsaw Chainsaw Chainsaw ››› “Gridiron Gang” Futurama ’ Futurama ’ Futurama ’ South Park Brickleberry South Park Daily Show Colbert Rep South Park South Park 59 COM Phineas Phineas A.N.T. Farm Wizards Wizards ›› “Sky High” (2005) ’ ‘PG’ Å Jessie Å 42 DISN Shake It Up! A.N.T. Farm Jessie Å American Chopper Å American Chopper Å American Chopper Å American Chopper Å American Chopper Å American Chopper Å 34 DSC SportsCenter (N) (Live) Å 52 ESPN (5:30) Monday Night Countdown (N) (Live) NFL Football Carolina Panthers at Philadelphia Eagles. (N Subject to Blackout) (Live) “Christmas Every Day” ››› “A Boy Named Charlie Brown” (1969, Comedy) ›››› “WALL-E” (2008) Voices of Ben Burtt. The 700 Club ’ Å 37 FAM Diners, Drive Diners, Drive 32 FOOD Diners, Drive Diners, Drive Diners, Drive Diners, Drive Diners, Drive Diners, Drive Diners, Drive Diners, Drive My. Diners Health NBA Basketball San Antonio Spurs at Washington Wizards. (N) (Live) Spurs Live College Basketball Big 12 Cowboys Game Time World Poker 54 FX SW “Debbie Macomber’s Trading Christmas” (2011) Å ›› “A Carol Christmas” 39 HALL “Annie Claus Is Coming” “The Most Wonderful Time of the Year” (2008) Å Pawn Stars Pawn Stars American Pickers (N) Pawn Stars Pawn Stars Love- 1880’s Pawn Stars Pawn Stars Pawn Stars American Pickers Å 61 HIST Criminal Minds “25 to Life” Criminal Minds “Corazon” Criminal Minds ’ Å Criminal Minds ’ Å Criminal Minds ’ Å Criminal Minds ’ Å 38 ION “His and Her Christmas” (:01) “Liz & Dick” (2012) “Liz & Dick” (2012) Lindsay Lohan, Grant Bowler. Å “Dear Santa” (2011) Amy Acker, Brooklynn Proulx. Å 26 LIFE Taboo “Changing Gender” Taboo “Beauty” Taboo “Ugly” Taboo “Fat” Taboo “Beauty” Taboo “Ugly” 51 NGC My Wife Yes, Dear ’ Yes, Dear ’ Friends ’ Friends ’ Hollywood Heights Å 41 NICK Figure It Out Figure It Out Friends ’ Friends ’ My Wife Tattoo Night. Tattoo Night. Tattoo Night. ››› “Star Wars VI: Return of the Jedi” (1983, Science Fiction) Mark Hamill, Harrison Ford. ’ 69 SPIKE “Star Wars VI: Return” ›› “The Mist” (2007, Horror) Thomas Jane, Marcia Gay Harden. Å ›› “The Mist” (2007) Thomas Jane. 58 SYFY (5:30) ››› “Indiana Jones and the Temple of Doom” (5:30) “The Steel Trap” ››› “Pride and Prejudice” (1940) Greer Garson. (:15) ››› “Jane Eyre” (1944) Joan Fontaine. Å “Great Expectations” Å 64 TCM Cake Boss Cake Boss Cake Boss Cake Boss Cake Boss: Next Great Baker ’ Å Cake Boss Cake Boss: Next Great Baker ’ Å Cake Boss 35 TLC In NBA CSI: NY “Heart of Glass” NBA Basketball New York Knicks at Brooklyn Nets. (N) (Live) Å The Mentalist ’ Å CSI: NY “The Ride-In” ’ 67 TNT M*A*S*H Cosby Show Cosby Show Cosby Show Raymond Raymond Raymond Raymond King King King King 40 TVL Storm Riders Storm Riders Weather Weather Storm Riders Storm Riders Weather Weather Weather Center Live Å Weather Center Live Å 45 TWC The Soup CSI: Crime Scene CSI NCIS: Los Angeles Å WWE Monday Night RAW (N) ’ (Live) Å 66 USA Basketball Wives LA (N) T.I. and Tiny Marry-Game Basketball Wives LA ’ T.I. and Tiny Marry-Game Basketball Wives LA ’ Basketball Wives LA ’ 71 VH1 Funniest Home Videos Funniest Home Videos Engagement Engagement WGN News at Nine (N) ’ Funniest Home Videos 21 WGN-A Funniest Home Videos Family Guy Family Guy Family Guy Family Guy The Office The Office Family Guy ’ Å Conan (N) Å 65 WTBS Seinfeld ’ Seinfeld ’ “The Next Karate Kid” ‘PG’ Return to Lonesome Dove Å (:35) ››› “Ransom” (1996) Mel Gibson. ’ ‘R’ Å (:40) › “Cold Creek Manor” (2003) ‘R’ 760 ENC 24/7 Boxing (:15) › “Little Fockers” (2010) Robert De Niro. ‘PG-13’ Witness (N) Å ›› “Tower Heist” (2011) Ben Stiller. ’ 16 HBO Treme “Tipitina” ’ Å (:15) ›› “Larry Crowne” (2011) Tom Hanks. ‘PG-13’ 702 HBO2 (5:45) “Harry Potter and the Deathly Hallows: Part 2” ’ Boardwalk Empire Å “Extremely Loud” ›› “Arthur” (2011) Russell Brand. ’ ‘PG-13’ Å Treme “Tipitina” ’ Å 703 HBOS Suspicious (:20) ›› “Cheaper by the Dozen” ‘PG’ (:45) Hunted “Polyhedrus” Co-Ed (5:30) › “End of Days” (1999) ‘R’ Å ››› “Chronicle” (2012) Dane DeHaan. ›› “Horrible Bosses” (2011) ‘NR’ Å 715 MAX Co-Ed Co-Ed “Philly Kid” (2012) Wes Chatham. ‘R’ Å (:40) ››› “Boogie Nights” (1997) ‘R’ 716 MMAX (5:45) ››› “Die Hard” (1988) Bruce Willis. ’ ‘R’ Å Untold History-United Homeland “Two Hats” ’ Dexter “Helter Skelter” ’ Homeland “Two Hats” ’ Dexter “Helter Skelter” ’ 730 SHOW (5:20) ››› “50/50” ‘R’
NEW YORK TIMES BEST SELLERS FICTION: WEEK OF NOVEMBER 25, 2012all Karp. (Little, Brown, $27.99.)
TUESDAY EVENING 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 27, 2012 6PM 6:30 7PM 7:30 8PM 8:30 9PM 9:30 10PM 10:30 11PM 11:30 Raising Ben-Kate Fox 7 News Edge at Nine Big Bang Simpsons Simpsons TMZ (N) ’ New Girl ’ Mindy 2 FOX _ TMZ (N) ’ Big Bang The Grinch Shrek/Halls Dancing With the Stars: All-Stars (Season Finale) (N) News Nightline (N) Jimmy Kimmel Live (N) ’ 3 ABC 8 KVUE News Ent (:01) Go On New Normal (:01) Parenthood (N) Å KXAN News Tonight Show w/Jay Leno Jimmy Fallon The Voice (N) Å 4 NBC D KXAN News Wheel K-Eye News Late Show W/Letterman Ferguson NCIS “Gone” (N) ’ NCIS: Los Angeles (N) ’ Vegas (N) ’ Å 5 CBS J K-Eye News Two Men Ed Sullivan’s Top Performers 1966-1969 (My Music) ’ Dr. Fuhrman’s Immunity Solution! ’ Å Charlie Rose (N) ’ Å 9 PBS 2 PBS NewsHour (N) Å Family Feud Family Feud KXAN News Engagement The Offi ce The Offi ce Hart of Dixie (N) ’ Å Emily Owens, M.D. (N) ’ 30 Rock ’ 30 Rock ’ 12 MNT V House “Fall From Grace” Law Order: CI Frasier Cash Cab House “The Dig” ’ Å Cold Case Files ’ Å 23 CW % EP Daily (N) Frasier ’ Storage Storage Storage Storage Storage Storage Storage Storage Storage Storage Storage Storage 60 A&E (4:00) “A Few Good Men” ›› “Rambo” (2008) Sylvester Stallone. ‘NR’ Å Airplane! ›› “Constantine” (2005, Fantasy) Keanu Reeves, Rachel Weisz. ‘R’ 63 AMC 106 & Park 2012 Soul Train Awards Honoring music’s most soulful artists. Family First Soul Man Don’t Sleep! Soul Man The Wendy Williams Show 73 BET “Any Given Sunday” (1999) Reba Å Reba Å Reba Å Reba Å ››› “Gridiron Gang” (2006, Drama) The Rock, Xzibit, Jade Yorker. 70 CMT Erin Burnett OutFront (N) Anderson Cooper 360 (N) Piers Morgan Tonight (N) Anderson Cooper 360 Erin Burnett OutFront Piers Morgan Tonight 46 CNN Colbert Rep Daily Show Workaholics Tosh.0 Tosh.0 Tosh.0 Tosh.0 (N) Brickleberry Daily Show Colbert Rep (:01) Tosh.0 Mash Up (N) 59 COM “Wizards of Waverly Place: The Movie” Phineas Phineas A.N.T. Farm Good-Charlie Wizards Wizards 42 DISN Shake It Up! A.N.T. Farm Jessie Å Deadliest Catch ’ Å Deadliest Catch ’ Å Deadliest Catch ’ Å Deadly Seas ’ Å Deadliest Catch ’ Å Deadly Seas ’ Å 34 DSC Valvano’s College Basketball North Carolina State at Michigan. (N) College Basketball North Carolina at Indiana. (N) (Live) SportsCenter (N) (Live) Å 52 ESPN Pixar Short Films ›››› “WALL-E” (2008) Voices of Ben Burtt. ››› “Up” (2009, Comedy) Voices of Ed Asner. The 700 Club ’ Å 37 FAM Chopped Chopped Chopped “Bird in the Pan” Chopped Chopped 32 FOOD Chopped Mavs Live Big 12 Stampede C’boys Sportsday World Poker Tour 54 FX SW NBA Basketball Dallas Mavericks at Philadelphia 76ers. (Live) › “Eve’s Christmas” (2004) Elisa Donovan. Å “The Case for Christmas” (2011) Dean Cain. Å ›› “Silver Bells” (2005) 39 HALL (5:00) “Fallen Angel” Å Mankind The Story Mankind The Story To Be Announced Mankind The Story Mankind The Story of All of Us “Survivors” (N) Å 61 HIST Criminal Minds ’ Criminal Minds ’ Criminal Minds “Proof” ’ Criminal Minds ’ Flashpoint (N) ’ Flashpoint ’ Å 38 ION To Be Announced Abby’s Ultimate Dance Abby’s Ultimate Dance TBA TBA To Be Announced Abby’s Ultimate Dance 26 LIFE Drugs, Inc. Drugs, Inc. Doomsday Preppers (N) Doomsday Preppers Doomsday Preppers Doomsday Preppers 51 NGC Figure It Out Figure It Out My Wife My Wife Friends ’ Friends ’ Yes, Dear ’ Yes, Dear ’ Friends ’ Friends ’ Hollywood Heights Å 41 NICK Ink Master “Buck Off” (N) Tattoo Night. Tattoo Night. Ink Master “Holy Ink” ’ Ink Master ’ Å Ink Master “Holy Ink” ’ 69 SPIKE Ink Master ’ Å Urban Total Total Total Total Viral Video Viral Video Total Total Viral Video Viral Video 58 SYFY Urban Smart (5:00) “Brighton Rock” Å ›› “Tail Spin” (1939) Alice Faye. ›› “Wild Bill Hickok Rides” (1942) ››› “Two-Faced Woman” (1941) Å 64 TCM Little People Big World: Little People Big World: Extreme Extreme Little People Big World: Extreme Extreme Sister Wives ’ Å 35 TLC The Mentalist “18-5-4” ’ Rizzoli & Isles Å Rizzoli & Isles (N) Å Leverage (N) Å Rizzoli & Isles Å Leverage Å 67 TNT M*A*S*H Cosby Show Cosby Show Cosby Show Raymond Raymond Raymond Raymond King King King King 40 TVL Weather Weather Storm Riders Storm Riders Weather Center Live Å Weather Weather Storm Riders Storm Riders Weather Center Live Å 45 TWC Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU 66 USA T.I. and Tiny Marry-Game Couples Therapy ’ 40 Greatest Feuds 40 Greatest Feuds T.I. and Tiny Marry-Game Basketball Wives LA ’ 71 VH1 How I Met How I Met How I Met How I Met WGN News at Nine (N) ’ Funniest Home Videos Engagement Engagement 21 WGN-A Funniest Home Videos Big Bang Big Bang Big Bang Big Bang Big Bang The Office The Office Conan (N) Å 65 WTBS Seinfeld ’ Seinfeld ’ Big Bang (5:10) “An Unfinished Life” Return to Lonesome Dove (N) Å (:35) ››› “True Lies” (1994) Arnold Schwarzenegger. ‘R’ Å ›› “Battle: Los Angeles” 760 ENC 24/7 (:15) ›› “Red Tails” (2012) Cuba Gooding Jr. ’ ‘PG-13’ Å Treme “Tipitina” ’ Å (:10) Boardwalk Empire ’ (:10) Witness Å 16 HBO REAL Sports Gumbel Boxing Andre Berto vs. Robert Guerrero, Welterweights. ››› “X-Men: First Class” (2011) James McAvoy. ’ 702 HBO2 (5:25) ››› “The Girl” ’ (:15) Boardwalk Empire ’ Cedar ›› “We Bought a Zoo” (2011) Matt Damon. ‘PG’ Å (:35) ›› “Knight and Day” (2010) Å 703 HBOS Treme “Tipitina” ’ Å (5:45) ›› “In & Out” (1997) Kevin Kline. (:20) “Philly Kid” (2012) Wes Chatham. ‘R’ ›› “Road House” (1989) Patrick Swayze. ’ ‘R’ Å Hunted “Polyhedrus” ’ 715 MAX “A Very Harold & Kumar 3D Christmas” Hotel Erotica Feature 2: Sensual Escapes “Anacondas: Hunt” 716 MMAX (:15) ›› “What’s Your Number?” (2011) Anna Faris. (:15) ›› “Faster” (2010) Dwayne Johnson. ’ ‘R’ Å Homeland “Two Hats” ’ Dexter “Helter Skelter” ’ ›› “Brüno” (2009) ‘R’ 730 SHOW “Hugh Hefner: Playboy”
NEW YORK TIMES BEST SELLERS NON-FICTION: WEEK OF NOVEMBER 25, 2012 A. Carlin. (Touchstone, $28.) 7 HALLUCINATIONS, by Oliver Sacks. (Alfred A. Knopf, $26.95.) 8 AMERICA AGAIN, by Stephen Colbert, Richard Dahm, Paul Dinello, Barry Julien, Tom Purcell et al.. (Grand Central, $28.99.) 9 WAGING HEAVY PEACE, by Neil Young. (Blue Rider, $30.) 10 UNBROKEN, by Laura Hillenbrand. (Random House, $27.)
Thursday, November 22, 2012
The Bastrop Advertiser • Page B7
EvEning Tv LiSTingS
nOvEMBER 26 - 30, 2012
WEDNESDAY EVENING 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 28, 2012 6PM 6:30 7PM 7:30 8PM 8:30 9PM 9:30 10PM 10:30 11PM 11:30 Fox 7 News Edge at Nine Big Bang Simpsons Simpsons TMZ (N) ’ The X Factor (N) ’ (Live) Å 2 FOX _ TMZ (N) ’ Big Bang A Charlie Brown Christmas Mod Fam Suburgatory Nashville “Lovesick Blues” News Nightline (N) Jimmy Kimmel Live Å 3 ABC 8 KVUE News Ent Christmas-Rockefeller KXAN News Tonight Show w/Jay Leno Jimmy Fallon Saturday Night Live Popular holiday sketches. (N) Å 4 NBC D KXAN News Wheel K-Eye News Two Men CSI: Crime Scene K-Eye News Late Show W/Letterman Ferguson Survivor: Philippines (N) ’ Criminal Minds (N) ’ 5 CBS J Oscar Hammerstein -- Out Charlie Rose Dr. Wayne Dyer: Wishes Fulfilled Getting the most out of life. ’ Å 9 PBS 2 PBS NewsHour (N) Å KXAN News Engagement The Office The Office 30 Rock ’ 30 Rock ’ Supernatural (N) ’ Å 12 MNT V Family Feud Family Feud Arrow “Muse of Fire” (N) NUMB3RS “Robin Hood” Law Order: CI Cash Cab NUMB3RS “Velocity” Å Frasier ’ Cold Case Files ’ Å 23 CW % EP Daily (N) Frasier ’ Storage Storage Storage Storage Storage Storage Duck D. Duck D. Duck D. Duck D. Storage Storage 60 A&E CSI: Miami ’ Å ›› “Poseidon” (2006) Josh Lucas. ‘PG-13’ Å ›› “Poseidon” (2006) Josh Lucas. ‘PG-13’ Å › “Mission to Mars” ‘PG’ 63 AMC 106 & Park: Top 10 Live Soul Man Family First Don’t Sleep! The Game The Wendy Williams Show › “Next Day Air” (2009) Donald Faison. Premiere. 73 BET Reba Å Reba Å Reba Å Reba Å ››› “Rocky II” (1979, Drama) Sylvester Stallone, Talia Shire. (:45) ›› “Rocky IV” (1985) Talia Shire 70 CMT Erin Burnett OutFront (N) Anderson Cooper 360 (N) Piers Morgan Tonight (N) Anderson Cooper 360 Erin Burnett OutFront Piers Morgan Tonight 46 CNN Colbert Rep Daily Show Chappelle’s Key & Peele South Park South Park South Park Key & Peele Daily Show Colbert Rep South Park Brickleberry 59 COM Good-Charlie A.N.T. Farm ’ Å Austin & Ally Phineas Austin & Ally Jessie Å Wizards Wizards 42 DISN Shake It Up! A.N.T. Farm Dog American Guns ’ Å American Guns ’ Å American Guns ’ Å Sons of Guns ’ Å American Guns ’ Å Sons of Guns ’ Å 34 DSC College Basketball Ohio State at Duke. (N) (Live) SportsCenter (N) (Live) Å 52 ESPN Audibles (N) College Basketball Michigan State at Miami. (N) (Live) ››› “Up” (2009, Comedy) Voices of Ed Asner. ››› “Aladdin” (1992) Voices of Scott Weinger. The 700 Club ’ Å ›› “Three Days” (2001) 37 FAM Restaurant: Impossible Restaurant: Impossible (N) Restaurant Stakeout (N) Restaurant: Impossible Restaurant: Impossible 32 FOOD Restaurant: Impossible Spurs Live SEC Gridiron LIVE (N) Spurs In. Big 12 Cowboys In. Big 12 54 FX SW NBA Basketball San Antonio Spurs at Orlando Magic. (Live) “Christmas Cottage” “It’s Christmas, Carol!” (2012) Carrie Fisher. Å “Matchmaker Santa” (2012) Lacey Chabert. Å 39 HALL “Town Christmas” Pawn Stars Pawn Stars Pawn Stars Pawn Stars Cajun Pawn Cajun Pawn Invention Invention Restoration Restoration Pawn Stars Pawn Stars 61 HIST (4:00) “The Fugitive” (1993) WWE Main Event (N) ’ “In the Line of Fire” (1993) ›› “The Guardian” (2006, Drama) Kevin Costner, Ashton Kutcher, Sela Ward. ’ 38 ION To Be Announced Houstons Houstons Houstons Houstons My Life Is a Lifetime Movie My Life Is a Lifetime Movie Houstons Houstons 26 LIFE Border Wars Border Wars “War Games” Border Wars (N) Hell on the Highway (N) Border Wars Hell on the Highway 51 NGC My Wife Yes, Dear ’ Yes, Dear ’ Friends ’ Friends ’ Hollywood Heights Å 41 NICK Figure It Out Figure It Out Friends ’ Friends ’ My Wife Tenants Tenants ›› “S.W.A.T.” (2003, Action) Samuel L. Jackson, Colin Farrell. ’ 69 SPIKE ›› “S.W.A.T.” (2003) Samuel L. Jackson, Colin Farrell. Premiere. ’ Dark Side Dark Side Ghost Hunters ’ Å Dark Side Dark Side Ghost Hunters ’ Å Ghost Hunters ’ Å 58 SYFY Ghost Hunters ’ Å (5:15) ›› “Chandler” ››› “The Time Machine” (1960) Rod Taylor. ›› “The Andromeda Strain” (1971) Arthur Hill. Å (:15) ››› “Solaris” 64 TCM To Be Announced Cake Boss Extreme Cougar Wives ’ Cake Boss: Next Baker Cake Boss: Next Great Baker ’ Å Extreme Cougar Wives ’ 35 TLC Castle “Suicide Squeeze” Castle ’ Å Castle “Tick, Tick, Tick ...” Perception “Shadow” The Mentalist ’ Å Southland “Risk” ’ Å 67 TNT M*A*S*H Cosby Show Cosby Show Cosby Show Raymond Raymond Cleveland Divorced King King King King 40 TVL Coast Guard Alaska Coast Guard Alaska Coast Guard Alaska Coast Guard Alaska Weather Center Live Å Weather Center Live Å 45 TWC NCIS “Friends and Lovers” NCIS “Dead Man Walking” NCIS “Brothers in Arms” NCIS “Trojan Horse” Å NCIS “Angel of Death” ’ NCIS “Faking It” ’ Å 66 USA Couples Therapy ’ Couples Therapy ’ Couples Therapy ’ Couples Therapy (N) ’ Couples Therapy ’ Behind the Music ’ Å 71 VH1 Engagement Engagement Engagement Engagement WGN News at Nine (N) ’ Funniest Home Videos Engagement Engagement 21 WGN-A Funniest Home Videos Family Guy Family Guy Family Guy Family Guy Big Bang Big Bang The Office The Office Seinfeld ’ Seinfeld ’ Conan (N) Å 65 WTBS “The Quick and the Dead” Return to Lonesome Dove Gus’ daughter. ›› “National Lampoon’s Vacation” ‘R’ (:10) “Pirates of the Caribbean: On Stranger Tides” ’ 760 ENC (:15) “The Descendants” (5:15) ››› “X2: X-Men United” (2003) ›› “Final Destination 5” (2011) ‘R’ Å Boardwalk Empire Å Treme “Tipitina” ’ Å 16 HBO 24/7 Witness Å ›› “The Thing” (2011) ’ ‘R’ Å (:15) ›› “The Eagle” (2011) Channing Tatum. ‘PG-13’ 702 HBO2 “Jackie Chan” Collateral ‘R’ ›› “Sucker Punch” (2011) Emily Browning. ’ ‘PG-13’ ›› “The Hangover Part II” (2011) ‘R’ 703 HBOS ››› “Crazy, Stupid, Love.” (2011) Steve Carell. Å “A Very Harold & Kumar 3D Christmas” › “This Means War” (2012) ‘PG-13’ Å Skin to Max Busty Hunted “Ambassadors” ’ Hunted “Polyhedrus” ’ 715 MAX “Emmanuelle Through Time: Sexy Bite” Wonderland (:05) ›› “Contraband” (2012) Mark Wahlberg. ‘R’ Å ››› “Hanna” (2011) Saoirse Ronan. ’ ‘PG-13’ Å 716 MMAX Jim Rome on Showtime (N) Inside the NFL ’ Å Jim Rome on Showtime Inside the NFL (N) Å 730 SHOW (5:00) “Against the Ropes” Homeland “Two Hats” ’
THURSDAY EVENING 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 29, 2012 6PM 6:30 7PM 7:30 8PM 8:30 9PM 9:30 10PM 10:30 11PM 11:30 Simpsons Simpsons TMZ (N) ’ The X Factor (N) Å Glee “Thanksgiving” (N) ’ Fox 7 News Edge at Nine Big Bang 2 FOX _ TMZ (N) ’ Big Bang (:02) Scandal “Defiance” News Nightline (N) Jimmy Kimmel Live Å Last Resort (N) ’ Å Grey’s Anatomy (N) Å 3 ABC 8 KVUE News Ent Rock Center KXAN News Tonight Show w/Jay Leno Jimmy Fallon 30 Rock ’ Up All Night The Office Parks 4 NBC D KXAN News Wheel K-Eye News Two Men Big Bang Two Men (:01) Person of Interest (N) K-Eye News Late Show W/Letterman Ferguson (:01) Elementary (N) Å 5 CBS J Arts in Context ’ Å The Daytripper ’ Å Downton Abbey Revisited Å Charlie Rose (N) ’ Å 9 PBS 2 PBS NewsHour (N) Å 12 MNT V Family Feud Family Feud The Vampire Diaries (N) ’ Beauty and the Beast (N) KXAN News Engagement The Office The Office 30 Rock ’ 30 Rock ’ Law Order: CI Cash Cab White Collar ’ Å White Collar ’ Å Frasier ’ Cold Case Files ’ Å 23 CW % EP Daily (N) Frasier ’ The First 48 Å The First 48 Å The First 48 (N) Å Panic 9-1-1 Å (:01) Panic 9-1-1 Å (:01) The First 48 Å 60 A&E CSI: Miami “Dead Zone” ››› “Fargo” (1996) Frances McDormand. ‘R’ Å ››› “Casino” (1995, Crime Drama) Robert De Niro, Sharon Stone. ‘R’ Å 63 AMC 106 & Park: Top 10 Live Apollo Live (N) BET Hip Hop Awards 2011 Don’t Sleep! The Game The Wendy Williams Show 73 BET Ron White’s Comedy Salute to the Ron White: They Call Me Ron White’s Comedy Salute to the (6:59) ›› “Accepted” (2006) Justin Long. Å Tosh.0 Tosh.0 Daily Show Colbert Rep Key & Peele Tosh.0 59 COM “Phineas and Ferb: The Movie” Dog Phineas Good-Charlie A.N.T. Farm Wizards Wizards 42 DISN Shake It Up! A.N.T. Farm Jessie Å Auction Auction Auction Auction Property Wars ’ Å Property Wars ’ Å Texas Car Wars ’ Å Texas Car Wars ’ Å 34 DSC SportsCenter (N) (Live) Å SportsCenter (N) Å 52 ESPN Football Live College Football Louisville at Rutgers. (N) (Live) “The Christmas List” (5:30) ››› “Aladdin” (1992, Fantasy) ››› “Happy Feet” (2006) Voices of Elijah Wood, Robin Williams. The 700 Club ’ Å 37 FAM Cupcake Wars Sugar Dome Sweet Genius Sweet Genius The Next Iron Chef Sweet Genius 32 FOOD Sportsday Outdoors World Poker Tour 54 FX SW Cowboys In. Sportsday College Basketball Southern Utah at Texas Christian. (N) OU Sooner Gift Guide “Smoky Mountain” ››› “The Santa Incident” (2010) Ione Skye. Å 39 HALL “A Princess for Christmas” “Naughty or Nice” (2012, Fantasy) Hilarie Burton. Å 101 Gadgets That Changed Pawn Stars Pawn Stars Pawn Stars Pawn Stars Ultimate Soldier Challenge Ultimate Soldier Challenge Pawn Stars Pawn Stars 61 HIST House “Euphoria, Part 1” House “Euphoria, Part 2” House “Forever” ’ Å Criminal Minds ’ Å Criminal Minds ’ Å Criminal Minds ’ Å 38 ION Project Runway All Stars Project Runway All Stars Project Runway All Stars Abby’s Ultimate Dance Project Runway All Stars Project Runway All Stars 26 LIFE Alaska State Troopers Alaska State Troopers Rocket City Rocket City Chainsaw Chainsaw Rocket City Rocket City Chainsaw Chainsaw 51 NGC You Gotta My Wife My Wife Yes, Dear ’ Yes, Dear ’ Friends ’ Friends ’ Hollywood Heights Å 41 NICK Figure It Out Figure It Out Friends ’ MMA GT Academy ›› “The Keeper” (2009) Jail (N) ’ iMPACT Wrestling (N) ’ Å Ink Master “Buck Off” ’ 69 SPIKE Jail Å “Dungeons & Dragons: The Book of Vile Darkness” › “Age of the Dragons” (2011) Danny Glover. Å 58 SYFY “Dungeons & Dragons: Wrath of the Dragon God” (4:30) ›› “Cimarron” ›› “The Iron Petticoat” (1956, Comedy) (:45) ››› “Silk Stockings” (1957) Fred Astaire. Å (:45) ››› “Comrade X” (1940) Å 64 TCM Along-Bride Along-Bride Say Yes Say Yes Along-Bride TBA Along-Bride TBA Four Weddings (N) Å Four Weddings ’ Å 35 TLC The Mentalist ’ Å NBA Basketball San Antonio Spurs at Miami Heat. (N) (Live) Å NBA Basketball Denver Nuggets at Golden State Warriors. (N) Å 67 TNT M*A*S*H Cosby Show Cosby Show Cosby Show Raymond Raymond Raymond Raymond King King King King 40 TVL Happen Happen Lifeguard! Lifeguard! Weather Center Live Å Happen Happen Lifeguard! Lifeguard! Weather Center Live Å 45 TWC NCIS “Ships in the Night” Burn Notice “Down & Out” (:01) NCIS “Jet Lag” Å Law & Order: SVU NCIS “Safe Harbor” ’ NCIS “Thirst” ’ 66 USA 100 Greatest Kid Stars (N) 100 Greatest Kid Stars (N) 100 Greatest Kid Stars ’ 100 Greatest Kid Stars ’ Saturday Night Live “The Best of David Spade” Å 71 VH1 How I Met How I Met How I Met How I Met WGN News at Nine (N) ’ Funniest Home Videos Engagement Engagement 21 WGN-A Funniest Home Videos Big Bang Big Bang Big Bang The Office The Office Conan (N) Å 65 WTBS Seinfeld ’ Seinfeld ’ Family Guy Family Guy Big Bang (5:15) “Grumpy Old Men” Return to Lonesome Dove (N) Å (:20) ›› “Colombiana” (2011) ‘PG-13’ (:10) ››› “Friends With Benefits” (2011) ’ ‘R’ Å 760 ENC (5:30) “Harry Potter and the Prisoner of Azkaban” ‘PG’ ›› “Safe House” (2012) Denzel Washington. ‘R’ Å Cathouse: Menage a Trois 24/7 Tower Heist 16 HBO ›› “Red Tails” (2012) Cuba Gooding Jr. ‘PG-13’ Å (:15) Boardwalk Empire ’ (:15) “Crossfire Hurricane” 702 HBO2 (:10) ›› “Antitrust” (2001) Ryan Phillippe. ‘PG-13’ Å Making: 127 ››› “127 Hours” (2010) James Franco. ››› “In Bruges” (2008) Colin Farrell. ’ ‘R’ Å 703 HBOS (5:50) ››› “Big Fish” (2003) Ewan McGregor. ‘PG-13’ (:45) “The Teenie Weenie Bikini Squad” (5:35) ›› “Mercury Rising” (1998) ‘R’ › “Firestorm” (1998) Howie Long. ’ ‘R’ ››› “Rise of the Planet of the Apes” 715 MAX Hunted “Kismet” ’ Å Hunted “Ambassadors” ’ Hunted “Polyhedrus” ’ ›› “Due Date” (2010) ‘R’ 716 MMAX This Means ››› “Chronicle” (2012) Dane DeHaan. Old Porn Reality Show Old Porn Stop, Charlie › “Apollo 18” (2011) Lloyd Owen. Å (:25) ››› “Goon” (2011, Comedy) ‘R’ 730 SHOW (5:20) ››› “50/50” ‘R’
SATURDAY DAYTIME 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 24, 2012 8AM 8:30 9AM 9:30 10AM 10:30 11AM 11:30 12PM 12:30 1PM 1:30 2PM 2:30 3PM 3:30 4PM 4:30 5PM 5:30 Live Life Icons Hollywood Sports Paid Prog. ›› “Ice Age: The Meltdown” (2006) FOX FOX College Football Teams TBA. (N Subject to Blackout) ’ (Live) Å 2 FOX _ Animals Adven. Hanna Ocean Explore Rescue College Football Michigan at Ohio State. (N) (Live) College Football Teams TBA. (N) (Live) 3 ABC 8 News Noodle Pajanimals Poppy Cat Justin Paid Prog. Paid Prog. Paid Prog. College Football Grambling State vs. Southern. (N) ’ (Live) Å Jeopardy! NBC News 4 NBC D KXAN News Today (N) Today (N) ’ Å Liberty Big World Texas Paid Prog. Lucas Oil Off Road Football Football College Football Teams TBA. (N) (Live) Å 5 CBS J Doodlebop Doodlebop Busytown Busytown Liberty “Thomas & Friends” Super Biscuit Gardener Happiness Advantage With Shawn Rick Steves’ European Christmas ’ Å Suze Orman’s Money Class ’ Å The British Beat (My Music) ’Å 9 PBS 2 Iron Man Iron Man WWE Dragon Yu-Gi-Oh! Yu-Gi-Oh! ››› “Spy Kids” (2001) Antonio Banderas. ›› “Step Up” (2006) Channing Tatum. Scrubs ’ Scrubs ’ How I Met How I Met 12 MNT V Tiny Toons Adven Aqua Kids Game Football College Football Virginia at Virginia Tech. (N) (Live) College Football Idaho at Utah State. (N) (Live) Whacked 23 CW % Animals Winning Now Eat! Pssprt To Be Announced To Be Announced Billy Billy Billy Billy Billy Billy Billy Billy Flipping Miami Å Flipping Vegas Å Flipping Vegas Å Flipping Vegas Å 60 A&E (5:30) ›››› “Gone With the Wind” (1939) Clark Gable. ››› “McLintock!” (1963, Western) John Wayne. ‘NR’ Å ››› “El Dorado” (1967, Western) John Wayne. ‘NR’ Å ›› “Big Jake” (1971) ‘PG-13’ 63 AMC (7:00) Steve Harvey: Still Trippin’ S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey S. Harvey ››› “The Best Man” (1999) 73 BET CMT Social Hour CMT Insdr Top 20 Countdown ’ Crossroads ’ The 46th Annual CMA Awards ’ Å Reba ’ Reba ’ Reba ’ Reba ’ 70 CMT Sat. Morn Bottom CNN Saturday Morning (N) CNN Newsroom (N) Your Money (N) CNN Newsroom (N) Gupta CNN Newsroom (N) The Situation Room 46 CNN “Cheech-Chong” (8:58) “Cheech and Chong’s Up in Smoke” (10:58) ›› “Police Academy” (1984) Å (12:58) › “Let’s Go to Prison” (2006) Å (2:58) ›› “Waiting...” (2005) Ryan Reynolds. (4:58) “Office Space” 59 COM Phineas Phineas Gravity Fish Wizards Wizards Good Good Austin Shake It Jessie ’ Jessie ’ Shake It Shake It Good Austin Austin ANT Farm A.N.T. Farm ’ Å 42 DISN Almost, Away Almost, Away Almost, Away Almost, Away Almost, Away Almost, Away Almost, Away Almost, Away American Guns Å American Guns Å 34 DSC College Football Teams TBA. (N) (Live) Score College Football Teams TBA. (N) (Live) Score 52 ESPN SportsCenter (N) (Live) College GameDay (N) (Live) Å (7:00) “The Princess Diaries” ›››› “Mary Poppins” (1964) Julie Andrews, Dick Van Dyke. ›› “Nanny McPhee” (2005) Colin Firth ›› “Nanny McPhee Returns” (2010) Emma Thompson. ›› “Home Alone 4” 37 FAM Paula Pioneer Pioneer Trisha’s Giada Chopped The Big Waste Restaurant: Im. Restaurant Stakeout All-Star Family Cook- Iron Chef America The Next Iron Chef 32 FOOD Paula HS Scoreboard Kids Big 12 College Football Alabama-Birmingham at Central Florida. (N) (Live) College Football Tulane at Houston. (N) (Live) 54 FX SW Outdoors Tailgate “Matchmaker Santa” (2012) Lacey Chabert. “It’s Christmas, Carol!” (2012) Carrie Fisher. “Christmas Magic” “The Most Wonderful Time of the Year” Å “Mistletoe Over Manhattan” (2011) Å 39 HALL “The Santa Suit” Å Modern Marvels Modern Marvels Modern Marvels The Men Who Built America “Bloody Battles” The Men Who Built America Å Built America The Men Who Built America Å 61 HIST Paid Prog. Paid Prog. Paid Prog. Paid Prog. Miracles Paid Prog. Paid Prog. Paid Prog. Earl Earl Law Order: CI Law Order: CI ›› “The Guardian” (2006, Drama) Kevin Costner. Premiere. ’ 38 ION Paid Prog. Paid Prog. Paid Prog. Paid Prog. My Life, Movie ›› “If You Believe” (1999) Ally Walker. Å ›› “Home by Christmas” (2006) Å “The Road to Christmas” (2006, Comedy) Å “Christmas Hop” 26 LIFE Alaska State Troopers Doomsday Preppers Doomsday Preppers Locked Up Abroad Locked Up Abroad Locked Up Abroad Locked Up Abroad Locked Up Abroad Locked Up Abroad Locked Up Abroad 51 NGC Sponge. Sponge. Sponge. Sponge. Turtles Kung Fu Penguins Robot Power Kung Fu Kung Fu Kung Fu Big Time Big Time iCarly ’ iCarly ’ iCarly ’ iCarly ’ Victorious Victorious 41 NICK ›› “Star Wars: Episode II -- Attack of the Clones” (2002) Ewan McGregor. ’ ››› “Star Wars: Episode III -- Revenge of the Sith” 69 SPIKE Ink Master “Holy Ink” ›› “Star Wars: Episode I -- The Phantom Menace” (1999) ’ Video Dark Side › “Dragon Wars” (2007, Action) Jason Behr. “Fire & Ice” (2008) Amy Acker, Tom Wisdom. Dungeons ››› “Dragon Dynasty” (2006) James Hong › “Age of the Dragons” (2011, Fantasy) Å 58 SYFY Video “Saint’s Double” (:15) “The Adventures of Robin Hood” (1938) (:15) ››› “There’s No Business Like Show Business” (:15) “The Mad Miss Manton” Å “Five Little Peppers in Trouble” ››› “Gypsy” (1962) Å 64 TCM Property Ladder ’ Property Ladder ’ Property Ladder ’ Property Ladder ’ Undercover Boss ’ Undercover Boss ’ Undercover Boss ’ Undercover Boss ’ Undercover Boss ’ Undercover Boss ’ 35 TLC Law & Order ’ Franklin & Bash Å Rizzoli & Isles Å Law & Order ’ ›› “What Lies Beneath” (2000) Harrison Ford. Å (:45) › “Obsessed” (2009) Idris Elba. Å (:45) ›› “Kiss the Girls” (1997) 67 TNT The Nanny The Nanny Roseanne Roseanne Roseanne Roseanne Roseanne Roseanne Roseanne Roseanne Cosby Cosby Cosby Cosby Cosby Cosby Cosby Cosby Divorced Divorced 40 TVL Christmas Christmas Tailgaters Football Weather Center Live Christmas Christmas Tailgaters Football Weekend View Å Weekend Now Å 45 TWC Law Order: CI Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Covert Affairs Å Burn Notice Å 66 USA Top 20 Count. Top 20 Count. Trading Spouses Trading Spouses Trading Spouses Trading Spouses Trading Spouses Trading Spouses Couples Therapy ’ Basketball Wives LA 71 VH1 Matlock “The Idol” Law Order: CI Law Order: CI Law Order: CI Law Order: CI Law Order: CI Law Order: CI Law Order: CI Law Order: CI 21 WGN-A Matlock “The Idol” Jim Raymond ›› “The Wedding Date” (2005) Friends Friends Friends Friends King King ›› “Failure to Launch” (2006) Å ›› “Four Christmases” (2008) 65 WTBS Browns “Under Siege 2: Dark Territory” (4:50) “Air Force One” “Batman Returns” ’ “National Lampoon’s Vacation” (:35) ››› “Open Range” (2003) Robert Duvall. ‘R’ Å (:40) ›› “Batman Returns” (1992) Michael Keaton. Å 760 ENC 24/7 “Alvin-Chipwrecked” (7:45) ›› “Tower Heist” (2011) ›› “Arthur” (2011) Russell Brand. ‘PG-13’ Witness Å ›› “Tower Heist” (2011) ‘PG-13’ “Harry Potter and the Prisoner of Azkaban” 16 HBO Witness (:40) ››› “My Life Without Me” (2003) ‘R’ REAL Sports Gumbel (:35) ›› “One Day” (2011) Anne Hathaway. The Girl 702 HBO2 (:15) ›› “The Adjustment Bureau” (2011) ’ (:05) ›››› “Almost Famous” (2000) ’ ‘R’ (8:55) › “Little Fockers” (2010) ›› “Major League” (1989) Tom Berenger. (:20) ››› “John Grisham’s The Rainmaker” Boardwalk Empire ’ (:40) ››› “Big Fish” (2003) Ewan McGregor. ’ ‘PG-13’ 703 HBOS “My Big Fat” Monte C (:20) ››› “The Rundown” Å Hunted “Polyhedrus” ››› “Contagion” (2011) Å (:05) ›› “Transit” (2012) ’ ‘R’ (:40) ››› “While You Were Sleeping” ‘PG’ Hunted “Kismet” ’ Hunted ’ Å 715 MAX “Die Hard-Veng.” (:40) ›› “In Time” (2011) Justin Timberlake. ››› “Die Hard” (1988) Bruce Willis. ’ ‘R’ (:45) ››› “Die Hard 2” (1990) Bruce Willis. ’ ‘R’ Å 716 MMAX Ordinary ››› “Bridesmaids” (2011) Kristen Wiig. ’ “Spy Kids-Time in the World” “I Don’t Know How She Does It” Untold History-United Dexter “Argentina” ’ Boxing Homeland NASCAR › “Apollo 18” (2011) ’ ‘PG-13’ 730 SHOW Jim Rome, Sho
Crossword
SUNDAY DAYTIME 00-Bastrop, Elgin, Smithville Cable \-Broadcast NOVEMBER 25, 2012 8AM 8:30 9AM 9:30 10AM 10:30 11AM 11:30 12PM 12:30 1PM 1:30 2PM 2:30 3PM 3:30 4PM 4:30 5PM 5:30 Paid Prog. Paid Prog. FOX NFL Sunday (N) NFL Football Regional Coverage. (N) ’ (Live) Å NFL Football Regional Coverage. (N) ’ (Live) Å 2 FOX _ Paid Prog. Paid Prog. Fox News Sunday This Week Hot On! Reporter Recipe Food Texas & Texas Ball Boys Ball Boys Shark Tank ’ Å ABC News News ›› “Christmas With the Kranks” (2004) 3 ABC 8 News Meet the Press (N) LazyTown Wiggles Skiing Figure Skating ISU Grand Prix: NHK Trophy. NBC News Holiday Moments on Ice From Phoenix. (N) ’ News 4 NBC D KXAN News Today (N) Today (N) ’ Å J. Osteen Austin The NFL Today (N) NFL Post. Postgame Kaleidoscope on Ice CBS News News NFL Football Regional Coverage. (N) (Live) Å 5 CBS J CBS News Sunday Morning (N) ’ Nation Gardener MotorWk Parks Growing Contrary McL’ghlin Wash. Overheard Arts Cntxt 3 Steps to Incredible Health!-Joel Holidays Motown: Big Hits and More (My Music) Å 9 PBS 2 Cat-Christmas Paid Prog. Cornerstone/Hagee In Touch Paid Prog. Paid Prog. Troubadr Bronco › “Forever Mine” (1999) Joseph Fiennes. There ›› “Practical Magic” (1998) Sandra Bullock. There 12 MNT V Search Paid Prog. Paid Prog. Agribusiness Garden Old House ›› “Some Kind of Wonderful” (1987) ’Til Death ’Til Death Cold Case Files Å ›› “10 to Midnight” (1983) Andrew Stevens 23 CW % Cornerstone/Hagee Shipping Shipping Hoggers Hoggers Hoggers Hoggers Parking Parking Parking Parking Billy Billy To Be Announced Storage Storage Storage Storage Storage Storage 60 A&E (7:00) “Silver Bullet” ››› “Cujo” (1983) Dee Wallace. ‘R’ Å “Land of the Dead” ›› “Pet Sematary” (1989) Dale Midkiff. ‘R’ ›› “Pet Sematary Two” (1992) ‘R’ Å ›› “Christine” (1983) Keith Gordon. ‘R’ Å 63 AMC Bobby Jones Gospel Lift Voice S. Harvey S. Harvey “Dysfunctional Friends” (2011) Stacey Dash. Å ››› “Barbershop 2: Back in Business” (2004) Å ›› “Madea’s Family Reunion” (2006) Tyler Perry. Å 73 BET CMT Insdr (5:00) CMT Music ’ Crossroads ’ Top 20 Countdown ’ Reba ’ Reba ’ Reba ’ Reba ’ The 46th Annual CMA Awards ’ Å 70 CMT State of the Union Fareed Zakaria GPS Reliable Sources (N) State of the Union Fareed Zakaria GPS Next List Newsroom Your Money (N) CNN Newsroom (N) CNN Newsroom (N) CNN Newsroom (N) 46 CNN “Police Academy 2” ›› “Police Academy” (1984, Comedy) Å ››› “The Brady Bunch Movie” (1995) Å › “Vegas Vacation” (1997) Chevy Chase. ›› “Legally Blonde” (2001, Comedy) Å ››› “Mean Girls” 59 COM Phineas Phineas Good Dog Good Good Austin ANT Farm ›› “Alice in Wonderland” (2010) ‘PG’ Å Good Jessie ’ Dog ›› “Tinker Bell” (2008) ‘G’ Å A.N.T. Farm ’ Å 42 DISN Auction Auction Auction Auction Auction Auction Auction Auction Auction Auction Auction Auction Auction Auction MythBusters ’ Å MythBusters ’ Å MythBusters ’ Å 34 DSC College Football Final Poker World/Poker World/Poker 2012 World Series of Poker Final Table. From Las Vegas. 52 ESPN SportsCenter (N) (Live) Sunday NFL Countdown (N) (Live) Å ››› “Hook” (1991, Fantasy) Dustin Hoffman, Robin Williams. ›› “Richie Rich’s Christmas Wish” (1998) ›› “Richie Rich” (1994) Macaulay Culkin. ›› “Home Alone 4” (2002) French Stewart. ››› “Home Alone” 37 FAM Sandra’s Guy’s Sand. Be.- Made Paula Pioneer Restaurant: Im. My. Diners My. Diners My. Diners My. Diners My. Diners My. Diners My. Diners Health Diners What’s on 32 FOOD Rachael Ray’s Horse. Griot’s Spurs Live NBA Basketball San Antonio Spurs at Toronto Raptors. Spurs Live Auto Sports Unlimited (N) Game Golf Life XTERRA 54 FX SW Paid Prog. Paid Prog. Big 12 No-Huddle (N) Huddle “Moonlight” ›› “Once Upon a Christmas” (2000) Å “Mistletoe Over Manhattan” (2011) Å “Naughty or Nice” (2012) Hilarie Burton. Å ››› “A Princess for Christmas” (2011) 39 HALL “Christmas Magic” Little Ice Age/Chill Outback Hunters Outback Hunters Outback Hunters Outback Hunters Outback Hunters How the Earth Was Made Å What’s the Earth Worth? Å 61 HIST Fellowship Mass Paid Prog. Paid Prog. Inspiration Today Camp Meeting ’ “Holiday Heist” (2011) Lacey Chabert. ’ ›› “Christmas Town” (2008) ’ ›› “A Golden Christmas” (2009) ’ 38 ION J. Osteen Paid Prog. Chris Chris “Crazy for Christmas” (2005) Andrea Roth. “Undercover Christmas” (2003) Jami Gertz. “Dear Santa” (2011, Drama) Amy Acker. Å “The March Sisters at Christmas” (2012) Å 26 LIFE Border Wars “Traffic” Drugs, Inc. Alaska State Troopers Outlaw Bikers Outlaw Bikers Outlaw Bikers Biker Chicks Guerrilla Gold Rush Narco Bling Outlaw Bikers Å 51 NGC Sponge. Sponge. Sponge. Sponge. The Fairly OddParents Winx Club Winx Club Winx Club Big Time iCarly ’ iCarly ’ iCarly ’ iCarly ’ Victorious Victorious Victorious Victorious Sponge. Sponge. 41 NICK Tattoo Tattoo Ink Master “Holy Ink” ›› “Reign of Fire” (2002) Christian Bale. ’ ››› “Star Wars: Episode III -- Revenge of the Sith” (2005) Ewan McGregor. ›››› “Star Wars IV: A New Hope” (1977) 69 SPIKE Tattoo Treasure ›› “Nutty Professor II: The Klumps” (2000) ››› “Groundhog Day” (1993) Bill Murray. › “In the Name of the King: A Dungeon Siege Tale” 58 SYFY Twi. Zone Twi. Zone “Prey” (2007) Bridget Moynahan. Å (7:30) “Top Hat” (:15) ››› “The Winning Team” (1952) Å ››› “The Women” (1939) Norma Shearer. Å (DVS) ››› “Cat on a Hot Tin Roof” (1958, Drama) ››› “The Mortal Storm” (1940) “Lemon Drop” 64 TCM Gown Gown Gown Gown Gown Gown Gown Gown Bride Bride Couponing: Holiday Extreme Extreme Extreme Extreme Extreme Extreme Extreme Extreme 35 TLC Law & Order “Strike” Law & Order ’ Law & Order ’ › “The Reaping” (2007) Hilary Swank. Å ›› “The Da Vinci Code” (2006) Tom Hanks, Audrey Tautou. Å ›› “Angels & Demons” (2009) Tom Hanks. 67 TNT Roseanne Roseanne Roseanne Roseanne Roseanne Roseanne Cosby Cosby Cosby Cosby Cosby Cosby Cleveland Cleveland Cleveland Cleveland Cleveland Cleveland Cleveland Cleveland 40 TVL Coast Guard Alaska Coast Guard Alaska Weather Center Live Coast Guard Alaska Coast Guard Alaska Weekend View Å Weekend Now Å 45 TWC NCIS “Broken Bird” NCIS “Double Identity” NCIS “Obsession” ’ NCIS ’ Å NCIS “Requiem” ’ NCIS “Legend” Å NCIS “Legend” Å NCIS ’ Å NCIS ’ Å NCIS ’ Å 66 USA Top 20 Count. T.I.-Tiny T.I.-Tiny T.I.-Tiny Marry Basketball Wives LA Couples Therapy ’ The Women of SNL ’ Å Saturday Night Live › “Double Take” (2001) Eddie Griffin. ’ 71 VH1 30 Rock Funniest Home Videos Bloopers! ›› “Ice Age: The Meltdown” (2006) Å ›› “Practical Magic” (1998) Sandra Bullock. ›› “Sweet Home Alabama” (2002) Å 21 WGN-A ››› “Spy Kids” (2001) Antonio Banderas. Friends Friends Wedding Band “Shrek the Third” ›› “Fred Claus” (2007, Comedy) Vince Vaughn. Å ››› “Madagascar” (2005) Å (:45) ›››› “The Wizard of Oz” (1939) Judy Garland. 65 WTBS Friends Grumpy (:45) ››› “Analyze This” (1999) ’ ‘R’ Å “Shanghai Knights” Lonesome Dove Å (:10) Lonesome Dove Å (:45) Lonesome Dove ’ (Part 3 of 4) Å (:20) Lonesome Dove Å 760 ENC Boxing 24/7 Tower ››› “Forrest Gump” (1994) Tom Hanks. ‘PG-13’ Å Witness Å ››› “Crazy, Stupid, Love.” (2011) ‘PG-13’ ›› “Red Tails” (2012) Cuba Gooding Jr. ’ 16 HBO Tower (:15) ›› “Tower Heist” (2011) Ben Stiller. ’ Witness Å › “Red Riding Hood” (2011) Å Boardwalk Empire ’ Boxing 702 HBO2 Antitrust (:25) ›› “Deliver Us From Eva” Witness (:25) ››› “Cedar Rapids” ‘R’ (:05) ››› “Collateral” (2004) Tom Cruise. (:10) ›› “Along Came Polly” ’ Terminal 703 HBOS Glee 3D (:25) › “Dream House” (2011) ’ ›› “Sucker Punch” (2011) Emily Browning. (:05) › “Something Borrowed” (2011) ‘PG-13’ ››› “Galaxy Quest” (1999) ’ (:45) Hunted ’ Å (:45) › “Fear and Loathing in Las Vegas” (1998) ‘R’ Å (2:50) ›› “Fast Five” (2011) Vin Diesel. Å (:05) ›› “Unknown” 715 MAX “Anchorman: Legend of Ron” ›› “Hart’s War” (2002) Bruce Willis. ‘R’ Å (:15) ›› “Transit” (2012) ’ ‘R’ (:45) ›››› “Titanic” (1997) Leonardo DiCaprio. ‘PG-13’ 716 MMAX Gulliver ›› “Taking Lives” (2004) Angelina Jolie. ‘R’ War Horse ›› “The Twilight Saga: Eclipse” (2010) Å Untold History-United NASCAR Boxing (:05) › “The Three Musketeers” (2011) Å ›› “Fightville” (2011) ‘NR’ Å 730 SHOW Jim Rome, Sho WEEKDAY DAYTIME 00-Bastrop, Elgin, Smithville Cable \-Broadcast 8AM 8:30 9AM 9:30 10AM 10:30 11AM 11:30 12PM 12:30 1PM 1:30 2PM 2:30 Good Day Austin The Dr. Oz Show Wendy Williams Show News Dish Nat. Judge Divorce Judge B. Judge B. 2 FOX _ Good Day Austin Live With Kelly The View KVUE Midday News The Chew General Hospital Inside Ed. Extra 3 ABC 8 Good Morning Today Today Rachael Ray KXAN News at Noon Days of our Lives The Doctors 4 NBC D (7:00) Today The 700 Club The Price Is Right Young & Restless Millionaire Bold The Talk Let’s Make a Deal 5 CBS J CBS This Morning Dinosaur Tiger Sid WordWrld Barney Varied Tiger Signing Cyberchas Cat in the Curious 9 PBS 2 Curious Cat in the Super The People’s Court Steve Harvey Jerry Springer Maury Jeremy Kyle 12 MNT V Wommack J. Hanna Judge Mathis Justice Justice Court Court People People Better 23 CW % Paid Prog. Paid Prog. Movie Criminal Minds Criminal Minds CSI: Miami CSI: Miami Criminal Minds Criminal Minds The First 48 60 A&E Movie Varied Programs 63 AMC Chris Chris My Wife My Wife Foxx Foxx Parkers Parkers My Wife My Wife Foxx Foxx Parkers Movie 73 BET (5:00) CMT Music Varied Programs Extreme Makeover Extrm. Varied Strictest Parents Strictest Parents 70 CMT CNN Newsroom CNN Newsroom CNN Newsroom CNN Newsroom 46 CNN Entourage Comedy Daily Colbert Scrubs Scrubs Varied Programs Movie 59 COM Mickey Mickey Varied Pirates Mickey Varied Mickey Little Varied Gaspard & Phineas Movie Varied Programs 42 DISN Varied Programs 34 DSC SportsCenter SportsCenter SportsCenter SportsCenter SportsCenter Outside Football 52 ESPN SportsCenter Boy/World 700 Club The 700 Club Gilmore Girls What Like What Like 8 Rules 8 Rules ’70s Show ’70s Show ’70s Show ’70s Show 37 FAM Good Eats Unwrap Cooking Contessa Varied Dinners Secrets 30-Minute Giada Giada 32 FOOD Paid Prog. Varied Programs Big 12 No-Huddle Varied Programs 54 FX SW Varied Programs Marie Movie Movie 39 HALL Home & Family Varied Programs 61 HIST Bible Paid Prog. Paid Prog. Paid Prog. Varied Programs 38 ION Frasier Frasier Frasier Frasier Chris Chris Chris Chris How I Met How I Met To Be Announced To Be Announced 26 LIFE Varied Programs Alaska State Troopers Border Wars Taboo Varied Programs 51 NGC Bubble Bubble Sponge. Umizoomi Sponge. Sponge. Parents Parents Parents Kung Fu Kung Fu Kung Fu Sponge. Sponge. 41 NICK 69 SPIKE Varied Programs 58 SYFY Varied Programs Movie Varied Programs Movie Varied Programs (:45) Movie Varied Programs 64 TCM Baby Baby Varied Programs Say Yes Say Yes What Not to Wear Baby Baby Toddler Varied Not Wear Varied 35 TLC Supernatural Supernatural Leverage Varied Programs 67 TNT Beaver Beaver Van Dyke Van Dyke Love Lucy Love Lucy Griffith Griffith Gunsmoke Gunsmoke Bonanza 40 TVL Your Weather Today Wake Up With Al Day Planner 45 TWC Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Law & Order: SVU Varied Programs 66 USA Jump Start Big Morning Buzz Live Varied Programs 71 VH1 Matlock In the Heat of Night In the Heat of Night WGN Midday News Walker, Texas Ranger Walker, Texas Ranger 21 WGN-A Matlock There Browns Payne Prince Prince Prince Amer. Dad Amer. Dad Raymond Raymond Raymond Raymond Seinfeld 65 WTBS Jim Movie Varied Programs Movie Varied Programs 760 ENC Movie Varied Programs Movie Varied Programs 16 HBO Varied Programs (:15) Movie Varied Programs 702 HBO2 Movie Varied Programs (:25) Movie Varied Programs (:45) Movie 703 HBOS Movie Movie (:40) Movie Varied Programs (:15) Movie Varied Programs Movie 715 MAX (:15) Movie Varied Programs (:35) Movie 716 MMAX (:10) Movie Varied Movie Varied Programs Movie Varied Programs (1:50) Movie 730 SHOW (:15) Movie Varied Programs
Horoscopes ARIES Ð Mar 21/Apr 20 Aries, while thereÕs much about a situation that you donÕt understand, you will quickly be filled in on all the details you need to know to get the job done.
TAURUS Ð Apr 21/May 21 Taurus, confrontation will get you nowhere. It is better to avoid any troublesome parties and simply go on with your days. No need to put monkey wrenches in the plans.
3PM 3:30 Jdg Judy Jdg Judy Ellen DeGeneres Dr. Phil The Jeff Probst Show Arthur WordGirl Bill Cunningham America America The First 48
4PM 4:30 The Dr. Oz Show Katie Jeopardy! Jeopardy! The Ricki Lake Show Wild Kratt Electric Anderson Live Anderson Live The First 48
5PM 5:30 News News ABC News News NBC News News CBS News Martha Business Friends Friends ’Til Death ’Til Death Varied Programs
106 & Park: Top 10 Roseanne Roseanne Roseanne Roseanne Reba Reba The Situation Room Comedy Futurama Futurama Sunny South Pk Tosh.0 Good Varied Programs Phineas Good NFL Live Around Pardon Varied Reba Varied Programs Contessa Contessa Paula Cooking
SportCtr
Varied
Diners
Diners
Movie
Movie To Be Announced
Movie
iCarly
Movie iCarly
Sponge.
Sponge.
Victorious Victorious
Say Yes
Say Yes
Say Yes
Say Yes
Bonanza Storms
Storms
Bonanza
Walker, Texas Ranger Law Order: CI Friends Friends Friends Friends (:10) Movie Varied Programs (3:50) Movie
Varied Programs Varied Programs Movie Varied Movie Varied Programs
Movie Varied Medium Medium The Mentalist M*A*S*H M*A*S*H Varied Programs Chris Chris King King (:10) Movie (:45) Movie Varied Programs Movie Varied Movie Varied
november 22, 2012 GEMINI Ð May 22/Jun 21 Take some time to reflect on what you need to get done, Gemini. Things are about to get more hectic, and it will help to know what is on your schedule in the coming days.
CANCER Ð Jun 22/Jul 22 There is no need to put off romantic endeavors, Cancer. Make time to further relationships, and you will be happier for having made the additional effort.
LEO Ð Jul 23/Aug 23 Leo, a casual encounter with an old friend goes by like no time has elapsed at all. Agree to keep in touch and spend more time together going forward.
VIRGO Ð Aug 24/Sept 22 Virgo, there are too many messes to clean up, so instead of digging in you may just decide to procrastinate a little longer. Just be sure to make up the time later on.
r
LIBRA Ð Sept 23/Oct 23 You may find that things that are beneficial for others may not always be beneficial for you, Libra. But often you have to make sacrifices for the benefit of the entire group.
SCORPIO Ð Oct 24/Nov 22 Certain challenges may be tough to conquer, Scorpio. But with the right help you can get the job done. Gemini may be your shining light this week.
For entertainment purposes only Ð SAGITTARIUS Nov 23/Dec 21 There is no point in speculating about your finances, Sagittarius. Keep track of your deposits and withdrawals so you have a handle on all accounts. CAPRICORN Ð Dec 22/ Jan 20 Now is not the time to leap without looking, Capricorn. You have to be cautious with your choices and actions this time of the month. DonÕt make waves so close to the holidays.
AQUARIUS Ð Jan 21/Feb 18 Aquarius, although you do plenty, someone around the house could really use some more assistance from you. It may take some juggling of your schedule to accomplish.
PISCES Ð Feb 19/Mar 20 Usually your outpouring of creative juices is unstoppable, Pisces. This week you could have a little trouble thinking up new ideas.
Page B8 • The Bastrop Advertiser
Thursday, November 22, 2012
wAtts new in the Arts
Holidays are upon us - shop for that unique gift The fourth Thursday of November traditionally kicks off the holiday season and when my boys were young, we always put our Christmas tree up the Saturday after Thanksgiving Day. There was anticipation in looking forward to Thanksgiving Day; it meant that Christmas and the New Year cel-
JO
wAtts ebrations were upon us! Every day for the next six weeks would be special! These days, it seems that “the holidays” be-
gin earlier each year. I hear so many people comment on it, so it’s not new news, but gee! Who wants to be planning Thanksgiving in September? It’s probably because I hate shopping that I object to it all being about “Black Friday.” Sure, it’s not my business who shops and when. If folks want to camp out all night, it’s certainly their right to do so, but I don’t want to leave my house on the day after T.G. Just leave me out of it, I’ll handmake as many Christmas gifts as I can and Jo Watts/Bastrop Advertiser buy the rest locally from The Mary Nichols Art Center has a new look - and it looks like art. artists, chefs, services and businesses. And that’s the way I’ll antici- Your Own Gingerbread day is two weeks and pate Christmas. House art project. How- two days until SaturFestival of Lights ever, due to a limited day, December 1. in Smithville is coming number of gingerbread up, as always the first houses and decorations, LPAA Saturday in December. this activity does reThe Lost Pines ArtiYou can look forward to quire a pre-registration sans Alliance will once the lighted parade, lead and payment of $12. again feature works of by the award winning Contact the chamber of art and handmade gifts Smithville Tiger band, commerce at 237-2313 all priced at under $100. arts and crafts along to reserve your ginger- It’s a great place to shop Main Street and some- bread house. year round and especialthing new: a chance If you would like to ly at this time of year. for any culinary art- participate in the Fun You might find the perists to showcase their Run / Walk, or in the pa- fect one of a kind item for talents with the Build rade or the holiday mar- someone special and, at ket, stop by the chamber the same time, be helpoffice and fill out a form ing to support the arts or print them from our in your community. The website at- Mary Nichols Arts Cenvilletx.org. ter, 301 Burleson St. in The countdown to- Smithville, is open from
1-4 p.m. on Fridays and Sundays and 11 a.m. to 4 p.m. on Saturdays. You’ll recognize the place on the corner of Loop 230 and Burleson by the wonderful paintings displayed on the front and side of the house.
BFAG
This month’s featured artist at the Bastrop Fine Arts Guild, 1009 Main in Bastrop, is John Schaeffer. My husband and I went by to see his show last week and, even though we’ve seen his work and expected it to be great, we couldn’t help but being impressed once again. If you look carefully at the reflections in the chrome and shiny paint jobs on his vehicles, you’ll see pictures in the pictures. The guild is open every day except Mondays from 10 a.m. to 5 p.m. and even non-car-lovers like me will love John’s paintings. When looking for art in Bastrop, don’t forget Deborah Johnson’s Art Connections on Pine Street. She’s busy organizing the first Friday Art Walk for Dec. 7, but you’ll find her gallery full of new works by local and not so local artists. It’s worth the walk ‘round the corner.
�������������������� | https://issuu.com/wrightcy/docs/genex112212 | CC-MAIN-2017-30 | refinedweb | 27,245 | 69.52 |
Sorry for the big post. I’m having a lot of confusion when trying to
implement a simple model relationship that uses all the good things of
ActiveRecord.
I need to store some dinamic entries about knowledge of certain people
on
the database. A knowledge is measured with another table, with a range
of
possible values for a knowledge. This is a reduced schema of my
database:
create table people (
id int not null auto_increment,
name varchar(255) not null,
primary key (id)
);
create table knowledges (
id int not null auto_increment,
name varchar(255) not null,
primary key (id)
);
create table experiences (
id int not null auto_increment,
name varchar(255) not null,
primary key (id)
);
create table experiences_people (
person_id int not null,
knowledge_id int not null,
experience_id int not null
);
Let’s insert some default values:
insert into knowledges (name) values (“Books”);
insert into knowledges (name) values (“Music”);
insert into knowledges (name) values (“Sports”);
insert into experiences (name) values (“Bad”);
insert into experiences (name) values (“Medium”);
insert into experiences (name) values (“Good”);
And this are my models for this application:
class Person < ActiveRecord::Base
has_and_belongs_to_many :knowledges
end
class Knowledge < ActiveRecord::Base
has_and_belongs_to_many :people
end
class Experience < ActiveRecord::Base
end
This is why I need for the user interface:
During a user (person) creation, I need to display a page with a list
of
all possible knowledges and a dropdown list for each one (with, of
course,
all possible experience values).
During a user (person) edition, I need the same as before, but of
course
with stored values for each knowledge already selected, and with the
possibility of change one or more values at the same time, storing that
new
values on the database (with substitution of old values, if they exists,
or
creating new entries for new values).
During a knowledge view, a list with all people that has a value for
that
knowledge, displaying the experience for each one.
And this is what I started to code:
- For person creation, I have the following in people_controller.rb on
method “new”:
class PeopleController < ApplicationController
def new
@person = Person.new(@params[:person])
@knowledges = Knowledge.find(:all, :order => ‘name’).collect { |k| [
k.name, k.id ] }
@experiences = Experience.find(:all, :order => ‘name’).collect { |k| [
k.name, k.id ] }
if @request.post?
@knowledges = @params[:knowledges]
for knowledge in @knowledges
new_knowledge = Knowledge.new
## In fact, I need to create a new entry in experiences_knowledges
with
## the knowledge id and the experience id selected. How can I do
that?
@person.knowledges << knowledge
end
if @person.save
flash[‘notice’] = ‘Person was successfully created.’
redirect_to :action => ‘show’, :id => @person.id
end
end
end
And this is my new.rhtml view:
<%= start_form_tag :action => “new” %><%= end_form_tag %>
As you can see, there is a comment on the controller code where must be
the
creation of that entry. And I don’t know how to add knowledge entry to @
person.knowledges collection. I have the same (but more difficult to me)
problem when I think in coding edit action, where I’ll need to edit a
knowledge that is already on the collection.
And for the list of people that has a value for a certain knowledge, I
suposse that it’s possible to use the ‘people’ accessor to get the
people
collection associated with the person object. But how can I get the
experience associated with that person and knowledge to display it?
Thanks in advance for reading to here. Any suggestion will be really
thankfull. | https://www.ruby-forum.com/t/has-and-belongs-to-many-and-collections-management/89907 | CC-MAIN-2020-50 | refinedweb | 575 | 53.21 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.