TL;DR: Security can't be overemphasized when it comes to developing software applications. A single-factor authentication system (e.g., username and password) is no longer safe enough. If credentials are stolen, a user can be impersonated. Implementing a multi-factor authentication system increases security by requiring the user to provide an additional set of credentials before they are granted access. Implementing multi-factor authentication can be time-consuming, challenging, and often difficult to get right. However, in this post I'll show you how to quickly implement multi-factor authentication in your React applications in just a few minutes without breaking a sweat!

Note: You need a fair knowledge of React to get the most out of this tutorial.

Table of Contents

1. Set Up a React Application

It's very easy to set up a React application these days, all thanks to Facebook's famous create-react-app tool. If you haven't installed it yet, do so, then go ahead and create a new React app like so: Create React App

2. Install the following dependencies

There are some modules we'll need in the later part of this tutorial. Let's install them now. Open up your terminal and run this command:

npm install auth0-lock bootstrap classnames jwt-decode react-bootstrap react-router --save

auth0-lock - Adds the Auth0 Lock widget for easy authentication
bootstrap - To beautify our interface
classnames - For joining class names together
jwt-decode - To decode our JSON Web Tokens
react-bootstrap - Bootstrap for React
react-router - For routing

3. Set Up Authentication Components, Routing and Styling

There are several ways to set up authentication in a React app, but we'll choose a service that does the heavy lifting for us. With Auth0, you can easily set up authentication in your React apps. Open up your src/ directory and delete everything inside except the index.js file.
Now, replace the code in the index.js file with the following:

index.js

import React from 'react';
import ReactDOM from 'react-dom';
import App from './containers/App/App';
import './app.css';
import 'bootstrap/dist/css/bootstrap.css';
import { hashHistory } from 'react-router';
import makeRoutes from './routes';

const routes = makeRoutes();

ReactDOM.render(
  <App history={hashHistory} routes={routes} />,
  document.getElementById('root')
);

Go ahead and create a routes.js and an app.css file inside the src/ directory. Add code to the files respectively like so:

routes.js

import React from 'react';
import { Route } from 'react-router';
import makeMainRoutes from './views/Main/routes';

export const makeRoutes = () => {
  const main = makeMainRoutes();
  return (
    <Route path=''>
      {main}
    </Route>
  );
};

export default makeRoutes;

app.css

@import url("styles/colors.css");

*,
*:after,
*:before {
  box-sizing: border-box;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  font-smoothing: antialiased;
  text-rendering: optimizeLegibility;
  font-size: 16px;
}

body {
  color: var(--dark);
  font-weight: lighter;
  font: 400 15px/22px 'Open Sans', 'Helvetica Neue', Sans-serif;
  font-smoothing: antialiased;
  padding: 0;
  margin: 0;
}

For brevity, head over to and copy the containers, styles, utils, views directories and their contents into your app. With this, we should have an almost-ready authentication app.

4. Set Up Authentication

We have an authentication helper class, src/utils/AuthService.js, that encapsulates the login functionality, and a JWT helper file, src/utils/jwtHelper.js, that checks the validity of JSON Web Tokens in our app. Now, open up src/views/Main/routes.js. In this file, we have a line like so:

....
const auth = new AuthService(_AUTH0_CLIENT_ID_, _AUTH0_DOMAIN_);

We need to replace _AUTH0_CLIENT_ID_ and _AUTH0_DOMAIN_ with real values. If you don't have an account with Auth0, go ahead and sign up for a free account to continue.
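The jwtHelper logic boils down to decoding the token's payload and checking its exp claim. Here is a rough sketch of that idea — the function names are mine, and Node's Buffer stands in for the jwt-decode package purely for illustration:

```javascript
// Decode a JWT payload without verifying the signature -- enough to
// read claims like `exp` on the client, which is all jwt-decode does.
function decodePayload(token) {
  const payload = token.split('.')[1];
  // JWTs use base64url; convert to regular base64 before decoding.
  const base64 = payload.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));
}

// A token is expired when its `exp` claim (in seconds) is in the past.
function isTokenExpired(token, nowSeconds = Date.now() / 1000) {
  const { exp } = decodePayload(token);
  return exp !== undefined && exp < nowSeconds;
}

// Build a dummy unsigned token to demonstrate.
const header = Buffer.from(JSON.stringify({ alg: 'none' })).toString('base64');
const body = Buffer.from(JSON.stringify({ sub: 'user1', exp: 2000000000 })).toString('base64');
const token = `${header}.${body}.`;
console.log(isTokenExpired(token, 1000000000)); // false
console.log(isTokenExpired(token, 2100000000)); // true
```

Note that this only reads claims; it does not validate the token — validation happens on Auth0's side.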
Then create a new client app and go to the Settings tab to grab the keys like so: Auth0 Credentials

Now, run npm start; your welcome page should look like this. Next, we will set up multi-factor authentication.

5. Set Up Multi-factor Authentication

With Auth0, it is very easy to set up multi-factor authentication. On your Auth0 dashboard, click on the Multifactor Auth tab on the left. You'll get a page like the one below: You can choose what form of multi-factor authentication you want. In this tutorial, we'll go with push notifications, so go ahead and turn that on by sliding the knob to the right. After you have done that, run the application again and try to sign up. After clicking on the login button and signing up, we'll have: 2nd Factor Authentication Interface

Here, there is the option to download the Auth0 Guardian app from either the App Store or Google Play. Underneath that, there is the option to use Google Authenticator or SMS, depending on the application's settings. Let's go with Auth0 Guardian. Once you have downloaded it, the next screen displays a code that you need to scan with the app like so: The user will have to open up the Auth0 Guardian app on the mobile device like so: Note: I'm using an iPhone. Opening Auth0 Guardian

The user will have to scan the QR code. As soon as it is scanned, the next screen is presented like so: Save the number. It's useful when you need to log in and you don't have your device with you! Proceed by checking the box like so: Continue; the next screen presented is the one below: Click on continue. You'll receive a notification on your phone like the one below: You can allow or deny the request from the home screen as shown below: Or you can open your phone and you'll see the request as shown below: Clicking on the request quickly brings up a screen that gives you the option to allow the request, with some information about the incoming login request too. Pretty slick, right?
Once you allow the request, the web application gets notified that you have accepted it and proceeds to log you in like so: The user has finally been logged in

Multi-factor authentication with Auth0 Guardian is really that simple. No complications, no hassle! The code for this application is available on GitHub. Check it out!

Conclusion

Holy Moly! We have been able to integrate multi-factor authentication into a React application within just a few minutes. The great thing about multi-factor authentication with Auth0 is that there are lots of configuration options available to you as a developer or an admin. Go forth and make your applications a massive stronghold by adding second-factor authentication to your apps today!

This content is sponsored via Syndicate Ads
https://scotch.io/tutorials/multifactor-authentication-in-your-react-apps
Multiple GPU Support

Overview

Production-grade solutions now use multiple machines with multiple GPUs to run the training of neural networks in reasonable time. This tutorial will show you how to run DALI pipelines by using multiple GPUs.

Run Pipeline on Selected GPU

Start with a pipeline that is very similar to the basic pipeline from the Getting started section. This pipeline uses the GPU to decode the images. This is specified with the mixed value of the device argument.

[1]:
import nvidia.dali.fn as fn
import nvidia.dali.types as types
from nvidia.dali.pipeline import Pipeline

image_dir = "../data/images"
batch_size = 4

def test_pipeline(device_id):
    pipe = Pipeline(batch_size=batch_size, num_threads=1, device_id=device_id)
    with pipe:
        jpegs, labels = fn.readers.file(file_root=image_dir, random_shuffle=False)
        images = fn.decoders.image(jpegs, device='mixed', output_type=types.RGB)
        pipe.set_outputs(images, labels)
    return pipe

To run this pipeline on a selected GPU, we need to adjust the device_id parameter value. The ID ordinals are consistent with your CUDA device IDs, so you can run it on the GPU with ID = 1.

Important: Remember that the following code will only work on systems with at least 2 GPUs.

[2]:
# Create and build the pipeline
pipe = test_pipeline(device_id=1)
pipe.build()

# Run pipeline on selected device
images, labels = pipe.run()

We can print the images.

[3]:
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
%matplotlib inline

def show_images(image_batch):
    columns = 4
    rows = (batch_size + 1) // columns
    fig = plt.figure(figsize=(32, (32 // columns) * rows))
    gs = gridspec.GridSpec(rows, columns)
    for j in range(rows * columns):
        plt.subplot(gs[j])
        plt.axis("off")
        plt.imshow(image_batch.at(j))

[4]:
show_images(images.as_cpu())

Sharding

It is not enough to run pipelines on different GPUs. During training, each GPU needs to handle different samples at the same time; this technique is called sharding.
To perform sharding, the dataset is divided into multiple parts, or shards, and each GPU gets its own shard to process. In DALI, sharding is controlled by the following parameters of every reader op: shard_id and num_shards. For more information on these parameters you can look into any reader operator documentation.

In the following sample, you can see how a pipeline uses shard_id and num_shards:

[5]:
def sharded_pipeline(device_id, shard_id, num_shards):
    pipe = Pipeline(batch_size=batch_size, num_threads=1, device_id=device_id)
    with pipe:
        jpegs, labels = fn.readers.file(
            file_root=image_dir, random_shuffle=False,
            shard_id=shard_id, num_shards=num_shards)
        images = fn.decoders.image(jpegs, device='mixed', output_type=types.RGB)
        pipe.set_outputs(images, labels)
    return pipe

Create and run two pipelines on two different GPUs, taking samples from different shards of the dataset.

[6]:
# Create and build pipelines
pipe_one = sharded_pipeline(device_id=0, shard_id=0, num_shards=2)
pipe_one.build()

pipe_two = sharded_pipeline(device_id=1, shard_id=1, num_shards=2)
pipe_two.build()

# Run pipelines
images_one, labels_one = pipe_one.run()
images_two, labels_two = pipe_two.run()

When the images are printed, we can clearly see that each pipeline processed different samples.

[7]:
show_images(images_one.as_cpu())

[8]:
show_images(images_two.as_cpu())

In this simple tutorial we showed you how to run DALI pipelines on multiple GPUs by using sharding. For more comprehensive examples in different frameworks, please refer to the training scripts that are available for ResNet50 in MXNet, PyTorch and TensorFlow.

Note: These scripts work with multiple-GPU systems.
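The way a reader splits the dataset between shards can be sketched without DALI itself: each shard covers a contiguous slice of the sample indices. This is a hedged illustration of the idea only — DALI's actual partitioning logic lives inside the reader op, and the function name here is mine:

```python
def shard_indices(num_samples, shard_id, num_shards):
    """Return the sample indices that belong to one shard.

    Mirrors an even split: boundaries are computed so that every
    sample lands in exactly one shard, even when num_samples is
    not divisible by num_shards.
    """
    start = num_samples * shard_id // num_shards
    end = num_samples * (shard_id + 1) // num_shards
    return list(range(start, end))

# Two GPUs, 10 samples: each pipeline sees its own half of the data.
print(shard_indices(10, 0, 2))  # first shard
print(shard_indices(10, 1, 2))  # second shard
```

Running the two sharded pipelines above corresponds to each GPU iterating over one of these disjoint index ranges.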
https://docs.nvidia.com/deeplearning/dali/master-user-guide/docs/examples/general/multigpu.html
Gaining high frame rate from ELP camera (python)

I'm using an ELP USB camera that is supposedly rated at 100fps at 640x480 quality, but I don't seem to be getting anywhere near that frame rate. I was wondering if anyone has used an ELP camera with OpenCV and managed to achieve a frame rate close to 100 fps? Or does anyone have any advice to help increase the frame rate?

I am using Ubuntu 14.04 with OpenCV version 2.4.8. I have already tested that the output is MJPEG at 640x480, and I am running cv2.VideoCapture in one thread and placing the frames on a queue. From the main thread I repeatedly ask for frames but only return when the result is not None. This gives a frame rate hovering around 30fps. Once I have the image I am performing contour detection, but essentially I would like to get the frame rate up as high as possible. Below is the code I'm using just to test the frame rate.

import cv2
import time
import numpy as np
from datetime import datetime
from threading import Thread, Lock, Condition
from Queue import Queue

class WebcamVideoStream:
    def __init__(self, src=0):
        # initialize the video camera stream
        self.stream = cv2.VideoCapture(src)
        # initialize the variable used to indicate if the thread should
        # be stopped
        self.stopped = False
        self.frame = None

    def start(self):
        global qt
        self.stopped = False
        qt = Queue(10)
        # start the thread to read frames from the video stream
        thread1 = Thread(target=self.update, args=())
        thread1.start()
        return self

    def update(self):
        global qt
        # keep looping infinitely until the thread is stopped
        while True:
            if self.stopped:
                return
            _, self.frame = self.stream.read()
            qt.put(self.frame)

    def read(self):
        global qt
        if not qt.empty():
            self.CurrFrame = qt.get()
            if self.CurrFrame is not None:
                return self.CurrFrame
        if self.stopped:
            return

    def stop(self):
        print('Stop')
        # indicate that the thread should be stopped
        self.stopped = True
        return self

vs = WebcamVideoStream(src=-1).start()
time.sleep(1)

i = 0
t0 = time.time()
while i < 100:
    frame = vs.read()
    while frame is None:
        frame = vs.read()
    i = i + 1

rate = 100 / (time.time() - t0)
print(rate)
cv2.destroyAllWindows()
vs.stop()

Thanks in advance. I'm struggling to get the embedded code to format, sorry.

os? opencv version? (i.e. on win, dshow will force that into bgr, which takes time, but what would you want to do even with an mjpeg image, if not uncompress it?) then, this is basically an io bottleneck, you won't gain anything with multithreading

In the demo I'm doing absolutely nothing with them. I had assumed that because (most of the time) when a frame is requested None is returned, the requests are coming in far faster than the camera can output them and therefore the queue never has more than one frame on it.

apologies, i removed a comment, you the other, so the missing information: again, i'm quite sure that cv2.VideoCapture will have to uncompress your mjpg image in read() (else you could not use it for contour detection)

Ah ok. So this is causing the bottleneck? Is there any method to use to improve the frame rate?

i don't think there's much you can do from python. if it was c++, i'd say: try to use libv4l directly, without using VideoCapture (there's also a hidden fifo queue, threads & locks & whatnot in the v4l wrapper)

I decided to upgrade to OpenCV 3 and this has made all the difference. Frame rate immediately rose greatly. Thanks for all your help, Berak.
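A quick way to sanity-check the measurement itself, independent of the camera, is to compute FPS from a list of capture timestamps. This is a hedged sketch with my own function name — the same span-based calculation as the 100-frame timing loop in the question, just factored out:

```python
def measure_fps(timestamps):
    """Estimate frames per second from a list of capture timestamps.

    Uses the total span between the first and last frame, the same
    calculation as the 100-frame timing loop in the question.
    """
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

# 101 frames captured exactly 10 ms apart -> 100 fps
stamps = [i / 100 for i in range(101)]
print(measure_fps(stamps))  # 100.0
```

Feeding this real time.time() readings taken around each successful read() separates camera throughput from any overhead in the reading loop.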
https://answers.opencv.org/question/164160/gaining-high-frame-rate-from-elp-camera-python/
Here is your answer. I hope this posting and source code help you. This page also explains well how to use background subtraction.

This is simple example code for background subtraction. The input is cam video. Before getting video frames, the source code sets some options: MOG2, ROI, and morphology. In the loop, blur is an option to reduce noise, and morphology is also optional. Depending on your project's purpose and camera environment, you can add more pre-processing. I don't add labeling code here; normally labeling is done after background subtraction, for blob selection (valid size or not), blob counting, and interpretation of blobs.

More information:
- Background subtraction code
- Morphology
- Labeling (findContours)

Thank you.

#include "opencv2/opencv.hpp"
using namespace cv;

int main(int, char**)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    Ptr<BackgroundSubtractorMOG2> MOG2 = createBackgroundSubtractorMOG2(3000, 64);
    // Options
    // MOG2->setHistory(3000);
    // MOG2->setVarThreshold(128);
    // MOG2->setDetectShadows(1); // shadow detection on/off

    Mat Mog_Mask;
    Mat Mog_Mask_morpho;

    Rect roi(100, 100, 300, 300);

    namedWindow("Origin", 1);
    namedWindow("ROI", 1);

    Mat frame;
    Mat RoiFrame;
    Mat BackImg;

    int LearningTime = 0; // 300;

    Mat element;
    element = getStructuringElement(MORPH_RECT, Size(9, 9), Point(4, 4));

    for (;;)
    {
        cap >> frame; // get a new frame from camera
        if (frame.empty())
            break;

        // option
        blur(frame(roi), RoiFrame, Size(3, 3));
        // RoiFrame = frame(roi);

        // Mog processing
        MOG2->apply(RoiFrame, Mog_Mask);

        if (LearningTime < 300)
        {
            LearningTime++;
            printf("background learning %d \n", LearningTime);
        }
        else
            LearningTime = 301;

        // Get background image
        MOG2->getBackgroundImage(BackImg);

        // morphology
        morphologyEx(Mog_Mask, Mog_Mask_morpho, CV_MOP_DILATE, element);

        // Binary
        threshold(Mog_Mask_morpho, Mog_Mask_morpho, 128, 255, CV_THRESH_BINARY);

        imshow("Origin", frame);
        imshow("ROI", RoiFrame);
        imshow("MogMask", Mog_Mask);
        imshow("BackImg", BackImg);
        imshow("Morpho", Mog_Mask_morpho);

        if (waitKey(30) >= 0)
            break;
    }

    return 0;
}

Hi. How do you measure the size of the ROI? What software did you use to get the coordinates of the ROI?

Nice work! This sample code (and links) were quite helpful for helping me understand background subtraction. One question: what is the purpose of the "LearningTime" variable? Why is it capped at 300 / who decided that the limit is 300? What is so significant about the first 300 frames?

Good work. Can you translate this code into the Python language? Can your code be improved and optimized for larger data and speed?

Good evening sir. I use QtCreator with OpenCV 3.1.0. When I execute your code it crashes. Then I realized that it is the instruction MOG2->apply(RoiFrame, Mog_Mask); that crashes the program, and when I comment out this statement it runs. Excuse me sir, can you give me an idea?

wonderful task
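The labeling step the post leaves out can be sketched without OpenCV. This is a hedged, pure-Python illustration of what findContours-style blob extraction does on a binary mask — the names and the 4-connected flood fill are my own choices, not OpenCV's implementation:

```python
def label_blobs(mask):
    """4-connected component labeling on a binary mask (list of 0/1 rows).

    Returns a dict mapping label -> number of pixels: the per-blob
    size info used to keep valid blobs and drop noise.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    next_label = 1
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                # flood fill this blob with a fresh label
                stack, count = [(y, x)], 0
                labels[y][x] = next_label
                while stack:
                    cy, cx = stack.pop()
                    count += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                sizes[next_label] = count
                next_label += 1
    return sizes

# Two separate blobs: one of 3 pixels, one of 1 pixel.
mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 0]]
print(label_blobs(mask))  # {1: 3, 2: 1}
```

In the C++ pipeline above, this step would run on Mog_Mask_morpho after the threshold, with small blobs filtered out by size.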
http://study.marearts.com/2016/01/opencv-background-subtraction-sample.html
In this tutorial we will learn how we can define class methods and make our classes more functional. We will also learn about the concept of method overloading in Java.

Methods describe the behavior of an object. A method is a collection of statements that are grouped together to perform an operation. For example, if we have a class Human, then this class should have methods like eating(), walking(), talking() etc., which describe the behavior that objects of this class will have.

Syntax:

modifier return-type methodName(parameter-list)
{
    //body of method
}

public String getName(String st)
{
    String name = "StudyTonight";
    name = name + st;
    return name;
}

Modifier: The access type of the method. We will discuss it in detail later.
Return Type: A method may return a value. The data type of the value returned by a method is declared in the method heading.
Method name: The actual name of the method.
Parameter: Values passed to a method.
Method body: The collection of statements that defines what the method does.

While talking about methods, it is important to know the difference between the two terms parameter and argument. A parameter is a variable defined by a method that receives a value when the method is called. Parameters are always local to the method; they don't have scope outside the method. An argument is a value that is passed to a method when it is called.

call-by-value and call-by-reference

There are two ways to pass an argument to a method. NOTE: There is only call by value in Java, not call by reference.

call-by-value

public class Test
{
    public void callByValue(int x)
    {
        x = 100;
    }

    public static void main(String[] args)
    {
        int x = 50;
        Test t = new Test();
        t.callByValue(x); //function call
        System.out.println(x);
    }
}

Output: 50

If two or more methods in a class have the same name but different parameters, it is known as method overloading. Overloading always occurs in the same class (unlike method overriding). Note: Overloaded methods can have different access modifiers.
There are two different ways of method overloading: changing the number of arguments, and changing the data type of the arguments.

For example, a sum() method can be overloaded to accept either two int arguments or two float arguments, producing output such as:

Sum is 13
Sum is 8.4

You can see that the sum() method is overloaded twice: the first version takes two integer arguments, the second takes two float arguments.

Similarly, a method overloaded on a different number of arguments might produce:

Area is 40
Area is 48
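A minimal sketch of such an overloaded sum() follows; the argument values (6 and 7, 4.2f and 4.2f) are my own choices to reproduce the sample output above, since the original example code did not survive:

```java
// Overloading by argument type: same name, different parameter lists.
class Overload {
    // version selected for two int arguments
    static int sum(int a, int b) {
        return a + b;
    }

    // version selected for two float arguments
    static float sum(float a, float b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println("Sum is " + sum(6, 7));       // int version
        System.out.println("Sum is " + sum(4.2f, 4.2f)); // float version
    }
}
```

The compiler picks the version whose parameter list matches the argument types at the call site, which is why both calls can share the name sum.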
https://www.studytonight.com/java/method-and-overloaded-method.php
7.96 INSERTXMLBEFORE

Note: The INSERTXMLBEFORE function is deprecated. It is still supported for backward compatibility. However, Oracle recommends that you use XQuery Update instead. See Oracle XML DB Developer's Guide for more information.

Syntax

Purpose

INSERTXMLBEFORE inserts a user-supplied value into the target XML before the node indicated by the XPath expression. This function is similar to INSERTXMLAFTER, but it inserts before, not after, the target node. Compare this function with INSERTCHILDXML.

XMLType_instance is an instance of XMLType.

XPath_string is an XPath expression indicating one or more nodes before which one or more nodes are to be inserted. You can specify an absolute XPath_string with an initial slash or a relative XPath_string by omitting the initial slash. If you omit the initial slash, then the context of the relative path defaults to the root node.

value_expr is a fragment of XMLType that defines one or more nodes being inserted and their position within the parent node. It must resolve to a string.

The optional namespace_string provides namespace information for the XPath_string. This parameter must be of type VARCHAR2.

See Also: Oracle XML DB Developer's Guide for more information about this function

Examples
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/INSERTXMLBEFORE.html
Details

Description

When the Hadoop framework does the sorting, it will try to use the binary version of a comparator if available. The benefit of a binary comparator is that we do not need to instantiate the objects before we compare them. We see a ~30% speedup after we switch to a binary comparator. Currently, Pig uses a binary comparator in the following cases:

1. When the semantics of the order don't matter. For example, in distinct, we need to do a sort in order to filter out duplicate values; however, we do not care how the comparator sorts the keys. Group by also shares this characteristic. In this case, we rely on Hadoop's default binary comparator.

2. When the semantics of the order matter, but the key is of a simple type. In this case, we have implementations for simple types, such as integer, long, float, chararray, databytearray, and string.

However, if the key is a tuple and the sort semantics matter, we do not have a binary comparator implementation. This especially matters when we switch to using secondary sort. In secondary sort, we convert the inner sort of a nested foreach into the secondary key and rely on Hadoop to sort on both the main key and the secondary key. The sorting key becomes a two-item tuple. Since the secondary key is the sorting key of the nested foreach, its sort semantics matter. It turns out we do not have a binary comparator once we use secondary sort, and we see a significant slowdown.

A binary comparator for tuples should be doable once we understand the binary structure of the serialized tuple. We can focus on the most common use case first, which is "group by" followed by a nested sort. In this case, we will use secondary sort. The semantics of the first key do not matter but the semantics of the secondary key do. We need to identify the boundary between the main key and the secondary key in the binary tuple buffer without instantiating the tuple itself. Then, if the first keys are equal, we use a binary comparator to compare the secondary keys.

The secondary key can also be a complex data type, but for the first step we focus on a simple secondary key, which is the most common use case. We mark this issue as a candidate project for the "Google Summer of Code 2010" program.

Activity

Hi Gianmarco, thank you for your interest. Avro is one thing we definitely want to try. We have some previous work on Avro (PIG-794), but we haven't been actively working on it for a while. The previous implementation might not be quite right, it is based on a very old version of Avro, and in the end we did not include it in the Pig codebase. We will be very happy if you want to try it again, and we would recommend that you build it from scratch. We are very interested to see the performance and additional gains provided by Avro.

I will start to look at the documentation and source code of both Pig and Avro to get a rough idea of the work to do. Maybe having a look at the old patch would be good to identify the points in the source code where changes are needed? Where can I discuss this issue and ideas to solve it? Mailing list? IRC? Do you have any other suggestions?

Hi, I have been reading the source code and the referenced PIG-1038 issue. Probably Avro integration is too big of a project for GSoC, but implementing the tuple binary comparator seems doable. I will write a proposal; any advice for it? My idea of the project's breakdown would be like this:

- Identify the cases that can be optimized and the appropriate visitor for those.
- Write a unit test for this optimization.
- Implement the comparator knowing the data types of the tuple.
- Write a second unit test with different types.
- Write the logic to extract the tuple boundary from schema information (I suppose this optimization is possible only if the schema is known).
- Try to extend it to the general case of a complex data type as the secondary key.

Thoughts?

Thanks Gianmarco. My suggestion is to divide it into two steps: 1. make the binary comparator work, 2.
integrate it into the current Pig code. It is better to make sure we have a quality deliverable for step 1 before we move to step 2.

I have drafted my proposal at . Any feedback is more than welcome.

Thanks Gianmarco, the proposal looks good. Besides unit tests, we need to add some performance tests in both phase 1 and phase 2.

Here is my first draft of the binary comparator, together with some simple tests. This comparator should replace PigTupleRawComparator. I assume that the tuple we are comparing is a NullableTuple that wraps a DefaultTuple. There is a comment in the old comparator that says:

// Users are allowed to implement their own versions of tuples, which means we have no
// idea what the underlying representation is. This will need to be addressed.

If I can tell that the tuple is not a DefaultTuple by looking at its data type byte, I can switch back to object deserialization. This means that the user must not use the same DataType.TUPLE as the default tuple. The comparator iterates over the serialized tuples and compares them following the same logic as the old comparator (first check if null, then their sizes, then compare field by field; if the fields are of different types compare the data types, else compare the values). For now I implemented the logic only for simple types; I plan to add bytearrays and chararrays. I do not plan to add support for complex data types; in that case I will fall back to object deserialization. The implementation uses a ByteBuffer to easily iterate over the values. I don't know if it is acceptable or what its performance impact is, but it is very handy. The alternative is manual bookkeeping of a cursor inside the byte arrays. To do this I would probably use a map that tells me how many bytes each data type uses. This map should probably go somewhere else though (DataType? DataReaderWriter?). I commented out an initial attempt for this at the end of the class.
An alternative implementation could be to follow the strategy of the other PigRawComparators, which wrap a comparator from Hadoop. This would require keeping a number of comparators in memory, though. I commented out this approach at the beginning of the class.

TODO:
- Implement fallback behaviour.
- Implement support for bytearrays and chararrays.
- Graceful handling of tuples different from DefaultTuple.
- Add performance tests.

Any comment is more than welcome.

I briefly reviewed the patch, and it looks good. This is the approach we expected. Can we do some initial performance tests first?

I added some simple performance tests. The tests generate 1 million tuples by modifying a prototypical tuple and compare them to the prototype. One test uses the new comparator and the other uses the old one. I generate exactly the same tuples using a fixed seed. I also check the correctness of the comparisons using the normal compareTo() method of the tuples. The logic to generate the tuples is a bit involved: I tried to exercise all the data type comparisons in a uniform manner, so I mutate the first elements of the tuple less, in order to have more probability of getting the comparison further down the tuple. The probabilities are totally made up and do not make much sense. As a first approximation, I see a slight overall speedup in the test. I will do some profiling to see what margins of improvement we have.

I did some more detailed profiling and testing. I ran a slightly modified version of the performance tests in the patch. On my machine, the new comparator takes between 8 and 10 seconds; the old one between 11 and 14. The performance advantage is not so big, but I found also that most of the time in the test is not spent comparing. The profiler shows that most of the time is spent in cloning (probably JUnit internals?):

[junit] 84.0% 0 + 1769 java.lang.Object.clone

For what concerns user code instead, most of the time is spent writing data. Randomization of the input also takes some time. The old comparator takes (0.9% + 0.3% | compareTuple + compare) more than 1% of the time. The new one takes only 0.2% of the time.

[junit] 2.9% 62 + 0 org.apache.pig.data.DataReaderWriter.writeDatum
[junit] 1.1% 23 + 0 java.io.DataOutputStream.writeInt
[junit] 0.9% 19 + 0 java.util.Random.next
[junit] 0.9% 18 + 0 java.io.DataOutputStream.writeLong
[junit] 0.9% 18 + 0 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTupleRawComparator.compareTuple
[junit] 0.6% 11 + 1 org.apache.pig.data.DataReaderWriter.readDatum
[junit] 0.5% 11 + 0 java.io.DataInputStream.readInt
[junit] 0.5% 11 + 0 org.apache.pig.data.DefaultTuple.readFields
[junit] 0.3% 7 + 0 org.apache.pig.impl.io.PigNullableWritable.write
[junit] 0.3% 7 + 0 org.apache.pig.data.DefaultTuple.<init>
[junit] 0.3% 3 + 3 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTupleRawComparator.compare
[junit] 0.2% 5 + 0 java.util.ArrayList.get
[junit] 0.2% 5 + 0 org.apache.pig.data.DefaultTuple.write
[junit] 0.2% 3 + 2 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTupleRawComparatorNew.compare

All in all, these tests are probably not very representative, but I also feel that the speedup we may get with a raw comparator is somewhat limited. I am open to suggestions on how to modify the performance tests to make them more representative.

"The new comparator takes between 8 and 10 seconds. The old one between 11 and 14" — that is roughly a 1.4x speedup. That's significant enough for us to go forward. Can you also attach the performance test code? I want to take a look. Thanks.

Sure, here is the revised file. I left the random testing in for unit testing purposes, and I moved the performance testing to a separate main method. This makes profiling much easier. FYI, for profiling I am using a mixture of hprof (the Java internal profiler) and jrat (). I can prepare a more detailed report of where most of the time is consumed, if needed. Roughly, 80% of the test time is spent in the org.apache.pig.data package to write tuples (DefaultTuple.write(), DataReaderWriter.writeDatum(), and DataType.findType() are the most expensive methods). From targeted profiling I have seen that the raw version of the compare method is around 3x faster than the old one.

I ran your main program. It also counts the tuple bytes generation time, which dominates the CPU profile. I ran it in another way: generate the bytes first and don't include that in the timing, then compare the bytes using the different comparators. Here is my code snippet:

byte[][] toCompare1 = new byte[TIMES][];
byte[][] toCompare2 = new byte[TIMES][];
NullableTuple t;
for (int i = 0; i < TIMES; i++) {
    t = new NullableTuple(test.getRandomTuple(rand));
    t.write(test.dos1);
    toCompare1[i] = test.baos1.toByteArray();
}
for (int i = 0; i < TIMES; i++) {
    t = new NullableTuple(test.getRandomTuple(rand));
    t.write(test.dos2);
    toCompare2[i] = test.baos2.toByteArray();
}

before = System.currentTimeMillis();
for (int loop = 0; loop < 10000; loop++) {
    for (int i = 0; i < TIMES; i++) {
        test.comparator.compare(toCompare1[i], 0, toCompare1[i].length, toCompare2[i], 0, toCompare2[i].length);
    }
}
after = System.currentTimeMillis();

before = System.currentTimeMillis();
for (int loop = 0; loop < 10000; loop++) {
    for (int i = 0; i < TIMES; i++) {
        test.oldComparator.compare(toCompare1[i], 0, toCompare1[i].length, toCompare2[i], 0, toCompare2[i].length);
    }
}
after = System.currentTimeMillis();

In this comparison, I see a 6.5x speedup. Also notice that the code style for Apache is to always use spaces instead of tabs; make sure you take care of this.
I will add support for ByteArrays and CharArrays next, together with fallback behaviour for complex data types.

Added support for the BYTEARRAY and CHARARRAY data types. Added fallback behaviour for unknown data types. Added support for non-raw comparison. I changed the visibility of the charset field in DataType to public, because I need to know which encoding is being used for strings. I am testing the patch locally before submitting it. Given that the performance tests look good, I think the next step would be to handle tuples different from DefaultTuple. This requires some changes to the infrastructure.

Hi Gianmarco, I think we can stick with DefaultTuple, which is the major use case. I doubt there is a way to do a binary compare for an unknown tuple implementation. If the user does not use DefaultTuple, we fall back to the deserialized version.

Ok, if the user does not use DefaultTuple we fall back to the default deserialization case. I added handling of nested tuples via recursion and appropriate unit tests.

Thanks, is the patch ready for review? I think it is.

-1 overall. Here are the results of testing the latest attachment against trunk revision 958666.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 6 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
-1 javac. The applied patch generated 150 javac compiler warnings (more than the trunk's current 145 warnings).
+1 findbugs. The patch does not introduce any new Findbugs warnings.
-1 release audit. The applied patch generated 402 release audit warnings (more than the trunk's current 399 warnings).
-1 core tests. The patch failed core unit tests.
-1 contrib tests. The patch failed contrib unit tests.
Test results: Release audit warnings: Findbugs warnings: Console output: This message is automatically generated.
Addressed a small bug and added the ASF license to source files.

The patch looks good; one comment: where is the code for "if the user does not use DefaultTuple we fall back to the default deserialization case"?

1. If the UTF-8 encoded ordering is the same as Java's String.compareTo ordering (I could not find a quick answer), it would be possible to compare the strings by just comparing the bytes, instead of creating additional String objects.
2. The comparison does not handle chararrays > 65k in length, i.e. when BIGCHARARRAY is used as the type in the encoding. This is a problem even in existing Pig code.
3. The comparison function is based on the DefaultTuple serialization format; we need to make sure it gets used only when DefaultTuple is the default tuple. I am making changes to use a new tuple with a different serialization format in PIG-1472. I think we should have this comparison logic defined in the class/interface where the serialization format is defined. I think it should be part of the InterSedes interface in the patch in PIG-1472.

I added the code for "if the user does not use DefaultTuple we fall back to the default deserialization case". I assume a user-defined tuple will have a DataType byte different from DataType.TUPLE. If this is not the case, I see no way of distinguishing DefaultTuple from any other Tuple implementation. Anyway, I think this issue needs to be properly addressed in the context of PIG-1472. I added support for BIGCHARARRAY. UTF-8 decoding is quite convoluted: it is a variable-length encoding, so we cannot avoid using a String. Before tackling the integration with PIG-1472 I need to familiarize myself with the code in that patch. I will write a proposal for the integration in the next days. I also made some changes to DataByteArray in order to encapsulate the comparison logic in a publicly accessible method. This way the raw comparison is consistent with the behavior of the class, similarly to the other cases where I delegate comparison to the class.
With the changes of PIG-1472, we need to change the raw comparator accordingly:
1. Bag comparison should be changed to compare TINYBAG/SMALLBAG/BAG
2. Tuple comparison should be changed to compare TINYTUPLE/SMALLTUPLE/TUPLE
3. Map comparison should be changed to compare TINYMAP/SMALLMAP/MAP
4. Integer comparison should be changed to compare INTEGER_0/INTEGER_1/INTEGER_INBYTE/INTEGER_INSHORT/INTEGER
5. ByteArray comparison should be changed to compare TINYBYTEARRAY/SMALLBYTEARRAY/BYTEARRAY
6. Chararray comparison should be changed to compare SMALLCHARARRAY/CHARARRAY
7. The raw comparator now depends on the serialization format. We now have two serialization formats, DefaultTuple and BinSedesTuple. It's better to move PigTupleRawComparatorNew inside BinSedesTuple. In this project we only focus on BinSedesTuple, which addresses most use cases.

In the integration code, we shall check whether the TupleFactory is actually BinSedesTupleFactory; if it is, use this raw comparator, otherwise use the original comparator.

I was wrong about the customized tuple: we do not need a fallback scheme for customized tuples. In the serialized format, all Tuples, including customized Tuples, are serialized into the same format.

Looks like UTF-8 encoding is convoluted; we can leave it for now.

More clarification on custom Tuples. There are two cases for custom tuples:
1. The user creates a custom tuple inside a UDF. In this case, we do not have a special serialized format for the custom tuple. After serialization, we cannot tell whether it was a custom tuple; that is to say, we lose track of the tuple implementation after ser/de. Since the serialized format is the same, we can still use the same raw comparator.
2. If the user uses a custom tuple factory (by overriding "pig.data.tuple.factory.name"), then the serialized format may change. If we detect that the tuple factory is not BinSedesTupleFactory, we shall not use this raw comparator.

I have studied PIG-1472 a bit.
My idea is as follows:
1) Write a new method "CompareBinSedesTuple" that uses the new serialization format. The method will look like the one for default tuples that already exists.
2) In the entry point compare(), determine which TupleFactory we are actually using (using instanceof or checking the "pig.data.tuple.factory.name" property). If it is the DefaultTupleFactory or the BinSedesTupleFactory, use the appropriate raw comparison method; otherwise resort to deserialization.
3) Actually, I think the best place for the comparators is inside each TupleFactory. We can later split the comparator into two classes and put them in the appropriate TupleFactory implementations. We will have to add a getRawComparatorClass to the TupleFactory abstract class and modify the code that instantiates the comparator to take the class name from the TupleFactory. (I am not sure of this design; I am guessing on many details.)

This sounds fine. In the case of BinSedesTupleFactory/BinSedesTuple the serialization code is in a subclass (BinInterSedes) of the InterSedes class. Since the comparator function is closely tied to the serialization logic, I think that would be the appropriate place for the comparator implementation code. The BinSedesTupleFactory can return the class obtained from InterSedesFactory.getInterSedesInstance().

If "pig.data.tuple.factory.name" is BinSedesTupleFactory, we should use your raw comparator. DefaultTupleFactory is a minor case; we can use deserialization.

In DataType the type bytes are sorted in such a way that the comparison between different data types yields a standard order. This is achieved by carefully assigning the byte values to the types. In BinInterSedes this is not the case, so, to reproduce the same order, I need to sort the bytes somehow. The easiest way is to reassign the values in a way that is coherent with DataType.
The hard way would be to implement a comparison method with all the possible combinations taken into account, but that would be crazy to maintain. I have the same problem for constants: because for INTEGER_0/1 and BOOLEAN_TRUE/FALSE there is no value to read, and the two data type bytes are different, with the current design I need to ensure that BOOLEAN_TRUE > BOOLEAN_FALSE and INTEGER_1 > INTEGER_0. Furthermore, it would be good to sort the type bytes so that INTEGER > INTEGER_INSHORT > INTEGER_INBYTE, etc.

Hi Gianmarco, I think it's better not to assume INTEGER > INTEGER_INSHORT. What the type tells you is how to read the next datum correctly. So if you see an INTEGER_INSHORT, read it into an integer, so you can compare it with other integer types correctly.

To follow backwards-compatible behaviour I should group all the integers (for example) into the same case statement and then implement all the logic there (if it is INTEGER_0/1 do not read anything, if it is INTEGER_INBYTE read a byte, etc.). So in each case statement I would need to compare each data type with its siblings, implement the logic to tell which one sorts first, and then, if the other data type is not a sibling, impose the global sorting. This will result in quite convoluted code compared to the current patch, because I will not be able to compare data types directly. Is this the desired behaviour?

We need to compare a category of data types, not the data type itself. For example:

if (bt1==INTEGER_0||bt1==INTEGER_1||bt1==INTEGER_INBYTE||bt1==INTEGER_INSHORT||bt1==INTEGER) {
    type1 = INTEGER;
    value1 = readInteger(bt1);
}
if (bt2==INTEGER_0||bt2==INTEGER_1||bt2==INTEGER_INBYTE||bt2==INTEGER_INSHORT||bt2==INTEGER) {
    type2 = INTEGER;
    value2 = readInteger(bt2);
}
if (type1==type2)
    return (value1 < value2 ? -1 : (value1 == value2 ? 0 : 1));
else
    return type1 - type2;

Implemented a new compareBinInterSedesTuple() method. This new method works with data serialized in the PIG-1472 format.
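Daniel's category-based approach can be sketched in Python (the type tags and reader below are hypothetical simplifications for illustration, not the actual BinInterSedes encoding): all integer encodings collapse into one category, each value is decoded according to its own tag, and the decoded values are compared.

```python
# Hypothetical tag values, for illustration only.
INTEGER_0, INTEGER_1, INTEGER_INBYTE, INTEGER_INSHORT, INTEGER = range(5)
INT_TAGS = {INTEGER_0, INTEGER_1, INTEGER_INBYTE, INTEGER_INSHORT, INTEGER}

def read_integer(tag, payload):
    """Decode an integer according to its encoding tag (simplified)."""
    if tag == INTEGER_0:
        return 0           # constant encodings carry no payload
    if tag == INTEGER_1:
        return 1
    return payload         # INBYTE/INSHORT/INTEGER: payload holds the value here

def compare_datum(tag1, payload1, tag2, payload2):
    if tag1 in INT_TAGS and tag2 in INT_TAGS:
        v1 = read_integer(tag1, payload1)
        v2 = read_integer(tag2, payload2)
        return (v1 > v2) - (v1 < v2)
    # Different categories: fall back to the global type ordering.
    return tag1 - tag2
```

An INTEGER_INSHORT holding 300 then compares correctly against an INTEGER_0, without requiring any particular ordering among the integer tags themselves.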
It relies on DataType for data type comparison by translating the data types. I implemented some logic to read unsigned ints from bytes and shorts, because I am using ByteBuffer, which does not implement the DataInput interface. We might want to change that later. For now, complex data types cause the whole tuple to be deserialized, but I have put in some placeholders for methods that deserialize single complex objects and continue the normal execution flow. I plan to fill them in when I move the code inside BinInterSedes, so that I can use the methods that read complex data types (readBag, readMap, etc.). This will involve some juggling between DataInputBuffer and ByteBuffer (this is why we might consider switching away from ByteBuffer) to keep the cursors consistent.

I think the next step would be splitting the comparator and putting it into the serialization class (BinInterSedes). I don't know yet how to modify the class interface to make the comparator class available outside (but I think this relates to phase 2). I see no good place for the class that implements the comparator for DefaultTuple, and Daniel said it is a minor case, so should I just throw away the code for DefaultTuple?

I modified the comparator to be fully recursive for arbitrarily nested data. There is a bit of code duplication for the fallback behaviour; I plan to clean that up later. I use InterSedes.readDatum() to read complex types. I had to modify the readWritable() method to return a WritableComparable. I think it should have been this way from the beginning, looking at DataType.compare and given that it is dealing with a GENERIC_WRITABLECOMPARABLE, as the constant says. I found there is no implementation of a compareTo() method for InternalMaps, neither in the class nor in DataType, so I commented that out. I added some tests for complex data types. I think this could be the final revision for phase 1, before I move the code into BinInterSedes.

The patch looks pretty good.
Thanks Gianmarco! A couple of comments:
1. PigTupleRawComparatorNew:324,332,343,357,367,377,387,399,416,474,483,501,512,etc.: if the GeneralizedDataType is not equal, we should throw an exception to contain the error.
2. PigTupleRawComparatorNew:455-464: if the comparison of two items is not equal, we shall return the result without comparing additional items; that's how we get the performance gain.
3. I am unable to run TestPigTupleRawComparator.main due to OOM. What is the speedup after the change?
4. PigTupleRawComparatorNew:132: we shall move the logic of choosing the right comparator into Pig code, and move the comparators into BinSedesTuple and DefaultTuple. This is part of the integration work; let's mark it as the first thing for phase 2.

Thanks, here are my observations:
1. If you look at the old comparator, it uses DataType.compare(). In DataType:454-458, if the two data types are not equal, the value returned is the difference between the data types. I retained the same behavior in the patch.
2. I think we already do that. There is an additional guard in the for loop, which continues only while rc == 0, on line 452:
for (int i = 0; i < tsz1 && rc == 0; i++) {
3. Yes, I somehow changed the number of tuples without noticing. I went back to 10e3 and I see a 10x improvement in time now.
4. Sure!

Good, I will commit the patch. Let's start with the integration.

Here is my first stab at integration. I split the class into two classes and put them into DefaultTuple and BinInterSedes. This asymmetry is not nice, but the serialization code is in different places for different tuples, and I wanted to keep the comparison code as close as possible to the serialization code. I modified the TupleFactory interface, adding a method to get the tuple comparator class. This method is implemented in BinSedesTupleFactory and overridden in DefaultTupleFactory. BinSedesTupleFactory delegates it to a package method in BinInterSedes (where the actual code is).
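The early-exit behaviour referenced in point 2 above — stop at the first unequal field — is just lexicographic comparison; a generic Python sketch (an illustration, not the Pig implementation):

```python
def compare_tuples(t1, t2, compare_field):
    """Lexicographic tuple compare with early exit."""
    # Sizes are compared first, as the Pig tuple comparators do.
    rc = (len(t1) > len(t2)) - (len(t1) < len(t2))
    if rc != 0:
        return rc
    for a, b in zip(t1, t2):
        rc = compare_field(a, b)
        if rc != 0:
            return rc  # early exit: remaining fields are never compared
    return 0

cmp_num = lambda a, b: (a > b) - (a < b)
```

The `rc == 0` guard in the Java loop plays exactly the role of the early `return rc` here: once a field differs, no further work is done.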
DefaultTupleFactory delegates it to DefaultTuple (where the actual code is). The actual code for both comparators is just a cut-and-paste of the methods in PigTupleRawComparatorNew; I just adjusted the entry points a bit. I left the new comparator and tests untouched for now. Please let me know of any comments you may have.

I have some problems understanding the SecondaryKeyOptimizer. For example, given this query:

A = LOAD 'input1' AS (a0,a1,a2);
B = LOAD 'input2' AS (b0,b1,b2);
C = cogroup A by (a0,a1), B by (b0,b1) parallel 2;
D = foreach C { E = limit A 10; F = E.a2; G = DISTINCT F; generate group, COUNT(G);};

The key type is correctly recognized to be a tuple and so.getNumMRUseSecondaryKey() is 1, but when I get to JobControlCompiler, mro.getUseSecondaryKey() is false. Then, when it chooses the comparator in selectComparator(), hasOrderBy, which is (mro.isGlobalSort() || mro.isLimitAfterSort() || mro.usingTypedComparator()), is false. So I get into the second switch statement and I get these comparators:

case DataType.TUPLE:
    job.setSortComparatorClass(PigTupleWritableComparator.class);
    job.setGroupingComparatorClass(PigGroupingTupleWritableComparator.class);

which, to me, look like they already compare tuples in raw format (they use WritableComparator.compareBytes). Is this because the query is one in which order semantics do not matter, so it is already optimized? Should it change if I add an ORDER BY somewhere before the LIMIT? Is this the relevant case we want to optimize?

Hi Gianmarco, when you do an explain, you will see "Secondary sort: true" if the MR plan will use secondary sort. I am not yet sure why your script is not using secondary sort; I will check it tomorrow.
However, the following script will use secondary sort:

A = LOAD '1.txt' AS (a0, a1, a2);
B = group A by $0 parallel 2;
C = foreach B { D = limit A 10; E = order D by $1; generate group, E;};
explain C;

When secondary sort is used, Pig will use PigSecondaryKeyComparator, which is not actually raw. We need to replace it with your raw comparator.

Thanks for the suggestion, Daniel. My idea for integrating the RawTupleComparator is to modify PigSecondaryKeyComparator to delegate the comparison to it. I cannot mimic the current behavior of making two calls (one for the main key and one for the secondary key) because I do not know the boundaries between the keys, so I need to make a single call to compare the compound key. There are some issues, though.

1) There are some inconsistencies between the behavior of PigTupleRawComparator (the original one) and PigSecondaryKeyComparator. Specifically, when tuples are null the first one returns 0 while the second one compares the indexes. Furthermore, the indexes are also compared when one of the fields in the tuples is null, in order not to join them (if I understood PIG-927 correctly). This is done in PigSecondaryKeyComparator but not in PigTupleRawComparator. Is it designed to be like this, or is it a bug? I suppose the behaviors of the two comparators should be more or less the same.

2) In PigSecondaryKeyComparator the key is assumed to be a 2-field tuple where the 0th field is the main key and the 1st field is the secondary key. We can directly feed the binary representation of this tuple into our new raw comparator, but we need to consolidate the sort orders. Right now there are two different and independent sort orders serialized in the jobConf (pig.sortOrder and pig.secondarySortOrder). In the simple case, when sort orders for all the columns are specified, we can just concatenate them together (sort of). There are some problems when we have whole-tuple sort orders, as they might differ.
I would like to keep all of this out of the tuple comparator and define some clean interface to pass the sort orders. One problem I see is that I probably need the tuple sizes (recursively) to do this, and they are not known at configuration time. I also need to fix this in the current comparator, in order to take into account the recursion inside nested tuples.

3) Should I keep all the mIndex/mNull handling outside the RawTupleComparator and write wrappers that deal with them? That is, should the RawTupleComparator know how to deal with a NullableTuple, or should it just know its kind of Tuple (BinInterSedes or Default)?

Hi Gianmarco,
1. Notice that currently PigTupleRawComparator is only used in "order-by" jobs and PigSecondaryKeyComparator only in "non-order-by" jobs; there is a semantic difference in null handling. In order-by, different null keys are treated as equal; in group-by, however, nulls from different relations cannot merge (this is to conform with the SQL standard). In your case, the RawTupleComparator can be used both in the "order-by tuple" case and in the "non-order-by with secondary key" case, so there will be a semantic gap between the two. We need to deal with them separately. For simplicity, we can focus on the secondary key case first (which means different null keys from different relations are not equal).
2. We need to deal with the sort order. When we use PigTupleRawComparator for secondary sort, it's quite clear the structure will be (main_key, secondary_key, value). I think we can limit PigTupleRawComparator to this structure for now (don't think about the order-by tuple case), so you can pass the sort order and deal with it in PigTupleRawComparator.
3. Yes, I think it might be better to keep the mIndex/mNull handling outside the RawTupleComparator.

Ok, first working integration. Modified PigTupleRawComparatorNew to use the raw comparators via TupleFactory. Created a new class, PigSecondaryKeyComparatorNew, that should substitute the old one. This one uses the raw comparators.
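The compound-key structure Daniel describes — a (main_key, secondary_key) pair with independent sort orders — can be sketched in Python (a simplified illustration; the boolean sort-order convention here is hypothetical, while Pig serializes the real orders in the jobConf as pig.sortOrder and pig.secondarySortOrder):

```python
def compare_compound_key(key1, key2, ascending):
    """Compare two compound keys field by field, honouring per-field sort order."""
    for a, b, asc in zip(key1, key2, ascending):
        rc = (a > b) - (a < b)
        if rc != 0:
            return rc if asc else -rc  # flip the result for descending fields
    return 0
```

A single call compares the whole compound key, which is exactly why the sort orders of the main and secondary keys have to be consolidated before the comparison.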
Modified JobControlCompiler to use the new comparators. Moved the null/index semantics outside the raw comparators and inside the wrappers. Modified BinSedesTupleComparator to correctly handle sort order. The sort order is applied to the first call to compare tuples. In case we are doing a secondary sort, the sort orders are propagated one level further (because we have a nested tuple with the keys, and we need to apply the sort orders to the content of the outermost tuple). The code is not the cleanest possible, but TestPigTupleRawComparator and TestSecondarySort pass.

TODO:
- Implement the logic for PIG-927. I plan to create a new interface (TupleRawComparator) and add a method to check whether a field of type NULL was encountered during the comparison. This interface will be used instead of the plain RawComparator to hold the reference to our raw comparators.
- Write a speed test. Is there something already available that can be used to test the speed improvement? The inputs for the unit test are of course too small.

I performed some speed tests using PigMix2. I used query L16 and generated two datasets: one with 1M rows (1.6 GiB) and the other with 10M rows (16 GiB). I took the times end to end using the "time" utility. I do not have a cluster, so I ran Pig locally. Here are the results:

Trunk 1M: 0m53.469s
Patched 1M: 0m39.076s
Trunk 10M: 9m49.507s
Patched 10M: 8m0.048s

We have a 20-30% improvement end-to-end for this query. I think this is consistent with the expectations.

That is terrific! I will review your patch shortly.

I reviewed and regenerated the patch. A couple of notes:
1. All unit tests and end-to-end tests pass; the Hudson warnings are addressed.
2. I see a consistent performance improvement (around 20%) in PigMix query L16 (using 10 reducers, on a cluster of 10 nodes).
3. Did some refactoring: changed some class names, moved some code around, and moved getRawComparatorClass to Tuple instead of TupleFactory.

Gianmarco, can you take a look and see if my changes are good?
PIG-1295_0.14.patch:
- The comparison logic for BinInterSedes relies on the serialization format, so I think it's better to have it closer to where the serialization format is implemented, i.e. add a function to the InterSedes interface (getComparator()?) and move the implementation logic to the BinInterSedes class.
- I think TupleFactory is a better place for getRawComparatorClass() for the following reasons:
  - TupleFactory is a singleton class, Tuple is not. Having it in Tuple implies that different instances could return different values.
  - Adding it to the Tuple interface breaks backward compatibility: all Tuple implementations would need to add this function. Also, it does not make sense for load functions that return a custom tuple to implement this method, because it is not related to that tuple implementation.

Response to Thejas:
1. Yes, you are right. I will put another layer of abstraction for InterSedes.getRawComparatorClass.
2. Conceptually the comparator belongs to the logic of Tuple. Ideally it should be a static method of Tuple, but the Tuple interface does not allow me to do that. Even so, I still feel it's better to put it in Tuple. For backward compatibility: first, we will break either Tuple or TupleFactory, and the impact is equivalent; second, in both PigSecondaryKeyComparator and PigTupleSortComparator we will check whether the Tuple implements the new method, and if it does not, we fall back to the default deserialized version. Thoughts?

"Conceptually the comparator belongs to the logic of Tuple." This comparator is part of only the default tuple implementation used internally within Pig. So the class that is the source of truth for the default internal tuple implementation seems a good place for this function. A tuple returned by a load function has nothing to do with the comparator logic.

"Ideally it should be a static method of Tuple, but the Tuple interface does not allow me to do that." Yes, a static method can't be overridden.
Since this is supposed to return only one value per Pig query, the singleton TupleFactory is a better place.

"For backward compatibility: first, we will break either Tuple or TupleFactory, and the impact is equivalent." No. TupleFactory is an abstract class, while Tuple is an interface. Users will not be forced to change their implementation if we add a function to TupleFactory. Also, users are more likely to have a custom Tuple than a custom TupleFactory, because they might implement different tuples as part of their load function implementation, and they are unlikely to change the default Tuple implementation used internally in Pig.

"second, in both PigSecondaryKeyComparator and PigTupleSortComparator we will check whether the Tuple implements the new method, and if it does not, we fall back to the default deserialized version." If the Tuple interface is going to have this function, I think we should add to the javadoc that it makes sense to implement it only if the tuple is going to be used as the default internal tuple implementation, and that null can be returned if the user chooses not to implement it.

Attached another patch to address Thejas's first point.

My 2 cents on the issue:
1) I agree with Thejas that the comparator logic is strictly tied to serialization, so they should be as close as possible.
2) I agree that the comparator is something related to Tuple, but Tuple is an interface and this complicates things. Putting the method to access the comparator in TupleFactory seems more natural to me, as the factory and the Tuple implementation are strongly tied anyway. I don't like having to create a (useless) Tuple just to get to the comparator class:

Class<? extends TupleRawComparator> mComparatorClass = TupleFactory.getInstance().newTuple().getRawComparatorClass();

Other minor things: in PigSecondaryKeyComparator@54:

if (mComparator==null) {
    try {
        mComparator = PigTupleDefaultRawComparator.class.newInstance();
    } catch (InstantiationException e) {
        throw new RuntimeException(e);
    } catch (IllegalAccessException e) {
        throw new RuntimeException(e);
    }
}
((Configurable)mComparator).setConf(jconf);

We can directly instantiate the class instead of using reflection here. Furthermore, there is no need to cast mComparator to Configurable:

if (mComparator==null)
    mComparator = new PigTupleDefaultRawComparator();
mComparator.setConf(jconf);

All unit tests pass. test-patch:
-1 javac. The applied patch generated 156 javac compiler warnings (more than the trunk's current 145 warnings).
+1 findbugs. The patch does not introduce any new Findbugs warnings.
-1 release audit. The applied patch generated 414 release audit warnings (more than the trunk's current 410 warnings).

The javac warnings are all about deprecation. For the release audit warnings, I checked that every new file has a license header.

Patch committed. Thanks Gianmarco!

Thanks for your support, Daniel. It was a great experience.

Avro provides efficient binary comparison for tuples with generic schemas. A simple way to implement this would be to adapt Pig to use Avro for intermediate files. This would allow using generic schemas for keys and would solve the general problem in an efficient and elegant way. I would be glad to give it a try.
https://issues.apache.org/jira/browse/PIG-1295?focusedCommentId=12896339&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Strategy Library: The Momentum Strategy Based on the Low Frequency Component of Forex Market

Abstract

Trend estimation is a family of methods to detect and predict tendencies and trends in price series using only historical information. The moving average is a commonly used trend-following trading tool, and many momentum trading strategies in the Forex market are based on the moving average rule, in which signals are triggered when the close crosses above or below the moving average. But a moving average lags the price, and therefore cannot be used to predict the turning points of market price changes. In this tutorial, I develop a trend-following strategy proposed in the paper by Harris and Yilmaz (2009). The strategy exploits short-term momentum in the non-linear trend component of the exchange rate, generated by the Hodrick-Prescott filter (rather than the exchange rate itself), and uses the MA(1, 2) rule to measure this momentum. The strategy was tested on seven exchange rates; the results show limited robustness, and the performance is sensitive to changes in the model parameters.

Introduction

The Hodrick-Prescott filter decomposes a time series \(y_t\) into two components: the cyclical part (short-term) and the trend part (long-term):\[y_t=x_t+c_t\] The filter is the solution to the following optimization problem for \(x_t\):\[\min_{x_t}\left[\sum_{t=1}^n(y_t-x_t)^2+\lambda\sum_{t=2}^{n-1}\left[(x_{t+1}-x_t)-(x_{t}-x_{t-1})\right]^2\right]\] The second term penalizes the discrete second derivative of the trend \(x_t\), which characterizes the smoothness of the curve. We can rewrite the above formula in vector form:\[\min_{\bf x}{\parallel {\bf{y}}-{\bf{x}}\parallel}_2^2+\lambda {\parallel D\bf x\parallel}_2^2\] where \({\bf y}=(y_1,y_2,...,y_n),{\bf x}=(x_1,x_2,...,x_n)\in {\rm I\!R}^n\) and \(\parallel\cdot\parallel_2\) is the Euclidean norm.
D is the (n-2)×n second-difference matrix:\[ \left[ \begin{matrix} 1 & -2 & 1 & & & \\ & 1 & -2 & 1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & -2 & 1 \\ \end{matrix} \right] \] The solution of this optimization problem is obtained by solving the following linear system:\[{\bf x}=(I+\lambda D^TD)^{-1}{\bf y}\]

def hpfilter(self, X, lamb=1600):
    X = np.asarray(X, float)
    if X.ndim > 1:
        X = X.squeeze()
    nobs = len(X)
    I = sparse.eye(nobs, nobs)
    offsets = np.array([0, 1, 2])
    data = np.repeat([[1.], [-2.], [1.]], nobs, axis=1)
    K = sparse.dia_matrix((data, offsets), shape=(nobs - 2, nobs))
    use_umfpack = True
    self.trend = spsolve(I + lamb * K.T.dot(K), X, use_umfpack=use_umfpack)
    cycle = X - self.trend

Method

This low-frequency momentum trading strategy is applied to daily data on seven exchange rates. We use five years of history before January 2011 for the initial estimation of the trend model. Daily exchange rates for the period January 2011 to May 2017 are used for out-of-sample trading.

def Initialize(self):
    self.SetStartDate(2011, 1, 1)
    self.SetEndDate(2017, 5, 30)
    self.SetCash(100000)
    self.numdays = 360 * 5  # set the length of the training period
    self.syl = self.AddSecurity(SecurityType.Forex, "EURUSD", Resolution.Daily).Symbol
    self.n, self.m = 2, 1
    self.trend = None
    self.SetBenchmark(self.syl)
    self.MA_rules = None
    history = self.History(self.numdays, Resolution.Daily)
    self.close = [slice[self.syl].Close for slice in history]

Step 1: Calibrating the Filter Smoothing Parameter λ

The Ravn-Uhlig rule is commonly used to set the smoothing parameter λ of the HP filter; λ must be greater than 0 and is adjusted to the frequency of the observations. Hodrick and Prescott (1997) recommended setting λ to 1,600 for quarterly data. The Ravn-Uhlig rule sets \(\lambda = 1600p^4\), where p is the number of periods per quarter. For our daily exchange rate data, this would mean setting λ to \(1600\times (30 \times 4)^4\).
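The Ravn-Uhlig adjustment described above is a one-liner; a quick sketch, with the standard textbook period counts:

```python
def ravn_uhlig_lambda(periods_per_quarter):
    """Lambda = 1600 * p^4, where p is the number of observations per quarter."""
    return 1600.0 * periods_per_quarter ** 4

quarterly = ravn_uhlig_lambda(1)    # 1600, Hodrick and Prescott's original value
monthly = ravn_uhlig_lambda(3)      # 129600
annual = ravn_uhlig_lambda(0.25)    # 6.25
```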
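The hpfilter method shown earlier can also be exercised standalone; here is a minimal sketch on a synthetic series (the function is extracted from the algorithm class, and the λ value is illustrative):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def hpfilter(X, lamb=1600.0):
    """Return (trend, cycle) of the series X via the HP filter."""
    X = np.asarray(X, float)
    n = len(X)
    I = sparse.eye(n, n)
    data = np.repeat([[1.0], [-2.0], [1.0]], n, axis=1)
    D = sparse.dia_matrix((data, [0, 1, 2]), shape=(n - 2, n))
    trend = spsolve((I + lamb * D.T.dot(D)).tocsc(), X)
    return trend, X - trend

# A linear trend plus a small oscillation: the filter should return a smooth
# trend, and the cycle sums to zero by the first-order conditions (D·1 = 0).
t = np.arange(200)
series = 0.01 * t + 0.05 * np.sin(t / 3.0)
trend, cycle = hpfilter(series, lamb=100.0)
```

By construction the trend and cycle add back to the original series, which is a convenient sanity check when experimenting with different λ values.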
But when we use this value for λ, the curve is almost a straight line, since the trend becomes smoother as λ → ∞. To avoid excessive smoothing, we gradually decrease λ and plot the smoothed curve. Below is the chart of the EURUSD daily price from 2010 to 2011; t100 denotes the trend component after filtering with λ=100. If we plot the curve for just the first 100 days, we find that the smaller the λ, the more apparent the trend. The curve does not change much for λ below 100, so we choose λ=100 to extract the trend of the daily price data. This trend is our low-frequency component.

Out-of-sample EUR/USD Trend Estimation

Step 2: Setting up the Moving Average Rule

Moving average (MA) rules are very commonly used to generate buy and sell signals from data on the spot exchange rate. The MA rule compares a short-run moving average of the current and lagged exchange rate with a long-run moving average:\[MA(m,n)=\frac{1}{m}\sum_{i=0}^{m-1}S_{t-i}-\frac{1}{n}\sum_{i=0}^{n-1}S_{t-i}\] For the HP filter, the non-linear trend is estimated recursively, as in the paper. The initial estimation was undertaken using the five years of history before 2011. The estimation period is then rolled forward each day through the trading period from January 2011 to May 2017.

Step 3: Generating the Trading Signals

We generate buy and sell signals by applying an MA(1, 2) rule to the estimated low-frequency component. For MA(m, n), m must be 1, denoting the current value of the low-frequency component; n should be small, since a large n would introduce a large time lag and make the detection of turning points inaccurate. A buy signal is generated when the current day's low-frequency trend is higher than the previous day's, and a sell signal when it is lower.
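The MA(1, 2) crossover rule described in Steps 2 and 3 can be sketched as a small standalone function (a simplified illustration of the rule, not the algorithm class itself):

```python
def ma_rule(series, m, n):
    """MA(m, n): mean of the last m values minus mean of the last n values."""
    return sum(series[-m:]) / m - sum(series[-n:]) / n

def signal(trend):
    """+1 buy, -1 sell, 0 hold, from the sign change of MA(1, 2) on the trend."""
    today = ma_rule(trend, 1, 2)
    yesterday = ma_rule(trend[:-1], 1, 2)
    if today > 0 and yesterday < 0:
        return 1   # trend just turned up
    if today < 0 and yesterday > 0:
        return -1  # trend just turned down
    return 0
```

For example, on the trend [1.0, 0.9, 0.8, 0.9] the last point flips the MA(1, 2) difference from negative to positive, producing a buy signal.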
    def OnData(self, data):
        self.close.append(self.Portfolio[self.syl].Price)
        self.hpfilter(self.close[-self.numdays:], 100)
        self.MA_rules_today = (np.mean(self.trend[-self.m:])
                               - np.mean(self.trend[-self.n:]))
        self.MA_rules_yesterday = (np.mean(self.trend[-self.m - 1:-1])
                                   - np.mean(self.trend[-self.n - 1:-1]))
        holdings = self.Portfolio[self.syl].Quantity
        if self.MA_rules_today > 0 and self.MA_rules_yesterday < 0:
            self.SetHoldings(self.syl, 1)
        elif self.MA_rules_today < 0 and self.MA_rules_yesterday > 0:
            self.SetHoldings(self.syl, -1)

Trading Signals when λ = 1600
Trading Signals when λ = 100

The above charts show the in-sample trading signals after applying the MA rules to the low-frequency component. The trend curve is smoother with a larger λ; thus, when we apply the MA rules, a less smooth trend triggers more trading opportunities.

Summary

The table reports the strategy's performance statistics over the six-and-a-half-year backtesting period. From the table we can see that most of the currency pairs show a relatively high maximum drawdown. The number of total trades is small because we applied the MA rules to the smoothed trend component. As the author indicated in the paper, we still find that the performance of this strategy is very sensitive to the choice of lag parameters in the MA rules, and in a non-monotonic way. The strategy does not, in general, generate stable profits in the Forex market. That might be because the HP filter technique was designed to fit a trend curve through the entire data set: when we apply it in a trading strategy, the entry of new data into the filter can change the estimated trend through past data, which makes it harder to identify the trend accurately.

References

- Harris R D F, Yilmaz F. A momentum trading strategy based on the low-frequency component of the exchange rate[J]. Journal of Banking & Finance, 2009, 33(9): 1575-1585. online copy
- Dao T L.
Momentum Strategies with L1 Filter[J]. 2014. online copy
It was reported by a field customer that the global spin lock ptcg_lock is causing a lot of grief for munmap performance on a large NUMA machine. The problem appears to come from flush_tlb_range(), which currently unconditionally calls platform_global_tlb_purge(). For some of the NUMA machines in existence today, this function is mapped to ia64_global_tlb_purge(), which holds the ptcg_lock spin lock while executing the ptc.ga instruction.

Here is a patch that attempts to avoid the global TLB purge whenever possible. It will use the local TLB purge as much as possible, though the conditions for using the local TLB purge are pretty restrictive. One side effect of having a flush-tlb-range instruction on ia64 is that the kernel doesn't get a chance to clear out cpu_vm_mask. On ia64, this mask is sticky and will accumulate if a process bounces around, thus diminishing the possible use of ptc.l.

Thoughts?

Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>

--- ./arch/ia64/mm/tlb.c.orig	2006-03-05 12:21:27.400110815 -0800
+++ ./arch/ia64/mm/tlb.c	2006-03-05 12:23:04.725304935 -0800
@@ -156,17 +156,19 @@ flush_tlb_range (struct vm_area_struct *
 	nbits = purge.max_bits;
 	start &= ~((1UL << nbits) - 1);
 
-# ifdef CONFIG_SMP
-	platform_global_tlb_purge(mm, start, end, nbits);
-# else
 	preempt_disable();
+#ifdef CONFIG_SMP
+	if (mm != current->active_mm || cpus_weight(mm->cpu_vm_mask) != 1) {
+		platform_global_tlb_purge(mm, start, end, nbits);
+		preempt_enable();
+		return;
+	}
+#endif
 	do {
 		ia64_ptcl(start, (nbits<<2));
 		start += (1UL << nbits);
 	} while (start < end);
 	preempt_enable();
-# endif
-
 	ia64_srlz_i();	/* srlz.i implies srlz.d */
 }
 EXPORT_SYMBOL(flush_tlb_range);

Received on Tue Mar 07 09:13:37 2006
This archive was generated by hypermail 2.1.8 : 2006-03-07 09:13:48 EST
Determine the PID of Tomcat (I'll call it TOMCAT_PID).

In short, "java.net.SocketException: Too many open files" can be seen in any Java server application, e.g. when streams are not closed once you are done with them, or under increased connection volume. The consensus was that it had to do with the number of open files.

In order to fix java.io.IOException: Too many open files, you must remember to close any stream you open. If your Java program (remember that Tomcat, WebLogic or any other application server is a Java program running on the JVM) exceeds this limit, it will throw java.net.SocketException: Too many open files.

Zdenek Skodik: Running into this on installation can screw up the instance (there won't be corrupted nodes in the repository, but there can be nodes missing completely).

The java.net.SocketException: Too many open files issue is also common among FIX engines, where clients use the TCP/IP protocol to connect to brokers' FIX servers. But I am curious to know how a mismatch in these two parameters would lead to the error I stated above.

On JBoss 4.2.x I resolved this problem too. I just compiled a new jbossweb.jar from svn (2.0.x).

First I thought I had a leak somewhere, which prevented files and sockets from getting closed properly.
First you edit /etc/security/limits.conf and add your new limit for the user running Tomcat. If you use something else, type bash to change the shell for the ulimit command.

Re: java.net.SocketException: Too many open files on Red Hat - Shane Weaver, Sep 10, 2010 (in response to Leonid Batizhevsky): We're having the same problem on JBoss 4.2.1 GA.

Satish Kumar Gunuputi: We were using the jboss webservices module, and while load testing it was throwing the same exception for WSDL: java.net.SocketException: Too many open files. I believe there is a patch available for this.

I'm going to leave the code running overnight and see if your suggested changes make any difference. - Raunak, Apr 13 '11

Could you give me more context around this exception? Also, my Liferay hangs with the above problem almost once every week. Lisa, thanks for your help on this.

I am confused. Thank you again for answering.
But finally all the transactions got broken and the server hung up:

    04:55:34,751 ERROR [org.jboss.naming.Naming] Naming accept handler stopping
    java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
    java.net.SocketException: Broken pipe
        at java.net.SocketOutputStream.socketWrite0(Native Method)

Or even:

    java.io.FileNotFoundException: /usr/local/tomcat/webapps/myApp/repositories/magnolia/workspaces/config/index/redo.log (Too many open files)

Also, I would double-check my web apps to see if we are not handling DB connections properly. Thanks and Regards, Satish. This error is the only one that is logged in this whole 350 MB server.log.

Bottom line: to fix java.net.SocketException: Too many open files, either increase the number of open file handles or reduce the TCP TIME_WAIT timeout.
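As a sketch of the first option on Linux (the user name, limit values, and TOMCAT_PID below are placeholders for illustration, not values taken from any of the posts above):

```shell
# 1. Check the current open-file limit for the shell/user running the server:
ulimit -n

# 2. Count the descriptors currently held by the server process
#    (TOMCAT_PID is a placeholder for the real PID, e.g. from `pgrep -f tomcat`):
#      ls /proc/TOMCAT_PID/fd | wc -l

# 3. Raise the limit persistently by adding lines like these (example values)
#    to /etc/security/limits.conf for the user that runs the server:
#      tomcat  soft  nofile  8192
#      tomcat  hard  nofile  16384
```

After editing limits.conf, the new limit only takes effect for fresh login sessions, so the server process has to be restarted from a new session.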
SYS INFO: Probe Version: 2.1.2; Server version: Apache Tomcat/6.0.20; JVM: java version "1.6.0_24"; OS: CentOS release 5.4 (Final); OS Version: 2.6.18-164.el5PAE; Architecture: i386

2011-3-16 14:17:30 org.apache.tomcat.util.net.JIoEndpoint$Acceptor run

Re: java.net.SocketException: Too many open files on Red Hat - Leonid Batizhevsky, Sep 15, 2010 (in response to Shane Weaver): No, I do not have any problems like this.

If that seems unusual for your application, you can find the culprit client and prohibit it from making a connection, but if it is something your application may expect...

In short, this error is coming because clients are connecting and disconnecting frequently. If you want to handle it on your side, you have two options: 1) increase the number of open file handles, or 2) reduce the TCP TIME_WAIT timeout.

I have some applications very similar to this one on other servers; the difference is that they are a stand-alone version and this is a multitenant architecture.
The application that is running on the JBoss server makes HTTP connections to other systems using Apache HttpClient, and the code is releasing the connections using HttpClient. Can somebody please help?

Also, HttpEntity.consumeContent() is deprecated for related reasons.

It's a Spring multitenant application that hosts webpages for about 30 clients. The application starts fine, then after a while I'm getting this: ...at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:216) at... I'm not sure how to do that.

Re: java.net.SocketException: Too many open files on Red Hat - vony jon, Jul 1, 2009 (in response to Kiran Krishnamurthy): Do you really need maxThreads="1000" in your Tomcat connector? You should always remember to close streams in a finally block.

The combined result was that I was already scratching the 1000 mark for all Tomcats after rebooting.

That's weird. This can happen on server systems running many applications that use the file system extensively.
A Django form widget implementing intl-tel-input.

Project description

A Django form widget for international telephone numbers based on the jQuery plugin intl-tel-input. This is a new package, so it doesn't implement all the features of intl-tel-input. However, it is well tested, and has been stable in production.

Installation

Install from PyPI.

    pip install django-intl-tel-input

Add intl_tel_input to your INSTALLED_APPS, so Django can find the init script.

    ...
    INSTALLED_APPS += ('intl_tel_input',)
    ...

Usage

Simply add IntlTelInputWidget to your form field.

    from intl_tel_input.widgets import IntlTelInputWidget

    class MyForm(forms.ModelForm):
        class Meta:
            model = MyModel
            fields = ['foo', 'bar']
            widgets = {
                'bar': IntlTelInputWidget()
            }
        ...

With a standard form:

    class MyForm(forms.Form):
        tel_number = forms.CharField(widget=IntlTelInputWidget())
        ...

Form media

Include {{ form.media.css }} in the <head> of your template. This will ensure all styles are parsed before the widget is displayed.

If you have included jQuery at the end of your document, then don't forget to update the template where this widget appears with a {{ form.media.js }}. Put it in a block that allows it to come after jQuery.

If you're using crispy-forms, the static content will be inserted automatically beside the input. To prevent this, be sure to set include_media = False on your form helper.

    class MyForm(forms.Form):
        ...
        def __init__(self, *args, **kwargs):
            self.helper = FormHelper()
            self.helper.include_media = False
            ...

If you need to load all JS in the head, you can make the init.js script wait for the document to be ready with the following snippet.

    jQuery(document).ready( {{ form.media.js }} );

All this assumes your form context variable is called form.

Options

The widget can be invoked with keyword arguments which translate to the options available in intl-tel-input.

- allow_dropdown - Shows the country dropdown. Default: True
- default_code - Country code selected by default.
Overridden when using auto_geo_ip. Default: 'us'
- auto_geo_ip - When True, freegeoip will be used to autodetect the user's country via Ajax. There is a limit of 15,000 queries per hour, so it should not be used on high-traffic sites. Alternatively, use pygeoip, detect the country server-side, then set default_code. Default: False
The QStringMatcher class holds a sequence of characters that can be quickly matched in a Unicode string. More...

    #include <QStringMatcher>

The QStringMatcher class holds a sequence of characters that can be quickly matched in a Unicode string.

This class is useful when you have a sequence of QChars that you want to repeatedly match against some strings (perhaps in a loop), or when you want to search for the same sequence of characters multiple times in the same string. Using a matcher object and indexIn() is faster than matching a plain QString with QString::indexOf() if repeated matching takes place. This class offers no benefit if you are doing one-off string matches.

Create the QStringMatcher with the QString you want to search for. Then call indexIn() on the QString that you want to search.

See also QString, QByteArrayMatcher, and QRegExp.

QStringMatcher()
Constructs an empty string matcher that won't match anything. Call setPattern() to give it a pattern to match.

QStringMatcher(pattern, cs)
Constructs a string matcher that will search for pattern, with case sensitivity cs. Call indexIn() to perform a search.

QStringMatcher(other)
Copies the other string matcher to this string matcher.

~QStringMatcher()
Destroys the string matcher.

caseSensitivity()
Returns the case sensitivity setting for this string matcher. See also setCaseSensitivity().

indexIn(str, from = 0)
Searches the string str from character position from (default 0, i.e. from the first character), for the string pattern() that was set in the constructor or in the most recent call to setPattern(). Returns the position where the pattern() matched in str, or -1 if no match was found. See also setPattern() and setCaseSensitivity().

pattern()
Returns the string pattern that this string matcher will search for. See also setPattern().

setCaseSensitivity(cs)
Sets the case sensitivity setting of this string matcher to cs. See also caseSensitivity(), setPattern(), and indexIn().

setPattern(pattern)
Sets the string that this string matcher will search for to pattern. See also pattern(), setCaseSensitivity(), and indexIn().

operator=(other)
Assigns the other string matcher to this string matcher.
The weak forms ((1) and (3)) of the functions are allowed to fail spuriously, that is, act as if *obj != *expected even if they are equal. When a compare-and-exchange is in a loop, the weak version will yield better performance on some platforms. When a weak compare-and-exchange would require a loop and a strong one would not, the strong one is preferable.

These functions are defined in terms of member functions of std::atomic:

Parameters

Return value

The result of the comparison: true if *obj was equal to *expected, false otherwise.

Exceptions

Example

Compare-and-exchange operations are often used as basic building blocks of lock-free data structures.

    #include <atomic>

    template<class T>
    struct node
    {
        T data;
        node* next;
        node(const T& data) : data(data), next(nullptr) {}
    };

    template<class T>
    class stack
    {
        std::atomic<node<T>*> head;
     public:
        void push(const T& data)
        {
            // create the new node
            node<T>* new_node = new node<T>(data);

            // put the current value of head into new_node->next
            new_node->next = head.load(std::memory_order_relaxed);

            // now make new_node the new head, but if the head is no longer what
            // is stored in new_node->next (some other thread must have inserted
            // a node just now), then put that new head into new_node->next and
            // try again
            while (!std::atomic_compare_exchange_weak_explicit(
                       &head, &new_node->next, new_node,
                       std::memory_order_release,
                       std::memory_order_relaxed))
                ; // the body of the loop is empty
        }
    };

    int main()
    {
        stack<int> s;
        s.push(1);
        s.push(2);
        s.push(3);
    }
Adding Search to AppFuse with Compass

Over 5 years ago, I recognized that AppFuse needed a search feature and entered an issue in JIRA. Almost 4 years later, a Compass Tutorial was created, and shortly after, Shay Banon (Compass founder) sent in a patch. From the message he sent me:

A quick breakdown of enabling search:

- Added Searchable annotations to the User and Address.
- Defined a Compass bean, automatically scanning the model package for mapped searchable classes. It also automatically integrates with Spring's transaction manager, and stores the index on the file system ([work dir]/target/test-index).
- Defined a CompassTemplate (similar in concept to HibernateTemplate).
- Defined a CompassSearchHelper. It really helps to perform search, since it does pagination and so on.
- Defined CompassGps, which basically allows for the index operation, letting you completely reindex the data from the database. JPA and Hibernate also automatically mirror changes done through their API to the index. iBatis uses AOP.

Fast forward 2 years, and I finally found the time/desire to put a UI on the backend Compass implementation that Shay provided. Yes, I realize that Compass is being replaced by ElasticSearch. I may change to use ElasticSearch in the future; now that the search feature exists, I hope to see it evolve and improve.

Since Shay's patch integrated the necessary Spring beans for indexing and searching, the only thing I had to do was implement the UI. Rather than having an "all objects" results page, I elected to implement it so you could search on an entity's list screen.
I started with Spring MVC and added a search() method to the UserController:

    @RequestMapping(method = RequestMethod.GET)
    public ModelAndView handleRequest(@RequestParam(required = false, value = "q") String query) throws Exception {
        if (query != null && !"".equals(query.trim())) {
            return new ModelAndView("admin/userList", Constants.USER_LIST, search(query));
        } else {
            return new ModelAndView("admin/userList", Constants.USER_LIST, mgr.getUsers());
        }
    }

    public List<User> search(String query) {
        List<User> results = new ArrayList<User>();
        CompassDetachedHits hits = compassTemplate.findWithDetach(query);
        log.debug("No. of results for '" + query + "': " + hits.length());
        for (int i = 0; i < hits.length(); i++) {
            results.add((User) hits.data(i));
        }
        return results;
    }

At first, I used compassTemplate.find(), but got an error because I wasn't using an OpenSessionInViewFilter. I decided to go with findWithDetach() and added the following search form to the top of the userList.jsp page:

    <div id="search">
        <form method="get" action="${ctx}/admin/users" id="searchForm">
            <input type="text" size="20" name="q" id="query" value="${param.q}" placeholder="Enter search terms"/>
            <input type="submit" value="<fmt:message"/>
        </form>
    </div>

NOTE: I tried using HTML5's <input type="search">, but found Canoo WebTest doesn't support it.

Next, I wrote a unit test to verify everything worked as expected. I found I had to call compassGps.index() as part of my test to make sure my index was created and up to date.
    public class UserControllerTest extends BaseControllerTestCase {
        @Autowired
        private CompassGps compassGps;
        @Autowired
        private UserController controller;

        public void testSearch() throws Exception {
            compassGps.index();
            ModelAndView mav = controller.handleRequest("admin");
            Map m = mav.getModel();
            List results = (List) m.get(Constants.USER_LIST);
            assertNotNull(results);
            assertTrue(results.size() >= 1);
            assertEquals("admin/userList", mav.getViewName());
        }
    }

After getting this working, I started integrating similar code into AppFuse's other web framework modules (Struts, JSF and Tapestry). When I was finished, they all looked pretty similar from a UI perspective.

Struts:

    <div id="search">
        <form method="get" action="${ctx}/admin/users" id="searchForm">
            <input type="text" size="20" name="q" id="query" value="${param.q}" placeholder="Enter search terms..."/>
            <input type="submit" value="<fmt:message"/>
        </form>
    </div>

JSF:

    <div id="search">
        <h:form
            <h:inputText
            <h:commandButton
        </h:form>
    </div>

Tapestry:

    <div id="search">
        <t:form
            <t:textfield
            <input t:
        </t:form>
    </div>

One frustrating thing I found was that Tapestry doesn't support method="get", and AFAICT, neither does JSF 2. With JSF, I had to make my UserList bean session-scoped or the query parameter would be null when it listed the results. Tapestry took me the longest to implement, mainly because I had issues figuring out how its easy-to-understand-once-you-know onSubmit() handlers worked and whether I had the proper @Property and @Persist annotations on my "q" property. This tutorial was the greatest help for me. Of course, now that it's all finished, the code looks pretty intuitive.

Feeling proud of myself for getting this working, I started integrating this feature into AppFuse's code generation and found I had to add quite a bit of code to the generated list pages/controllers. So I went on a bike ride...
While riding, I thought of a much better solution and added the following search method to AppFuse's GenericManagerImpl.java. In the code I added to pages/controllers previously, I'd already refactored to use CompassSearchHelper, and I continued to do so in the service layer implementation.

    @Autowired
    private CompassSearchHelper compass;

    public List<T> search(String q, Class clazz) {
        if (q == null || "".equals(q.trim())) {
            return getAll();
        }
        List<T> results = new ArrayList<T>();
        CompassSearchCommand command = new CompassSearchCommand(q);
        CompassSearchResults compassResults = compass.search(command);
        CompassHit[] hits = compassResults.getHits();
        if (log.isDebugEnabled() && clazz != null) {
            log.debug("Filtering by type: " + clazz.getName());
        }
        for (CompassHit hit : hits) {
            if (clazz != null) {
                if (hit.data().getClass().equals(clazz)) {
                    results.add((T) hit.data());
                }
            } else {
                results.add((T) hit.data());
            }
        }
        if (log.isDebugEnabled()) {
            log.debug("Number of results for '" + q + "': " + results.size());
        }
        return results;
    }

This greatly simplified my page/controller logic because now all I had to do was call manager.search(query, User.class) instead of doing the Compass logic in the controller. Of course, it'd be great if I didn't have to pass in the Class to filter by object, but that's the nature of generics and type erasure. Other things I learned along the way:

- To index on startup, I added compassGps.index() to the StartupListener.
- In unit tests that leveraged transactions around methods, I had to call compassGps.index() before any transactions started.
- To scan multiple packages for searchable classes, I had to add a LocalCompassBeanPostProcessor.

But more than anything, I was reminded it always helps to take a bike ride when you don't like the design of your code. This feature and many more will be in AppFuse 2.1, which I hope to finish by the end of the month. In the meantime, please feel free to try out the latest snapshot.
One thing about Tapestry is that when you don't want to use parts of it, you can get them out of the way. In your situation, you want an HTML form, but don't really need the more advanced Tapestry mechanisms. In your template: And in your Java code:

I call this dropping down to servlet mode. A Tapestry Form component is built for the worst-case, most complex scenario: one where there are loops and conditionals inside the form, where all kinds of state needs to be encoded into the form (as the t:formdata hidden field, thus the restriction to POST, not GET), and where Tapestry needs to set up client-side and server-side validation.

Hope that helps! I'm trying to think of a suitable title to add this to the FAQ.

Posted by Howard Lewis Ship on March 16, 2011 at 02:51 PM MDT #
import data into software on the cloudss

Budget: $30-250 USD

40 freelancers are bidding on average $107 for this job.

Hi there. Can you please provide me more info as to what needs to be imported into the software stored in the cloud? I will be waiting for your response. Many thanks, Saad.

What software do you want data imported into, and from what source?

Could you give me more details, please? I am ready to give you sample work done before we start, and we can go from there. Thanks, good day.

Hi there, I'm very much interested in this project. I have good experience in data importing projects, so I hope I'm a good fit for this job. I'm ready to [login to view URL]. Ping me. Thanks, Emamul H.

Hey, I have 3 years of experience in many clouds, like Amazon and eBay. I want to know your job accurately so I can specify the budget. Let us discuss in chat.

Hello. I am currently a university student. I am interested in this job and wish you could provide me this opportunity to offer my assistance. Thanks.

Dear Sir, I am a serious and professional freelancer. I know your needs and will very carefully handle and load your important data into the software. Kindly give me this project. Thanks.
Senior Dev Lead @ Microsoft.

Piyush Shah, a dev on my team, developed a way to embed MasterPages in assemblies using VirtualPathProvider.

The VirtualPathProvider is one of the ASP.NET pieces I looked at when 2.0 was new. I suspected then that

"I suspected then that"...? What?

That was a trackback from K Scott Allen's blog. Here is the link for the complete sentence -

Thanks for sharing this. I was about to try a similar thing and you saved me a lot of trouble. Just as a tip for anyone implementing all this: you do not need to embed the .master.cs file. You can compile it normally, just remove the CodeFile/CodeBehind property from the master directive in the .master file (make sure the Inherits attribute stays there).

Interesting read... ran across this while developing for a recent POC.

Sharing Master Pages amongst Applications: I was working with VirtualPathProviders today for an upcoming talk at TechEd. VPPs are a technique whereby

Hi, I just tried your approach. I opened your project (EmbedMasterPage.zip). I opened a new web site project and added a reference to VirtualPathProvider.dll. Then I added the OnPreInit event to Default.aspx.cs. Then I added a new Global.asax and added the Application_Start event to it. But the new project is not compilable - "VirtualPathProvider.MasterPageVirtualPathProvider does not contain a definition for masterPageFileLocation". Hope you can help me... otherwise I don't know how to implement the MasterPage via a DLL.

Additional: Starting the app gives "Server Error in '/WebSite1' Application" - "Content controls have to be top-level controls in a content page or a nested master page that references a master page".

Can you make sure you do not have any content outside the <asp:content> tag?

I've tried setting this up, but the Assembly stream in ReadResource always returns null for me when I call this from the web application.
My guess is that I haven't set the three class constants in the VirtualPathProvider class correctly. Can you explain a bit more about what each of those constants represents and how to modify them for a project?

Hi Piyush, it was nice to meet you today! I was curious how you managed to distribute a master page in a DLL, and wouldn't you know it? You have a blog post describing exactly how to do it. Excellent! I wonder if something similar could be done for Web User Controls as well.

Yep, absolutely, the same concept can be used for UserControls as well:

1. Embed the UserControls as a resource.
2. Read the string as a MemoryStream from the assembly.
3. Use Page.ParseControl to add it to your page.

However, if this is something you are sharing amongst applications, I would create a server WebControl, as the above may not be that performant. With a server control you can GAC it, and have designer support, toolbox support, etc.

I couldn't get ParseControl to do the job (HttpParseException), but System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath did the trick! Oops, I mean LoadControl, not CreateInstanceFromVirtualPath. CreateInstanceFromVirtualPath doesn't deal with server-side controls in the user control markup that well.

Hi Piyush, thanks so much for the brilliant technique. I've extended the idea and am using an HttpModule to register the master page, and all works fine. However, I'm stuck when I create a deployment project for the webapp project that uses the dll. What should the properties look like? The project has the masterpage dll in the bin (Copy to Output). I get a runtime error that it cannot find the virtual directory for the master page. Any help or pointers will be greatly appreciated! Thanks!

After going through the MSDN documentation for the nth time, I finally saw the 'Note:' that this will not work for precompiled websites. But I also found a workaround here. Although I'm not too happy, since I have to muck with an internal method using reflection.
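To make the three constants easier to picture, here is a rough sketch of how the pieces might fit together. This is an illustrative reconstruction based only on the class and constant names discussed in this thread — it is not the actual code from EmbedMasterPage.zip:

```csharp
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

// Serves an embedded .master resource for a matching virtual path.
public class MasterPageVirtualFile : VirtualFile
{
    public MasterPageVirtualFile(string virtualPath) : base(virtualPath) { }

    public override Stream Open()
    {
        // e.g. "VirtualPathProvider.Resources.MasterPage.master"
        string resourceName = MasterPageVirtualPathProvider.VirtualPathProviderResourceLocation
            + "." + VirtualPathUtility.GetFileName(this.VirtualPath);
        return typeof(MasterPageVirtualFile).Assembly.GetManifestResourceStream(resourceName);
    }
}

public class MasterPageVirtualPathProvider : System.Web.Hosting.VirtualPathProvider
{
    // The master page path that client pages reference.
    public const string MasterPageFileLocation = "~/MasterPageDir/MasterPage.master";
    // The namespace under which the .master file is embedded as a resource.
    public const string VirtualPathProviderResourceLocation = "VirtualPathProvider.Resources";
    // The virtual folder this provider handles.
    public const string VirtualMasterPagePath = "~/MasterPageDir/";

    private static bool IsVirtual(string virtualPath)
    {
        string appRelative = VirtualPathUtility.ToAppRelative(virtualPath);
        return appRelative.StartsWith(VirtualMasterPagePath, StringComparison.OrdinalIgnoreCase);
    }

    public override bool FileExists(string virtualPath)
    {
        return IsVirtual(virtualPath) || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        if (IsVirtual(virtualPath))
            return new MasterPageVirtualFile(virtualPath);
        return Previous.GetFile(virtualPath);
    }
}

// Registered once at startup, e.g. in Global.asax Application_Start:
//   HostingEnvironment.RegisterVirtualPathProvider(new MasterPageVirtualPathProvider());
```

Anything that doesn't fall under VirtualMasterPagePath is delegated to the Previous provider, so the rest of the site keeps reading from the file system as usual.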
Could you please give an update on whether this (VPP in precompiled apps) will be fixed at all? Appreciate your time and all the great work - thanks!

Thanks for sharing the link. As it said, the problem is in the HostingEnvironment class. Unfortunately, I do not know why this is there or whether it will be fixed.

I tried the workaround using reflection to call the internal method of the HostingEnvironment class - now I'm able to register the master page, however I get another error: <MasterPageDirectory/Master page> has not been pre-compiled, and cannot be requested. I'm developing a framework for several webapps in my company, and not having the webapps precompiled is not an option. Any suggestions?

Ok - I finally have it all working. For precompilation, apart from the workaround mentioned above, if you use a web application project as opposed to a website, you have to remove the master page reference from the aspx files (I had them point to a dummy master page for designer support, which we don't need in VS2008). However, if you have a website solution, you don't need any intervention. I have an HttpModule instead of Global.asax to register the VPP, and a base page that adds controls dynamically to the master page, all wrapped in a dll. The website page will just inherit from this base page. Thanks!

Cool, nice tip. Thanks for keeping me updated.
Any ideas what I'm doing wrong? Thanks for your help. bharman, the code shown for EmbedMasterPage in this post can be found here. Regarding the constants, here is the description - VirtualMasterPagePath - this is the path which should be handled by the VirtualPathProvider. VirtualPathProviderResourceLocation - in the example the master page is stored in the resource folder, so that is what this constant is there for. MasterPageFileLocation - this is the location of the master page which you can call from your client application. I would advise you to have the virtual path as a folder rather than at the root. Let me know if that works. Thanks for this blog. Actually I have been trying to create a global master page for a week by publishing it as a DLL and then putting it in the GAC, but this was giving me a very irritating error: An error occurred while trying to load the string resources (FindResource failed with error -2147023083). This went away when the HTML in the global master page was very small. I am going to try this. I am using a VB solution for the master page, and in it I am getting the return of the ReadResource function as Nothing, that is: assembly.GetManifestResourceStream(MasterPageVirtualPathProvider.VirtualPathProviderResourceLocation + "." + resourceFileName). How can I make it available in VB.NET? How does this work if I need to have: - any ASP.NET standard server controls or custom server controls in my master page - user controls in my master page - nested master pages. Anumole Matthew, for VB.NET related code on Assembly, check this support article. Raj, it should work like any other master page. All I am doing is serving the ASP.NET master page from a DLL instead of the file system. HTH. Is the designer support fixed in VS2008? I see that Jayanthi mentioned "I had them point to a dummy master page, for designer support, which we don't need in VS2008". Brian. Brian, no. Unfortunately that will not be available with this approach.
I have several projects that are supposed to share the same set of master pages. I also have some code-behind functionality that is common across applications, and I need to share it along with the master pages. In the methods discussed above I see that there is no way to get designer support. I would appreciate it if someone could give me some pointers on how to share master pages without losing designer support. Please help! Thanks in advance. This example seems really incomplete. Eric, what is missing for you? What is this: public const string VirtualMasterPagePath = "~/MasterPageDir/"; Can VirtualMasterPagePath be set to anything? I keep getting stack overflow errors in Page_PreInit of my page that consumes the master page. I have a feeling that it is the constants declared in 'MasterPageVirtualPathProvider.cs' not being set properly (by me). I tried this method of storing master pages in DLLs and it worked great - it does basically exactly what I want it to do, but with one restriction... It only works if I remove the caching functionality from the MasterPageVirtualFile.Open function (it's in the source you provide, but not in this post...). The issue is that whenever it retrieves a file from the cache (it works fine the first time I open the application, but does not work subsequently), I get an error telling me the stream is not open. Could you shed any light on this? Thanks, Nate. FYI: in converting this to VB, I had to remove ".Resources" from the VirtualPathProviderResourceLocation. I also had to use Sergio Pereira's suggestion of compiling the master page class. For some further feedback (and the sake of those just diving in), the virtual path provider and virtual file provider classes could be named EmbeddedResourceVirtualFile and EmbeddedResourceVirtualPathProvider if you handle your constants differently. Basically, if you had 10 embedded master pages, those two classes could serve all of them.
The sample makes it seem that you should create different virtual provider classes for each master page, whereas those two classes could serve anything embedded (images, master pages, scripts, styles, etc.). Has anyone got this error while trying to use the embedded master page? Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentNullException: Value cannot be null. Parameter name: value Source Error: Line 33: if (HttpContext.Current.Cache[virPath] == null) Line 34: { Line 35: HttpContext.Current.Cache.Insert(virPath, ReadResource(virPath)); Line 36: } Line 37: return (Stream)HttpContext.Current.Cache[virPath]; Hello, I have got the same issue as you and finally found the root cause by step-by-step debugging: you need to set BOTH the master page and its code-behind as embedded resources. The error is popping up because the .cs code-behind file is not in your resources. Steph. For anyone having problems loading the manifest resource stream, make sure you are correctly referencing the resource. An easy way to do that is to just pull up assembly__1.GetManifestResourceNames - it will show you the names of all of the resources in the executing assembly. In my case the ".Resources" was unnecessary, and my assembly name was different. My call ended up looking like this: assembly__1.GetManifestResourceStream("EmbeddedMasterPage.MasterPage.Master") - EmbeddedMasterPage being the name of my project/assembly.
http://blogs.msdn.com/shahpiyush/archive/2007/03/09/Sharing-Master-Pages-amongst-Applications-by-Embedding-it-in-a-Dll_2E00_.aspx
@Target(value={TYPE,METHOD,FIELD}) @Retention(value=RUNTIME) @Documented public @interface WebServiceRef The WebServiceRef annotation is used to define a reference to a web service and (optionally) an injection target for it. public abstract java.lang.String name - the JNDI name of the resource, in the java:comp/env namespace. public abstract java.lang.Class<?> type public abstract java.lang.String mappedName - the name of this resource, as defined by the name element or defaulted, is a name that is local to the application component using the resource. (When a relative JNDI name is specified, then it's a name in the JNDI java:comp/env namespace.) Many application servers provide a way to map these local names to names of resources known to the application server. This mapped name is often a global JNDI name, but may be a name of any form. Application servers are not required to support any particular form or type of mapped name, nor the ability to use mapped names. The mapped name is product-dependent and often installation-dependent. No use of a mapped name is portable. public abstract java.lang.Class<? extends Service> value - the service class, always a subtype of javax.xml.ws.Service. This element MUST be specified whenever the type of the reference is a service endpoint interface. public abstract java.lang.String wsdlLocation public abstract java.lang.String lookup Copyright © 2009-2011, Oracle Corporation and/or its affiliates. All Rights Reserved. Use is subject to license terms. Generated on 10-February-2011 12:41
https://docs.oracle.com/javaee/6/api/javax/xml/ws/WebServiceRef.html
Large OnLine Transaction Processing applications primarily use near third normal form databases that can have many relatively small code tables that provide an ID key for storage in large tables and user-friendly descriptions for use in your application's presentation. .NET allows binding an ID to DropDownLists or RadioButtonLists, but how about displaying the description when data is being viewed as read-only? The GridView above has 3 of the 4 columns that get their value from code tables. If you were a user, which would you rather see for Michael Jones: "DC 1", or "District of Columbia Married Male"? When a row is retrieved for viewing, you will retrieve the IDs for many of the columns rather than the descriptions that you want to display to the user. You can join all those columns to their respective code tables to get the descriptions, but that makes for far more complicated statements and Stored Procedures. E.g.: select name, state, mstatid, genderid from userdata where uid = @uid select u.name, s.statedesc, m.mstatdesc, g.genderdesc from userdata u join states s on u.state = s.state join mstat m on u.mstatid = m.mstatid join gender g on u.genderid = g.genderid where uid = @uid Another problem that arises when working with code tables is that occasionally a code will no longer be wanted, but deleting the code causes problems for old entries that used it. All the statements that you created to handle the joins will probably no longer work as you originally intended. As an example, say you buy cell phone plan 'A'. Later, the cell phone company no longer offers plan 'A' and now has plan 'B'. Plan 'A' is still valid for the person that purchased it, but any new customer would not have plan 'A' as an option. This example has two primary classes and an interface that can be used in any ASP.NET 2.0 application.
Also included in the download are the files necessary to see this capability in action. To use this example, install it in IIS or in a directory for use in Visual Studio. Create a database in SQL Server or Express, and execute the BuildDatabase.sql and Inserts.sql scripts to build the tables. The connection string in the web.config may need modification to access the database. If you do not have SQL Server or Express, you can download it and the Management Studio from Microsoft for free: Microsoft SQL Server Express and Microsoft SQL Server Management Studio Express. I am not a SQL Server Express user, and I was unable to get a connection to the server to work properly on one of my systems. If you have the same problem, set up impersonation for the application in the web.config. The node to add is: <identity impersonate="true" password="win_pwd" username="machinename\winlogin" /> In an ASPX page, a data source can be added as simply as this. These are used for the list items of DropDownLists and RadioButtonLists. In the example, AgreementDS is used by two RadioButtonLists. So, if you were getting multiple addresses, only one state data source would need to be defined. The CodeType field must match the simple name given to the code table in the CodeTableList class. By default, "Disabled" codes will not show up in these lists. Set DisplayEnabledOnly to False to see all values. <opp:CodeTableDataSource <opp:CodeTableDataSource <opp:CodeTableDataSource <opp:CodeTableDataSource <opp:CodeTableDataSource <opp:CodeTableDataSource Notice that the "Satisfaction" CodeType has two entries with different IdValues. These are configured to add the code ID of the existing row to the list even if it is disabled. Remember the cell phone plan example above? Two additional things are needed to get this to work.
First, in the GridView, DetailsView, or FormView, add the columns you need to the DataKey attribute. Second, in the code-behind in Page_Load, tell the CodeTableDataSource how to find the currently selected value. SatisfactionVDS.DatakeyValues = TestTableDetail.DataKey.Values; SatisfactionQDS.DatakeyValues = TestTableDetail.DataKey.Values; This isn't limited to the new ASP.NET 2.0 GridView, DetailsView, and FormView, although it was made for them. What is required is to set the DatakeyValues attribute to an IOrderedDictionary with keys that match the IdValue attribute. For either a DropDownList or a RadioButtonList, set the DataValueField to "ID", the DataTextField to "Description", the DataSourceID to the appropriate CodeTableDataSource, and the SelectedValue to the bound field. To get a description back for the read-only values, just pass the Eval() value to the GetCodeDesc method with the correct CodeType directly in your ASPX page. <asp:TemplateField <EditItemTemplate> <asp:RadioButtonList </EditItemTemplate> <ItemTemplate> <asp:Label Runat="server" id="Label6" Text='<%# CodeTableCache.GetCodeDesc("Agreement", Eval("AgreeUseAgain").ToString()) %>' /> </ItemTemplate> </asp:TemplateField> string GetStatement(string CacheCode); string GetConnectionString(string CacheCode); System.DateTime GetExpiration(string CacheCode); bool IsAvailable(string CacheCode); CacheCode is the simple name that is given to each code table, i.e. the States code table can just be recognized by "States". The GetStatement method is the most important, and it should get all the rows of the table. The columns required in the statement are "ID", "Description", and "Disabled". If your code table uses different column names, they must be aliased to these names.
If you don't have a "Disabled" column, just return the literal 'false', which will make all rows enabled. If you add a disabled column at a later date, just change the statement here and it will be functional. The states table statement would look like this: select state AS ID, name AS Description, 'false' AS Disabled from states order by name. In the example application, I created a single code table that can manage many code tables. It has a codetype column that must match the CacheCode used in the application. The GetConnectionString method will just return the connection string. If your code tables are in different databases or schemas, different connection strings can be used. GetExpiration returns when a cached item should be removed from the cache. Code tables have slowly changing values, so they are great candidates for caching. Since they can change, putting a reasonable expiration on them is good practice. In a high-volume site, even a short expiration like 1 minute will save many trips to the database. IsAvailable just returns whether the CacheCode is a valid CacheCode. The CodeTableList class in the example inherits from StringDictionary for tracking CacheCodes and statements. Your class can use any method for tracking that information, including an XML file or a separate database table. CodeTableCache is a static class, so there is only one instance for your entire application. A static class never needs to be instantiated with new. When you need to call a method, just call it. The best time to register your class with the CodeTableCache is at application startup. The global.asax has just such an event.
Add the following code to your global.asax: new void Application_Start(object sender, EventArgs e) { if (OppSol.Software.CodeTableHelpers.CodeTableCache.StatementList == null) OppSol.Software.CodeTableHelpers.CodeTableCache.StatementList = new CodeTableList(); } <pages> <controls> <add namespace="OppSol.Software.CodeTableHelpers" tagPrefix="opp"/> </controls> </pages> That's it. I hope you find it as useful as I do. I have used a slightly earlier revision of this in several good-sized applications, and it has worked very well. This example uses a SQLDataSource for the main tables, which I do not recommend; I like to use custom business objects, which may be the next thing I integrate code tables with. Another future enhancement may be a custom business object to hold the code table data, which would allow some additional functionality. For now, the DataSet/DataView combination is just too easy, though, so we will have to see. What I like the least about this solution is the requirement to set the DatakeyValues attribute. I don't know a more elegant way to get the values of the currently bound row from the control. Anybody else have thoughts on that? If you have any thoughts or enhancements with regard to code tables, add a comment to the discussion.
http://www.codeproject.com/Articles/15791/Binding-to-Database-Key-Code-Tables-with-Caching
C Programming/C Reference/time.h In the C programming language, time.h (used as ctime in C++) is a header file defined in the C Standard Library that contains time and date function declarations to provide standardized access to time/date manipulation and formatting. Functions char *asctime(const struct tm* tmptr) - Convert tm to a string in the format "Www Mmm dd hh:mm:ss yyyy", where Www is the weekday, Mmm the month in letters, dd the day of the month, hh:mm:ss the time, and yyyy the year. The string is followed by a newline and a terminating null character, containing a total of 26 characters. The string pointed at is statically allocated and shared by the ctime and asctime functions. Each time one of these functions is called the contents of the string are overwritten. clock_t clock(void) - Return the number of clock ticks since process start. char* ctime(const time_t* timer) - Convert a time_t time value to a string in the same format as asctime. The string pointed to is statically allocated and shared by the ctime and asctime functions. Each time one of these functions is called the content of the string is overwritten. ctime also internally uses the buffer used by gmtime and localtime as its return value, so a call to this function will overwrite it. double difftime(time_t timer2, time_t timer1) - Returns timer2 minus timer1, to give the difference in seconds between the two times. struct tm* gmtime(const time_t* timer) - Convert a time_t value to a tm structure as UTC time. This structure is statically allocated and shared by the gmtime, localtime and ctime functions. Each time one of these functions is called the content of the structure is overwritten. struct tm* gmtime_r(const time_t* timer, struct tm* result) - Convert a time_t value to a tm structure as UTC time. The time is stored in the tm struct referred to by result. This function is the thread-safe version of gmtime.
struct tm* localtime(const time_t* timer) - Convert a time_t time value to a tm structure as local time (i.e. time adjusted for the local time zone and daylight savings). This structure is statically allocated and shared by the gmtime, localtime and ctime functions. Each time one of these functions is called the content of the structure is overwritten. time_t mktime(struct tm* ptm) - Convert tm to a time_t time value. Checks the members of the tm structure passed as the parameter ptm, adjusting the values if the ones provided are not in the possible range or are incomplete or mistaken, and then translates that structure to a time_t value that is returned. The original values of the tm_wday and tm_yday members of ptm are ignored and filled with the ones corresponding to the calculated date. The range of tm_mday is not checked until tm_mon and tm_year are determined. On error, a -1 value is returned. time_t time(time_t* timer) - Get the current time (number of seconds from the epoch) from the system clock. Stores that value in timer. If timer is null, the value is not stored, but it is still returned by the function. size_t strftime(char* s, size_t n, const char* format, const struct tm* tptr) - Format tm into a date/time string. char *strptime(const char* buf, const char* format, struct tm* tptr) - Scan values from the buf string into the tptr struct. On success it returns a pointer to the character following the last character parsed. Otherwise it returns null. time_t timegm(struct tm *brokentime) - Convert a UTC broken-down time to a simple time. A portable (but non-thread-safe) way to get the same result is to set the TZ environment variable to UTC, call mktime, then set TZ back. Unix extensions The Single UNIX Specification (IEEE 1003.1, formerly POSIX) adds two functions to time.h: asctime_r[1] and ctime_r.[2] These are reentrant versions of asctime and ctime. Both functions require the caller to provide a buffer in which to store the textual representation of a moment in time.
The following sample demonstrates how to use the reentrant versions of localtime and asctime:

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    time_t rawtime;
    struct tm * timeinfo;
    struct tm timeinfoBuffer;
    char *result;
    time(&rawtime);
    /* call reentrant localtime function */
    timeinfo = localtime_r(&rawtime, &timeinfoBuffer);
    /* allocate memory for the result of the asctime call */
    result = malloc(26 * sizeof(char));
    /* call reentrant asctime function */
    result = asctime_r(timeinfo, result);
    printf("The current date/time is: %s", result);
    /* free allocated memory */
    free(result);
    return 0;
}

Since these functions are not in the C++ standard, they do not belong to the namespace std in that language. Constants CLK_PER_SEC - Constant that defines the number of clock ticks per second. Used by the clock() function. CLOCKS_PER_SEC - An alternative name for CLK_PER_SEC used in its place in some libraries. CLK_TCK - Obsolete macro for CLK_PER_SEC. Data types clock_t - Data type returned by clock(). Generally defined as int or long int. time_t - Data type returned by time(). Generally defined as int or long int. struct tm - A "broken-down" (componentized) calendar representation of time. Calendar time Calendar time (also known as "broken-down time") in the C standard library is represented as the struct tm structure, consisting of the following members: tm_sec, tm_min, tm_hour, tm_mday, tm_mon, tm_year, tm_wday, tm_yday, and tm_isdst. Examples This source code snippet prints the current time to the standard output stream.

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t timer = time(NULL);
    printf("current time is %s", ctime(&timer));
    return 0;
}

References
- "Calendar Time". The GNU C Library Reference Manual. 2001-07-06. Retrieved 2007-04-03.
- time.h: time types – Base Definitions Reference, The Single UNIX® Specification, Issue 7, from The Open Group.
- "gmtime". The Open Group Base Specifications. 2008-12-09.
1. asctime. The Open Group Base Specifications Issue 7, IEEE Std 1003.1-2008.
2. ctime. The Open Group Base Specifications Issue 7, IEEE Std 1003.1-2008.
https://en.wikibooks.org/wiki/C_Programming/C_Reference/time.h
CXF Component The cxf: component provides integration with Apache CXF for connecting to JAX-WS services hosted in CXF. - CXF Component - URI format - Options - Attachment Support - Streaming Support in PAYLOAD mode - Using the generic CXF Dispatch mode - See Also Maven users will need to add the following dependency to their pom.xml for this component: URI format: Options The serviceName and portName are QNames, so if you provide them, be sure to prefix them with their {namespace} as shown in the examples above. You can determine the data format mode of an exchange by retrieving the exchange property CamelCXFDataFormat. The exchange key constant is defined in org.apache.camel.component.cxf.CxfConstants.DATA_FORMAT_PROPERTY. How to enable CXF's LoggingOutInterceptor in MESSAGE mode. Available only in POJO mode: the relayHeaders=false setting asserts that all headers, in-band and out-of-band, will be dropped. The in-band headers are incorporated into the MessageContentList in POJO mode. The camel-cxf component does not make any attempt to remove the in-band headers from the MessageContentList. Your endpoint can then reference the CxfHeaderFilterStrategy. The MessageHeadersRelay interface has changed slightly and has been renamed to MessageHeaderFilter; it is a property of CxfHeaderFilterStrategy. Here is an example of configuring user-defined Message Header Filters. Configuring the CXF Endpoints with Apache Aries Blueprint. How to override the CXF producer address from the message header: the camel-cxf producer supports overriding the service address by setting a message header with the key "CamelDestinationOverrideUrl". How to consume a message from a camel-cxf endpoint in POJO data format: the camel-cxf endpoint consumer POJO data format is based on the CXF invoker, so the message header has a property with the name CxfConstants.OPERATION_NAME and the message body is a list of the SEI method parameters.
How to prepare the message for the camel-cxf endpoint in POJO data format: if you want to get the object array from the message body, you can get the body using message.getBody(Object[].class). How to deal with the message for a camel-cxf endpoint in PAYLOAD data format. How to get and set SOAP headers in POJO mode. How to get and set SOAP headers in PAYLOAD mode: SOAP headers are not available in MESSAGE mode, as SOAP processing is skipped. How to throw a SOAP Fault from Camel. How to propagate a camel-cxf endpoint's request and response context: the CXF client API provides a way to invoke an operation with a request and response context. If you are using a camel-cxf endpoint producer to invoke an outside web service, you can set the request context and get the response context with the following code:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=52098&showComments=true&showCommentArea=true
Please help.. I have an assignment that states that we have to print out a matrix of random 0s and 1s, the size of n-by-n with n coming from user input. With that being said, I know I have to use Math.random() to get the 0s and 1s. My problem is trying to get the actual matrix because I've looked around and people say to use printf, but I've tried a lot of variations of that and I always end up with tons of errors or just several lines of decimals. I left a couple things out which I'll annotate with '???'. Code Java: //User inputs an integer and program displays matrix of random //0's and 1's import java.util.Scanner; public class Matrix { public static void main(String[] args) { Scanner input = new Scanner(System.in); //User inputs integer System.out.print("Please enter an integer: "); int n = input.nextInt(); printMatrix(n); } public static void printMatrix(int n) { while (n > 0) { System.out.println(" " + Math.random() * 2); n--; while (n > 0) { System.out.printf( ???, Math.random()); n--; } System.out.println(???); } } }
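For what it's worth, here is one way to fill in those gaps (the class and method names below are just illustrative, not from the assignment): cast Math.random() * 2 to int to get a 0 or 1, and use two nested for loops rather than the two nested while loops above - the inner while exhausts n, so the outer loop only ever runs once.

```java
import java.util.Scanner;

public class Matrix01 {

    // Build an n-by-n matrix of random 0s and 1s.
    static int[][] randomMatrix(int n) {
        int[][] m = new int[n][n];
        for (int row = 0; row < n; row++) {
            for (int col = 0; col < n; col++) {
                // Math.random() is in [0.0, 1.0), so the cast yields 0 or 1.
                m[row][col] = (int) (Math.random() * 2);
            }
        }
        return m;
    }

    // Print one matrix row per line, values separated by spaces.
    static void printMatrix(int[][] m) {
        for (int[] row : m) {
            for (int v : row) {
                System.out.printf("%d ", v);
            }
            System.out.println();
        }
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Please enter an integer: ");
        // Fall back to a fixed size if no input is available.
        int n = input.hasNextInt() ? input.nextInt() : 4;
        printMatrix(randomMatrix(n));
    }
}
```

printf is only needed if you want column formatting; System.out.print(v + " ") works just as well here.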
http://www.javaprogrammingforums.com/%20object-oriented-programming/36356-printing-matrix-printingthethread.html
Version: (using KDE 4.4.1) OS: Linux Installed from: Archlinux Packages When I click Edit/Approved for a relevant expression, nothing happens, but if I use Ctrl-U it works properly - so it's only broken from the menu. I will try to provide a patch for it later, but please confirm whether the problem exists for you as well, thanks. I tested lokalize too and found the same issue. I used kdesdk-4.3.4. Hello Nick, you can see my patch for it below, but I have a question for you: I'm not sure, but the problem seems to be one in KToolBarPopupAction. The docs say "this action is a simple menuitem when plugged into a menu, and has a popup only in a toolbar", while in lokalize it also has the submenu when used in the menubar; it's not a regular KAction but a KToolBarPopupAction. (It is expected to have a submenu, but also to react on click.) Would you be so kind as to tell me what the submenu is supposed to show? (There is a slot called showStatesMenu.) The code looks like there is some intention for it having a popup. May I commit this patch, or would you take another approach?
svn diff ../lokalize/src/editortab.cpp
Index: ../lokalize/src/editortab.cpp
===================================================================
--- ../lokalize/src/editortab.cpp (revision 1106451)
+++ ../lokalize/src/editortab.cpp (working copy)
@@ -75,6 +75,7 @@
 #include <kurl.h>
 #include <kmenu.h>
 #include <kactioncategory.h>
+#include <ktoggleaction.h>
 #include <kinputdialog.h>
@@ -472,7 +473,7 @@
 //
-    action = actionCategory->addAction("edit_approve", new KToolBarPopupAction(KIcon("approved"),i18nc("@option:check whether message is marked as translated/reviewed/approved (depending on your role)","Approved"),this));
+    action = actionCategory->addAction("edit_approve", new KToggleAction(KIcon("approved"),i18nc("@option:check whether message is marked as translated/reviewed/approved (depending on your role)","Approved"),this));
     action->setShortcut(QKeySequence( Qt::CTRL+Qt::Key_U ));
     action->setCheckable(true);
     connect(action, SIGNAL(triggered()), m_view,SLOT(toggleApprovement()));
The popup menu is shown when editing XLIFF files (they support a lot of states). The proper way to correct this is to call toggleApprovement() in case showStatesMenu() is going to show an empty menu (so toggleApprovement() instead of showing an empty menu):
https://bugs.kde.org/show_bug.cgi?id=231870
Scope in VB.net By: Steven Holzner The scope of an element in your code is all the code that can refer to it without qualifying its name (or making it available through an Imports statement). In other words, an element's scope is its accessibility in your code. As we write larger programs, scope will become more important, because we'll be dividing code into classes, modules, procedures, and so on. You can make the elements in those programming constructs private, which means they are tightly restricted in scope. In VB .NET, where you declare an element determines its scope, and an element can have scope at one of the following levels: Block scope—available only within the code block in which it is declared Procedure scope—available only within the procedure in which it is declared Module scope—available to all code within the module, class, or structure in which it is declared Namespace scope—available to all code in the namespace For example, if you declare a variable in a module outside of any procedure, it has module scope, as in this case, where I'm declaring and creating a LinkLabel control that has module scope: Dim LinkLabel1 As LinkLabel Private Sub Button1_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click LinkLabel1 = New LinkLabel() LinkLabel1.AutoSize = True LinkLabel1.Location = New Point(15, 15) ⋮ Declaring a variable in a procedure gives it procedure scope, and so on. Inside these levels of scope, you can also specify the scope of an element when you declare it. Here are the possibilities in VB .NET:
Protected—The Protected statement declares elements to be accessible only from within the same class, or from a class derived from this class. You can use Protected only at class level, and only when declaring a member of a class.

Friend—The Friend statement declares elements to be accessible from within the same project, but not from outside the project. You can use Friend only at module, namespace, or file level. This means you can declare a Friend element in a source file or inside a module, class, or structure, but not within a procedure.

Protected Friend—The Protected statement with the Friend keyword declares elements to be accessible either from derived classes or from within the same project, or both. You can use Protected Friend only at class level, and only when declaring a member of a class.

Private—The Private statement declares elements to be accessible only from within the same module, class, or structure. You can declare a Private element inside a module, class, or structure, but not at namespace or file level, and not within a procedure.

Let's take a look at an example. Here's what block scope looks like—in this case, I'll declare a variable, strText, in an If statement. That variable can be used inside the If statement's block, but not outside (VB .NET will tag the second use here as a syntax error):

    Module Module1
        Sub Main()
            Dim intValue As Integer = 1
            If intValue = 1 Then
                Dim strText As String = "No worries."
                System.Console.WriteLine(strText)
            End If
            System.Console.WriteLine(strText) 'Will not work!
        End Sub
    End Module

Here's another example. In this case, I've created a second module, Module2, and defined a function, Function1, in that module. To make it clear that I want to be able to access Function1 outside Module2 (as when I call it as Module2.Function1 in the Main procedure), I declare Function1 public:

    Module Module1
        Sub Main()
            System.Console.WriteLine(Module2.Function1())
        End Sub
    End Module

    Module Module2
        Public Function Function1() As String 'OK
            Return "Hello from Visual Basic"
        End Function
    End Module

However, if I declared Function1 as private to Module2, it's inaccessible in Module1 (and VB .NET will tag Module2.Function1 below as a syntax error):

    Module Module1
        Sub Main()
            System.Console.WriteLine(Module2.Function1()) 'Will not work!
        End Sub
    End Module

    Module Module2
        Private Function Function1() As String
            Return "Hello from Visual Basic"
        End Function
    End Module

Besides procedures, you also can make other elements—such as variables—public or private. Here, I'm declaring strData as public in Module2 to make it clear that I want to access it outside the module, which I can do in Module1, referring to strData as Module2.strData:

    Module Module1
        Sub Main()
            System.Console.WriteLine(Module2.strData)
        End Sub
    End Module

    Module Module2
        Public strData As String = "Hello from Visual Basic"
    End Module

In fact, when you declare elements like strData public throughout the program, you need not qualify their names in other code, so I can refer to strData in Module1 as well:

    Module Module1
        Sub Main()
            System.Console.WriteLine(strData)
        End Sub
    End Module

    Module Module2
        Public strData As String = "Hello from Visual Basic"
    End Module

Now that VB .NET is object-oriented, understanding scope is more important. In object-oriented programming, scope becomes a major issue, because when you create objects, you often want to keep the data and code in those objects private from the rest of the program. Scope also becomes an issue when you derive one OOP class from another.
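Protected is the one access level described above that is not shown in code. Here is a minimal sketch of how it behaves; the class names BaseAccount and SavingsAccount and the Balance field are my own illustrative inventions, not from the tutorial:

```vbnet
Module Module1
    Sub Main()
        Dim account As New SavingsAccount()
        System.Console.WriteLine(account.Describe())
    End Sub
End Module

Class BaseAccount
    'Protected: visible only in this class and in classes derived from it
    Protected Balance As Integer = 100
End Class

Class SavingsAccount
    Inherits BaseAccount

    Public Function Describe() As String
        'OK: a derived class may read the Protected member of its base class
        Return "Balance is " & Balance
    End Function
End Class
```

Adding System.Console.WriteLine(account.Balance) to Main would be tagged as an error, because Main sits outside the BaseAccount class hierarchy; that is exactly the restriction the Protected description promises.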
CRI/T/53/93

IN THE HIGH COURT OF LESOTHO

In the matter between REX and NKALIMENG MOTHOBI, Accused

Judgment delivered by the Honourable Mr. Justice M.M. Ramodibedi on the 21st day of September 1999

The accused in this matter is charged with three (3) counts, the full particulars of which are as follows:-

COUNT 1: Murder: "In that upon or about the 10th day of September 1991 and at or near Sekamaneng in the district of Maseru, the said accused, acting in concert with others, did unlawfully and intentionally kill one TOLOKO CONSTANTINUS KIMANE."

COUNT 2: Theft: "In that upon or about the 10th day of September 1991 and at or near Sekamaneng in the district of Maseru, the said accused, acting in concert with others, did unlawfully and intentionally steal a motor vehicle, namely a Toyota Cressida Station Wagon with registration number A3360, the property or in the lawful possession of Toloko Constantinus Kimane (now deceased)".

COUNT 3: Conspiracy in contravention of Section 183(2) of the Criminal Procedure and Evidence Act: "In that during the period August to September, 1991 (the exact date to the prosecutor unknown) and at or near Maseru in the district of Maseru, the said accused did unlawfully and intentionally conspire with Samuel Maliehe, Teboho Michael Chaka, Remaketse Sehlabaka, Monyake Mathibela and others, to aid or procure the commission of or to commit the offence of murdering one Sam Rahlao, an employee of Standard Bank Chartered, Maseru."

On the 19th August 1999, when the indictment was read to the accused, he pleaded guilty to Count 1 (Murder) and Count 3 (Conspiracy). He pleaded not guilty to Count 2. The plea of the accused in respect of these three counts was duly accepted by the Learned Director of Public Prosecutions, Mr. Mdhluli, for the Crown. In respect of Count 2 the Learned Director of Public Prosecutions immediately withdrew the charge and duly submitted that the accused should be acquitted.
In the circumstances the accused has since been found not guilty in respect of this count and has been acquitted.

Back to Count 1 (murder). Mr. Phoofolo for the accused duly informed the Court that his client's plea of guilty to murder was in accordance with his instructions and, in fairness to him, he has consistently persisted in this attitude throughout the trial, which must no doubt go down in history as one of the shortest trials this Court has known to date in a matter as serious as this.

I should perhaps state at the outset that although it is extremely unusual and perhaps unheard of for a represented accused person to plead guilty to murder on the instructions of his legal representative, as a matter of general practice there is nothing stopping such an accused person from making a clean breast of the charge laid against him. Each case must however certainly depend upon its own particular circumstances.

Despite the accused's plea of guilty to murder in Count 1, however, this Court recorded a plea of not guilty and put the Crown to the proof of its case beyond reasonable doubt. The Court adopted this approach in terms of Section 240 (1) (a) of the Criminal Procedure and Evidence Act 1981 which reads as follows :- ; (emphasis added)"

I have underlined the words "other than murder" to indicate my view that the Court has no power to record a plea of "guilty" in a charge of murder. The use of the word "may" in the section is in my view both empowering and also indicative of a judicial discretion vested in the Court; that is to say, in all charges other than murder coming before the High Court, the Court shall have power and a discretion to bring in a verdict on the accused's plea of guilty without hearing any evidence. In the case of murder the High Court may not, or does not, have such power and/or discretion, but must simply record a plea of "not guilty" and proceed to determine the issues in the ordinary way. It may only bring in a verdict at the end thereof.
The motivation for the above proposition is, I venture to say, that murder is obviously a very serious offence which is punishable by death where there are no extenuating circumstances. This is so in terms of Section 297 of the Criminal Procedure and Evidence Act 1981. Now experience shows that very often an accused person will plead guilty to an offence out of sheer ignorance, or sometimes out of bravado, or for reasons best known to himself, or because he was poorly advised (the list is not exhaustive). It thus behoves the Court to ensure, in a matter as serious as murder, that the full circumstances of the accused's guilt are proved beyond reasonable doubt. That is the very foundation of our criminal jurisprudence as I have always perceived it to be.

So much for the law. I turn then to the facts of the case, which are common cause and are mostly contained in a "Summary of Substantial Facts" jointly prepared by the Defence Attorney Mr. Phoofolo and the Learned Director of Public Prosecutions, admittedly in consultation with the accused himself. This "Summary of Substantial Facts" was handed in by consent as EX "A" and because of its importance I consider that it merits quotation in full. It crisply states the following:-

"The deceased TOLOKO CONSTANTINUS KIMANE was found dead near Sekamaneng in the district of Maseru on the 11th September 1991. His body was found near a donga on the Main North 1, not far from the National Abattoir. He had sustained gunshot wounds on the upper part of his body. Before the deceased met his death he had been employed by the erstwhile Barclays Bank PLC as a branch manager at Maseru. Sam Rahlao, who is the subject of Count 3, was at all times material hereto employed in a managerial capacity at the erstwhile Standard Bank Chartered, Maseru branch.
During or about the 22nd July 1991 members of the Lesotho Union of Bank Employees (Lube) who were employed by the two commercial banks, namely Barclays Bank PLC and Standard Bank Chartered, embarked on a strike and asked the two banks to start negotiations with them. The management at the said two banks regarded the strike declared by Lube as illegal and refused to enter into negotiations with Lube, insisting that the workers should return to work before negotiations could commence. Numerous attempts by mediators to try and settle the dispute between the striking Lube members and the two commercial banks failed.

The accused was at all material times hereto employed by Barclays Bank PLC at its Leribe branch. He was one of those members of Lube who had decided to embark on strike action in July 1991. He was active in the activities of Lube at Leribe. As the strike by members of Lube proceeded it became apparent that the union's attempts to negotiate with the affected banks were taking a knock. The banks persisted in their refusal to negotiate with Lube while the strike, which they perceived to be unlawful, persisted. Some of the members of Lube who initially supported the strike action advocated by Lube began withdrawing their support and were returning to work. As some of the members of Lube broke ranks with those who supported continuation of the strike action, the leadership of Lube came out in favour of continuing the strike. There were then clear divisions between those members of Lube who wanted to return to work and those who chose to pursue the strike path. Some Lube members in Leribe, realising that the strike action was losing steam, decided to approach the leaders of Lube to seek advice as to what they could do to ensure that the strike succeeded and to bolster the waning support for continuation of the strike. The accused was one of those who came out solidly for the continuation of the strike.
As a firm supporter of the continuance of the strike action, the accused approached some of the Lube leadership to discuss ways and means of thwarting efforts to break the strike. Apparently some members of the Lube leadership agreed that stern measures needed to be taken to reaffirm the resolve of those members of Lube who wanted the strike to succeed. The accused agreed with some Lube leaders that some of those in management positions could be considered legitimate targets for elimination, in the hope that such intimidatory tactics would induce management to enter into negotiations with Lube.

After his meeting with some individuals in the Lube leadership the accused returned to Hlotse determined to recruit hired hands to help to assassinate those members of management who were perceived by hard-core Lube strike advocates to be standing in the way of the strikers. On his return to Hlotse, in pursuance of the agreed objectives to kill certain members of the management of the affected banks, the accused recruited certain people who had some military training to assist them in carrying out their objective. Among those who were recruited by the accused were his cousin FUSI KOETJE and a friend of his, MONYAKE JOSEPH MATHIBELA. The two aforementioned testified as accomplice witnesses at the trial of the co-conspirators of the accused. One of the co-conspirators of the accused was one SAMUEL MONONTSI MALIEHE, who was tried separately with others in CRI/T/2/92 for the same offences with which the accused is charged. Pursuant to the accused's and his co-conspirators' plan to kill certain members of management of the two affected banks, the accused together with his co-conspirators came to Maseru three times to make preparations for carrying out their intended objective. During one of their visits to Maseru two of their intended victims were identified. These were the deceased and Mr. SAM RAHLAO, who is the subject of Count 3.
On the 10th September 1991, one of their intended victims, the deceased, was shot and killed in his car. Present in the car when the deceased was shot were the accused, SAMUEL MONONTSI MALIEHE and MONYAKE JOSEPH MATHIBELA referred to herein. At the trial of SAMUEL MONONTSI MALIEHE, MR MATHIBELA admitted taking a leading role in the planning and execution of the conspiracy to kill the deceased. There is some dispute as to who did what when the deceased was killed, but all those present in the deceased's car had agreed that he should be killed. The prosecution alleges that at all material times hereto, the accused, those with him in the deceased's car and others acted in furtherance of a common objective to commit the alleged crimes."

The accused duly admitted and adhered to the contents of EX "A", which was read into the record as part of the Crown case. Now in terms of Section 273 of the Criminal Procedure and Evidence Act 1981 an accused or his legal representative in his presence may in any criminal proceedings admit any fact relevant to the issue, and such admission constitutes sufficient evidence of that fact. EX "A" must therefore be viewed in that light as indeed it in effect constitutes admissions by the accused.

By consent with Mr. Phoofolo for the accused, the Learned Director of Public Prosecutions once more handed in the deceased's postmortem report by Dr. Olivier. This was marked EX "B". It reads as follows:-

"I am a registered medical practitioner and hold the degree of M.Med (Med Forens). I am registered (sic) the S A Medical and Dental Council as a Forensic Pathologist. I am employed by the University of the Orange Free State as Professor and head of the Department of Forensic Medicine in the University of the Orange Free State, Bloemfontein, the Provincial Administration of the Orange Free State and the Department of National Health as Professor and Chief State Pathologist.
On the 25th September 1991, starting at 10:30, I conducted an external post-mortem on the body of the deceased, CONSTANTINUS KIMANE, at the written request of attorneys Harley and Morris of Maseru. The body of the deceased was identified to me by Dr. Moorosi, Pathologist, Government Mortuary, Maseru as been (sic) that of Mr. Constantinus Kimane. Dr. Moorosi stated that the deceased had died on the 9th day of September 1991 and that he himself had conducted a full post-mortem on the body. Dr. Moorosi was present and very helpful during my examination of the body. At the time of my examination the body was totally naked. The clothes that the deceased had worn at the time of his death were not available for my examination. The appearance of the body indicated that a full post-mortem examination had already been done, after which the body had been sutured.

Body of an adult male, no facilities to record length and weight of the body. General early signs of muscular atrophy so that the body appears to be overweight. At the time of the examination post-mortem changes had already set in; dehydration and drying out of the skin and mucosa present. Changes especially accentuated around and in the wounds on the anterior aspect of the body. Hypostasis present. Rigor mortis totally gone. Post-mortem loss of a small area of epidermis in the left groin area.

Slight ante-mortem contusion-abrasion lesions on and in the skin, probably caused by a slight blunt force, in the following areas:
7.1 Right forehead, about 3cm above the eyebrow, area of 2 x 2cm.
7.2 Smaller slighter area on the left forehead, about 1cm in size.
7.3 Lesion on the anterior aspect of the right lower leg, below the knee, over the head of the tibia.

Four separate ante-mortem wounds and tracts in the chest, appearance of through and through bullet wounds. For description purposes see also annexure A.
8.1 Three bullet wounds through the chest.
Relatively small round penetrating projectile wounds with identical appearance, size, circumference and diameter (7mm) on the posterior aspect of the chest on the right hand side. Each of these wounds shows a very typical abrasion ring indicating entrance wounds. No other lesions or marks or any stains around these wounds.

Wound A1 to A2: Circular round entrance wound in the skin on the right back, about 6cm under the shoulder-neck area, 8cm to the right of the midline. Projectile penetrated from back to the front with an angle towards the left, with an exit wound A2 on the front of the chest, 2cm below the medial aspect of the right clavicula in the lateral part of the sternum. The wound is star shaped, shows discolouration and drying and had the appearance of an exit wound.

Wound tract B1 to B2: Round circular entrance wound (B1) on the back of the chest on the right hand side, 4cm inferior and medial of wound A1. Projectile penetrated from the back to front with an angle towards the left, with the exit wound through the lateral edge of the sternum with the exit wound in the skin just to the left of the sternum. Projectile caused typical bevelling of the sternum outwards towards the front.

Wound tract C1 to C2: Round circular entrance bullet wound in the right side of the chest, 4cm inferior and medially to wound B1. Projectile penetrated from the back to the front through the left lateral aspect of the sternum, with the exit wound in the skin just to the left of the sternum. Projectile causes very typical outward fractures of the sternum.

8.2 Two bullet wounds with wound tract indicating an (sic) through and through shot in the anterior aspect of the chest, obliquely from the right to the left. Wound and wound tract D1 to D2: Entrance wound, ovally shaped, on the anterior aspect of the chest on the right hand side just medially to the right nipple. Appearance of an oblique entrance shot.
Wound in the skin, anterior on the chest on the right hand side, about 2cm medial to the right nipple. Projectile penetrated from right to left through the anterior aspect of the chest with a longitudinal exit wound on the left side of the chest, 2cm superior and lateral of the left nipple. Evidence that this projectile also penetrated the sternum.

Summary: Adult male. Slight injuries due to slight blunt force applications to the head and the right lower leg. Three through and through projectile wounds through the right side of the chest with the three entrance wounds on the right posterior aspect of the shoulder and thoracks (sic), with the exit wounds in the central area on the anterior aspect of the chest. Forth (sic) oblique projectile wounds through the anterior aspect of the chest. Entrance wound in the region of the right nipple and the exit wound in the region of the left nipple.

In the medico-legal evaluation of the above findings the following aspects must be kept in mind:
9.1 The clothes of the deceased were not available for inspection and examination.
9.2 Second post-mortem was conducted on the 25th of September, about 16 days after death.
9.3 Appearance of the external wounds may have been altered by the post-mortem time factor.
9.4 The precise localization of the external wounds may have been disturbed by the previous medico-legal post-mortem examination. Dissection of the internal organs disturbed the internal tracts of the projectile.
9.5 Evidence that might have been available at the time of the death or shortly afterwards may have been lost while the body was handled and a post-mortem done.

After my examination I came to the conclusion that the death of the deceased was due to through and through projectile wounds through the chest with injury to vital organs. I know and understand the contents of this declaration. I have no objection to taking the prescribed oath. I considered the prescribed oath to be binding on my consience (sic).
PROFESSOR J A OLIVIER, CHIEF STATE PATHOLOGIST"

Yet another postmortem report of the deceased was handed in by consent, marked EX "C". It is by Dr. Moorosi and is dated 12th November 1991. Stripped of its side issues, the essence of this report reveals that the deceased had a number of "perforating" wounds of 0.5cm in diameter each on his body. The sketch diagram provided shows no fewer than five such wounds around the chest area of the deceased. There are also a number of such wounds indicated on the back of the deceased, and the cause of death according to the Doctor was due to "contusion of aorta, pericardium and right ventricle, with bilateral haemothorax and hemopericardium".

It will be noticed that although EX "C" gives a more scientific cause of death than EX "B", the two really complement each other. Accordingly I am satisfied that the injuries sustained by the deceased were the primary cause of death and I so find. It is evident from EX "B" that the injuries on the deceased were inflicted with a firearm. The Doctor's findings were that they were "bullet" wounds causing injury to "vital" organs of the deceased's body. There is indeed no dispute about this.

Weighing all of the aforegoing factors within the context of the circumstances of this case as a whole, I come to the inescapable conclusion that the only reasonable inference to be drawn from the facts is that apart from the accused's confession the killing of the deceased was both unlawful and intentional. That being the case I consider that the offence of murder has actually been committed. Now in terms of Section 240 (2) of the Criminal Procedure and Evidence Act 1981 any court may convict a person of an."
The real and sole question that arises for determination at this stage, then, is whether the accused's statement EX "D" amounts to a confession in law. In this regard I should mention that by consent with Mr. Phoofolo for the accused the Learned Director of Public Prosecutions handed in the accused's statement EX "D", made to a Magistrate at Mafeteng on 18 September 1991. I should mention as well that the accused confirmed that the contents of the statement were indeed made by him freely and voluntarily without any undue influence. The statement reads as follows:-

"It happened that in our Bank of Barclays Bank as we have Board Union we asked that our salaries should be increased from our employers. We looked into the fact when our salaries were last increased and we found that they were last increased in 1985. We said our Union should negotiate with the employers and they seem not to understand. We approached the Labour Commissioner to make it possible that we should meet our employers. He failed, then from there we decided to go on a strike on 22nd July. While we were on strike we asked that the Labour Commissioner should make it possible that we meet our employers again. They said they were not going to have talks with people who were on strike. We approached the Minister of Finance. Employers sulked from that meeting. Attempts were made to meet the Lesotho Businessmen. They failed also. We tried to meet Ntate Ramaema. We failed as well - we failed to meet him. Bishops tried to meet the employers and the Government seniors. They failed. The community tried but failed. Priests tried - they failed. International Organisations tried but failed. And unions of workers in the country tried but failed. When this was the situation, these people who were on strike held a meeting that they should see to it how they are going to solve the problem by engaging in violence.
I made the people of the NSS at Hlotse to be aware that these people who were on strike and their grievances not addressed want to make violence. I brought this to the attention of the police officer there at Hlotse who is ntate Molapo. And that did not help. It was obvious that banks were recruiting foreigners to come and work here and employing new employees. All people who were on strike planned that there be found people to come and invade/attack those people working in the banks. I found two at Hlotse and I heard that some found members of the ANC. Those that I found I made it possible for then to meet committee members. And they agreed as to when the job would commence. On Monday the 9th I came with one ntate by the name of Mosia to Maseru I introduced him to the leader so as to arrange for transport. The following day I came along with them being two in number. We arrived at Lancers Inn. We travelled in Teboho's vehicle to Lakeside. While still waiting there, there arrived ntate Kimane. And ntate Chaka said "here is one of our targets." And they planned to attack that one that was nearer. I asked for a HA from him. We went to Borokhoaneng and back. After passing Maqalika, Ramaleke shot him on the right hand side. Mosia shot him at the back on the shoulders. Ramaleke drove the vehicle and left him at the donga. From there the vehicle proceeded to Hlotse. On arrival at Tsifalimali we had an accident. I even sustained injuries. The following day I went to my home at TY. I took four days treating the wounds. On Monday I returned back to Hlotse. On Tuesday when we had attended BCP meeting, one person arrived there and informed me that police were looking for me at my home. I asked him whether he identified them and he said 'no'. I proceeded to my place but found them absent. I went to search for them. I found them at Motsoeneng's place. I asked them that I heard that they were looking for me and they said 'yes'. They said we should go to Charge Office.
On arrival there I was asked questions about Kimane's matter. They even informed me that they knew each and every action from Lakeside. From there they brought me here to Maseru where I was asked similar questions. My answers were the same. I even informed them that a thing of this nature would still continue if steps are not taken."

Now it is trite law that for a confession to be admissible as evidence against the person making it, it must be proved to have been freely and voluntarily made by such person in his sound and sober senses and without having been unduly influenced thereto. Such is the whole import of Section 228 of the Criminal Procedure and Evidence Act 1981. It is common cause, and I accordingly find, that the accused made the statement in EX "D" after having been duly warned by the Learned Magistrate M. Makoa, and that he made it freely and voluntarily in his sound and sober senses and without having been unduly influenced thereto.

The meaning of a confession was defined by De Villiers ACJ in R v Becker 1929 AD 167 at 171 in the following words, with which I am in respectful agreement: "an unequivocal acknowledgement of guilt, the equivalent of a plea of guilty before a court of law."

Taken as a whole, the statement EX "D" reveals the accused's active participation in the conspiracy to attack the deceased and others who were opposed to the strike by the bank employees (Lube). He personally took part in luring the deceased to his killers, who were for that matter in the company of the accused himself. The strategy used to lure the deceased to his death was for the accused and his co-conspirators to ask for a lift from the deceased. It worked, and while the unsuspecting deceased was driving along with this murderous group he was brutally shot a number of times and was admittedly killed in the process. His body was dumped in a donga, one would imagine, like a dog. Nobody apparently cared, and that included the accused.
No report was made to the police or indeed anybody to assist the deceased in any way. Weighing all of the aforesaid considerations, I am satisfied that the context in which the statement EX "D" was made amounts to a confession to the accused's participation in the murder of the deceased.

The Crown has called the evidence of PW1 'Mathakane Setlaba, perhaps to ensure that it left no stone unturned in its attempt to prove the accused's guilt beyond reasonable doubt, as indeed it must. It is the unchallenged evidence of PW1, briefly, that she is 48 years old and is a teacher by profession, having obtained a degree in B.Ed Accounting from the National University of Lesotho (NUL) in 1996. In July 1991 she was employed at Barclays Bank (Leribe Branch) as an Accountant. She had been working for the bank for 17 years. She knows the accused, who was her fellow worker at Barclays Bank. He was a clerk and as such junior to her.

PW1 confirms that in July 1991 the Lesotho Union of Bank Employees (Lube) went on strike and that she too was involved in the strike as a member of the Union. She further confirms that the accused too participated in the strike as he was also a member of the Union. While some members of the Union returned to work, PW1 and the accused "did not give up the struggle" and they continued with the strike with others. It is the evidence of PW1 that on the 11th September 1991 she heard about the death of the deceased, who was branch Manager of Barclays Bank (Maseru). On the same day in the morning she met the accused while she was seated outside the bank with one 'Mampe Lehlabi, who was her deputy. As PW1 opened the door of the vehicle on her way to the toilet she saw the accused standing at the door of the vehicle.
PW1 testifies that the accused then told her that he (the accused) and others had killed the deceased and that PW1 and her companion should inform one Sejake Tuoane, who was a shop steward, but that if they disclosed this matter then they would follow suit. The accused then left. It is further her evidence that she did inform Mr. Tuoane as requested by the accused. PW1 testifies that when the accused spoke to her he "looked like" a frightened person. It is further her evidence however that she enjoyed warm relations with the latter.

PW1 was not cross-examined at all, and for my part I should like to say that I watched her demeanor as she gave evidence and I got the impression that she was all out to tell the truth. I believe her evidence that the accused confessed to her about his participation in killing the deceased. That completed the Crown case.

It is pertinent to note that the accused did not testify in his own defence nor did he call any witnesses. Consistently with his plea of guilty he simply closed his case. It was indeed his right to adopt this approach. The onus of proof is always on the Crown to prove its case beyond reasonable doubt. Indeed our criminal justice system is such that it is not for the accused to prove his innocence.

Conspiracy/Common Purpose

There is no evidence that the accused engaged in the actual shooting of the deceased himself. His participation must then be determined from the point of view that he admittedly took part in a conspiracy to invade or attack those who were not engaged in the strike, including the deceased. It is the Court's finding that not only was the accused active in this conspiracy but he also helped find some men to do the job. I imagine that in the underworld language such men would be called hired assassins, but it really does not matter what they were called and I shall accordingly ignore this terminology at this stage.
What matters is the length to which the accused went in ensuring that the conspiracy he shared with his co-conspirators became a reality. As I have previously stated, this he ensured by personally luring the deceased to his fate. He was thus in the position of an instigator in furtherance of the common purpose to attack the deceased.

It is true the modern approach is that there is no magical power contained in the doctrine of common purpose and that where there is participation in a crime, each one of the participants must satisfy all the requirements of the definition of the crime in question before he can properly be convicted as a co-perpetrator. Such was the view of the Appellate Division in S v Maxaba 1981 (1) SA 1148 (A) per Viljoen JA. I do not understand the Learned Judge of Appeal, however, to say that common purpose no longer forms part of the law of South Africa. I can say with confidence for my part that the doctrine of common purpose is still part of our law in this country (see for example Costa Peter Saba v Rex 1991-96 LLR 13791). It must however be used with caution to ensure that innocent persons are not convicted for crimes committed by others. I should mention that I have cautioned myself accordingly in the instant case.

I have attached due weight to the fact that there was a general plan by the accused and his co-conspirators to invade or attack the non-striking bank employees, including the deceased. The culprits felt that all lawful means to address their grievances had failed. In the context of this case I consider that the plan to attack the deceased included an understanding, express or implied, that violence might be used, which in turn might result in the death of the deceased. In my view the accused knew or ought to have known about this eventuality. This indeed is implicit in EX "A".
It follows from the aforegoing that, in my view, the deceased was killed in furtherance of the common purpose in which the accused played an active role and was reckless as to the consequences. That explains why he did not dissociate himself from the attack on the deceased or render any assistance to him. His attitude in this regard was consistent with and in furtherance of his avowed plan shared with his co-perpetrators to "eliminate" or "assassinate" the deceased as EX "A" indicates.

I have also attached due weight to the fact that the accused actually pleaded guilty to the murder and that the plea was in accordance with the instructions of his attorney Mr. Phoofolo who is very experienced indeed. As I have stated previously the accused's attitude in pleading guilty to murder has been very consistent throughout the trial. He has never at any stage sought to withdraw the plea of guilty and as such I consider that the plea of guilty came from his heart and was meant to reveal the truth that he personally took part in the murder of the deceased in furtherance of the common purpose.

In R v Kumalo and Another 1930 AD 193 at page 207 Stratford J.A. had occasion to state the following:- "..." I respectfully agree. As earlier stated the accused has for that matter never sought to withdraw his plea of guilty which in my view was made solemnly and freely. What this then means is that the accused's confession before me as demonstrated by his plea of guilty to murder stands as admissible evidence against him.

The fact that the accused has neither testified nor proffered any evidence in his defence is in my view a factor that weighs against him in the particular circumstances of this case where there can be no doubt that the Crown's case called for an answer. Weighing all of the aforegoing considerations it follows that the Crown has in my view proved its case on Count 1 beyond reasonable doubt. Accordingly the accused is found guilty of murder.
Although the accused pleaded guilty to Count 3, I am left with a very uncomfortable feeling that a miscarriage of justice arises here. This is so because as I read this Count, the Act from which the section forming the subject matter of the charge has been quoted has not been spelt out in full. It merely says "the Criminal Procedure and Evidence Act" and it does not quote the year of the Act. Yet as I see it the statutory law relating to criminal procedure and evidence in this country has always been in a state of transition. It is therefore important to inform the accused clearly where we presently stand. In my view, a charge must be reasonably informative enough to enable the accused to prepare for his defence and to know exactly what case he is facing. He cannot be expected to hazard a guess. Where a charge is founded on a statute it is imperative, in my judgment, that full particulars of the statute in question are quoted. Such particulars must be such that they are reasonably sufficient to inform the accused of the nature of the charge. It follows from the aforegoing therefore that Count 3 is, in my view, so fatally defective as to lead to a miscarriage of justice. Accordingly the accused is found not guilty on this Count and he is accordingly acquitted.

In sum therefore, the accused is found guilty of murder on Count 1. He is found not guilty on both Counts 2 and 3 and he is accordingly acquitted on those Counts. Both my assessors agree.

M.M. Ramodibedi
JUDGE
21st September 1999

For the Crown: Mr. G.S. Mdhluli
For the Accused: Mr. E.H. Phoofolo

EXTENUATING CIRCUMSTANCES

In terms of Section 296 (1) of the Criminal Procedure and Evidence Act 1981 the Court is now enjoined to determine whether or not there are any extenuating circumstances in this matter.
That section reads as follows:-

"296 (1) Where the High Court convicts a person of murder, it shall state whether in its opinion there are any extenuating circumstances and if it is of the opinion that there are such circumstances, it may specify them."

In my view the most comprehensive definition of extenuating circumstances is to be found in S v Letsolo 1970 (3) S.A. 476 AD at 476F-477B per Wessels JA in the following words:- "... (Sec. 296(1)). And it should be weighed with the most anxious deliberation ..." I respectfully agree and it is on the basis of this definition that I approach this matter.

So much for the law. I turn then to the facts of the case and I should mention at the outset that the accused did not give evidence even at this stage of the proceedings. The Court must then do its level best to determine on a balance of probabilities from the record as it presently stands whether or not extenuating circumstances exist. This is a moral judgment in which every relevant consideration tending to reduce the moral blameworthiness of the accused should be weighed with scrupulous care.

I have received full submissions from both the Learned Director of Public Prosecutions and Mr. Phoofolo for the accused on extenuating circumstances. They both eloquently argued in favour of the existence of extenuating circumstances on the authority of the case of Maliehe & Others v Rex 1997-1998 Lesotho Law Reports and Legal Bulletin 168. In that case, which involved the co-perpetrators of the present accused, the Court of Appeal found that extenuating circumstances existed by virtue of the fact, inter alia, that the accomplice and the actual killer got off scot free. The Court then felt that it would be "unconscionable" were the accused to be sentenced to death where the actual killer, who was described as cold blooded and without conscience, did not hang. I respectfully discern the need to adopt the same approach as the Court of Appeal in the instant matter.
The evidence before me points to great frustration amongst the employees of the Banks including the accused. Indeed the Court of Appeal considered this as an extenuating circumstance and observed in the process that these employees were indeed unaccustomed to the proper utilisation of the new legislation dealing with labour relations, namely the Labour Code 1992. Once more I respectfully share the approach of the Court of Appeal in the matter before me. I consider that the accused is an unsophisticated Mosotho man who occupied a very junior rank in the hierarchy of the Union of the Bank Employees (Lube). I have also considered the accused's plea of guilty as a sign of contrition in the matter. Weighing all of the aforegoing factors cumulatively I have come to the conclusion that extenuating circumstances exist in this matter.

SENTENCE

I confess that this is the most difficult part of the trial particularly in the special circumstances of this case. It is the task of the Court to ensure that the sentence fits the crime and in this regard the Court must balance the interests of justice with the personal circumstances of the accused. What makes sentence particularly difficult in this case is that although sentence is preeminently a matter for the discretion of the trial Court, it is a salutary principle nonetheless for courts to strive for some uniformity in sentences. In this regard it is pertinent to observe that the co-perpetrators of the present accused were sentenced to an effective term of 16 years imprisonment by the Court of Appeal in Maliehe & Others v Rex (supra). This was on the 5th February 1997. It is useful even then to recall that the starting point in the view of the Court of Appeal was 20 years imprisonment, which was merely reduced because the accused had been in custody since 1991. The trial Court had sentenced the accused to death. That indeed is an indication of how serious the matter is.
On the lenient side however I shall bear in mind that, extenuating circumstances having been found to exist, this Court is not bound to sentence the accused to death. Indeed in terms of Section 297 (3) of the Criminal Procedure and Evidence Act 1981 the High Court has a discretion to impose any sentence other than death upon any person convicted before or by it of murder if it is of the opinion that there are extenuating circumstances.

I have taken into account all that has been eloquently said by both the Learned Director of Public Prosecutions and the Defence attorney in mitigation of sentence. They have both supported a more lenient sentence than that imposed by the Court of Appeal. Indeed they have both suggested 12 years imprisonment. In particular I have taken into account all the personal circumstances of the accused, as for example the fact that he is a first offender; it is the first time he has clashed with the law. He deserves to be given an opportunity to reform. I shall also bear in mind that he is married with one minor child, a girl aged 6 years old. He has elderly parents and is the sole bread-winner. He suffers from piles.

I have particularly been influenced by the accused's plea of guilty. There is no doubt in my mind that this is a sign of remorse and/or contrition. The accused has saved the time of the Court. Although the unlawful killing of a human being can never be justified, I have considered the fact that the accused and his colleagues had a legitimate grievance which does not seem to have been adequately addressed. Reprehensible as the accused's conduct may have been in taking the law into his own hands, I think he deserves some measure of leniency. One other factor that has weighed heavily in my mind in favour of the accused is the fact that he has admittedly spent more than six (6) years awaiting trial in custody. He could not obtain bail because he had apparently absconded to the Republic of South Africa.
He was re-arrested in 1993 and has been in custody since. This indeed is a sad state of affairs which may only bring the justice system in this country into disrepute. Indeed I have taken into account the fact that the accused has spent a much longer time in gaol awaiting his trial than his co-perpetrators in Maliehe's case.

On the other hand I should record at this stage that this Court believes in the sanctity of human life and as such the unlawful taking away of human life deserves to be punished adequately as a deterrent to others and for protection of the interests of members of the public. A signal needs to be sent out that it does not pay to take the law into one's own hands and to eliminate people perceived to be unwanted. The deceased's family has lost its beloved one as a result of this senseless killing. Indeed an aggravating factor in this matter is the fact that there is an element of premeditation in which the deceased was eventually killed execution-style. The sentence that this Court is about to impose is equally aimed at discouraging this. Indeed this Court believes that legitimate grievances should be addressed through courts of law. That is precisely what courts are there for.

Accordingly the most appropriate sentence that I can think of in the particular circumstances of this case is one of thirteen (13) years imprisonment and it is so ordered. My Assessors agree.

M.M. Ramodibedi
https://lesotholii.org/ls/judgment/court-appeal/1999/98
Details of Driver Development Environment

In the previous article, we saw that upon installing WDK 7.1.0, we got build environments for Windows 7, Windows Server 2003, Windows Vista, Windows Server 2008 and Windows XP. Since we're on Windows XP, we'll be using this build environment. Inside the Windows XP folder are the checked and free build environments. The checked build environment builds a driver that has debugging enabled and compiler optimizations disabled, which can be of great help when debugging our driver. The free build environment should be used at the end to produce the production driver, which has debugging disabled and optimizations enabled. We need to keep in mind that a driver built for a specific version of Windows will actually be supported on all versions of Windows up until and including that Windows version; we need to keep this in mind when choosing the build target.

If we open the relnote.htm file that was installed in the C:\WinDDK\7600.16385.1 directory, we'll get to the WDK documentation. There's a great table that describes what each of the subdirectories in the C:\WinDDK\7600.16385.1 directory contains. The table is shown on the picture below:

The \src directory should contain the WDK driver samples, but it only contains a few pictures on how to sign the binary. This is because we didn't check the Samples check box during the installation of the DDK. Nevertheless, the samples can also be obtained online. Let's download the first hello-world sample as shown below:

We can see that the WpdHelloWorld driver supports four objects: device object, storage object, folder object and a file object. Once we click on the driver, we'll be redirected to the page presented below, where we can download the sample driver. We can also see that we need to have Visual Studio 2012 installed. I had to switch to Windows 7 to install Visual Studio 2012 to be able to use this project.
If we're using Visual Studio 2012, we should download WDK 8.0, since it's fully integrated with Visual Studio 2012. I downloaded the hello-world project archive, which contains the following:

The description.html file contains the same information as the online version of the hello-world project. The C++ folder actually contains the code that we're after. We need to extract the C++ folder somewhere on the disk and then open the WpdHelloWorldDriver.vcxproj file with Visual Studio 2012 as shown below:

Once that is done, Visual Studio will load the project file and present us with the project code that we can browse.

Building a Simple Program with Build.exe

When we only have access to the 7.1.0 version of the WDK, which is true for Windows XP operating systems, we have to build programs and kernel drivers with the build.exe program. To do so, we first need to enter the build environment by opening the "x86 Checked Build Environment" as such:

The reason why we're doing this is because the build environment automatically sets the required environment variables as seen below (note that only a few environment variables are presented for brevity):

We need to create a new folder C:\driver\ and place the file main.c in it. The main.c file is a simple hello world program with this C code:

#include <stdio.h>

int __cdecl main(void)
{
    printf("Hello World!\n");
    return 0;
}

In our build environment console, we have to move to the newly created directory C:\driver\, which can be seen below. We first moved to the wanted directory and then listed the files in it, and only the file main.c is present, as it should be.

In order to continue, we must take a look at the TARGETTYPE macro, which specifies the type of program being built. TARGETTYPE is used by the build.exe program to note what kind of input files to expect and what we want to build. The build.exe program reads the macros from a file named sources that is located in the code directory.
Each sources file must contain the following macros:

- TARGETNAME: specifies the name of the binary to be built, without the extension.
- TARGETTYPE: take a look below.
- SOURCES: specifies the files (with extensions) to be compiled, separated by spaces.
- TARGETLIBS: specifies other libraries that we want to link against, separated by spaces.

The sources file can also contain other variables like the following [11]:

- UMTYPE: specifies the target type
- UMENTRY: name of the default entry point function
- USE_MSVCRT: use the MSVCRT library
- USE_STL: enables the C++ STL library
- USER_C_FLAGS: specifies compiler flags
- MSC_OPTIMIZATION: specifies which optimization flags are enabled

The TARGETTYPE macro can hold the following values:

- PROGRAM: a user-mode .exe program that does not export anything
- PROGLIB: an executable program that also exports functions for other programs
- DYNLINK: a DLL that exports functions that other programs can use
- LIBRARY: an import library that will be linked with other code in user mode
- DRIVER_LIBRARY: an import library that will be linked with other code in kernel mode
- DRIVER: a kernel-mode driver
- EXPORT_DRIVER: a kernel-mode driver that also exports functions for other drivers
- MINIPORT: a kernel-mode driver that does not link with ntoskrnl.lib or hal.lib
- GDI_DRIVER: a kernel-mode graphics driver that links with win32k.sys
- BOOTPGM: a kernel-mode driver
- HAL: the hardware abstraction layer
- NOTARGET: no target should actually be created, only some processing

In our hello world program, we therefore need to use the following values for the required variables:

- TARGETNAME: main
- TARGETTYPE: PROGRAM
- SOURCES: main.c
- UMTYPE: console
- UMENTRY: main
- USE_MSVCRT: 1
- MSC_OPTIMIZATION: /Od

The actual sources file will look like this: After that, we should go back into the console window and execute the build command, which reads the sources file to get the values of the specified variables to guide the build process.
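Putting those values together, a sources file matching the list above (the original article shows it only as a screenshot) would read:

```text
TARGETNAME=main
TARGETTYPE=PROGRAM
UMTYPE=console
UMENTRY=main
USE_MSVCRT=1
MSC_OPTIMIZATION=/Od
SOURCES=main.c
```

Each macro sits on its own line in NAME=VALUE form; build.exe picks the file up automatically from the current directory.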
Then the build program should create an executable named main.exe, as specified by the TARGETNAME variable. The whole compilation looks like this:

Now the C:\driver\ directory contains the following files: the file main.c and the sources file are there, but there are also two other entries present in the directory: a log file that contains the detailed log of the build process, which can be very useful for analysis if something goes wrong, and a folder named objchk_wcp_x86 that actually holds the executable, the symbol file and some other files as well. This can be seen on the picture below:

If we now run the executable main.exe, it should print "Hello World!" to the console window, as can be seen on the picture below:

We've just built a simple hello world program, which differs from a kernel driver by quite a bit. When programming a driver, we should probably specify TARGETTYPE=DRIVER in the sources file.

Building a Driver with WDK 8.0

We can use different methods to build the driver, but the method depends on whether we're using WDK 8.0 or WDK 7.1.0. If we're using WDK 8.0, then we can build a driver for Windows 8, Windows 7 or Windows Vista directly in Visual Studio 2012, or alternatively with MSBuild on the command line. If we're using WDK 7.1.0, then we can build a driver for Windows XP using the build.exe program. Build.exe has actually now been replaced by MSBuild, which uses the same compiler and build tools as Visual Studio.

Let's take a look at how to build a driver in Visual Studio 2012. To begin, we need to download and install WDK 8.0 if we haven't done so already. It's advisable that you use Windows 7 with this setup. Because Visual Studio 2012 uses the MSBuild script underneath the GUI, we're going to describe how to use that to build the driver we've previously downloaded. First we have to open "Developer Command Prompt for VS2012" as we can see on the picture below:

This is the development environment that Visual Studio 2012 uses to build its projects.
If we execute the msbuild command, we can see that the command prints an error about us not specifying the project file; this means that the msbuild command is found and we can use it to build projects. We can print the help information of that command by executing the "msbuild /?" command, which will print all the arguments that we can pass to msbuild. We won't display all the options here, because there are just too many of them. Rather, we'll present the exact command that we can use to build the Visual Studio project. The actual command is the following (note that we have to extract the hello-world project from above into the C:\cpp\ directory):

> msbuild /t:clean /t:build C:\cpp\WpdHelloWorldDriver.vcxproj

When running that command, we'll receive errors like those below:

The errors are presented because we installed Visual Studio 2012 first and then installed the WDK. If we want to make VS2012 recognize the WDK, we need to go to "Add/Remove Programs" in Control Panel, right-click on the WDK and repair the WDK installation. The picture below shows the initial repair window:

Once the WDK has been repaired, we need to restart the "Developer Command Prompt for VS2012" and issue the same command again. After that, the driver compiles without a problem. From there on, we can load the driver into the kernel and start using its services. In future articles we'll describe how to do this on our own example, so we won't do so here.

Conclusion

In this article, we've seen how we can compile a simple program with Visual Studio, MSBuild and build.exe. The process is the same when building kernel drivers, except that some variables are different. If we run newer versions of Windows, like Windows Vista or Windows 7, then we can install WDK 8.0, which integrates into Visual Studio very easily, so we can use it to develop a kernel driver.
However, on older versions of Windows operating systems, like Windows XP, we don't have that luxury, because we can only install WDK version 7.1.0, which cannot be seamlessly integrated with Visual Studio. Because of this, we have to build the kernel driver manually with the build.exe program. We've seen the variables and values that affect the compile process; those variables need to be put into the sources file, where the build.exe program can find them. Basically, Visual Studio does the same as build.exe, but automates the building process so we don't have to do it manually. Underneath, it still uses the new MSBuild program, which is still command-line based, but we don't have to deal with that if we don't want to. I guess at some point of Windows development, we have to take a look at how to do it manually with the build.exe tool, because we can learn the internals of the Windows compilation scheme, which can be useful in many areas of Windows architecture.
http://resources.infosecinstitute.com/windows-building-environment-for-kernel-driver-development/
Re: Java to write a blob to disk, does any one have the java to read a blob from disk

Your Java example is used to get data from PL/SQL to disk. If you MUST use java to go the other way, I cannot help. If you want to get from a disk file to a PL/SQL blob there is no need for java - use BFILE.

Garry Gillies
Database Administrator
Business Systems
Weir Pumps Ltd
149 Newlands Road, Cathcart, Glasgow, G44 4EX
T: +44 0141 308 3982
F: +44 0141 633 1147
E: g.gillies_at_weirpumps.com

From: "Juan Cachito Reyes Pacheco" <jreyes_at_dazasoftware.com>
To: <oracle-l_at_freelists.org>
Sent by: oracle-l-bounce_at_freelists.org
Date: 26/02/04 15:38
Subject: Java to write a blob to disk, does any one have the java to read a blob from disk
Please respond to oracle-l

From Mark A. Williams from Indianapolis, IN USA. Here is the script in java to save a blob to disk (works perfectly). Adjust for your environment; line wrapping may need to be undone...

connect / as sysdba;

grant javauserpriv to scott;

begin
  dbms_java.grant_permission('SCOTT', 'java.io.FilePermission', 'c:\temp\blob.txt', 'write');
end;
/

connect scott/tiger;

create or replace java source named "exportBLOB" as
import java.lang.*;
import java.io.*;
import java.sql.*;

// get an input stream from the blob
InputStream l_in = p_blob.getBinaryStream();

// get buffer size from blob and use this to create buffer for stream
int l_size = p_blob.getBufferSize();
byte[] l_buffer = new byte[l_size];
int l_length = -1;

// write the blob data to the output stream
while ((l_length = l_in.read(l_buffer)) != -1)
{
  l_out.write(l_buffer, 0, l_length);
  l_out.flush();
}

// close the streams
l_in.close();
l_out.close();
};
/

-- Archives are at FAQ is at -----------------------------------------------------------------.
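Garry's BFILE suggestion for the opposite direction (loading a disk file into a PL/SQL BLOB) could be sketched roughly as follows. The table my_table, its columns and the directory name are hypothetical, and the snippet assumes the standard DBMS_LOB package:

```sql
-- Directory object pointing at the OS folder holding the file
CREATE OR REPLACE DIRECTORY blob_dir AS 'c:\temp';

DECLARE
  l_bfile BFILE := BFILENAME('BLOB_DIR', 'blob.txt');
  l_blob  BLOB;
BEGIN
  -- my_table(id NUMBER, data BLOB) is a hypothetical target table
  INSERT INTO my_table (id, data)
    VALUES (1, EMPTY_BLOB())
    RETURNING data INTO l_blob;

  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/
```

The BFILE locator only references the external file; DBMS_LOB.LOADFROMFILE copies its contents into the database-resident BLOB, so no Java stored procedure is needed for this direction.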
Received on Thu Feb 26 2004 - 10:16:50 CST
http://www.orafaq.com/maillist/oracle-l/2004/02/26/2692.htm
expand the row out line

- expand the row out line: ...create out line for rows and columns. Finally we expand the row outline... in output between + and - linked with state line. Then at last we expand the row outline.
- setting out line: ...create out line for rows and columns. Code description: the package we need... for column. You can create the out line for both rows and columns.
- expand the column out line: ...create out line for rows and columns. Finally we expand the column outline... between + and - linked with state line. Then at last we expand the out line.
- add image in a row of table: i have a table in which i have to add... jLabel.setIcon(new ImageIcon("E:/2.jpg")); but when i pass this jlabel in the row it shows... project is on dead line... please
- Count Row - JSP-Servlet: Hello all, please I need your help on how to display the number of row(s) affected along with the affected row(s) in MS SQL database 2000... { response.setContentType("text/html"); PrintWriter out = response.getWriter
- Adding button to each row for the table and adding row to another table: Hi, I need to add a button to each line in the table (table data is retrieved)... row of the table
- COMMAND LINE ARGUMENTS: Java program to accept 5 command line arguments and find their sum; also find the largest among the 5. Hi Friend, try the following code: import java.util.*; class ...
- Deleting a Row from SQL Table Using EJB: ...to delete a row from the SQL Table. Find the steps given below that describe how to delete a particular row from the database table using EJB.
- JavaScript add row dynamically to table: if we want to add a row to a table dynamically then we can also do this with JavaScript. In this example we will describe how to add a row dynamically to the table.
- line length in java - Java Beginners: Write a program that asks the user to enter two words. The program then prints out both words on one line. The words will be separated by enough dots so that the total line length is 30.
- Remove JTable row that read txt file records: Hi everyone. I have... a "Delete" button; when a row is selected and the button clicked, the row must be deleted. But when I click the button, the row is not deleted and an ArrayIndexOutOfBoundsException...
- How to delete the row from the Database by using servlet: Dear Sir... 25 users' details are there. I am given the 6th (some nth row) user's details... =UTF-8"); PrintWriter out = response.getWriter
- How to Manipulate List to String and print in separate row: header row BufferedWriter out = new BufferedWriter(new FileWriter(filename
- Command Line Arguments in Java: a command line argument provides a convenient way... The running Java application can accept any number of arguments from the command line...
- Setting Line Style: ...create oval, then set the style of line and color. Code description... HSSFWorkbook. The method used in this example: setLineWidth(int lineWidth)
- how to display the selected row from the data table in model panel??: the below displayed is my datatable: tableDatas.xhtml <rich...(); return dataList; } please help me out! I get the model panel but the values...
- delete row: how to delete row using checkbox and button in php... ("sourabh", $link); $rows=mysql_query("select * from sonu"); $row=mysql... while($row=mysql_fetch_array($rows)) { ... }
- How to delete the row from the Database by using while loop in servlet: Dear Sir/Madam, I am trying to delete one user's data in the Oracle SQL... charset=UTF-8"); PrintWriter out = response.getWriter
http://roseindia.net/discussion/19890-expand-the-row-out-line.html
Number of pairs with a given sum

Reading time: 25 minutes | Coding time: 5 minutes

In this article, we will find out how we can count the number of pairs in an array whose sum is equal to a given number. The brute force approach takes O(N^2) time, but we can solve this in O(N) time using a hash map. We solve this problem using two approaches:

- Brute force approach [ O(N^2) time and O(1) space ]
- Efficient approach using a hash map [ O(N) time and O(N) space ]

For example:

a[] = {1,2,3,4,5,6}
sum = 5

The pairs with sum 5 are {1,4} and {2,3}, so the output is 2. Note that other pairs like (1,2) and (3,4) do not sum to 5, so they are not considered; in fact, there are 15 pairs in total. To solve this problem we can take the help of an efficient algorithm and a good container data structure, but first we shall look at the naive algorithm, then solve it with an efficient approach.
Pseudocode: - Find all pairs - for each pair, check if the sum is equal to given number int count_pairs(int list, int sum) { int length = length_of(list); int count = 0; for(int i = 0; i<length; i++) for(int j = i+1; j<length; j++) if(list[i] + list[j] == sum) ++count; return count; } Code implementation: Following is the complete C++ implementation: #include <bits/stdc++.h> using namespace std; int pair_calc(int arr[], int n, int sum) { int count = 0; for (int i=0; i<n; i++) for (int j=i+1; j<n; j++) if (arr[i]+arr[j] == sum) count++; return count; } int main() { int n; int a[100]; cout<<"enter the size of array"<<endl; cin>>n; cout<<"enter the array"<<endl; for(int i=0;i<n;i++) { cin>>a[i]; } int sum; cout<<"enter the sum:"<<endl; cin>>sum; cout << "The number of pairs= " << pair_calc(a, n, sum); return 0; } Output input: enter the size of the array: 5 enter the array: 1 3 2 4 2 enter the sum: 4 The number of pairs=2 Complexity of Brute Force approach Time complexity: O(N^2) Space complexity: O(1) Efficient algorithm O(N) We use an unordered_map to fulfill our task. This algorithm consists of two simple traversals: - The first traversal stores the frequency of each element in the array, in the map. - The second traversal actually searches the pairs that have the required sum.But in any case the pair is counted two times so the counter's value has to be halved. And if in case the pair a[i] and a[i] satisfies the case then we will have to subtract 1 from the frequency counter. 
Pseudocode:

int pairs(int a[], int sum)
{
    int length = length_of(a);
    hashmap m;
    for (int i = 0; i < length; i++)
        if a[i] is not in m
            add a[i] to m with value 1      // (a[i], 1)
        else
            increment value of a[i]         // (a[i], value++)

    int count = 0;
    for (int i = 0; i < length; i++)
    {
        if (sum - a[i] is in m)
            count = count + value of sum - a[i]
        if (sum - a[i] == a[i])
            count--;        // to ignore pairing a[i] with itself
    }
    return count/2;         // as each pair has been counted twice
}

Code implementation: Following is the complete C++ implementation:

#include <bits/stdc++.h>
using namespace std;

int Pairs_calc(int a[], int n, int sum)
{
    unordered_map<int, int> m;
    for (int i=0; i<n; i++)
        m[a[i]]++;

    int count = 0;
    for (int i=0; i<n; i++)
    {
        count += m[sum-a[i]];
        if (sum-a[i] == a[i])
            count--;
    }
    return count/2;
}

int main()
{
    int arr[] = {2,4,5,1,0};
    int n = sizeof(arr)/sizeof(arr[0]);
    int sum = 6;
    cout << "the number of pairs are = " << Pairs_calc(arr, n, sum);
    return 0;
}

Output:

the number of pairs are = 2

Explanation: in the array 2,4,5,1,0 we want to find the pairs with sum = 6. The map first stores each value and its frequency; here each element is unique, so each has a frequency of 1. After this, the search for the pairs begins: since the target sum is 6, we actually search for 6-a[i] as the partner of a[i]. Whenever 6-a[i] is found in the map, we increase the counter. But by doing so we count each pair twice, so we need to halve the value.

Complexity:

Time complexity: O(N)
Space complexity: O(N)

Note that the space complexity increases from O(1) in the brute force approach to O(N) in the efficient hash map approach, but the time complexity improves from O(N^2) to O(N). The idea is that if we compromise on space complexity, we can actually improve the time complexity.

Task

How will you modify the above efficient approach to print the pairs? The idea is to simply print the value whenever you are incrementing the count value.
To avoid printing duplicates, one can store the pairs in a set and, at the end, print all the values in the set. With this, you have everything you need to solve this problem efficiently. Enjoy.
https://iq.opengenus.org/pairs-with-certain-sum/
Posted on July 9, 2019

This year I was lucky to have both my papers accepted for the Haskell Symposium. The first one is about the problematic interaction of Typed Template Haskell and implicit arguments, and the second is a guide to writing source plugins. Read on for the abstracts and download links.

Matthew Pickering, Nicolas Wu, Csongor Kiss (PDF).

Matthew Pickering, Nicolas Wu, Boldizsár Németh (PDF).

This post is a question about whether the combination of nix, Cachix, Travis CI, haskell.nix and Hakyll was the perfect solution to these constraints or an exercise in overkill. An obvious question at this stage is why is nix necessary at all? Wouldn't a CI configuration which uses cabal have worked equally as well? On reflection, I could think of four reasons why I considered this to be a good idea.

the haskell-ci generated travis file.

Posted on June 11, 2019

In the old days of the Make build system, the only reliable IDE-like feature which was useful whilst working on GHC was a tags file. Even loading GHC into GHCi, the simplest of interactive development workflows, was not easily possible. Thankfully times are changing: there are now build targets to start a GHCi session, which enables developers to use tooling such as ghcid or vscode-ghc-simple. Something which is quite important when working on a project with over 500 modules! In this post we'll briefly describe some recent advancements in developer tooling which have been made possible by the move to Hadrian.

ghci

The first target allows a developer to load GHC into GHCi. The -fno-code option is used, which means that you can't evaluate any expressions. It is useful for rapid feedback.

ghcid

ghcid can be used whilst working on GHC by invoking the ./hadrian/ghci.sh target. There is a .ghcid file included in the repo which includes some basic settings instructing ghcid to reload the session if hadrian/ changes.
It might also be useful to add further directories here so that working with the many components of GHC is seamless.

haskell-ide-engine

Once you have a working ghci target then in theory it becomes possible to use all other tooling with your build system. I realised that it would be possible to get haskell-ide-engine working with GHC, but it required a very significant refactor.

Here's a short demo of using haskell-ide-engine on GHC's code base using my fork which integrates HIE into hadrian/cabal/rules_haskell/stack/obelisk pic.twitter.com/rA1ps7dSb1 — Matthew Pickering ((???)) March 27, 2019

As a result, the branch can't easily be merged back into the main repo, but once it is merged then haskell-ide-engine will be more flexible and target agnostic.

:main

A final goal is to be able to run GHC's main function from inside the interpreter. In order to do this it's necessary to interpret the code rather than pass -fno-code. With some modifications to the ./hadrian/ghci.sh script and patches by Michael Sloan we have been able to load GHC into GHCi in interpreted mode. Unfortunately, this isn't enough, as in order to build programs with HEAD you also need to build libraries such as base with HEAD. The way around this is to first compile stage2 and then use the stage2 compiler to launch GHCi and load GHC into that. Then the libraries will be the correct versions and can be used to compile other modules. A few months ago I got this working, but since then it seems that the workflow has been broken.

It's a bit unfortunate that you have to jump through so many hoops in order to compile even a simple module, but this is an unavoidable consequence of how GHC compiles and uses modules. Once you can execute :main, you can also use the GHCi debugger to debug GHC itself! This works without any problems, but until you can use :main to compile programs it's of limited utility. I used the debugger to find the original reason why :main was failing when compiling a program.
Posted on June 11, 2019

The new GHC GitLab CI infrastructure builds hundreds of different commits a week. Each commit on master is built, as well as any merge requests; each build produces a bindist which can be downloaded and installed on the relevant platform. ghc-artefact-nix provides a program ghc-head-from which downloads and enters a shell providing an artefact built with GitLab CI.

ghc-artefact-nix

You can install ghc-head-from using NUR.

nix-shell -p nur.repos.mpickering.ghc-head-from

There are three modes of operation:

ghc-head-from           # the latest artefact built from master
ghc-head-from 1107      # the artefact for a specific merge request
ghc-head-from <url>     # a specific bindist

The URL you provide has to be a direct link to a fedora27 bindist. The bindist is downloaded from the (very flaky) CDN and patched to remove platform specific paths. The fedora27 job is used because it is built using ncurses6, which works better with nix.

The old-ghc-nix repo provides a mkGhc function which can be used in a nix expression to create an attribute for a specific bindist. It is also packaged using NUR.

nur.repos.mpickering.ghc.mkGhc {
  url = "";
  hash = "sha256";
  ncursesVersion = "6";
}

The ncursesVersion attribute is important to set for fedora27 jobs, as the function assumes that the bindist was built with deb8, which uses ncurses5.

If you plan on using the artefact for a while then make sure you click the "keep" button on the artefact download page, as otherwise it will be deleted after a week. This is very useful if you are developing a library against an unreleased version of the compiler and want to make sure all your collaborators are using the same version of GHC.
So in this post I’ll give an example of how I took back control and eliminated two levels of abstraction for an interpreter by writing a program which runs in three stages. Enter: An applicative interpreter for Hutton’s razor. data Expr = Val Int | Add Expr Expr eval :: Applicative m => Expr -> m Int eval (Val n) = pure n eval (Add e1 e2) = (+) <$> eval e1 <*> eval e2 Written simply at one level, there are two levels of abstraction which could be failed to be eliminated. Expr. Applicativethen we can remove the indirection from the typeclass. Using typed Template Haskell we’ll work out how to remove both of these layers. First we’ll have a look at how to stage the program just to eliminate the expression without discussion the application fragment. This is a two-stage program. module Two where import Language.Haskell.TH data Expr = Val Int | Add Expr Expr eval :: Expr -> TExpQ Int eval (Val n) = [|| n ||] eval (Add e1 e2) = [|| $$(eval e1) + $$(eval e2) ||] The eval function takes an expression and generates code which unrolls the expression that needs to be evaluated. Splicing in eval gives us a chain of additions which are computed at run-time. $$(eval (Add (Val 1) (Val 2))) => 1 + 2 By explicitly separating the program into stages we know that there will be no mention of Expr in the resulting program. That’s good. Eliminating the Expr data type was easy. We’ll have to work a bit more to eliminate the applicative. In the first stage, we will eliminate the expression in the same manner but instead of producing an Int, we will produce a SynApplicative which is a syntactic representation of an applicative. This allows us to inspect the structure of the program in the second stage and remove that overhead as well. 
data SynApplicative a where
  Return :: WithCode a -> SynApplicative a
  App    :: SynApplicative (a -> b) -> SynApplicative a -> SynApplicative b

data WithCode a = WithCode { _val :: a, _code :: TExpQ a }

WithCode is a wrapper which pairs a value with a code fragment which was used to produce that value. If you notice in the earlier example, this wasn't necessary when it was known that we needed to persist an Int, as there is a Lift instance for Int. However, in general, not all values can be persisted, so using WithCode is more general and flexible, if a bit more verbose.

elimExpr eliminates the first layer of abstraction and returns code which generates a SynApplicative.

elimExpr :: Expr -> TExpQ (SynApplicative Int)
elimExpr (Val n) = [|| Return (WithCode n (liftT n)) ||]
elimExpr (Add e1 e2) =
  [|| Return (WithCode (+) codePlus)
        `App` $$(elimExpr e1)
        `App` $$(elimExpr e2) ||]

liftT :: Lift a => a -> TExpQ a
liftT = unsafeTExpCoerce . lift

codePlus = [|| (+) ||]

In the case for Add we encounter a situation where we would have liked to use nested brackets to persist the value of [|| (+) ||]. Instead you have to lift it to the top level and then persist that identifier.

Next, it's time to provide an interpreter to remove the abstraction of the applicative. In order to do this, we need to provide a dictionary which will be used to give the interpretation of the applicative commands.

data ApplicativeDict m = ApplicativeDict
  { _return :: forall a . WithCode (a -> m a)
  , _ap     :: forall a b . WithCode (m (a -> b) -> m a -> m b)
  }

WithCode is necessary again as it will be used to generate a program, so it's necessary to know how to implement the methods.
elimApplicative :: SynApplicative a -> ApplicativeDict m -> TExpQ (m a)
elimApplicative (Return v) d@ApplicativeDict{..} =
  [|| $$(_code _return) $$(_code v) ||]
elimApplicative (App e1 e2) d@ApplicativeDict{..} =
  [|| $$(_code _ap) $$(elimApplicative e1 d) $$(elimApplicative e2 d) ||]

This interpretation is very boring, as it just amounts to replacing all the constructors with their implementations. However, it is exciting that we have guaranteed the removal of the overhead of the applicative abstraction.

Now that we've written two functions independently to eliminate the two layers, they need to be combined together. This is the birth of our three-stage program.

import Three

elim :: Identity Int
elim = $$(elimApplicative $$(elimExpr (Add (Val 1) (Val 2))) identityDict)

identityDict = ApplicativeDict{..}
  where
    _return = WithCode Identity [|| Identity ||]
    _ap     = WithCode idAp [|| idAp ||]

idAp :: Identity (a -> b) -> Identity a -> Identity b
idAp (Identity f) (Identity a) = Identity (f a)

elim is the combination of elimApplicative and elimExpr. The nested splices indicate that the program is more than two levels. Using -ddump-splices we can have a look at the program that gets generated.

Test.hs:10:30-59: Splicing expression
    elimExpr (Add (Val 1) (Val 2))
  ======>
    ((Return ((WithCode (+)) codePlus)
        `App` Return ((WithCode 1) (liftT 1)))
        `App` Return ((WithCode 2) (liftT 2)))
Test.hs:10:11-73: Splicing expression
    elimApplicative $$(elimExpr (Add (Val 1) (Val 2))) identityDict
  ======>
    (idAp ((idAp (Identity (+))) (Identity 1))) (Identity 2)

Both steps appear in the debug output with the code which was produced at each step. Notice that we had very precise control over what code was generated and that functions like idAp are not inlined. In this case, the compiler will certainly inline idAp and so on, but in general it might be useful to generate code which contains calls to GHC.Exts.inline to force even recursive functions to be inlined once.
In general, splitting your program up into stages is quite difficult, so mechanisms like type class specialisation will be easier to use. In controlled situations though, staging gives you the guarantees you need.

Posted on January 31, 2019

Quotation is one of the key elements of metaprogramming. Quoting an expression e gives us a representation of e.

[| e |] :: Repr

What this representation is depends on the metaprogramming framework, and what we can do with the representation depends on the representation. The most common choice is to disallow any inspection of the representation type, relying on the other primitive operation, the splice, in order to insert quoted values into larger programs.

The purpose of this post is to explain how to implement nested quotations. From our previous example, quoting a term e gives us a term which represents e. It follows that we should be allowed to nest quotations, so that quoting a quotation gives us a representation of that quotation.

[| [| 4 + 5 |] |]

However, nesting brackets in this manner has been disallowed in Template Haskell for a number of years, despite nested splices being permitted. I wondered why this restriction was in place and it seemed that no one knew the answer. It turns out there was no technical reason, and implementing nested brackets is straightforward once you think about it correctly.

We will now be concrete and talk about how these mechanisms are implemented in Template Haskell. In Template Haskell the representation type of expressions is called Exp. It is a simple ADT which mirrors source Haskell programs very closely. For example, quoting 2 + 3 might be represented by:

[| 2 + 3 |] :: Exp
=
InfixE (Just (LitE 2)) (VarE +) (Just (LitE 3))

Because Exp is a normal data type, we can define its representation in the same manner as any user defined data type. This is the purpose of the Lift type class, which defines how to turn a value into its representation.
class Lift t where
  lift :: t -> Q Exp

So we just need to implement instance Lift (Q Exp) and we're done. To do that we implement a general instance for Lift (Q a) and then also an instance for Exp.

instance Lift a => Lift (Q a) where
  lift qe = qe >>= \b' ->
            lift b' >>= \b'' ->
            return ((VarE 'return) `AppE` b'')

This instance collapses effects from building the inner code value into a single outer layer. In order to make the types line up correctly, we have to insert a call to return to the result of lifting the inner expression.

Instances for Exp and all its connected types are straightforward to define, and thankfully we can use the DeriveLift extension in order to derive them.

deriving instance Lift Exp
... 40 more instances
deriving instance Lift PatSynDir

It's now possible to write a useless program which lifts a boolean value twice before splicing it twice to get back the original program.

-- foo = True
foo :: Bool
foo = $($(lift (lift True)))

Running this program with -ddump-splices would show us that when the first splice is run, the code that is inserted is the representation of True. After the second splice is run, this representation is turned back into True.

If you use variables in a bracket, the compiler has to persist their value from one stage to another so that they remain bound, and bound to the correct value, when we splice in the quote. For example, quoting x, we need to remember that the x refers to the x bound at the top level, which is equal to 5.

x = 5
foo = [| x |]

If we didn't, when splicing in foo in another module, we would use whatever x was in scope or end up with an unbound reference to x. No good at all.

For a locally bound variable, we can't know the value of the variable in advance. We will only know it later, at runtime, when the function is applied.

foo x = [| x |]

Thus, we must know, for any value that x can take, how we construct its representation. If we remember, that's precisely what the Lift class is for.
So, to correct this cross-stage reference, we replace the variable x with a splice (which lowers the level by one) and a call to lift.

foo x = [| $(lift x) |]

The logic for persisting variables has to be extended to work with nested brackets.

foo3 :: Lift a => a -> Q Exp
foo3 x = [| [| x |] |]

In foo3, x is used at level 2 but defined at level 0, hence we must insert two levels of splices and two levels of lifting to rectify the stages.

foo3 :: Lift a => a -> Q Exp
foo3 x = [| [| $($(lift (lift x))) |] |]

Now with nested brackets, you can also lift variables defined in future stages.

foo4 :: Q Exp
foo4 = [| \x -> [| x |] |]

Now x is defined at stage 1 and used in stage 2. So, as usual, we need to insert a lift and a splice in order to realign the stages. This time just one splice, as we only need to lift it one level.

foo4 :: Q Exp
foo4 = [| \x -> [| $(lift x) |] |]

After renaming a bracket, all the splices inside the bracket are moved into an associated environment.

foo = [| $(e) |]
=>
[| x |]_{ x = e }

When renaming the RHS of foo, we replace the splice of e with a new variable x; this is termed the "splice point" for the expression e. Then, a new binding is added to the environment for the bracket which says that any reference to x inside the bracket refers to e. That means when we make the representation of the code inside the bracket, occurrences of x are replaced with e directly (rather than a representation of x) in the program.

The same mechanism is used for the implicit splices we create by instances of cross-stage persistence.

qux x = [| x |]
=> [| $(lift x) |]
=> [| x' |]_{ x' = lift x }

The environment is special in the sense that it connects a stage 1 variable with an expression at stage 0.

How is this implemented? When we see a splice, we rename it and then write it to a state variable whose scope is delimited by the bracket. Once the contents of the bracket has finished being renamed, we read the state variable and use its contents as the environment.
Nested splices work immediately with nested brackets. When there is a nested bracket, the expression on the inside is first floated outwards into the inner bracket's environment.

foo n = [| [| $($(n)) |] |]
=> [| [| x |]_{x = $(n)} |]
=> [| [| x |]_{x = y} |]_{y = n}

Then it is floated again to the top level, leaving behind a trail of bindings.

Template Haskell represents renamed terms so that references remain consistent after splicing. As such, our representation of a quotation in the TH AST should reflect the renamed form of brackets, which includes the environment.

data Exp = ...
         | BrackE [(Var, Exp)] Exp
         | ...

The constructor therefore takes a list, which is the environment mapping splice points to expressions, and a representation of the quoted expression. It is an invariant that there are no splice forms in renamed syntax, as they are all replaced during renaming into this environment form. The representation of a simple quoted expression will have an empty environment, but if we also use splices then these are included as well.

[| [| 4 |] |]
=> BrackE [] (representation of 4)

[| [| $(foo) |] |]
=> BrackE [(x, representation of foo)] (representation of x)

Those are the details of implementing nested brackets, if you ever need to for your own language. In the end, the patch was quite simple, but it took quite a bit of thinking to work out the correct way to propagate the splices and build the correct representation.

Posted on September 19, 2018

This year I packaged two artefacts for the ICFP artefact evaluation process. This post explains the system I used to make it easy to produce the docker images using nix. I hope this documentation will be useful for anyone else submitting a Haskell library for evaluation. The end result will be an artefact.nix file which is used to build a docker image to submit. It will be an entirely reproducible process as we will fix the versions of all the dependencies we use.
In this example, I am going to package the artefact from the paper "Generic Deriving of Generic Traversals". The artefact was a Haskell library and an executable which ran some benchmarks. The resulting artefact will be a docker image which contains:

To start with, I will assume that we have placed the source code and benchmarks code in our current directory. We will add the rest of the files.

>>> ls
generic-lens-1.0.0.1/  benchmarks/

The most important step of the whole process is to "pin" our version of nixpkgs to a specific version so that anyone else trying to build the image will use the same versions of all the libraries and system dependencies. Once we have established a commit of nixpkgs that our package builds with, we can use nix-prefetch-git in order to create nixpkgs.json, which will provide the information about the pin.

nix-prefetch-git --rev 651239d5ee66d6fe8e5e8c7b7a0eb54d2f4d8621 --url > nixpkgs.json

Now we have a file, nixpkgs.json, which specifies which version of nixpkgs we should use. We then need to load this file. Some boilerplate, nixpkgs.nix, will do that for us.

opts:
let
  hostPkgs = import <nixpkgs> {};
  pinnedVersion = hostPkgs.lib.importJSON ./nixpkgs.json;
  pinnedPkgs = hostPkgs.fetchFromGitHub {
    owner = "NixOS";
    repo = "nixpkgs";
    inherit (pinnedVersion) rev sha256;
  };
in import pinnedPkgs opts

nixpkgs.nix will be imported in artefact.nix and will determine precisely the version of all dependencies we will use.

dockerTools

Now that we have specified the set of dependencies we want to use, we can go about starting to build our docker image. Nixpkgs provides a convenient set of functions called dockerTools in order to create docker images in a declarative manner. This is the start of our artefact.nix file.
let
  pkgs = import ./nixpkgs.nix { };
in with pkgs;
let
  debian = dockerTools.pullImage {
    imageName = "debian";
    imageTag = "9.5";
    sha256 = "1jxci0ph7l5fh0mm66g4apq1dpcm5r7gqfpnm9hqyj7rgnh44crb";
  };
in dockerTools.buildImage {
  name = "generic-lens-artefact";
  fromImage = debian;
  contents = [ bashInteractive glibcLocales ];
  config = {
    Env = [ "LANG=en_US.UTF-8"
            "LOCALE_ARCHIVE=${glibcLocales}/lib/locale/locale-archive" ];
    WorkingDir = "/programs";
  };
}

This is the barebones example we'll start from. We firstly import nixpkgs.nix, which defines the package set we want to use. Our docker image will be based on debian, so we use the dockerTools.pullImage function to get this base image. The imageName comes from docker hub and the imageTag indicates the specific tag. This image is our base image when calling dockerTools.buildImage. For now, we add the basic packages bashInteractive and glibcLocales; in the next step we will add the specific contents that we need for our artefact. Setting the LANG and LOCALE_ARCHIVE env vars is important for Haskell programs, as otherwise you can run into strange encoding errors.

This is a complete image which can already be built with nix-build artefact.nix. The result will be a .tar.gz which can be loaded into docker and run as normal.

First we'll deal with making the executable itself available on the image. Remember that the source code of the benchmarks, which is a normal Haskell package, is located in benchmarks/. We need to tell nix how to build the benchmarks. The standard way to do this is to use cabal2nix to generate a package specification which we will pass to haskellPackages.callPackage.
cabal2nix benchmarks/ > benchmarks.nix

This will produce a file which looks a bit like:

{ mkDerivation, base, criterion, deepseq, dlist, dump-core
, generic-lens, geniplate-mirror, haskell-src, lens, mtl, one-liner
, parallel, plugin, random, stdenv, syb, transformers, uniplate
, weigh
}:
mkDerivation {
  pname = "benchmarks";
  version = "0.1.0.0";
  src = ./benchmarks;
  isLibrary = false;
  isExecutable = true;
  executableHaskellDepends = [
    base criterion deepseq dlist dump-core generic-lens
    geniplate-mirror haskell-src lens mtl one-liner parallel plugin
    random syb transformers uniplate weigh
  ];
  license = stdenv.lib.licenses.bsd3;
}

Now we will add the executable to the docker image. A new definition is created in the let bindings and then we add the executable to the contents of the image.

run-benchmarks = haskellPackages.callPackage ./benchmarks.nix {};

So now our contents section will look like:

contents = [ bashInteractive glibcLocales run-benchmarks ];

When we build this image, the executable will be available on the path by default. In our case, the user will type bench and it will run the benchmarks.

The next step is to add the source files to the image. To do this we use the runCommand function to make a simple derivation which copies some files into the right place.

benchmarks-raw = ./benchmarks;
benchmarks = runCommand "benchmarks" {} ''
  mkdir -p $out/programs/benchmarks
  cp -r ${benchmarks-raw}/* $out/programs/benchmarks
'';

All the derivation does is copy the directory into the nix store at a specific path. We then just add this to the contents list again, and also do the same for the library itself and the README.

contents = [ bashInteractive glibcLocales run-benchmarks benchmarks readme library ];

Now once we build the docker image, we'll have the executable bench available, and also a file called README and two folders containing the library code and the benchmarks code.
Finally, we need to do two more things to make it possible to build the source programs in the container. The first is to include cabal-install in the contents so that we can use cabal in the container.

contents = [ bashInteractive glibcLocales run-benchmarks benchmarks readme library cabal-install ];

The second is much less obvious: we need to make sure that the necessary dependencies are already installed in the environment so that someone can just use cabal build in order to build the package. The way to achieve this is to modify the benchmarks.nix file and change isLibrary to true.

- isLibrary = false;
+ isLibrary = true;

This means that all the build inputs for the benchmarks are propagated to the container, so all the dependencies for the benchmarks will be available to rebuild them again.

artefact.nix

Here's the complete artefact.nix that we ended up with. We also generated nixpkgs.json, nixpkgs.nix and benchmarks.nix along the way.

let
  pkgs = import ./nixpkgs.nix {};
in with pkgs;
let
  debian = dockerTools.pullImage {
    imageName = "debian";
    imageTag = "9.5";
    sha256 = "1y4k42ljf6nqxfq7glq3ibfaqsq8va6w9nrhghgfj50w36bq1fg5";
  };

  benchmarks-raw = ./benchmarks;
  benchmarks = runCommand "benchmarks" {} ''
    mkdir -p $out/programs/benchmarks
    cp -r ${benchmarks-raw}/* $out/programs/benchmarks
  '';

  library-raw = ./generic-lens-1.0.0.1;
  library = runCommand "benchmarks" {} ''
    mkdir -p $out/programs/library
    cp -r ${library-raw}/* $out/programs/library
  '';

  readme-raw = ./README;
  readme = runCommand "readme" {} ''
    mkdir -p $out/programs
    cp ${readme-raw} $out/programs/README
  '';

  run-benchmarks = haskellPackages.callPackage ./benchmarks.nix {};
in dockerTools.buildImage {
  name = "generic-lens-artefact";
  fromImage = debian;
  contents = [ bashInteractive cabal-install glibcLocales
               run-benchmarks benchmarks readme library ];
  config = {
    Env = [ "LANG=en_US.UTF-8"
            "LOCALE_ARCHIVE=${glibcLocales}/lib/locale/locale-archive" ];
    WorkingDir = "/programs";
  };
}

Hopefully this tutorial will be
useful for anyone having to package a Haskell library in future. Each artefact is different, so you'll probably have to modify some of the steps in order to make it work perfectly for you. It's also possible that the dockerTools interface will change, but it should be possible to modify the examples here to adapt to any minor changes. If you're already using nix, you probably know what you're doing anyway.

dockerTools documentation

Posted on September 12, 2018

My latest project has been to plot a map of orienteering maps in the UK. This post explains the technical aspects behind the project, and primarily the use of funflow to turn my assortment of scripts into a resumable workflow. There was nothing wrong with my ad-hoc python and bash scripts, but they downloaded and regenerated the whole output every time. The whole generation takes about 2 hours, so it's desirable to only recompute the necessary portions. This is where funflow comes in: by stringing together these scripts in its DSL, you get caching for free. The workflow is also highly parallelisable, so in the future I could distribute the work across multiple machines if necessary. The code for the project can be found here.

funflow

There are already two blog posts introducing the concepts of funflow. The main idea is that you specify your workflow (usually a sequence of external scripts) using a DSL and then funflow will automatically cache and schedule the steps. My primary motivation for using funflow was the automatic caching. The store is content addressed, which means that the location of each file in the store depends on its contents. funflow performs two different types of caching. The lack of output-based caching is one of the big missing features of nix which makes it unsuitable for this task. A content-addressed store where the address depends on the contents of the file is sometimes known as an intensional store.
Nix's store model is extensional, as the store hash only depends on the inputs to the build. An intensional store relies on the program producing deterministic output hashes. It can be quite difficult to track down why a step is not being cached when you are relying on the outputs being identified in the store.

There are two outputs to the project. This folder is then uploaded to online storage and served as a static site. The processing pipeline is as follows:

As can be seen, the workflow is firstly highly parallelisable, as much of the processing pipeline happens independently of other steps. However, the main goal is to avoid computing the tiles as much as possible, as this is the step which takes by far the longest. At the time of writing there are about 500 maps to process. In general, there are about 5-10 maps added each week. Only recomputing the changed portions of the map saves a lot of time.

funflow

In theory, this is a perfect application for funflow, but in order to achieve the perfect caching behaviour I had to rearchitect several parts of the application. The recommended way to use funflow is to run each step of the flow in a docker container. I didn't want to do this, as my scripts already declared the correct environment to run in by using the nix-shell shebang.

#! /usr/bin/env nix-shell
#! nix-shell -i bash -p gdal

By placing these two lines at the top of the file, the script will be run using the bash interpreter with the gdal package available. This is more lightweight and flexible than using a docker image, as I don't have to regenerate a new docker image any time I make a change. However, there is no native support for running these kinds of scripts built into funflow. It was easy enough to define my own function in order to run these kinds of scripts using the external' primitive. nixScriptX takes a boolean parameter indicating whether the script is pure and should be cached.
It also takes the name of the script to run, the names of any files the script depends on and finally a function which supplies any additional arguments to the script.

nixScriptX :: ArrowFlow eff ex arr
           => Bool
           -> Path Rel File
           -> [Path Rel File]
           -> (a -> [Param])
           -> arr (Content Dir, a) CS.Item
nixScriptX impure script scripts params = proc (scriptDir, a) -> do
  env <- mergeFiles -< absScripts scriptDir
  external' props (\(s, args) -> ExternalTask
    { _etCommand = "perl"
    , _etParams = contentParam (s ^</> script) : params args
    , _etWriteToStdOut = NoOutputCapture
    , _etEnv = [("NIX_PATH", envParam "NIX_PATH")]
    }) -< (env, a)
  where
    props = def { ep_impure = impure }
    absScripts sd = map (sd ^</>) (script : scripts)

The use of perl as the command relies on the behaviour of perl that it will execute the #! line if it does not contain the word "perl". Yes, this is dirty. It would be desirable to set NIX_PATH to a fixed nixpkgs checkout by passing a tarball directly, but this worked for now.

All the steps are then defined in terms of nixScriptX indirectly, as two helper functions are defined for the two cases of pure and impure scripts.

nixScript = nixScriptX False
impureNixScript = nixScriptX True

Now to the nitty gritty details. Firstly, I had to decouple finding the map metainformation from downloading the image. Otherwise, I would end up doing a lot of redundant work downloading images multiple times. The python script scraper.py executes a selenium driver to extract the map information. For each map, the metainformation is serialised to its own file in the output directory.

scrape = impureNixScript [relfile|scraper.py|]
                         [[relfile|shell.nix|]]
                         (\() -> [ outParam ])

This step is marked as impure, as we have to run it every time the flow runs to work out if we need to perform any more work. It is important that the filename of the serialised information is the same if the content of the file is the same. Otherwise, funflow will calculate a different hash for the file.
As such, we compute our own hash of the metainformation to use as the name of the serialised file. In the end the output directory looks like:

    9442c7eaa81f82f7e9889f6ee8382e8d047df76db2d5f6a6983d1c82399a2698.pickle
    5e7e6994db565126a942d66a9435454d8b55cd7d3023dd37f64eca7bbb46df1f.pickle
    ...

listDirContents defeats caching

Now that we have a directory containing all the metainformation, we want to split it up and then execute the fetching, converting and warping in parallel for all the images. My first attempt was

    meta_dir <- step All <<< scrape -< (script_dir, ())
    keys <- listDirContents -< meta_dir

but this did not work: even if the keys remained the same, the images would be refetched. The problem was that listDirContents does not have the right caching behaviour. listDirContents takes a Content Dir and returns a [Content File] as required, but each Content File is a pointer into the Content Dir. This means that if the location of the Content Dir changes (if there are any changes or new additions to any files in the directory) then the location of all the [Content File] will also change. The next stage of recompilation will then be triggered despite being unnecessary.

Instead, we have to put each file in the directory into its own store location so that its location depends only on itself rather than the other contents of the directory. I defined the splitDir combinator in order to do this.

    splitDir :: ArrowFlow eff ex arr => arr (Content Dir) [Content File]
    splitDir = proc dir -> do
      (_, fs) <- listDirContents -< dir
      mapA reifyFile -< fs

    -- Put a file, which might be a pointer into a dir, into its own store
    -- location.
    reifyFile :: ArrowFlow eff ex arr => arr (Content File) (Content File)
    reifyFile = proc f -> do
      file <- getFromStore return -< f
      putInStoreAt (\d fn -> copyFile fn d) -< (file, CS.contentFilename f)

It could be improved by using a hardlink rather than copyFile.
Now that we have split the metainformation up into individual components, we have to download, convert and warp the map files. We define three flows to do this which correspond to three different scripts.

    fetch = nixScript [relfile|fetch.py|] [[relfile|shell.nix|]]
                      (\metadata -> [ outParam, contentParam metadata ])

    convertToGif = nixScript [relfile|convert_gif|] []
                             (\dir -> [ pathParam (IPItem dir), outParam ])

    warp = nixScript [relfile|do_warp|] []
                     (\dir -> [ pathParam (IPItem dir), outParam ])

Each script takes an individual input file and produces output in a directory specified by funflow. fetch.py is a python script whilst convert_gif and do_warp are bash scripts. We can treat them uniformly because of the nix-shell shebang. These steps are all cached by default because they are external processes.

In order to get a good looking result, we need to group together the processed images into groups of overlapping images. This time we will use a python script, again invoked in a similar manner. The output is a directory of files which specify the groups; remember to splitDir after creating the output to put each group file into its own store location so the next recompilation step will work.

    mergeRasters = nixScript [relfile|merge-rasters.py|] [[relfile|merge-rasters.nix|]]
                             (\rs -> outParam : map contentParam rs)

This command also relies on merge-rasters.nix which sets up the correct python environment to run the script.

mergeDirs can also defeat caching

The original implementation of this used mergeDirs :: arr [Content File] (Content Dir) in order to group together the files and pass a single directory to merge-rasters.py. However, this suffers a similar problem to listDirContents, as mergeDirs will create a new content store entry which contains all the files in the merged directories. The hash of this store location will then depend on the whole contents of the directory.
In this case these file paths ended up in the output, so they would cause the next steps to recompile even if nothing had changed. We would prefer a “logical” group which groups the files together with a stable filename that wouldn’t affect caching. The workaround for now was to use splitDir again to put each processed image into its own storage path and then pass each filename individually to merge-rasters.py rather than a directory as before.

Making the tiles is another straightforward step which takes each of the groups and makes the necessary tiles for that group.

    makeTiles = nixScript [relfile|make_tiles|] []
                          (\dir -> [ contentParam dir, outParam, textParam "16" ])

mergeDirs doesn’t merge duplicate files

Once we have made all the tiles we need to merge them all together. This is safe as we already ensured that they didn’t overlap each other. The problem is that mergeDirs will not merge duplicate files. The make_tiles step creates some unnecessary files which we don’t need but which would cause mergeDirs to fail, as they are contained in the output of each directory. The solution was to write my own version of mergeDirs which checks whether a file already exists before trying to merge it. It would be more hygienic to ensure that the directories I was trying to merge were properly distinct, but this worked well for this use case.

Our final script is a python script which creates the static site displaying all the markers and the map tiles. It takes the output of processing all the images and the metainformation to produce a single html file.

    leaflet <- step All <<< makeLeaflet -< (script_dir, (merge_dir, meta_dir))

The final step then merges together the static page and all the tiles. This is a nice bundle we can directly upload and serve as our static site.
    mergeDirs -< [leaflet, tiles]

The complete flow is shown below:

    mainFlow :: SimpleFlow () (Content Dir)
    mainFlow = proc () -> do
      cwd <- stepIO (const getCurrentDir) -< ()
      script_dir <- copyDirToStore -< (DirectoryContent (cwd </> [reldir|scripts/|]), Nothing)
      -- Step 1
      meta_dir <- step All <<< scrape -< (script_dir, ())
      keys <- splitDir -< meta_dir
      -- Step 2
      maps <- mapA fetch -< [(script_dir, event) | event <- keys]
      mapJpgs <- mapA convertToGif -< [(script_dir, m) | m <- maps]
      merge_dir <- mergeDirs' <<< mapA (step All) <<< mapA warp -< [(script_dir, jpg) | jpg <- mapJpgs]
      toMerge <- splitDir -< merge_dir
      -- Step 3
      vrt_dir <- step All <<< mergeRasters -< (script_dir, toMerge)
      merged_vrts <- splitDir -< vrt_dir
      -- Step 4
      tiles <- mergeDirs' <<< mapA (step All) <<< mapA makeTiles -< [(script_dir, vrt) | vrt <- merged_vrts]
      -- Step 5
      leaflet <- step All <<< makeLeaflet -< (script_dir, (merge_dir, meta_dir))
      mergeDirs -< [leaflet, tiles]

Once all the kinks are ironed out, it’s quite short but a very powerful specification which avoids a lot of redundant work being carried out.

copyDirToStore can defeat caching

Using copyDirToStore seems much more convenient than copying each script into the store manually, but it can again have confusing caching behaviour. The hash of the store location for script_dir depends on the whole script_dir directory. If you change any file in the directory then the hash of it will change. This means that all steps will recompile if you modify any script! This is the reason for the mergeFiles call in nixScriptX. mergeFiles will take the necessary files from script_dir and put them into their own store directory. The hash of this directory will only depend on the files necessary for that step.

The flow is run with the simple local runner. We pass in a location for the local store to the runner, which is just a local directory in this case. The library has support for more complicated runners but I haven’t explored using those yet.
    main :: IO ()
    main = do
      cwd <- getCurrentDir
      r <- withSimpleLocalRunner (cwd </> [reldir|funflow-example/store|]) $ \run ->
             run (mainFlow >>> storePath) ()
      case r of
        Left err ->
          putStrLn $ "FAILED: " ++ displayException err
        Right out -> do
          putStrLn $ "SUCCESS"
          putStrLn $ toFilePath out

A nice feature of nix-build is that it displays the path of the final output in the nix store once the build has finished. This is possible to replicate using funflow after defining your own combinator. It would be good to put this in the standard library.

    storePath :: ArrowFlow eff ex arr => arr (Content Dir) (Path Abs Dir)
    storePath = getFromStore return

It means that we can run our flow and deploy the site in a single command, given a script which performs the deployment when passed an output path. Mine looks a bit like:

    #! /usr/bin/env nix-shell
    #! nix-shell -i bash -p awscli

    if [[ $# -eq 0 ]] ; then
        echo 'Must pass output directory'
        exit 1
    fi

    aws s3 sync $1 s3://<bucket-name>

Putting them together:

    cabal new-run | ./upload-s3

Once everything is set up properly, funflow is a joy to use. It abstracts beautifully away from the annoying problems of scheduling and caching, leaving the core logic visible. An unfortunate consequence of the intensional store model is that debugging why a build step is not being cached can be very time consuming and fiddly. When I explain the problems I faced, they are obvious, but each one required careful thought and reading the source code to understand the intricacies of each of the different operations. It was also very pleasant to combine using nix and funflow rather than the suggested docker support.

Posted on August 10, 2018

Plugins have existed for a long time in GHC. The first plugins were implemented in 2008 by Max Bolingbroke. They enabled users to modify the optimisation pipeline. They ran after desugaring and hence were called “core” plugins.
Later, Adam Gundry implemented what I shall refer to as “constraint solver” plugins, which allow users to provide custom solver logic to solve additional constraints. Recently, Boldizsár Németh has extended the number of extension points again with a set of plugins which can inspect and modify the syntax AST. These plugins can run after parsing, renaming or type checking and hence are called “source” plugins.

The idea behind plugins is great: a user can extend the compiler in their own specific way without having to modify the source tree and rebuild the compiler from scratch. It is far more convenient to write a plugin than to use a custom compiler. However, if a user wants to use a plugin, they will find that every module where the plugin is enabled is always recompiled, even if the source code didn’t change at all.

Why is this? Well, a plugin can do anything. It could read the value from a temperature sensor and insert the room temperature into the program. Thus, we would always need to recompile a module if the temperature reading changed, as it would affect what our program did. However, there are also “pure” plugins, whose output is only affected by the program which is passed as input. For these plugins, if the source code doesn’t change then we don’t need to do any recompilation.

This post is about a new metadata field which I added to the Plugin data type which specifies how a plugin should affect recompilation. This feature will be present in GHC 8.6.

The Plugin data type is a record which contains a field for each of the different types of plugin. There is now also a new field, pluginRecompile, which specifies how the plugin should affect recompilation.

    data Plugin = Plugin
      { installCoreToDos :: CorePlugin
      , tcPlugin :: TcPlugin
      , parsedResultAction :: [CommandLineOption] -> ModSummary
                           -> HsParsedModule -> Hsc HsParsedModule
      ...
      -- other fields omitted
      , pluginRecompile :: [CommandLineOption] -> IO PluginRecompile
      }

This function will be run during the recompilation check which happens at the start of every module compilation. It returns a value of the PluginRecompile data type. There are three different ways to specify how a plugin affects recompilation.

- PurePlugin means that the plugin doesn’t contribute anything to the recompilation check. We will only recompile a module if we would normally recompile it.
- ImpurePlugin means that we should always recompile a module. This is the default as it is backwards compatible.
- MaybeRecompile means that we compute a Fingerprint which we add to the recompilation check to decide whether we should recompile.

The Plugins interface provides some library functions for common configurations. We might want to use impurePlugin when our plugin injects some additional impure information into the program, such as the result of reading a webpage. The purePlugin function is useful for static analysis tools which don’t modify the source program at all and just output information. Other plugins which modify the source program in a predictable manner, such as the ghc-typelits-natnormalise plugin, should also be marked as pure. If you have some options which affect the output of the plugin then you might want to use the flagRecompile option, which causes recompilation if any of the plugin flags change.

    flagRecompile :: [CommandLineOption] -> IO PluginRecompile
    flagRecompile =
      return . MaybeRecompile . fingerprintFingerprints . map fingerprintString . sort

The nature of this interface is that it is sometimes necessary to be overly conservative when specifying recompilation behaviour. For example, you can’t decide on a per-module basis whether to recompile or not. Perhaps the interface could be extended with this information if users found it necessary. There is now a simple mechanism for controlling how plugins should affect recompilation.
This solves one of the major problems that large scale usage of plugins has faced. Using a plugin on a 1000 module code base was impractical but now shouldn’t impose any additional inconvenience.

Posted on August 9, 2018

You may have heard about source plugins by now. They allow you to modify and inspect the compiler’s intermediate representation. This is useful for extending GHC and performing static analysis of Haskell programs. In order to test them out, I reimplemented the graphmod tool as a source plugin. graphmod generates a graph of the module structure of your package. Reimplementing it as a source plugin makes the implementation more robust. I implemented it as a type checker plugin which runs after type checking has finished.

The result: the graphmod-plugin output for the aeson package (image omitted).

The plugin runs once at the end of type checking for each module. Therefore, if we want to collate information about multiple modules, we must first serialise the information we want and then, once all the modules have finished compiling, collect all the serialised files and process the information. We will therefore first define a plugin which extracts all the import information from one module, before defining a suitable executable which collects all the import information and produces the final output graph.

graphmod-plugin consists of a library which exports the plugin and an executable which is then invoked to render the information. Here is how to directly use the two in tandem:

    # Run the plugin on the source file
    ghc -fplugin=GraphMod -fplugin-opt=GraphMod:/output/dir

    # Collect the information which was produced
    graphmod-plugin --indir /output/dir > modules.dot

Once the dot file has been generated, you can use the normal graphviz utilities to render the file. tred removes transitive edges from the graph before we render the graph as a pdf.
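What tred does can be sketched in a few lines. A naive transitive reduction for acyclic graphs (JavaScript, illustrative only; the real tool ships with graphviz):

```javascript
// Is `to` reachable from `from` using the given list of edges?
function reachable(edges, from, to) {
  const seen = new Set([from]);
  const stack = [from];
  while (stack.length > 0) {
    const u = stack.pop();
    for (const [a, b] of edges) {
      if (a === u && !seen.has(b)) {
        if (b === to) return true;
        seen.add(b);
        stack.push(b);
      }
    }
  }
  return false;
}

// Drop every edge u -> v that is implied by some other path from u to v.
// For a DAG this yields the unique transitive reduction.
function transitiveReduction(edges) {
  return edges.filter((edge, i) => {
    const rest = edges.filter((_, j) => j !== i);
    return !reachable(rest, edge[0], edge[1]);
  });
}
```

On the import edges A->B, B->C, A->C the redundant A->C edge is removed, which is why the rendered module graph stays readable.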
A type checker plugin is a function of the following type:

    [CommandLineOption] -> ModSummary -> TcGblEnv -> TcM TcGblEnv

The TcGblEnv is the output of the type checker; it contains all the type checked bindings in addition to lots of other useful information. We are interested in just the imports, which are located in the tcg_rn_imports field. An LImportDecl GhcRn is a data type which contains information about each import.

    -- GHC data types
    type LImportDecl pass = Located (ImportDecl pass)

    data ImportDecl pass
      = ImportDecl {
          ideclExt       :: XCImportDecl pass,
          ideclSourceSrc :: SourceText,
          ideclName      :: Located ModuleName,  -- ^ Module name.
          ideclPkgQual   :: Maybe StringLiteral, -- ^ Package qualifier.
          ideclSource    :: Bool,   -- ^ True <=> {-# SOURCE #-} import
          ideclSafe      :: Bool,   -- ^ True => safe import
          ideclQualified :: Bool,   -- ^ True => qualified
          ideclImplicit  :: Bool,   -- ^ True => implicit import (of Prelude)
          ideclAs        :: Maybe (Located ModuleName), -- ^ as Module
          ideclHiding    :: Maybe (Bool, Located [LIE pass])
                                    -- ^ (True => hiding, names)
        }

Along with the module name, there is lots of meta information about other aspects of the import, such as whether it was qualified and so on. Our plugin will take this information and convert it into the format expected by the existing graphmod library.

    -- Graphmod data types
    data Import = Import { impMod :: ModName, impType :: ImpType }

    data ImpType = NormalImp | SourceImp

The graphmod Import data type is a simplified version of ImportDecl. It’s straightforward to extract the information we need. Notice how much simpler this approach is than the approach taken in the original library, which uses a lexer to try to identify the position of the imports textually.
    convertImport :: ImportDecl GhcRn -> GraphMod.Import
    convertImport (ImportDecl{..}) =
      GraphMod.Import { impMod = convertModName ideclName
                      , impType = if ideclSource
                                    then GraphMod.SourceImp
                                    else GraphMod.NormalImp
                      }

    convertModName :: Located ModuleName -> GraphMod.ModName
    convertModName (L _ mn) = GraphMod.splitModName (moduleNameString mn)

Notice that it is also possible to extend the GraphMod.Import data type to contain new information easily. In the previous implementation this would be much more effort as the lexing approach is fragile.

Once we have gathered this information we need to serialise it and write it to disk so that once we have compiled all the modules we can deserialise it and render the final graph. As we are using GHC, we can use the same serialisation machinery as GHC uses to write interface files. Of course, you are free to use whatever serialisation library you like but there are already instances defined for GHC specific types. We won’t need any of them in this example but they can be useful. The writeBinary function takes a value serialisable by the GHC.Binary class and writes it to the file.

    import Binary

    initBinMemSize :: Int
    initBinMemSize = 1024 * 1024

    writeBinary :: Binary a => FilePath -> a -> IO ()
    writeBinary path payload = do
      bh <- openBinMem initBinMemSize
      put_ bh payload
      writeBinMem bh path

We also needed to write some simple Binary instances by hand in order to do the serialisation.

    instance Binary GraphMod.Import where
      put_ bh (GraphMod.Import mn ip) = put_ bh mn >> put_ bh ip
      get bh = GraphMod.Import <$> get bh <*> get bh

    instance Binary GraphMod.ImpType where
      put_ bh c = case c of
        GraphMod.NormalImp -> putByte bh 0
        GraphMod.SourceImp -> putByte bh 1
      get bh = getByte bh >>= return . \case
        0 -> GraphMod.NormalImp
        1 -> GraphMod.SourceImp
        _ -> error "Binary:GraphMod"

Once we have these parts, we can assemble them into the final plugin. We first get the imports out of tcg_rn_imports and then convert them using convertImport.
We then write this information to a uniquely named file in the output directory, which is passed as an argument to the plugin.

    -- The main plugin function: it collects and serialises the import
    -- information for a module.
    install :: [CommandLineOption] -> ModSummary -> TcGblEnv -> TcM TcGblEnv
    install opts ms tc_gbl = do
      let imps = tcg_rn_imports tc_gbl
          gm_imps = map (convertImport . unLoc) imps
          outdir = mkOutdir opts
          path = mkPath outdir (ms_mod ms)
          gm_modname = getModName ms
      liftIO $ do
        createDirectoryIfMissing False outdir
        writeBinary path (gm_modname, gm_imps)
      return tc_gbl

mkPath tries to come up with a unique name for a module by using the moduleUnitId. The file name doesn’t matter particularly as long as it’s unique. We could instead write this information to a database or to a file handle. Writing to disk is just a convenient method of serialisation.

    mkPath :: FilePath -> Module -> FilePath
    mkPath fp m =
      fp </> (moduleNameString (moduleName m) ++ show (moduleUnitId m))

Then, we define the plugin by making a definition called plugin and overriding the typeCheckResultAction and pluginRecompile fields. purePlugin means that the result of our plugin only depends on the contents of the source file rather than any external information. This means that we don’t need to recompile the module every time just because we are using a plugin.

    -- Installing the plugin
    plugin :: Plugin
    plugin = defaultPlugin { typeCheckResultAction = install
                           , pluginRecompile = purePlugin
                           }

Now that our module exports an identifier of type Plugin called plugin, we are finished defining the plugin part of the project.

Once all the modules have finished compiling, they will have written their information to files in a certain directory that we can now inspect to create the dot graph. We define an executable to do this. The executable takes the directory of the files as an argument, reads all the files and then processes them to produce the graph.
In the collectImports function, we first read the directory from a command line argument. Then we find all the files in this directory and read their contents into memory. We use the helper function readImports, which uses functions from the Binary module to read the serialised files. Finally, we build the graph using all the import information and then pass the graph we have built to the existing graphmod backend.

    collectImports :: IO ()
    collectImports = do
      raw_opts <- getArgs
      let (fs, _ms, _errs) = getOpt Permute options raw_opts
          opts = foldr ($) default_opts fs
          outdir = inputDir opts
      files <- listDirectory outdir
      usages <- mapM (readImports outdir) files
      let graph = buildGraph opts usages
      putStr (GraphMod.make_dot opts graph)

    readImports :: FilePath -> FilePath -> IO Payload
    readImports outdir fp = do
      readBinMem (outdir </> fp) >>= get

The buildGraph function builds an in-memory representation of the module graph. There is a node for each module and an edge between modules if one imports the other. We finally mimic the original graphmod tool and output the representation of the graph on stdout. This can then be piped to dot in order to render the graph.

By far the most convenient way to run the plugin is with nix. This gets around the problem of having to run the finaliser after compiling the plugin. We use the haskell-nix-plugin infrastructure in order to do this. The information required to run the plugin consists of information about the plugin package but also an additional, optional, final phase which runs after the module has finished compiling.
    graphmod = {
      pluginPackage = hp.graphmod-plugin;
      pluginName = "GraphMod";
      pluginOpts = (out-path: ["${out-path}/output"]);
      pluginDepends = [ nixpkgs.graphviz ];
      finalPhase = out-path: ''
        graphmod-plugin --indir ${out-path}/output > ${out-path}/out.dot
        cat ${out-path}/out.dot | tred | dot -Gdpi=600 -Tpng > ${out-path}/modules.png
      '';
    };

I will add this definition to the plugins.nix file in haskell-nix-plugin once ghc-8.6.1 is released. We then use the addPlugin function in order to run the plugin on a package. In order to get the module graph we inspect the GraphMod output. Running this script on aeson produces a quite large image which shows the whole module graph. A complete example default.nix can be found in the repo.

We have described one way in which one can structure a plugin. There are probably other ways, but this seems ergonomic and convenient. Hopefully others will find this quite detailed summary and reference code useful to build upon. Writing plugins is quite similar to modifying GHC itself, so if you need help, the best place to ask is either on the ghc-devs mailing list or on #ghc on freenode.

graphmod-plugin
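To make the buildGraph and make_dot stage concrete, here is a toy version in JavaScript (illustrative only, not graphmod's actual code): given each module's imports, it emits a dot graph of the kind that is then piped through tred and dot.

```javascript
// usages: array of [moduleName, [importedModule, ...]] pairs, i.e. the
// deserialised payloads collected from each compiled module.
function makeDot(usages) {
  const lines = ["digraph modules {"];
  for (const [mod, imports] of usages) {
    for (const imp of imports) {
      // One directed edge per import: importer -> imported module.
      lines.push('  "' + mod + '" -> "' + imp + '";');
    }
  }
  lines.push("}");
  return lines.join("\n");
}
```

Feeding it two modules where Main imports Data.Aeson yields a two-node digraph ready for the graphviz toolchain.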
Build Web applications with HTML 5

Create tomorrow's Web applications today

Many new features and standards have emerged as part of HTML 5. Once you detect the available features in today's browsers, you can take advantage of those features in your application. In this article, learn how to detect and use the latest Web technologies by developing sample applications. Most of the code in this article is just HTML, JavaScript, and CSS—the core technologies of any Web developer.

Getting started

To follow along with the examples, the most important thing you need is multiple browsers for testing. The latest versions of Mozilla Firefox, Apple Safari, and Google Chrome are highly recommended. Mozilla Firefox 3.6, Apple Safari 4.0.4, and Google Chrome 5.0.322 were used for this article. You might also want to test on mobile browsers. For example, the latest Android and iPhone SDKs were used for testing their browsers on their emulators. You can download the source code used in this article. The examples include a very small back-end component that was written in Java™. JDK 1.6.0_17 and Apache Tomcat 6.0.14 were used. See Related topics for links to download the tools.

Detecting capabilities

There's an old joke that Web developers spend about 20% of their time writing code and the other 80% getting it to work the same in all browsers. To say that Web developers are used to dealing with cross-browser differences is an understatement. With a new wave of browser innovations unfolding, this pessimistic approach is once again warranted. The features supported by the latest and greatest browsers are always changing. On a positive note, however, the new feature sets are converging on Web standards, which gives you a chance to start using these new features today. You can employ the old technique of progressive enhancement: provide some baseline features, check for advanced capabilities, and then enhance your application with the extra features when they are present.
To that end, take a look at how to detect some of the new features. Listing 1 shows a simple detection script.

Listing 1. Detection script

    function detectBrowserCapabilities(){
        $("userAgent").innerHTML = navigator.userAgent;

        var hasWebWorkers = !!window.Worker;
        $("workersFlag").innerHTML = "" + hasWebWorkers;

        var hasGeolocation = !!navigator.geolocation;
        $("geoFlag").innerHTML = "" + hasGeolocation;
        if (hasGeolocation){
            document.styleSheets[0].cssRules[1].style.display = "block";
            navigator.geolocation.getCurrentPosition(function(location) {
                $("geoLat").innerHTML = location.coords.latitude;
                $("geoLong").innerHTML = location.coords.longitude;
            });
        }

        var hasDb = !!window.openDatabase;
        $("dbFlag").innerHTML = "" + hasDb;

        var videoElement = document.createElement("video");
        var hasVideo = !!videoElement["canPlayType"];
        var ogg = false;
        var h264 = false;
        if (hasVideo) {
            ogg = videoElement.canPlayType('video/ogg; codecs="theora, vorbis"') || "no";
            h264 = videoElement.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') || "no";
        }
        $("videoFlag").innerHTML = "" + hasVideo;
        if (hasVideo){
            var vStyle = document.styleSheets[0].cssRules[0].style;
            vStyle.display = "block";
        }
        $("h264Flag").innerHTML = "" + h264;
        $("oggFlag").innerHTML = "" + ogg;
    }

A huge number of new features and standards have emerged as part of HTML 5. This article focuses on only a few of the rather useful features. The script in Listing 1 detects four features:

- Web workers (multi-threading)
- Geolocation
- Database storage
- Native video playback

The script starts by showing the user agent of the user's browser. This is (usually) a string that uniquely identifies the browser, though it can be easily faked. Just echoing it is good enough for this application. The next step is to start detecting features. First, check for Web workers by looking for the Worker function in the global scope (window). This is using some idiomatic JavaScript: the double negation.
If the Worker function does not exist, then window.Worker will evaluate to undefined, which is a "falsey" value in JavaScript. Putting a single negation in front of it will evaluate to true, thus a double negation will evaluate to false. After testing for the value, the script prints the evaluation to the screen by modifying the DOM structure shown in Listing 2.

Listing 2. Detection DOM

    <input type="button" value="Begin detection"
        onclick="detectBrowserCapabilities()"/>
    <div>Your browser's user-agent: <span id="userAgent"> </span></div>
    <div>Web Workers? <span id="workersFlag"></span></div>
    <div>Database? <span id="dbFlag"></span></div>
    <div>Video? <span id="videoFlag"></span></div>
    <div class="videoTypes">Can play H.264? <span id="h264Flag"> </span></div>
    <div class="videoTypes">Can play OGG? <span id="oggFlag"> </span></div>
    <div>Geolocation? <span id="geoFlag"></span></div>
    <div class="location">
        <div>Latitude: <span id="geoLat"></span></div>
        <div>Longitude: <span id="geoLong"></span></div>
    </div>

Listing 2 is a simple HTML structure used to display the diagnostic information that the detection script is gathering.

As shown in Listing 1, the next thing to test for is geolocation. The double negation technique is used again, but this time you're checking for an object called geolocation that should be a property of the navigator object. If it is present, then use it to get the current location using the geolocation object's getCurrentPosition function. Getting the location can be a slow task, since it usually involves scanning Wi-Fi networks. On mobile devices, it might also include scanning cell towers and pinging GPS satellites. Since it could take a long time, getCurrentPosition is asynchronous and takes a callback function as a parameter. In this case, use a closure for the callback that simply displays the location fields (by toggling its CSS) and then writes the latitude and longitude to the DOM.
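The double-negation coercion used throughout the detection script behaves as follows. A standalone demonstration, with stand-in objects instead of the real window:

```javascript
// !! coerces any value to a real boolean: falsey values (undefined,
// null, 0, "", NaN) become false, everything else becomes true.
var fakeWindow = { Worker: function () {} }; // browser with workers
var bareWindow = {};                         // browser without them

var hasWorkers = !!fakeWindow.Worker; // a function object -> true
var noWorkers  = !!bareWindow.Worker; // !!undefined       -> false
```

This is why a single expression like !!window.Worker yields a clean true/false flag regardless of what kind of value the property holds.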
Check for the presence of the global function openDatabase, which is used for creating and accessing client-side databases. Finally, check for native video playback. Use the DOM API to create a video element. Today, every browser will be able to create such an element. In older browsers this will be a valid DOM element, but it will have no special meaning. It would be like creating an element called foo. In a modern browser, this will be a specialized element, like creating a div or table element. It will have a function called canPlayType, so simply check its presence. Even if a browser has native video playback capability, the types of videos, or the supported codecs that it can playback, are not standardized. You'll probably want to check for the supported codecs in the browser. There is no standard list of codecs, but two of the most common are H.264 and Ogg Vorbis. To check for support for a particular codec, you can pass an identifying string to the canPlayType function. If the browser can support the codec, this function will return probably (seriously—that's not a joke). If not, then it will return null. In the detection code, simply check against these values and display the answer in the DOM. After testing out this code against some popular browsers, Listing 3 shows aggregated results. Listing 3. Capabilities of various browsers #Firefox 3.6 Your browser's user-agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6 Web Workers? true Database? false Video? true Can play H.264? no Can play OGG? probably Geolocation? true Latitude: 37.2502812 Longitude: -121.9059866 #Safari 4.0.4 Your browser's user-agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_2; en-us) AppleWebKit/531.21.8 (KHTML, like Gecko) Version/4.0.4 Safari/531.21.10 Web Workers? true Database? true Video? true Can play H.264? probably Can play OGG? no Geolocation? 
    false

    #Chrome 5.0.322
    Your browser's user-agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_2;
        en-US) AppleWebKit/533.1 (KHTML, like Gecko) Chrome/5.0.322.2 Safari/533.1
    Web Workers? true
    Database? true
    Video? true
    Can play H.264? no
    Can play OGG? no
    Geolocation? false

All of the popular desktop browsers above support quite a few features.

- Firefox supports everything except for databases. For video, it only supports Ogg.
- Safari supports everything except for geolocation.
- Chrome supports everything except for geolocation, though it claims not to support H.264 or Ogg. This is probably a bug, either with the build of Chrome used for this test or with the test code. Chrome actually does support H.264.

Geolocation is not widely supported on desktop browsers, but it is widely supported on mobile browsers. Listing 4 shows aggregated results for mobile browsers.

Listing 4. Mobile browsers

    #iPhone 3.1.3 Simulator
    Your browser's user-agent: Mozilla/5.0 (iPhone Simulator; U; CPU iPhone OS
        3.1.3 like Mac OS X; en-us) AppleWebKit/528.18 (KHTML, like Gecko)
        Version/4.0 Mobile/7E18 Safari/528.16
    Web Workers? false
    Database? true
    Video? true
    Can play H.264? maybe
    Can play OGG? no
    Geolocation? true
    Latitude: 37.331689
    Longitude: -122.030731

    #Android 1.6 Emulator
    Your browser's user-agent: Mozilla/5.0 (Linux; Android 1.6; en-us; sdk
        Build/Donut) AppleWebKit/528.5+ (KHTML, like Gecko) Version/3.1.2
        Mobile Safari/525.20.1
    Web Workers? false
    Database? false
    Video? false
    Geolocation? false

    #Android 2.1 Emulator
    Your browser's user-agent: Mozilla/5.0 (Linux; U; Android 2.1; en-us; sdk
        Build/ERD79) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0
        Mobile Safari/530.17
    Web Workers? true
    Database? true
    Video? true
    Can play H.264? no
    Can play OGG? no
    Geolocation? true
    Latitude:
    Longitude:
It does, in fact, support all of these features except for video, but it does so using Google Gears. These are equivalent APIs (in terms of function), but they do not conform to Web standards, so you get the result in Listing 4. Compare this with Android 2.1, where everything is supported. Notice that the iPhone supports everything but Web workers. Listing 3 shows that the desktop version of Safari supports Web workers, so it would seem reasonable to expect that this feature will be coming to the iPhone very soon. Now that you've seen how to probe the features of the user's browser, let's explore a simple application that will use several of these features in combination, depending on what the user's browser can handle. You will build an application that uses the Foursquare API to search for popular venues near a user's location.

Building the applications of tomorrow

The example will focus on using geolocation on mobile devices, but keep in mind that Firefox 3.5+ also supports geolocation. The application starts off by searching for what Foursquare calls venues near the user's current location. Venues can be anything, but are typically restaurants, bars, stores, and so on. Being a Web application, our example is limited by the same origin policy enforced by all browsers. It cannot call Foursquare's APIs directly. You will use a Java servlet to essentially proxy these calls. There is nothing special about Java here; you could easily write a similar proxy in PHP, Python, Ruby, and so forth. Listing 5 shows a proxy servlet.

Listing 5. Foursquare proxy servlet

public class FutureWebServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String operation = request.getParameter("operation");
        if (operation != null && operation.equalsIgnoreCase("getDetails")) {
            getDetails(request, response);
            return; // without this, a details request would fall through to the search code
        }
        String geoLat = request.getParameter("geoLat");
        String geoLong = request.getParameter("geoLong");
        String baseUrl = "?"; // the Foursquare search URL was stripped from this copy
        String urlStr = baseUrl + "geolat=" + geoLat + "&geolong=" + geoLong;
        PrintWriter out = response.getWriter();
        proxyRequest(urlStr, out);
    }

    private void proxyRequest(String urlStr, PrintWriter out) throws IOException {
        try {
            URL url = new URL(urlStr);
            InputStream stream = url.openStream();
            BufferedReader reader = new BufferedReader(new InputStreamReader(stream));
            String line = "";
            while (line != null) {
                line = reader.readLine();
                if (line != null) {
                    out.append(line);
                }
            }
            out.flush();
            stream.close();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
    }

    private void getDetails(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String venueId = request.getParameter("venueId");
        String urlStr = "" + venueId; // the venue-details URL was stripped from this copy
        proxyRequest(urlStr, response.getWriter());
    }
}

The important thing to note here is that you're proxying two Foursquare APIs. One is for search, and the other is for getting the details of a venue. To distinguish between them, the details API adds an operation parameter. You're also specifying JSON as the return type, which will make it easy to parse the data from JavaScript. Now that you know what kind of calls can be made by the application code, let's see how it will make those calls and use the data from Foursquare.

Using geolocation

The first call is a search. Listing 5 shows that you need two parameters: geoLat and geoLong for the latitude and longitude. Listing 6 below shows how to get these in the application and call the servlet.

Listing 6.
Calling search with location

if (!!navigator.geolocation){
    navigator.geolocation.getCurrentPosition(function(location) {
        venueSearch(location.coords.latitude, location.coords.longitude);
    });
}

var allVenues = [];

function venueSearch(geoLat, geoLong){
    var xhr = new XMLHttpRequest();
    // ... (the Ajax callback was lost in this copy; as described below, it
    // parses the JSON venue list into allVenues and calls buildVenuesTable) ...
    xhr.open("GET", "api?geoLat=" + geoLat + "&geoLong=" + geoLong);
    xhr.send(null);
}

The code above checks for the geolocation capability of the browser. If it is present, the code gets the location and calls the venueSearch function with the latitude and longitude. This function uses Ajax (an XMLHttpRequest object) to call the servlet in Listing 5. It uses a closure for the callback function, parses the JSON data from Foursquare, and passes an array of venue objects to a function called buildVenuesTable, as shown below.

Listing 7. Building UI from venues

function buildVenuesTable(venues){
    var rows = venues.map(function (venue) {
        var row = document.createElement("tr");
        var nameTd = document.createElement("td");
        nameTd.appendChild(document.createTextNode(venue.name));
        row.appendChild(nameTd);
        var addrTd = document.createElement("td");
        var addrStr = venue.address + " " + venue.city + "," + venue.state;
        addrTd.appendChild(document.createTextNode(addrStr));
        row.appendChild(addrTd);
        var distTd = document.createElement("td");
        distTd.appendChild(document.createTextNode("" + venue.distance));
        row.appendChild(distTd);
        return row;
    });
    var vTable = document.createElement("table");
    vTable.border = 1;
    var header = document.createElement("thead");
    var nameLabel = document.createElement("td");
    nameLabel.appendChild(document.createTextNode("Venue Name"));
    header.appendChild(nameLabel);
    var addrLabel = document.createElement("td");
    addrLabel.appendChild(document.createTextNode("Address"));
    header.appendChild(addrLabel);
    var distLabel = document.createElement("td");
    distLabel.appendChild(document.createTextNode("Distance (m)"));
    header.appendChild(distLabel);
    vTable.appendChild(header);
    var body = document.createElement("tbody");
    rows.forEach(function(row) {
        body.appendChild(row);
    });
    vTable.appendChild(body);
    $("searchResults").appendChild(vTable);
    if (!!window.openDatabase){
        $("saveBtn").style.display = "block";
    }
}

The code in Listing 7 is primarily DOM code for creating a data table with the venue information in it. There are a few interesting things, though. Note the use of advanced JavaScript features, such as the array object's map and forEach functions. These are features that are available on all the browsers that support geolocation. Also of interest are the last few lines of the function. A detection for database support is performed. If it is present, then you enable a Save button that the user can click to save all of this venue data to a local database. The next section discusses how this is done.

Structured storage

Listing 7 demonstrates the classic progressive enhancement strategy. The example tests for database support. If it is found, then the code adds a UI element that exposes a new feature built on top of that support. In this case, it enables a single button. Clicking on the button calls the function saveAll, which is shown in Listing 8.

Listing 8.
Saving to the database

var db = {};

function saveAll(){
    db = window.openDatabase("venueDb", "1.0", "Venue Database", 1000000);
    db.transaction(function(txn){
        txn.executeSql("CREATE TABLE venue (id INTEGER NOT NULL PRIMARY KEY, "+
            "name NVARCHAR(200) NOT NULL, address NVARCHAR(100), cross_street NVARCHAR(100), "+
            "city NVARCHAR(100), state NVARCHAR(20), geolat TEXT NOT NULL, "+
            "geolong TEXT NOT NULL);");
    });
    allVenues.forEach(saveVenue);
    countVenues();
}

function saveVenue(venue){
    // check if we already have the venue
    db.transaction(function(txn){
        txn.executeSql("SELECT name FROM venue WHERE id = ?", [venue.id],
            function(t, results){
                if (results.rows.length == 1 && results.rows.item(0)['name']){
                    console.log("Already have venue id=" + venue.id);
                } else {
                    insertVenue(venue);
                }
            });
    });
}

function insertVenue(venue){
    db.transaction(function(txn){
        txn.executeSql("INSERT INTO venue (id, name, address, cross_street, "+
            "city, state, geolat, geolong) VALUES (?, ?, ?, ?, ?, ?, ?, ?);",
            [venue.id, venue.name, venue.address, venue.crossstreet, venue.city,
             venue.state, venue.geolat, venue.geolong],
            null, errHandler);
    });
}

function countVenues(){
    db.transaction(function(txn){
        txn.executeSql("SELECT COUNT(*) FROM venue;", [],
            function(transaction, results){
                var numRows = results.rows.length;
                var row = results.rows.item(0);
                var cnt = row["COUNT(*)"];
                alert(cnt + " venues saved locally");
            }, errHandler);
    });
}

To save the venue data to the database, start by creating the table where you want to store the data. This is fairly standard SQL syntax for creating a table. (All the browsers that support databases use SQLite. See the SQLite documentation for supported data types, constraints, and so on.) SQL execution is done asynchronously. The transaction function is called, and a callback function is passed to it. The callback function gets a transaction object that it can use to execute SQL.
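Since the browsers that support this feature all embed SQLite, the same create-table / insert-if-absent / count flow can be tried outside the browser. Here is a small sketch using Python's sqlite3 module; the schema is trimmed to a few of Listing 8's columns, and the venue rows are made up for illustration:

```python
import sqlite3

# in-memory database standing in for the browser's "venueDb"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE venue (id INTEGER NOT NULL PRIMARY KEY, "
             "name NVARCHAR(200) NOT NULL, geolat TEXT NOT NULL, geolong TEXT NOT NULL)")

venues = [(1, "Cafe", "37.25", "-121.90"),
          (2, "Bar", "37.33", "-122.03")]
for v in venues:
    # like saveVenue: only insert a venue we have not stored yet
    already = conn.execute("SELECT name FROM venue WHERE id = ?", (v[0],)).fetchone()
    if already is None:
        conn.execute("INSERT INTO venue (id, name, geolat, geolong) VALUES (?, ?, ?, ?)", v)

# like countVenues: report how many rows were saved
count = conn.execute("SELECT COUNT(*) FROM venue").fetchone()[0]
```

The `?` placeholders work the same way as in executeSql: the values are bound by the database driver, not spliced into the SQL string.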
The executeSql function takes an SQL string, then optionally a parameter list, plus success and error handler functions. If there is no error handler, the error is "eaten." For the create table statement this is desirable. The first time the script executes, the table will be properly created. The next time it executes, the statement will fail since the table already exists, but that's okay. You just need to make sure the table is there before you start inserting rows into it. After the table is created, use the forEach function to invoke the saveVenue function with each of the venues returned from Foursquare. This function first checks to see if the venue has already been stored locally by querying for it. Here you see the use of a success handler. The result set from the query will be passed to the handler. If there are no results, or the venue has not already been saved locally, then call the insertVenue function, which performs an insert statement. Back in saveAll, after all of the saves/inserts are complete, you then call countVenues. This queries to see the total number of rows that have been inserted into the venue table. The syntax here (row["COUNT(*)"]) pulls out the count from the result set of the query. Now that you've learned how to use database support if it is present, the next section explores how to use Web worker support.

Background processing with Web workers

Going back to Listing 6, let's make a slight modification. As shown in Listing 9 below, check for Web worker support. If it is there, use it to get more information on each of the venues retrieved from Foursquare.

Listing 9. Modified venue search

function venueSearch(geoLat, geoLong){
    var xhr = new XMLHttpRequest();
    // ... (the start of the Ajax callback was lost in this copy; the worker
    // block below runs inside it, once the venue list has been parsed) ...
    if (!!window.Worker){
        var worker = new Worker("details.js");
        worker.onmessage = function(message){
            var tips = message.data;
            displayTips(tips);
        };
        worker.postMessage(allVenues);
    }
    xhr.open("GET", "api?geoLat=" + geoLat + "&geoLong=" + geoLong);
    xhr.send(null);
}

The code above uses the same detection you've seen before.
If Web workers are supported, then you create a new worker. To create a new worker, you need the URL of another script that the worker will execute, in this case the details.js file. When the worker finishes its work, it will send a message back to the main thread. The onmessage handler is what will receive this message; you use a simple closure for it. Finally, to initiate the worker, call postMessage with some data for it to work on. You're passing in all of the venues retrieved from Foursquare. Listing 10 shows the contents of details.js, which is the script that will be executed by the worker.

Listing 10. The worker's script, details.js

var tips = [];

onmessage = function(message){
    var venues = message.data;
    venues.forEach(function(venue){
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function(){
            if (this.readyState == 4 && this.status == 200){
                var venueDetails = eval('(' + this.responseText + ')');
                venueDetails.tips.forEach(function(tip){
                    tip.venueId = venue.id;
                    tips.push(tip);
                });
            }
        };
        xhr.open("GET", "api?operation=getDetails&venueId=" + venue.id, false);
        xhr.send(null);
    });
    postMessage(tips);
}

The details script iterates over each of the venues. For each venue, the script then makes a call back to the Foursquare proxy to get the details of the venue using XMLHttpRequest, as usual. However, notice that when you use its open function to open the connection, you pass false as the third parameter. This makes the call synchronous instead of the usual asynchronous. It's okay to do this from a worker since you are not on the main UI thread, and this is not going to freeze up the application. By making it synchronous, you know that each call has to finish before the next one begins. The handler is simply extracting the tips from the venue details and collecting all of these tips to be passed back to the main UI thread.
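The shape of this pattern, handing a batch of items to a background context that fetches details one at a time and then delivers the collected results back in a single message, is not specific to browsers. Here is a small Python analogy using a thread and a queue; the "fetch" step is a stand-in, not the real Foursquare call, and all names are illustrative:

```python
import threading
import queue

def detail_worker(venues, out):
    # the "worker script": fetch details for each venue one after another,
    # then hand the collected tips back in a single message
    tips = []
    for venue in venues:
        # stand-in for the synchronous XMLHttpRequest to the proxy servlet
        tips.append({"venueId": venue["id"], "tip": "details for " + venue["name"]})
    out.put(tips)  # plays the role of postMessage

venues = [{"id": 1, "name": "Cafe"}, {"id": 2, "name": "Bar"}]
results = queue.Queue()
threading.Thread(target=detail_worker, args=(venues, results)).start()
tips = results.get()  # plays the role of the onmessage handler
```

As with the worker, the slow per-item work happens off the main thread, and the main thread only sees one completed batch.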
To pass this data back, the postMessage function is called, which will invoke the onmessage callback function on the worker, as shown back in Listing 9. By default, the venue search returns ten venues. You can imagine how long it would take to make the ten extra calls to get the details. It makes sense to do this type of task in a background thread using Web workers.

Summary

This article covered some of the new HTML 5 capabilities of modern browsers. You learned how to detect the features and to progressively add them to your application. Most of the features are already widely supported in popular browsers, especially mobile browsers. Now you can start taking advantage of things like geolocation and Web workers to create innovative new Web applications.

Downloadable resources

- PDF of this content
- Article source code (FutureWeb.zip | 9KB)

Related topics

- "Explore multithreaded programming in XUL" (Michael Galpin, developerWorks, Sep 2009) has more information about using Web workers.
- "New elements in HTML 5" (developerWorks, Aug 2007) explores some of the many UI features in the HTML 5 spec.
- "Android and iPhone browser wars, Part 1: WebKit to the rescue" (developerWorks, Dec 2009) explains how you can leverage the features of mobile browsers.
- The W3C HTML 5 Specification is the definitive source on HTML 5.
- "DOM Storage" (Mozilla Developer Center) discusses HTML 5's localStorage support in Firefox.
- The Modernizr project provides a comprehensive utility for detecting HTML 5 features.
- Get Mozilla Firefox 3.6.
- Get Safari 4.0.4.
- Get Google Chrome 5.0.322.
- Download the Android SDK, access the API reference, and get the latest news on Android.
- Get the latest iPhone SDK. Version 3.1.3 was used in this article.
- Get the Android source code from the Android Open Source Project.
- Get the Java SDK. JDK 1.6.0_17 was used in this article.
- To listen to interesting interviews and discussions for software developers, check out developerWorks podcasts.
https://www.ibm.com/developerworks/library/wa-html5webapp/
Warning! This page documents an earlier version of Flux, which is no longer actively developed. Flux v0.50 is the most recent stable version of Flux.

The csv.from() function retrieves data from a CSV data source:

import "csv"

csv.from(file: "/path/to/data-file.csv")

// OR

csv.from(csv: csvData)

csv.from() is not available in InfluxCloud.

Parameters

file

The file path of the CSV file to query. The path can be absolute or relative. If relative, it is relative to the working directory of the influxd process. The CSV file must exist in the same file system running the influxd process.

Data type: String

csv

Raw CSV-formatted text. CSV data must be in the CSV format produced by the Flux HTTP response standard. See the Flux technical specification for information about this format.

Data type: String

Examples

Query CSV data from a file

import "csv"

csv.from(file: "/path/to/data-file.csv")

Query raw CSV-formatted text

import "csv"

csvData = "Data)
https://docs.influxdata.com/flux/v0.24/functions/csv/from/
It allows me to throw something together quickly. These are not production samples, but just for me to see that I can get it to work. Enough already. Here is a bar chart using D3 and MongoDB. I have not put this on OpenShift yet, but may. This bar chart is a simple python script using CherryPy and committing my favorite sin - passing HTML as a variable in a return. One reason to put it on OpenShift is so I can template it in Jinja. Here is the code:

import cherrypy
from pymongo import Connection

class mongocherry(object):

    def index(self):
        db = Connection().geo
        output = []
        output.append('>')  # the opening HTML/head markup (d3 include, etc.) was mangled in this copy
        output.append('<script type="text/javascript">var dataset=[')
        for x in db.places.find():
            output.append(str(x["loc"][0]) + ',')
        output.append('0];' + "\n" + 'd3.select("body").selectAll("div").data(dataset).enter().append("div").attr("class", "bar").style("height", function(d) {var barHeight = d * 5;return barHeight + "px";});</script></body></html>')
        i = 0
        html = ""
        while i < len(output):
            html += str(output[i])
            i += 1
        return html

    index.exposed = True

cherrypy.config.update({'server.socket_host': '127.0.0.1',
                        'server.socket_port': 8000,
                        })

cherrypy.quickstart(mongocherry())

The Python code prints out HTML that looks like this:

> <script type="text/javascript">
var dataset=[35,35.8,38,39,30,31,31,31,33,25,33,0];
d3.select("body")
  .selectAll("div")
  .data(dataset)
  .enter()
  .append("div")
  .attr("class", "bar")
  .style("height", function(d) {var barHeight = d * 5;return barHeight + "px";});
</script></body></html>

Not much going on here, just a simple D3.js bar chart - not even done in SVG. The MongoDB part is in the variable dataset[..]. While writing the HTML, the python code loops through my mongodb with for x in db.places.find(): and it grabs the latitude of the data I have with x["loc"][0] and prints it out in the JavaScript variable dataset[]. I add a 0 at the end because I get a trailing comma. Sloppy, but oh well. I return all the HTML and you get the page displayed at the top of this post.
The cool part of marrying D3.js and MongoDB is JSON. I have JSON in my DB and D3 takes JSON.

For the record, you can avoid the 'trailing comma' issue by doing this:

output.append( ','.join( str(x["loc"][0]) for x in db.places.find() ) )

That takes each value and joins it together using commas, instead of printing each value and then a comma regardless. Then at the end, instead of looping through output, you can do a few things differently: First, you could do 'for line in output: html += line'; secondly, you could do html += ''.join(output). Both of these will be more efficient, and are easier to read afterwards.

That is good to know. I will use the join a lot!

I am not entirely sure why you chose to give a python example using a blog style that doesn't keep indenting information. The blog formatted the code poorly.

I put it out there for me to remember as much as to show others. If it didn't help you, sorry.
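Building on the "D3 takes JSON" observation and the join suggestion in the comments, the dataset line can be produced without any trailing-comma bookkeeping by serializing the list with the json module. A sketch (the list below is a stand-in for db.places.find(), shaped like the post's records):

```python
import json

# stand-in for db.places.find(); each document is shaped like the post's
# records, {"loc": [latitude, longitude]}
places = [{"loc": [35, -106.6]}, {"loc": [35.8, -106.3]}, {"loc": [38, -104.8]}]

latitudes = [doc["loc"][0] for doc in places]
# json.dumps emits a valid JavaScript array literal, commas included
js_line = "var dataset=%s;" % json.dumps(latitudes)
```

Since JSON is valid JavaScript literal syntax for arrays and numbers, the resulting string can be dropped straight into the generated script tag.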
https://paulcrickard.wordpress.com/2012/11/28/d3-js-and-mongodb/
I'm having a hard time getting flexget to download the quality I want. It's grabbing 720p no matter what I set. Any ideas on this? It's got to be something simple.

2017-07-16 14:10 VERBOSE task get-tv ACCEPTED: `Power S04E04 720p HDTV x264-FLEET` by series plugin because target quality
2017-07-16 14:10 VERBOSE task get-tv ACCEPTED: by series plugin because target quality

templates:
  sonarr-tv:
    thetvdb_lookup: yes
    deluge:
      label: tv-sonarr
      content_filename: "{{ series_name|replace(' ','.') }}.{{ series_id }}.{{ quality|upper }}{% if proper %}-PROPER{% endif %}"
      magnetization_timeout: 30
    exists_series: '/media/TV/{{series_name}}'

tasks:
  get-tv:
    template: sonarr-tv
    configure_series:
      from:
        sonarr_list:
          base_url:
          port: 8989
          api_key: [removed]
          include_ended: false
          only_monitored: true
      settings:
        quality: webrip+ <720p h264
    inputs:
      - rss:
          url:
          link: 'torrent:magnetURI'
      - rss:

schedules:
  - tasks: '*'
    interval:
      minutes: 15
I'm starting to think the sonarr_list plugin is wiping out the quality even though I'm not using include_data. sonarr_list was the problem. It was setting a blank quality even though include_data was not set. I modified the code for it and now it works how I want it to. ACCEPTED:Power 2014 S04E04 We Are In This Together AHDTV x264-CRiMSONby series plugin because target quality ACCEPTED: Good stuff. I have no experience with Sonarr (I thought it was a less advanced competitor of Flexget, with a nice UI). There should be a little checkbox below your latest post, if you hit it, this topic will be marked as "solved" what was the issue with sonarr_list? if you fixed a bug, could you submit a PR please? sonarr_list I'll be honest and say I am not sure how to do a pull request. I can tell you the change I did. I'm not really a programmer or anything so maybe there is a better way to do what I did. From what I could tell configure_series_qualities was being set to fg_qualities regardless of INCLUDE_DATA being set. fg_qualities is initilized as '' and doesn't get populated unless INCLUDE_DATA is set so therefore configure_series_qualities is set to blank. sonarr_list.py lines 151-158 enclose in an IF statement 'if self.config.get('include_data'):' if self.config.get('include_data'): if len(fg_qualities) > 1: entry['configure_series_qualities'] = fg_qualities elif len(fg_qualities) == 1: entry['configure_series_quality'] = fg_qualities[0] else: entry['configure_series_quality'] = fg_qualities if path: entry['configure_series_path'] = path if entry.isvalid(): log.debug('returning entry %s', entry) entries.append(entry) else: log.error('Invalid entry created? %s' % entry) continue return entries Hi, I have a similar issue with the input list sickbeard... The quality is also ignored.I just want to download series below 720p and I got every time the best quality available...Should I open a issue or this is something that I'm doing wrong? 
My Config: serielist: configure_series: from: sickbeard: base_url: port: 8081 api_key: asdf1c397cewerfbaf128df9ddcefedf include_data: false settings: quality: hdtv <720p Log: 2018-05-25 00:11 DEBUG series start current quality req: any 2018-05-25 00:11 DEBUG series -------------------- process_propers --> 2018-05-25 00:11 DEBUG series propers - downloaded qualities: {} 2018-05-25 00:11 DEBUG series continuing best entity is: `Winx Club S06E14 Mythix 1080p HDTV x264-PLUTONiUM` 2018-05-25 00:11 DEBUG series -------------------- tracking --> 2018-05-25 00:11 DEBUG series no episodes found for series `Winx Club` with parameters season: None, downloaded: True 2018-05-25 00:11 DEBUG series no season packs found for series `Winx Club` with parameters season: None, downloaded: True 2018-05-25 00:11 DEBUG series latest download: None 2018-05-25 00:11 DEBUG series current: <Episode(id=1,identifier=S06E14,season=6,number=14)> 2018-05-25 00:11 VERBOSE task ACCEPTED: `Winx Club S06E14 Mythix 1080p HDTV x264-PLUTONiUM` by series plugin because matches quality I solved my problem, is not the right way... but at least works...1- setting the include_data true2- changing the translation sickbeards qualities into format used by Flexget like I want3- all series have Quality SD in sickbeard sb_to_fg = {'sdtv': 'webrip-webdl', 'sddvd': '<720p', 'hdtv': '720p hdtv', 'rawhdtv': '1080p hdtv', 'fullhdtv': '1080p hdtv', 'hdwebdl': '720p webdl', 'fullhdwebdl': '1080p webdl', 'hdbluray': '720p bluray', 'fullhdbluray': '1080p bluray', 'unknown': 'any'}
https://discuss.flexget.com/t/configure-series-quality-being-ignored/3586
CC-MAIN-2019-35
refinedweb
873
53.71
netwire

Functional reactive programming library

Module documentation for 5.0.3

Netwire

Netwire is a functional reactive programming (FRP) library with signal inhibition. It implements three related concepts, wires, intervals and events, the most important of which is the wire. To work with wires we will need a few imports:

    import FRP.Netwire
    import Prelude hiding ((.), id)

The FRP.Netwire module exports the basic types and helper functions. It also has some convenience reexports you will pretty much always need when working with wires, including Control.Category. This is why we need the explicit Prelude import. In general wires are generalized automaton arrows, so you can express many design patterns using them. The FRP.Netwire module provides a proper FRP framework based on them, which strictly respects continuous time and discrete event semantics. When developing a framework based on Netwire, e.g. a GUI library or a game engine, you may want to import Control.Wire instead.

Introduction

The following type is central to the entire library:

    data Wire s e m a b

Don't worry about the large number of type arguments. They all have very simple meanings, which will be explained below. A value of this type is called a wire and represents a reactive value of type b, that is a value that may change over time. It may depend on a reactive value of type a. In a sense a wire is a function from a reactive value of type a to a reactive value of type b, so whenever you see something of type Wire s e m a b your mind should draw an arrow from a to b. In FRP terminology a reactive value is called a behavior. A constant reactive value can be constructed using pure:

    pure 15

This wire is the reactive value 15. It does not depend on other reactive values and does not change over time.
This suggests that there is an applicative interface to wires, which is indeed the case:

    liftA2 (+) (pure 15) (pure 17)

This reactive value is the sum of two reactive values, each of which is just a constant, 15 and 17 respectively. So this is the constant reactive value 32. Let's spell out its type:

    myWire :: (Monad m, Num b) => Wire s e m a b
    myWire = liftA2 (+) (pure 15) (pure 17)

This indicates that m is some kind of underlying monad. As an application developer you don't have to concern yourself much about it. Framework developers can use it to allow wires to access environment values through a reader monad or to produce something (like a GUI) through a writer monad. The wires we have seen so far are rather boring. Let's look at a more interesting one:

    time :: (HasTime t s) => Wire s e m a t

This wire represents the current local time, which starts at zero when execution begins. It does not make any assumptions about the time type other than that it is a numeric type with a Real instance. This is enforced implicitly by the HasTime constraint. The type of this wire gives some insight into the s parameter. Wires are generally pure and do not have access to the system clock or other run-time information. The timing information has to come from outside and is passed to the wire through a value of type s, called the state delta. We will learn more about this in the next section about executing wires. Since there is an applicative interface you can also apply fmap to a wire to apply a function to its value:

    fmap (2*) time

This reactive value is a clock that is twice as fast as the regular local time clock. If you use system time as your clock, then the time type t will most likely be NominalDiffTime from Data.Time.Clock. However, you will usually want to have time of type Double or some other floating point type.
There is a predefined wire for this:

    timeF :: (Fractional b, HasTime t s, Monad m) => Wire s e m a b
    timeF = fmap realToFrac time

If you think of reactive values as graphs with the horizontal axis representing time, then the time wire is just a straight diagonal line and constant wires (constructed by pure) are just horizontal lines. You can use the applicative interface to perform arithmetic on them:

    liftA2 (\t c -> c - 2*t) time (pure 60)

This gives you a countdown clock that starts at 60 and runs twice as fast as the regular clock. So after two seconds its value will be 56, decreasing by 2 each second.

Testing wires

Enough theory, we wanna see some performance now! Let's write a simple program to test a constant (pure) wire:

    import Control.Wire
    import Prelude hiding ((.), id)

    wire :: (Monad m) => Wire s () m a Integer
    wire = pure 15

    main :: IO ()
    main = testWire (pure ()) wire

This should just display the value 15. Abort the program by pressing Ctrl-C. The testWire function is a convenience to examine wires. It just executes the wire and continuously prints its value to stdout:

    testWire ::
        (MonadIO m, Show b, Show e)
        => Session m s
        -> (forall a. Wire s e Identity a b)
        -> m c

The type signatures in Netwire are known to be scary. =) But like most of the library the underlying meaning is actually very simple. Conceptually the wire is run continuously step by step, at each step increasing its local time slightly. This process is traditionally called stepping. As an FRP developer you assume a continuous time model, so you don't observe this stepping process from the point of view of your reactive application, but it can be useful to know that wire execution is actually a discrete process. The first argument of testWire needs some explanation. It is a recipe for state deltas. In the above example we have just used pure (), meaning that we don't use anything stateful from the outside world, particularly we don't use a clock.
From the type signature it is also clear that this sets s = (). The second argument is the wire to run. The input type is quantified meaning that it needs to be polymorphic in its input type. In other words it means that the wire does not depend on any other reactive value. The underlying monad is Identity with the obvious meaning that this wire cannot have any monadic effects. The following application just displays the number of seconds passed since program start (with some subsecond precision):

    wire :: (HasTime t s) => Wire s () m a t
    wire = time

    main :: IO ()
    main = testWire clockSession_ wire

Since this time the wire actually needs a clock we use clockSession_ as the first argument:

    clockSession_ ::
        (Applicative m, MonadIO m)
        => Session m (Timed NominalDiffTime ())

It will instantiate s to be Timed NominalDiffTime (). This type indeed has a HasTime instance with t being NominalDiffTime. In simpler words it provides a clock to the wire. At first it may seem weird to use NominalDiffTime instead of something like UTCTime, but this is reasonable, because time is relative to the wire's start time. Also later in the section about switching we will see that a wire does not necessarily start when the program starts.

Constructing wires

Now that we know how to test wires we can start constructing more complicated wires. First of all it is handy that there are many convenience instances, including Num. Instead of pure 15 we can simply write 15. Also instead of liftA2 (+) time (pure 17) we can simply write:

    time + 17

This clock starts at 17 instead of zero. Let's make it run twice as fast:

    2*time + 17

If you have trouble wrapping your head around such an expression it may help to read a*b + c mathematically as a(t)*b(t) + c(t) and read time as simply t. So far we have seen wires that ignore their input. The following wire uses its input:

    integral 5

It literally integrates its input value with respect to time. Its argument is the integration constant, i.e. the start value. To supply an input simply compose it:

    integral 5 . 3

Remember that 3 really means pure 3, a constant wire. The integral of the constant 3 is 3*t + c and here c = 5. Here is another example:

    integral 5 . time

Since time denotes t the integral will be t^2/2 + c, again with c = 5. This may sound like a complicated, sophisticated wire, but it's really not. Surprisingly there is no crazy algebra or complicated numerical algorithm going on under the hood. Integrating over time requires one addition and one division each frame. So there is nothing wrong with using it extensively to animate a scene or to move objects in a game. Sometimes categorical composition and the applicative interface can be inconvenient, in which case you may choose to use the arrow interface. The above integration can be expressed the following way:

    proc _ -> do
        t <- time -< ()
        integral 5 -< t

Since time ignores its input signal, we just give it a constant signal with value (). We name time's value t and pass it as the input signal to integral.

Intervals

Wires may choose to produce a signal only for a limited amount of time. We refer to those wires as intervals. When a wire does not produce, then it inhibits. Example:

    for 3

This wire acts like the identity wire in that it passes its input signal through unchanged:

    for 3 . "yes"

The signal of this wire will be "yes", but after three seconds it will stop to act like the identity wire and will inhibit forever. When you use testWire inhibition will be displayed as "I:" followed by a value, the inhibition value. This is what the e parameter to Wire is. It's called the inhibition monoid:

    for :: (HasTime t s, Monoid e) => t -> Wire s e m a a

As you can see the input and output types are the same and fully polymorphic, hinting at the identity-like behavior. All predefined intervals inhibit with the mempty value. When the wire inhibits, you don't get a signal of type a, but rather an inhibition value of type e.
Netwire does not interpret this value in any way, and in most cases you would simply use e = (). Intervals give you a very elegant way to combine wires:

```haskell
for 3 . "yes" <|> "no"
```

This wire produces "yes" for three seconds. Then the wire to the left of <|> stops producing, so <|> uses the wire to its right instead. You can read the operator as a left-biased "or": the signal of the wire w1 <|> w2 will be the signal of the leftmost component wire that actually produced a signal. There are a number of predefined interval wires. The above signal can be written equivalently as:

```haskell
after 3 . "no" <|> "yes"
```

The left wire will inhibit for the first three seconds, so during that interval the right wire is chosen. After that, as suggested by its name, the after wire starts acting like the identity wire, so the left side takes precedence. Once the time period has passed, the after wire will produce forever, leaving the "yes" wire never to be reached again. However, you can easily combine intervals:

```haskell
after 5 . for 6 . "Blip!" <|> "Look at me..."
```

The left wire starts producing five seconds from the beginning and stops producing six seconds from the beginning, so effectively it produces for one second. When you animate this wire, you will see the string "Look at me..." for five seconds, then you will see "Blip!" for one second, and finally it will go back to "Look at me..." and display that one forever.

Events

Events are things that happen at certain points in time. Examples include button presses, network packets or even just reaching a certain point in time. As such they can be thought of as lists of values together with their occurrence times. Events are actually first-class signals of the Event type:

```haskell
data Event a
```

For example, the predefined never event is the event that never occurs:

```haskell
never :: Wire s e m a (Event b)
```

As suggested by the type, events contain a value. Netwire does not export the constructors of the Event type by default.
If you are a framework developer, you can import the Control.Wire.Unsafe.Event module to implement your own events. A game engine may include events for key presses or certain things happening in the scene. However, as an application developer you should view this type as being opaque. This is necessary in order to protect continuous time semantics. You cannot access event values directly.

There are a number of ways to respond to an event. The primary way to do this in Netwire is to turn events into intervals. There are a number of predefined wires for that purpose, for example asSoonAs:

```haskell
asSoonAs :: (Monoid e) => Wire s e m (Event a) a
```

This wire takes an event signal as its input. Initially it inhibits, but as soon as the event occurs for the first time, it produces the event's last value forever. The at event will occur only once, after the given time period has passed:

```haskell
at :: (HasTime t s) => t -> Wire s e m a (Event a)
```

Example:

```haskell
at 3 . "blubb"
```

This event will occur after three seconds, and the event's value will be "blubb". Using asSoonAs we can turn this into an interval:

```haskell
asSoonAs . at 3 . "blubb"
```

This wire will inhibit for three seconds and then start producing. It will produce the value "blubb" forever. That's the event's last value after three seconds, and it will never change, because the event does not occur ever again. Here is an example that may be more representative of that property:

```haskell
asSoonAs . at 3 . time
```

This wire inhibits for three seconds, then it produces the value 3 (or a value close to it) forever. Notice that this is not a clock: it does not produce the current time, but the time at the point when the event occurred.

To combine multiple events there are a number of options. In principle you should think of event values as forming a semigroup (of your choice), because events can occur simultaneously.
However, in many cases the actual value of the event is not that interesting, so there is an easy way to get a left- or right-biased combination:

```haskell
(at 2 <& at 3) . time
```

This event occurs two times, namely once after two seconds and once after three seconds. In each case the event value will be the occurrence time. Here is an interesting case:

```haskell
at 2 . "blah" <& at 2 . "blubb"
```

These events will occur simultaneously. The value will be "blah", because <& means left-biased combination. There is also &> for right-biased combination. If event values actually form a semigroup, then you can just use monoidal composition:

```haskell
at 2 . "blah" <> at 2 . "blubb"
```

Again these events occur at the same time, but this time the event value will be "blahblubb". Note that you are using two Monoid instances and one Semigroup instance here. If the signals of two wires form a monoid, then the wires themselves form a monoid:

```haskell
w1 <> w2 = liftA2 (<>) w1 w2
```

There are many predefined event wires and many combinators for manipulating events in the Control.Wire.Event module. A common event is the now event:

```haskell
now :: Wire s e m a (Event a)
```

This event occurs once, at the beginning.

Switching

We still lack a meaningful way to respond to events. This is where switching comes in, sometimes also called dynamic switching. The most important combinator for switching is -->:

```haskell
w1 --> w2
```

The idea is really straightforward: this wire acts like w1 as long as it produces. As soon as it stops producing, it is discarded and w2 takes its place. Example:

```haskell
for 3 . "yes" --> "no"
```

In this case the behavior will be the same as in the intervals section, but with two major differences: firstly, when the first interval ends, it is completely discarded and garbage-collected, never to be seen again. Secondly, and more importantly, the point in time of switching will be the beginning for the new wire. Example:

```haskell
for 3 . time --> time
```

This wire will show a clock counting to three seconds, then it will start over from zero.
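The clock reset at the moment of switching can be made concrete with a small frame-loop sketch in plain Python (an illustration of the semantics only, not Netwire; the helper names are invented):

```python
def clock():
    """Like 'time': local time since this wire was created."""
    state = {"t": 0.0}
    def wire(dt, x):
        state["t"] += dt
        return state["t"]
    return wire

def for_seconds(t, inner):
    """Like 'for t . inner': pass inner's output through for t seconds,
    then inhibit (return None) forever."""
    state = {"left": t}
    def wire(dt, x):
        state["left"] -= dt
        return inner(dt, x) if state["left"] > 0 else None
    return wire

def switch(w1, make_w2):
    """Like 'w1 --> w2': run w1 until it inhibits, then create w2
    on the spot, so w2's local time starts at the switching instant."""
    state = {"cur": w1, "switched": False}
    def wire(dt, x):
        out = state["cur"](dt, x)
        if out is None and not state["switched"]:
            state["cur"] = make_w2()   # w2 starts *now*
            state["switched"] = True
            out = state["cur"](dt, x)
        return out
    return wire

# for 3 . time --> time, sampled in 0.25 s frames for 5 seconds
w = switch(for_seconds(3.0, clock()), clock)
samples = [w(0.25, None) for _ in range(20)]
# samples climb toward 3.0, then restart from 0.25 after the switch
```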
This is why we usually refer to time as local time. Recursion is fully supported. Here is a fun example:

```haskell
netwireIsCool =
    for 2 . "Once upon a time..." -->
    for 3 . "... games were completely imperative..." -->
    for 2 . "... but then..." -->
    for 10 . ("Netwire 5! " <> anim) -->
    netwireIsCool
  where
    anim = holdFor 0.5 . periodic 1 . "Hoo..." <|> "...ray!"
```

Changes

5.0.3: Maintenance release

- Fixed constraints for Semigroup-Monoid-Proposal
- Fixed flags for older GHCs

Contributors:

5.0.2: Maintenance release

- Moved to Git and GitHub.
- Relaxed profunctors dependency (finally).
- Moved language extensions into the individual modules.
- Minor style changes.
https://www.stackage.org/lts-18.8/package/netwire-5.0.3
Keypoints in imgaug are points on images, given as absolute x- and y- pixel coordinates with subpixel accuracy (i.e. as floats with value range [0, S), where S denotes the size of an axis). In the literature they are also called "landmarks" and may be used for e.g. human pose estimation.

In imgaug, keypoints are only affected by augmenters changing the geometry of images. This is the case for e.g. horizontal flips or affine transformations. They are not affected by other methods, such as gaussian noise.

Two classes are provided for keypoint augmentation in imgaug, listed in the following sections.

imgaug.augmentables.kps.Keypoint: A very simple class instantiated as Keypoint(x=<number>, y=<number>). Noteworthy methods of the class are:

- project(from_shape, to_shape): Used to project keypoints from one image shape to another one, e.g. after image resizing.
- shift(x=0, y=0): Used to move points on the x/y-axis. Returns a new Keypoint.
- draw_on_image(image, color=(0, 255, 0), alpha=1.0, size=3, copy=True, raise_if_out_of_image=False): Draw this keypoint on an image.

imgaug.augmentables.kps.KeypointsOnImage: Combines a list of keypoints with an image shape. It is instantiated as KeypointsOnImage(keypoints, shape), where keypoints is a list of Keypoint instances and shape is the shape of the image on which the keypoints are placed. Both arguments are later available as .keypoints and .shape attributes. Noteworthy methods are:

- on(image): Used to project the keypoints onto a new image, e.g. after image resizing.
- draw_on_image(image, color=(0, 255, 0), alpha=1.0, size=3, copy=True, raise_if_out_of_image=False): Draw keypoints as squares onto an image. The image must be given as a numpy array.
- shift(x=0, y=0): Analogous to the method in Keypoint, but shifts all Keypoints in .keypoints.
- to_xy_array(): Transforms the instance to an (N, 2) numpy array.
- from_xy_array(xy, shape): Creates an instance of KeypointsOnImage from an (N, 2) array. shape is the shape of the corresponding image.
- to_distance_maps(inverted=False): Converts the keypoints to (euclidean) distance maps in the size of the image. Result is of shape (H, W, N), with N being the number of keypoints.
- from_distance_maps(distance_maps, inverted=False, if_not_found_coords={"x": -1, "y": -1}, threshold=None, nb_channels=None): Inverse function for to_distance_maps().

To augment keypoints, the method augment(images=..., keypoints=...) may be called. An alternative is augment_keypoints(), which only handles keypoint data and expects a single instance of KeypointsOnImage or a list of that class. For more details, see the API: Keypoint, KeypointsOnImage, imgaug.augmenters.meta.Augmenter.augment(), imgaug.augmenters.meta.Augmenter.augment_keypoints().

Let's take a look at a simple example, in which we augment an image and five keypoints on it by applying an affine transformation. As the first step, we load an example image from the web:

```python
import imageio
import imgaug as ia
%matplotlib inline

image = imageio.imread("")
image = ia.imresize_single_image(image, (389, 259))
ia.imshow(image)
```

Now let's place and visualize a few keypoints:

```python
from imgaug.augmentables.kps import Keypoint, KeypointsOnImage

kps = [
    Keypoint(x=99, y=81),    # left eye (from camera perspective)
    Keypoint(x=125, y=80),   # right eye
    Keypoint(x=112, y=102),  # nose
    Keypoint(x=102, y=210),  # left paw
    Keypoint(x=127, y=207)   # right paw
]
kpsoi = KeypointsOnImage(kps, shape=image.shape)

ia.imshow(kpsoi.draw_on_image(image, size=7))
```

Note how we "merged" all keypoints of the image in an instance of KeypointsOnImage. We will soon augment that instance. In case you have to process the keypoints after augmentation, they can be accessed via the .keypoints attribute:

```python
print(kpsoi.keypoints)
```

    [Keypoint(x=99.00000000, y=81.00000000), Keypoint(x=125.00000000, y=80.00000000), Keypoint(x=112.00000000, y=102.00000000), Keypoint(x=102.00000000, y=210.00000000), Keypoint(x=127.00000000, y=207.00000000)]

Now to the actual augmentation.
We want to apply an affine transformation, which will alter both the image and the keypoints. We choose a bit of translation and rotation as our transformation. Additionally, we add a bit of color jittering to the mix. That color jitter is only going to affect the image, not the keypoints.

```python
import imgaug.augmenters as iaa
ia.seed(3)

seq = iaa.Sequential([
    iaa.Affine(translate_px={"x": (10, 30)}, rotate=(-10, 10)),
    iaa.AddToHueAndSaturation((-50, 50))  # color jitter, only affects the image
])
```

And now we apply our augmentation sequence to both the image and the keypoints. We can do this by calling seq.augment(...) or its shortcut seq(...):

```python
image_aug, kpsoi_aug = seq(image=image, keypoints=kpsoi)
```

If you have more than one image, you have to use images=... instead of image=.... You will also have to provide a list as input to keypoints. Though that list does not necessarily have to contain KeypointsOnImage instances. You may also provide (per image) a list of Keypoint instances or a list of (x,y) tuples or an (N,2) numpy array. Make sure however to provide both images and keypoints in a single call to augment(). Calling it two times -- once with the images as argument and once with the keypoints -- will lead to different sampled random values for the two datatypes.

Now let's visualize the image and keypoints before/after augmentation:

```python
import numpy as np

ia.imshow(
    np.hstack([
        kpsoi.draw_on_image(image, size=7),
        kpsoi_aug.draw_on_image(image_aug, size=7)
    ])
)
```

When working with keypoints, you might at some point have to change the image size. The method KeypointsOnImage.on(image or shape) can be used to recompute keypoint coordinates after changing the image size. It projects the keypoints onto the same relative positions on a new image. In the following code block, the initial example image is increased to twice the original size.
Then (1st) the keypoints are drawn and visualized on the original image, (2nd) drawn and visualized on the resized image without using on() and (3rd) drawn and visualized in combination with on().

```python
image_larger = ia.imresize_single_image(image, 2.0)

print("Small image %s with keypoints optimized for the size:" % (image.shape,))
ia.imshow(kpsoi.draw_on_image(image, size=7))

print("Large image %s with keypoints optimized for the small image size:" % (image_larger.shape,))
ia.imshow(kpsoi.draw_on_image(image_larger, size=7))

print("Large image %s with keypoints projected onto that size:" % (image_larger.shape,))
ia.imshow(kpsoi.on(image_larger).draw_on_image(image_larger, size=7))
```

    Small image (389, 259, 3) with keypoints optimized for the size:
    Large image (778, 518, 3) with keypoints optimized for the small image size:
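The projection performed by on() is just coordinate arithmetic: each keypoint keeps its relative position, so x scales with the width ratio and y with the height ratio. Here is a small pure-Python sketch of that math (an illustration of the idea, with a made-up helper name, not imgaug's actual implementation):

```python
def project_keypoint(x, y, from_shape, to_shape):
    """Map a keypoint to the same relative position on a resized image.

    Shapes are (height, width, ...) tuples; x scales with the width
    ratio and y with the height ratio.
    """
    from_h, from_w = from_shape[:2]
    to_h, to_w = to_shape[:2]
    return x * (to_w / from_w), y * (to_h / from_h)

# The cat's left eye from the example, projected onto the doubled image:
x, y = project_keypoint(99, 81, (389, 259), (778, 518))
# -> (198.0, 162.0): same relative position, twice the pixel coordinates
```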
https://nbviewer.jupyter.org/github/aleju/imgaug-doc/blob/master/notebooks/B01%20-%20Augment%20Keypoints.ipynb
RFC 3931 - Layer Two Tunneling Protocol - Version 3 (L2TPv3)

Network Working Group                                        J. Lau, Ed.
Request for Comments: 3931                              M. Townsley, Ed.
Category: Standards Track                                  Cisco Systems
                                                           I. Goyret, Ed.
                                                     Lucent Technologies
                                                              March 2005

           Layer Two Tunneling Protocol - Version 3 (L2TPv3)

Table of Contents

1. Introduction
   1.1. Changes from RFC 2661
   1.2. Specification of Requirements
   1.3. Terminology
2. Topology
3. Protocol Overview
   3.1. Control Message Types
   3.2. L2TP Header Formats
        3.2.1. L2TP Control Message Header
        3.2.2. L2TP Data Message
   3.3. Control Connection Management
        3.3.1. Control Connection Establishment
        3.3.2. Control Connection Teardown
   3.4. Session Management
        3.4.1. Session Establishment for an Incoming Call
        3.4.2. Session Establishment for an Outgoing Call
        3.4.3. Session Teardown
4. Protocol Operation
   4.1. L2TP Over Specific Packet-Switched Networks (PSNs)
        4.1.1. L2TPv3 over IP
        4.1.2. L2TP over UDP
        4.1.3. L2TP and IPsec
        4.1.4. IP Fragmentation Issues
   4.2. Reliable Delivery of Control Messages
   4.3. Control Message Authentication
   4.4. Keepalive (Hello)
   4.5. Forwarding Session Data Frames
   4.6. Default L2-Specific Sublayer
        4.6.1. Sequencing Data Packets
   4.7. L2TPv2/v3 Interoperability and Migration
        4.7.1. L2TPv3 over IP
        4.7.2. L2TPv3 over UDP
        4.7.3. Automatic L2TPv2 Fallback
5. Control Message Attribute Value Pairs
   5.1. AVP Format
   5.2. Mandatory AVPs and Setting the M Bit
   5.3. Hiding of AVP Attribute Values
   5.4. AVP Summary
        5.4.1. General Control Message AVPs
        5.4.2. Result and Error Codes
        5.4.3. Control Connection Management AVPs
        5.4.4. Session Management AVPs
        5.4.5. Circuit Status AVPs
6. Control Connection Protocol Specification
   6.1. Start-Control-Connection-Request (SCCRQ)
   6.2. Start-Control-Connection-Reply (SCCRP)
   6.3. Start-Control-Connection-Connected (SCCCN)
   6.4. Stop-Control-Connection-Notification (StopCCN)
   6.5. Hello (HELLO)
   6.6. Incoming-Call-Request (ICRQ)
   6.7. Incoming-Call-Reply (ICRP)
   6.8. Incoming-Call-Connected (ICCN)
   6.9. Outgoing-Call-Request (OCRQ)
   6.10. Outgoing-Call-Reply (OCRP)
   6.11. Outgoing-Call-Connected (OCCN)
   6.12. Call-Disconnect-Notify (CDN)
   6.13.
         WAN-Error-Notify (WEN)
   6.14. Set-Link-Info (SLI)
   6.15. Explicit-Acknowledgement (ACK)
7. Control Connection State Machines
   7.1. Malformed AVPs and Control Messages
   7.2. Control Connection States
   7.3. Incoming Calls
        7.3.1. ICRQ Sender States
        7.3.2. ICRQ Recipient States
   7.4. Outgoing Calls
        7.4.1. OCRQ Sender States
        7.4.2. OCRQ Recipient (LAC) States
   7.5. Termination of a Control Connection
8. Security Considerations
   8.1. Control Connection Endpoint and Message Security
   8.2. Data Packet Spoofing
9. Internationalization Considerations
10. IANA Considerations
   10.1. Control Message Attribute Value Pairs (AVPs)
   10.2. Message Type AVP Values
   10.3. Result Code AVP Values
   10.4. AVP Header Bits
   10.5. L2TP Control Message Header Bits
   10.6. Pseudowire Types
   10.7. Circuit Status Bits
   10.8. Default L2-Specific Sublayer bits
   10.9. L2-Specific Sublayer Type
   10.10. Data Sequencing Level
11. References
   11.1. Normative References
   11.2. Informative References
12. Acknowledgments
Appendix A: Control Slow Start and Congestion Avoidance
Appendix B: Control Message Examples
Appendix C: Processing Sequence Numbers
Editors' Addresses
Full Copyright Statement

1. Introduction

The Layer Two Tunneling Protocol (L2TP) provides a dynamic mechanism for tunneling Layer 2 (L2) "circuits" across a packet-oriented data network (e.g., over IP). L2TP, as originally defined in RFC 2661, is a standard method for tunneling Point-to-Point Protocol (PPP) [RFC1661] sessions. L2TP has since been adopted for tunneling a number of other L2 protocols. In order to provide greater modularity, this document describes the base L2TP protocol, independent of the L2 payload that is being tunneled.

The base L2TP protocol defined in this document consists of (1) the control protocol for dynamic creation, maintenance, and teardown of L2TP sessions, and (2) the L2TP data encapsulation to multiplex and demultiplex L2 data streams between two L2TP nodes across an IP network. Additional documents are expected to be published for each L2 data link emulation type (a.k.a. pseudowire-type) supported by L2TP (i.e., PPP, Ethernet, Frame Relay, etc.). These documents will contain any pseudowire-type specific details that are outside the scope of this base specification.

When the designation between L2TPv2 and L2TPv3 is necessary, L2TP as defined in RFC 2661 will be referred to as "L2TPv2", corresponding to the value in the Version field of an L2TP header. (Layer 2 Forwarding, L2F, [RFC2341] was defined as "version 1".) At times, L2TP as defined in this document will be referred to as "L2TPv3". Otherwise, the acronym "L2TP" will refer to L2TPv3 or L2TP in general.

1.1.
Changes from RFC 2661

Many of the protocol constructs described in this document are carried over from RFC 2661. Changes include clarifications based on years of interoperability and deployment experience as well as modifications to either improve protocol operation or provide a clearer separation from PPP. The intent of these modifications is to achieve a healthy balance between code reuse, interoperability experience, and a directed evolution of L2TP as it is applied to new tasks.

Notable differences between L2TPv2 and L2TPv3 include the following:

- Separation of all PPP-related AVPs, references, etc., including a portion of the L2TP data header that was specific to the needs of PPP. The PPP-specific constructs are described in a companion document.
- Transition from a 16-bit Session ID and Tunnel ID to a 32-bit Session ID and Control Connection ID, respectively.
- Extension of the Tunnel Authentication mechanism to cover the entire control message rather than just a portion of certain messages.

Details of these changes and a recommendation for transitioning to L2TPv3 are discussed in Section 4.7.

1.2. Specification of Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

1.3. Terminology

Attribute Value Pair (AVP)
   The variable-length concatenation of a unique Attribute (represented by an integer), a length field, and a Value containing the actual value identified by the attribute. Zero or more AVPs make up the body of control messages, which are used in the establishment, maintenance, and teardown of control connections. This basic construct is sometimes referred to as a Type-Length-Value (TLV) in some specifications. (See also: Control Connection, Control Message.)

Call (Circuit Up)
   The action of transitioning a circuit on an L2TP Access Concentrator (LAC) to an "up" or "active" state.
   A call may be dynamically established through signaling properties (e.g., an incoming or outgoing call through the Public Switched Telephone Network (PSTN)) or statically configured (e.g., provisioning a Virtual Circuit on an interface). A call is defined by its properties (e.g., type of call, called number, etc.) and its data traffic. (See also: Circuit, Session, Incoming Call, Outgoing Call, Outgoing Call Request.)

Circuit
   A general term identifying any one of a wide range of L2 connections. A circuit may be virtual in nature (e.g., an ATM PVC, an IEEE 802 VLAN, or an L2TP session), or it may have direct correlation to a physical layer (e.g., an RS-232 serial line). Circuits may be statically configured with a relatively long-lived uptime, or dynamically established with signaling to govern the establishment, maintenance, and teardown of the circuit. For the purposes of this document, a statically configured circuit is considered to be essentially the same as a very simple, long-lived, dynamic circuit. (See also: Call, Remote System.)

Client
   (See Remote System.)

Control Connection
   An L2TP control connection is a reliable control channel that is used to establish, maintain, and release individual L2TP sessions as well as the control connection itself. (See also: Control Message, Data Channel.)

Control Message
   An L2TP message used by the control connection. (See also: Control Connection.)

Data Message
   Message used by the data channel. (a.k.a. Data Packet. See also: Data Channel.)

Data Channel
   The channel for L2TP-encapsulated data traffic that passes between two LCCEs over a Packet-Switched Network (i.e., IP). (See also: Control Connection, Data Message.)

Incoming Call
   The action of receiving a call (circuit up event) on an LAC. The call may have been placed by a remote system (e.g., a phone call over a PSTN), or it may have been triggered by a local event (e.g., interesting traffic routed to a virtual interface).
   An incoming call that needs to be tunneled (as determined by the LAC) results in the generation of an L2TP ICRQ message. (See also: Call, Outgoing Call, Outgoing Call Request.)

L2TP Access Concentrator (LAC)
   If an L2TP Control Connection Endpoint (LCCE) is being used to cross-connect an L2TP session directly to a data link, we refer to it as an L2TP Access Concentrator (LAC). An LCCE may act as both an L2TP Network Server (LNS) for some sessions and an LAC for others, so these terms must only be used within the context of a given set of sessions unless the LCCE is in fact single purpose for a given topology. (See also: LCCE, LNS.)

L2TP Control Connection Endpoint (LCCE)
   An L2TP node that exists at either end of an L2TP control connection. May also be referred to as an LAC or LNS, depending on whether tunneled frames are processed at the data link (LAC) or network layer (LNS). (See also: LAC, LNS.)

L2TP Network Server (LNS)
   If a given L2TP session is terminated at the L2TP node and the encapsulated network layer (L3) packet processed on a virtual interface, we refer to this L2TP node as an L2TP Network Server (LNS). A given LCCE may act as both an LNS for some sessions and an LAC for others, so these terms must only be used within the context of a given set of sessions unless the LCCE is in fact single purpose for a given topology. (See also: LCCE, LAC.)

Outgoing Call
   The action of placing a call by an LAC, typically in response to policy directed by the peer in an Outgoing Call Request. (See also: Call, Incoming Call, Outgoing Call Request.)

Outgoing Call Request
   A request sent to an LAC to place an outgoing call. The request contains specific information not known a priori by the LAC (e.g., a number to dial). (See also: Call, Incoming Call, Outgoing Call.)

Packet-Switched Network (PSN)
   A network that uses packet switching technology for data delivery. For L2TPv3, this layer is principally IP. Other examples include MPLS, Frame Relay, and ATM.
Peer
   When used in context with L2TP, Peer refers to the far end of an L2TP control connection (i.e., the remote LCCE). An LAC's peer may be either an LNS or another LAC. Similarly, an LNS's peer may be either an LAC or another LNS. (See also: LAC, LCCE, LNS.)

Pseudowire (PW)
   An emulated circuit as it traverses a PSN. There is one Pseudowire per L2TP Session. (See also: Packet-Switched Network, Session.)

Pseudowire Type
   The payload type being carried within an L2TP session. Examples include PPP, Ethernet, and Frame Relay. (See also: Session.)

Remote System
   An end system or router connected by a circuit to an LAC.

Session
   An L2TP session is the entity that is created between two LCCEs in order to exchange parameters for and maintain an emulated L2 connection. Multiple sessions may be associated with a single Control Connection.

Zero-Length Body (ZLB) Message
   A control message with only an L2TP header. ZLB messages are used only to acknowledge messages on the L2TP reliable control connection. (See also: Control Message.)

2. Topology

L2TP operates between two L2TP Control Connection Endpoints (LCCEs), tunneling traffic across a packet network. There are three predominant tunneling models in which L2TP operates: LAC-LNS (or vice versa), LAC-LAC, and LNS-LNS. These models are diagrammed below. (Dotted lines designate network connections. Solid lines designate circuit connections.)

Figure 2.0: L2TP Reference Models

(a) LAC-LNS Reference Model: On one side, the LAC receives traffic from an L2 circuit, which it forwards via L2TP across an IP or other packet-based network. On the other side, an LNS logically terminates the L2 circuit locally and routes network traffic to the home network. The action of session establishment is driven by the LAC (as an incoming call) or the LNS (as an outgoing call).
      +-----+   L2   +-----+                  +-----+
      |     |--------| LAC |......[ IP ]......| LNS |...[home network]
      +-----+        +-----+                  +-----+
   remote system
                     |<--   emulated service    -->|
             |<------------- L2 service ------------->|

(b) LAC-LAC Reference Model: In this model, both LCCEs are LACs. Each LAC forwards circuit traffic from the remote system to the peer LAC using L2TP, and vice versa. In its simplest form, an LAC acts as a simple cross-connect between a circuit to a remote system and an L2TP session. This model typically involves symmetric establishment; that is, either side of the connection may initiate a session at any time (or simultaneously, in which case a tie-breaking mechanism is utilized).

      +-----+   L2   +-----+               +-----+   L2   +-----+
      |     |--------| LAC |.....[ IP ]....| LAC |--------|     |
      +-----+        +-----+               +-----+        +-----+
   remote system                                     remote system
                     |<-  emulated service ->|
      |<------------------ L2 service ------------------>|

(c) LNS-LNS Reference Model: This model has two LNSs as the LCCEs. A user-level, traffic-generated, or signaled event typically drives session establishment from one side of the tunnel. For example, a tunnel generated from a PC by a user, or automatically by customer premises equipment.

                    +-----+               +-----+
   [home network]...| LNS |.....[ IP ]....| LNS |...[home network]
                    +-----+               +-----+
                    |<-  emulated service ->|
                    |<----   L2 service ---->|

Note: In L2TPv2, user-driven tunneling of this type is often referred to as "voluntary tunneling" [RFC2809]. Further, an LNS acting as part of a software package on a host is sometimes referred to as an "LAC Client" [RFC2661].

3. Protocol Overview

L2TP is comprised of two types of messages, control messages and data messages (sometimes referred to as "control packets" and "data packets", respectively). Control messages are used in the establishment, maintenance, and clearing of control connections and sessions. These messages utilize a reliable control channel within L2TP to guarantee delivery (see Section 4.2 for details).
Data messages are used to encapsulate the L2 traffic being carried over the L2TP session. Unlike control messages, data messages are not retransmitted when packet loss occurs.

The L2TPv3 control message format defined in this document borrows largely from L2TPv2. These control messages are used in conjunction with the associated protocol state machines that govern the dynamic setup, maintenance, and teardown for L2TP sessions. The data message format for tunneling data packets may be utilized with or without the L2TP control channel, either via manual configuration or via other signaling methods to pre-configure or distribute L2TP session information. Utilization of the L2TP data message format with other signaling methods is outside the scope of this document.

Figure 3.0: L2TPv3 Structure

   +-------------------+      +-----------------------+
   |  Tunneled Frame   |      | L2TP Control Message  |
   +-------------------+      +-----------------------+
   | L2TP Data Header  |      |  L2TP Control Header  |
   +-------------------+      +-----------------------+
   | L2TP Data Channel |      | L2TP Control Channel  |
   |   (unreliable)    |      |      (reliable)       |
   +-------------------+------+-----------------------+
   |  Packet-Switched Network (IP, FR, MPLS, etc.)    |
   +--------------------------------------------------+

Figure 3.0 depicts the relationship of control messages and data messages over the L2TP control and data channels, respectively. Data messages are passed over an unreliable data channel, encapsulated by an L2TP header, and sent over a Packet-Switched Network (PSN) such as IP, UDP, Frame Relay, ATM, MPLS, etc. Control messages are sent over a reliable L2TP control channel, which operates over the same PSN.

The necessary setup for tunneling a session with L2TP consists of two steps: (1) establishing the control connection, and (2) establishing a session as triggered by an incoming call or outgoing call. An L2TP session MUST be established before L2TP can begin to forward session frames.
   Multiple sessions may be bound to a single control connection, and
   multiple control connections may exist between the same two LCCEs.

3.1.  Control Message Types

   The Message Type AVP (see Section 5.4.1) defines the specific type
   of control message being sent.  This document defines the following
   control message types (see Sections 6.1 through 6.15 for details on
   the construction and use of each message):

      Control Connection Management

         0  (reserved)
         1  (SCCRQ)    Start-Control-Connection-Request
         2  (SCCRP)    Start-Control-Connection-Reply
         3  (SCCCN)    Start-Control-Connection-Connected
         4  (StopCCN)  Stop-Control-Connection-Notification
         5  (reserved)
         6  (HELLO)    Hello
         20 (ACK)      Explicit Acknowledgement

      Session Management

         7  (OCRQ)     Outgoing-Call-Request
         8  (OCRP)     Outgoing-Call-Reply
         9  (OCCN)     Outgoing-Call-Connected
         10 (ICRQ)     Incoming-Call-Request
         11 (ICRP)     Incoming-Call-Reply
         12 (ICCN)     Incoming-Call-Connected
         13 (reserved)
         14 (CDN)      Call-Disconnect-Notify

      Error Reporting

         15 (WEN)      WAN-Error-Notify

      Link Status Change Reporting

         16 (SLI)      Set-Link-Info

3.2.  L2TP Header Formats

   This section defines header formats for L2TP control messages and
   L2TP data messages.  All values are placed into their respective
   fields and sent in network order (high-order octets first).

3.2.1.  L2TP Control Message Header

   The L2TP control message header provides information for the
   reliable transport of messages that govern the establishment,
   maintenance, and teardown of L2TP sessions.  By default, control
   messages are sent over the underlying media in-band with L2TP data
   messages.

   The L2TP control message header is formatted as follows:

   Figure 3.2.1: L2TP Control Message Header

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |T|L|x|x|S|x|x|x|x|x|x|x|  Ver  |             Length            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                     Control Connection ID                     |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |               Ns              |               Nr              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   The T bit MUST be set to 1, indicating that this is a control
   message.  The L and S bits MUST be set to 1, indicating that the
   Length field and sequence numbers are present.  The x bits are
   reserved for future extensions.  All reserved bits MUST be set to 0
   on outgoing messages and ignored on incoming messages.  The Ver
   field indicates the version of the L2TP control message header
   described in this document.
On sending, this field MUST be set to 3 for all messages (unless operating in an environment that includes L2TPv2 [RFC2661] and/or L2F [RFC2341] as well, see Section 4.1 for details). The Length field indicates the total length of the message in octets, always calculated from the start of the control message header itself (beginning with the T bit). The Control Connection ID field contains the identifier for the control connection. L2TP control connections are named by identifiers that have local significance only. That is, the same control connection will be given unique Control Connection IDs by each LCCE from within each endpoint's own Control Connection ID number space. As such, the Control Connection ID in each message is that of the intended recipient, not the sender. Non-zero Control Connection IDs are selected and exchanged as Assigned Control Connection ID AVPs during the creation of a control connection. Ns indicates the sequence number for this control message, beginning at zero and incrementing by one (modulo 2**16) for each message sent. See Section 4.2 for more information on using this field. Nr indicates the sequence number expected in the next control message to be received. Thus, Nr is set to the Ns of the last in-order message received plus one (modulo 2**16). See Section 4.2 for more information on using this field. 3.2.2. L2TP Data Message In general, an L2TP data message consists of a (1) Session Header, (2) an optional L2-Specific Sublayer, and (3) the Tunnel Payload, as depicted below. Figure 3.2.2: L2TP Data Message Header +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | L2TP Session Header | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | L2-Specific Sublayer | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Tunnel Payload ... 
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   The L2TP Session Header is specific to the encapsulating PSN over
   which the L2TP traffic is delivered.  The Session Header MUST
   provide (1) a method of distinguishing traffic among multiple L2TP
   data sessions and (2) a method of distinguishing data messages from
   control messages.

   Each type of encapsulating PSN MUST define its own session header,
   clearly identifying the format of the header and parameters
   necessary to set up the session.  Section 4.1 defines two session
   headers, one for transport over UDP and one for transport over IP.

   The L2-Specific Sublayer is an intermediary layer between the L2TP
   session header and the start of the tunneled frame.  It contains
   control fields that are used to facilitate the tunneling of each
   frame (e.g., sequence numbers or flags).  The Default L2-Specific
   Sublayer for L2TPv3 is defined in Section 4.6.

   The Data Message Header is followed by the Tunnel Payload, including
   any necessary L2 framing as defined in the payload-specific
   companion documents.

3.3.  Control Connection Management

   The L2TP control connection handles dynamic establishment, teardown,
   and maintenance of the L2TP sessions and of the control connection
   itself.  The reliable delivery of control messages is described in
   Section 4.2.  This section describes typical control connection
   establishment and teardown exchanges.

   It is important to note that, in the diagrams that follow, the
   reliable control message delivery mechanism exists independently of
   the L2TP state machine.  For instance, Explicit Acknowledgement
   (ACK) messages may be sent after any of the control messages
   indicated in the exchanges below if an acknowledgment is not
   piggybacked on a later control message.

   LCCEs are identified during control connection establishment either
   by the Host Name AVP, the Router ID AVP, or a combination of the two
   (see Section 5.4.3).
The identity of a peer LCCE is central to selecting proper configuration parameters (i.e., Hello interval, window size, etc.) for a control connection, as well as for determining how to set up associated sessions within the control connection, password lookup for control connection authentication, control connection level tie breaking, etc. 3.3.1. Control Connection Establishment Establishment of the control connection involves an exchange of AVPs that identifies the peer and its capabilities. A three-message exchange is used to establish the control connection. The following is a typical message exchange: LCCE A LCCE B ------ ------ SCCRQ -> <- SCCRP SCCCN -> 3.3.2. Control Connection Teardown Control connection teardown may be initiated by either LCCE and is accomplished by sending a single StopCCN control message. As part of the reliable control message delivery mechanism, the recipient of a StopCCN MUST send an ACK message to acknowledge receipt of the message and maintain enough control connection state to properly accept StopCCN retransmissions over at least a full retransmission cycle (in case the ACK message is lost). The recommended time for a full retransmission cycle is at least 31 seconds (see Section 4.2). The following is an example of a typical control message exchange: LCCE A LCCE B ------ ------ StopCCN -> (Clean up) (Wait) (Clean up) An implementation may shut down an entire control connection and all sessions associated with the control connection by sending the StopCCN. Thus, it is not necessary to clear each session individually when tearing down the whole control connection. 3.4. Session Management After successful control connection establishment, individual sessions may be created. Each session corresponds to a single data stream between the two LCCEs. This section describes the typical call establishment and teardown exchanges. 3.4.1. Session Establishment for an Incoming Call A three-message exchange is used to establish the session. 
The following is a typical sequence of events: LCCE A LCCE B ------ ------ (Call Detected) ICRQ -> <- ICRP (Call Accepted) ICCN -> 3.4.2. Session Establishment for an Outgoing Call A three-message exchange is used to set up the session. The following is a typical sequence of events: LCCE A LCCE B ------ ------ <- OCRQ OCRP -> (Perform Call Operation) OCCN -> (Call Operation Completed Successfully) 3.4.3. Session Teardown Session teardown may be initiated by either the LAC or LNS and is accomplished by sending a CDN control message. After the last session is cleared, the control connection MAY be torn down as well (and typically is). The following is an example of a typical control message exchange: LCCE A LCCE B ------ ------ CDN -> (Clean up) (Clean up) 4. Protocol Operation 4.1. L2TP Over Specific Packet-Switched Networks (PSNs) L2TP may operate over a variety of PSNs. There are two modes described for operation over IP, L2TP directly over IP (see Section 4.1.1) and L2TP over UDP (see Section 4.1.2). L2TPv3 implementations MUST support L2TP over IP and SHOULD support L2TP over UDP for better NAT and firewall traversal, and for easier migration from L2TPv2. L2TP over other PSNs may be defined, but the specifics are outside the scope of this document. Examples of L2TPv2 over other PSNs include [RFC3070] and [RFC3355]. The following field definitions are defined for use in all L2TP Session Header encapsulations. Session ID A 32-bit field containing a non-zero identifier for a session. L2TP sessions are named by identifiers that have local significance only. That is, the same logical session will be given different Session IDs by each end of the control connection for the life of the session. When the L2TP control connection is used for session establishment, Session IDs are selected and exchanged as Local Session ID AVPs during the creation of a session. 
The Session ID alone provides the necessary context for all further packet processing, including the presence, size, and value of the Cookie, the type of L2-Specific Sublayer, and the type of payload being tunneled. Cookie The optional Cookie field contains a variable-length value (maximum 64 bits) used to check the association of a received data message with the session identified by the Session ID. The Cookie MUST be set to the configured or signaled random value for this session. The Cookie provides an additional level of guarantee that a data message has been directed to the proper session by the Session ID. A well-chosen Cookie may prevent inadvertent misdirection of stray packets with recently reused Session IDs, Session IDs subject to packet corruption, etc. The Cookie may also provide protection against some specific malicious packet insertion attacks, as described in Section 8.2. When the L2TP control connection is used for session establishment, random Cookie values are selected and exchanged as Assigned Cookie AVPs during session creation. 4.1.1. L2TPv3 over IP L2TPv3 over IP (both versions) utilizes the IANA-assigned IP protocol ID 115. 4.1.1.1. L2TPv3 Session Header Over IP Unlike L2TP over UDP, the L2TPv3 session header over IP is free of any restrictions imposed by coexistence with L2TPv2 and L2F. As such, the header format has been designed to optimize packet processing. The following session header format is utilized when operating L2TPv3 over IP: Figure 4.1.1.1: L2TPv3 Session Header Over IP 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Session ID | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Cookie (optional, maximum 64 bits)... +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The Session ID and Cookie fields are as defined in Section 4.1. 
The Session ID of zero is reserved for use by L2TP control messages (see Section 4.1.1.2). 4.1.1.2. L2TP Control and Data Traffic over IP Unlike L2TP over UDP, which uses the T bit to distinguish between L2TP control and data packets, L2TP over IP uses the reserved Session ID of zero (0) when sending control messages. It is presumed that checking for the zero Session ID is more efficient -- both in header size for data packets and in processing speed for distinguishing between control and data messages -- than checking a single bit. The entire control message header over IP, including the zero session ID, appears as follows: Figure 4.1.1.2: L2TPv3 Control Message Header Over IP 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | (32 bits of zeros) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |T|L|x|x|S|x|x|x|x|x|x|x| Ver | Length | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Control Connection ID | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Ns | Nr | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ Named fields are as defined in Section 3.2.1. Note that the Length field is still calculated from the beginning of the control message header, beginning with the T bit. It does NOT include the "(32 bits of zeros)" depicted above. When operating directly over IP, L2TP packets lose the ability to take advantage of the UDP checksum as a simple packet integrity check, which is of particular concern for L2TP control messages. Control Message Authentication (see Section 4.3), even with an empty password field, provides for a sufficient packet integrity check and SHOULD always be enabled. 4.1.2. L2TP over UDP L2TPv3 over UDP must consider other L2 tunneling protocols that may be operating in the same environment, including L2TPv2 [RFC2661] and L2F [RFC2341]. 
While there are efficiencies gained by running L2TP directly over IP, there are possible side effects as well. For instance, L2TP over IP is not as NAT-friendly as L2TP over UDP. 4.1.2.1. L2TP Session Header Over UDP The following session header format is utilized when operating L2TPv3 over UDP: Figure 4.1.2.1: L2TPv3 Session Header over UDP 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |T|x|x|x|x|x|x|x|x|x|x|x| Ver | Reserved | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Session ID | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Cookie (optional, maximum 64 bits)... +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The T bit MUST be set to 0, indicating that this is a data message. The x bits and Reserved field are reserved for future extensions. All reserved values MUST be set to 0 on outgoing messages and ignored on incoming messages. The Ver field MUST be set to 3, indicating an L2TPv3 message. Note that the initial bits 1, 4, 6, and 7 have meaning in L2TPv2 [RFC2661], and are deprecated and marked as reserved in L2TPv3. Thus, for UDP mode on a system that supports both versions of L2TP, it is important that the Ver field be inspected first to determine the Version of the header before acting upon any of these bits. The Session ID and Cookie fields are as defined in Section 4.1. 4.1.2.2. UDP Port Selection The method for UDP Port Selection defined in this section is identical to that defined for L2TPv2 [RFC2661]. When negotiating a control connection over UDP, control messages MUST be sent as UDP datagrams using the registered UDP port 1701 [RFC1700]. The initiator of an L2TP control connection picks an available source UDP port (which may or may not be 1701) and sends to the desired destination address at port 1701. 
   The recipient picks a free port on its own system (which may or may
   not be 1701) and sends its reply to the initiator's UDP port and
   address, setting its own source port to the free port it found.  Any
   subsequent traffic associated with this control connection (either
   control traffic or data traffic from a session established through
   this control connection) must use these same UDP ports.

   It has been suggested that having the recipient choose an arbitrary
   source port (as opposed to using the destination port in the packet
   initiating the control connection, i.e., 1701) may make it more
   difficult for L2TP to traverse some NAT devices.  Implementations
   should consider the potential implication of this capability before
   choosing an arbitrary source port.  A NAT device that can pass TFTP
   traffic with variant UDP ports should be able to pass L2TP UDP
   traffic since both protocols employ similar policies with regard to
   UDP port selection.

4.1.2.3.  UDP Checksum

   The tunneled frames that L2TP carries often have their own checksums
   or integrity checks, rendering the UDP checksum redundant for much
   of the L2TP data message contents.  Thus, UDP checksums MAY be
   disabled in order to reduce the associated packet processing burden
   at the L2TP endpoints.

   The L2TP header itself does not have its own checksum or integrity
   check.  However, use of the L2TP Session ID and Cookie pair guards
   against accepting an L2TP data message if corruption of the Session
   ID or associated Cookie has occurred.  When the L2-Specific Sublayer
   is present in the L2TP header, there is no built-in integrity check
   for the information contained therein if UDP checksums or some other
   integrity check is not employed.  IPsec (see Section 4.1.3) may be
   used for strong integrity protection of the entire contents of L2TP
   data messages.

   UDP checksums MUST be enabled for L2TP control messages.

4.1.3.  L2TP and IPsec

   The L2TP data channel does not provide cryptographic security of any
   kind.
   If the L2TP data channel operates over a public or untrusted IP
   network where privacy of the L2TP data is of concern or
   sophisticated attacks against L2TP are expected to occur, IPsec
   [RFC2401] MUST be made available to secure the L2TP traffic.  Either
   L2TP over UDP or L2TP over IP may be secured with IPsec.

   [RFC3193] defines the recommended method for securing L2TPv2.  With
   respect to IPsec, L2TPv3 over UDP possesses characteristics
   identical to L2TPv2, and implementations MUST follow the same
   recommendation.  When operating over IP directly, [RFC3193] still
   applies, though references to UDP source and destination ports (in
   particular, those in Section 4, "IPsec Filtering details when
   protecting L2TP") may be ignored.  Instead, the selectors used to
   identify L2TPv3 traffic are simply the source and destination IP
   addresses for the tunnel endpoints together with the L2TPv3 IP
   protocol type, 115.

   Access control to resources reachable through the tunnel endpoints
   may be performed at the network layer above L2TP.  These network
   layer access control features may be handled at an LCCE via
   vendor-specific authorization features, or at the network layer
   itself by using IPsec transport mode end-to-end between the
   communicating hosts.  The requirements for access control mechanisms
   are not a part of the L2TP specification, and as such, are outside
   the scope of this document.

   Protecting the L2TP packet stream with IPsec does, in turn, also
   protect the data within the tunneled session packets while
   transported from one LCCE to the other.  Such protection must not be
   considered a substitute for end-to-end security between
   communicating hosts or applications.

4.1.4.  IP Fragmentation Issues

   Fragmentation and reassembly in network equipment generally require
   significantly greater resources than sending or receiving a packet
   as a single unit.  As such, fragmentation and reassembly should be
   avoided whenever possible.
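A quick way to see whether fragmentation will occur is to add up the per-packet encapsulation overhead against the path MTU. The sketch below (Python) uses sizes taken from this document — a 4-octet Session ID (Section 4.1.1.1), an optional Cookie of up to 8 octets, and the 4-octet Default L2-Specific Sublayer (Section 4.6) — plus a 20-octet IPv4 header; treat the defaults as illustrative assumptions, since the Cookie, sublayer, and L2 framing are all configuration-dependent:

```python
def l2tp_overhead(ip_header=20, cookie_len=8, l2_sublayer=4, l2_framing=0):
    """Per-packet overhead for L2TPv3 directly over IPv4.

    ip_header:   outer IPv4 header (20 octets, no options assumed)
    cookie_len:  optional Cookie, 0 to 8 octets
    l2_sublayer: optional L2-Specific Sublayer (4 for the default format)
    l2_framing:  pseudowire-specific L2 framing, if any
    """
    session_id = 4  # the Session ID field is always present
    return ip_header + session_id + cookie_len + l2_sublayer + l2_framing

def fits_without_fragmentation(frame_len, path_mtu, **overhead_kwargs):
    """Would this tunneled frame fit the path MTU towards the peer LCCE
    without IP fragmentation?"""
    return frame_len + l2tp_overhead(**overhead_kwargs) <= path_mtu
```

For example, with the default assumptions a 1460-octet frame fits a 1500-octet path MTU, but a full 1500-octet Ethernet payload would not, triggering one of the fragmentation strategies discussed below.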
Ideal solutions for avoiding fragmentation include proper configuration and management of MTU sizes among the Remote System, the LCCE, and the IP network, as well as adaptive measures that operate with the originating host (e.g., [RFC1191], [RFC1981]) to reduce the packet sizes at the source. An LCCE MAY fragment a packet before encapsulating it in L2TP. For example, if an IPv4 packet arrives at an LCCE from a Remote System that, after encapsulation with its associated framing, L2TP, and IP, does not fit in the available path MTU towards its LCCE peer, the local LCCE may perform IPv4 fragmentation on the packet before tunnel encapsulation. This creates two (or more) L2TP packets, each carrying an IPv4 fragment with its associated framing. This ultimately has the effect of placing the burden of fragmentation on the LCCE, while reassembly occurs on the IPv4 destination host. If an IPv6 packet arrives at an LCCE from a Remote System that, after encapsulation with associated framing, L2TP and IP, does not fit in the available path MTU towards its L2TP peer, the Generic Packet Tunneling specification [RFC2473], Section 7.1 SHOULD be followed. In this case, the LCCE should either send an ICMP Packet Too Big message to the data source, or fragment the resultant L2TP/IP packet (for reassembly by the L2TP peer). If the amount of traffic requiring fragmentation and reassembly is rather light, or there are sufficiently optimized mechanisms at the tunnel endpoints, fragmentation of the L2TP/IP packet may be sufficient for accommodating mismatched MTUs that cannot be managed by more efficient means. This method effectively emulates a larger MTU between tunnel endpoints and should work for any type of L2- encapsulated packet. Note that IPv6 does not support "in-flight" fragmentation of data packets. 
   Thus, unlike IPv4, the MTU of the path towards an L2TP peer must be
   known in advance (or the last-resort IPv6 minimum MTU of 1280 bytes
   utilized) so that IPv6 fragmentation may occur at the LCCE.

   In summary, attempting to control the source MTU by communicating
   with the originating host, ensuring that the MTU on the path between
   LCCE peers is sufficiently large to tunnel a frame from any other
   interface without fragmentation, fragmenting IP packets before
   encapsulation with L2TP/IP, or fragmenting the resultant L2TP/IP
   packet between the tunnel endpoints, are all valid methods for
   managing MTU mismatches.  Some are clearly better than others
   depending on the given deployment.  For example, a passive
   monitoring application using L2TP would certainly not wish to have
   ICMP messages sent to a traffic source.  Further, if the links
   connecting a set of LCCEs have a very large MTU (e.g., SDH/SONET)
   and it is known that all links being tunneled by L2TP have smaller
   MTUs (e.g., 1500 bytes), then any IP fragmentation and reassembly
   enabled on the participating LCCEs would never be utilized.

   An implementation MUST implement at least one of the methods
   described in this section for managing mismatched MTUs, based on
   careful consideration of how the final product will be deployed.
   L2TP-specific fragmentation and reassembly methods, which may or may
   not depend on the characteristics of the type of link being tunneled
   (e.g., judicious packing of ATM cells), may be defined as well, but
   these methods are outside the scope of this document.

4.2.  Reliable Delivery of Control Messages

   L2TP provides a lower level reliable delivery service for all
   control messages.  The Nr and Ns fields of the control message
   header (see Section 3.2.1) belong to this delivery mechanism.  The
   upper level functions of L2TP are not concerned with retransmission
   or ordering of control messages.
   The reliable control messaging mechanism is a sliding window
   mechanism that provides control message retransmission and
   congestion control.  Each peer maintains separate sequence number
   state for each control connection.  The message sequence number, Ns,
   begins at 0.  Each subsequent message is sent with the next
   increment of the sequence number.  The sequence number is thus a
   free-running counter represented modulo 65536.

   The sequence number in the header of a received message is
   considered less than or equal to the last received number if its
   value lies in the range of the last received number and the
   preceding 32767 values, inclusive.  For example, if the last
   received sequence number was 15, then messages with sequence numbers
   0 through 15, as well as 32784 through 65535, would be considered
   less than or equal.  Such a message would be considered a duplicate
   of a message already received and would be ignored.  However, in
   order to ensure that all messages are acknowledged properly
   (particularly in the case of a lost ACK message), receipt of
   duplicate messages MUST be acknowledged by the reliable delivery
   mechanism.  This acknowledgment may either be piggybacked on a
   message in queue or be sent explicitly via an ACK message.

   All control messages take up one slot in the control message
   sequence number space, except the ACK message.  Thus, Ns is not
   incremented after an ACK message is sent.

   The last received message number, Nr, is used to acknowledge
   messages received by an L2TP peer.  It contains the sequence number
   of the message the peer expects to receive next (e.g., the last Ns
   of a non-ACK message received plus 1, modulo 65536).  While the Nr
   in a received ACK message is used to flush messages from the local
   retransmit queue (see below), the Nr of the next message sent is not
   updated by the Ns of the ACK message.  Nr SHOULD be sanity-checked
   before flushing the retransmit queue.
   For instance, if the Nr received in a control message is greater
   than the last Ns sent plus 1 modulo 65536, the control message is
   clearly invalid.

   The reliable delivery mechanism at a receiving peer is responsible
   for making sure that control messages are delivered in order and
   without duplication to the upper level.  Messages arriving
   out-of-order may be queued for in-order delivery when the missing
   messages are received.  Alternatively, they may be discarded, thus
   requiring a retransmission by the peer.  When dropping out-of-order
   control packets, Nr MAY be updated before the packet is discarded.

   Each control connection maintains a queue of control messages to be
   transmitted to its peer.  The message at the front of the queue is
   sent with a given Ns value and is held until a control message
   arrives from the peer in which the Nr field indicates receipt of
   this message.  After a period of time (a recommended default is 1
   second but SHOULD be configurable) passes without acknowledgment,
   the message is retransmitted.  The retransmitted message contains
   the same Ns value, but the Nr value MUST be updated with the
   sequence number of the next expected message.  Each subsequent
   retransmission of a message MUST employ an exponential backoff
   interval; that is, if the first retransmission occurs after 1
   second, the next retransmission occurs after 2 seconds, then 4
   seconds, etc., up to a maximum interval determined by the
   implementation.  The maximum interval SHOULD be no less than 8
   seconds per retransmission.  If no peer response is detected after
   several retransmissions (a recommended default is 10, but MUST be
   configurable), the control connection and all associated sessions
   MUST be cleared.  As it is the first message to establish a control
   connection, the SCCRQ MAY employ a different retransmission maximum
   than other control messages in order to help facilitate failover to
   alternate LCCEs in a timely fashion.

   When a control connection is being shut down for reasons other than
   loss of connectivity, the state and reliable delivery mechanisms
   MUST be maintained and operated for the full retransmission interval
   after the final StopCCN message has been sent (e.g., 1 + 2 + 4 + 8 +
   8... seconds), or until the StopCCN message itself has been
   acknowledged.
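The modulo-2**16 "less than or equal" comparison and the retransmission backoff described above can be sketched as follows (Python; the 1-second initial interval and 8-second cap are the recommended values from the text):

```python
MOD = 1 << 16  # Ns/Nr are free-running counters modulo 65536

def seq_le(a, b):
    """True if sequence number a is "less than or equal to" b: a lies
    within b and the preceding 32767 values, inclusive (modulo 2**16).
    Such a received Ns is treated as a duplicate."""
    return (b - a) % MOD <= 32767

def retransmit_schedule(retries=10, first=1, cap=8):
    """Exponential backoff intervals (seconds) for control message
    retransmission: 1, 2, 4, 8, 8, ... up to the retry limit."""
    out, t = [], first
    for _ in range(retries):
        out.append(t)
        t = min(2 * t, cap)
    return out
```

With a last received Ns of 15, sequence numbers 0 through 15 and 32784 through 65535 all compare as "less than or equal", matching the worked example above; and the first six backoff intervals sum to 1 + 2 + 4 + 8 + 8 + 8 = 31 seconds, the recommended minimum full retransmission cycle from Section 3.3.2.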
A sliding window mechanism is used for control message transmission and retransmission. Consider two peers, A and B. Suppose A specifies a Receive Window Size AVP with a value of N in the SCCRQ or SCCRP message. B is now allowed to have a maximum of N outstanding (i.e., unacknowledged) control messages. Once N messages have been sent, B must wait for an acknowledgment from A that advances the window before sending new control messages. An implementation may advertise a non-zero receive window as small or as large as it wishes, depending on its own ability to process incoming messages before sending an acknowledgement. Each peer MUST limit the number of unacknowledged messages it will send before receiving an acknowledgement by this Receive Window Size. The actual internal unacknowledged message send-queue depth may be further limited by local resource allocation or by dynamic slow-start and congestion- avoidance mechanisms. When retransmitting control messages, a slow start and congestion avoidance window adjustment procedure SHOULD be utilized. A recommended procedure is described in Appendix A. A peer MAY drop messages, but MUST NOT actively delay acknowledgment of messages as a technique for flow control of control messages. Appendix B contains examples of control message transmission, acknowledgment, and retransmission. 4.3. Control Message Authentication L2TP incorporates an optional authentication and integrity check for all control messages. This mechanism consists of a computed one-way hash over the header and body of the L2TP control message, a pre- configured shared secret, and a local and remote nonce (random value) exchanged via the Control Message Authentication Nonce AVP. This per-message authentication and integrity check is designed to perform a mutual authentication between L2TP nodes, perform integrity checking of all control messages, and guard against control message spoofing and replay attacks that would otherwise be trivial to mount. 
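The window behavior just described — peer B limited to N outstanding messages until A's Nr advances the window — can be sketched with the following skeleton (Python). This is illustrative only; timers, retransmission, and actual message transmission are elided, and the class and method names are invented:

```python
class ControlSender:
    """Minimal sketch of the send side of the control channel sliding
    window: at most `window` unacknowledged messages may be outstanding."""

    def __init__(self, window):
        self.window = window
        self.ns = 0            # next sequence number to assign
        self.unacked = []      # (ns, message) pairs awaiting acknowledgment

    def can_send(self):
        return len(self.unacked) < self.window

    def send(self, msg):
        if not self.can_send():
            raise RuntimeError("window full: wait for an acknowledgment")
        self.unacked.append((self.ns, msg))
        self.ns = (self.ns + 1) % 65536
        return msg

    def on_ack(self, nr):
        # The peer's Nr acknowledges every message "less than" nr
        # (modulo 2**16); flush those from the retransmit queue.
        self.unacked = [(n, m) for (n, m) in self.unacked
                        if (nr - 1 - n) % 65536 > 32767]
```

The flush test in `on_ack` is the same modulo-65536 comparison used for duplicate detection, applied to `nr - 1` (the last message the peer actually received).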
At least one shared secret (password) MUST exist between communicating L2TP nodes to enable Control Message Authentication. See Section 5.4.3 for details on calculation of the Message Digest and construction of the Control Message Authentication Nonce and Message Digest AVPs. L2TPv3 Control Message Authentication is similar to L2TPv2 [RFC2661] Tunnel Authentication in its use of a shared secret and one-way hash calculation. The principal difference is that, instead of computing the hash over selected contents of a received control message (e.g., the Challenge AVP and Message Type) as in L2TPv2, the entire message is used in the hash in L2TPv3. In addition, instead of including the hash digest in just the SCCRP and SCCCN messages, it is now included in all L2TP messages. The Control Message Authentication mechanism is optional, and may be disabled if both peers agree. For example, if IPsec is already being used for security and integrity checking between the LCCEs, the function of the L2TP mechanism becomes redundant and may be disabled. Presence of the Control Message Authentication Nonce AVP in an SCCRQ or SCCRP message serves as indication to a peer that Control Message Authentication is enabled. If an SCCRQ or SCCRP contains a Control Message Authentication Nonce AVP, the receiver of the message MUST respond with a Message Digest AVP in all subsequent messages sent. Control Message Authentication is always bidirectional; either both sides participate in authentication, or neither does. If Control Message Authentication is disabled, the Message Digest AVP still MAY be sent as an integrity check of the message. The integrity check is calculated as in Section 5.4.3, with an empty zero-length shared secret, local nonce, and remote nonce. If an invalid Message Digest is received, it should be assumed that the message has been corrupted in transit and the message dropped accordingly. 
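The flavor of this per-message check can be conveyed with a short sketch (Python). Note the heavy caveat: the exact digest construction — hash selection, field ordering, and zeroing of the Message Digest AVP before hashing — is specified in Section 5.4.3 and is not reproduced here; the HMAC-SHA1 keying below is an assumption made purely for illustration:

```python
import hashlib
import hmac

def control_message_digest(shared_secret, local_nonce, remote_nonce, message):
    """Illustrative only: a keyed one-way hash covering the entire
    control message together with the shared secret and both nonces,
    in the spirit of Section 4.3.  NOT the normative computation of
    Section 5.4.3 -- the key derivation here is an assumption."""
    key = shared_secret + local_nonce + remote_nonce
    return hmac.new(key, message, hashlib.sha1).digest()

def digest_valid(expected, shared_secret, local_nonce, remote_nonce, message):
    """Constant-time comparison of a received digest against the locally
    recomputed value; a mismatch means the message is dropped."""
    recomputed = control_message_digest(shared_secret, local_nonce,
                                        remote_nonce, message)
    return hmac.compare_digest(expected, recomputed)
```

Because the entire message is covered (unlike L2TPv2, which hashed only selected AVPs), any in-transit modification of the control message invalidates the digest.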
Implementations MAY rate-limit control messages, particularly SCCRQ messages, upon receipt for performance reasons or for protection against denial of service attacks. 4.4. Keepalive (Hello) L2TP employs a keepalive mechanism to detect loss of connectivity between a pair of LCCEs. This is accomplished by injecting Hello control messages (see Section 6.5) after a period of time has elapsed since the last data message or control message was received on an L2TP session or control connection, respectively. As with any other control message, if the Hello message is not reliably delivered, the sending LCCE declares that the control connection is down and resets its state for the control connection. This behavior ensures that a connectivity failure between the LCCEs is detected independently by each end of a control connection. Since the control channel is operated in-band with data traffic over the PSN, this single mechanism can be used to infer basic data connectivity between a pair of LCCEs for all sessions associated with the control connection. Periodic keepalive for the control connection MUST be implemented by sending a Hello if a period of time (a recommended default is 60 seconds, but MUST be configurable) has passed without receiving any message (data or control) from the peer. An LCCE sending Hello messages across multiple control connections between the same LCCE endpoints MUST employ a jittered timer mechanism to prevent grouping of Hello messages. 4.5. Forwarding Session Data Frames Once session establishment is complete, circuit frames are received at an LCCE, encapsulated in L2TP (with appropriate attention to framing, as described in documents for the particular pseudowire type), and forwarded over the appropriate session. For every outgoing data message, the sender places the identifier specified in the Local Session ID AVP (received from peer during session establishment) in the Session ID field of the L2TP data header. 
In this manner, session frames are multiplexed and demultiplexed between a given pair of LCCEs. Multiple control connections may exist between a given pair of LCCEs, and multiple sessions may be associated with a given control connection.

The peer LCCE receiving the L2TP data packet identifies the session with which the packet is associated by the Session ID in the data packet's header. The LCCE then checks the Cookie field in the data packet against the Cookie value received in the Assigned Cookie AVP during session establishment. It is important for implementers to note that the Cookie field check occurs after looking up the session context by the Session ID, and as such, consists merely of a value match of the Cookie field and that stored in the retrieved context. There is no need to perform a lookup across the Session ID and Cookie as a single value. Any received data packets that contain invalid Session IDs or associated Cookie values MUST be dropped.

Finally, the LCCE either forwards the network packet within the tunneled frame (e.g., as an LNS) or switches the frame to a circuit (e.g., as an LAC).

4.6. Default L2-Specific Sublayer

This document defines a Default L2-Specific Sublayer format (see Section 3.2.2) that a pseudowire may use for features such as sequencing support, L2 interworking, OAM, or other per-data-packet operations. The Default L2-Specific Sublayer SHOULD be used by a given PW type to support these features if it is adequate, and its presence is requested by a peer during session negotiation. Alternative sublayers MAY be defined (e.g., an encapsulation with a larger Sequence Number field or timing information) and identified for use via the L2-Specific Sublayer Type AVP.
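The receive-side order of operations described above (look up the session context by Session ID first, then do a plain value match on the Cookie) can be illustrated with a short Python sketch; the dictionary layout and key names are hypothetical, not taken from this document.

```python
def demux_data_packet(sessions, session_id, cookie):
    """Return the session context for a received data packet, or None if
    the packet must be dropped.

    sessions: hypothetical mapping of Session ID -> context dict holding
    the Assigned Cookie received during session establishment.
    """
    ctx = sessions.get(session_id)
    if ctx is None:
        return None            # invalid Session ID: drop the packet
    if cookie != ctx["assigned_cookie"]:
        return None            # Cookie mismatch: drop the packet
    return ctx                 # forward or switch the frame
```

Note that the Cookie is never part of the lookup key itself; it is only compared against the value already stored in the retrieved context.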
Figure 4.6: Default L2-Specific Sublayer Format

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|x|S|x|x|x|x|x|x|              Sequence Number                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The S (Sequence) bit is set to 1 when the Sequence Number contains a valid number for this sequenced frame. If the S bit is set to zero, the Sequence Number contents are undefined and MUST be ignored by the receiver.

The Sequence Number field contains a free-running counter of 2^24 sequence numbers. If the number in this field is valid, the S bit MUST be set to 1. The Sequence Number begins at zero, which is a valid sequence number. (In this way, implementations inserting sequence numbers do not have to "skip" zero when incrementing.) The sequence number in the header of a received message is considered less than or equal to the last received number if its value lies in the range of the last received number and the preceding (2^23-1) values, inclusive.

4.6.1. Sequencing Data Packets

The Sequence Number field may be used to detect lost, duplicate, or out-of-order packets within a given session. When L2 frames are carried over an L2TP-over-IP or L2TP-over-UDP/IP data channel, this part of the link has the characteristic of being able to reorder, duplicate, or silently drop packets. Reordering may break some non-IP protocols or L2 control traffic being carried by the link. Silent dropping or duplication of packets may break protocols that assume per-packet indications of error, such as TCP header compression. While a common mechanism for packet sequence detection is provided, the sequence dependency characteristics of individual protocols are outside the scope of this document.
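The "less than or equal" rule above (a received number is <= the last received number when it lies in the window of the last received number and the preceding 2^23-1 values, inclusive) amounts to serial-number arithmetic on a 24-bit counter. A small illustrative Python sketch, with names of my choosing:

```python
SEQ_SPACE = 1 << 24   # free-running 24-bit Sequence Number counter
HALF = 1 << 23

def seq_leq(received, last):
    """True if 'received' is considered less than or equal to 'last',
    i.e., it lies in the window of 'last' and the preceding 2^23 - 1
    values, inclusive (Section 4.6)."""
    return (last - received) % SEQ_SPACE < HALF
```

The modular subtraction makes the comparison work correctly across the wrap from 2^24-1 back to zero, which a plain integer comparison would get wrong.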
If any protocol being transported over L2TP data channels cannot tolerate misordering of data packets, packet duplication, or silent packet loss, sequencing may be enabled on some or all packets by using the S bit and Sequence Number field defined in the Default L2-Specific Sublayer (see Section 4.6).

For a given L2TP session, each LCCE is responsible for communicating to its peer the level of sequencing support that it requires of data packets that it receives. Mechanisms to advertise this information during session negotiation are provided (see Data Sequencing AVP in Section 5.4.4).

When determining whether a packet is in or out of sequence, an implementation SHOULD utilize a method that is resilient to temporary dropouts in connectivity coupled with high per-session packet rates. The recommended method is outlined in Appendix C.

4.7. L2TPv2/v3 Interoperability and Migration

L2TPv2 and L2TPv3 environments should be able to coexist while a migration to L2TPv3 is made. Migration issues are discussed for each media type in this section. Most issues apply only to implementations that require both L2TPv2 and L2TPv3 operation. However, even L2TPv3-only implementations must at least be mindful of these issues in order to interoperate with implementations that support both versions.

4.7.1. L2TPv3 over IP

L2TPv3 implementations running strictly over IP with no desire to interoperate with L2TPv2 implementations may safely disregard most migration issues from L2TPv2. All control messages and data messages are sent as described in this document, without normative reference to RFC 2661.

If one wishes to tunnel PPP over L2TPv3, and fall back to L2TPv2 only if it is not available, then L2TPv3 over UDP with automatic fallback (see Section 4.7.3) MUST be used. There is no deterministic method for automatic fallback from L2TPv3 over IP to either L2TPv2 or L2TPv3 over UDP.
One could infer whether L2TPv3 over IP is supported by sending an SCCRQ and waiting for a response, but this could be problematic during periods of packet loss between L2TP nodes.

4.7.2. L2TPv3 over UDP

The format of the L2TPv3 over UDP header is defined in Section 4.1.2.1. When operating over UDP, L2TPv3 uses the same port (1701) as L2TPv2 and shares the first two octets of header format with L2TPv2. The Ver field is used to distinguish L2TPv2 packets from L2TPv3 packets. If an implementation is capable of operating in L2TPv2 or L2TPv3 modes, it is possible to automatically detect whether a peer can support L2TPv2 or L2TPv3 and operate accordingly. The details of this fallback capability are defined in the following section.

4.7.3. Automatic L2TPv2 Fallback

When running over UDP, an implementation may detect whether a peer is L2TPv3-capable by sending a special SCCRQ that is properly formatted for both L2TPv2 and L2TPv3. This is accomplished by sending an SCCRQ with its Ver field set to 2 (for L2TPv2), and ensuring that any L2TPv3-specific AVPs (i.e., AVPs present within this document and not defined within RFC 2661) in the message are sent with each M bit set to 0, and that all L2TPv2 AVPs are present as they would be for L2TPv2. This is done so that L2TPv3 AVPs will be ignored by an L2TPv2-only implementation.

Note that, in both L2TPv2 and L2TPv3, the value contained in the space of the control message header utilized by the 32-bit Control Connection ID in L2TPv3, and the 16-bit Tunnel ID and 16-bit Session ID in L2TPv2, is always 0 for an SCCRQ. This effectively hides the fact that there are a pair of 16-bit fields in L2TPv2, and a single 32-bit field in L2TPv3.

If the peer implementation is L2TPv3-capable, a control message with the Ver field set to 3 and an L2TPv3 header and message format will be sent in response to the SCCRQ. Operation may then continue as L2TPv3.
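Since both versions share the first two octets of the header, distinguishing them reduces to reading the Ver field, which occupies the low 4 bits of the first 16-bit word. A minimal illustrative helper (the function name is mine, not from the specification):

```python
import struct

def l2tp_version(udp_payload):
    """Return the Ver field (2 for L2TPv2, 3 for L2TPv3) from the first
    two octets of an L2TP-over-UDP message; both versions share this
    part of the header (Section 4.7.2)."""
    (flags_ver,) = struct.unpack("!H", udp_payload[:2])
    return flags_ver & 0x000F
```

A dual-version implementation would dispatch on this value after receiving the response to its specially formatted SCCRQ.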
If a message is received with the Ver field set to 2, it must be assumed that the peer implementation is L2TPv2-only, thus enabling fallback to L2TPv2 mode to safely occur.

Note Well: The L2TPv2/v3 auto-detection mode requires that all L2TPv3 implementations over UDP be liberal in accepting an SCCRQ control message with the Ver field set to 2 or 3 and the presence of L2TPv2-specific AVPs. An L2TPv3-only implementation MUST ignore all L2TPv2 AVPs (e.g., those defined in RFC 2661 and not in this document) within an SCCRQ with the Ver field set to 2 (even if the M bit is set on the L2TPv2-specific AVPs).

5. Control Message Attribute Value Pairs

To maximize extensibility while permitting interoperability, a uniform method for encoding message types is used throughout L2TP. This encoding will be termed AVP (Attribute Value Pair) for the remainder of this document.

5.1. AVP Format

Each AVP is encoded as follows:

Figure 5.1: AVP Format

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|M|H| rsvd  |      Length       |           Vendor ID           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Attribute Type        |        Attribute Value ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                     (until Length is reached)                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The first six bits comprise a bit mask that describes the general attributes of the AVP. Two bits are defined in this document; the remaining bits are reserved for future extensions. Reserved bits MUST be set to 0 when sent and ignored upon receipt.

Mandatory (M) bit: Controls the behavior required of an implementation that receives an unrecognized AVP. The M bit of a given AVP MUST only be inspected and acted upon if the AVP is unrecognized (see Section 5.2).

Hidden (H) bit: Identifies the hiding of data in the Attribute Value field of an AVP.
This capability can be used to avoid the passing of sensitive data, such as user passwords, as cleartext in an AVP. Section 5.3 describes the procedure for performing AVP hiding.

Length: Contains the number of octets (including the Overall Length and bit mask fields) contained in this AVP. The Length may be calculated as 6 + the length of the Attribute Value field in octets. The field itself is 10 bits, permitting a maximum of 1023 octets of data in a single AVP. The minimum Length of an AVP is 6. If the Length is 6, then the Attribute Value field is absent.

Vendor ID: The IANA-assigned "SMI Network Management Private Enterprise Codes" [RFC1700] value. The value 0, corresponding to IETF-adopted attribute values, is used for all AVPs defined within this document. Any vendor wishing to implement its own L2TP extensions can use its own Vendor ID along with private Attribute values, guaranteeing that they will not collide with any other vendor's extensions or future IETF extensions. Note that there are 16 bits allocated for the Vendor ID, thus limiting this feature to the first 65,535 enterprises.

Attribute Type: A 2-octet value with a unique interpretation across all AVPs defined under a given Vendor ID.

Attribute Value: This is the actual value as indicated by the Vendor ID and Attribute Type. It follows immediately after the Attribute Type field and runs for the remaining octets indicated in the Length (i.e., Length minus 6 octets of header). This field is absent if the Length is 6.

In the event that the 16-bit Vendor ID space is exhausted, vendor-specific AVPs with a 32-bit Vendor ID MUST be encapsulated in the following manner:

Figure 5.2: Extended Vendor ID AVP Format

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|M|H| rsvd  |      Length       |               0               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|              58               |       32-bit Vendor ID ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Attribute Type        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Attribute Value ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                     (until Length is reached)                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

This AVP encodes a vendor-specific AVP with a 32-bit Vendor ID space within the Attribute Value field. Multiple AVPs of this type may exist in any message. The 16-bit Vendor ID MUST be 0, indicating that this is an IETF-defined AVP, and the Attribute Type MUST be 58, indicating that what follows is a vendor-specific AVP with a 32-bit Vendor ID code. This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP MUST be set to 0. The Length of the AVP is 12 plus the length of the Attribute Value.

5.2. Mandatory AVPs and Setting the M Bit

If the M bit is set on an AVP that is unrecognized by its recipient, the session or control connection associated with the control message containing the AVP MUST be shut down. If the control message containing the unrecognized AVP is associated with a session (e.g., an ICRQ, ICRP, ICCN, SLI, etc.), then the session MUST be issued a CDN with a Result Code of 2 and Error Code of 8 (as defined in Section 5.4.2) and shut down. If the control message containing the unrecognized AVP is associated with establishment or maintenance of a Control Connection (e.g., SCCRQ, SCCRP, SCCCN, Hello), then the associated Control Connection MUST be issued a StopCCN with Result Code of 2 and Error Code of 8 (as defined in Section 5.4.2) and shut down.

If the M bit is not set on an unrecognized AVP, the AVP MUST be ignored when received, processing the control message as if the AVP were not present.

Receipt of an unrecognized AVP that has the M bit set is catastrophic to the session or control connection with which it is associated.
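The basic AVP header layout of Section 5.1 (M and H bits, four reserved bits, a 10-bit Length, then Vendor ID and Attribute Type) can be illustrated with a Python sketch. These helper names are mine, the sketch covers only the plain header, and AVP hiding is not implemented here.

```python
import struct

def build_avp(attr_type, value, vendor_id=0, mandatory=False, hidden=False):
    """Encode a single AVP: 6 header octets followed by the Attribute Value.
    The 10-bit Length field caps an AVP at 1023 octets total."""
    length = 6 + len(value)
    if length > 1023:
        raise ValueError("AVP Length field is only 10 bits wide")
    flags_len = length | (0x8000 if mandatory else 0) | (0x4000 if hidden else 0)
    return struct.pack("!HHH", flags_len, vendor_id, attr_type) + value

def parse_avp(buf):
    """Decode the AVP at the front of buf; return (avp_dict, remaining_bytes)."""
    flags_len, vendor_id, attr_type = struct.unpack("!HHH", buf[:6])
    length = flags_len & 0x03FF          # Length is the low 10 bits
    return ({
        "m": bool(flags_len & 0x8000),   # Mandatory bit
        "h": bool(flags_len & 0x4000),   # Hidden bit
        "vendor_id": vendor_id,
        "attr_type": attr_type,
        "value": buf[6:length],
    }, buf[length:])
```

Parsing a control message body is then a matter of calling `parse_avp` repeatedly on the remaining bytes until none are left.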
Thus, the M bit should only be set for AVPs that are deemed crucial to proper operation of the session or control connection by the sender. AVPs that are considered crucial by the sender may vary by application and configured options. In no case shall a receiver of an AVP "validate" if the M bit is set on a recognized AVP. If the AVP is recognized (as all AVPs defined in this document MUST be for a compliant L2TPv3 specification), then by definition, the M bit is of no consequence.

The sender of an AVP is free to set its M bit to 1 or 0 based on whether the configured application strictly requires the value contained in the AVP to be recognized or not. For example, "Automatic L2TPv2 Fallback" in Section 4.7.3 requires the setting of the M bit on all new L2TPv3 AVPs to zero if fallback to L2TPv2 is supported and desired, and 1 if not.

The M bit is useful as extra assurance for support of critical AVP extensions. However, more explicit methods may be available to determine support for a given feature rather than using the M bit alone. For example, if a new AVP is defined in a message for which there is always a message reply (i.e., an ICRQ, ICRP, SCCRQ, or SCCRP message), rather than simply sending an AVP in the message with the M bit set, availability of the extension may be identified by sending an AVP in the request message and expecting a corresponding AVP in a reply message. This more explicit method, when possible, is preferred.

The M bit also plays a role in determining whether or not a malformed or out-of-range value within an AVP should be ignored or should result in termination of a session or control connection (see Section 7.1 for more details).

5.3. Hiding of AVP Attribute Values

The H bit in the header of each AVP provides a mechanism to indicate to the receiving peer whether the contents of the AVP are hidden or present in cleartext. This feature can be used to hide sensitive control message data such as user passwords, IDs, or other vital information.
The H bit MUST only be set if (1) a shared secret exists between the LCCEs and (2) Control Message Authentication is enabled (see Section 4.3). If the H bit is set in any AVP(s) in a given control message, at least one Random Vector AVP must also be present in the message and MUST precede the first AVP having an H bit of 1.

The shared secret between LCCEs is used to derive a unique shared key for hiding and unhiding calculations. The derived shared key is obtained via an HMAC-MD5 keyed hash [RFC2104], with the key consisting of the shared secret, and with the data being hashed consisting of a single octet containing the value 1.

shared_key = HMAC_MD5 (shared_secret, 1)

Hiding an AVP value is done in several steps. The first step is to take the length and value fields of the original (cleartext) AVP and encode them into the Hidden AVP Subformat, which appears as follows:

Figure 5.3: Hidden AVP Subformat

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Length of Original Value      |   Original Attribute Value ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
...                             |             Padding ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Length of Original Attribute Value: This is the length, in octets, of the Original Attribute Value to be obscured. It is necessary to determine the original length of the Attribute Value that is lost when the additional Padding is added.

Original Attribute Value: Attribute Value that is to be obscured.

Padding: Random additional octets used to obscure the length of the Attribute Value that is being hidden.

To mask the size of the data being hidden, the resulting subformat MAY be padded as shown above. Padding does NOT alter the value placed in the Length of Original Attribute Value field, but does alter the length of the resultant AVP that is being created.
For example, if an Attribute Value to be hidden is 4 octets in length, the unhidden AVP length would be 10 octets (6 + Attribute Value length). After hiding, the length of the AVP would become 6 + Attribute Value length + size of the Length of Original Attribute Value field + Padding. Thus, if Padding is 12 octets, the AVP length would be 6 + 4 + 2 + 12 = 24 octets.

Next, an MD5 [RFC1321] hash is performed (in network byte order) on the concatenation of the following:

+ the 2-octet Attribute number of the AVP
+ the shared key
+ an arbitrary length random vector

The value of the random vector used in this hash is passed in the value field of a Random Vector AVP. This Random Vector AVP must be placed in the message by the sender before any hidden AVPs. The same random vector may be used for more than one hidden AVP in the same message, but not for hiding two or more instances of an AVP with the same Attribute Type unless the Attribute Values in the two AVPs are also identical. When a different random vector is used for the hiding of subsequent AVPs, a new Random Vector AVP MUST be placed in the control message before the first AVP to which it applies.

The MD5 hash value is then XORed with the first 16-octet (or less) segment of the Hidden AVP Subformat and placed in the Attribute Value field of the Hidden AVP. If the Hidden AVP Subformat is less than 16 octets, the Subformat is transformed as if the Attribute Value field had been padded to 16 octets before the XOR. Only the actual octets present in the Subformat are modified, and the length of the AVP is not altered.

If the Subformat is longer than 16 octets, a second one-way MD5 hash is calculated over a stream of octets consisting of the shared key followed by the result of the first XOR. That hash is XORed with the second 16-octet (or less) segment of the Subformat and placed in the corresponding octets of the Value field of the Hidden AVP.
If necessary, this operation is repeated, with the shared key used along with each XOR result to generate the next hash to XOR the next segment of the value with.

The hiding method was adapted from [RFC2865], which was taken from the "Mixing in the Plaintext" section in the book "Network Security" by Kaufman, Perlman and Speciner [KPS]. A detailed explanation of the method follows:

Call the shared key S, the Random Vector RV, and the Attribute Type A. Break the value field into 16-octet chunks p_1, p_2, etc., with the last one padded at the end with random data to a 16-octet boundary. Call the ciphertext blocks c_1, c_2, etc. We will also define intermediate values b_1, b_2, etc.

   b_1 = MD5 (A + S + RV)     c_1 = p_1 xor b_1
   b_2 = MD5 (S + c_1)        c_2 = p_2 xor b_2
          .                          .
          .                          .
   b_i = MD5 (S + c_i-1)      c_i = p_i xor b_i

The String will contain c_1 + c_2 + ... + c_i, where "+" denotes concatenation.

On receipt, the random vector is taken from the last Random Vector AVP encountered in the message prior to the AVP to be unhidden. The above process is then reversed to yield the original value.

5.4. AVP Summary

The following sections contain a list of all L2TP AVPs defined in this document. Following the name of the AVP is a list indicating the message types that utilize each AVP. After each AVP title follows a short description of the purpose of the AVP, a detail (including a graphic) of the format for the Attribute Value, and any additional information needed for proper use of the AVP.

5.4.1. General Control Message AVPs

Message Type (All Messages)

The Message Type AVP, Attribute Type 0, identifies the control message herein and defines the context in which the exact meaning of the following AVPs will be determined. The Attribute Value field for this AVP has the following format:

 0                   1
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Message Type          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Message Type is a 2-octet unsigned integer.
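The chained MD5/XOR construction above translates directly into code. The following Python sketch is illustrative: the function names are mine, and only the transform itself is shown (placement of the Random Vector AVP in the message, etc., is omitted). Note the asymmetry in the chaining: the next hash is always keyed on the ciphertext block, which is the output when hiding and the input when unhiding.

```python
import hashlib
import hmac
import os
import struct

def derive_hiding_key(shared_secret):
    # Section 5.3: shared_key = HMAC_MD5(shared_secret, single octet 1)
    return hmac.new(shared_secret, b"\x01", hashlib.md5).digest()

def _chained_xor(attr_type, key, rv, data, chain_on_output):
    """Apply the b_i/c_i transform from Section 5.3 in 16-octet chunks."""
    out = bytearray()
    # b_1 = MD5(A + S + RV), with A as a 2-octet network-order value
    b = hashlib.md5(struct.pack("!H", attr_type) + key + rv).digest()
    for i in range(0, len(data), 16):
        chunk = data[i:i + 16]
        res = bytes(x ^ y for x, y in zip(chunk, b))  # zip truncates b for a short tail
        out += res
        ciphertext = res if chain_on_output else chunk
        b = hashlib.md5(key + ciphertext).digest()    # b_{i+1} = MD5(S + c_i)
    return bytes(out)

def hide_value(attr_type, value, key, rv, padding=0):
    # Hidden AVP Subformat: 2-octet original length, value, random Padding
    sub = struct.pack("!H", len(value)) + value + os.urandom(padding)
    return _chained_xor(attr_type, key, rv, sub, chain_on_output=True)

def unhide_value(attr_type, hidden, key, rv):
    sub = _chained_xor(attr_type, key, rv, hidden, chain_on_output=False)
    (orig_len,) = struct.unpack("!H", sub[:2])
    return sub[2:2 + orig_len]
```

A round trip through `hide_value` and `unhide_value` with the same derived key, Attribute Type, and random vector recovers the original value regardless of the padding length chosen by the sender.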
The Message Type AVP MUST be the first AVP in a message, immediately following the control message header (defined in Section 3.2.1). See Section 3.1 for the list of defined control message types and their identifiers.

The Mandatory (M) bit within the Message Type AVP has special meaning. Rather than an indication as to whether the AVP itself should be ignored if not recognized, it is an indication as to whether the control message itself should be ignored. If the M bit is set within the Message Type AVP and the Message Type is unknown to the implementation, the control connection MUST be cleared. If the M bit is not set, then the implementation may ignore an unknown message type. The M bit MUST be set to 1 for all message types defined in this document. This AVP MUST NOT be hidden (the H bit MUST be 0). The Length of this AVP is 8.

A vendor-specific control message may be defined by setting the Vendor ID of the Message Type AVP to a value other than the IETF Vendor ID of 0 (see Section 5.1). The Message Type AVP MUST still be the first AVP in the control message.

Message Digest (All Messages)

The Message Digest AVP, Attribute Type 59, is used as an integrity and authentication check of the L2TP Control Message header and body. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Digest Type   |              Message Digest ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                     ... (16 or 20 octets)                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Digest Type is a one-octet integer indicating the Digest calculation algorithm:

0  HMAC-MD5   [RFC2104]
1  HMAC-SHA-1 [RFC2104]

Digest Type 0 (HMAC-MD5) MUST be supported, while Digest Type 1 (HMAC-SHA-1) SHOULD be supported.
The Message Digest is of variable length and contains the result of the control message authentication and integrity calculation. For Digest Type 0 (HMAC-MD5), the length of the digest MUST be 16 bytes. For Digest Type 1 (HMAC-SHA-1), the length of the digest MUST be 20 bytes.

If Control Message Authentication is enabled, at least one Message Digest AVP MUST be present in all messages and MUST be placed immediately after the Message Type AVP. This forces the Message Digest AVP to begin at a well-known and fixed offset. A second Message Digest AVP MAY be present in a message and MUST be placed directly after the first Message Digest AVP.

The shared secret between LCCEs is used to derive a unique shared key for Control Message Authentication calculations. The derived shared key is obtained via an HMAC-MD5 keyed hash [RFC2104], with the key consisting of the shared secret, and with the data being hashed consisting of a single octet containing the value 2.

shared_key = HMAC_MD5 (shared_secret, 2)

Calculation of the Message Digest is as follows for all messages other than the SCCRQ (where "+" refers to concatenation):

Message Digest = HMAC_Hash (shared_key, local_nonce + remote_nonce + control_message)

HMAC_Hash: HMAC Hashing algorithm identified by the Digest Type (MD5 or SHA1)

local_nonce: Nonce chosen locally and advertised to the remote LCCE.

remote_nonce: Nonce received from the remote LCCE.

(The local_nonce and remote_nonce are advertised via the Control Message Authentication Nonce AVP, also defined in this section.)

shared_key: Derived shared key for this control connection

control_message: The entire contents of the L2TP control message, including the control message header and all AVPs. Note that the control message header in this case begins after the all-zero Session ID when running over IP (see Section 4.1.1.2), and after the UDP header when running over UDP (see Section 4.1.2.1).
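The key derivation and digest calculations above can be sketched in Python; the helper names are illustrative, and the caller is assumed to have already zeroed the digest field inside the serialized control message.

```python
import hashlib
import hmac

def derive_auth_key(shared_secret):
    # shared_key = HMAC_MD5(shared_secret, single octet with value 2)
    return hmac.new(shared_secret, b"\x02", hashlib.md5).digest()

def message_digest(shared_key, local_nonce, remote_nonce, control_message,
                   digest_type=0):
    """Digest Type 0 = HMAC-MD5 (16 octets), 1 = HMAC-SHA-1 (20 octets).
    control_message must already contain the Message Digest AVP with its
    digest field set to zeros."""
    algo = hashlib.md5 if digest_type == 0 else hashlib.sha1
    return hmac.new(shared_key,
                    local_nonce + remote_nonce + control_message,
                    algo).digest()

def sccrq_digest(shared_key, control_message, digest_type=0):
    # SCCRQ special case: the single nonce is already inside the message,
    # so the digest covers the control message alone.
    algo = hashlib.md5 if digest_type == 0 else hashlib.sha1
    return hmac.new(shared_key, control_message, algo).digest()
```

A receiver verifies by running `message_digest` with the two nonce arguments swapped relative to the sender's calculation, then comparing the result to the received digest before acting on the message.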
When calculating the Message Digest, the Message Digest AVP MUST be present within the control message with the Digest Type set to its proper value, but the Message Digest itself set to zeros.

When receiving a control message, the contents of the Message Digest AVP MUST be compared against the expected digest value based on local calculation. This is done by performing the same digest calculation above, with the local_nonce and remote_nonce reversed. This message authenticity and integrity checking MUST be performed before utilizing any information contained within the control message. If the calculation fails, the message MUST be dropped.

The SCCRQ has special treatment, as it is the initial message commencing a new control connection. As such, there is only one nonce available. Since the nonce is present within the message itself as part of the Control Message Authentication Nonce AVP, there is no need to use it in the calculation explicitly. Calculation of the SCCRQ Message Digest is performed as follows:

Message Digest = HMAC_Hash (shared_key, control_message)

To allow for graceful switchover to a new shared secret or hash algorithm, two Message Digest AVPs MAY be present in a control message, and two shared secrets MAY be configured for a given LCCE. If two Message Digest AVPs are received in a control message, the message MUST be accepted if either Message Digest is valid. If two shared secrets are configured, each (separately) MUST be used for calculating a digest to be compared to the Message Digest(s) received. When calculating a digest for a control message, the Value field for both of the Message Digest AVPs MUST be set to zero.

This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length is 23 for Digest Type 0 (HMAC-MD5), and 27 for Digest Type 1 (HMAC-SHA-1).
Control Message Authentication Nonce (SCCRQ, SCCRP)

The Control Message Authentication Nonce AVP, Attribute Type 73, MUST contain a cryptographically random value [RFC1750]. This value is used for Control Message Authentication. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Nonce ... (arbitrary number of octets)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Nonce is of arbitrary length, though at least 16 octets is recommended. The Nonce contains the random value for use in the Control Message Authentication hash calculation (see Message Digest AVP definition in this section).

If Control Message Authentication is enabled, this AVP MUST be present in the SCCRQ and SCCRP messages.

This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length of this AVP is 6 plus the length of the Nonce.

Random Vector (All Messages)

The Random Vector AVP, Attribute Type 36, MUST contain a cryptographically random value [RFC1750]. This value is used for AVP Hiding. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Random Octet String ... (arbitrary number of octets)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Random Octet String is of arbitrary length, though at least 16 octets is recommended. The string contains the random vector for use in computing the MD5 hash to retrieve or hide the Attribute Value of a hidden AVP (see Section 5.3).

More than one Random Vector AVP may appear in a message, in which case a hidden AVP uses the Random Vector AVP most closely preceding it. As such, at least one Random Vector AVP MUST precede the first AVP with the H bit set.
This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length of this AVP is 6 plus the length of the Random Octet String.

5.4.2. Result and Error Codes

Result Code (StopCCN, CDN)

The Result Code AVP, Attribute Type 1, indicates the reason for terminating the control connection or session. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         Result Code           |     Error Code (optional)     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Error Message ... (optional, arbitrary number of octets)  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Result Code is a 2-octet unsigned integer. The optional Error Code is a 2-octet unsigned integer. An optional Error Message can follow the Error Code field. Presence of the Error Code and Message is indicated by the AVP Length field. The Error Message contains an arbitrary string providing further (human-readable) text associated with the condition. Human-readable text in all error messages MUST be provided in the UTF-8 charset [RFC3629] using the Default Language [RFC2277].

This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length is 8 if there is no Error Code or Message, 10 if there is an Error Code and no Error Message, or 10 plus the length of the Error Message if there is an Error Code and Message.

Defined Result Code values for the StopCCN message are as follows:

0 - Reserved.
1 - General request to clear control connection.
2 - General error, Error Code indicates the problem.
3 - Control connection already exists.
4 - Requester is not authorized to establish a control connection.
5 - The protocol version of the requester is not supported, Error Code indicates highest version supported.
6 - Requester is being shut down.
7 - Finite state machine error or timeout.

General Result Code values for the CDN message are as follows:

0 - Reserved.
1 - Session disconnected due to loss of carrier or circuit disconnect.
2 - Session disconnected for the reason indicated in Error Code.
3 - Session disconnected for administrative reasons.
4 - Session establishment failed due to lack of appropriate facilities being available (temporary condition).
5 - Session establishment failed due to lack of appropriate facilities being available (permanent condition).
13 - Session not established due to losing tie breaker.
14 - Session not established due to unsupported PW type.
15 - Session not established, sequencing required without valid L2-Specific Sublayer.
16 - Finite state machine error or timeout.

Additional service-specific Result Codes are defined outside this document.

The Error Codes defined below pertain to types of errors that are not specific to any particular L2TP request, but rather to protocol or message format errors. If an L2TP reply indicates in its Result Code that a General Error occurred, the General Error value should be examined to determine what the error was. The currently defined General Error codes and their meanings are as follows:

0 - No General Error.
1 - No control connection exists yet for this pair of LCCEs.
2 - Length is wrong.
3 - One of the field values was out of range.
4 - Insufficient resources to handle this operation now.
5 - Invalid Session ID.
6 - A generic vendor-specific error occurred.
7 - Try another. If initiator is aware of other possible responder destinations, it should try one of them. This can be used to guide an LAC or LNS based on policy.
8 - The session or control connection was shut down due to receipt of an unknown AVP with the M bit set (see Section 5.2).
When a General Error Code of 6 is used, additional information about the error SHOULD be included in the Error Message field. A vendor-specific AVP MAY be sent to more precisely detail a vendor-specific problem. 5.4.3. Control Connection Management AVPs Control Connection Tie Breaker (SCCRQ) The Control Connection Tie Breaker AVP, Attribute Type 5, indicates that the sender desires a single control connection to exist between a given pair of LCCEs. The Attribute Value field for this AVP has the following format: 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Control Connection Tie Breaker Value ... +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ... (64 bits) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The Control Connection Tie Breaker Value is an 8-octet random value that is used to choose a single control connection when two LCCEs request a control connection concurrently. The recipient of a SCCRQ must check to see if a SCCRQ has been sent to the peer; if so, a tie has been detected. In this case, the LCCE must compare its Control Connection Tie Breaker value with the one received in the SCCRQ. The lower value "wins", and the "loser" MUST discard its control connection. A StopCCN SHOULD be sent by the winner as an explicit rejection for the losing SCCRQ. In the case in which a tie breaker is present on both sides and the value is equal, both sides MUST discard their control connections and restart control connection negotiation with a new, random tie breaker value. If a tie breaker is received and an outstanding SCCRQ has no tie breaker value, the initiator that included the Control Connection Tie Breaker AVP "wins". If neither side issues a tie breaker, then two separate control connections are opened. 
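The tie-breaking comparison rules above can be sketched in Python (illustrative only; the function and enum names are invented for this sketch):

```python
from enum import Enum
from typing import Optional

class TieOutcome(Enum):
    LOCAL_WINS = "local wins; send StopCCN rejecting the losing SCCRQ"
    PEER_WINS = "peer wins; discard local control connection"
    RESTART = "values equal; both discard and retry with new random values"
    NO_CONTEST = "neither side sent a tie breaker; two connections are opened"

def resolve_control_connection_tie(local_tb: Optional[bytes],
                                   peer_tb: Optional[bytes]) -> TieOutcome:
    """Apply the Control Connection Tie Breaker comparison rules.

    Each tie breaker is the 8-octet random value carried in the AVP.
    For equal-length strings, Python's lexicographic bytes comparison
    matches an unsigned big-endian comparison, so the lower value wins.
    """
    if local_tb is None and peer_tb is None:
        return TieOutcome.NO_CONTEST
    if peer_tb is None:
        return TieOutcome.LOCAL_WINS   # the side that included the AVP wins
    if local_tb is None:
        return TieOutcome.PEER_WINS
    if local_tb == peer_tb:
        return TieOutcome.RESTART      # both MUST restart negotiation
    return TieOutcome.LOCAL_WINS if local_tb < peer_tb else TieOutcome.PEER_WINS
```

The value is generated randomly per attempt precisely so that a repeated exact tie (the RESTART case) is vanishingly unlikely on retry.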
Applications that employ a distinct and well-known initiator have no need for tie breaking, and MAY omit this AVP or disable tie breaking functionality. Applications that require tie breaking also require that an LCCE be uniquely identifiable upon receipt of an SCCRQ. For L2TP over IP, this MUST be accomplished via the Router ID AVP.

Note that in [RFC2661], this AVP is referred to as the "Tie Breaker AVP" and is applicable only to a control connection. In L2TPv3, the AVP serves the same purpose of tie breaking, but is applicable to a control connection or a session. The Control Connection Tie Breaker AVP (present only in Control Connection messages) and Session Tie Breaker AVP (present only in Session messages) are described separately in this document, but share the same Attribute type of 5.

This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length of this AVP is 14.

Host Name (SCCRQ, SCCRP)

The Host Name AVP, Attribute Type 7, indicates the name of the issuing LAC or LNS, encoded in the US-ASCII charset.

This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length of this AVP is 6 plus the length of the Host Name.

Router ID (SCCRQ, SCCRP)

The Router ID AVP, Attribute Type 60, is an identifier used to identify an LCCE for control connection setup, tie breaking, and/or tunnel authentication. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Router Identifier                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Router Identifier is a 4-octet unsigned integer. Its value is unique for a given LCCE, per Section 8.1 of [RFC2072]. The Host Name AVP and/or Router ID AVP MUST be used to identify an LCCE as described in Section 3.3.
Implementations MUST NOT assume that Router Identifier is a valid IP address. The Router Identifier for L2TP over IPv6 can be obtained from an IPv4 address (if available) or via unspecified implementation-specific means.

This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length of this AVP is 10.

Vendor Name (SCCRQ, SCCRP)

The Vendor Name AVP, Attribute Type 8, contains a vendor-specific (possibly human-readable) string describing the type of LAC or LNS being used. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Vendor Name ... (arbitrary number of octets)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Vendor Name is the indicated number of octets representing the vendor string. Human-readable text for this AVP MUST be provided in the US-ASCII charset [RFC1958, RFC2277].

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 0, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 6 plus the length of the Vendor Name.

Assigned Control Connection ID (SCCRQ, SCCRP, StopCCN)

The Assigned Control Connection ID AVP, Attribute Type 61, contains the ID being assigned to this control connection by the sender. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 Assigned Control Connection ID                |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Assigned Control Connection ID is a 4-octet non-zero unsigned integer.

The Assigned Control Connection ID AVP establishes the identifier used to multiplex and demultiplex multiple control connections between a pair of LCCEs. Once the Assigned Control Connection ID AVP has been received by an LCCE, the Control Connection ID specified in the AVP MUST be included in the Control Connection ID field of all control packets sent to the peer for the lifetime of the control connection.
Before the Assigned Control Connection ID AVP is received from a peer, all control messages MUST be sent to that peer with a Control Connection ID value of 0 in the header. Because a Control Connection ID value of 0 is used in this special manner, the zero value MUST NOT be sent as an Assigned Control Connection ID value. Under certain circumstances, an LCCE may need to send a StopCCN to a peer without having yet received an Assigned Control Connection ID AVP from the peer (i.e., SCCRQ sent, no SCCRP received yet). In this case, the Assigned Control Connection ID AVP that had been sent to the peer earlier (i.e., in the SCCRQ) MUST be sent as the Assigned Control Connection ID AVP in the StopCCN. This policy allows the peer to try to identify the appropriate control connection via a reverse lookup. This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 10. Receive Window Size (SCCRQ, SCCRP) The Receive Window Size AVP, Attribute Type 10, specifies the receive window size being offered to the remote peer. The Attribute Value field for this AVP has the following format: 0 1 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Window Size | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The Window Size is a 2-octet unsigned integer. If absent, the peer must assume a Window Size of 4 for its transmit window. The remote peer may send the specified number of control messages before it must wait for an acknowledgment. See Section 4.2 for more information on reliable control message delivery. This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length of this AVP is 8. Pseudowire Capabilities List (SCCRQ, SCCRP) The Pseudowire Capabilities List (PW Capabilities List) AVP, Attribute Type 62, indicates the L2 payload types the sender can support. 
The specific payload type of a given session is identified by the Pseudowire Type AVP. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           PW Type 0           |              ...              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|              ...              |           PW Type N           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Defined PW types that may appear in this list are managed by IANA and will appear in associated pseudowire-specific documents for each PW type. If a sender includes a given PW type in the PW Capabilities List AVP, the sender assumes full responsibility for supporting that particular payload, such as any payload-specific AVPs, L2-Specific Sublayer, or control messages that may be defined in the appropriate companion document.

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 8 octets with one PW type specified, plus 2 octets for each additional PW type.

Preferred Language (SCCRQ, SCCRP)

The Preferred Language AVP, Attribute Type 72, provides a method for an LCCE to indicate to the peer the language in which human-readable messages it sends SHOULD be composed. This AVP contains a single language tag or language range [RFC3066]. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Preferred Language ... (arbitrary number of octets)
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Preferred Language is the indicated number of octets representing the language tag or language range, encoded in the US-ASCII charset. It is not required to send a Preferred Language AVP.
If (1) an LCCE does not signify a language preference by the inclusion of this AVP in the SCCRQ or SCCRP, (2) the Preferred Language AVP is unrecognized, or (3) the requested language is not supported by the peer LCCE, the Default Language [RFC2277] SHOULD be used.

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 0, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 6 plus the length of the Preferred Language.

5.4.4. Session Management AVPs

Local Session ID (ICRQ, ICRP, ICCN, OCRQ, OCRP, OCCN, CDN, WEN, SLI)

The Local Session ID AVP (analogous to the Assigned Session ID in L2TPv2), Attribute Type 63, contains the identifier being assigned to this session by the sender. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Local Session ID                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Local Session ID is a 4-octet non-zero unsigned integer.

The Local Session ID AVP establishes the two identifiers used to multiplex and demultiplex sessions between two LCCEs. Each LCCE chooses any free value it desires, and sends it to the remote LCCE using this AVP. The remote LCCE MUST then send all data packets associated with this session using this value. Additionally, for all session-oriented control messages sent after this AVP is received (e.g., ICRP, ICCN, CDN, SLI, etc.), the remote LCCE MUST echo this value in the Remote Session ID AVP.

Note that a Session ID value is unidirectional. Because each LCCE chooses its Session ID independent of its peer LCCE, the value does not have to match in each direction for a given session. See Section 4.1 for additional information about the Session ID.

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 10.
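The Local/Remote Session ID exchange described above can be sketched as a toy model in Python (illustrative only; the class and method names are invented and this is not a protocol implementation):

```python
import secrets

class SessionEndpoint:
    """Toy model of the Local/Remote Session ID exchange."""

    def __init__(self):
        # Each LCCE picks any free, non-zero 32-bit value for its own side.
        self.local_session_id = secrets.randbelow(0xFFFFFFFF) + 1
        # Unknown until the peer's Local Session ID AVP arrives; session-level
        # messages sent before then carry a zero Remote Session ID AVP.
        self.remote_session_id = 0

    def receive_local_session_id_avp(self, peer_value: int) -> None:
        """Record the peer's advertised ID; it is echoed back in the
        Remote Session ID AVP and used in all data packets we send."""
        if not 0 < peer_value <= 0xFFFFFFFF:
            raise ValueError("Local Session ID must be a non-zero 32-bit value")
        self.remote_session_id = peer_value

# Each side advertises its own ID; the two directions are independent,
# so the values need not match.
a, b = SessionEndpoint(), SessionEndpoint()
a.receive_local_session_id_avp(b.local_session_id)
b.receive_local_session_id_avp(a.local_session_id)
```

The key design point mirrored here is that a Session ID is meaningful only to the LCCE that assigned it, which is why the peer must echo it rather than negotiate a shared value.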
Remote Session ID (ICRQ, ICRP, ICCN, OCRQ, OCRP, OCCN, CDN, WEN, SLI)

The Remote Session ID AVP, Attribute Type 64, contains the identifier that was assigned to this session by the peer. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Remote Session ID                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Remote Session ID is a 4-octet non-zero unsigned integer.

The Remote Session ID AVP MUST be present in all session-level control messages. The AVP's value echoes the session identifier advertised by the peer via the Local Session ID AVP. It is the same value that will be used in all transmitted data messages by this side of the session. In most cases, this identifier is sufficient for the peer to look up session-level context for this control message.

When a session-level control message must be sent to the peer before the Local Session ID AVP has been received, the value of the Remote Session ID AVP MUST be set to zero. Additionally, the Local Session ID AVP (sent in a previous control message for this session) MUST be included in the control message. The peer must then use the Local Session ID AVP to perform a reverse lookup to find its session context. Session-level control messages defined in this document that might be subject to a reverse lookup by a receiving peer include the CDN, WEN, and SLI.

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 10.

Assigned Cookie (ICRQ, ICRP, OCRQ, OCRP)

The Assigned Cookie AVP, Attribute Type 65, contains the Cookie value being assigned to this session by the sender. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Assigned Cookie (32 or 64 bits) ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Assigned Cookie is a 4-octet or 8-octet random value.

The Assigned Cookie AVP contains the value used to check the association of a received data message with the session identified by the Session ID.
All data messages sent to a peer MUST use the Assigned Cookie sent by the peer in this AVP. The value's length (0, 32, or 64 bits) is obtained by the length of the AVP. A missing Assigned Cookie AVP or Assigned Cookie Value of zero length indicates that the Cookie field should not be present in any data packets sent to the LCCE sending this AVP. See Section 4.1 for additional information about the Assigned Cookie. This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP may be 6, 10, or 14 octets. Serial Number (ICRQ, OCRQ) The Serial Number AVP, Attribute Type 15, contains an identifier assigned by the LAC or LNS to this session. The Attribute Value field for this AVP has the following format: 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Serial Number | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The Serial Number is a 32-bit value. The Serial Number is intended to be an easy reference for administrators on both ends of a control connection to use when investigating session failure problems. Serial Numbers should be set to progressively increasing values, which are likely to be unique for a significant period of time across all interconnected LNSs and LACs. Note that in RFC 2661, this value was referred to as the "Call Serial Number AVP". It serves the same purpose and has the same attribute value and composition. This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 0, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 10. Remote End ID (ICRQ, OCRQ) The Remote End ID AVP, Attribute Type 66, contains an identifier used to bind L2TP sessions to a given circuit, interface, or bridging instance. It also may be used to detect session-level ties. 
The Attribute Value field for this AVP has the following format: 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Remote End Identifier ... (arbitrary number of octets) +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The Remote End Identifier field is a variable-length field whose value is unique for a given LCCE peer, as described in Section 3.3. A session-level tie is detected if an LCCE receives an ICRQ or OCRQ with an End ID AVP whose value matches that which was just sent in an outgoing ICRQ or OCRQ to the same peer. If the two values match, an LCCE recognizes that a tie exists (i.e., both LCCEs are attempting to establish sessions for the same circuit). The tie is broken by the Session Tie Breaker AVP. By default, the LAC-LAC cross-connect application (see Section 2(b)) of L2TP over an IP network MUST utilize the Router ID AVP and Remote End ID AVP to associate a circuit to an L2TP session. Other AVPs MAY be used for LCCE or circuit identification as specified in companion documents. This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 6 plus the length of the Remote End Identifier value. Session Tie Breaker (ICRQ, OCRQ) The Session Tie Breaker AVP, Attribute Type 5, is used to break ties when two peers concurrently attempt to establish a session for the same circuit. The Attribute Value field for this AVP has the following format: 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Session Tie Breaker Value ... +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ... 
(64 bits) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The Session Tie Breaker Value is an 8-octet random value that is used to choose a session when two LCCEs concurrently request a session for the same circuit. A tie is detected by examining the peer's identity (described in Section 3.3) plus the per-session shared value communicated via the End ID AVP. In the case of a tie, the recipient of an ICRQ or OCRQ must compare the received tie breaker value with the one that it sent earlier. The LCCE with the lower value "wins" and MUST send a CDN with result code set to 13 (as defined in Section 5.4.2) in response to the losing ICRQ or OCRQ. In the case in which a tie is detected, tie breakers are sent by both sides, and the tie breaker values are equal, both sides MUST discard their sessions and restart session negotiation with new random tie breaker values. If a tie is detected but only one side sends a Session Tie Breaker AVP, the session initiator that included the Session Tie Breaker AVP "wins". If neither side issues a tie breaker, then both sides MUST tear down the session. This AVP MUST NOT be hidden (the H bit MUST be 0). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length of this AVP is 14. Pseudowire Type (ICRQ, OCRQ) The Pseudowire Type (PW Type) AVP, Attribute Type 68, indicates the L2 payload type of the packets that will be tunneled using this L2TP session. The Attribute Value field for this AVP has the following format: 0 1 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | PW Type | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ A peer MUST NOT request an incoming or outgoing call with a PW Type AVP specifying a value not advertised in the PW Capabilities List AVP it received during control connection establishment. Attempts to do so MUST result in the call being rejected via a CDN with the Result Code set to 14 (see Section 5.4.2). This AVP MAY be hidden (the H bit MAY be 0 or 1). 
The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 8. L2-Specific Sublayer (ICRQ, ICRP, ICCN, OCRQ, OCRP, OCCN) The L2-Specific Sublayer AVP, Attribute Type 69, indicates the presence and format of the L2-Specific Sublayer the sender of this AVP requires on all incoming data packets for this L2TP session. 0 1 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | L2-Specific Sublayer Type | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The L2-Specific Sublayer Type is a 2-octet unsigned integer with the following values defined in this document: 0 - There is no L2-Specific Sublayer present. 1 - The Default L2-Specific Sublayer (defined in Section 4.6) is used. If this AVP is received and has a value other than zero, the receiving LCCE MUST include the identified L2-Specific Sublayer in its outgoing data messages. If the AVP is not received, it is assumed that there is no sublayer present. This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 8. Data Sequencing (ICRQ, ICRP, ICCN, OCRQ, OCRP, OCCN) The Data Sequencing AVP, Attribute Type 70, indicates that the sender requires some or all of the data packets that it receives to be sequenced. The Attribute Value field for this AVP has the following format: 0 1 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Data Sequencing Level | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The Data Sequencing Level is a 2-octet unsigned integer indicating the degree of incoming data traffic that the sender of this AVP wishes to be marked with sequence numbers. Defined Data Sequencing Levels are as follows: 0 - No incoming data packets require sequencing. 1 - Only non-IP data packets require sequencing. 2 - All incoming data packets require sequencing. 
If a Data Sequencing Level of 0 is specified, there is no need to send packets with sequence numbers. If sequence numbers are sent, they will be ignored upon receipt. If no Data Sequencing AVP is received, a Data Sequencing Level of 0 is assumed.

If a Data Sequencing Level of 1 is specified, only non-IP traffic carried within the tunneled L2 frame should have sequence numbers applied. Non-IP traffic here refers to any packets that cannot be classified as an IP packet within their respective L2 framing (e.g., a PPP control packet or NETBIOS frame encapsulated by Frame Relay before being tunneled). All traffic that can be classified as IP MUST be sent with no sequencing (i.e., the S bit in the L2-Specific Sublayer is set to zero). If a packet is unable to be classified at all (e.g., because it has been compressed or encrypted at layer 2) or if an implementation is unable to perform such classification within L2 frames, all packets MUST be provided with sequence numbers (essentially falling back to a Data Sequencing Level of 2).

If a Data Sequencing Level of 2 is specified, all traffic MUST be sequenced.

Data sequencing may only be requested when there is an L2-Specific Sublayer present that can provide sequence numbers. If sequencing is requested without requesting an L2-Specific Sublayer AVP, the session MUST be disconnected with a Result Code of 15 (see Section 5.4.2).

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 8.

Tx Connect Speed (ICRQ, ICRP, ICCN, OCRQ, OCRP, OCCN)

The Tx Connect Speed BPS AVP, Attribute Type 74, contains the speed of the facility chosen for the connection attempt. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Tx Connect Speed in bps ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 ... (64 bits) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Tx Connect Speed BPS is an 8-octet value indicating the speed in bits per second. A value of zero indicates that the speed is indeterminable or that there is no physical point-to-point link.
When the optional Rx Connect Speed AVP is present, the value in this AVP represents the transmit connect speed from the perspective of the LAC (i.e., data flowing from the LAC to the remote system). When the optional Rx Connect Speed AVP is NOT present, the connection speed between the remote system and LAC is assumed to be symmetric and is represented by the single value in this AVP.

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 0, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 14.

Rx Connect Speed (ICRQ, ICRP, ICCN, OCRQ, OCRP, OCCN)

The Rx Connect Speed AVP, Attribute Type 75, represents the speed of the connection from the perspective of the LAC (i.e., data flowing from the remote system to the LAC). The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Rx Connect Speed in bps ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 ... (64 bits) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Rx Connect Speed BPS is an 8-octet value indicating the speed in bits per second. A value of zero indicates that the speed is indeterminable or that there is no physical point-to-point link.

Presence of this AVP implies that the connection speed may be asymmetric with respect to the transmit connect speed given in the Tx Connect Speed AVP.

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 0, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 14.

Physical Channel ID (ICRQ, ICRP, OCRP)

The Physical Channel ID AVP, Attribute Type 25, contains the vendor-specific physical channel number used for a call. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Physical Channel ID                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Physical Channel ID is a 4-octet value intended to be used for logging purposes only.

This AVP MAY be hidden (the H bit MAY be 0 or 1).
The M bit for this AVP SHOULD be set to 0, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 10. 5.4.5. Circuit Status AVPs Circuit Status (ICRQ, ICRP, ICCN, OCRQ, OCRP, OCCN, SLI) The Circuit Status AVP, Attribute Type 71, indicates the initial status of or a status change in the circuit to which the session is bound. The Attribute Value field for this AVP has the following format: 0 1 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Reserved |N|A| +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ The A (Active) bit indicates whether the circuit is up/active/ready (1) or down/inactive/not-ready (0). The N (New) bit indicates whether the circuit status indication is for a new circuit (1) or an existing circuit (0). Links that have a similar mechanism available (e.g., Frame Relay) MUST map the setting of this bit to the associated signaling for that link. Otherwise, the New bit SHOULD still be set the first time the L2TP session is established after provisioning. The remaining bits are reserved for future use. Reserved bits MUST be set to 0 when sending and ignored upon receipt. The Circuit Status AVP is used to advertise whether a circuit or interface bound to an L2TP session is up and ready to send and/or receive traffic. Different circuit types have different names for status types. For example, HDLC primary and secondary stations refer to a circuit as being "Receive Ready" or "Receive Not Ready", while Frame Relay refers to a circuit as "Active" or "Inactive". This AVP adopts the latter terminology, though the concept remains the same regardless of the PW type for the L2TP session. In the simplest case, the circuit to which this AVP refers is a single physical interface, port, or circuit, depending on the application and the session setup. The status indication in this AVP may then be used to provide simple ILMI interworking for a variety of circuit types. 
For virtual or multipoint interfaces, the Circuit Status AVP is still utilized, but in this case, it refers to the state of an internal structure or a logical set of circuits. Each PW-specific companion document MUST specify precisely how this AVP is translated for each circuit type. If this AVP is received with a Not Active notification for a given L2TP session, all data traffic for that session MUST cease (or not begin) in the direction of the sender of the Circuit Status AVP until the circuit is advertised as Active. The Circuit Status MUST be advertised by this AVP in ICRQ, ICRP, OCRQ, and OCRP messages. Often, the circuit type will be marked Active when initiated, but subsequently MAY be advertised as Inactive. This indicates that an L2TP session is to be created, but that the interface or circuit is still not ready to pass traffic. The ICCN, OCCN, and SLI control messages all MAY contain this AVP to update the status of the circuit after establishment of the L2TP session is requested. If additional circuit status information is needed for a given PW type, any new PW-specific AVPs MUST be defined in a separate document. This AVP is only for general circuit status information generally applicable to all circuit/interface types. This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 1, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 8. 
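The N and A bits described above occupy the two least significant bits of the 2-octet Circuit Status value, with all reserved bits sent as zero. A minimal Python sketch of the encoding (the helper names are invented for illustration):

```python
import struct

def encode_circuit_status(active: bool, new: bool) -> bytes:
    """Build the 2-octet Circuit Status value: A in bit 0, N in bit 1,
    reserved bits set to zero on sending."""
    return struct.pack("!H", (int(new) << 1) | int(active))

def decode_circuit_status(value: bytes):
    """Return (active, new); reserved bits are ignored upon receipt."""
    (word,) = struct.unpack("!H", value)
    return bool(word & 0x1), bool(word & 0x2)
```

For example, a newly provisioned circuit that is up encodes as 0x0003, while an existing circuit going down encodes as 0x0000.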
Circuit Errors (WEN)

The Circuit Errors AVP, Attribute Type 34, conveys circuit error information to the peer. The Attribute Value field for this AVP has the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Reserved            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                          CRC Errors                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Framing Errors                         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Hardware Overruns                       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Buffer Overruns                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Timeout Errors                         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       Alignment Errors                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The following fields are defined:

Reserved: 2 octets of Reserved data is present (providing longword alignment within the AVP of the following values). Reserved data MUST be zero on sending and ignored upon receipt.

CRC Errors: Number of frames received with CRC errors since call was established.

Framing Errors: Number of improperly framed packets received since call was established.

Hardware Overruns: Number of receive buffer overruns since call was established.

Buffer Overruns: Number of buffer overruns detected since call was established.

Timeout Errors: Number of timeouts since call was established.

Alignment Errors: Number of alignment errors since call was established.

This AVP MAY be hidden (the H bit MAY be 0 or 1). The M bit for this AVP SHOULD be set to 0, but MAY vary (see Section 5.2). The Length (before hiding) of this AVP is 32.

6. Control Connection Protocol Specification

The following control messages are used to establish, maintain, and tear down L2TP control connections. All data packets are sent in network order (high-order octets first). Any "reserved" or "empty" fields MUST be sent as 0 values to allow for protocol extensibility. The exchanges in which these messages are involved are outlined in Section 3.3.

6.1. Start-Control-Connection-Request (SCCRQ)

Start-Control-Connection-Request (SCCRQ) is a control message used to initiate a control connection between two LCCEs. It is sent by either the LAC or the LNS to begin the control connection establishment process.
The following AVPs MUST be present in the SCCRQ:

   Message Type
   Host Name
   Router ID
   Assigned Control Connection ID
   Pseudowire Capabilities List

The following AVPs MAY be present in the SCCRQ:

   Random Vector
   Control Message Authentication Nonce
   Message Digest
   Control Connection Tie Breaker
   Vendor Name
   Receive Window Size
   Preferred Language

6.2. Start-Control-Connection-Reply (SCCRP)

Start-Control-Connection-Reply (SCCRP) is the control message sent in reply to a received SCCRQ message. The SCCRP is used to indicate that the SCCRQ was accepted and that establishment of the control connection should continue.

The following AVPs MUST be present in the SCCRP:

   Message Type
   Host Name
   Router ID
   Assigned Control Connection ID
   Pseudowire Capabilities List

The following AVPs MAY be present in the SCCRP:

   Random Vector
   Control Message Authentication Nonce
   Message Digest
   Vendor Name
   Receive Window Size
   Preferred Language

6.3. Start-Control-Connection-Connected (SCCCN)

Start-Control-Connection-Connected (SCCCN) is the control message sent in reply to an SCCRP. The SCCCN completes the control connection establishment process.

The following AVP MUST be present in the SCCCN:

   Message Type

The following AVPs MAY be present in the SCCCN:

   Random Vector
   Message Digest

6.4. Stop-Control-Connection-Notification (StopCCN)

Stop-Control-Connection-Notification (StopCCN) is the control message sent by either LCCE to inform its peer that the control connection is being shut down and that the control connection should be closed. In addition, all active sessions are implicitly cleared (without sending any explicit session control messages). The reason for issuing this request is indicated in the Result Code AVP. There is no explicit reply to the message, only the implicit ACK that is received by the reliable control message delivery layer.
The following AVPs MUST be present in the StopCCN: Message Type Result Code The following AVPs MAY be present in the StopCCN: Random Vector Message Digest Assigned Control Connection ID Note that the Assigned Control Connection ID MUST be present if the StopCCN is sent after an SCCRQ or SCCRP message has been sent. 6.5. Hello (HELLO) The Hello (HELLO) message is an L2TP control message sent by either peer of a control connection. This control message is used as a "keepalive" for the control connection. See Section 4.2 for a description of the keepalive mechanism. HELLO messages are global to the control connection. The Session ID in a HELLO message MUST be 0. The following AVP MUST be present in the HELLO: Message Type The following AVPs MAY be present in the HELLO: Random Vector Message Digest 6.6. Incoming-Call-Request (ICRQ) Incoming-Call-Request (ICRQ) is the control message sent by an LCCE to a peer when an incoming call is detected (although the ICRQ may also be sent as a result of a local event). It is the first in a three-message exchange used for establishing a session via an L2TP control connection. The ICRQ is used to indicate that a session is to be established between an LCCE and a peer. The sender of an ICRQ provides the peer with parameter information for the session. However, the sender makes no demands about how the session is terminated at the peer (i.e., whether the L2 traffic is processed locally, forwarded, etc.). The following AVPs MUST be present in the ICRQ: Message Type Local Session ID Remote Session ID Serial Number Pseudowire Type Remote End ID Circuit Status The following AVPs MAY be present in the ICRQ: Random Vector Message Digest Assigned Cookie Session Tie Breaker L2-Specific Sublayer Data Sequencing Tx Connect Speed Rx Connect Speed Physical Channel ID 6.7. Incoming-Call-Reply (ICRP) Incoming-Call-Reply (ICRP) is the control message sent by an LCCE in response to a received ICRQ. 
It is the second in the three-message exchange used for establishing sessions within an L2TP control connection. The ICRP is used to indicate that the ICRQ was successful and that the peer should establish (i.e., answer) the incoming call if it has not already done so. It also allows the sender to indicate specific parameters about the L2TP session. The following AVPs MUST be present in the ICRP: Message Type Local Session ID Remote Session ID Circuit Status The following AVPs MAY be present in the ICRP: Random Vector Message Digest Assigned Cookie L2-Specific Sublayer Data Sequencing Tx Connect Speed Rx Connect Speed Physical Channel ID 6.8. Incoming-Call-Connected (ICCN) Incoming-Call-Connected (ICCN) is the control message sent by the LCCE that originally sent an ICRQ upon receiving an ICRP from its peer. It is the final message in the three-message exchange used for establishing L2TP sessions. The ICCN is used to indicate that the ICRP was accepted, that the call has been established, and that the L2TP session should move to the established state. It also allows the sender to indicate specific parameters about the established call (parameters that may not have been available at the time the ICRQ was issued). The following AVPs MUST be present in the ICCN: Message Type Local Session ID Remote Session ID The following AVPs MAY be present in the ICCN: Random Vector Message Digest L2-Specific Sublayer Data Sequencing Tx Connect Speed Rx Connect Speed Circuit Status 6.9. Outgoing-Call-Request (OCRQ) Outgoing-Call-Request (OCRQ) is the control message sent by an LCCE to an LAC to indicate that an outbound call at the LAC is to be established based on specific destination information sent in this message. It is the first in a three-message exchange used for establishing a session and placing a call on behalf of the initiating LCCE. Note that a call may be any L2 connection requiring well-known destination information to be sent from an LCCE to an LAC. 
This call could be a dialup connection to the PSTN, an SVC connection, the IP address of another LCCE, or any other destination dictated by the sender of this message. The following AVPs MUST be present in the OCRQ: Message Type Local Session ID Remote Session ID Serial Number Pseudowire Type Remote End ID Circuit Status The following AVPs MAY be present in the OCRQ: Random Vector Message Digest Assigned Cookie Tx Connect Speed Rx Connect Speed Session Tie Breaker L2-Specific Sublayer Data Sequencing 6.10. Outgoing-Call-Reply (OCRP) Outgoing-Call-Reply (OCRP) is the control message sent by an LAC to an LCCE in response to a received OCRQ. It is the second in a three-message exchange used for establishing a session within an L2TP control connection. OCRP is used to indicate that the LAC has been able to attempt the outbound call. The message returns any relevant parameters regarding the call attempt. Data MUST NOT be forwarded until the OCCN is received, which indicates that the call has been placed. The following AVPs MUST be present in the OCRP: Message Type Local Session ID Remote Session ID Circuit Status The following AVPs MAY be present in the OCRP: Random Vector Message Digest Assigned Cookie L2-Specific Sublayer Tx Connect Speed Rx Connect Speed Data Sequencing Physical Channel ID 6.11. Outgoing-Call-Connected (OCCN) Outgoing-Call-Connected (OCCN) is the control message sent by an LAC to another LCCE after the OCRP and after the outgoing call has been completed. It is the final message in a three-message exchange used for establishing a session. OCCN is used to indicate that the result of a requested outgoing call was successful. It also provides information to the LCCE who requested the call about the particular parameters obtained after the call was established. 
The following AVPs MUST be present in the OCCN: Message Type Local Session ID Remote Session ID The following AVPs MAY be present in the OCCN: Random Vector Message Digest L2-Specific Sublayer Tx Connect Speed Rx Connect Speed Data Sequencing Circuit Status 6.12. Call-Disconnect-Notify (CDN) The Call-Disconnect-Notify (CDN) is a control message sent by an LCCE to request disconnection of a specific session. Its purpose is to inform the peer of the disconnection and the reason for the disconnection. The peer MUST clean up any resources, and does not send back any indication of success or failure for such cleanup. The following AVPs MUST be present in the CDN: Message Type Result Code Local Session ID Remote Session ID The following AVPs MAY be present in the CDN: Random Vector Message Digest 6.13. WAN-Error-Notify (WEN) The WAN-Error-Notify (WEN) is a control message sent from an LAC to an LNS to indicate WAN error conditions. The counters in this message are cumulative. This message should only be sent when an error occurs, and not more than once every 60 seconds. The counters are reset when a new call is established. The following AVPs MUST be present in the WEN: Message Type Local Session ID Remote Session ID Circuit Errors The following AVPs MAY be present in the WEN: Random Vector Message Digest 6.14. Set-Link-Info (SLI) The Set-Link-Info control message is sent by an LCCE to convey link or circuit status change information regarding the circuit associated with this L2TP session. For example, if PPP renegotiates LCP at an LNS or between an LAC and a Remote System, or if a forwarded Frame Relay VC transitions to Active or Inactive at an LAC, an SLI message SHOULD be sent to indicate this event. Precise details of when the SLI is sent, what PW type-specific AVPs must be present, and how those AVPs should be interpreted by the receiving peer are outside the scope of this document. 
These details should be described in the associated pseudowire-specific documents that require use of this message. The following AVPs MUST be present in the SLI: Message Type Local Session ID Remote Session ID The following AVPs MAY be present in the SLI: Random Vector Message Digest Circuit Status 6.15. Explicit-Acknowledgement (ACK) The Explicit Acknowledgement (ACK) message is used only to acknowledge receipt of a message or messages on the control connection (e.g., for purposes of updating Ns and Nr values). Receipt of this message does not trigger an event for the L2TP protocol state machine. A message received without any AVPs (including the Message Type AVP) is referred to as a Zero-Length Body (ZLB) message and serves the same function as the Explicit Acknowledgement. ZLB messages are permitted only when the Control Message Authentication mechanism defined in Section 4.3 is not enabled. The following AVPs MAY be present in the ACK message: Message Type Message Digest 7. Control Connection State Machines The state tables defined in this section govern the exchange of control messages defined in Section 6. Tables are defined for incoming call placement and outgoing call placement, as well as for initiation of the control connection itself. The state tables do not encode timeout and retransmission behavior, as this is handled in the underlying reliable control message delivery mechanism (see Section 4.2). 7.1. Malformed AVPs and Control Messages Receipt of an invalid or unrecoverably malformed control message SHOULD be logged appropriately, and the control connection cleared to ensure recovery to a known state. The control connection may then be restarted by the initiator. An invalid control message is defined as (1) a message that contains a Message Type marked as mandatory (see Section 5.4.1) but that is unknown to the implementation, or (2) a control message that is received in the wrong state. 
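As a non-normative illustration, the two conditions that make a control message "invalid" can be sketched as follows. The numeric Message Type values and the per-state table here are illustrative simplifications (reduced from the Message Type assignments and the Section 7.2 state table), not a complete implementation:

```python
# Illustrative sketch of the Section 7.1 "invalid control message" test.
# Message Type values shown (SCCRQ=1, SCCRP=2, SCCCN=3, ...) follow the
# L2TP Message Type AVP registry; the state table is a reduced subset.

KNOWN_MESSAGE_TYPES = {1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 14, 15, 16, 20}

# Control-connection messages each state may legally receive
# (simplified from the Section 7.2 table; StopCCN/ACK handling omitted).
EXPECTED_IN_STATE = {
    "idle": {1},               # SCCRQ
    "wait-ctl-reply": {1, 2},  # SCCRQ (collision) or SCCRP
    "wait-ctl-conn": {3},      # SCCCN
}

def is_invalid(msg_type: int, m_bit: bool, state: str) -> bool:
    """True if the message is invalid per Section 7.1: (1) an unknown
    Message Type marked mandatory, or (2) a message in the wrong state."""
    if msg_type not in KNOWN_MESSAGE_TYPES:
        return m_bit  # unknown type: invalid only if marked mandatory
    return msg_type not in EXPECTED_IN_STATE.get(state, set())
```

On a True result, an implementation would log the event and clear the control connection, as described above.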
Examples of malformed control messages include (1) a message that has an invalid value in its header, (2) a message that contains an AVP that is formatted incorrectly or whose value is out of range, and (3) a message that is missing a required AVP. A control message with a malformed header MUST be discarded. When possible, a malformed AVP should be treated as an unrecognized AVP (see Section 5.2). Thus, an attempt to inspect the M bit SHOULD be made to determine the importance of the malformed AVP, and thus, the severity of the malformation to the entire control message. If the M bit can be reasonably inspected within the malformed AVP and is determined to be set, then as with an unrecognized AVP, the associated session or control connection MUST be shut down. If the M bit is inspected and is found to be 0, the AVP MUST be ignored (assuming recovery from the AVP malformation is indeed possible). This policy must not be considered as a license to send malformed AVPs, but rather, as a guide towards how to handle an improperly formatted message if one is received. It is impossible to list all potential malformations of a given message and give advice for each. One example of a malformed AVP situation that should be recoverable is if the Rx Connect Speed AVP is received with a length of 10 rather than 14, implying that the connect speed bits-per-second is being formatted in 4 octets rather than 8. If the AVP does not have its M bit set (as would typically be the case), this condition is not considered catastrophic. As such, the control message should be accepted as though the AVP were not present (though a local error message may be logged). In several cases in the following tables, a protocol message is sent, and then a "clean up" occurs. Note that, regardless of the initiator of the control connection destruction, the reliable delivery mechanism must be allowed to run (see Section 4.2) before destroying the control connection. 
This permits the control connection management messages to be reliably delivered to the peer. Appendix B.1 contains an example of lock-step control connection establishment. 7.2. Control Connection States The L2TP control connection protocol is not distinguishable between the two LCCEs but is distinguishable between the originator and receiver. The originating peer is the one that first initiates establishment of the control connection. (In a tie breaker situation, this is the winner of the tie.) Since either the LAC or the LNS can be the originator, a collision can occur. See the Control Connection Tie Breaker AVP in Section 5.4.3 for a description of this and its resolution. State Event Action New State ----- ----- ------ --------- idle Local open Send SCCRQ wait-ctl-reply request idle Receive SCCRQ, Send SCCRP wait-ctl-conn acceptable idle Receive SCCRQ, Send StopCCN, idle not acceptable clean up idle Receive SCCRP Send StopCCN, idle clean up idle Receive SCCCN Send StopCCN, idle clean up wait-ctl-reply Receive SCCRP, Send SCCCN, established acceptable send control-conn open event to waiting sessions wait-ctl-reply Receive SCCRP, Send StopCCN, idle not acceptable clean up wait-ctl-reply Receive SCCRQ, Send SCCRP, wait-ctl-conn lose tie breaker, Clean up losing SCCRQ acceptable connection wait-ctl-reply Receive SCCRQ, Send StopCCN, idle lose tie breaker, Clean up losing SCCRQ unacceptable connection wait-ctl-reply Receive SCCRQ, Send StopCCN for wait-ctl-reply win tie breaker losing connection wait-ctl-reply Receive SCCCN Send StopCCN, idle clean up wait-ctl-conn Receive SCCCN, Send control-conn established acceptable open event to waiting sessions wait-ctl-conn Receive SCCCN, Send StopCCN, idle not acceptable clean up wait-ctl-conn Receive SCCRQ, Send StopCCN, idle SCCRP clean up established Local open Send control-conn established request open event to (new call) waiting sessions established Administrative Send StopCCN, idle control-conn clean up close event 
established Receive SCCRQ, Send StopCCN, idle SCCRP, SCCCN clean up idle, Receive StopCCN Clean up idle wait-ctl-reply, wait-ctl-conn, established The states associated with an LCCE for control connection establishment are as follows: idle Both initiator and recipient start from this state. An initiator transmits an SCCRQ, while a recipient remains in the idle state until receiving an SCCRQ. wait-ctl-reply The originator checks to see if another connection has been requested from the same peer, and if so, handles the collision situation described in Section 5.4.3. wait-ctl-conn Awaiting an SCCCN. If the SCCCN is valid, the control connection is established; otherwise, it is torn down (sending a StopCCN with the proper result and/or error code). established An established connection may be terminated by either a local condition or the receipt of a StopCCN. In the event of a local termination, the originator MUST send a StopCCN and clean up the control connection. If the originator receives a StopCCN, it MUST also clean up the control connection. 7.3. Incoming Calls An ICRQ is generated by an LCCE, typically in response to an incoming call or a local event. Once the LCCE sends the ICRQ, it waits for a response from the peer. However, it may choose to postpone establishment of the call (e.g., answering the call, bringing up the circuit) until the peer has indicated with an ICRP that it will accept the call. The peer may choose not to accept the call if, for instance, there are insufficient resources to handle an additional session. If the peer chooses to accept the call, it responds with an ICRP. When the local LCCE receives the ICRP, it attempts to establish the call. A final call connected message, the ICCN, is sent from the local LCCE to the peer to indicate that the call states for both LCCEs should enter the established state. If the call is terminated before the peer can accept it, a CDN is sent by the local LCCE to indicate this condition. 
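The three-message incoming-call exchange described above can be sketched from the ICRQ sender's side as a simple message trace (an illustration of the flow, not an implementation):

```python
# Illustrative trace of the Section 7.3 incoming-call exchange,
# from the perspective of the LCCE that sends the ICRQ.

def incoming_call_trace(peer_accepts: bool, call_comes_up: bool) -> list:
    trace = ["send ICRQ"]          # announce the incoming call
    if not peer_accepts:
        trace.append("recv CDN")   # peer refused; session returns to idle
        return trace
    trace.append("recv ICRP")      # peer accepted; establish the call
    if not call_comes_up:
        trace.append("send CDN")   # call terminated before completion
        return trace
    trace.append("send ICCN")      # both ends enter the established state
    return trace
```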
When a call transitions to a "disconnected" or "down" state, the call is cleared normally, and the local LCCE sends a CDN. Similarly, if the peer wishes to clear a call, it sends a CDN and cleans up its session. 7.3.1. ICRQ Sender States State Event Action New State ----- ----- ------ --------- idle Call signal or Initiate local wait-control-conn ready to receive control-conn incoming conn open idle Receive ICCN, Clean up idle ICRP, CDN wait-control- Bearer line drop Clean up idle conn or local close request wait-control- control-conn-open Send ICRQ wait-reply conn wait-reply Receive ICRP, Send ICCN established acceptable wait-reply Receive ICRP, Send CDN, idle Not acceptable clean up wait-reply Receive ICRQ, Process as idle lose tie breaker ICRQ Recipient (Section 7.3.2) wait-reply Receive ICRQ, Send CDN wait-reply win tie breaker for losing session wait-reply Receive CDN, Clean up idle ICCN wait-reply Local close Send CDN, idle request clean up established Receive CDN Clean up idle established Receive ICRQ, Send CDN, idle ICRP, ICCN clean up established Local close Send CDN, idle request clean up The states associated with the ICRQ sender are as follows: idle The LCCE detects an incoming call on one of its interfaces (e.g., an analog PSTN line rings, or an ATM PVC is provisioned), or a local event occurs. The LCCE initiates its control connection establishment state machine and moves to a state waiting for confirmation of the existence of a control connection. wait-control-conn In this state, the session is waiting for either the control connection to be opened or for verification that the control connection is already open. Once an indication that the control connection has been opened is received, session control messages may be exchanged. The first of these messages is the ICRQ. 
wait-reply The ICRQ sender receives either (1) a CDN indicating the peer is not willing to accept the call (general error or do not accept) and moves back into the idle state, or (2) an ICRP indicating the call is accepted. In the latter case, the LCCE sends an ICCN and enters the established state. established Data is exchanged over the session. The call may be cleared by any of the following: + An event on the connected interface: The LCCE sends a CDN. + Receipt of a CDN: The LCCE cleans up, disconnecting the call. + A local reason: The LCCE sends a CDN. 7.3.2. ICRQ Recipient States State Event Action New State ----- ----- ------ --------- idle Receive ICRQ, Send ICRP wait-connect acceptable idle Receive ICRQ, Send CDN, idle not acceptable clean up idle Receive ICRP Send CDN idle clean up idle Receive ICCN Clean up idle wait-connect Receive ICCN, Prepare for established acceptable data wait-connect Receive ICCN, Send CDN, idle not acceptable clean up wait-connect Receive ICRQ, Send CDN, idle ICRP clean up idle, Receive CDN Clean up idle wait-connect, established wait-connect Local close Send CDN, idle established request clean up established Receive ICRQ, Send CDN, idle ICRP, ICCN clean up The states associated with the ICRQ recipient are as follows: idle An ICRQ is received. If the request is not acceptable, a CDN is sent back to the peer LCCE, and the local LCCE remains in the idle state. If the ICRQ is acceptable, an ICRP is sent. The session moves to the wait-connect state. wait-connect The local LCCE is waiting for an ICCN from the peer. Upon receipt of the ICCN, the local LCCE moves to established state. established The session is terminated either by sending a CDN or by receiving a CDN from the peer. Clean up follows on both sides regardless of the initiator. 7.4. Outgoing Calls Outgoing calls instruct an LAC to place a call. There are three messages for outgoing calls: OCRQ, OCRP, and OCCN. 
An LCCE first sends an OCRQ to an LAC to request an outgoing call. The LAC MUST respond to the OCRQ with an OCRP once it determines that the proper facilities exist to place the call and that the call is administratively authorized. Once the outbound call is connected, the LAC sends an OCCN to the peer indicating the final result of the call attempt. 7.4.1. OCRQ Sender States State Event Action New State ----- ----- ------ --------- idle Local open Initiate local wait-control-conn request control-conn-open idle Receive OCCN, Clean up idle OCRP wait-control- control-conn-open Send OCRQ wait-reply conn wait-reply Receive OCRP, none wait-connect acceptable wait-reply Receive OCRP, Send CDN, idle not acceptable clean up wait-reply Receive OCCN Send CDN, idle clean up wait-reply Receive OCRQ, Process as idle lose tie breaker OCRQ Recipient (Section 7.4.2) wait-reply Receive OCRQ, Send CDN wait-reply win tie breaker for losing session wait-connect Receive OCCN none established wait-connect Receive OCRQ, Send CDN, idle OCRP clean up idle, Receive CDN Clean up idle wait-reply, wait-connect, established established Receive OCRQ, Send CDN, idle OCRP, OCCN clean up wait-reply, Local close Send CDN, idle wait-connect, request clean up established wait-control- Local close Clean up idle conn request The states associated with the OCRQ sender are as follows: idle, wait-control-conn When an outgoing call request is initiated, a control connection is created as described above, if not already present. Once the control connection is established, an OCRQ is sent to the LAC, and the session moves into the wait-reply state. wait-reply If a CDN is received, the session is cleaned up and returns to idle state. If an OCRP is received, the call is in progress, and the session moves to the wait-connect state. wait-connect If a CDN is received, the session is cleaned up and returns to idle state. If an OCCN is received, the call has succeeded, and the session may now exchange data. 
established If a CDN is received, the session is cleaned up and returns to idle state. Alternatively, if the LCCE chooses to terminate the session, it sends a CDN to the LAC, cleans up the session, and moves the session to idle state. 7.4.2. OCRQ Recipient (LAC) States State Event Action New State ----- ----- ------ --------- idle Receive OCRQ, Send OCRP, wait-cs-answer acceptable Place call idle Receive OCRQ, Send CDN, idle not acceptable clean up idle Receive OCRP Send CDN, idle clean up idle Receive OCCN, Clean up idle CDN wait-cs-answer Call placement Send OCCN established successful wait-cs-answer Call placement Send CDN, idle failed clean up wait-cs-answer Receive OCRQ, Send CDN, idle OCRP, OCCN clean up established Receive OCRQ, Send CDN, idle OCRP, OCCN clean up wait-cs-answer, Receive CDN Clean up idle established wait-cs-answer, Local close Send CDN, idle established request clean up The states associated with the LAC for outgoing calls are as follows: idle If the OCRQ is received in error, respond with a CDN. Otherwise, place the call, send an OCRP, and move to the wait-cs-answer state. wait-cs-answer If the call is not completed or a timer expires while waiting for the call to complete, send a CDN with the appropriate error condition set, and go to idle state. If a circuit-switched connection is established, send an OCCN indicating success, and go to established state. established If the LAC receives a CDN from the peer, the call MUST be released via appropriate mechanisms, and the session cleaned up. If the call is disconnected because the circuit transitions to a "disconnected" or "down" state, the LAC MUST send a CDN to the peer and return to idle state. 7.5. Termination of a Control Connection The termination of a control connection consists of either peer issuing a StopCCN. The sender of this message SHOULD wait a full control message retransmission cycle (e.g., 1 + 2 + 4 + 8 ... 
seconds) for the acknowledgment of this message before releasing the control information associated with the control connection. The recipient of this message should send an acknowledgment of the message to the peer, then release the associated control information. When to release a control connection is an implementation issue and is not specified in this document. A particular implementation may use whatever policy is appropriate for determining when to release a control connection. Some implementations may leave a control connection open for a period of time or perhaps indefinitely after the last session for that control connection is cleared. Others may choose to disconnect the control connection immediately after the last call on the control connection disconnects. 8. Security Considerations This section addresses some of the security issues that L2TP encounters in its operation. 8.1. Control Connection Endpoint and Message Security If a shared secret (password) exists between two LCCEs, it may be used to perform a mutual authentication between the two LCCEs, and construct an authentication and integrity check of arriving L2TP control messages. The mechanism provided by L2TPv3 is described in Section 4.3 and in the definition of the Message Digest and Control Message Authentication Nonce AVPs in Section 5.4.1. This control message security mechanism provides for (1) mutual endpoint authentication, and (2) individual control message integrity and authenticity checking. Mutual endpoint authentication ensures that an L2TPv3 control connection is only established between two endpoints that are configured with the proper password. The individual control message and integrity check guards against accidental or intentional packet corruption (i.e., those caused by a control message spoofing or man-in-the-middle attack). 
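The general shape of such a keyed integrity and authenticity check can be sketched as below. This is only an illustration built on HMAC [RFC2104]; the key construction shown is hypothetical, and the exact fields and digest computation used by L2TPv3 are those defined in Sections 4.3 and 5.4.1:

```python
# Hypothetical sketch of a shared-secret control message check in the
# style of Section 8.1. NOT the normative L2TPv3 computation.
import hashlib
import hmac

def control_message_digest(shared_secret: bytes, local_nonce: bytes,
                           remote_nonce: bytes, message: bytes) -> bytes:
    # Key the HMAC with material derived from the configured secret and
    # the nonces exchanged at control connection setup; the secret
    # itself is never sent on the wire.
    key = shared_secret + local_nonce + remote_nonce
    return hmac.new(key, message, hashlib.sha1).digest()

def digest_ok(received: bytes, computed: bytes) -> bool:
    # Constant-time comparison resists timing attacks.
    return hmac.compare_digest(received, computed)
```

A receiver recomputes the digest over the arriving message and rejects the message if the values differ, giving both integrity and proof that the sender holds the shared secret.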
The shared secret that is used for all control connection, control message, and AVP security features defined in this document never needs to be sent in the clear between L2TP tunnel endpoints. 8.2. Data Packet Spoofing Data packet spoofing is a concern for any type of Virtual Private Network (VPN). L2TPv3 provides traffic separation for its VPNs via a 32-bit Session ID in the L2TPv3 data header. When present, the L2TPv3 Cookie (described in Section 4.1) provides an additional check to ensure that an arriving packet is intended for the identified session. Thus, use of a Cookie with the Session ID provides an extra guarantee that the Session ID lookup was performed properly and that the Session ID itself was not corrupted in transit. In the presence of a blind packet spoofing attack, the Cookie may also provide security against inadvertent leaking of frames into a customer VPN, as a cryptographically random [RFC1750] Cookie value is far less likely to be discovered by brute-force attacks than, for example, an IP address. For protection against brute-force, blind, insertion attacks, a 64-bit Cookie MUST be used with all sessions. A 32-bit Cookie is vulnerable to brute-force guessing at high packet rates, and as such, should not be considered an effective barrier to blind insertion attacks (though it is still useful as an additional verification of a successful Session ID lookup). The Cookie provides no protection against a sophisticated man-in-the-middle attacker who can sniff and correlate captured data between nodes for use in a coordinated attack. The Assigned Cookie AVP is used to signal the value and size of the Cookie that must be present in all data packets for a given session. Each Assigned Cookie MUST be selected in a cryptographically random manner [RFC1750] such that a series of Assigned Cookies does not provide any indication of what a future Cookie will be. 
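As an illustrative sketch (not part of the protocol specification), a 64-bit Assigned Cookie satisfying the randomness requirement above might be generated and checked as follows:

```python
# Sketch: pick a 64-bit Assigned Cookie using a cryptographically strong
# random source [RFC1750], and verify it on each arriving data packet.
import hmac
import secrets

def new_assigned_cookie() -> bytes:
    return secrets.token_bytes(8)  # 64 random bits per session

def cookie_matches(pkt_cookie: bytes, session_cookie: bytes) -> bool:
    # Constant-time compare; on mismatch the packet is dropped.
    return hmac.compare_digest(pkt_cookie, session_cookie)
```

Each new session gets an independent random Cookie, so observing past Cookies gives an attacker no indication of a future one.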
The L2TPv3 Cookie must not be regarded as a substitute for security such as that provided by IPsec when operating over an open or untrusted network where packets may be sniffed, decoded, and correlated for use in a coordinated attack. See Section 4.1.3 for more information on running L2TP over IPsec. 9. Internationalization Considerations The Host Name and Vendor Name AVPs are not internationalized. The Vendor Name AVP, although intended to be human-readable, would seem to fit in the category of "globally visible names" [RFC2277] and so is represented in US-ASCII. If (1) an LCCE does not signify a language preference by the inclusion of a Preferred Language AVP (see Section 5.4.3), (2) the Preferred Language AVP is unrecognized, or (3) the requested language is not supported by the peer LCCE, the default language [RFC2277] MUST be used for all internationalized strings sent by the peer. 10. IANA Considerations This document defines a number of "magic" numbers to be maintained by the IANA. This section explains the criteria used by the IANA to assign additional numbers in each of these lists. The following subsections describe the assignment policy for the namespaces defined elsewhere in this document. Sections 10.1 through 10.3 are requests for new values already managed by IANA according to [RFC3438]. The remaining sections are for new registries that have been added to the existing L2TP registry and are maintained by IANA accordingly. 10.1. Control Message Attribute Value Pairs (AVPs) This number space is managed by IANA as per [RFC3438]. A summary of the new AVPs follows: Control Message Attribute Value Pairs Attribute Type Description --------- ------------------ 58 Extended Vendor ID AVP 59 Message Digest 60 Router ID 61 Assigned Control Connection ID 62 Pseudowire Capabilities List 63 Local Session ID 64 Remote Session ID 65 Assigned Cookie 66 Remote End ID 68 Pseudowire Type 69 L2-Specific Sublayer 70 Data Sequencing 71 Circuit Status 72 Preferred Language 73 Control Message Authentication Nonce 74 Tx Connect Speed 75 Rx Connect Speed 10.2. Message Type AVP Values This number space is managed by IANA as per [RFC3438]. 
There is one new message type, defined in Section 3.1, that was allocated for this specification: Message Type AVP (Attribute Type 0) Values ------------------------------------------ Control Connection Management 20 (ACK) Explicit Acknowledgement 10.3. Result Code AVP Values This number space is managed by IANA as per [RFC3438]. New Result Code values for the CDN message are defined in Section 5.4. The following is a summary: Result Code AVP (Attribute Type 1) Values ----------------------------------------- General Error Codes. 10.4. AVP Header Bits This is a new registry for IANA to maintain. Leading Bits of the L2TP AVP Header ----------------------------------- There are six bits at the beginning of the L2TP AVP header. New bits are assigned via Standards Action [RFC2434]. Bit 0 - Mandatory (M bit) Bit 1 - Hidden (H bit) Bit 2 - Reserved Bit 3 - Reserved Bit 4 - Reserved Bit 5 - Reserved 10.5. L2TP Control Message Header Bits This is a new registry for IANA to maintain. Leading Bits of the L2TP Control Message Header ----------------------------------------------- There are 12 bits at the beginning of the L2TP Control Message Header. Reserved bits should only be defined by Standards Action [RFC2434]. Bit 0 - Message Type (T bit) Bit 1 - Length Field is Present (L bit) Bit 2 - Reserved Bit 3 - Reserved Bit 4 - Sequence Numbers Present (S bit) Bit 5 - Reserved Bit 6 - Offset Field is Present [RFC2661] Bit 7 - Priority Bit (P bit) [RFC2661] Bit 8 - Reserved Bit 9 - Reserved Bit 10 - Reserved Bit 11 - Reserved 10.6. Pseudowire Types This is a new registry for IANA to maintain; there are no values assigned within this document. L2TPv3 Pseudowire Types ----------------------- The Pseudowire Type (PW Type, see Section 5.4) is a 2-octet value used in the Pseudowire Type AVP and Pseudowire Capabilities List AVP defined in Section 5.4.3. 
0 to 32767 are assignable by Expert Review [RFC2434], while 32768 to 65535 are assigned by a First Come First Served policy [RFC2434]. There are no specific pseudowire types assigned within this document. Each pseudowire-specific document must allocate its own PW types from IANA as necessary. 10.7. Circuit Status Bits This is a new registry for IANA to maintain. Circuit Status Bits ------------------- The Circuit Status field is a 16-bit mask, with the two low-order bits assigned. Additional bits may be assigned by IETF Consensus [RFC2434]. Bit 14 - New (N bit) Bit 15 - Active (A bit) 10.8. Default L2-Specific Sublayer Bits This is a new registry for IANA to maintain. Default L2-Specific Sublayer Bits --------------------------------- The Default L2-Specific Sublayer contains 8 bits in the low-order portion of the header. Reserved bits may be assigned by IETF Consensus [RFC2434]. Bit 0 - Reserved Bit 1 - Sequence (S bit) Bit 2 - Reserved Bit 3 - Reserved Bit 4 - Reserved Bit 5 - Reserved Bit 6 - Reserved Bit 7 - Reserved 10.9. L2-Specific Sublayer Type This is a new registry for IANA to maintain. L2-Specific Sublayer Type ------------------------- The L2-Specific Sublayer Type is a 2-octet unsigned integer. Additional values may be assigned by Expert Review [RFC2434]. 0 - No L2-Specific Sublayer 1 - Default L2-Specific Sublayer present 10.10. Data Sequencing Level This is a new registry for IANA to maintain. Data Sequencing Level --------------------- The Data Sequencing Level is a 2-octet unsigned integer. Additional values may be assigned by Expert Review [RFC2434]. 0 - No incoming data packets require sequencing. 1 - Only non-IP data packets require sequencing. 2 - All incoming data packets require sequencing. 11. References 11.1. Normative References [RFC2473] Conta, A. and S. Deering, "Generic Packet Tunneling in IPv6 Specification", RFC 2473, December 1998. 
   [RFC2661] Townsley, W., Valencia, A., Rubens, A., Pall, G., Zorn, G., and Palter, B., "Layer Two Tunneling Protocol (L2TP)", RFC 2661, August 1999.

   [RFC2865] Rigney, C., Willens, S., Rubens, A., and W. Simpson, "Remote Authentication Dial In User Service (RADIUS)", RFC 2865, June 2000.

   [RFC3066] Alvestrand, H., "Tags for the Identification of Languages", BCP 47, RFC 3066, January 2001.

   [RFC3193] Patel, B., Aboba, B., Dixon, W., Zorn, G., and Booth, S., "Securing L2TP using IPsec", RFC 3193, November 2001.

   [RFC3438] Townsley, W., "Layer Two Tunneling Protocol (L2TP) Internet Assigned Numbers Authority (IANA) Considerations Update", BCP 68, RFC 3438, December 2002.

   [RFC3629] Yergeau, F., "UTF-8, a transformation format of ISO 10646", STD 63, RFC 3629, November 2003.

11.2. Informative References

   [RFC1034] Mockapetris, P., "Domain Names - Concepts and Facilities", STD 13, RFC 1034, November 1987.

   [RFC1191] Mogul, J. and S. Deering, "Path MTU Discovery", RFC 1191, November 1990.

   [RFC1321] Rivest, R., "The MD5 Message-Digest Algorithm", RFC 1321, April 1992.

   [RFC1661] Simpson, W., Ed., "The Point-to-Point Protocol (PPP)", STD 51, RFC 1661, July 1994.

   [RFC1700] Reynolds, J. and Postel, J., "Assigned Numbers", STD 2, RFC 1700, October 1994.

   [RFC1750] Eastlake, D., Crocker, S., and Schiller, J., "Randomness Recommendations for Security", RFC 1750, December 1994.

   [RFC1958] Carpenter, B., Ed., "Architectural Principles of the Internet", RFC 1958, June 1996.

   [RFC1981] McCann, J., Deering, S., and Mogul, J., "Path MTU Discovery for IP version 6", RFC 1981, August 1996.

   [RFC2072] Berkowitz, H., "Router Renumbering Guide", RFC 2072, January 1997.

   [RFC2104] Krawczyk, H., Bellare, M., and Canetti, R., "HMAC: Keyed-Hashing for Message Authentication", RFC 2104, February 1997.

   [RFC2341] Valencia, A., Littlewood, M., and Kolar, T., "Cisco Layer Two Forwarding (Protocol) L2F", RFC 2341, May 1998.

   [RFC2401] Kent, S.
and Atkinson, R., "Security Architecture for the Internet Protocol", RFC 2401, November 1998.

   [RFC2581] Allman, M., Paxson, V., and Stevens, W., "TCP Congestion Control", RFC 2581, April 1999.

   [RFC2637] Hamzeh, K., Pall, G., Verthein, W., Taarud, J., Little, W., and Zorn, G., "Point-to-Point Tunneling Protocol (PPTP)", RFC 2637, July 1999.

   [RFC2732] Hinden, R., Carpenter, B., and Masinter, L., "Format for Literal IPv6 Addresses in URL's", RFC 2732, December 1999.

   [RFC2809] Aboba, B. and Zorn, G., "Implementation of L2TP Compulsory Tunneling via RADIUS", RFC 2809, April 2000.

   [RFC3070] Rawat, V., Tio, R., Nanji, S., and Verma, R., "Layer Two Tunneling Protocol (L2TP) over Frame Relay", RFC 3070, February 2001.

   [RFC3355] Singh, A., Turner, R., Tio, R., and Nanji, S., "Layer Two Tunnelling Protocol (L2TP) Over ATM Adaptation Layer 5 (AAL5)", RFC 3355, August 2002.

   [KPS] Kaufman, C., Perlman, R., and Speciner, M., "Network Security: Private Communications in a Public World", Prentice Hall, March 1995, ISBN 0-13-061466-1.

   [STEVENS] Stevens, W. Richard, "TCP/IP Illustrated, Volume I: The Protocols", Addison-Wesley Publishing Company, Inc., March 1996, ISBN 0-201-63346-9.

12. Acknowledgments

Many of the protocol constructs were originally defined in, and the text of this document began with, RFC 2661, "L2TPv2". The authors of RFC 2661 are W. Townsley, A. Valencia, A. Rubens, G. Pall, G. Zorn, and B. Palter. The basic concept for L2TP and many of its protocol constructs were adopted from L2F [RFC2341] and PPTP [RFC2637]. The authors of those protocols are A. Valencia, M. Littlewood, T. Kolar, K. Hamzeh, G. Pall, W. Verthein, J. Taarud, W. Little, and G. Zorn.

Danny McPherson and Suhail Nanji published the first "L2TP Service Type" version, which defined the use of L2TP for tunneling of various L2 payload types (initially, Ethernet and Frame Relay).
The team for splitting RFC 2661 into this base document and the companion PPP document consisted of Ignacio Goyret, Jed Lau, Bill Palter, Mark Townsley, and Madhvi Verma. Skip Booth also provided very helpful review and comment.

Some constructs of L2TPv3 were based in part on UTI (Universal Transport Interface), which was originally conceived by Peter Lothberg and Tony Bates. Stewart Bryant and Simon Barber provided valuable input for the L2TPv3 over IP header. Juha Heinanen provided helpful review in the early stages of this effort. Jan Vilhuber, Scott Fluhrer, David McGrew, Scott Wainner, Skip Booth, and Maria Dos Santos contributed to the Control Message Authentication Mechanism as well as general discussions of security. James Carlson, Thomas Narten, Maria Dos Santos, Steven Bellovin, Ted Hardie, and Pekka Savola provided very helpful review of the final versions of text. Russ Housley provided valuable review and comment on security, particularly with respect to the Control Message Authentication mechanism. Pekka Savola contributed to proper alignment with IPv6 and inspired much of Section 4.1.4 on fragmentation. Aside from his original influence on and co-authorship of RFC 2661, Glen Zorn helped get all of the language and character references straight in this document.

A number of people provided valuable input and effort for RFC 2661, on which this document was based. Thomas Narten provided a great deal of critical review and formatting. He wrote the first version of the IANA Considerations section. Dory Leifer made valuable refinements to the protocol definition of L2TP and contributed to the editing of early versions leading to RFC 2661. Steve Cobb and Evan Caves redesigned the state machine tables. Barney Wolff provided a great deal of design input on the original endpoint authentication mechanism.

Appendix A: Control Channel Slow Start and Congestion Avoidance

A slow start and congestion avoidance method is recommended when transmitting control messages (this algorithm is also described in [RFC2581]). The congestion window (CWND) begins at a size of one packet: the sender transmits a single control message and then waits for its acknowledgment (either explicit or piggybacked).
When the acknowledgment is received, the congestion window is incremented from one to two. During slow start, CWND is increased by one packet each time an ACK is received (either an explicit ACK message or a piggybacked acknowledgment).

Appendix B: Control Message Examples

B.1: Control Connection Establishment

In this example, an LCCE establishes a control connection, with the exchange involving each side alternating in sending messages. This example shows the final acknowledgment explicitly sent within an ACK message. An alternative would be to piggyback the acknowledgment within a message sent as a reply to the ICRQ or OCRQ that will likely follow from the side that initiated the control connection.

   LCCE A                           LCCE B
   ------                           ------
   SCCRQ ->
      Nr: 0, Ns: 0
                                    <- SCCRP
                                       Nr: 1, Ns: 0
   SCCCN ->
      Nr: 1, Ns: 1
                                    <- ACK
                                       Nr: 2, Ns: 1

B.2: Lost Packet with Retransmission

An existing control connection has a new session requested by LCCE A. The ICRP is lost and must be retransmitted by LCCE B. Note that loss of the ICRP has two effects: It not only keeps the upper level state machine from progressing, but also keeps LCCE A from seeing a timely lower level acknowledgment of its ICRQ.

   LCCE A                           LCCE B
   ------                           ------
   ICRQ ->
      Nr: 1, Ns: 2
                    (packet lost)   <- ICRP
                                       Nr: 3, Ns: 1

   (pause; LCCE A's timer started first, so fires first)

   ICRQ ->
      Nr: 1, Ns: 2

   (Realizing that it has already seen this packet,
    LCCE B discards the packet and sends an ACK message)

                                    <- ACK
                                       Nr: 3, Ns: 2

   (LCCE B's retransmit timer fires)

                                    <- ICRP
                                       Nr: 3, Ns: 1
   ICCN ->
      Nr: 2, Ns: 3
                                    <- ACK
                                       Nr: 4, Ns: 2

Appendix C: Processing Sequence Numbers

The Default L2-Specific Sublayer, defined in Section 4.6, provides a 24-bit field for sequencing of data packets within an L2TP session. L2TP data packets are never retransmitted, so this sequence is used only to detect packet order, duplicate packets, or lost packets. The 24-bit Sequence Number field of the Default L2-Specific Sublayer contains a packet sequence number for the associated session.
Each sequenced data packet that is sent must contain the sequence number, incremented by one, of the previous sequenced packet sent on a given L2TP session. Upon receipt, any packet with a sequence number equal to or greater than the current expected packet (the last received in-order packet plus one) should be considered "new" and accepted. All other packets are considered "old" or "duplicate" and discarded. Note that the 24-bit sequence number space includes zero as a valid sequence number (as such, it may be implemented with a masked 32-bit counter if desired). All new sessions MUST begin sending sequence numbers at zero.

Larger or smaller sequence number fields are possible with L2TP if an alternative format to the Default L2-Specific Sublayer defined in this document is used. While 24 bits may be adequate in a number of circumstances, a larger sequence number space will be less susceptible to sequence number wrapping problems for very high session data rates across long dropout periods. The sequence number processing recommendations below should hold for any size sequence number field.

When detecting whether a packet sequence number is "greater" or "less" than a given sequence number value, wrapping of the sequence number must be considered. This is typically accomplished by keeping a window of sequence numbers beyond the current expected sequence number for determination of whether a packet is "new" or not. The window may be sized based on the link speed and sequence number space, and SHOULD be configurable with a default equal to one half the size of the available number space (e.g., 2^(n-1), where n is the number of bits available in the sequence number).

Upon receipt, packets that exactly match the expected sequence number are processed immediately and the next expected sequence number incremented.
Packets that fall within the window for new packets may either be processed immediately and the next expected sequence number updated to one plus that received in the new packet, or held for a very short period of time in hopes of receiving the missing packet(s). This "very short period" should be configurable, with a default corresponding to a time lapse that is at least an order of magnitude less than the retransmission timeout periods of higher layer protocols such as TCP.

For typical transient packet mis-orderings, dropping out-of-order packets alone should suffice and generally requires far less resources than actively reordering packets within L2TP. An exception is a case in which a pair of packet fragments are persistently retransmitted and sent out-of-order. For example, if an IP packet has been fragmented into a very small packet followed by a very large packet before being tunneled by L2TP, it is possible (though admittedly wrong) that the two resulting L2TP packets may be consistently mis-ordered by the PSN in transit between L2TP nodes. If sequence numbers were being enforced at the receiving node without any buffering of out-of-order packets, then the fragmented IP packet may never reach its destination. It may be worth noting here that this condition is true for any tunneling mechanism of IP packets that includes sequence number checking on receipt (i.e., GRE [RFC2890]).

Utilization of a Data Sequencing Level (see Section 5.4.3) of 1 (only non-IP data packets require sequencing) allows IP data packets being tunneled by L2TP to not utilize sequence numbers, while utilizing sequence numbers and enforcing packet order for any remaining non-IP data packets.
Depending on the requirements of the link layer being tunneled and the network data traversing the data link, this is sufficient in many cases to enforce packet order on frames that require it (such as end-to-end data link control messages), while not on IP packets that are known to be resilient to packet reordering.

If a large number of packets (i.e., more than one new packet window) are dropped due to an extended outage or loss of sequence number state on one side of the connection (perhaps as part of a forwarding plane reset or failover to a standby node), it is possible that a large number of packets will be sent in-order, but be wrongly detected by the peer as out-of-order. This can be generally characterized for a window size, w, sequence number space, s, and number of packets lost in transit between L2TP endpoints, p, as follows: If s > p > w, then an additional (s - p) packets that were otherwise received in-order will be incorrectly classified as out-of-order and dropped. Thus, for a sequence number space, s = 128, window size, w = 64, and number of lost packets, p = 70; 128 - 70 = 58 additional packets would be dropped after the outage until the sequence number wrapped back to the current expected next sequence number.

To mitigate this additional packet loss, one MUST inspect the sequence numbers of packets dropped due to being classified as "old" and reset the expected sequence number accordingly. This may be accomplished by counting the number of "old" packets dropped that were in sequence among themselves and, upon reaching a threshold, resetting the next expected sequence number to that seen in the arriving data packets. Packet timestamps may also be used as an indicator to reset the expected sequence number by detecting a period of time over which "old" packets have been received in-sequence. The ideal thresholds will vary depending on link speed, sequence number space, and link tolerance to out-of-order packets, and MUST be configurable.
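The "new" versus "old" classification and the wraparound arithmetic described in this appendix can be sketched in a few lines. The following is purely an illustrative sketch (in Python, using the 24-bit Default L2-Specific Sublayer sequence space and the recommended half-space window), not code from the specification:

```python
SEQ_BITS = 24
SEQ_SPACE = 1 << SEQ_BITS        # 24-bit sequence space; zero is a valid value
WINDOW = SEQ_SPACE // 2          # default window: half the number space, 2^(n-1)

def next_seq(s):
    """Increment a sequence number with 24-bit wraparound (masked counter)."""
    return (s + 1) % SEQ_SPACE

def is_new(received, expected, window=WINDOW):
    """A packet is "new" if it falls within `window` numbers at or beyond the
    next expected sequence number, accounting for wraparound; otherwise it is
    "old" or "duplicate" and would be discarded."""
    return (received - expected) % SEQ_SPACE < window

# The exactly-expected packet is new; the one just before it is a duplicate:
assert is_new(5, 5) and not is_new(4, 5)

# Wraparound: after 0xFFFFFF, the next expected sequence number is 0.
assert next_seq(0xFFFFFF) == 0
```

Note that the modular subtraction makes the window comparison correct on both sides of the wrap point, which is the subtlety the text above warns about.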
Editors' Addresses

   Jed Lau
   cisco Systems
   170 W. Tasman Drive
   San Jose, CA 95134
   EMail: jedlau@cisco.com

   W. Mark Townsley
   cisco Systems
   EMail: mark@townsley.net

   Ignacio Goyret
   Lucent Technologies
   EMail: igoyret@lucent
I'm working on a Python app in Bottle. The app works fine if I'm on a URL one level deep like /dashboard or /rules or /page. However, if I go deeper, like /dashboard/overview or /rules/ruleone or /page/test, the CSS, JS, fonts, and images fail to load. :( The HTML source code still points to /assets/, but if I'm on a URL like /rules/ruleone, the right path should be something like ../assets or ./assets, right? The path /assets/ only works on the first level but not on deeper levels; in other words, Bottle doesn't adapt the static file path to the current directory. How do I fix this? I've been stuck on this problem for days now, I really hope someone can help me. :(

My code (simplified):

#!/usr/bin/env python
import lib.bottle as bottle
from lib.bottle import route, template, debug, static_file, TEMPLATE_PATH, error, auth_basic, get, post, request, response, run, view, redirect, SimpleTemplate, HTTPError, abort
import os, sys, re

@route('/dashboard')
@view('secure_page')
def show__page_dashboard():
    return dict(page='Dashboard')

@route('/rules/<rule>')
@view('secure_page')
def show_page_rules_more(rule):
    return dict(page=rule)

@route('/assets/<filepath:path>')
def server_static(filepath):
    return static_file(filepath, root='/var/myapp/assets')

TEMPLATE_PATH.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "view")))
bottle.debug(True)

from lib.bottledaemon import daemon_run

if __name__ == "__main__":
    daemon_run()

Alright, I found the solution to my problem. Bottle offers a url() helper to dynamically build URLs.
from bottle import url

@route('/dashboard')
@view('secure_page')
def show__page_dashboard():
    return dict(page='Dashboard', url=url)

@route('/assets/<filepath:path>', name='assets')
def server_static(filepath):
    return static_file(filepath, root='/var/myapp/assets')

This is how I load my CSS/JS/images:

<link href="{{ url('assets', filepath='css/style.css') }}" rel="stylesheet" type="text/css"/>

Dynamic menu URLs (in the navigation, for example) are done this way:

{{ url('/dashboard') }}

I hope this info will help someone who is struggling with the same problem I was. Tested on v0.12 and v0.13dev.
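The underlying cause here is that the browser, not Bottle, resolves relative asset paths against the current page URL. That can be demonstrated without Bottle at all; the sketch below (standard library only, with made-up URLs) roughly mimics the browser's resolution and shows why root-relative paths like the ones url() generates work from any depth:

```python
import posixpath

def browser_resolve(page_url, href):
    """Rough sketch of how a browser resolves an href against the page URL."""
    if href.startswith("/"):
        return posixpath.normpath(href)    # root-relative: depth-independent
    base = posixpath.dirname(page_url)     # relative: resolved against the page
    return posixpath.normpath(posixpath.join(base, href))

# On a top-level page the relative path happens to hit the right place:
print(browser_resolve("/dashboard", "assets/css/style.css"))
# -> /assets/css/style.css

# One level deeper, the same relative path breaks:
print(browser_resolve("/rules/ruleone", "assets/css/style.css"))
# -> /rules/assets/css/style.css

# A root-relative path works from any depth:
print(browser_resolve("/rules/ruleone", "/assets/css/style.css"))
# -> /assets/css/style.css
```

This is why hardcoding href="/assets/..." (with the leading slash) or generating the path with url() both fix the problem, while "assets/..." only appears to work on first-level pages.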
Why NServiceBus?

Before diving in, we should take a moment to consider why NServiceBus might be a tool worth adding to your repertoire. If you're eager to get started, feel free to skip this section and come back later.

So what is NServiceBus? It's a powerful, extensible framework that will help you to leverage the principles of Service-oriented architecture (SOA) to create distributed systems that are more reliable, more extensible, more scalable, and easier to update.

That's all well and good, but if you're just picking up this book for the first time, why should you care? What problems does it solve? How will it make your life better?

Ask yourself whether any of the following situations describe you:

- My code updates values in several tables in a transaction, which acquires locks on those tables, so it frequently runs into deadlocks under load. I've optimized all the queries that I can. The transaction keeps the database consistent but the user gets an ugly exception and has to retry what they were doing, which doesn't make them very happy.
- Our order processing system sometimes fails on the third of three database calls. The transaction rolls back and we log the error, but we're losing money because the end user doesn't know if their order went through or not, and they're not willing to retry for fear of being double charged, so we're losing business to our competitor.
- We built a system to process images for our clients. It worked fine for a while but now we've become a victim of our own success. We designed it to be multithreaded (which was no small feat!) but we already maxed out the original server it was running on, and at the rate we're adding clients it's only a matter of time until we max out this one too. We need to scale it out to run on multiple servers but have no idea how to do it.
- We have a solution that is integrating with a third-party web service, but when we call the web service we also need to update data in a local database. Sometimes the web service times out, so our database transaction rolls back, but sometimes the web service call does actually complete at the remote end, so now our local data and our third-party provider's data are out of sync.
- We're sending emails as part of a complex business process. It is designed to be retried in the event of a failure, but now customers are complaining that they're receiving duplicate emails, sometimes dozens of them. A failure occurs after the email is sent, the process is retried, and the email is sent over and over until the failure no longer occurs.
- I have a long-running process that gets kicked off from a web application. The website sits on an interstitial page while the backend process runs, similar to what you would see on a travel site when you search for plane tickets. This process is difficult to set up and fairly brittle. Sometimes the backend process fails to start and the web page just spins forever.
- We added latitude and longitude to our customer database, but now it is a nightmare to try to keep that information up-to-date. When a customer's address changes, there is nothing to make sure the location information is also recalculated. There are dozens of procedures that update the customer address, and not all of them are under our department's control.

If any of these situations has you nodding your head in agreement, I invite you to read on. NServiceBus will help you to make multiple transactional updates utilizing the principle of eventual consistency so that you do not encounter deadlocks. It will ensure that valuable customer order data is not lost in the deep dark depths of a multi-megabyte log file. By the end of the book, you'll be able to build systems that can easily scale out, as well as up.
You'll be able to reliably perform non-transactional tasks such as calling web services and sending emails. You will be able to easily start up long-running processes in an application server layer, leaving your web application free to process incoming requests, and you'll be able to unravel your spaghetti codebases into a logical system of commands, events, and handlers that will enable you to more easily add new features and version the existing ones.

You could try to do this all on your own by rolling your own messaging infrastructure and carefully applying the principles of service-oriented architecture, but that would be really time consuming. NServiceBus is the easiest solution to solve the aforementioned problems without having to expend too much effort to get it right, allowing you to put your focus on your business concerns, where it belongs. So if you're ready, let's get started creating an NServiceBus solution.

Getting the code

We will be covering a lot of information very quickly in this article, so if you see something that doesn't immediately make sense, don't panic! Once we have the basic example in place, we will loop back and explain some of the finer points more completely.

There are two main ways to get the NServiceBus code integrated with your project: by downloading the Windows Installer package, and via NuGet. I recommend you use Windows Installer the first time to ensure that your machine is set up properly to run NServiceBus, and then use NuGet to actually include the assemblies in your project.

Windows Installer automates quite a bit of setup for you, all of which can be controlled through the advanced installation options:

- NServiceBus binaries, tools, and sample code are installed.
- The NServiceBus Management Service is installed to enable integration with ServiceInsight.
- Microsoft Message Queueing (MSMQ) is installed on your system if it isn't already.
MSMQ provides the durable, transactional messaging that is at the core of NServiceBus.
- The Distributed Transaction Coordinator (DTC) is configured on your system. This will allow you to receive MSMQ messages and coordinate data access within a transactional context.
- RavenDB is installed, which provides the default persistence mechanism for NServiceBus subscriptions, timeouts, and saga data.
- NServiceBus performance counters are added to help you monitor NServiceBus performance.

Download the installer from and install it on your machine. After the install is complete, everything will be accessible from your Start Menu. Navigate to All Programs | Particular Software | NServiceBus as shown in the following screenshot:

The install package includes several samples that cover all the basics as well as several advanced features. The Video Store sample is a good starting point. Multiple versions of it are available for different message transports that are supported by NServiceBus. If you don't know which one to use, take a look at VideoStore.Msmq. I encourage you to work through all of the samples, but for now we are going to roll our own solution by pulling in the NServiceBus NuGet packages.

NServiceBus NuGet packages

Once your computer has been prepared for the first time, the most direct way to include NServiceBus within an application is to use the NuGet packages. There are four core NServiceBus NuGet packages:

- NServiceBus.Interfaces: This package contains only interfaces and abstractions, but not actual code or logic. This is the package that we will use for message assemblies.
- NServiceBus: This package contains the core assembly with most of the code that drives NServiceBus except for the hosting capability. This is the package we will reference when we host NServiceBus within our own process, such as in a web application.
- NServiceBus.Host: This package contains the service host executable.
With the host we can run an NServiceBus service endpoint from the command line during development, and then install it as a Windows service for production use.
- NServiceBus.Testing: This package contains a framework for unit testing NServiceBus endpoints and sagas.

The NuGet packages will also attempt to verify that your system is properly prepared through PowerShell cmdlets that ship as part of the package. However, if you are not running Visual Studio as an Administrator, this can be problematic, as the tasks they perform sometimes require elevated privileges. For this reason it's best to run Windows Installer before getting started.

Creating a message assembly

The first step to creating an NServiceBus system is to create a messages assembly. Messages in NServiceBus are simply plain old C# classes. Like the WSDL document of a web service, your message classes form a contract by which services communicate with each other.

For this example, let's pretend we're creating a website like many on the Internet, where users can join and become a member. We will construct our project so that the user is created in a backend service and not in the main code of the website. Follow these steps to create your solution:

- In Visual Studio, create a new class library project. Name the project UserService.Messages and the solution simply Example. This first project will be your messages assembly.
- Delete the Class1.cs file that came with the class project.
- From the NuGet Package Manager Console, run this command to install the NServiceBus.Interfaces package, which will add the reference to NServiceBus.dll:

  PM> Install-Package NServiceBus.Interfaces -ProjectName UserService.Messages

- Add a new folder to the project called Commands.
- Add a new class to the Commands folder called CreateNewUserCmd.cs.
- Add using NServiceBus; to the using block of the class file. It is very helpful to do this first so that you can see all of the options available with IntelliSense.
- Mark the class as public and implement ICommand. This is a marker interface, so there is nothing you need to implement.
- Add the public properties for EmailAddress and Name.

When you're done, your class should look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NServiceBus;

namespace UserService.Messages.Commands
{
    public class CreateNewUserCmd : ICommand
    {
        public string EmailAddress { get; set; }
        public string Name { get; set; }
    }
}

Congratulations! You've created a message! This will form the communication contract between the message sender and receiver. Unfortunately, we don't have enough to run yet, so let's keep moving.

Creating a service endpoint

Now we're going to create a service endpoint that will handle our command message.

- Add a new class library project to your solution. Name the project UserService.
- Delete the Class1.cs file that came with the class project.
- From the NuGet Package Manager Console window, run this command to install the NServiceBus.Host package:

  PM> Install-Package NServiceBus.Host -ProjectName UserService

- Take a look at what the host package has added to your class library. Don't worry; we'll cover this in more detail later.
  - References to NServiceBus.Host.exe, NServiceBus.Core.dll, and NServiceBus.dll
  - An App.config file
  - A class named EndpointConfig.cs
- In the service project, add a reference to the UserService.Messages project you created before.
- Right-click on the project file and click on Properties, then in the property pages, navigate to the Debug tab and enter NServiceBus.Lite under Command line arguments. This tells NServiceBus not to run the service in production mode while we're just testing. This may seem obvious, but this is part of the NServiceBus promise to be safe by default, meaning you won't be able to mess up when you go to install your service in production.

Creating a message handler

Now we will create a message handler within our service.
- Add a new class to the service called UserCreator.cs.
- Add three namespaces to the using block of the class file:

  using NServiceBus;
  using NServiceBus.Logging;
  using UserService.Messages.Commands;

- Mark the class as public.
- Implement IHandleMessages<CreateNewUserCmd>.
- Implement the interface using Visual Studio's tools. This will generate a Handle(CreateNewUserCmd message) stub method.

Normally we would want to create the user here with calls to a database, but we don't have time for that! We're on a mission, so let's just demonstrate what would be happening by logging a message.

It is worth mentioning that a new feature in NServiceBus 4.0 is the ability to use any logging framework you like, without being dependent upon that framework. NServiceBus can automatically hook up to log4net or NLog; just add a reference to either assembly, and NServiceBus will find it and use it. You can even roll your own logging implementation if you wish. However, it is not required to pick a logging framework at all. NServiceBus internalizes log4net, which it will use (via the NServiceBus.Logging namespace) if you don't explicitly include a logging library. This is what we will be doing in our example.
Now let's finish our fake implementation for the handler:

- Above the Handle method, add an instance of a logger:

  private static readonly ILog log = LogManager.GetLogger(typeof(UserCreator));

- To handle the command, remove NotImplementedException and replace it with:

  log.InfoFormat("Creating user '{0}' with email '{1}'", message.Name, message.EmailAddress);

When you're done, your class should look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using UserService.Messages.Commands;
using NServiceBus;
using NServiceBus.Logging;

namespace UserService
{
    public class UserCreator : IHandleMessages<CreateNewUserCmd>
    {
        private static readonly ILog log = LogManager.GetLogger(typeof(UserCreator));

        public void Handle(CreateNewUserCmd message)
        {
            log.InfoFormat("Creating user '{0}' with email '{1}'", message.Name, message.EmailAddress);
        }
    }
}

Now we have a command message and a service endpoint to handle it. It's OK if you don't understand how it all connects quite yet. Next we need to create a way to send the command.

Sending a message from an MVC application

An ASP.NET MVC web application will be the user interface for our system. It will send a command to create a new user to the service layer, which will be in charge of processing it. Normally this would come from a user registration form, but in order to keep the example to the point, we'll take a shortcut and enter the information as query string parameters, and return data as JSON.

Because we will be viewing JSON data directly within a browser, it would be a good idea to make sure your browser supports displaying JSON directly instead of downloading it. Firefox and Chrome natively display JSON data as plain text, which is readable but not very useful. Both browsers have an extension available called JSONView (although they are unrelated) which allows you to view the data in a more readable, indented format.
Either of these options will work fine, so you can use whichever browser you prefer. Beware that Internet Explorer will try to download JSON data to a file, which makes it cumbersome to view the output.

Creating the MVC website

First, follow these directions to get the MVC website set up. You can use either MVC 3 or MVC 4, but for the example we will be using MVC 3.

- Add a new ASP.NET MVC project to your solution and name it ExampleWeb. Select the Empty template and the Razor view engine.
- From the NuGet Package Manager Console, run this command to install the NServiceBus package:

  PM> Install-Package NServiceBus -ProjectName ExampleWeb

- Add a reference to the UserService.Messages project you created before.

Because the MVC project isn't fully controlled by NServiceBus, it is a little more involved to set up. First, create a class file within the root of your MVC application and name it ServiceBus.cs, then fill it with this code. For the moment, don't worry about what it does.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using NServiceBus;
using NServiceBus.Installation.Environments;

namespace ExampleWeb
{
    public static class ServiceBus
    {
        public static IBus Bus { get; private set; }

        public static void Init()
        {
            if (Bus != null)
                return;

            lock (typeof(ServiceBus))
            {
                if (Bus != null)
                    return;

                Bus = Configure.With()
                    .DefineEndpointName("ExampleWeb")
                    .DefaultBuilder()
                    .UseTransport<Msmq>()
                    .PurgeOnStartup(true)
                    .UnicastBus()
                    .CreateBus()
                    .Start(() => Configure.Instance
                        .ForInstallationOn<Windows>()
                        .Install());
            }
        }
    }
}

That was certainly a mouthful! Don't worry about remembering all this; it's part of a fluent API that makes it pretty easy to discover things you need to configure through IntelliSense. For now, suffice it to say that this is the code that initializes the service bus within our MVC application, and provides access to a single static instance of the IBus interface that we can use to access the service bus.
If we were to compare the service bus to Ethernet (which is a fairly apt comparison) we have just detailed how to turn on the Ethernet card. Now we need to call the Init() method from our Global.asax.cs file so that the Bus property is initialized when the application starts up.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    ServiceBus.Init();
}

Now NServiceBus has been set up to run in the web application, so we can send our message. Create a HomeController class and add these methods to it:

public ActionResult Index()
{
    return Json(new { text = "Hello world." });
}

public ActionResult CreateUser(string name, string email)
{
    var cmd = new CreateNewUserCmd
    {
        Name = name,
        EmailAddress = email
    };
    ServiceBus.Bus.Send(cmd);
    return Json(new { sent = cmd });
}

protected override JsonResult Json(object data, string contentType, System.Text.Encoding contentEncoding, JsonRequestBehavior behavior)
{
    return base.Json(data, contentType, contentEncoding, JsonRequestBehavior.AllowGet);
}

The first and last methods aren't too important. The first returns some static JSON for the /Home/Index action because we aren't going to bother adding a view for it. The last one is for convenience, to make it easier to return JSON data as a result of an HTTP GET request. The CreateUser method is the important one: this is where we create an instance of our command class and send it on the bus via the static instance ServiceBus.Bus. Lastly we return the command to the browser as JSON data so that we can see what we created.

The last step is to add some NServiceBus configuration to the MVC application's Web.config file. We need to add two configuration sections. We already saw MessageForwardingInCaseOfFaultConfig in the app.config file that NuGet added to the service project, so we can copy it from there.
However, we need to add a new section called UnicastBusConfig anyway, so the XML for both is included here for convenience:

<MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />

<UnicastBusConfig>
  <MessageEndpointMappings>
    <add Assembly="UserService.Messages" Endpoint="UserService" />
  </MessageEndpointMappings>
</UnicastBusConfig>

<!-- Rest of Web.config -->
</configuration>

The ErrorQueue attribute determines what happens to a message that fails: it is forwarded to the error queue. The MessageEndpointMappings entry determines routing for messages. For now it is sufficient to say that it means that all messages found in the UserService.Messages assembly will be sent to the UserService endpoint, which is our service project.

NServiceBus also includes PowerShell cmdlets that make it a lot easier to add these configuration blocks. You could generate these sections using the Add-NServiceBusMessageForwardingInCaseOfFaultConfig cmdlet and the Add-NServiceBusUnicastBusConfig cmdlet.

Running the solution

One thing that will be useful when developing NServiceBus solutions is being able to specify multiple startup projects for a solution.

- In the Solution Explorer, right-click on the solution file and click on Properties.
- On the left, navigate to Common Properties | Startup Project.
- Select the Multiple startup projects radio button.
- Set the Action for the service project and the MVC project to Start and order them so that the MVC project starts last.
- Click on OK.

Now, build the solution if you haven't already, and assuming there are no compilation errors, click on the Start Debugging button or press F5.

So what happens now? Let's take a look. When you run the solution, both the MVC website and a console window should appear as shown in the preceding screenshots. As we can see, the browser window isn't terribly exciting right now; it's just showing the JSON results of the /Home/Index action. The console window is far more interesting. If you remember, we never created a console application; our service endpoint was a class project.
When we included the NServiceBus.Host NuGet package, a reference to NServiceBus.Host.exe was added to the class project (remember, a .NET executable is also an assembly that can be referenced) and the project was set to run that executable when you debug it.

While it might not be easy to see in the screenshot, NServiceBus uses different colors to log messages of different levels of severity. In the screenshot, INFO messages are logged in green, and WARN messages are displayed in yellow. In addition, there can be DEBUG messages displayed in white, or ERROR and FATAL messages which are both logged in red. By default, the INFO log level is used for display, which is filtering out all the DEBUG messages here, and luckily we don't have any ERROR or FATAL messages!

The entire output is too much to show in a screenshot. It's worth reading through, even though you may not understand everything that's going on quite yet. Here are some of the important points:

- NServiceBus reports how many total message types it has found. In my example, four messages were found. Only one of those is ours; the rest are administrative messages used internally by NServiceBus. If this had said zero messages were found, that would have been distressing!
- The License Manager checks for a valid license. You can get a free developer license that allows unrestricted non-production use for 90 days. At the end of that, you can get a new one for another 90 days. For all licensing concerns, see the NServiceBus website.
- The status of several features is listed for debugging purposes.
- NServiceBus checks for the existence of several queues, and creates them if they do not exist.
In fact, if we go to the Message Queuing manager, we will see that the following private queues have now been created:

- audit
- error
- exampleweb
- exampleweb.retries
- exampleweb.timeouts
- exampleweb.timeoutsdispatcher
- userservice
- userservice.retries
- userservice.timeouts
- userservice.timeoutsdispatcher

That's a lot of plumbing that NServiceBus takes care of for us! But this just gets the endpoint ready to go. We still need to send a message!

Visual Studio will likely give you a different port number for your MVC project than in the example, so change the URL in your browser to the following, keeping the host and port the same. Feel free to use your own name and email address:

/Home/CreateUser?name=David&email=david@example.com

Look at what happens in your service window:

INFO UserService.UserCreator [(null)] <(null)> - Creating user 'David' with email 'david@example.com'

This might seem simple, but consider what had to happen for us to see this message. First, in the MVC website, an instance of our message class was serialized to XML, and then that payload was added to an MSMQ message with enough metadata to describe where it came from and where it needed to go. The message was sent to an input queue for our background service, where it waited to be processed until the service was ready for it. The service pulled it off the queue within a transaction, deserialized the XML payload, and was able to determine a handler that could process the message. Finally, our message handler was invoked, which resulted in the message being output to the log. This is a great start, but there is a great deal more to discover.

Summary

In this article, we created an MVC web application and an NServiceBus hosted service endpoint. Through the web application, we sent a command to the service layer to create a user, where we just logged the fact that the command was received, but in real life we would likely perform database work to actually create the user.
For our example, our service was running on the same computer, but our command could just as easily be sent to a different server, enabling us to offload work from our web server.
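To recap the journey the command just made, here is the round trip in pseudocode (my own summary of the steps described above, not actual NServiceBus API):

```
web     : cmd  = CreateNewUserCmd { Name, EmailAddress }      // built in HomeController
web     : body = serialize_to_xml(cmd)                        // payload plus metadata
web     : msmq_send("userservice", body)                      // routed via UnicastBusConfig
service : msg  = msmq_receive()                               // inside a transaction
service : cmd  = deserialize_xml(msg.body)
service : h    = resolve(IHandleMessages<CreateNewUserCmd>)   // finds UserCreator
service : h.Handle(cmd)                                       // logs (or creates) the user
```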
https://www.packtpub.com/books/content/getting-ibus
Opened 10 years ago
Closed 10 years ago
Last modified 9 years ago

#683 closed defect (fixed): [patch] Saving with custom db_column fails

Description

Given

class Poll(meta.Model):
    poll_id = meta.IntegerField(db_column="poll_pk", primary_key=True)
    question = meta.CharField(maxlength=200)
    pub_date = meta.DateTimeField('date published')

this fails:

from django.models.polls import polls
from datetime import datetime
p = polls.Poll(question="spam?", pub_date=datetime(2005,10,22,19,22))
p.save()

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.4/site-packages/django/utils/functional.py", line 3, in _curried
    return args[0](*(args[1:]+moreargs), **dict(kwargs.items() + morekwargs.items()))
  File "/usr/lib/python2.4/site-packages/django/core/meta/__init__.py", line 793, in method_save
    pk_val = getattr(self, opts.pk.column)
AttributeError: 'Poll' object has no attribute 'poll_pk'

but:

p.poll_pk = None
p.save()
p2 = polls.get_object(question__exact="spam?")
>>> p2.poll_pk
2

Attachments (2)

Change History (7)

comment:1 Changed 10 years ago by jdunck@…

Changed 10 years ago by jdunck@…: patch changing .column to .name when it seems to mean that.

comment:2 Changed 10 years ago by hugo: Summary changed from "Saving with PK other than "id" fails" to "[patch] Saving with PK other than "id" fails"

comment:3 Changed 10 years ago by adrian: Status changed from new to assigned

comment:4 Changed 10 years ago by adrian: Summary changed from "[patch] Saving with PK other than "id" fails" to "[patch] Saving with custom db_column fails"

Changed 10 years ago by adrian: A patch that fixes it by introducing Field.attname

comment:5 Changed 10 years ago by adrian: Resolution set to fixed; Status changed from assigned to closed

Note: See TracTickets for help on using tickets.

OK, unless I'm hugely wrong, this is the tip of an iceberg. Looking in core/meta/__init__.py, it seems to me that there're lots of places using field.column when it really means field.name.
Of course, meta/__init__.py is making my head hurt, so I could be wrong.

Another example of this problem: On latest trunk, given: result: (Note the prints coming out of the .add_choice are my attempts to determine what's going on).

In the hopes that it saves some time, I'm attaching a patch to meta/__init__.py; I basically changed every place that seemed like it meant ".name" when using ".column"; 9 tests fail, but 8 of these are many_to_one_null not having "a.reporter_id", which I think is related to the problem above, which I can't wrap my head around right now.
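The patch referenced in comment:4 closed this ticket by introducing Field.attname: the attribute name used on model instances, as distinct from the database column used in SQL. Here is a toy sketch of that distinction (illustrative only, not Django's actual Field implementation):

```python
# Toy model of the identifiers involved in this ticket (not Django code):
#   name      - the field name declared on the model class
#   db_column - optional override for the database column
#   column    - what SQL statements should use
#   attname   - what instance attributes are keyed by (the fix: use this,
#               not `column`, when doing getattr/setattr on model objects)
class Field:
    def __init__(self, name, db_column=None):
        self.name = name
        self.db_column = db_column
        self.attname = name  # python-side attribute name

    @property
    def column(self):
        return self.db_column or self.name


pk = Field("poll_id", db_column="poll_pk")
# Saving must read the primary key via attname, not column:
assert pk.attname == "poll_id"   # getattr(poll, "poll_id") works
assert pk.column == "poll_pk"    # SQL uses the real column name
```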
https://code.djangoproject.com/ticket/683
%load_ext autoreload %autoreload 2 %matplotlib inline %config InlineBackend.figure_format = 'retina' from IPython.display import YouTubeVideo, display YouTubeVideo("6pnl7Eu2wN0") Eliminating for-loops that have carry-over using lax.scan We are now going to see how we can eliminate for-loops that have carry-over using lax.scan. From the JAX docs, lax.scan replaces a for-loop with carry-over, with some of my own annotations added in for clarity: Scan a function over leading array axes while carrying along state. The semantics are described as follows: def scan(f, init, xs, length=None): if xs is None: xs = [None] * length carry = init ys = [] for x in xs: carry, y = f(carry, x) # carry is the carryover ys.append(y) # the `y`s get accumulated into a stacked array return carry, np.stack(ys) A key requirement of the function f, which is the function that gets scanned over the array xs, is that it must have only two positional arguments in there, one for carry and one for x. You'll see how we can thus apply functools.partial to construct functions that have this signature from other functions that have more arguments present. Let's see some concrete examples of this in action. Example: Cumulative Summation One example where we might use a for-loop is in the cumulative sum or product of an array. Here, we need the current loop information to update the information from the previous loop. Let's see it in action for the cumulative sum: import jax.numpy as np a = np.array([1, 2, 3, 5, 7, 11, 13, 17]) result = [] res = 0 for el in a: res += el result.append(res) np.array(result) WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) 
DeviceArray([ 1,  3,  6, 11, 18, 29, 42, 59], dtype=int32)

This is identical to the cumulative sum:

np.cumsum(a)

DeviceArray([ 1,  3,  6, 11, 18, 29, 42, 59], dtype=int32)

Now, let's write it using lax.scan, so we can see the pattern in action:

from jax import lax

def cumsum(res, el):
    """
    - `res`: The result from the previous loop.
    - `el`: The current array element.
    """
    res = res + el
    return res, res  # ("carryover", "accumulated")

result_init = 0
final, result = lax.scan(cumsum, result_init, a)
result

DeviceArray([ 1,  3,  6, 11, 18, 29, 42, 59], dtype=int32)

As you can see, the scanned function has to return two things:

- One object that gets carried over to the next loop (carryover), and
- Another object that gets "accumulated" into an array (accumulated).

The starting initial value, result_init, is passed into the scanfunc as res on the first call of the scanfunc. On subsequent calls, the first res is passed back into the scanfunc as the new res.

Exercise 1: Simulating compound interest

We can use lax.scan to generate data that simulates the generation of wealth by compound interest. Here's an implementation using a plain vanilla for-loop:

wealth_record = []
starting_wealth = 100.0
interest_factor = 1.01
num_timesteps = 100
prev_wealth = starting_wealth

for t in range(num_timesteps):
    new_wealth = prev_wealth * interest_factor
    wealth_record.append(prev_wealth)
    prev_wealth = new_wealth

wealth_record = np.array(wealth_record)

Now, your challenge is to implement it in a lax.scan form. Implement the wealth_at_time function below.

from functools import partial

def wealth_at_time(prev_wealth, time, interest_factor):
    # The lax.scannable function to compute wealth at a given time.
    # your answer here
    pass

# Comment out the import to test your answer
from dl_workshop.jax_idioms import lax_scan_ex_1 as wealth_at_time

wealth_func = partial(wealth_at_time, interest_factor=interest_factor)
timesteps = np.arange(num_timesteps)
final, result = lax.scan(wealth_func, init=starting_wealth, xs=timesteps)

assert np.allclose(wealth_record, result)

The two are equivalent, so we know we have the lax.scan implementation right.

import matplotlib.pyplot as plt

plt.plot(wealth_record, label="for-loop")
plt.plot(result, label="lax.scan")
plt.legend();

Example: Simulating compound interest from multiple starting points

Previously, we ran one simulation of wealth generation by compound interest from one starting amount of money. Now, let's simulate wealth generation for different starting wealth levels; one may choose the 300 starting points however one likes. This will be a demonstration of how to compose lax.scan with vmap to do computation without loops. To do so, you'll likely want to start with a function that accepts a scalar starting wealth and generates the simulated time series from there, and then vmap that function across multiple starting points (which is an array itself).

from jax import vmap

def simulate_compound_interest(
    starting_wealth: np.ndarray, timesteps: np.ndarray
):
    final, result = lax.scan(wealth_func, init=starting_wealth, xs=timesteps)
    return final, result

num_timesteps = np.arange(200)
starting_wealths = np.arange(300).astype(float)

simulation_func = partial(simulate_compound_interest, timesteps=np.arange(200))
final, growth = vmap(simulation_func)(starting_wealths)
growth.shape

(300, 200)

plt.plot(growth[1])
plt.plot(growth[2])
plt.plot(growth[3]);

Exercise 2: Stick breaking process

The stick breaking process is one that is important in Bayesian non-parametric modelling, where we want to model something that may have potentially an infinite number of components while being biased towards a smaller subset of components.
The stick-breaking process uses the following generative process:

- Take a stick of length 1.
- Draw a number between 0 and 1 from a Beta distribution (we will modify this step for this notebook).
- Break that fraction of the stick, and leave it aside in a pile.
- Repeat steps 2 and 3 with the fraction leftover after breaking the stick.

We repeat ad infinitum (in theory) or until a pre-specified large number of stick breaks have happened (in practice).

In the exercise below, your task is to write the stick-breaking process in terms of a lax.scan operation. Because we have not yet covered drawing random numbers using JAX, the breaking fraction will be a fixed variable rather than a random variable. Here's the vanilla NumPy + Python equivalent for you to reference.

# NumPy equivalent
num_breaks = 30
breaking_fraction = 0.1

sticks = []
stick_length = 1.0
for i in range(num_breaks):
    stick = stick_length * breaking_fraction
    sticks.append(stick)
    stick_length = stick_length - stick
sticks = np.array(sticks)
sticks

DeviceArray([0.1       , 0.09      , 0.081     , 0.0729    , 0.06561   ,
             0.059049  , 0.0531441 , 0.04782969, 0.04304672, 0.03874205,
             0.03486785, 0.03138106, 0.02824295, 0.02541866, 0.02287679,
             0.02058911, 0.0185302 , 0.01667718, 0.01500946, 0.01350852,
             0.01215767, 0.0109419 , 0.00984771, 0.00886294, 0.00797664,
             0.00717898, 0.00646108, 0.00581497, 0.00523348, 0.00471013],
            dtype=float32)

def lax_scan_ex_2(num_breaks: int, frac: float):
    # Your answer goes here!
    pass

# Comment out the import if you want to test your answer.
from dl_workshop.jax_idioms import lax_scan_ex_2

sticksres = lax_scan_ex_2(num_breaks, breaking_fraction)
assert np.allclose(sticksres, sticks)
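If you want a hint: the per-step function for lax.scan mirrors exactly one iteration of the NumPy loop above. Here is an illustrative sketch of that step in plain Python (it is not the notebook's packaged answer); wiring it up inside lax_scan_ex_2 would then look something like lax.scan(break_stick, 1.0, np.arange(num_breaks)).

```python
breaking_fraction = 0.1  # fixed break fraction, as in the exercise

def break_stick(stick_length, _):
    # One scan step: break off a fixed fraction of whatever stick remains.
    piece = stick_length * breaking_fraction
    # Return (carryover, accumulated): carry the shortened stick forward,
    # accumulate the broken-off piece into the output array.
    return stick_length - piece, piece
```

The second argument is ignored; it only exists because lax.scan feeds each element of xs into the scanned function.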
https://ericmjl.github.io/dl-workshop/02-jax-idioms/02-loopy-carry.html
Getting Started on Heroku with Kotlin

Introduction

This tutorial will have you deploying a Kotlin app in minutes. Hang on to learn how it all works, so you can make the most out of Heroku. The tutorial assumes that you have:

- a free Heroku account
- Java 8 installed.

The tutorial also uses the Heroku CLI; once installed, you can use the heroku command from your command shell.

Create an app on Heroku, which prepares Heroku to receive your source code:

$ heroku create
Creating app... done, ⬢ young-spire-55658
Git remote heroku added

When you create an app, a git remote (called heroku) is also created and associated with your local git repository. Heroku generates a random name (in this case young-spire-55658).

Now deploy your code:

$ git push heroku master
remote: -----> Gradle app detected
remote: -----> Spring Boot detected
remote: -----> Installing OpenJDK 1.8... done
remote: -----> Building Gradle app...
remote: -----> executing ./gradlew build -x test
...
remote:        BUILD SUCCESSFUL
remote:
remote:        Total time: 56.003 secs
remote: -----> Discovering process types
remote:        Procfile declares types     -> (none)
remote:        Default types for buildpack -> web
remote:
remote: -----> Compressing...
remote:        Done: 101.2M

The app is now deployed. You can watch it boot in the logs with heroku logs --tail; the Spring Boot banner appears as the app starts up:

2017-06-13T19:42:30.937420+00:00 app[web.1]: _ _ _
2017-06-13T19:42:30.937429+00:00 app[web.1]: | | | | | |
2017-06-13T19:42:30.937430+00:00 app[web.1]: | |__| | ___ _ __ ___ | | ___ _
2017-06-13T19:42:30.937430+00:00 app[web.1]: | __ |/ _ \ '__/ _ \| |/ / | | |
2017-06-13T19:42:30.937431+00:00 app[web.1]: | | | | __/ | | (_) | <| |_| |
2017-06-13T19:42:30.937432+00:00 app[web.1]: |_| |_|\___|_| \___/|_|\_\\__,_|
2017-06-13T19:42:30.937433+00:00 app[web.1]:
2017-06-13T19:42:30.937433+00:00 app[web.1]: :: Built with Spring Boot :: 1.5.3.RELEASE
2017-06-13T19:42:30.937462+00:00 app[web.1]: ...
2017-06-13T19:42:47.268525+00:00 app[web.1]: 2017-06-13 19:42:47.268 INFO 4 --- [main] com.example.Application$Companion : Started Application.Companion in 17.773 seconds (JVM running for 19.622)
2017-06-13T19:42:47.542769+00:00 app[web.1]: ...

Heroku recognizes an app as Gradle by the existence of a gradlew or build.gradle file in the root directory.
The demo app you deployed already has a build.gradle (see it here). Here's an excerpt:

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    compile 'org.springframework.boot:spring-boot-starter'
    compile 'org.springframework.boot:spring-boot-starter-web'
    ...
}

The build.gradle file specifies dependencies that should be installed with your application. When an app is deployed, Heroku reads this file and installs the dependencies using the ./gradlew command.

Another file, system.properties, determines the version of Java to use (Heroku supports many different versions). The contents of this file, which is optional, is quite straightforward:

java.runtime.version=1.8

Run the Gradle build in your local directory to install the dependencies, preparing your system for running the app locally. Note that this app requires Java 8, but you can push your own apps using a different version of Java.

On Windows, run this command:

> gradlew.bat clean build

On Mac and Linux, run this command:

$ ./gradlew clean build

In either case, you'll see output like this:

:clean
:compileKotlin
Using kotlin incremental compilation
:compileJava NO-SOURCE
:copyMainKotlinClasses
:processResources
:classes
:jar
:findMainClass
:startScripts
:distTar
:distZip
:bootRepackage
:assemble
:compileTestKotlin NO-SOURCE
:compileTestJava NO-SOURCE
:copyTestKotlinClasses
:processTestResources NO-SOURCE
:testClasses UP-TO-DATE
:test NO-SOURCE
:check UP-TO-DATE
:build

BUILD SUCCESSFUL

Total time: 6.177 secs

If you see an error such as Unsupported major.minor version 52.0, then Gradle is trying to use Java 7. Check that your JAVA_HOME environment variable is set correctly.

The Gradle process, by virtue of the Spring Boot plugin, will package the application in the build/libs/kotlin-getting-started-1.0.jar file. If you're not using Spring Boot, you will need to vendor your dependencies manually as described in the Deploying Gradle Apps on Heroku guide.
Once dependencies are installed, you will be ready to run your app locally.

Run the app locally

Now start your application locally using the heroku local command, which was installed as part of the Heroku CLI (make sure you've already run gradlew clean build):

$ heroku local

Open the local URL that heroku local prints with your web browser. You should see your app running locally.

To stop the app from running locally, go back to your terminal window and press Ctrl+C to exit.

Push local changes

In this step, you'll propagate a local change to the application through to Heroku. Add the jscience library as a dependency in build.gradle. The dependencies section should include something like this:

compile 'org.jscience:jscience:4.3.1'

Modify src/main/kotlin/com/example/Controller.kt so that it imports this library at the start, by including the following imports:

import javax.measure.unit.SI
import javax.measure.quantity.Mass
import org.jscience.physics.model.RelativisticModel
import org.jscience.physics.amount.Amount

And modify the hello method so that it reads like this:

@RequestMapping("/hello")
internal fun hello(model: MutableMap<String, Any>): String {
    RelativisticModel.select()
    val m = Amount.valueOf("12 GeV").to(SI.KILOGRAM)
    model.put("science", "E=mc^2: 12 GeV = $m")
    return "hello"
}

Here's the final source code for Controller.kt - yours should look similar. Here's a diff of all the local changes you should have made.

Now test locally:

$ ./gradlew build
$ heroku local

Then commit the change with git and push it to Heroku, just as you deployed before.

Start a console

You can open a shell on a one-off dyno and check the Java version there:

$ heroku run bash
Running bash on ⬢ young-spire-55658... up
~ $ java -version
openjdk version "1.8.0_212-heroku"
OpenJDK Runtime Environment (build 1.8.0_212-heroku-b03)
OpenJDK 64-Bit Server VM (build 25.212-b03, mixed mode)

If you receive an error, Error connecting to process, then you may need to configure your firewall. Don't forget to type exit to exit the shell and terminate the dyno.

Define config vars

Heroku lets you externalise configuration - storing data such as encryption keys or external resource addresses in config vars. At runtime, config vars are exposed as environment variables to the application.
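Since config vars are plain environment variables at runtime, you can see the same mechanism with any process, independent of Heroku; this tiny shell illustration (my own example, not part of the tutorial) shows exactly what the JVM sees via System.getenv():

```shell
# Start a child process with ENERGY set, exactly as a dyno receives a
# config var, and read it back from the environment:
ENERGY="20 GeV" sh -c 'echo "ENERGY is $ENERGY"'
# prints: ENERGY is 20 GeV
```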
For example, modify src/main/kotlin/com/example/Controller.kt so that the hello method grabs an energy value from the ENERGY environment variable:

@RequestMapping("/hello")
internal fun hello(model: MutableMap<String, Any>): String {
    RelativisticModel.select()
    val energy = System.getenv().get("ENERGY")
    val m = Amount.valueOf(energy).to(SI.KILOGRAM)
    model.put("science", "E=mc^2: $energy = $m")
    return "hello"
}

Now compile the app again so that this change is integrated by running gradlew clean build. Set the config var on Heroku:

$ heroku config:set ENERGY="20 GeV"
Setting ENERGY and restarting ⬢ young-spire-55658... done, v10
ENERGY: 20 GeV

View the config vars that are set using heroku config:

$ heroku config
=== young-spire-55658 Config Vars
PAPERTRAIL_API_TOKEN: erdKhPeeeehIcdfY7ne
ENERGY:               20 GeV
....

Provision a database

A database is an add-on, and so you can find out a little more about the database provisioned for your app using the addons command in the CLI:

$ heroku addons
=== Resources for young-spire-55658
Plan                         Name                Price
---------------------------  ------------------  -----
heroku-postgresql:hobby-dev  singing-aptly-6889  free
papertrail:choklad           gazing-nimbly-9108  free

=== Attachments for young-spire-55658
Name        Add-on              Billing App
----------  ------------------  -----------------
DATABASE    singing-aptly-6889  young-spire-55658
PAPERTRAIL  gazing-nimbly-9108  young-spire-55658

Listing the config vars for your app will display the URL that your app is using to connect to the database, DATABASE_URL:

$ heroku config
=== young-spire-55658 Config Vars
DATABASE_URL: postgres://...

To use the database, declare the data source fields in your controller:

var dbUrl: String? = null

@Autowired
lateinit private var dataSource: DataSource

...
@RequestMapping("/db")
internal fun db(model: MutableMap<String, Any>): String {
    val connection = dataSource.getConnection()
    try {
        val stmt = connection.createStatement()
        stmt.executeUpdate("CREATE TABLE IF NOT EXISTS ticks (tick timestamp)")
        stmt.executeUpdate("INSERT INTO ticks VALUES (now())")
        val rs = stmt.executeQuery("SELECT tick FROM ticks")
        val output = ArrayList<String>()
        while (rs.next()) {
            output.add("Read from DB: " + rs.getTimestamp("tick"))
        }
        model.put("records", output)
        return "db"
    } catch (e: Exception) {
        model.put("message", e.message ?: "Unknown error")
        return "error"
    } finally {
        connection.close()
    }
}

@Bean
@Throws(SQLException::class)
fun dataSource(): DataSource {
    if (dbUrl?.isEmpty() ?: true) {
        return HikariDataSource()
    } else {
        val config = HikariConfig()
        config.jdbcUrl = dbUrl
        return HikariDataSource(config)
    }
}

Next steps

Here is some recommended reading; the last is a pointer to the main Java category here on Dev Center:

- Read How Heroku Works for a technical overview of the concepts you'll encounter while writing, configuring, deploying and running applications.
- Read Deploying Java Apps on Heroku to understand how to take an existing Java app and deploy it to Heroku.
- Visit the Java category to learn more about developing and deploying Java applications.
https://devcenter.heroku.com/articles/getting-started-with-kotlin
A development we were working on required us to submit an AS400 job and wait for that job to complete. It was quite a long process that involved numerous file operations and sockets communication to various sub systems. We had to submit the job using the SBMJOB command, which looked similar to this:

SBMJOB CMD(CALL PGM(ORD040CL) PARM('0000157')) JOB(CREATE_ORDS) JOBQ(*JOBD)

This is simple enough using either the COM interop library cwbx.dll (IBM AS/400 iSeries Access for Windows ActiveX Object Library) or the managed .Net provider IBM.Data.DB2.iSeries. However, because you are submitting the job rather than running it interactively, responsibility for processing that job is passed to the AS400. Using either the COM or .Net tools, there was no simple way to search for jobs by name or system.

After a bit of searching and speaking with smug Java colleagues, I learned that this is very simple to do in Java using the open source toolkit jtOpen. jtOpen exposes a number of classes that aren't available in either of the COM or .Net libraries. The main class I was interested in was JobList. IBM even have a nice example of exactly what I was trying to achieve: Using JobList to get a List of Jobs.

Since the code is open source, I thought I'd have a quick look and see if it would be a simple case to port the code I was interested in into a .Net library. Unfortunately, this wasn't the case - the JobList class actually performs a number of API calls in order to retrieve a handle to a list of objects and iterate them. You can view the source here if you like.

What I wanted to do was simply use the java code from within my .Net code. This is where IKVM comes to the rescue. IKVM ships with a Bytecode compiler named ikvmc. Using this tool, you can run the jar file through ikvmc and create a dll file that you can reference from your .Net code. Have a look at the command line options available for ikvmc.
The JobList class exists in the main jt400.jar file, so to convert it to a dll you would use syntax similar to:

ikvmc -target:library jt400.jar -out:jt400.dll

You might see a few warnings while creating the dll, but you should still find that jt400.dll has been created OK.

Once the dll is ready, go to Visual Studio and reference the file in your project. You might think that's all that's required, but you still need to add the references that provide the bridging between your .Net code and the java source. If you look in the bin directory of your IKVM download, you'll find numerous libraries prefixed IKVM.OpenJDK - you'll need to include some of these so that your jt400 library will work OK. I couldn't find any obvious reference for which files were required, so a bit of trial and error was needed here. Eventually, the minimum references that I needed for jt400 to work were...

IKVM.OpenJDK.Beans.dll
IKVM.OpenJDK.Core.dll
IKVM.OpenJDK.SwingAWT.dll
IKVM.OpenJDK.Text.dll
IKVM.OpenJDK.Util.dll
IKVM.OpenJDK.Runtime.dll

Once these references are included, you should be able to use the library OK.
Taking the IBM example link from earlier in this article, you could use the library to find jobs that match a name by writing a .Net method similar to:

[Serializable]
public class JobDetails
{
    public string Status { get; set; }
    public string Number { get; set; }
    public string Name { get; set; }
}

/// <summary>
/// Gets a list of jobs from the target server that match the specified JOB_NAME
/// </summary>
/// <param name="system">The name of the AS400 target server to retrieve the job list from</param>
/// <param name="user">The security user to authenticate with the target server</param>
/// <param name="password">The security password to authenticate with the target server</param>
/// <param name="jobName">The name of the job to limit results by</param>
/// <returns>A list of jobs that match the specified criteria</returns>
public List<JobDetails> GetJobsByName(string system, string user, string password, string jobName)
{
    List<JobDetails> results = new List<JobDetails>();

    AS400 server = new AS400();
    server.setSystemName(system);
    server.setUserId(user);
    server.setPassword(password);

    JobList list = new JobList(server);
    list.addJobSelectionCriteria(JobList.SELECTION_JOB_NAME, jobName);

    try
    {
        Enumeration items = list.getJobs();
        while (items.hasMoreElements())
        {
            Job job = (Job)items.nextElement();
            var details = new JobDetails
            {
                Name = job.getName().Trim(),
                Number = job.getNumber().Trim(),
                Status = job.getStatus().Trim()
            };
            results.Add(details);
        }

        if (server.isConnected())
        {
            server.disconnectAllServices();
        }
    }
    catch (Exception ex)
    {
        throw new ApplicationException(
            string.Format("Failed to retrieve jobs from server {0}", system), ex);
    }

    return results;
}

There are a number of attributes you can use to filter and search the jobs on the server; however, for this example I only needed to find jobs that matched a specified JOB_NAME.
I created a simple class to hold the Job Details I was interested in (Status, Number and Name) and had my method return that rather than anything from the jt400 dll. This works really well and we can now find all jobs that match a specified job name from our .Net code - happy days.

The only downside to using jtOpen in .Net is that I don't particularly like mixing the Java-cased API naming with .Net, but there's no way around this if you want to consume the library without porting it by hand. Instead, think about keeping code that uses jtOpen separate from your main application code and write a wrapper method similar to GetJobsByName that hides the jtOpen syntax and returns .Net objects you have created. This will keep your source clean and consistent.

Hope that helps anyone else who wants to use jtOpen in .Net!

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

A tip that surfaced in the article's discussion: IKVM lets you set Java system properties such as java.net.preferIPv4Stack=true from your application's configuration file, using the ikvm: key prefix:

<appSettings>
  <add key="ikvm:java.net.preferIPv4Stack" value="true" />
</appSettings>
http://www.codeproject.com/Articles/339899/Using-jtOpen-from-Net-code-IKVM?fid=1692642&df=90&mpp=10&sort=Position&spc=None&tid=4192038
Transpiling EcmaScript6 code to ES5 using Babel

If you watch this blog, you have probably already heard of ng-admin, the REST-based admin panel powered by Angular.js. Developed in good old EcmaScript5 JavaScript, it was time to make a step forward, introducing ES6 and its wide set of new features. We, at marmelab, spent the last two weeks updating the ng-admin configuration classes to ES6. We voluntarily restricted the perimeter, to do the spadework for this new JavaScript version. And, as the admin configuration is independent of Angular.js, it was the perfect target for our experiment.

As always in web development, we can't use a new technology, no matter how exciting, directly out of the box. To ensure browser compatibility, we need to transpile it into more common JavaScript. So, even if we write our code in ES6, we have to include a transpiling phase to convert it into pure ES5 code, understandable by all browsers.

Babel vs Traceur

There are two main ES6-to-ES5 transpilers: Babel (formerly known as 6to5) and Traceur. We had to pick one of them: Babel.

The main reason we chose Babel is that it doesn't need any extra runtime script to run. Everything is done server-side. You just have to execute a compilation task once, and then deploy the compiled sources. By contrast, Traceur needs to embed such a script, bringing extra overhead. Yet, this should be nuanced, as we need a polyfill even for Babel (core-js) for some missing browser methods, like Array.from.

Another issue is that Traceur is not compatible with React.js. This is not a big deal in this case, but as we also use the Facebook framework at marmelab, let's accumulate knowledge on a single technology. He who can do more can do less. And, icing on the cake, Babel has an online REPL if you want to quickly give it a try.

How to use Babel to transpile your code?
Turning your ES6 code into ES5 JavaScript

First, you need to install Babel:

npm install babel --save-dev

Then, let's consider the following simple class:

class View {
    constructor(name) {
        this._name = name;
    }

    name(name) {
        if (!arguments.length) return this._name;
        this._name = name;
        return this;
    }
}

export default View;

Compiling it into pure ES5 JavaScript is as simple as the following command:

babel View.js

By default it will write the result to standard output. You can of course redirect it to a file using the standard > operator:

babel View.js > build/View.js

Babel modules

The previous example had no dependencies. Yet, what would happen if we had to import another class? Let's experiment right now:

import View from "./View";

class ListView extends View {
    constructor(name) {
        super(name);
        this._type = "ListView";
    }
}

Compile these two classes using the commands:

babel View.js > build/View.js
babel ListView.js > build/ListView.js

If you open the build/ListView.js file, you will see some calls to a require function:

var View = _interopRequire(require("./View"));

On ng-admin, we use requirejs to load our dependencies. So, I first thought I just had to embed requirejs and that everything would work out of the box. Unfortunately, after several hours of debugging, I learnt it wasn't the case. Indeed, this require call is not the same as the require from requirejs. While the former is related to CommonJS, requirejs uses the AMD standard.

AMD? CommonJS? UMD?

I have always been confused about all these standards. I took advantage of this project to clarify them. Once you dig into the topic, it is simple: all of these standards aim to simplify the development of modular JavaScript.

Asynchronous Module Definition (AMD) is the requirejs module format. It targets browsers only, and is supposed to simplify front-end development (even if I have rarely found a library more difficult to configure).
AMD modules are defined through the define function, such as:

define(['dependencyA', 'dependencyB'], function(dependencyA, dependencyB) {
    return {
        doSomething: dependencyA.foo() + dependencyB.foo()
    };
});

CommonJS is based on the Node.js module definition. It is not compatible with requirejs, but has been brought to front-end developers thanks to libraries such as browserify or webpack. Our previous AMD module would look like the following in CommonJS:

var dependencyA = require("dependencyA");
var dependencyB = require("dependencyB");

module.exports = {
    doSomething: dependencyA.foo() + dependencyB.foo(),
};

Finally, as neither AMD nor CommonJS succeeded in standing out from the crowd, another attempt at standardization emerged: Universal Module Definition (UMD). It has been built to be compatible with both of them.

(function(root, factory) {
    if (typeof define === "function" && define.amd) {
        // AMD
        define(["dependencyA", "dependencyB"], factory);
    } else if (typeof exports === "object") {
        // Node, CommonJS-like
        module.exports = factory(require("dependencyA"), require("dependencyB"));
    } else {
        // Browser globals (root is window)
        root.returnExports = factory(root.dependencyA, root.dependencyB);
    }
})(this, function(dependencyA, dependencyB) {
    return {
        doSomething: dependencyA.foo() + dependencyB.foo()
    };
});

Really ugly and verbose, isn't it?

Using Babel with requirejs

So, we need AMD to be able to use requirejs with Babel. Yet, Babel exports to the CommonJS format by default. Fortunately, we can pass it a --modules option:

babel --modules amd ListView.js

Checking the generated output shows that we are now compliant with requirejs:

define(["exports", "module", "./View"], function(exports, module, _View) {
    // ...
});

Testing with Babel

ng-admin uses two testing frameworks: Karma and Mocha. Here is how to configure them.
Karma

For Karma, we just have to install an extra package:

npm install --save-dev karma-babel-preprocessor

Then, update your karma.conf.js configuration file:

config.set({
    // ...
    plugins: [/* ... */ "karma-babel-preprocessor"],
    preprocessors: {
        "ng-admin/es6/lib/**/*.js": "babel",
    },
    babelPreprocessor: {
        options: {
            modules: "amd",
        },
    },
});

We add the freshly installed plug-in, specifying that it should transpile to AMD modules. We also have to specify which files should be transpiled, via the preprocessors option.

Mocha

For Mocha, the install process is similar and requires the mocha-traceur plugin:

npm install --save-dev mocha-traceur grunt-mocha-test

We also installed grunt-mocha-test, as we are using Grunt. Then, a little bit of configuration:

// Gruntfile.js
grunt.initConfig({
    mochaTest: {
        test: {
            options: {
                require: "mocha-traceur",
            },
            src: ["src/javascripts/ng-admin/es6/tests/**/*.js"],
        },
    },
});

grunt.loadNpmTasks("grunt-mocha-test"); // enable "grunt mochaTest" command

If you prefer using mocha directly, just specify the compilers option:

mocha --compilers mocha-traceur --recursive src/javascripts/ng-admin/es6/tests/

Now you have the big picture of how we rewrote the ng-admin configuration classes using EcmaScript6. There is still a lot of work to do on ng-admin for a full migration. Don't hesitate to give a helping hand!
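To see the UMD dispatch from earlier in action, here is a small self-contained sketch (the stub dependencies are made up for the illustration, not taken from ng-admin). Run under Node, AMD's define is absent, so the CommonJS branch wins:

```javascript
// Stub dependencies standing in for real modules (illustration only).
var dependencyA = { foo: function() { return "a"; } };
var dependencyB = { foo: function() { return "b"; } };

// The factory builds the module's public API from its dependencies.
function factory(depA, depB) {
    return { doSomething: depA.foo() + depB.foo() };
}

var exported;
if (typeof define === "function" && define.amd) {
    // AMD (requirejs)
    define(["dependencyA", "dependencyB"], factory);
} else if (typeof exports === "object") {
    // Node, CommonJS-like: this branch runs under `node umd-demo.js`
    exported = module.exports = factory(dependencyA, dependencyB);
} else {
    // Browser globals (globalThis is window in a browser)
    exported = globalThis.returnExports = factory(dependencyA, dependencyB);
}

console.log(exported.doSomething); // "ab"
```

The same three-way check is what the verbose UMD wrapper buys you: one file that loads correctly under requirejs, Node, and a plain script tag.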
https://marmelab.com/blog/2015/03/12/transpiling-es6-to-es5-using-babel.html
But? I seem to remember that there were/are a bunch of scripts for importing RSS feeds into iCal and the like.

There appear to be a zillion things called "iCal". Which did you mean? (I've been using this one for years.)

Ah, true enough. I was thinking of Apple's iCal software (though, unless Mac users the world over are fascinated by DNA Lounge's upcoming events, it's unlikely that it accounts for that many hits)... If one wanted to put your RSS feed in iCal I suppose someone could whip up a Tcl-based RSS parser or something :-)

I thought IE was the DEVIL. I use it with NetNewsWire as my reader. However, you don't export enough information to make it really useful. Take a look at the RSS feed that I export for StudioZ.tv... it includes information about the event, links to flyers, etc...

iCal is Apple's crappy implementation of the iCalendar spec. The application is lacking in sooo many ways... it can't have events which span past midnight, making it almost useless for night clubs. =) For StudioZ.tv I also export in that format as well, just to be complete... it was easy to do since everything on the site is run with a database backend that I wrote... I imagine you have something similar...

Ugh, iCalendar! Years ago I got roped into proofreading the English-as-a-second-language RFC for that monstrosity before they submitted it. I had hoped it died in the cradle. *shudder* I've been trying to avoid having to learn about post-0.91 versions of RSS (that namespace stuff smacks of overengineered stupidity, and I really just don't want to know how bad it really is). So, I just changed the calendar summary to include table-ized HTML descriptions. Is it the Done Thing to dump encoded HTML in <description> fields? Or are there (popular) RSS tools that can only deal with plain text? I can just as easily emit plain text, but it'll be uglier.

I think 1.0 and 2.0 of RSS clean things up considerably.
I think dumping HTML in the description field is kind of scary, mainly because any tool that would happily pass along HTML from somewhere else just sounds like a security hazard, never mind leaving an ugly tag open ... I keep meaning to get around to writing up a web-based aggregator that puts the link in a small frame where you get to say if you liked the link much; then the aggregator can aggregate scores and give you recommendations.

If one can't use HTML, one can't do crazy things like, say, putting different links on different words. If one can't put HTML in it, then RSS is basically useless compared to normal web pages. Which is why everyone puts HTML in them for weblog-oriented RSS feeds. I was just wondering whether people did the same for calendar-oriented RSS feeds, having never actually seen one in action before today.

So, we realize that formatting capability is desirable. We also realize that a reader may ditch the HTML. What would Angus Davis do? Graceful degradation. Try to craft your HTML so it still looks pretty with all the tags removed. Good luck.

The thing is, I've heard that some programs (some IRC client someone I was talking to was using) don't even bother stripping the tags. It just blatted the whole raw HTML string into a tooltip. Which is, of course, bogus. But common? Who knows. You know, in Canada they don't lock their doors.

My aggregator just passes HTML. It collapses entries with this Javascriptamijiggy, so to view its output, Mozilla gets to load every freaking image URL referenced in every single feed. (Thanks for having a lame LJ RSS feed full of images for my lame aggregator to suck on, jwz!) It would bug me more, but worrying about computer crap upsets me. Especially if nobody is passing me fat checks for my time. Caveat emptor. Smoke a bowl. Hey, you need any bartender trainees? The espresso machine is starting to bore me.

Yeah... that's what Trillian's aggregator does.
it's annoying as all hell, but at least lets me know when dna is updated. wouldn't want to ask for *too* much functionality now, would we.

I wonder what will happen to all those great plugins and extensions after Moz 1.4 and they start using Firebird. (Incidentally, if someone felt like sending me the JavaScript to make that sidebar trick work in MSIE, that would be cool.)

Just set the target for the href to "_search" and the url pops up in a sidebar. Set targets to "_main" for links you want to come up in the main browser window rather than replace the content in the sidebar. A friend did this with some javascript for browser detection described here.
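The "strip the tags" fallback the commenters debate is trivial to approximate. A naive sketch in Python (a regex strip, fine for trusted feed snippets but not safe against malformed or hostile HTML, where a real parser is the right tool):

```python
import re

def strip_tags(html):
    """Naively remove anything that looks like an HTML/XML tag.

    Good enough for previewing trusted RSS <description> snippets as
    plain text; a real aggregator should use an HTML parser instead.
    """
    return re.sub(r"<[^>]+>", "", html)

print(strip_tags('<a href="http://example.com/">different links</a> on <b>different words</b>'))
# different links on different words
```

The "graceful degradation" advice above amounts to writing markup so that this kind of stripping still leaves readable text behind.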
https://www.jwz.org/blog/2003/06/what-uses-calendar-like-rss/
Robust CSV file import with DictReader and chardet

Wondering how to import a CSV file in Python? I've got you covered! Read on and learn how to do it with DictReader and chardet. As easy as it seems, Python is a language of great opportunities, and mastery comes with a lot of practice. It has a lot of insanely useful libraries, and csv (a member of which is the DictReader class) is definitely one of them. This will be an introductory post, so you don't have to worry about your knowledge of Python. If you're looking for tips on how to start learning Python, we've got something for you as well.

Import CSV file in Python: The absolute basics

You might ask: if it's that easy, then why should I even read this? Well... it's easy, but it might also be a bit confusing because of the amount of options available. Moreover, validating columns and detecting whether the file is a valid CSV file are not built-in functionalities of the csv library, so after the introduction I will describe those as well. As I mentioned earlier, parsing the file is pretty simple:

import csv  # import the csv module

# open example.csv as csv_file and iterate over rows
with open('example.csv') as csv_file:
    reader = csv.DictReader(csv_file, delimiter=';')
    for row in reader:
        print row.get('Col 1') + ' ' + row.get('Col 2')  # print out values of Col 1 and Col 2

... and that's all! Yup, it's that easy. Well, at least the basics. We open the file, read all lines, get columns, profit!
What you might want to do besides that is:

- check whether that really is a CSV file or not
- validate that only the columns you want are there
- check the encoding

Validation

csv has a thing called "Sniffer" that, given a portion of the file, checks whether it is valid or not (besides detecting the dialect, it raises an exception when parsing an invalid file, and that's probably what you're looking for):

import csv

with open('example.csv') as csv_file:
    try:
        # take a 1024B (max) portion of the file and try to get the Dialect
        csv.Sniffer().sniff(csv_file.read(1024))
        csv_file.seek(0)
    except csv.Error:
        print 'I did not expect the Spanish Inquisition!'

If you want to check whether the columns provided are what you expect, it gets a tiny bit trickier. Firstly, provide the fieldnames parameter for the DictReader instance (they will be used as keys for the dictionary):

import csv

with open('example.csv') as csv_file:
    reader = csv.DictReader(csv_file, delimiter=';', fieldnames=['Col 1', 'Col 2'])
    for row in reader:
        print row.get('Col 1') + ' ' + row.get('Col 2')

Running that example you'll notice that the first row — containing the header — is printed as well; in general we don't want that to happen. Let's change this:

import csv

with open('example.csv') as csv_file:
    is_first_row = True
    reader = csv.DictReader(csv_file, delimiter=';', fieldnames=['Col 1', 'Col 2'])
    for row in reader:
        if is_first_row:
            is_first_row = False
            continue  # skip the header row
        print row.get('Col 1') + ' ' + row.get('Col 2')

Now that we skipped the header, let's get back to it to check whether it has the column set we want. Before we do it, I'll define a small helper function:

from itertools import chain

def flatten_list(nested_list):
    return list(chain(*[item if isinstance(item, list) else [item] for item in nested_list]))

Don't worry, it's not as complicated as it seems to be. We take each element of the list (item (...)
for item in nested_list), check if it's a list, and if it isn't, we make a list out of it; then we join all lists into a single one. That'll give us a nice flattened list (i.e. for [1,2,[3,4]] we'll get [1,2,3,4]). Now we can improve our import yet again:

import csv
from itertools import chain

def flatten_list(nested_list):
    return list(chain(*[item if isinstance(item, list) else [item] for item in nested_list]))

with open('example.csv') as csv_file:
    is_first_row = True
    valid_columns = ['Col 1', 'Col 2']
    reader = csv.DictReader(csv_file, delimiter=';', fieldnames=valid_columns)
    for row in reader:
        if is_first_row:
            # if there are columns we don't want, row.values() will return
            # ['Col 1', 'Col 2', ['Col 3', 'Col 4', (...)]]
            current_columns = flatten_list(row.values())
            # compare sets, because when comparing arrays the order is important as well
            if set(valid_columns) != set(current_columns):
                print 'This is not the file I expected! I quit!'
                break
            is_first_row = False
            continue
        print row.get('Col 1') + ' ' + row.get('Col 2')

Encoding detection

As you probably know already, text file content can be represented using different encodings, for example UTF-8, windows-1250, iso-8859-2, etc. In some cases we want to detect that encoding and decode strings so that we can parse them the way we want. To do that, we'll use the chardet library (it isn't available by default, so you need to use either pip or easy_install to get it). Doing it is (again) pretty easy; what you need to do is read the file content and then pass it to chardet.detect():

import chardet

csv_file_raw = csv_file.read()
encoding = chardet.detect(csv_file_raw)['encoding']
if not encoding:
    print 'No encoding found for the file! Is it valid?'
if 'UTF' in encoding:
    encoding = encoding.replace('-', '').lower()

The last two lines (replacing '-' if the encoding is UTF-* and then making it lowercase) are necessary if you want to decode a string using that information.
To do that, I’ll define the last helper function: def string_to_utf8(string, source_encoding): if source_encoding == 'utf8': return string else: return string.decode(source_encoding).encode("utf8") Here we assume the target encoding is utf8 (UTF-8), and if the string isn’t in that format, we simply decode it and encode again using UTF-8. Now let’s put together all things we discussed here: import csv import chardet from itertools import chain def string_to_utf8(string, source_encoding): if source_encoding == 'utf8': return string else: return string.decode(source_encoding).encode("utf8") def string_list_to_utf8(string_list): return [string_to_utf8(element) for element in string_list] def flatten_list(nested_list): return list(chain(*[item if isinstance(item, list) else [item] for item in nested_list])) with open('example.csv') as csv_file: csv_file_content = csv_file.read() encoding = chardet.detect(csv_file_content)['encoding'] if not encoding: print 'No encoding found for the file! Is it valid?' if 'UTF' in encoding: encoding = encoding.replace('-').lower() is_first_row = True valid_columns = ['Col 1', 'Col 2'] reader = csv.DictReader(csv_file, delimiter=';', fieldnames=valid_columns) for row in reader: if is_first_row: current_columns = string_list_to_utf8(flatten_list(row.values())) if set(valid_columns) != set(current_columns): print 'This is not the file I expected! I quit!' break is_first_row = False continue print string_to_utf8(row.get('Col 1')) + ' ' + string_to_utf8(row.get('Col 2')) As you can see, I added (now really the last) helper function, that changes encoding of columns we want to validate to utf8 to rule out the possibility of our code crashing on the string compare (if valid columns are unicode strings). And that’s really all there is to import CSV file in Python! Isn’t that awesome? Summary As you can see, Python is a great language that enables you to solve problems faster and at a lower initial cost. 
I hope my code helps you in one way or another as much as it helped me while working on one of our projects. Have a nice day!
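For readers on Python 3, where the Python 2 code above won't run as-is, the same import-and-validate flow gets much shorter: the csv module works on text streams, DictReader yields real dicts of str, and the header can be checked via reader.fieldnames instead of the manual first-row bookkeeping. A minimal sketch, assuming the same semicolon-delimited layout with columns Col 1 and Col 2:

```python
import csv
import io

EXPECTED = ["Col 1", "Col 2"]

def read_rows(text_stream, expected=EXPECTED):
    """Parse a semicolon-delimited CSV stream, validating the header row."""
    reader = csv.DictReader(text_stream, delimiter=";")
    # Without an explicit fieldnames argument, DictReader consumes the
    # first row as the header and exposes it as reader.fieldnames.
    if reader.fieldnames is None or set(reader.fieldnames) != set(expected):
        raise ValueError("Unexpected columns: %r" % (reader.fieldnames,))
    return list(reader)

# Usage with an in-memory file; a file on disk would be opened with
# open(path, newline="", encoding=...), using chardet as shown earlier
# if the encoding is unknown.
sample = io.StringIO("Col 1;Col 2\na;b\nc;d\n")
for row in read_rows(sample):
    print(row["Col 1"], row["Col 2"])
```

The encoding helpers disappear entirely on Python 3, since decoding happens once at open() time rather than per string.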
https://apptension.com/blog/2018/05/07/import-csv-file-in-python/
Perl 6 small stuff #16: All your base are belong to us

It's the second week of the Perl Weekly Challenge, and like last week we've got two assignments — one "beginner" and one "advanced". The advanced assignment this time was: "Write a script that can convert integers to and from a base35 representation, using the characters 0–9 and A-Y."

Even though this is a blog mainly about Perl 6, I thought it'd be fun to start with my Perl 5 solutions to the advanced assignment, just so it's even easier to appreciate the simplicity of the Perl 6 solution... although not, as you will see, without some discussion.

PERL 5

# Convert from base35 to base10
perl -E '%d = map { $_ => $c++ } (0..9,A..Y); $i = 1; for (reverse(split("", @ARGV[0]))) { $e += $i * $d{$_}; $i = $i * 35; } say $e' 1M5
# Output: 2000

# Convert from base10 to base35
perl -E '%d = map { $c++ => $_ } (0..9,A..Y); while ($ARGV[0] > 0) { push @n, $d{$ARGV[0] % 35}; $ARGV[0] = int($ARGV[0] / 35); } say join("", reverse(@n));' 2000
# Output: 1M5

So these are working one-liners but hardly readable ones. They also violate a lot of best practices. So I expanded them into a full script that's easier to reuse and understand, with added strict and error handling as well as support for positive/negative (+ and -) prefixes.

#!/usr/bin/env perl
#
# Usage:
#   perl base35.pl [+-]NUMBER FROM-BASE, e.g.
#
#   perl base35.pl 1000 10
#   Output: SK
#
#   perl base35.pl SK 35
#   Output: 1000
#
#   perl base35.pl -SK 35
#   Output: -1000

use v5.18;

say base35_conv(@ARGV);

sub base35_conv {
    my ($no, $base) = (uc(shift), shift);

    if ($base != 10 && $base != 35) {
        warn "Not a valid base, must be 10 or 35";
        return -1;
    }

    if (($base == 35 && $no !~ /^[\+\-]{0,1}[0-9A-Y]+$/) ||
        ($base == 10 && $no !~ /^[\+\-]{0,1}[0-9]+$/)) {
        warn "You have to provide a valid number for the given base";
        return -1;
    }

    my ($c, $e) = (0, 0);
    my $prefix = $no =~ s/^(\+|-)// ?
$1 : "";

    my %d = map {
        if ($base == 35) { $_ => $c++ } else { $c++ => $_ }
    } (0..9, 'A'..'Y');

    if ($base == 35) {
        my $i = 1;
        for (reverse(split("", $no))) {
            $e += $i * $d{$_};
            $i = $i * 35;
        }
    }
    else {
        my @digits;
        while ($no > 0) {
            push @digits, $d{$no % 35};
            $no = int($no / 35);
        }
        $e = join("", reverse(@digits));
    }

    return ($prefix ? $prefix : "") . $e;
}

There's really not much to comment on in the code above. It works and is reasonably readable. It's quite long, however, and that's where Perl 6 comes in and destroys it.

PERL 6

# Convert from base35 to base10
perl6 -e 'say "1M5".parse-base(35)'
# Output: 2000

# Convert from base10 to base35
perl6 -e 'say 2000.base(35)'
# Output: 1M5

At this point you're allowed to stop for a second and appreciate the simplicity of Perl 6. But: since these are built-in functions in Perl 6, this wasn't — in my opinion — the best Perl 6 assignment. I guess the point of the assignment is to write a solution from scratch — had I solved the Perl 5 version of the assignment by using a ready-made CPAN module such as Math::Int2Base, I'd feel that I cheated. Maybe that's just me?

As for the "beginner" assignment this time — "Write a script or one-liner to remove leading zeros from positive numbers" — my Perl 6 and Perl 5 solutions are identical:

perl -E 'say "001000"*1;'
perl6 -e 'say "001000"*1;'
# Both output: 1000

Although the assignment wants a script that removes leading zeroes from positive numbers, this will just as easily remove them from negative numbers as well. These will also work on floating point numbers:

perl -E 'say "-001000"*1;'
perl6 -e 'say "-001000"*1;'
# Both output: -1000

perl -E 'say "001000.234"*1;'
perl6 -e 'say "001000.234"*1;'
# Both output: 1000.234

You can take it one step further with Perl 6, though.
Should you for some reason — and I'm not able to think of a good one, to be honest — want to do the same on a fraction, this is the way to do it:

perl6 -e 'say "003/4".Rat.nude.join("/");'
# Output: 3/4

.nude returns a two-element list with the numerator and denominator, so that we can choose how to represent it (a naive say "3/4"*1; would print 0.75 and would therefore not be a satisfying solution, considering how the assignment is specified).

So that's it for now. It may sound a little silly to write this in a Perl 6 centric blog, but what made the assignment interesting this week was Perl 5. I look forward to next week's assignment already.
https://medium.com/@jcoterhals/perl-6-small-stuff-16-all-your-base-are-belong-to-us-266763713d64
Python 3.0 Released

licorna writes "The 3.0 version of Python (also known as Python3k and Python3000) just got released a few hours ago. It's the first ever intentionally backwards-incompatible Python release."

good (Score:4, Funny)
Previous releases were incompatible with earlier ones unintentionally.

No mac version yet? (Score:3, Funny)
Where's the mac version..?

Re:woohoo (Score:5, Funny)
But I just came in here for an argument!

from __future__ import braces (Score:5, Funny)

And now to wait (Score:5, Funny)
Nope. Python 3.11 for Workgroups..

I don't know why this story's flagged "endofdays" (Score:5, Funny)
That'll be when Perl 6.0 ships.

Porting? Instantly! (Score:3, Funny)
I heard they're going to use Python 3.0 for the impending from-scratch rewrite of DNF.

Re:woohoo (Score:5, Funny)
No you didn't.

Re:Hey! (Score:3, Funny)

Re:Libraries (Score:3, Funny)
unless the language is in the tail end of its life, like Fortran and Cobol
Thus the phrases "The looooooooooong tail" and "You're ALL tail, baby".

Re:print function (Score:3, Funny)
You seem to want Perl. You can find it at [perl.org]

Re:woohoo (Score:5, Funny)

Re:And now to wait (Score:5, Funny)

Re:That marks my end of use for Python (Score?
No, he generated that comment with Python 2.6 code but ran it with the new release.

Re:woohoo (Score:4, Funny)
An argument isn't just contradiction!

Re:woohoo (Score:5, Funny)
Yes it is.

Re:No mac version yet? (Score:3, Funny)
>You download the .tart.gz or .tar.bz2 source packages and build it.
At last, what the world has been waiting for: a language for bimbos and airheads! :) hawk

Re:No mac version yet? (Score:3, Funny)
WTF are you talking about? Visual Basic has been around since 1991!

Re:And now to wait (Score:1, Funny)
I think you got that wrong. A hammer usually isn't considered very subtle.
Re:woohoo (Score:1, Funny)
Out of interest, why did they decide to calculate 1/2 as float in Python 3?
We got sick of explaining integer math to newbies on the python list each and every single day. So it was decided that if we used '//' for integer division and let '/' do what newbs expect we'd be saving ourselves muchos keystrokes in the long run.

Re:I still won't take Python seriously... (Score:1, Funny)
I'm glad you brought that up, I can't believe that there is not a single other post or discussion thread here regarding whitespace in Python. I've always wondered why, whenever there's a story about Python on Slashdot or Ars or wherever, there's never, ever, ever even one single comment about how Python deals with whitespace. But you, sir, have broken the seal! Bravo.

Re:woohoo (Score:3, Funny)
It's scary to code something while drunk then come back the next day and think "god, whoever wrote this is clever". I don't even need to be drunk! That happens to me regularly ... ah the ageing process.
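The division change described in the first comment is easy to check in an interpreter: in Python 3, / is true division and always yields a float, while // is floor division:

```python
# Python 3 semantics: '/' is true division, '//' is floor division.
print(1 / 2)          # 0.5
print(1 // 2)         # 0
print(7 // 2, 7 % 2)  # 3 1
print(-7 // 2)        # -4  (floor, not truncation toward zero)
```

In Python 2, 1/2 on two ints silently evaluated to 0, which is exactly the newbie confusion the comment jokes about.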
http://developers.slashdot.org/story/08/12/04/0420219/python-30-released/funny-comments
Use JavaFX to quickly create applications

JavaFX + Eclipse IDE = Easy GUI

Overview

JavaFX is a Java-based platform for building Rich Internet Applications (RIAs) that can run on a desktop or on mobile devices. Applications built with JavaFX are Java bytecode-based, so they can run on any desktop with the Java runtime environment or on any mobile device with Java 2 ME installed. JavaFX makes GUI programming very easy; it uses a declarative syntax and provides animation support. In this article, learn how to get started with JavaFX to build RIAs. Download and install the JavaFX SDK, install the JavaFX Eclipse plug-in, and explore some basic features of JavaFX by creating sample applications. Download the source code for the Login Application and the Animated Circle examples used in this article.

Installation

Follow these steps to download and install the JavaFX SDK and the JavaFX Eclipse plug-in.

- Download the latest JavaFX SDK installer file for Windows, which has an ".exe" extension. After the download is complete, double-click the ".exe" to run the installer.
- Complete the steps in the installation wizard. The default installation location for Windows is C:\Program Files\JavaFX\javafx-sdk-version.
- Start the Eclipse IDE. Provide a workspace name, such as C:/workspace/jfx_projects.
- Select Help > Install New Software.
- Click Add in the Install dialog that pops up.
- As shown in Figure 1, enter JavaFX Plugin Site for the Name, and the Location from which the plug-in needs to be installed.
  Figure 1. Add JavaFX Plug-in Site
  Click OK.
- Check the JavaFX feature that needs to be installed, as shown in Figure 2.
  Figure 2. Check the JavaFX feature to be installed
  Click Next.
- The JavaFX feature version is displayed in the Install Details dialog. Click Next.
- Accept the terms of the license agreements and click Finish.
- Upon successful installation of the plug-in, restart the Eclipse workbench when prompted.
If you installed the JavaFX SDK in a non-default location, you might be prompted to set the JAVAFX_HOME variable, as shown in Figure 3. You will also need to create a classpath variable called JAVAFX_HOME if it was not created by the Eclipse plug-in installation. Point it to the JavaFX install location.

Figure 3. Setting the JAVAFX_HOME classpath variable

Creating a Login application

In this section, build a sample JavaFX application to validate users against their passwords and allow them to log in to a system if they can provide the required credentials. Upon successful authorization, the user will see a Welcome screen. If authorization is not successful, a message in the Eclipse Console view will provide the failure details. You'll use the JavaFX Swing components to build the login screen. You can download the source code for the Login application.

- Create a new JavaFX project. Click File > New > Project > JavaFX > JavaFX project, as shown in Figure 4.
  Figure 4. Create a new JavaFX project
  Click Next.
- Enter LoginApp as the Project name. Select the Desktop profile. These selections are shown in Figure 5.
  Figure 5. Configuring the JavaFX project
  Click Finish.
- Create a package called com.sample.login within the LoginApp project.
- Right-click the package and select New > Empty JavaFX Script.
- Provide the name Main, and then click Finish.
- You'll need to declare a few variables for the example application. As shown in Listing 1, you need a Boolean variable called login that maintains the login state of the user (whether or not the last login was successful). Declare the string variable userName so that it holds the user name entered by the user. There's also a hard-coded system user, test, who is the only user that can log in to our application.

  Listing 1. Declaration of global variables

  var login = false;
  var userName = "";
  var systemUser = "test";

- In the Snippets window, select the Applications tab to expand it.
- Select and drag the Stage object to the source editor, as shown in Figure 6. The Stage is the top-level container for holding the user interface JavaFX objects.
  Figure 6. Drag the Stage object onto the editor
- Edit the title to be displayed for the Stage by entering Login App, as shown in Figure 7. Set both the width and the height to 300.
  Figure 7. Configuring the Stage object
  Click Insert, which will add a Scene element to the Stage. The Scene element is like a drawing platform or surface, which is used to render the graphical elements. It has a content variable that holds the child elements.
- Add a javafx.scene.Group element to the Scene with an import statement, as shown in Listing 2. This group will act like a container for the rest of the controls you create.

  Listing 2. Import the group class

  import javafx.scene.Group;

- Add the group element, as shown in Listing 3, to the content element.

  Listing 3. Add the group inside the content

  content: [
      Group {
      }
  ]

- Begin adding child controls to the parent group control. First, add a label by importing the SwingLabel class, as shown in Listing 4.

  Listing 4. Import SwingLabel class

  import javafx.ext.swing.SwingLabel;

  Add the following code to the content element of the group, as shown in Listing 5.

  Listing 5. Add the SwingLabel to the group

  content : [
      SwingLabel {
          text : "User Name :";
      }
  ]

- Add a Text field control that will accept user input. Import the SwingTextField class, as shown in Listing 6.

  Listing 6. Declaration of variables

  import javafx.ext.swing.SwingTextField;

  Add the highlighted code to add the text field, as shown in Listing 7.

  Listing 7. Add the SwingTextField to the group

  SwingLabel {
      text : "User Name :";
  },
  SwingTextField {
      text : bind userName with inverse;
      columns : 10;
      editable : true;
      layoutX : 30;
      layoutY : 20;
      borderless : false;
      selectOnFocus : true;
  }
If the user name matches the system user, then the user successfully logs in to the system. Import the JavaFX SwingButton using the import statement shown in Listing 8.
Listing 8. Import the SwingButton class
import javafx.ext.swing.SwingButton;
Add the code shown in Listing 9 to include the button just below the Text field.
Listing 9. Add the SwingButton to the group
SwingButton {
    translateX: 50
    translateY: 50
    text: "Submit"
    action: function() {
        if (userName != systemUser) {
            println("Invalid UserName");
        }
        login = (userName == systemUser);
    }
}
- The action function in Listing 9 checks whether the userName that was entered is the same as the system user name. If it is not, the example prints an error message. The result of the comparison is then stored in the login Boolean variable. So far you've handled the case where the login fails. You need to use the state of the login variable to advance to the successful login screen. This demands an if-else statement. Add the if-else clause, and in the else clause, first add an empty group with a content object in it. Add the highlighted code, as shown in Listing 10.
Listing 10. Add the if-else clause
content: bind if (login == false) then [
    Group {
        // the login controls built in Listings 3 through 9 go here
    }
] else [
    Group {
        content: [
        ]
    }
]
- Finally, add some text to indicate a successful login message and a Log out button that will return the user to the login screen. Import the Text class, as shown in Listing 11.
Listing 11. Import the Text class
import javafx.scene.text.Text;
Add the code shown in Listing 12 inside the content body of the else clause group element you added earlier.
Listing 12. Add the Text class and SwingButton to the else group
Text {
    x: 10
    y: 30
    content: "You have successfully logged in."
},
SwingButton {
    translateX: 10
    translateY: 50
    text: "Log out"
    action: function() {
        userName = "";
        login = false;
    }
}
The complete code is shown in Listing 13.
Listing 13.
LoginApp example code

package com.sample.login;

import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.Group;
import javafx.scene.text.Text;
import javafx.ext.swing.SwingLabel;
import javafx.ext.swing.SwingTextField;
import javafx.ext.swing.SwingButton;

var login = false;
var userName = "";
var systemUser = "test";

Stage {
    title: "Login App"
    scene: Scene {
        width: 300
        height: 300
        content: bind if (login == false) then [
            Group {
                content: [
                    SwingLabel {
                        text: "User Name :";
                    },
                    SwingTextField {
                        text: bind userName with inverse;
                        columns: 10;
                        editable: true;
                        layoutX: 30;
                        layoutY: 20;
                        borderless: false;
                        selectOnFocus: true;
                    },
                    SwingButton {
                        translateX: 50
                        translateY: 50
                        text: "Submit"
                        action: function() {
                            if (userName != systemUser) {
                                println("Invalid UserName");
                            }
                            login = (userName == systemUser);
                        }
                    }
                ]
            }
        ] else [
            Group {
                content: [
                    Text {
                        x: 10
                        y: 30
                        content: "You have successfully logged in."
                    },
                    SwingButton {
                        translateX: 10
                        translateY: 50
                        text: "Log out"
                        action: function() {
                            userName = "";
                            login = false;
                        }
                    }
                ]
            }
        ]
    }
}

Running the application
In this section, you'll test the example Login application. Save the changes you've made so far.
- Right-click the Main.fx file and select Run As > JavaFX application. Leave the configuration settings at the defaults and click Run. A new window opens with the Login Application, as shown in Figure 8.
Figure 8. Login Application
- Enter abc and click Submit. The login fails, so you can see the error message logged in the console.
- Enter test and click Submit. The system accepts this user name and logs in successfully, as shown in Figure 9.
Figure 9. Successful login
Creating an application to run on a mobile emulator
The LoginApp created above used the Desktop profile. In this section, create an application that uses a Mobile profile and runs on a mobile emulator. This example explores how to create animated graphics. You'll also render a circle that has varying opacity at different time intervals.
- Create a new JavaFX project. Click File > New > Project > JavaFX > JavaFX project.
- Enter the Project name AnimatedCircle, as shown in Figure 10. Select the Mobile profile.
Figure 10. Configuring the AnimatedCircle project
Click Finish.
- Create a new package called com.sample.animation.
- Create a new empty JavaFX Script. Right-click the package and select New > Empty JavaFX Script.
- Enter Main as the Name, and click Finish.
- In the Snippets window, select the Applications tab to expand it.
- Select and drag the Stage object to the source editor.
- Enter Animated Circle as the Title. Leave the rest of the defaults as they are and click Insert.
- In the Snippets window, select the Basic Shapes tab to expand it.
- Select and drag the Circle element to the source editor, inside the content [] element. Enter Color.BLUE as the fill property in the Insert Template dialog box, as shown in Figure 11.
Figure 11. Adding a Circle
Click Insert.
- When adding a Linear Gradient pattern to the circle, you can specify two or more gradient colors. In the Snippets window, click the Gradients tab to expand it.
- Delete Color.BLUE from the fill value, then select and drag the Linear Gradient object to the source editor, as shown in Figure 12.
Figure 12. Adding a Linear Gradient pattern to the circle
- Now run the application to see what has developed so far. Save the changes. Right-click the Main.fx file and select Run As > JavaFX Application. The mobile emulator window will appear, displaying the circle with a linear gradient, as shown in Figure 13.
Figure 13. Animated Circle App running in a mobile emulator
Adding animation support
Add animation support to the circle. The example walks through changing the opacity of the circle at different time intervals. You need a TimeLine that contains KeyFrames. The example has two keyframes: one that varies the opacity of the circle from 0.0 to 0.5 over 5 seconds, and one that varies the opacity from 0.5 to 1.0 in the next interval.
- Define a variable called opacity by adding the code in Listing 14.
Listing 14. Declare global variable opacity
var opacity = 1.0;
- Add a local variable for the circle and bind it to the global variable, as shown in Listing 15.
Listing 15. Bind the circle's opacity property to the global variable
Circle {
    opacity: bind opacity;
    centerX: 100,
    centerY: 100,
    radius: 40,
- Add the TimeLine element.
In the Snippets window, select the Animations tab to expand it. Drag the TimeLine element onto the editor. In the Insert Template dialog box, enter 5s for the time value, as shown in Figure 14.
Figure 14. Adding a TimeLine
Click Insert. Figure 15 shows the code that gets generated after dragging the TimeLine to the editor.
Figure 15. TimeLine added
- Drag the Values element from the Animations tab into the KeyFrame object, after the canSkip attribute. In the Insert Template dialog, enter opacity for the variable value, as shown in Figure 16.
Figure 16. Adding Values to a KeyFrame
Click Insert. In the generated code, shown in Figure 17, change the opacity value to 0.5.
Figure 17. KeyFrame with Values added
- Add another KeyFrame, just below the KeyFrame in the example in Figure 17, with a time variable of 10 seconds and a Values element that changes the opacity to 1.0. The code should look similar to Figure 18.
Figure 18. Timeline with two keyframes
- Finally, play the timeline. Add .play(), as shown in Figure 19.
Figure 19. Playing the TimeLine
- Run the application again to see the animated circle in action.
Summary
In this article, you learned about JavaFX and how to use it to quickly build GUI applications. The examples showed how to build forms using the Swing components. You also explored how to develop graphical applications and add animation support.
Downloadable resources
- PDF of this content
- Login Application Sample Code (LoginApp.zip | 23KB)
- Animated Circle Sample Code (AnimatedCircle.zip | 2KB)
Related topics
- Get all the latest news and other information about JavaFX.
- Browse through the JavaFX Sample Gallery.
- Read JavaFX technical documentation and take tutorials.
- See JavaFX in action on the Vancouver 2010 Olympics site.
- Download JavaFX.
- Download IBM product evaluation versions and get your hands on application development tools and middleware products from DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®.
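Revisiting the animation steps above: the two keyframes define a piecewise-linear opacity curve over time. As a language-neutral illustration (Python here, since JavaFX Script's Timeline performs this interpolation for you), the lookup a timeline does between keyframes can be sketched like this; the keyframe times and values mirror the article's example and are otherwise illustrative:

```python
# Sketch of the piecewise-linear interpolation a timeline performs.
# (time_s, opacity) keyframes, loosely following the article's example:
# opacity reaches 0.5 at the 5s keyframe and 1.0 at the 10s keyframe.
KEYFRAMES = [(0.0, 0.0), (5.0, 0.5), (10.0, 1.0)]

def opacity_at(t: float) -> float:
    """Linearly interpolate opacity between the surrounding keyframes."""
    if t <= KEYFRAMES[0][0]:
        return KEYFRAMES[0][1]          # clamp before the first keyframe
    for (t0, v0), (t1, v1) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return KEYFRAMES[-1][1]             # clamp after the last keyframe

print(opacity_at(2.5))   # halfway through the first keyframe interval
```

Sampling this function at the emulator's frame rate reproduces the fade the Timeline renders.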
https://www.ibm.com/developerworks/web/library/wa-javafxapp/index.html?ca=drs-
import "cmd/go/internal/module" Package module defines the module.Version type along with support code. CanonicalVersion returns the canonical form of the version string v. It is the same as semver.Canonical(v) except that it preserves the special build suffix "+incompatible". Check checks that a given module path, version pair is valid. In addition to the path being a valid module path and the version being a valid semantic version, the two must correspond. For example, the path "yaml/v2" only corresponds to semantic versions beginning with "v2.". CheckFilePath checks whether a slash-separated file path is valid. CheckImportPath checks that an import path is valid. CheckPath checks that a module path is valid. DecodePath returns the module path of the given safe encoding. It fails if the encoding is invalid or encodes an invalid path. DecodeVersion returns the version string for the given safe encoding. It fails if the encoding is invalid or encodes an invalid version. Versions are allowed to be in non-semver form but must be valid file names and not contain exclamation marks. EncodePath returns the safe encoding of the given module path. It fails if the module path is invalid. EncodeVersion returns the safe encoding of the given module version. Versions are allowed to be in non-semver form but must be valid file names and not contain exclamation marks. MatchPathMajor returns a non-nil error if the semantic version v does not match the path major version pathMajor. PathMajorPrefix returns the major-version tag prefix implied by pathMajor. An empty PathMajorPrefix allows either v0 or v1. Note that MatchPathMajor may accept some versions that do not actually begin with this prefix: namely, it accepts a 'v0.0.0-' prefix for a '.v1' pathMajor, even though that pathMajor implies 'v1' tagging. Sort sorts the list by Path, breaking ties by comparing Versions. 
SplitPathVersion returns prefix and major version such that prefix+pathMajor == path and version is either empty or "/vN" for N >= 2. As a special case, gopkg.in paths are recognized directly; they require ".vN" instead of "/vN", and for all N, not just N >= 2.
VersionError returns a ModuleError derived from a Version and error.
An InvalidVersionError indicates an error specific to a version, with the module path unknown or specified externally. A ModuleError may wrap an InvalidVersionError, but an InvalidVersionError must not wrap a ModuleError.
func (e *InvalidVersionError) Error() string
func (e *InvalidVersionError) Unwrap() error
A ModuleError indicates an error specific to a module.
func (e *ModuleError) Error() string
func (e *ModuleError) Unwrap() error
type Version struct {
	Path string

	// Version is usually a semantic version in canonical form.
	// There are two exceptions to this general rule.
	// First, the top-level target of a build has no specific version
	// and uses Version = "".
	// Second, during MVS calculations the version "none" is used
	// to represent the decision to take no version of a given module.
	Version string `json:",omitempty"`
}
A Version is defined by a module path and version pair.
Package module imports 7 packages and is imported by 26 packages. Updated 2019-09-05.
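As an aside, the "safe encoding" that EncodePath and DecodePath implement maps each uppercase letter to '!' followed by its lowercase form, so module paths cannot collide on case-insensitive file systems. A rough Python sketch of that scheme, for illustration only (the real implementation also validates paths and handles more cases):

```python
# Illustrative sketch of Go's module-path "safe encoding":
# each uppercase letter becomes '!' + its lowercase form; a literal '!'
# is not permitted in module paths, which keeps the mapping reversible.
def encode_path(path: str) -> str:
    out = []
    for c in path:
        out.append("!" + c.lower() if c.isupper() else c)
    return "".join(out)

def decode_path(enc: str) -> str:
    out, i = [], 0
    while i < len(enc):
        if enc[i] == "!":
            out.append(enc[i + 1].upper())  # undo the case escape
            i += 2
        else:
            out.append(enc[i])
            i += 1
    return "".join(out)

print(encode_path("github.com/Azure/azure-sdk"))  # github.com/!azure/azure-sdk
```

This is why a module cache can store paths for both "github.com/Azure" and a hypothetical "github.com/azure" without one overwriting the other.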
https://godoc.org/cmd/go/internal/module
- How to use JAR file in Java: ...to digitally sign the JAR file. Users who want to use the secured file can check... do not use JAR file, then they will have to download every single file one... on any platform. All operating systems support Java. JAR file works as a normal Zip...
- jar file: how to create a jar file in java
- JAR FILE: WHAT IS JAR FILE, EXPLAIN IN DETAIL? A JAR file... link: Jar File Explanation
- Java Execute Jar file: ...operating system or platform. To execute a JAR file, one must have Java... file type java -jar [Jar file Name] in the command prompt. To execute... JAR stands for Java ARchive and is platform independent. By making all...
- Changing Executable Jar File Icon: Hi, you may use JSmooth to create an executable java file and also associate an icon... I have created an executable jar file for my java program and the icon that appears is the java icon. I would like to change it.
- change jar file icon - Java Beginners: How to create or change a jar file icon? Hi friend, the basic format of the command for creating a JAR file... JAR file to have. You can use any filename for a JAR file. By convention, JAR...
- Creating a JAR file in Java: ...by the JDK (Java Development Kit). Here, you can learn how to use the jar command... This section walks you through creating a jar file.
- Only change jar file icon - Java Beginners: Dear friend, I know how to create a jar file but I don't know how to change the jar file icon. I can change a .exe file... that you want the resulting JAR file to have. You can use any filename for a JAR file. By convention, JAR...
- jar file - Java Beginners: ...an application packaged as a JAR file (requires the Main-class manifest header): java -jar... When creating a jar file, it requires a manifest file. What is the manifest file, and how do I create it? And tell me more about it.
- Java FTP jar: Which Java FTP jar should be used in a Java program for uploading files to an FTP server? Thanks. Hi, you should use commons-net-3.2.jar in your Java project. Read more at FTP File Upload in Java.
- where located mysql jar file - Java Beginners: plz, anyone, help me with how this can work... I have jdk 1.4, tomcat 4.0, mysql 5.0 and eclipse 3.1; plz guide me on how I can use eclipse with jsp; give brief steps to my personal email id, it's urgent... a single executable jar of it... so pls tell me how it will be possible for me??? The given code creates a jar file using java. import java.io.*; import..
- Creating JAR File - Java Beginners: ...in letting me know how to create a JAR file from my JAVA source...(); } out.close(); fout.close(); System.out.println("Jar File... main(String[] args){ CreateJar jar = new CreateJar(); File folder = new..., which says, Failed to load Main-Class manifest attribute from H:\Stuff\NIIT\Java... (other folders as well). I would like you to please advise me as to...
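Tying the threads above together: a JAR is simply a ZIP archive that contains a META-INF/MANIFEST.MF entry, and the Main-Class manifest header is what java -jar reads to find the entry point. As a language-neutral illustration (Python's stdlib zipfile standing in for the jar tool; all names below are invented), the structure looks like this:

```python
import io
import zipfile

# Build a minimal JAR in memory: a manifest plus one placeholder class
# entry. `java -jar app.jar` reads the Main-Class header to find main().
manifest = "Manifest-Version: 1.0\r\nMain-Class: com.example.Main\r\n\r\n"

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF", manifest)
    # Not real bytecode; just a stand-in for a compiled .class file.
    jar.writestr("com/example/Main.class", b"\xca\xfe\xba\xbe")

with zipfile.ZipFile(buf) as jar:
    names = jar.namelist()
    text = jar.read("META-INF/MANIFEST.MF").decode()

print(names)
```

The "Failed to load Main-Class manifest attribute" error quoted above is exactly what appears when this header is missing from the manifest.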
http://roseindia.net/discussion/48865-How-to-use-JAR-file-in-Java.html
Issues ZF-5238: Feed protocol versions are not taken into consideration when looking up XML namespaces Description Changes were recently introduced to Zend_Gdata_FeedEntryParent to support associating a protocol version with a feed and/or entry. However, this protocol version is not taken into account when looking up XML namespaces during object instantiation. As a result, elements using a v2 namespace will be incorrectly stored as extension elements. This is blocking development of features for YouTube v2. Posted by Trevor Johns (tjohns) on 2008-12-12T19:50:10.000+0000 Uploaded patch for review: Posted by Trevor Johns (tjohns) on 2008-12-15T15:57:30.000+0000 Committed to trunk as r13290. Posted by Trevor Johns (tjohns) on 2008-12-15T16:00:46.000+0000 Merged to release-1.7 as r13289. Marking as fixed for next mini release. Posted by Trevor Johns (tjohns) on 2008-12-19T11:27:32.000+0000 This hadn't been merged to release-1.7 as I had expected -- r13289 is actually for ZF-5186. I've properly merged this to release-1.7 in r13379.
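The fix described above amounts to keying namespace lookups on the protocol version as well as the prefix, so a v2 element resolves to the v2 namespace instead of being stored as a generic extension element. A hypothetical Python sketch of that idea (the names and URIs here are invented for illustration and are not Zend Framework's API):

```python
# Hypothetical version-aware namespace registry: lookups consider the
# feed's protocol version and fall back to the highest registered
# version that does not exceed it.
REGISTRY = {
    "yt": {1: "urn:example:yt-ns-v1", 2: "urn:example:yt-ns-v2"},
    "gd": {1: "urn:example:gd-ns-v1"},
}

def lookup(prefix: str, major_version: int) -> str:
    versions = REGISTRY[prefix]
    usable = [v for v in versions if v <= major_version]
    if not usable:
        raise KeyError(f"no {prefix!r} namespace registered for v{major_version}")
    return versions[max(usable)]

print(lookup("yt", 2))   # resolves to the v2 registration
print(lookup("gd", 2))   # no v2 entry, so falls back to v1
```

Without the version key, every prefix would resolve to a single URI regardless of the feed's protocol version, which is the behavior the issue reports.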
http://framework.zend.com/issues/browse/ZF-5238
#include <CodeComplete.h>
Definition at line 45 of file CodeComplete.h.
Whether to use the clang parser, or fall back to text-based completion (using identifiers in the current file and symbol indexes). Definition at line 111 of file CodeComplete.h.
Model to use for ranking code completion candidates. Definition at line 133 of file CodeComplete.h.
Definition at line 71 of file CodeComplete.h.
Returns options that can be passed to clang's completion engine. Definition at line 1810 of file CodeComplete.cpp.
Whether to include index symbols that are not defined in the scopes visible from the code completion point. This applies in contexts without explicit scope qualifiers. Such completions can insert scope qualifiers. Definition at line 107 of file CodeComplete.h.
Combine overloads into a single completion item where possible. If none, the implementation may choose an appropriate behavior. (In practice, ClangdLSPServer enables bundling if the client claims to support signature help.) Definition at line 62 of file CodeComplete.h.
Weight for combining NameMatch and Prediction of DecisionForest. CompletionScore is NameMatch * pow(Base, Prediction). The optimal value of Base largely depends on the semantics of the model and prediction score (e.g. algorithm used during training, number of trees, etc.). Usually, if the range of Prediction is [-20, 20], then a Base in [1.2, 1.7] works fine. Semantics: e.g. for Base = 1.3, if the Prediction score drops by 2.6 points, then the completion score is reduced by 50%, or 1.3^(-2.6). Definition at line 153 of file CodeComplete.h.
Callback used to score a CompletionCandidate if the DecisionForest ranking model is enabled. This allows us to inject experimental models and compare them with the baseline model using A/B testing. Definition at line 144 of file CodeComplete.h.
Whether to present doc comments as plain-text or markdown. Definition at line 69 of file CodeComplete.h.
Whether to generate snippets for function arguments on code-completion.
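The DecisionForestBase semantics described above are easy to check numerically. A small Python sketch of the score combination (the function name is illustrative, not clangd's API):

```python
# Sketch of the documented score combination:
# CompletionScore = NameMatch * Base ** Prediction.
def completion_score(name_match: float, prediction: float, base: float = 1.3) -> float:
    return name_match * base ** prediction

s1 = completion_score(0.8, 0.0)     # neutral prediction
s2 = completion_score(0.8, -2.6)    # prediction 2.6 points lower
ratio = s2 / s1                     # 1.3 ** -2.6, roughly a 50% reduction
print(round(ratio, 3))
```

This confirms the documented rule of thumb: with Base = 1.3, a 2.6-point drop in the forest's prediction roughly halves the final completion score.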
Needs snippets to be enabled as well. Definition at line 100 of file CodeComplete.h.
When true, completion items will contain expandable code snippets in completion (e.g. `return ${1:expression}` or `foo(${1:int a}, ${2:int b})`). Definition at line 52 of file CodeComplete.h.
Include completions that require small corrections, e.g. change '.' to '->' on member access, etc. Definition at line 96 of file CodeComplete.h.
Include results that are not legal completions in the current context. For example, private members are usually inaccessible. Definition at line 56 of file CodeComplete.h.
If Index is set, it is used to augment the code completion results. FIXME(ioeric): we might want a better way to pass the index around inside clangd. Definition at line 91 of file CodeComplete.h.
Limit the number of results returned (0 means no limit). If more results are available, we set CompletionList.isIncomplete. Definition at line 66 of file CodeComplete.h.
Definition at line 93 of file CodeComplete.h.
Callback invoked on every CompletionCandidate after they are scored and before they are ranked (by -Score), so the results are yielded in arbitrary order. This callback allows capturing various internal structures used by clangd during code completion, e.g. symbol quality and relevance signals. Definition at line 130 of file CodeComplete.h.
Expose origins of completion items in the label (for debugging). Definition at line 84 of file CodeComplete.h.
https://clang.llvm.org/extra/doxygen/structclang_1_1clangd_1_1CodeCompleteOptions.html
To extend my RESTful API with GPS locations I decided to try GeoAlchemy. I already have a database going and I think it saves the points to my database already. However, every time I try to print a point that I saved (for instance, to return it to the user) I get a memory address or something like this:

<WKBElement at 0x7ffad5310110; '0101000000fe47a643a7f3494049f4328ae5d61140'>

POINT(40.5563 30.5567)

That would be the Well-Known Binary (WKB) format; you can use geoalchemy2.functions.ST_AsText to convert it to the WKT text format. The conversion happens in the database itself, so you apply it to your query to ask for the results in WKT instead of WKB; that is, in your SQLAlchemy query you select Model.column.ST_AsText() or ST_AsText(Model.column).

For off-database conversions between WKT and WKB, you can use the shapely module. Note that the wkb functions need binary, not hex:

from shapely import wkb, wkt
from binascii import unhexlify

>>> binary = unhexlify(b'0101000000fe47a643a7f3494049f4328ae5d61140')
>>> binary
b'\x01\x01\x00\x00\x00\xfeG\xa6C\xa7\xf3I@I\xf42\x8a\xe5\xd6\x11@'
>>> point = wkb.loads(binary)
>>> point.x, point.y
(51.903542, 4.45986)
>>> wkt.dumps(point)
'POINT (51.9035420000000016 4.4598599999999999)'
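If you just want to see what that hex string encodes, no GIS library is needed: a WKB point is a byte-order flag, a 4-byte geometry-type code, and two doubles. A stdlib-only sketch that decodes the exact value above:

```python
import struct
from binascii import unhexlify

# The WKB hex from the question/answer above.
wkb_hex = "0101000000fe47a643a7f3494049f4328ae5d61140"
raw = unhexlify(wkb_hex)

byte_order = raw[0]                       # 1 = little-endian
endian = "<" if byte_order == 1 else ">"
(geom_type,) = struct.unpack(endian + "I", raw[1:5])  # 1 = Point
x, y = struct.unpack(endian + "dd", raw[5:21])        # two 8-byte doubles

print(geom_type, round(x, 6), round(y, 5))
```

This reproduces the shapely output shown in the answer: geometry type 1 (a point) with coordinates (51.903542, 4.45986).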
https://codedump.io/share/3W4lE4Uvr4Di/1/representing-coordinates-in-geoalchemy2
All locking and unlocking of mutexes should be performed in the same module and at the same level of abstraction. Failure to follow this recommendation can lead to some lock or unlock operations not being executed by the multithreaded program as designed, eventually resulting in deadlock, race conditions, or other security vulnerabilities, depending on the mutex type. A common consequence of improper locking is for a mutex to be unlocked twice, via two calls to mtx_unlock(). This can cause the unlock operation to return errors. In the case of recursive mutexes, an error is returned only if the lock count is 0 (making the mutex available to other threads) and a call to mtx_unlock() is made.

Noncompliant Code Example

In this noncompliant code example for a simplified multithreaded banking system, imagine an account with a required minimum balance. The code would need to verify that all debit transactions are allowable. Suppose a call is made to debit() asking to withdraw funds that would bring account_balance below MIN_BALANCE, which would result in two calls to mtx_unlock(). In this example, because the mutex is defined statically, the mutex type is implementation-defined.

#include <threads.h>

enum { MIN_BALANCE = 50 };
int account_balance;
mtx_t mp;

/* Initialize mp */

int verify_balance(int amount) {
  if (account_balance - amount < MIN_BALANCE) {
    /* Handle error condition */
    if (mtx_unlock(&mp) == thrd_error) {
      /* Handle error */
    }
    return -1;
  }
  return 0;
}

void debit(int amount) {
  if (mtx_lock(&mp) == thrd_error) {
    /* Handle error */
  }
  if (verify_balance(amount) == -1) {
    if (mtx_unlock(&mp) == thrd_error) {
      /* Handle error */
    }
    return;
  }
  account_balance -= amount;
  if (mtx_unlock(&mp) == thrd_error) {
    /* Handle error */
  }
}

Compliant Solution

This compliant solution unlocks the mutex only in the same module and at the same level of abstraction at which it is locked. This technique ensures that the code will not attempt to unlock the mutex twice.
#include <threads.h>

enum { MIN_BALANCE = 50 };
static int account_balance;
static mtx_t mp;

/* Initialize mp */

static int verify_balance(int amount) {
  if (account_balance - amount < MIN_BALANCE) {
    return -1;  /* Indicate error to caller */
  }
  return 0;  /* Indicate success to caller */
}

int debit(int amount) {
  if (mtx_lock(&mp) == thrd_error) {
    return -1;  /* Indicate error to caller */
  }
  if (verify_balance(amount)) {
    mtx_unlock(&mp);
    return -1;  /* Indicate error to caller */
  }
  account_balance -= amount;
  if (mtx_unlock(&mp) == thrd_error) {
    return -1;  /* Indicate error to caller */
  }
  return 0;  /* Indicate success */
}

Risk Assessment
Improper use of mutexes can result in denial-of-service attacks or the unexpected termination of a multithreaded program.

12 Comments

David Svoboda:
Good rule so far. Comments:

Unknown User (krishant):
Summary of the changes made: Is POS47-C really an exception? The CCE for that rule follows this recommendation, even though the NCCEs do not.

David Svoboda:
I think it is, so I added the exception. Everything else looks good.

Martin Sebor:
A few comments: The volatile qualifier in the examples is unnecessary. I suggest removing it for the sake of clarity (see also POS03-C. Do not use volatile as a synchronization primitive and DCL34-C. Use volatile for data that cannot be cached).

David Svoboda:
Martin, I made all the changes you suggested, except for the following: The two main hazards with locking bugs are deadlocks and race conditions (AKA data races), which the locks are designed to prevent. I don't have any particular details of worse things that can happen. (I expect a double unlock might cause an out-of-bounds read or a null pointer dereference on some platform.) I think we've been using an external table to map CWE references and CERT rules, so this association belongs on the wiki, but not necessarily here.

Martin Sebor:
Great!
I suspect you might be right about double unlock potentially having the same effects as double free, even though I don't see anything to support that hypothesis in the Solaris mutex_unlock() code or in glibc pthread_mutex_unlock(). Are you fine with renaming the practice?

David Svoboda:
Yes, I adjusted the title as you suggested. BTW, double free() is considerably worse than I was suggesting... in the right circumstances, a double free() can permit an attacker to run shellcode. I don't see how that is possible with a double unlock. AFAICT a double unlock is undefined behavior, so what happens next is up to the implementation. I just suspect it might cause an out-of-bounds read or null pointer dereference, which might lead to a program crash, which is much less harmful than shellcode. EDIT: I suppose a double unlock could lead to executable shellcode if it caused a double free. I still think it's very unlikely.

Martin Sebor:
Looks good, thanks! Re: double unlock and double free, I was thinking that if pthread_mutex_unlock() resulted in unlinking the mutex from a linked list, as it seems to in the Solaris implementation (see the call to queue_lock() in mutex_wakeup()), then it could have the same effect as double free (i.e., writing arbitrary values to arbitrary memory). It does seem like it would be pretty hard to control though. It might be fun to try to produce an exploit – if only I had a few weeks of free time on my hands...

Santiago Urueña:
I think the verify_balance function of the compliant solution is not OK, because it is reading the account_balance global variable without first acquiring the mutex. POSIX avoids race conditions by preventing memory conflicts (i.e., two threads cannot access the same memory location at the same time when at least one access is a write), so just locking the mutex from the writer thread is not enough: the reader must also lock the mutex before reading the shared variable to ensure this thread does not read it while it is being modified by another thread (the read is not guaranteed to be an atomic operation).

Martin Sebor:
I think the intent is that verify_balance() is an implementation function that can only be called from debit() when the mutex is locked. I tried to make it clearer by declaring the function static. However, there was a race condition in the compliant solution in assigning to the global variable ret. Since the variable isn't necessary to demonstrate the problem or the solution, I removed it.

Santiago Urueña:
Thanks for the clarification, I thought verify_balance() was meant to be called concurrently. No data race is possible then. Good idea using static for private objects and functions!

Dean Sutherland:
This is clearly good advice. Perhaps we should add the true requirement, which permits more possible placements than this. That true requirement is this: where the domination and post-domination properties are defined over the executing thread's global control-flow graph. Further, intervening acquisition or release of the mutex is permitted only for recursive mutexes and only when the acquisitions and releases are correctly balanced. Thinking a bit more, even that requirement isn't (quite) the true minimal requirement. We actually need a (set of) mutex acquisitions and a matching set of mutex releases where the acquisition(s) collectively dominate the release(s) and the releases collectively post-dominate the acquisitions (and the rule about intervening operations on the same mutex still holds). But that's getting rather baroque.
https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152303
I am a relative newcomer to C++. I have some history in BASIC (I like goto loops!), but the C++ syntax is new to me.

The program I am trying to create is a simple game modifier that changes a value in the game's user.ini file so that a new action will be bound to a certain key. What I want it to do is open and run through the user.ini file, copying every line exactly until it finds the line that begins with "X=" (with X being the letter the user picked), where I want to make it "X=playvehiclehorn 1", then continue on copying the rest of the lines of the file and stop when it reaches the end. I don't want the program to overwrite the file it reads from, just to make a copy in the same directory as the program is run, so the user can manually add it.

I am using Bloodshed Dev-C++. The compiler is throwing some errors at me I have never seen before. It is giving me errors on lines that don't have any code and past the end of the program. I have used all these commands before, maybe not in the same program, so I should be able to make it work. But no one can honestly say the error messages in Bloodshed help a novice very much. Thanks in advance!

here is my code (lines with errors are marked with *linenumber*):
Code:
#include <iostream>
#include <cstdlib>
#include <string>
#include <fstream>
using namespace std;

int main()
{
    char peeker = 'z', def = 'y', newkey = 'H', custkey = 'z',
         gettest = 'z', peeker2 = 'z', done = 'n';
    string filename, linein;
    ifstream infile;
    ofstream outfile;

    while (def != 'y' || def != 'n')
    {
        cout << "Is your program installed in the default directory?"
             << endl << "c:\UT2004 (y/n): ";
        cin >> def;
    }
    if (def == 'n')
    {
        cout << "Enter the location of your user.ini file" << endl
             << "Example: c:\ut2004\system\user.ini" << endl << ": ";
        cin.ignore(400, '\n');
        getline(cin, filename);
        infile.open(filename.c_str());
    }                                                  // *29*
    else
        infile.open("c:\ut2004\system\user.ini");
    if (!infile)
    {
        system("pause");
        return 1;
    }
    outfile.open("user.ini");
    cout << "Do you want to use the 'H' key? (y/n): ";
    while (custkey != 'y' || custkey != 'n')
        cin >> custkey;
    if (custkey == 'n')
    {
        cout << "Enter your new key to bind to the horn. It must be a CAPITAL letter."
             << endl << "Ex: H : ";
        cin >> newkey;
        while (newkey < 'A' || newkey > 'Z')
        {
            cout << " Invalid key, enter another: ";
            cin >> newkey;
        }
    while (!infile.eof())
    {
        peeker = infile.peek();
        if (peeker == newkey && done == 'n')
        {
            infile.get(gettest);
            peeker2 = infile.peek();
            if (peeker2 == '=')
            {
                outfile << newkey << "=" << "playvehiclehorn 1" << endl;
                done = 'y';
                infile.ignore(100, '\n');
            }
            else
            {
                infile.putback(gettest);
                getline(infile, linein);
                outfile << linein << endl;
            }
        }
        else                                           // *79*
        {
            getline(infile, linein);
            outfile << linein << endl;
        }
    }
    infile.close();
    outfile.close();
    system("pause");
    return 0;
}                                                      // *90*

here are my error messages

Line | Error
--------------
102: non-hex digit 'T' in universal-character-name    *note, the program has only 90 lines
79: non-hex digit 't' in universal-character-name
79: [Warning] unknown escape sequence '\s'
79: non-hex digit 's' in universal-character-name
29: non-hex digit 't' in universal-character-name
29: [Warning] unknown escape sequence '\s'
29: non-hex digit 's' in universal-character-name
In function `int main()':    *no line number
91: syntax error at end of input
https://cboard.cprogramming.com/cplusplus-programming/58795-unknown-errors-me-least.html
How to Merge Dask DataFrames
• February 1, 2022
This post demonstrates how to merge Dask DataFrames and discusses important considerations when making large joins. You'll learn:
- how to join a large Dask DataFrame to a small pandas DataFrame
- how to join two large Dask DataFrames
- how to structure your joins for optimal performance
The lessons in this post will help you execute your data pipelines faster and more reliably, enabling you to deliver value to your clients in shorter cycles. Let's start by diving right into the Python syntax and then build reproducible data science examples you can run on your machine.
Dask DataFrame Merge
You can join a Dask DataFrame to a small pandas DataFrame by using the dask.dataframe.merge() method, similar to the pandas API. Below we create a Dask DataFrame with multiple partitions and execute a left join with a small pandas DataFrame:

import dask.dataframe as dd
import pandas as pd

# create sample large pandas dataframe
df_large = pd.DataFrame(
    {
        "Name": ["Azza", "Brandon", "Cedric", "Devonte", "Eli", "Fabio"],
        "Age": [29, 30, 21, 57, 32, 19]
    }
)

# create multi-partition dask dataframe from pandas
large = dd.from_pandas(df_large, npartitions=2)

# create sample small pandas dataframe
small = pd.DataFrame(
    {
        "Name": ["Azza", "Cedric", "Fabio"],
        "City": ["Beirut", "Dublin", "Rosario"]
    }
)

# merge dask dataframe to pandas dataframe
join = large.merge(small, how="left", on=["Name"])

# inspect results
join.compute()

To join two large Dask DataFrames, you can use the exact same Python syntax. If you are planning to run repeated joins against a large Dask DataFrame, it's best to sort the Dask DataFrame using the .set_index() method first to improve performance.

# sort the dataframe by setting an index
large = large.set_index("Name")

# also_large is another Dask DataFrame indexed the same way
large_join = large.merge(also_large, how="left", left_index=True, right_index=True)

Dask DataFrame merge to a small pandas DataFrame
Dask DataFrames are divided into multiple partitions.
Each partition is a pandas DataFrame with its own index. Merging a Dask DataFrame to a pandas DataFrame is therefore an embarrassingly parallel problem. Each partition in the Dask DataFrame can be joined against the single small pandas DataFrame without incurring overhead relative to normal pandas joins.

Let's demonstrate with a reproducible Python code example. Import dask.dataframe and pandas and then load in the datasets from the public Coiled Datasets S3 bucket. This time the Dask DataFrame is actually large and not just a placeholder: it contains a 35GB dataset of time series data. This means the data is too large to run with pandas on almost all machines.

import dask.dataframe as dd
import pandas as pd

# create dataframes
large = dd.read_parquet("s3://coiled-datasets/dask-merge/large.parquet")
small = pd.read_parquet("s3://coiled-datasets/dask-merge/small.parquet")

# inspect large dataframe
large.head()

# check number of partitions
>>> large.npartitions
359

# inspect small dataframe
small.head()

The large dataframe contains a large dataset of synthetic time series data with entries at a frequency of one second. The small dataframe contains synthetic data over the same time interval but at a frequency of one entry per day. Let's execute a left join on the timestamp column by calling dask.dataframe.merge(). We'll use large as the left dataframe.

joined = large.merge(
    small,
    how="left",
    on=["timestamp"]
)
joined.head()

As expected, the column z is filled with NaN for all entries except the first per-second entry of every day.

If you're working with a small Dask DataFrame instead of a pandas DataFrame, you have two options. You can convert it into a pandas DataFrame using .compute(). This will load the DataFrame into memory. Alternatively, if you can't or don't want to load it into your single machine's memory, you can turn the small Dask DataFrame into a single partition by using the .repartition() method instead.
These two operations are programmatically equivalent, which means there's no meaningful difference in performance between them. The rule of thumb here is to keep your Dask partitions under 100MB each.

# turn dask dataframe into pandas dataframe
small = small.compute()

# OR turn dask dataframe into one partition
small = small.repartition(npartitions=1)

Merge two large Dask DataFrames

You can merge two large Dask DataFrames with the same .merge() API syntax.

large_joined = large.merge(
    also_large,
    how="left",
    on=["timestamp"]
)

However, merging two large Dask DataFrames requires careful consideration of your data structure and the final result you're interested in. Joins are expensive operations, especially in a distributed computing context. Understanding both your data and your desired end result can help you set up your computations efficiently to optimize performance. The most important consideration is whether and how to set your DataFrame's index before executing the join. Note that in the previous example, the timestamp column is the index for both dataframes.

Unsorted vs Sorted Joins

As explained above, Dask DataFrames are divided into partitions, where each single partition is a pandas DataFrame. Dask can track how the data is partitioned (i.e. where one partition starts and the next begins) using a DataFrame's divisions. If a Dask DataFrame's divisions are known, then Dask knows the minimum value of every partition's index and the maximum value of the last partition's index. This enables Dask to take efficient shortcuts when looking up specific values. Instead of searching the entire dataset, it can find out which partition the value is in by looking at the divisions and then limit its search to only that specific partition. This is called a sorted join. The join stored in large_joined we executed above is an example of a sorted join since the timestamp column is the index for both of the dataframes in that join. Let's look at another example.
The divisions of the DataFrame df below are known: it has 4 partitions. This means that if we look up the row with index 2015-02-12, Dask will only search the 2nd partition and won't bother with the other three. In reality, Dask DataFrames often have hundreds or even thousands of partitions, which means the benefit of knowing where to look for a specific value becomes even greater.

>>> df.known_divisions
True
>>> df.npartitions
4
>>> df.divisions
['2015-01-01', '2015-02-01', '2015-03-01', '2015-04-01', '2015-04-31']

If divisions are not known, then Dask will need to move all of your data around so that rows with matching values in the joining columns end up in the same partition. This is called an unsorted join and it's an extremely memory-intensive process, especially if your machine runs out of memory and Dask has to read and write data to disk instead. This is a situation you want to avoid. Read more about unsorted large-to-large joins in the Dask documentation.

Sorted Join with set_index

To perform a sorted join of two large Dask DataFrames, you will need to ensure that the DataFrame's divisions are known by setting the DataFrame's index. You can set a Dask DataFrame's index and pass it the known divisions using:

# use set_index to get divisions
dask_divisions = large.set_index("id").divisions
unique_divisions = list(dict.fromkeys(list(dask_divisions)))

# apply set_index to both dataframes
large_sorted = large.set_index("id", divisions=unique_divisions)
also_large_sorted = also_large.set_index("id", divisions=unique_divisions)

large_join = large_sorted.merge(
    also_large_sorted,
    how="left",
    left_index=True,
    right_index=True
)

Note that setting the index is itself also an expensive operation. The rule of thumb here is that if you're going to be joining against a large Dask DataFrame more than once, it's a good idea to set that DataFrame's index first. Read our blog about setting a Dask DataFrame index to learn more about how and when to do this.
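The division-based lookup described above is easy to see in miniature with plain Python: Dask can bisect a sorted divisions list to find the single partition that could hold a value. The divisions below are hypothetical, mirroring the example output (with a valid end date):

```python
import bisect

# Hypothetical divisions: partition i covers [divisions[i], divisions[i+1]).
# The final entry is the maximum of the last partition, not a new boundary.
divisions = ["2015-01-01", "2015-02-01", "2015-03-01", "2015-04-01", "2015-04-30"]

def partition_for(value, divisions):
    """Return the index of the only partition that may contain `value`."""
    # bisect_right finds the insertion point; subtracting 1 gives the
    # partition whose lower bound is <= value.
    i = bisect.bisect_right(divisions, value) - 1
    # Clamp to valid partition indices (len(divisions) - 1 partitions).
    return min(max(i, 0), len(divisions) - 2)

print(partition_for("2015-02-12", divisions))  # prints 1, i.e. the 2nd partition
```

Only that one partition then needs to be scanned, which is exactly why known divisions make lookups and sorted joins so much cheaper than a full shuffle.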
It's good practice to write sorted DataFrames to the Apache Parquet file format in order to preserve the index. If you're not familiar with Parquet, then you might want to check out our blog about the advantages of Parquet for Dask analyses.

Joining along a non-Index column

You may find yourself in the situation of wanting to perform a join between two large Dask DataFrames along a column that is not the index. This is basically the same situation as not having set the index (i.e. an unsorted join) and will require a complete data shuffle, which is an expensive operation. Ideally you'll want to think about your computation in advance and set the index right from the start. The Open Source Engineering team at Coiled is working actively to improve shuffling. Read the Proof of Concept for better shuffling in Dask if you'd like to learn more.

Sorted Join Fails Locally (MemoryError)

Even a sorted join may fail locally if the datasets are simply too large for your local machine. For example, this join:

large = dd.read_parquet("s3://coiled-datasets/dask-merge/large.parquet")
also_large = dask.datasets.timeseries(
    start="1990-01-01",
    end="2020-01-01",
    freq="1s",
    partition_freq="1M",
    dtypes={"foo": int}
)

large_join = large.merge(
    also_large,
    how="left",
    left_index=True,
    right_index=True
)
large_join.persist()

will fail with a MemoryError when run on a laptop with 32GB of RAM or less. Note that we didn't set the index explicitly here because the index of large is preserved in the Parquet file format and dask.datasets.timeseries() automatically sets the index when creating the synthetic data.
Run Massive Joins on a Dask Cluster

When this happens, you can scale out to a Dask cluster in the cloud with Coiled and run the join there in 3 steps:

- Spin up a Coiled cluster:

cluster = coiled.Cluster(
    name="dask-merge",
    n_workers=50,
    worker_memory='16Gib',
    backend_options={'spot': 'True'},
)

- Connect Dask to the running cluster:

from distributed import Client
client = Client(cluster)

- Run the massive join on the 50 workers' cores in parallel:

large_join = large.merge(
    also_large,
    how="left",
    left_index=True,
    right_index=True
)

%%time
joined = large_join.persist()
distributed.wait(joined)

CPU times: user 385 ms, sys: 38.5 ms, total: 424 ms
Wall time: 14.6 s

This Coiled cluster with 50 workers can run a join of two 35GB DataFrames in 14.6s.

Dask Dataframe Merge Summary

- You can merge a Dask DataFrame to a small pandas DataFrame using the merge method. This is an embarrassingly parallel problem that requires little to no extra overhead compared to a regular pandas join.
- You can merge two large Dask DataFrames using the same merge method. Think carefully about whether to run an unsorted join, or a sorted join using set_index first to speed up the join.
- For very large joins, delegate computations to a Dask cluster in the cloud with Coiled for high-performance joins using parallel computing.
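To make the "embarrassingly parallel" point from this post concrete, here is a dependency-free sketch of the small-table join: every partition is joined independently against the same broadcast lookup table, so partitions can be processed in parallel with no data movement between them. The records are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Two "partitions" of a large table, as lists of records.
partitions = [
    [{"Name": "Azza", "Age": 29}, {"Name": "Brandon", "Age": 30}],
    [{"Name": "Cedric", "Age": 21}, {"Name": "Fabio", "Age": 19}],
]

# Small lookup table, broadcast to every partition.
small = {"Azza": "Beirut", "Cedric": "Dublin", "Fabio": "Rosario"}

def left_join(partition):
    # Each partition only needs the small table; no data ever moves
    # between partitions, which is why this parallelizes trivially.
    return [{**row, "City": small.get(row["Name"])} for row in partition]

# Join every partition in parallel, then flatten the results in order.
with ThreadPoolExecutor() as pool:
    joined = [row for part in pool.map(left_join, partitions) for row in part]

print(joined[0])  # {'Name': 'Azza', 'Age': 29, 'City': 'Beirut'}
```

Unmatched keys get None, just as the real Dask/pandas left join fills unmatched rows with NaN. A large-to-large join cannot be decomposed this way without first aligning partitions, which is exactly why sorting with set_index matters there.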
https://coiled.io/blog/dask-dataframe-merge-join/
Classloader Acrobatics: Code Generation with OSGi

Porting great infrastructure to OSGi often means solving complex class loading problems. This article is dedicated to the frameworks that face the hardest issues in this area: those that do dynamic code generation. Incidentally these are also the coolest frameworks: AOP wrappers, ORM mappers, and service proxy generators are just a few examples. We will examine in order of increasing complexity some typical classloading problems and develop a tiny bit of code to solve the most interesting one. Even if you don't plan to write code generation frameworks any time soon, this article can give you some insight into the low level operation of a modular runtime with statically defined dependencies, such as OSGi. This article comes with a working demo project that contains not only the code presented here, but also two ASM-based code generators you can play with.

Classload Site Conversion

Porting a framework to OSGi usually requires it to be refactored to the extender pattern. This pattern allows the framework to delegate all classloading to OSGi but at the same time retain control over the lifecycle of application code. The goal of the conversion is to replace the traditional plethora of classloading policies with loading classes from the application bundle. For example we want to replace code like this:

ClassLoader appLoader = Thread.currentThread().getContextClassLoader();
Class appClass = appLoader.loadClass("com.acme.devices.SinisterEngine");
...

with:

Bundle appBundle = ...
Class appClass = appBundle.loadClass("com.acme.devices.SinisterEngine");
...

Although we must do a non-trivial amount of work to get OSGi to load the application code for us, we at least have a nice and correct way to get things working. And now they will work even better than before! Now the user can add/remove applications just by installing/uninstalling bundles into the OSGi container. Also the users can break up their application into as many bundles as they wish, share libraries between applications, and utilize other such capabilities of modularity.

Since the context classloader is the current standard way for frameworks to load application code, it deserves a few extra words. Currently OSGi does not define a policy for setting the context classloader. For this reason developers need to know in advance when a framework relies on the context loader, and set it manually every time they call into that framework. Because this is error-prone and inconvenient the context loader is almost never used under OSGi. There are efforts under way to define how the OSGi container should automatically manage the context classloader. Until an official standard emerges it is best to convert the sites where it is used into classloads from a concrete application bundle.

Adapter ClassLoader

Sometimes the code we convert has externalized its classloading to a classic incarnation of the adapter pattern:

public class BundleClassLoader extends ClassLoader {
    private final Bundle delegate;

    public BundleClassLoader(Bundle delegate) {
        this.delegate = delegate;
    }

    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        return delegate.loadClass(name);
    }
}

Now we can pass this adapter to the converted framework code. We can also add bundle tracking code to create the adapters as new bundles come and go - for example, we can adapt a Java framework to OSGi "externally", avoiding the effort of browsing through the codebase and converting each individual classload site.
Here is a highly schematic sample of some code that converts a framework to use OSGi classloading:

...
Bundle app = ...
BundleClassLoader appLoader = new BundleClassLoader(app);
DeviceSimulationFramework simfw = ...
simfw.simulate("com.acme.devices.SinisterEngine", appLoader);
...

Bridge ClassLoader

Many interesting Java frameworks do fancy classworking on client code at runtime. The goal usually is to dynamically build classes out of stuff living in the application's class space. Let's call these generated classes enhancements. Usually the enhancement implements some application-visible interface or extends an application-visible class. Sometimes additional interfaces and their implementations are also mixed in. Enhancements augment application code - the generated objects are meant to be called directly by the application. For example, a service proxy is passed to the application code to free it from the need to track a dynamic service. Similarly, a wrapper that adds some AOP feature is passed to the application code in place of the original object.

Enhancements start their lives as byte[] blocks, produced by your favorite class engineering library (ASM, BCEL, CGLIB, ...). Once we have generated our class, we must turn the raw bytes into a Class object, i.e. we must make some ClassLoader define the class. That loader has to combine the framework and the application bundles in a way that is "invisible" to the OSGi container. As a result the enhancements can potentially be exposed to incompatible versions of the same class.

Class space completeness

Enhancements are backed by code private to the Java framework that generated them - this implies that the framework should introduce the new class into its own class space. On the other hand, the enhancements implement interfaces or extend classes visible in the application class space, which implies that we should define the enhancement class there. We cannot define a class in two class spaces at the same time, so we have a problem.
Because there is no class space that sees all required classes, we have no other option but to make a new class space. A class space equals a ClassLoader instance, so our first job is to maintain one dedicated ClassLoader on top of every application bundle. These are called Bridge ClassLoaders, because they merge two class spaces by chaining their loaders:

/*
 * Application space
 */
Bundle app = ...
ClassLoader appSpace = new BundleClassLoader(app);

/*
 * Framework space
 *
 * We assume this code is executed inside the framework
 */
ClassLoader fwSpace = this.getClass().getClassLoader();

/* Bridge */
ClassLoader bridge = new BridgeClassLoader(appSpace, fwSpace);

This loader will first serve requests from the application space - if that fails, it will then try the framework space. Notice that we still let OSGi do lots of heavy lifting for us. When we delegate to either class space, we are in fact delegating to an OSGi-backed ClassLoader - basically, the primary and secondary loaders can delegate to other bundle loaders in accordance with the import/export metadata of their respective bundles.

At this point we might be pleased with ourselves. The bitter truth, however, is that the framework and application class spaces combined may not be enough! Everything hinges on the particular way the JVM links classes (also known as resolving classes). There are a variety of explanations for how this works:

The short answer: JVM resolution works on a fine-grained (one symbol at a time) level.

The long answer: When the JVM links a class, it does not need the complete descriptions of all classes referenced by the linked class. It only needs information about the individual methods, fields and types that are really used by the linked class. What to our intuition is a monolithic whole, to the JVM is a class name, plus a superclass, plus a set of implemented interfaces, plus a set of method signatures, plus a set of field signatures.
All these symbols are resolved independently and lazily. For example, to link a method call site, the class space of the caller needs to supply Class objects only for the target class and for all types used in the method signature. Definitions for the numerous other things that the target class may contain are not needed, and the ClassLoader of the caller will never receive a request to load them.

The formal answer: Class A from class space SpaceA must be represented by the same Class object in class space SpaceB if and only if:

- A class B from SpaceB exists that refers to A from its symbol table (known also as the constant pool).
- The OSGi container has wired SpaceA as the provider of class A for SpaceB. The wire is established based on the static metadata of all bundles in the container.

By example: Imagine we have a bundle BndA that exports a class A. Class A has 3 methods, distributed between 3 interfaces:

IX.methodX(String)
IY.methodY(String)
IZ.methodZ(String)

Imagine also that a bundle BndB contains a class B that calls only IX.methodX() on instances of A. To link class B, the class space of BndB needs to supply Class objects only for A, IX and String. It does not need to supply IY or IZ, even though A implements them, because B never references them directly. Finally, even BndA does not have to supply any of the super-interfaces of IX, IY, IZ, because they are also not directly referenced.

Now let's imagine we want to present an enhanced version of class A from class space BndA to class B from class space BndB. The enhancement needs to extend class A and override some or all of its methods. Because of that, the enhancement needs to see the classes used in the signatures of all overridden methods. However, BndB will import all these classes only if it contains code that calls each overridden method. It is very unlikely that BndB calls exactly the methods of A that we mean to override with our enhancement. Therefore BndB likely does not see enough classes to define the enhancement in its class space. In fact the complete set of classes can only be supplied by BndA. We have a problem!
Turns out that we must bridge not the framework and application spaces, but the framework space and the space of the enhanced class - so, rather than "bridge per application space" we must shift our strategy to "bridge per enhanced space". We need to make a transitive hop from the application to the class space of some third party bundle, from where the application imports the class it wants us to enhance. How do we make that transitive leap? Simple! As we know, every Class object can tell us the class space where it was first defined. For example, all we need to do to get the defining class loader of A is to call A.class.getClassLoader(). In many cases however, we have a String name rather than a Class object, so how do we get A.class in the first place? Simple again! We can ask the application bundle to give us the exact Class object it sees under the name "A". Then we can bridge the space of that Class with the framework space. This is a critical step because we need the enhanced and original classes to be interchangeable within the application. Out of potentially many available versions of class A, we need to pick the class space of exactly the one used by the application. Here is a schematic of how the framework can maintain a cache of classloader bridges:

...
/* Ask the app to resolve the target class */
Bundle app = ...
Class target = app.loadClass("com.acme.devices.SinisterEngine");

/* Get the defining classloader of the target */
ClassLoader targetSpace = target.getClassLoader();

/* Get the bridge for the class space of the target */
BridgeClassLoaderCache cache = ...
ClassLoader bridge = cache.resolveBridge(targetSpace);

Where the bridge cache would look something like:

public class BridgeClassLoaderCache {
    private final ClassLoader primary;
    private final Map<ClassLoader, WeakReference<ClassLoader>> cache;

    public BridgeClassLoaderCache(ClassLoader primary) {
        this.primary = primary;
        this.cache = new WeakHashMap<ClassLoader, WeakReference<ClassLoader>>();
    }

    public synchronized ClassLoader resolveBridge(ClassLoader secondary) {
        ClassLoader bridge = null;
        WeakReference<ClassLoader> ref = cache.get(secondary);
        if (ref != null) {
            bridge = ref.get();
        }
        if (bridge == null) {
            bridge = new BridgeClassLoader(primary, secondary);
            cache.put(secondary, new WeakReference<ClassLoader>(bridge));
        }
        return bridge;
    }
}

To prevent memory leaks due to ClassLoader retention, we had to use both weak keys and weak values. The goal is to not retain the class space of an uninstalled bundle in memory. We had to use weak values because the value (BridgeClassLoader) of each map entry references strongly the key (ClassLoader), thus negating its "weakness". This is the standard advice prescribed by the WeakHashMap javadoc. By using a weak cache we avoid the need to track a whole lot of bundles and do eager reactions to their lifecycles.

Visibility

Okay, we finally have our exotic bridge class space. Now how do we define our enhancements in it? The problem, as mentioned earlier, is that defineClass() is a protected method of ClassLoader. We could override it with a public method, but that would be rude. Also we will have to code our own checks to see if the requested enhancement has already been defined. It is a better idea to follow the intended design of ClassLoader. This design prescribes that we should override findClass(), which can call defineClass() when it determines it can supply the requested class from an arbitrary binary source. In findClass() we can rely only on the name of the requested class to make decisions. So our BridgeClassLoader must think to itself: This is a request for "A$Enhanced", so I must call the enhancement generator for a class named "A"! Then I call defineClass() on the produced byte[]. Then I return the new Class object.

There are two remarkable things about that statement.
- We introduced a text protocol for the names of enhancement classes.
- We can pass a single item of data to our ClassLoader - a String for the name of the requested class. At the same time we need to pass two items of data - the name of the original class and a flag, marking it as a subject of enhancement. We pack these two items into a single string of the form [name of target class]"$Enhanced". Now findClass() can look for the enhancement marker $Enhanced and, when it is present, extract the name of the target class.

In this way we also introduce a convention for the names of our enhancements. Whenever we see a class name ending with $Enhanced we know it is a request for one of ours, and there is no chance that we will bridge the same bundle space twice or generate redundant enhancement classes.

Here we must also mention the option to call defineClass() through reflection. This approach is used by cglib. This is a viable option when we want the user to pass us a ready to use ClassLoader. By using reflection we avoid the need to create yet another loader on top of that, just so we can access its defineClass() method.

Class space consistency

At the end of the day, what we have done is to merge two distinct, unconnected class spaces using the OSGi modular layer. Also we introduced a search order between those spaces similar to the search order of the evil Java classpath. In effect, we have somewhat eroded the class space consistency of the OSGi container. Here is a scenario of how bad things can happen:

- Framework uses package com.acme.devices and requires exactly version 1.0.
- Application uses package com.acme.devices and requires exactly version 2.0.
- Class A refers directly to com.acme.devices.SinisterDevice.
- It just happens that class A$Enhanced uses com.acme.devices.SinisterDevice from its internal implementation.
- Because we search the application space first, A$Enhanced will be linked against com.acme.devices.SinisterDevice version 2.0, while its internal code has been compiled against com.acme.devices.SinisterDevice version 1.0.

As a result the application will see mysterious LinkageErrors and/or ClassCastExceptions. Needless to say, this is a problem. Alas, an automated way to handle this problem does not exist yet. We must simply make sure the enhancement-internal code refers directly only to "very private" implementation classes that are not likely to be used by anyone else. We can even build private adapters for any external APIs we might want to use and then refer to those from the enhancement code. Once we have a well defined implementation subspace, we can use that knowledge to limit the class leakage. We now delegate to the framework space requests only for the special subset of private implementation classes. This will also limit the search order problem, making it irrelevant if we do application-first or framework-first search. One good policy to keep things under control is to have a dedicated package for all enhancement implementation code. Then the bridge loader can check for classes whose name begins with that package and delegate their loading to the framework loader. Finally, we sometimes need to judiciously relax this isolation policy for certain singleton packages like org.osgi.framework - we can feel pretty safe compiling our enhancement code directly against org.osgi.framework, because at runtime everyone inside the OSGi container will see the same org.osgi.framework - it is supplied by the OSGi core.

Putting it all together

Everything from this classloading scheme can be packed into a small Enhancer utility; Enhancer captures only the bridging pattern, while the code generation logic is externalized behind a generator interface (the complete code ships with the article's demo project). Client code then simply does:

...
Class target = app.loadClass("com.acme.devices.SinisterEngine");
Class<SinisterDevice> enhanced = enhancer.enhance(target);
...

The Enhancer framework presented above is more than pseudocode.
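As an aside, the "$Enhanced" name protocol from the Visibility section is simple enough to sketch and unit-test in isolation. The helper below is illustrative only - it is not the demo project's actual code:

```java
// Illustrative helper for the "$Enhanced" name protocol: detect the
// enhancement marker and recover the name of the target class.
final class EnhancementNames {
    private static final String MARKER = "$Enhanced";

    private EnhancementNames() {}

    static boolean isEnhancement(String className) {
        return className.endsWith(MARKER);
    }

    static String targetName(String enhancementName) {
        if (!isEnhancement(enhancementName)) {
            throw new IllegalArgumentException(enhancementName);
        }
        // Strip the marker suffix to get back the original class name.
        return enhancementName.substring(
                0, enhancementName.length() - MARKER.length());
    }

    static String enhancementName(String targetName) {
        return targetName + MARKER;
    }
}
```

Inside findClass() the bridge loader would call targetName() on the requested name, run the generator for that target class, and hand the resulting byte[] to defineClass().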
In fact, during the research of this article, it was really built and tested with two demo code generators operating simultaneously in the same OSGi container. The result is loads of fun and is now available on Google Code for everyone to play with. Those interested in the class generation process itself can examine the two demo ASM-based generators. Those who read the article on service dynamics may notice that the proxy generator uses the ServiceHolder code presented there as a private implementation. Conclusion The classload acrobatics that were presented are used in a number of infrastructural frameworks under OSGi. For example, classloader bridging is used by Guice, Peaberry, and Spring Dynamic Modules to get their AOP wrappers and service proxies to work. When we hear the Spring guys say they did serious work on Tomcat to adapt it to OSGi, we can speculate they had to do classload site conversion or a more serious refactor to externalize Tomcat's servlet classloading altogether. Acknowledgements Many of the lessons in this article were extracted from the excellent code Stuart McCulloch wrote for Google Guice and Peaberry. For examples of industrial strength classload bridging, look at BytecodeGen.java from Google Guice and ImportProxyClassLoader.java from Peaberry. There you will see how to handle some additional aspects like security, the system classloader, better lazy caching and concurrency. Thank you Stuart! The author is also obliged to Classy Solutions to Tricky Proxies by Peter Kriens. Hopefully, the explanations on JVM linking in the current article will make a useful contribution to Peter's work. Thank you Peter! About the Author Todor Boev has been involved with OSGi for the past eight years as an employee at ProSyst. He is passionate about developing OSGi into a general purpose programming environment for the JVM. Currently he explores this topic both professionally and as a contributor to the Peaberry project. 
He maintains a blog at rinswind.blogspot.com.

Quite Informative by Chetan Mehrotra

Great but..... by Alessandro Mottadelli

"Currently OSGi does not define a policy for setting the context classloader."

Maybe context management is beyond the scope of OSGi, unless OSGi evolves to become a full application container.

"For this reason developers need to know in advance when a framework relies on the context loader, and set it manually every time they call into that framework"

Again, there is a missing piece here. Context management, thread management, resource management... are things that should be defined in application containers built on top of an OSGi container. Regards

Re: Great but..... by Todor Boev

Very sophisticated, but I have the impression that one should not try to find inside OSGi the solution to problems that are generated by things that are not in the scope of OSGi.

Sure. I am a firm believer in keeping OSGi a thin layer on top of the JVM. Its purpose should be to enable modularity and service orientation and get out of the way.

Context management, thread management, resource management... are things that should be defined in application containers built on top of an OSGi container.

I agree. The article is exactly about building context management on top of OSGi. The problem can be summarized in one sentence: "Let OSGi do the class loading for you or suffer exploding complexity". With contexts this is not possible, so we need clever ways to control the complexity like the little Enhancer I presented. As for the context class loader: this is the same problem. Notice that the Generator interface above takes as a parameter a ClassLoader named "context". In the end this bridging pattern seems to be quite general. This justifies an effort to implement something like the Enhancer utility into the OSGi core. This is what the Eclipse guys are doing.
This will give the core yet another way to do sophisticated class loading and let the application containers define what matters: how contexts span across the bundles. Cheers
http://www.infoq.com/articles/code-generation-with-osgi/
Code Style. XML

Use this page to configure formatting options for XML files. When you change these settings, the Preview pane shows how this will affect your code.

Tabs and Indents

Other

Arrangement

Use the Matching rules area to define the list of rules and their order. Each rule can match the following:

Type: match only tags or attributes. Click a type twice to disable the filter and match both.

Name: match the entire name of the element. This filter supports regular expressions and uses the standard syntax.

Namespace: match the namespace attribute.

Order: Select how to order multiple elements that match the same rule. For example, if there are multiple attributes with the same name, select keep order to arrange them in the same order, or select order by name to sort the matching attributes alphabetically by their value. Rules with alphabetical sorting are designated by a special icon.

Use the Force rearrange list to select the default rearrangement behavior when you reformat the code. This defines the default state of the Rearrange entries checkbox in the Reformat Code dialog.

Use current mode (toggled in the Reformat Code dialog): The Rearrange entries checkbox is enabled by default but you can change it.

Always: The Rearrange entries checkbox is enabled by default and you cannot change it.

Never: The Rearrange entries checkbox is disabled by default and you cannot change it.
https://www.jetbrains.com/help/pycharm/settings-code-style-xml.html
SETLOGMASK(3)              Linux Programmer's Manual             SETLOGMASK(3)

NAME
       setlogmask - set log priority mask

SYNOPSIS
       #include <syslog.h>

       int setlogmask(int mask);

RETURN VALUE
       This function returns the previous log priority mask.

ERRORS
       None.

ATTRIBUTES
       For an explanation of the terms used in this section, see attributes(7).

       ┌─────────────┬───────────────┬────────────────────────┐
       │Interface    │ Attribute     │ Value                  │
       ├─────────────┼───────────────┼────────────────────────┤
       │setlogmask() │ Thread safety │ MT-Unsafe race:LogMask │
       └─────────────┴───────────────┴────────────────────────┘

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008. LOG_UPTO() will be included in the next
       release of the POSIX specification (Issue 8).

SEE ALSO
       closelog(3), openlog(3), syslog(3)

COLOPHON
       This page is part of release 5.08 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at.

                                 2020-06-09                      SETLOGMASK(3)

Pages that refer to this page: closelog(3), openlog(3), syslog(3), vsyslog(3)
https://man7.org/linux/man-pages/man3/setlogmask.3.html
Hello, I have been unable to synchronize collected data in one of my Collector apps (I believe this might be due to the number of attachments). I've tried all suggestions in this forum, and have now manually extracted and converted my collected data into a File Geodatabase. I'm wondering how to go about "merging" this file geodatabase into my SDE geodatabase on our GIS Server, while maintaining the Global IDs and related attachments I've configured. Has anyone been through this process? If so, could you provide some info on how you did it? Thank you! Jacqueline

Solved! Go to Solution.

Here are instructions I wrote for myself a few months ago. A few of the steps regarding the new feature class you can omit because you already have a "new feat class", which is your original SDE gdb. Remember to add the field specified in step 2.

HOW TO MOVE FEATURE ATTACHMENTS FROM 1 GDB TO ANOTHER
1. create new geodatabase
2. create new feat. class in new geodatabase
-geometry has to be the same as the source feature class that has feat. attachments
-for fields, use the import button and specify source feature class
-add additional field name: alias_GID type: text, length: 150
3. enable attachments on new feature class
4. append source feature class to new feature class
-NO TEST
-add field source.GlobalID to target.alias_GID
5. create new field in new feature attachment table named "REL_alias_GID" type: text, length: 150
6. append source feature attachment table to new feature attachment table
7. field calculate new feature attachment table REL_alias_GID to [REL_GLOBALID]
8. join new feature class.alias_GID .. TO .. new feature attachment table.REL_alias_GID
left table is new feature attachment table
right table is new feature class
9. field calculate new feature attachment table REL_GLOBALID to new feature class.GLOBALID

Hi Jacqueline, I have the same issue. I want to transfer my FGDB with Collector attachments to SDE without losing attachments. I tried to run this script but I am getting an error. I just changed the updateFeatureClass & targetFeatureClass parameters in the entire script:

def main():
    updateFeatureClass = r'\\central\gis\development\slm_pj\PJ\OSC_SurveyData_Final.gdb\osc_lito'
    targetFeatureClass = r'Database Connections\GISDATA@gdbmaint@stgisgdb1.sde\gdbmaint.GISDATA.OSCTest\gdbmaint.GISDATA.osc_lito'

Error:

File "U:\PJ\SLM\Codes\Append-Features-With-Attachments-master\AppendFeaturesWithAttachments.py", line 74, in appendFeatures
    editor = arcpy.da.Editor(desc.path)
RuntimeError: cannot open workspace

Any idea what should I do to run it successfully? Thanks in advance. PJ

Hi Dan, thanks for the instructions. I just had one quick question. In your last step you mention to field calculate the GlobalID field. I cannot figure out how you did this. Since this is a GlobalID field type, the Field Calculate option is grayed out. Am I missing something? Thanks in advance! Craig
https://community.esri.com/t5/arcgis-collector-questions/append-extracted-data-to-sde-collector-for-arcgis/td-p/533881
Steem Developer Portal

PY: Edit Content Patching

How to edit post content with diff_match_patch using Python. Full, runnable src of Edit Content Patching can be downloaded as part of the PY tutorials repository.

In this tutorial we show you how to patch and update posts/comments on the Steem blockchain using the commit class found within the steem-python library.

Intro

Being able to patch a post is critical to save resources on Steem. The Steem python library has a built-in function to transmit transactions to the blockchain. We use the diff_match_patch class for Python to create a patch for a post or comment, and then the post method found within the commit class in the library. It should be noted that comments and new posts are both treated as a commit.post operation, with the only difference being that a comment/reply has an additional parameter containing the parent post/comment. There is already a tutorial on how to create a new post, so the focus of this tutorial will be on patching the content of the post.

We will be using a couple of methods within the diff_match_patch class:

- diff_main - Compares two text fields to find the differences.
- diff_cleanupSemantic - Reduces the number of edits by eliminating semantically trivial equalities.
- diff_levenshtein - Computes the Levenshtein distance: the number of inserted, deleted or substituted characters.
- patch_make - Creates a patch based on the calculated differences. This method can be executed in 3 different ways based on the parameters: by using the two separate text fields in question, by using only the calculated difference, or by using the original text along with the calculated difference.
- patch_apply - Applies the created patch to the original text field.

Steps

1. App setup - Library install and import; connection to testnet
2. User information and steem node - Input user information and connection to Steem node
3. Post to update - Input and retrieve post information
4. Patching - Create the patch to update the post
5. New post commit - Commit the post to the blockchain

1. App setup

In this tutorial we use 2 packages:

- steem - steem-python library and interaction with the blockchain
- diff_match_patch - used to compute the difference between two text fields to create a patch

We import the libraries and connect to the testnet.

```python
import steembase
import steem
from diff_match_patch import diff_match_patch
```

2. User information and steem node

The user supplies an account name and the account's private POSTING key, which are used to connect to a Steem node.

```python
#capture user information
username = input('Please enter your username: ')
wif = input('Please enter your private POSTING key: ')
#demo account: 5JEZ1EiUjFKfsKP32b15Y7jybjvHQPhnvCYZ9BW62H1LDUnMvHz

#connect node and private active key
client = steem.Steem(nodes=[''], keys=[wif])
```

3. Post to update

The user inputs the author and permlink of the post that they wish to edit. It should be noted that a post cannot be patched once it has been archived. We suggest referring to the submit post tutorial to create a new post before trying the patch process.

```python
#check valid username
userinfo = client.get_account(username)
if userinfo is None:
    print('Oops. Looks like user ' + username + ' doesn\'t exist on this chain!')
    exit()

post_author = input('Please enter the AUTHOR of the post you want to edit: ')
post_permlink = input('Please enter the PERMLINK of the post you want to edit: ')

#get details of selected post
details = client.get_content(post_author, post_permlink)
print('\n' + 'Title: ' + details['title'])
o_body = details['body']
print('Body:' + '\n' + o_body + '\n')

n_body = input('Please enter new post content:' + '\n')
```

The user also inputs the updated text in the console/terminal. This then gives us the two text fields to compare.

4. Patching

The module is initialised and the new post text is checked for validity.

```python
#initialise the diff match patch module
dmp = diff_match_patch()

#check for null input
if n_body == '':
    print('\n' + 'No new post body supplied. Operation aborted')
    exit()
#check for equality
elif o_body == n_body:
    print('\n' + 'No changes made to post body. Operation aborted')
    exit()
```

The diff is calculated, and a test is done to check the diff length against the total length of the new text to determine whether it is better to patch or to simply replace the text field. The value to be sent to the blockchain is then assigned to the new_body parameter.

```python
#check for differences in the text field
diff = dmp.diff_main(o_body, n_body)
#reduce the number of edits by eliminating semantically trivial equalities
dmp.diff_cleanupSemantic(diff)

#check patch length
if dmp.diff_levenshtein(diff) < len(o_body):
    #create patch
    patch = dmp.patch_make(o_body, diff)
    #create new text based on patch
    patch_body = dmp.patch_apply(patch, o_body)
    new_body = patch_body[0]
else:
    new_body = n_body
```

5. New post commit

The only new parameter is the changed body text. All the other parameters for the commit are assigned directly from the original post entered by the user.

```python
#commit post to blockchain with all old values and new body text
client.commit.post(title=details['title'], body=new_body, author=details['author'],
                   permlink=details['permlink'], json_metadata=details['json_metadata'],
                   reply_identifier=(details['parent_author'] + '/' + details['parent_permlink']))

print('\n' + 'Content of the post has been successfully updated')
```

A simple confirmation is displayed on the screen for a successful commit.

We encourage users to play around with different values and data types to fully understand how this process works. You can also check the balances and transaction history on the testnet portal.

To Run the tutorial

- review dev requirements
- clone this repo
- cd tutorials/12_edit_content_patching
- pip install -r requirements.txt
- python index.py
- After a few moments, you should see a prompt for input in the terminal screen.
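The patch-or-replace decision in step 4 can be illustrated without the Steem or diff_match_patch libraries. The sketch below uses a small pure-Python edit-distance function as a stand-in for diff_levenshtein (the function and helper names here are hypothetical, not part of the tutorial's code); the threshold logic mirrors the tutorial: patch only when the edit is smaller than the original body.

```python
def levenshtein(a, b):
    """Number of inserted, deleted or substituted characters to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def choose_update(o_body, n_body):
    """Patch when the edit is small relative to the original body, else replace."""
    if levenshtein(o_body, n_body) < len(o_body):
        return 'patch'
    return 'replace'

print(choose_update('Hello Steem world', 'Hello Steem chain'))  # small edit -> patch
print(choose_update('old', 'a completely different, much longer body'))  # -> replace
```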
https://developers.steem.io/tutorials-python/edit_content_patching
Given a set of two dimensional vectors (or data points), a Voronoi graph is a separation of those points into compartments where all points inside one compartment are closer to the contained data point than to any other data point. I won't give any demonstration here, but if you want to know more about Voronoi graphs, check out this. The applications of Voronoi graphs are quite broad. They are very useful for a lot of optimization problems (in most cases the Delaunay Triangulation, which can easily be derived from a Voronoi graph, is used there), and they extend to computing topological maps from bitmaps. [This is an article for freaks. After a rather painful experience writing the thing, I hope it will benefit everyone who is looking for this algorithm in a civilized language (or simply does not want to use Fortune's original C implementation).] In 1987, Steve Fortune described an algorithm to compute such a graph by using a sweep line in combination with a binary tree. A PowerPoint explanation of the algorithm (the one I used to implement it) can be found here. Note that I did not use the linked data structure to represent a graph - I think that is an unnecessary difficulty in the age of ArrayLists and HashSets. Data points are represented by my own Vector class. It can do much more than is needed here (but there was no reason to strip it down before bringing it), and I won't explain it here. The most important fact is that, although it works with doubles, the Vector class automatically rounds values to 10 digits (or whatever is set in the Vector.Precision field). Yes, sadly, this is very important if you want to sort of compare doubles. A VoronoiGraph is a class that only contains a HashSet of vertices (as 2D vectors) and a HashSet of VoronoiEdges - each with references to the left and right data point and (of course) the two vertices that bound the edge.
If the edge is (partially or completely) unbounded, the vector Fortune.VVUnknown is used. BinaryPriorityQueue is used for the sweep line event queue. The algorithm itself (Fortune.ComputeVoronoiGraph(IEnumerable)) takes any IEnumerable containing only two dimensional vectors. It will return a VoronoiGraph. The algorithm's complexity is O(n ld(n)) with a factor of about 10 microseconds on my machine (2GHz). This article, along with any associated source code and files, is licensed under The Mozilla Public License 1.1 (MPL 1.1)
http://www.codeproject.com/Articles/11275/Fortune-s-Voronoi-algorithm-implemented-in-C?msg=4267441
In this article we're going to explore two approaches for dynamically loading SVG icons with Vue.js. We'll use the wonderful vue-svgicon package as a foundation for our SVG icon workflow. If you want to take a closer look at the example code, you can find it on GitHub. Or you can check out a live example of the code hosted on Netlify.

Dynamically loading icons with Vue.js

Installing vue-svgicon

There are multiple ways to integrate SVG icons into a website, but because we're using Vue.js, we want to use an approach which enables us to use components to load icons, like we're used to with Vue.js. Luckily, the vue-svgicon package makes it possible to automatically convert .svg files into Vue components.

```bash
npm install vue-svgicon --save
```

After we've installed vue-svgicon, we can use it to automatically generate icon components from SVG files for us. The best way to run the script is to add a new line to the scripts section of our package.json file.

```json
{
  "scripts": {
    "icons": "vsvg -s src/assets/icons -t src/components/icons",
    "prebuild": "npm run icons",
    "build": "node build/build.js"
  }
}
```

Configuring vue-svgicon

After we've installed vue-svgicon, we must configure it in the src/main.js file of our application.

```js
// src/main.js
import Vue from 'vue';
import * as svgicon from 'vue-svgicon';
import App from './App';

// We install `vue-svgicon` as plugin
// and configure it to prefix all CSS
// classes with `AppIcon-`.
Vue.use(svgicon, {
  classPrefix: 'AppIcon-',
});

new Vue({
  el: '#app',
  render: h => h(App),
});
```

For this demo application, I downloaded three icons from flaticon.com, ran them through SVGOMG to save some bytes, and put them into the src/assets/icons directory.

```bash
npm run icons
```

Now we can generate the Vue icon components by running the command above. Don't forget to add the directory which contains the automatically generated icon components to your .gitignore file, to prevent them from ending up in your Git repository.
```
# .gitignore
src/components/icons
```

Dynamically loading icons

Although most SVG icons are quite small in file size, their size can add up. To prevent them from slowing down the initial page load, we can use dynamic loading to lazy load icons which are not visible on initial page load.

Approach 1: Default component + watch

Let's start with the default way of using icons generated with vue-svgicon and enhance it with lazy loading powered by dynamic imports and the Vue.js watch feature.

```html
<svgicon v-if="showMagicHat" name="magic-hat"></svgicon>
<button @click="showMagicHat = !showMagicHat">
  Toggle Magic Hat Icon
</button>
```

In the template code above, you can see that we're defining a svgicon component tag which is only rendered if showMagicHat is true. The name property is used to define which icon should be rendered, in this case we're rendering the icon with the file name magic-hat.svg. The button beneath the icon toggles the value of showMagicHat between true and false.

```js
export default {
  name: 'App',
  data() {
    return {
      showMagicHat: false,
    };
  },
  watch: {
    // This method is triggered whenever
    // the value of `showMagicHat` changes.
    showMagicHat(value) {
      if (value) import(/* webpackChunkName: "svgicon-magic-hat" */ './components/icons/magic-hat');
    },
  },
};
```

In the code above, you can see that we're using a watcher function named showMagicHat() to trigger a dynamic import of the magic-hat icon component. By specifying a webpackChunkName, we can control the name of the chunk file which is generated by webpack. For the webpack chunk name feature to work, make sure that you're using the [name] placeholder in the webpack chunkFilename setting (you can look at the example config on GitHub).

If we run this code in production mode, you can see in the network tab of the developer tools of your browser that the icon is not loaded initially but is lazy loaded when the button is clicked the first time.
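The chunkFilename setting mentioned above might look roughly like the following sketch; the paths and filename pattern here are placeholders and not taken from the article's actual config, only the `[name]` placeholder itself matters:

```javascript
// webpack.config.js (sketch — adapt paths to your own setup)
module.exports = {
  output: {
    filename: 'js/[name].js',
    // `[name]` picks up the `webpackChunkName` comment
    // of dynamic imports, e.g. `svgicon-magic-hat`.
    chunkFilename: 'js/[name].js',
  },
};
```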
Approach 2: Wrapper component

Although the first approach is perfectly fine, we can still do better and add a layer of abstraction to make things a little bit easier to reuse.

```html
<app-icon v-if="showMusic" name="music"></app-icon>
<button @click="showMusic = !showMusic">Toggle Music Icon</button>
```

The template code above looks pretty much the same as what we've seen before. The only major difference is that we're using an app-icon tag, instead of the default svgicon tag, to load the icon component.

```js
import AppIcon from './components/AppIcon';

export default {
  name: 'App',
  components: {
    AppIcon,
  },
  data() {
    return {
      showMusic: false,
    };
  },
};
```

In this code snippet, you can see that we're not loading anything dynamically and we don't have to use a watcher function for dynamic loading. Instead we're importing and using a new AppIcon component. Let's take a closer look at the implementation of the AppIcon component in the following code snippet.

```html
<template>
  <svgicon
    :name="name"
    :class="[$options.name, size ? `${$options.name}--${size}` : null]"
  >
  </svgicon>
</template>

<script>
export default {
  name: 'AppIcon',
  props: {
    name: {
      type: String,
      required: true,
    },
    size: {
      type: String,
    },
  },
  created() {
    // The `[request]` placeholder is replaced
    // by the filename of the file which is
    // loaded (e.g. `AppIcon-music.js`).
    import(/* webpackChunkName: "AppIcon-[request]" */ `./icons/${this.name}`);
  },
};
</script>

<style>
.AppIcon {
  display: inline-block;
  height: 1em;
  color: inherit;
  vertical-align: middle;
  fill: none;
  stroke: currentColor;
}

.AppIcon--fill {
  fill: currentColor;
  stroke: none;
}

.AppIcon--s {
  height: 0.5em;
}

.AppIcon--m {
  height: 1em;
}

.AppIcon--l {
  height: 3em;
}
</style>
```

The AppIcon component you can see above is basically a simple wrapper around the svgicon component. Let's walk through the code. In the template, you can see that we're using the component's name, which is stored in $options.name, to define the CSS class. In addition to the default CSS class, we're also adding a size class, which can be controlled by passing one of the three sizes (s, m or l) as a property to the component.
Because we're using the svgicon tag as the root tag of the AppIcon component, all additional properties are directly passed to the svgicon component, so we can use all of the properties provided by the svgicon component. In the component's created() method, we're dynamically loading the icon component which matches the given name property. Because the created() method is not executed as long as the component isn't initialized, the icon isn't loaded until the component is rendered. In the style section of the component, we're adding some default styles which are recommended by vue-svgicon, and we add the necessary styling to make the size classes work.

Downsides of dynamic loading

Like with most things in life, dynamically loading SVG icons also has its downsides. Especially if you're displaying a lot of (different) icons initially, without requiring any user interaction for them to show up, loading all those icons in separate HTTP requests is most likely slower than loading them all in one JavaScript bundle. The wrapper component approach I've shown in this article isn't very flexible in that regard: all icons are always loaded dynamically, no matter if you're showing them instantly or after a certain user interaction.

Do you have any questions? You can find me on Twitter.

Recap

Depending on your situation, you might consider using one of the two approaches I've shown in this article. The first approach, using the Vue.js watch feature to dynamically load icons if needed and bundle them with the main bundle otherwise, is more flexible but also more complicated. Using a wrapper component for your icons makes them pretty straightforward to use. There might be a performance hit from making a lot of separate HTTP requests in certain situations, although the default Vue.js webpack template comes with the CommonsChunkPlugin preconfigured, which should take care of such situations.
Aside from dynamically loading icons only if they are needed, using a wrapper component has two other benefits. First of all, because we’ve added a layer of abstraction, we’re more flexible if we decide to use some other tool instead of vue-svgicon in the future. And second, this approach makes it possible to put the icon styling where it belongs: in a separate (icon) component.
https://markus.oberlehner.net/blog/dynamically-loading-svg-icons-with-vue/
17 May 2010 07:21 [Source: ICIS news]

By James Dennis

SINGAPORE (ICIS news)--Crude futures fell more than $1/bbl (€0.82/bbl) on Monday, pushing prices down to their lowest level in three months, as the US dollar strengthened amid growing worries over European debt and high US oil inventories, traders said.

At 05:40 GMT, June NYMEX light sweet crude futures were trading at $70.23/bbl, down $1.38/bbl. Earlier the

At the same time, July Brent on

Crude prices have fallen sharply since hitting intra-day highs for the year in early May, with NYMEX light sweet crude futures tumbling around 19% since then and Brent down roughly 14%.

Monday's crude price decline followed a sharp rise in the value of the US dollar, which was triggered by growing worries over oil demand from the debt-ridden economies of

The International Monetary Fund (IMF), in its latest Fiscal Monitor report, warned that developed countries faced an urgent need to cut their budget deficits. The IMF said that failure to control the deficits would harm economic recovery.

Equity markets in Asia fell sharply on Monday amid worries over European debt, with the Nikkei Index in

Meanwhile, recent increases in US crude and product inventories have raised worries about demand in the world's leading economy. Data from the US Energy Information Administration (EIA) released last week revealed that crude stocks held at the land-locked Cushing terminal in Oklahoma rose to a record level of 37m bbls.

The Cushing terminal is the delivery point of NYMEX light sweet crude, and the build-up of stocks there has pushed WTI progressively lower relative to Brent since mid-April.

($1 = €0.82)
http://www.icis.com/Articles/2010/05/17/9359898/crude-down-1bbl-lowest-in-3-months-on-strong-dollar.html
latest feature release GIT 1.5.0 is available at the usual places:

git-1.5.0.tar.{gz,bz2} (tarball)
git-htmldocs-1.5.0.tar.{gz,bz2} (preformatted docs)
git-manpages-1.5.0.tar.{gz,bz2} (preformatted docs)
RPMS/$arch/git-*-1.5.0-1.$arch.rpm (RPM)

----------------------------------------------------------------

Some optional features in this release, when used, make the repository unusable with older versions of git. Specifically, the available options are:

- There is a configuration variable core.legacyheaders; setting it to false makes loose objects be written in a new format that cannot be read by git older than v1.4.2. Clients over dumb transports (e.g. http) using older versions of git will also be affected.

- Since v1.4.3, configuration repack.usedeltabaseoffset allows packfiles to be created in a more space-efficient format, which cannot be read by git older than that version.

The above two are not enabled by default and you explicitly have to ask for them, because these two features make the repository unreadable by older versions of git.

- 'git pack-refs' appeared in v1.4.4; this command allows tags to be accessed much more efficiently than the traditional 'one-file-per-tag' format. Older git-native clients can still fetch from a repository that packed and pruned refs (the server side needs to run the up-to-date version of git), but older dumb transports cannot. Packing of refs is done by an explicit user action, either by use of the "git pack-refs --prune" command or by use of the "git gc" command.

- 'git -p' to paginate anything -- many commands do pagination by default on a tty. Introduced between v1.4.1 and v1.4.2; this may surprise old timers.

- 'git archive' superseded 'git tar-tree' in v1.4.3;

- 'git cvsserver' was a new invention in v1.3.0;

- 'git repo-config', 'git grep', 'git rebase' and 'gitk' were seriously enhanced during the v1.4.0 timeperiod.

- 'gitweb' became part of git.git during the v1.4.0 timeperiod and has been seriously modified since then.

- reflog is a v1.4.0 invention. This allows you to name a revision that a branch used to be at (e.g. "git diff master@{yesterday} master" allows you to see changes since yesterday's tip of the branch).
Updates in v1.5.0 since v1.4.4 series
-------------------------------------

* Index manipulation

- git-add is to add contents to the index (aka "staging area" for the next commit), whether the file the contents happen to be in is an existing one or a newly created one.

- git-add without any argument does not add everything anymore. Use 'git-add .' instead. Also you can add otherwise ignored files with an -f option.

- git-add tries to be more friendly to users by offering an interactive mode ("git-add -i").

- git-commit <path> used to refuse to commit if <path> was different between HEAD and the index (i.e. update-index was used on it earlier). This check was removed.

- git-rm is much saner and safer. It is used to remove paths from both the index file and the working tree, and makes sure you are not losing any local modification before doing so.

- git-reset <tree> <paths>... can be used to revert index entries for selected paths.

- git-update-index is much less visible. Many suggestions to use the command in git output and documentation have now been replaced by simpler commands such as "git add" or "git rm".

* Repository layout and objects transfer

- The data for the origin repository is stored in the configuration file $GIT_DIR/config, not in $GIT_DIR/remotes/, for newly created clones. The latter is still supported and there is no need to convert your existing repository if you are already comfortable with your workflow with the layout.

- git-clone always uses what is known as "separate remote" layout for a newly created repository with a working tree. A repository with the separate remote layout starts with only one default branch, 'master', to be used for your own development. Unlike the traditional layout that copied all the upstream branches into your branch namespace (while renaming their 'master' to your 'origin'), the new layout puts upstream branches into local "remote-tracking branches" with their own namespace.
These can be referenced with names such as "origin/$upstream_branch_name" and are stored in .git/refs/remotes rather than .git/refs/heads where normal branches are stored. This layout keeps your own branch namespace less cluttered, avoids name collision with your upstream, makes it possible to automatically track new branches created at the remote after you clone from it, and makes it easier to interact with more than one remote repository (you can use "git remote" to add other repositories to track). There might be some surprises:

* 'git branch' does not show the remote tracking branches. It only lists your own branches. Use the '-r' option to view the tracking branches.

* If you are forking off of a branch obtained from the upstream, you would have done something like 'git branch my-next next', because the traditional layout dropped the tracking branch 'next' into your own branch namespace. With the separate remote layout, you say 'git branch next origin/next', which allows you to use the matching name 'next' for your own branch. It also allows you to track a remote other than 'origin' (i.e. where you initially cloned from) and fork off of a branch from there the same way (e.g. "git branch mingw j6t/master").

Repositories initialized with the traditional layout continue to work.

- New branches that appear on the origin side after a clone is made are also tracked automatically. This is done with a wildcard refspec "refs/heads/*:refs/remotes/origin/*", which older git does not understand, so if you clone with 1.5.0, you would need to downgrade remote.*.fetch in the configuration file to specify each branch you are interested in individually if you plan to fetch into the repository with older versions of git (but why would you?).

- Similarly, a wildcard refspec "refs/heads/*:refs/remotes/me/*" can be given to the "git-push" command to update the tracking branches that are used to track the repository you are pushing from on the remote side.
- git-branch and git-show-branch know remote tracking branches (use the command line switch "-r" to list only tracked branches).

- git-push can now be used to delete a remote branch or a tag. This requires the updated git on the remote side (use "git push <remote> :refs/heads/<branch>" to delete "branch").

- git-push more aggressively keeps the transferred objects packed. Earlier we recommended to monitor the amount of loose objects and repack regularly, but you should repack when you have accumulated too many small packs this way as well. Updated git-count-objects helps you with this.

- git-fetch also more aggressively keeps the transferred objects packed. This behavior of git-push and git-fetch can be tweaked with a single configuration transfer.unpacklimit (but usually there should not be any need for a user to tweak it).

- A new command, git-remote, can help you manage your remote tracking branch definitions.

- You may need to specify explicit paths for upload-pack and/or receive-pack due to your ssh daemon configuration on the other end. This can now be done via remote.*.uploadpack and remote.*.receivepack configuration.

* Bare repositories

- Certain commands change their behavior in a bare repository (i.e. a repository without an associated working tree). We use a fairly conservative heuristic (if $GIT_DIR is ".git", or ends with "/.git", the repository is not bare) to decide if a repository is bare, but the "core.bare" configuration variable can be used to override the heuristic when it misidentifies your repository.

- git-fetch used to complain updating the current branch but this is now allowed for a bare repository. So is the use of 'git-branch -f' to update the current branch.

- Porcelain-ish commands that require a working tree refuse to work in a bare repository.

* Reflog

- Reflog records the history of what happened to the tip of each branch in the local repository, letting you name a revision a branch used to point at (e.g. "what was the tip of this branch yesterday at 1pm?"). This facility is enabled by default for repositories with working trees, and can be accessed with the "branch@{time}" and "branch@{Nth}" notation.
- "git show-branch" learned showing the reflog data with the new -g option. "git log" has a -g option to view reflog entries in a more verbose manner.

- git-branch knows how to rename branches and moves existing reflog data from the old branch to the new one.

- In addition to the reflog support in the v1.4.4 series, the HEAD reference maintains its own log. "HEAD@{5.minutes.ago}" means the commit you were at 5 minutes ago, which takes branch switching into account. If you want to know where the tip of your current branch was at 5 minutes ago, you need to explicitly say its name (e.g. "master@{5.minutes.ago}") or omit the refname altogether, i.e. "@{5.minutes.ago}".

- The commits referred to by reflog entries are now protected against pruning. The new command "git reflog expire" can be used to truncate older reflog entries and entries that refer to commits that have been pruned away previously with older versions of git. Existing repositories that have been using reflog may get complaints from fsck-objects and may not be able to run git-repack, if you had run git-prune from older git; please run "git reflog expire --stale-fix --all" first to remove reflog entries that refer to commits that are no longer in the repository when that happens.

* Crufts removal

- We used to say "old commits are retrievable using reflog and 'master@{yesterday}' syntax as long as you haven't run git-prune". We no longer have to say the latter half of the above sentence, as git-prune does not remove things reachable from reflog entries.

- 'git-prune' by default does not remove _everything_ unreachable, as there is a one-day grace period built-in.

- There is a toplevel garbage collector script, 'git-gc', that runs periodic cleanup functions, including 'git-repack -a -d', 'git-reflog expire', 'git-pack-refs --prune', and 'git-rerere gc'.
- The output from fsck ("fsck-objects" is called just "fsck" now, but the old name continues to work) was needlessly alarming in that it warned about missing objects that are reachable only from dangling objects. This has been corrected and the output is much more useful.

* Detached HEAD

- You can use 'git-checkout' to check out an arbitrary revision or a tag as well, instead of named branches. This will dissociate your HEAD from the branch you are currently on. A typical use of this feature is to "look around". E.g.

        $ git checkout v2.6.16
        ... compile, test, etc.
        $ git checkout v2.6.17
        ... compile, test, etc.

- After detaching your HEAD, you can go back to an existing branch with the usual "git checkout $branch". Also you can start a new branch using "git checkout -b $newbranch" to start a new branch at that commit.

- You can even pull from other repositories, make merges and commits while your HEAD is detached. Also you can use "git reset" to jump to an arbitrary commit, while still keeping your HEAD detached. Going back to the attached state (i.e. on a particular branch) with "git checkout $branch" can lose the current state you arrived at in these ways, and "git checkout" refuses when the detached HEAD is not pointed at by any existing ref (an existing branch, a remote tracking branch or a tag). This safety can be overridden with "git checkout -f $branch".

* Packed refs

- Repositories with hundreds of tags have been paying large overhead, both in storage and in runtime, due to the traditional one-ref-per-file format. A new command, git-pack-refs, can be used to "pack" them in a more efficient representation (you can let git-gc do this for you).

- Clones and fetches over dumb transports are now aware of packed refs and can download from repositories that use them.

* Configuration

- Configuration related to color settings is consolidated under the color.* namespace (older diff.color.*, status.color.* are still supported).

- The 'git-repo-config' command is accessible as 'git-config' now.
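As a sketch, the consolidated color settings sit in $GIT_DIR/config like this (the chosen values are only an illustration, not part of the release notes):

```
[color]
	diff = auto
	status = auto
```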
* Updated features - git-describe uses better criteria to pick a base ref. It used to pick the one with the newest timestamp, but now it picks the one that is topologically the closest (that is, among ancestors of commit C, the ref T that has the shortest output from "git-rev-list T..C" is chosen). - git-describe gives the number of commits since the base ref between the refname and the hash suffix. E.g. the commit one before v2.6.20-rc6 in the kernel repository is: v2.6.20-rc5-306-ga21b069 which tells you that its object name begins with a21b069, v2.6.20-rc5 is an ancestor of it (meaning, the commit contains everything -rc5 has), and there are 306 commits since v2.6.20-rc5. - git-describe with --abbrev=0 can be used to show only the name of the base ref. - git-blame learned a new option, --incremental, that tells it to output the blames as they are assigned. A sample script to use it is also included as contrib/blameview. - git-blame starts annotating from the working tree by default. * Less external dependency - We no longer require the "merge" program from the RCS suite. All 3-way file-level merges are now done internally. - The original implementation of git-merge-recursive which was in Python has been removed; we have a C implementation of it now. - git-shortlog is no longer a Perl script. It no longer requires output piped from git-log; it can accept revision parameters directly on the command line. * I18n - We have always encouraged the commit message to be encoded in UTF-8, but the users are allowed to use legacy encoding as appropriate for their projects. This will continue to be the case. However, a non UTF-8 commit encoding _must_ be explicitly set with i18n.commitencoding in the repository where a commit is made; otherwise git-commit-tree will complain if the log message does not look like a valid UTF-8 string. - The value of i18n.commitencoding in the originating repository is recorded in the commit object on the "encoding" header, if it is not UTF-8. 
git-log and friends notice this, and re-encode the message to the log output encoding when displaying, if they are different. The log output encoding is determined by "git log --encoding=<encoding>", i18n.logoutputencoding configuration, or i18n.commitencoding configuration, in the decreasing order of preference, and defaults to UTF-8. - Tools for e-mailed patch application now default to -u behavior; i.e. it always re-codes from the e-mailed encoding to the encoding specified with i18n.commitencoding. This unfortunately forces projects that have happily been using a legacy encoding without setting i18n.commitencoding to set the configuration, but taken with other improvements, please excuse us for this very minor one-time inconvenience. * E-mailed patches - See the above I18n section. - git-format-patch now enables --binary without being asked. git-am does _not_ default to it, as sending a binary patch via e-mail is unusual, and it is prudent to require the person who is applying the patch to explicitly ask for it. - The default suffix for git-format-patch output is now ".patch", not ".txt". This can be changed with --suffix=.txt option, or setting the config variable "format.suffix" to ".txt". * Foreign SCM interfaces - git-svn now requires the Perl SVN:: libraries, the command-line backend was too slow and limited. - the 'commit' subcommand of git-svn has been renamed to 'set-tree', and 'dcommit' is the recommended replacement for day-to-day work. - git fast-import backend. * User support - Quite a lot of documentation updates. - Bash completion scripts have been updated heavily. - Better error messages for often used Porcelainish commands. - Git GUI. This is a simple Tk based graphical interface for common Git operations. * Sliding mmap - We used to assume that we can mmap the whole packfile while in use, but with a large project this consumes huge virtual memory space and truly huge ones would not fit in the userland address space on 32-bit platforms.
We now mmap huge packfiles in pieces to avoid this problem. * Shallow clones - There is partial support for 'shallow' repositories that keep only recent history. A 'shallow clone' is created by specifying how deep that truncated history should be (e.g. "git clone --depth=5 git://some.where/repo.git"). Currently a shallow repository has a number of limitations: - Cloning and fetching _from_ a shallow clone are not supported (nor tested -- so they might work by accident but they are not expected to). - Pushing from or into a shallow clone is not expected to work. - Merging inside a shallow repository would work as long as a merge base is found in the recent history, but otherwise it will be like merging unrelated histories and may result in huge conflicts. but this would be more than adequate for people who want to look near the tip of a big project with a deep history and send patches in e-mail format. -
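A shallow clone can be tried out entirely locally; a plain local path bypasses the depth machinery, but the file:// transport honors it (git assumed on PATH; scratch directories only):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q src
( cd src &&
  git config user.email you@example.com &&
  git config user.name "You" &&
  for i in 1 2 3 4 5; do echo "$i" > f && git add f && git commit -qm "c$i"; done )
# file:// forces the "real" transport, so --depth takes effect:
git clone -q --depth=2 "file://$work/src" shallow
cd shallow
git rev-list --count HEAD              # truncated history
test -f .git/shallow && echo "this is a shallow repository"
```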
http://lwn.net/Articles/222086/
10 months, 3 weeks ago. CPU freezes on second attempt to use the DMA to write to UART. I have been trying exhaustively to program my STM32F7xx microcontroller to use DMA to transmit to UART. Three things are going on that I cannot explain or understand, and I hope somebody can help me out with this issue. - In the main while loop, I am printing three interrupt status flags. These flags are set if the corresponding ISR has been called. I added this to check if the ISR was called without adding blocking statements in the ISRs. None of the interrupts, however, are called. - The DMA only transmits 1 sequence of 513 bytes. When I modify the while loop in my main to only contain HAL_UART_Transmit_DMA(&handleUart4, dmxBuffer, 513);, nothing changes; the function is only called/executed once. - In the while loop, I print the status of the ISR flags. After printing, the CPU stops/locks up/shuts down/exits the while loop. At first, I thought I was congesting the AHB by using the UART to my terminal and the UART for the DMA controller. I disabled my terminal and used LEDs; this didn't change anything. Currently, I have three running hypotheses and am not sure how I can prove/disprove any of them: - Interrupts are disabled for my microcontroller - The interrupt vector table contains a different function pointer for the DMA1 Stream4 global interrupt than the function I have defined. - The CPU is reading/writing to SRAM while the DMA is reading from the same addresses before forwarding it to the UART. This can cause the AHB/APB1 bus to congest and cause the CPU to lock up. What is going on here?
#include "stm32f7xx.h" #include "mbed.h" uint8_t dmxBuffer[513]; volatile bool irqA = false; volatile bool irqB = false; volatile bool irqC = false; Serial pc(USBTX, USBRX, 115200); UART_HandleTypeDef handleUart4; DMA_HandleTypeDef handleDma; void initialiseGPIO() { GPIO_InitTypeDef GPIO_InitStruct; __GPIOA_CLK_ENABLE(); /**UART4 GPIO Configuration PA0 ------> USART4_TX */ GPIO_InitStruct.Pin = GPIO_PIN_0; GPIO_InitStruct.Mode = GPIO_MODE_AF_PP; GPIO_InitStruct.Pull = GPIO_PULLUP; GPIO_InitStruct.Speed = GPIO_SPEED_HIGH; GPIO_InitStruct.Alternate = GPIO_AF8_UART4; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); } void initialiseDMAController() { /* DMA controller clock enable */ __DMA1_CLK_ENABLE(); /* Peripheral DMA init*/ handleDma.Instance = DMA1_Stream4; handleDma.Init.Channel = DMA_CHANNEL_4; handleDma.Init.Direction = DMA_MEMORY_TO_PERIPH; handleDma.Init.PeriphInc = DMA_PINC_DISABLE; handleDma.Init.MemInc = DMA_MINC_ENABLE; handleDma.Init.PeriphDataAlignment = DMA_MDATAALIGN_BYTE; handleDma.Init.MemDataAlignment = DMA_MDATAALIGN_BYTE; handleDma.Init.Mode = DMA_NORMAL; handleDma.Init.Priority = DMA_PRIORITY_MEDIUM; handleDma.Init.FIFOMode = DMA_FIFOMODE_DISABLE; HAL_DMA_Init(&handleDma); //Define __HAL_LINKDMA(&handleUart4,hdmatx,handleDma); /* DMA interrupt init */ HAL_NVIC_SetPriority(DMA1_Stream4_IRQn, 0, 0); HAL_NVIC_EnableIRQ(DMA1_Stream4_IRQn); } void initialiseUart() { __UART4_CLK_ENABLE(); handleUart4.Instance = UART4; handleUart4.Init.BaudRate = 250000; handleUart4.Init.WordLength = UART_WORDLENGTH_8B; handleUart4.Init.StopBits = UART_STOPBITS_2; handleUart4.Init.Parity = UART_PARITY_NONE; handleUart4.Init.Mode = UART_MODE_TX; handleUart4.Init.HwFlowCtl = UART_HWCONTROL_NONE; handleUart4.Init.OverSampling = UART_OVERSAMPLING_16; HAL_UART_Init(&handleUart4); /* Peripheral interrupt init*/ HAL_NVIC_SetPriority(UART4_IRQn, 0, 0); HAL_NVIC_EnableIRQ(UART4_IRQn); } /* This function handles DMA1 stream4 global interrupt. 
*/ void DMA1_Stream4_IRQHandler(void) { irqA = true; HAL_DMA_IRQHandler(&handleDma); } /* This function handles the UART4 interrupts */ void UART4_IRQHandler(void) { irqB = true; HAL_UART_IRQHandler(&handleUart4); } //HAL_UART_TxCpltCallback /* This callback function is called when the DMA successfully transmits all scheduled bytes. */ void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart) { irqC = true; } int main(void) { /* Reset of all peripherals */ HAL_Init(); //Initialise peripherals initialiseGPIO(); initialiseDMAController(); initialiseUart(); //Fill buffer with test data for (int x = 0; x < 100; x++) { dmxBuffer[x] = x; } //Now instruct the UART peripheral to transmit 513 bytes using the DMA controller. HAL_UART_Transmit_DMA(&handleUart4, dmxBuffer, 513); while(1) { pc.printf("irqA: %d - irqB: %d - irqC: %d\r\n", irqA, irqB, irqC); wait_ms(100); //Wait to see if any of the interrupt handlers / callback functions are called //Check if all bytes are sent, if so, retransmit if (irqC) { irqC = false; HAL_UART_Transmit_DMA(&handleUart4, dmxBuffer, 513); } } } 1 Answer 10 months, 2 weeks ago. Hi Alex, After a quick review of your code we see two possible explanations: 1) The call to HAL_UART_TxCpltCallback() only occurs if the DMA is in circular mode (and you are in normal mode). Please see: - 2) We don't think you need to enable UART interrupts as the DMA is feeding the UART. This might be interfering in some way. Have you confirmed that you are receiving data from the first DMA transfer? Regards, Ralph, Team Mbed Hi Alex, Sorry - but I jumped the gun on my response! After closer inspection, in normal mode the DMA transmit complete function is manually setting the UART transmit complete interrupt - so you should be seeing that fire (and have it enabled): - Hi Ralph, Thanks for your answer. I only just found out that you tried to answer the question. Thanks a lot! As you suggested, I tried to disable the UART interrupt. This did not change anything.
According to my debugger, the STM32 is stuck in an infinite loop. If I pause my IDE, the program counter points to WWDG_IRQHandler(). I do not expect that it has anything to do with the window watchdog, as it's the first label in the infinite-loop assembly section. I read on the internet that it could be a hard fault. As the handlers are defined as weak, I tried overriding the hard fault handler. Unfortunately, the breakpoint I set in this handler is never hit either. How can I confirm if, and if so, which hard fault is active? posted by 11 Dec 2018 Hi Alex - please post your code if you still need assistance. posted by Ralph Fulchiero 29 Nov 2018 Thanks Ralph! I totally forgot to post the code. posted by Alex van Rijs 29 Nov 2018
https://os.mbed.com/questions/83337/CPU-freezes-on-second-attempt-to-use-the/
This is a discussion on Creating an object of a class inside another class - possible? within the C++ Programming forums, part of the General Programming Boards category.

Oops, sorry. :-P Just GET it OFF out my mind!!

Supposedly, it's bundled with the free edition of QT (or so said someone on their forums), but I downloaded the latest QT SDK from their site, and it didn't include QT Designer or QT Assistant. I'm searching again right now, on Google... EDIT: And that's not the code I used. (I'll post the code in a few) Last edited by Programmer_P; 08-15-2009 at 10:21 AM. Yea, according to a post found at this link: where can i download free qt designer? - Qt Designer - QtForum.org it's included with QT, but that is just not the case. I had hopes the link in the second post might be where I could download it from, but it says the page does not exist...

Header: Code:
#ifndef TTS_MOD_H
#define TTS_MOD_H
#include <QDialog>
class QPushButton;
class TextToSpeechDialog1 : public QDialog
{
    Q_OBJECT
public:
    TextToSpeechDialog1(QWidget* parent = 0);
private slots:
    void nextClicked();
private:
    QPushButton* nextButton;
};
#endif // TTS_MOD_H

Implementation: Code:
TextToSpeechDialog1::TextToSpeechDialog1(QWidget *parent) : QDialog(parent)
{
    nextButton = new QPushButton(tr("&Next"));
    nextButton->setDefault(true);
    connect(nextButton, SIGNAL(clicked()), this, SLOT(nextClicked()));
    QHBoxLayout *bottomLayout = new QHBoxLayout;
    bottomLayout->addWidget(cancelButton);
    bottomLayout->addWidget(nextButton);
    QVBoxLayout *mainLayout = new QVBoxLayout;
    mainLayout->addLayout(bottomLayout);
    setLayout(mainLayout);
    setWindowTitle(tr("Text to Speech Wizard (Screen 1)"));
    setFixedHeight(sizeHint().height());
}
void nextDialog()
{
    TextToSpeechDialog1* p = new TextToSpeechDialog1;
    p->hide(); //for some reason, though I wrote this line, the first dialog is not hidden
    TextToSpeechDialog2* dialog2 = new TextToSpeechDialog2;
    dialog2->show();
}
void
TextToSpeechDialog1::nextClicked() { nextDialog(); //call the nextDialog() } Download from the leftmost link, the complete SDK, not the rightmost which is only the library. I love Qt because it is best and simple coding convention, just prefix anything with 'Q' or 'q' without any namespaces, although it isn't reusable and extensible enough to be used for big application. Just GET it OFF out my mind!! There is a directory in the QT/version/bin folder called "designer", but it doesn't contain the .exe. EDIT: Oh, and I downloaded both the Windows and Linux versions, but neither included QT Designer. What do you mean by "big"? The application that I am writing in QT is going to be reasonably big (i'd say maybe 2,000 lines of code, not counting the included QT header files). Last edited by Programmer_P; 08-15-2009 at 11:17 AM. Reason: left something out So the dialog1 show the new dialog2 and then the dialog1 is hidden? 1. Make the dialog2 to be member of the 1 so it can be deleted. 2. Just call hide() without creating new instance. Just GET it OFF out my mind!! It should be in Qt/ver/qt/bin Just GET it OFF out my mind!!
About the "delete this" Code:std::auto_ptr< YourClass<T> > thisIsUglierAndUnfrendlySyntaxYouEverUsed(void); Just GET it OFF out my mind!! It's not unfriendly and it's not ugly. Compare it to something like this: void ThisIsADescriptiveFunctionName(); void TIADFN(); Which name is more descriptive here? If I see a function returning a shared pointer, I can rest assured that I won't have to do anything special with it or handle it with care. Also, delete this only works if it was allocated with new. And new is potentially 50x or more slower than creating the object on the stack. So why use it in the first place? Let the programmer be able to choose where to create the object. Also, auto_ptr is possibly bad in this case since it has exclusive ownership, not shared. Well, my last assumption was almost correct. However, since the hide function would be called without hitting the next button (due to me putting it in int main, as is), I had to think of a different method. So what I did instead was I created a function (named sendHideReqest()) which I put the hide line in. I then simply passed "dialog" (the first dialog's class's object) into the function when I called it from int main. Unfortunately, the end result was the same. I tried using an if statement to control when the sendHideRequest() function is called, but for some reason, it seems to call it anyway (maybe due to the nature of if statements?), even before I push the next button of the first dialog. 
Here's the relevant code: Code:int nextDialog() { TextToSpeechDialog2* dialog2 = new TextToSpeechDialog2; dialog2->show(); return 1; } void sendHideRequest(TextToSpeechDialog1* hide) { hide->hide(); //this line hides TextToSpeechDialog1 } void TextToSpeechDialog1::nextClicked() { nextDialog(); //call the nextDialog() } int main(int argc, char* argv[]) { QApplication app(argc, argv); TextToSpeechDialog1* dialog = new TextToSpeechDialog1; dialog->show(); if (nextDialog() == true) //the trouble line: it seems to by default call nextDialog() sendHideRequest(dialog); //pass the "dialog" object into the function return app.exec(); //hand control of the program over to QT } So what happens is when the program opens, you only see the second dialog (the first dialog is already hidden). The code was supposed to wait until I press Next on the first dialog to call the second dialog (and subsequently hide the first dialog), but it seems to call nextDialog() as a result of the if line, and so the end result is only the second dialog is displayed. But obviously, you just show the first window, then proceed to show the second window, then immediately return 1 (which really should be true and bool), and goes on to hiding the first dialog. You need to wait for some event that indicates that the "next" button has been pressed before proceeding. How that's done in Qt, I have no idea, though.
http://cboard.cprogramming.com/cplusplus-programming/118578-creating-object-class-inside-another-class-possible-2.html
LinearLrWarmup¶ - class paddle.fluid.dygraph.learning_rate_scheduler. LinearLrWarmup ( learning_rate, warmup_steps, start_lr, end_lr, begin=1, step=1, dtype='float32' ) [source] - Api_attr imperative. begin (int, optional) – The begin step. The initial value of global_step described above. The default value is 0. step (int, optional) – The step size used to calculate the new global_step in the description above. The default value is 1. dtype (str, optional) – The data type used to create the learning rate variable. The data type can be set as ‘float32’, ‘float64’. The default value is ‘float32’. - Returns Warm-up learning rate with the same data type as learning_rate. - Return type Variable Examples: import paddle.fluid as fluid learning_rate = 0.1 warmup_steps = 50 start_lr = 0 end_lr = 0.1 with fluid.dygraph.guard(): lr_decay = fluid.dygraph.LinearLrWarmup( learning_rate, warmup_steps, start_lr, end_lr) - create_lr_var ( lr ) create_lr_var¶ convert lr from float to variable - Parameters lr – learning rate - Returns learning rate variable
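The warm-up rule itself is a straight line from start_lr to end_lr over warmup_steps steps. A plain-Python sketch of that formula (an illustration of the documented behavior, not PaddlePaddle's source — in the real scheduler, the wrapped learning_rate takes over once warm-up ends):

```python
def linear_warmup(global_step, warmup_steps, start_lr, end_lr):
    """Linearly ramp the learning rate during warm-up.

    lr = start_lr + (end_lr - start_lr) * global_step / warmup_steps
    while global_step is inside the warm-up window; afterwards stay at end_lr.
    """
    if global_step < warmup_steps:
        return start_lr + (end_lr - start_lr) * global_step / warmup_steps
    return end_lr

print(linear_warmup(0, 50, 0.0, 0.1))   # 0.0  (begin of warm-up)
print(linear_warmup(25, 50, 0.0, 0.1))  # halfway up the ramp
print(linear_warmup(50, 50, 0.0, 0.1))  # 0.1  (warm-up finished)
```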
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/fluid/dygraph/learning_rate_scheduler/LinearLrWarmup_en.html
Powerful animations in React Native In this blog post we're going to present the main issues we ran into implementing complex animations in React Native at Xmartlabs. We'll show how Reanimated helps achieve smooth animations and at what cost. After reading this blog post you will be able to determine if React Native is a proper choice to create your app considering its animation requirements. So let's start with a little introduction to the matter. "When speaking of animations, the key to success is to avoid frame drops" What makes React Native so special regarding this topic of avoiding frame drops? To answer this question, first we need to get deep into the React Native architecture. React Native has two main threads: - UI Thread - Where all the native code runs - JavaScript Thread - Where all the JavaScript code runs. These two threads communicate with each other through JSON messages that are sent and received asynchronously over what is called the React Native Asynchronous Bridge, and all interactions between JavaScript and the UI are made in this way. What does this have to do with animations? Well, since we want to have an awesome user experience, we would need animations to run at 60fps. This means there's only ~16ms to calculate an animation and we have to render each animation frame within this 16ms, otherwise we are going to lose frames. This is when the Bridge gets in the way of animations: the asynchronous communication between the two threads makes it difficult to guarantee that the next frame is calculated in such a limited amount of time; the JS thread might be busy working on another task, or the device CPU might be too slow. What impact does this have in React Native?
It’s really huge, because if we have JavaScript driven animation using the requestAnimationFrame() we have no guarantees that we could achieve the frame calculation, especially in low-grade Android devices, and taking into account that we also use the JavaScript Thread to do all the things in our React Native app, such as API requests, storage updates, etc. So it’s very likely we’re going to lose some frames and experience some animation freeze. How can we solve this? So if the bridge is our major “trade-off”, how we can get rid of this? Well, there is a solution by using react-native-reanimated library which uses a declarative animations approach. What is the advantage of this? If we do our animations in a declarative way, when we interact with the device through UI gestures everything is executed in the UI Native thread and with this we can achieve the magic number of 60fps and avoid the losing frames. Write animations in a declarative way, how can we achieve this in React Native? React Native by default provides two API’s, one for gestures and one for animations, but please don’t ever use this because both rely on imperative code and on the communication between the JavaScript thread and the UI thread through the bridge. As we mentioned before in order to do it declarative we are going to use these two libraries: - React-Native-Reanimated, for animations. - React-Native-Gesture-Handler, for gestures. First of all, our code needs to be written with the Reanimated.API, what does this mean? We cannot use if-else, Views neither the + * == operators because they live in the JavaScript thread, we need to use the auxiliary functions that are provided by the Reanimated.API, let’s see some examples of these auxiliary functions: Explaining each parameter and details of each Reanimated APIis out of the scope of this blogpost. So if we apply this in a simple example this is how it looks. 
import React, { useState } from "react"; import { View, SafeAreaView, Text, StyleSheet } from "react-native"; import Animated from "react-native-reanimated"; import { useMemoOne } from "use-memo-one"; import { RectButton } from "react-native-gesture-handler"; export const Example = () => { const { Value, useCode, block, cond, Clock, not, clockRunning, startClock, set, interpolate, Extrapolate, add, eq, stopClock } = Animated; const animationDuration = 500; const [show, updateShow] = useState<boolean>(true); const { time, clock, progress } = useMemoOne( () => ({ time: new Value(0), clock: new Clock(), progress: new Value(0) }), [] ); const opacity = interpolate(progress, { inputRange: [0, 1], outputRange: show ? [0, 1] : [1, 0], extrapolate: Extrapolate.CLAMP }); useCode( block([ cond(not(clockRunning(clock)), [startClock(clock), set(time, clock)]), set( progress, interpolate(clock, { inputRange: [time, add(time, animationDuration)], outputRange: [0, 1], extrapolate: Extrapolate.CLAMP }) ), cond(eq(progress, 1), stopClock(clock)) ]), [show] ); return ( <View style={styles.container}> <View style={styles.mainContent}> <Animated.View style={{ opacity }}> <View style={styles.card} /> </Animated.View> </View> <RectButton onPress={() => updateShow(!show)}> <SafeAreaView style={styles.button} accessible> <View style={styles.button}> <Text style={styles.label}>{show ? "Hide" : "Show"}</Text> </View> </SafeAreaView> </RectButton> </View> ); }; Conclusions As we can see in the example above it's easier to write, understand, and maintain the animation using the default Animated.API. For simple animations it could work like a charm but when the animations get more complex and we need to guarantee they always run smoothly we're not able to achieve that by just using Animated.API. In those cases reanimated could be like water in the desert but don't forget that nothing comes without a cost and with reanimated the cost is the code complexity and the increase in the development time.
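As a side note on the key node in the example: interpolate maps a value from an input range to an output range, clamping at the edges. In plain JavaScript the arithmetic looks like this (an illustration of the mapping only, not Reanimated's implementation — on a device this math runs as declarative nodes on the UI thread):

```javascript
// Clamped linear interpolation, as with Extrapolate.CLAMP.
function interpolate(value, inputRange, outputRange) {
  const [inMin, inMax] = inputRange;
  const [outMin, outMax] = outputRange;
  let t = (value - inMin) / (inMax - inMin);
  t = Math.min(Math.max(t, 0), 1); // clamp progress to [0, 1]
  return outMin + t * (outMax - outMin);
}

console.log(interpolate(0.5, [0, 1], [0, 1]));  // 0.5
console.log(interpolate(2.0, [0, 1], [0, 1]));  // 1 (clamped)
console.log(interpolate(0.25, [0, 1], [1, 0])); // 0.75 (a fade-out)
```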
At first, it might feel a bit awful and kind of unnatural, but when you get accustomed to it you will be able to create powerful animations. In conclusion, we can achieve powerful animations in React Native but it comes with a little trade-off in complexity. So if you are looking to make an app that has simple animations and just a few complex ones, maybe with React Native + Reanimated you could get a nice-looking app, but if you want to make an app in which the animations are a core aspect of the product, React Native may not be the best choice for you. Well, I hope you now have a better idea of how to implement powerful animations in React Native! Are you doing something regarding animations in your RN projects and have learned something not covered in this post? Let me know in the comments. I'd be interested to get your perspective. Have questions about Reanimated? I'd be happy to answer those in the comments if I can.
https://blog.xmartlabs.com/2020/04/27/powerful-animations-in-RN/
Pedro Gonzalez2,097 Points I am stuck in the Foreach loop task. Any help is greatly appreciated My code can't run because i get a compiling error stating "Cannot convert type Treehouse.CodeChallenges.Frog' todouble' ". So my question is How can I fix my code so I don't get this compiling error and what does it mean? Thank you in advance to whoever replies. namespace Treehouse.CodeChallenges { class FrogStats { public static double GetAverageTongueLength(Frog[] frogs) { double total = 0; foreach(double i in frogs) { total = total + frogs[i].TongueLength; } return total/frogs.Length; } } } namespace Treehouse.CodeChallenges { public class Frog { public int TongueLength { get; } public Frog(int tongueLength) { TongueLength = tongueLength; } } } 1 Answer Jennifer NordellTreehouse Staff Hi there! Oh wow, you're really close here! Inside your foreach loop you're saying for every double i in the frogs array. But the frogs array doesn't contain a set of doubles. It contains a set of frogs of type Frog! So your code is trying to explicitly cast a Frog to a double, which cannot be done and results in the compiler error. If I modify your code just slightly, it passes. namespace Treehouse.CodeChallenges { class FrogStats { public static double GetAverageTongueLength(Frog[] frogs) { double total = 0; foreach(Frog i in frogs) { total = total + i.TongueLength; } return total/frogs.Length; } } } Now our i variable is of type Frog instead of a double. Hope this helps! Pedro Gonzalez2,097 Points Pedro Gonzalez2,097 Points Thank you so much for helping me.Hope you have a great day.
https://teamtreehouse.com/community/i-am-stuck-in-the-foreach-loop-task-any-help-is-greatly-appreciated
> From: Michael Meissner <meissner@cygnus.com> > > > > Hmm, I've come across a problem in practice with this. I'd like to > > solicit opinions on how to proceed. > > > > Suppose you have the following code: > > > > > #include <limits.h> > > > unsigned int i = UINT_MAX; > > > > and suppose that your system is a modern one supporting U so limits.h > > defines UINT_MAX to 4294967295U. Now you are hosed, there is no way > > to silence the warning. Even though UINT_MAX is defined in a system > > header, the use of it occurs in user code so the warning will appear. > > Ditto for uses in cpp #if conditionals. > > > > Any suggestions on how to avoid this problem? > > I don't believe the traditional limits.h had UINT_MAX, so a > traditional program shouldn't be refering to it. Programs intended to run on traditional C frequently refer to UINT_MAX. Currently gcc does, so does gettext which gcc uses. Perhaps this expanded code fragment will make clear how this happens in practice. > #ifdef HAVE_LIMITS_H > #include <limits.h> > #endif > > #ifndef UINT_MAX > #define UINT_MAX <some expression to calculate it without using U> > #endif > > unsigned int i = UINT_MAX; Although the backup definition of UINT_MAX often uses "sizeof" as part of the expression thus making it no longer suitable for #if conditionals, the example as I have written it is pretty common. The problem again is that on modern systems which *do* have UINT_MAX in limits.h, they use `U' and there's no way to avoid the warning when one adds -Wtraditional. I hate adding new warnings if there's no way to silence it in legitimate code. :-( --Kaveh -- Kaveh R. Ghazi Engagement Manager / Project Services ghazi@caip.rutgers.edu Qwest Internet Solutions
https://gcc.gnu.org/pipermail/gcc-patches/2000-August/034862.html
A TR1 Tutorial: Class std::tr1::tuple In a previous article, I gave a tutorial on the array class from the TR1 implementation released by Microsoft with the VC++ 2008 Feature Pack. In this article, I will talk about class tuple and the other classes and functions from the header with the same name, <tuple>. Type tuple A tuple is a sequence with a fixed number of elements of different types. The difference from an array is that a tuple is a heterogeneous sequence, whereas the array is a homogeneous sequence. Class tuple is a template class and is defined in the header <tuple> under the namespace std::tr1. Although theoretically a tuple can hold any number of elements (but finite), the implementation from VC++ 2008 Feature Pack only supports at most 15 elements. Premises In the examples from this tutorial, I will use a tuple with three elements, an int, a double, and a string. For the clarity of the samples, I will use this type definition: typedef std::tr1::tuple<int, double, std::string> tuple_ids; Also, I will use the following function for printing a tuple of the above type (I will provide details about accessing a tuple's elements in a following paragraph): void print_tuple(const std::tr1::tuple<int, double, std::string>& t) { std::cout << std::tr1::get<0>(t) << " " << std::tr1::get<1>(t) << " " << std::tr1::get<2>(t) << std::endl; } Creating tuples The simplest way to define a tuple is by using the default constructor: tuple_ids t; print_tuple(t); 0 0 Another option is using the constructor with parameters: tuple_ids t(1, 2.5, "three"); print_tuple(t); The output is: 1 2.5 three A third option is creating a tuple using function make_tuple from the header <tuple>. tuple_ids t = std::tr1::make_tuple(10, 20.5, "thirty"); print_tuple(t); 10 20.5 thirty Tuple size For the size of a tuple (number of elements), there is another template class called tuple_size, that has a member called value that represents the size of a tuple. 
std::cout << "size: " << std::tr1::tuple_size<tuple_ids>::value << std::endl;

size: 3

Accessing tuple elements

To access elements from a tuple, you have to use function get:

- Reading

tuple_ids t(1, 2.5, "three");
std::cout << std::tr1::get<0>(t) << " "
          << std::tr1::get<1>(t) << " "
          << std::tr1::get<2>(t) << std::endl;

the output is

1 2.5 three

- Writing

tuple_ids t;
print_tuple(t);

std::tr1::get<0>(t) = 10;
std::tr1::get<1>(t) = 20.5;
std::tr1::get<2>(t) = "thirty";

print_tuple(t);

the output is

0 0
10 20.5 thirty
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards

Includes the system headers <functional>, <new>, <cstddef>, <cstdlib>, and <exception>. Includes the Boost headers "detail/ct_gcd_lcm.hpp" (see ct_gcd_lcm.html), "detail/gcd_lcm.hpp" (see gcd_lcm.html), and "simple_segregated_storage.hpp" (see simple_segregated_storage.html).

   namespace details {

   template <typename SizeType>
   class PODptr
   {
   public:
      typedef SizeType size_type;

      PODptr(char * ptr, size_type size);
      PODptr();

      // Copy constructor, assignment operator, and destructor allowed

      bool valid() const;
      void invalidate();
      char * & begin();
      char * begin() const;
      char * end() const;
      size_type total_size() const;
      size_type element_size() const;
      size_type & next_size() const;
      char * & next_ptr() const;

      PODptr next() const;
      void next(const PODptr & arg) const;
   };

   } // namespace details

   template <typename UserAllocator = default_user_allocator_new_delete>
   class pool: protected simple_segregated_storage<typename UserAllocator::size_type>
   {
      ... // public interface

   protected:
      details::PODptr<size_type> list;

      simple_segregated_storage<size_type> & store();
      const simple_segregated_storage<size_type> & store() const;

      const size_type requested_size;
      size_type next_size;

      details::PODptr<size_type> find_POD(void * chunk) const;
      static bool is_from(void * chunk, char * i, size_type sizeof_i);
      size_type alloc_size() const;

   public:
      // extensions to public interface
      pool(size_type requested_size, size_type next_size);
      size_type get_next_size() const;
      void set_next_size(size_type);
   };

PODptr is a class that pretends to be a "pointer" to different class types that don't really exist. It provides member functions to access the "data" of the "object" it points to.
Since these "class" types are of differing sizes, and contain some information at the end of their memory (for alignment reasons), PODptr must contain the size of this "class" as well as the pointer to this "object". A PODptr holds the location and size of a memory block allocated from the system. Each memory block is split logically into three sections: the chunk area, the next pointer, and the next size (all described below). The PODptr class just provides cleaner ways of dealing with raw memory blocks.

A PODptr object is either valid or invalid. An invalid PODptr is analogous to a null pointer. The default constructor for PODptr results in an invalid object. Calling the member function invalidate makes that object invalid. The member function valid can be used to test for validity. A PODptr may be created to point to a memory block by passing the address and size of that memory block into the constructor; a PODptr constructed in this way is valid. A PODptr may also be created by a call to the member function next, which returns a PODptr that points to the next memory block in the memory block list, or an invalid PODptr if there is no such block.

Each PODptr keeps the address and size of its memory block. The address may be read or written through the member function begin. The size of the memory block may only be read, via the member function total_size. The chunk area may be accessed through the member functions begin and end, in conjunction with element_size. The value returned by end is always the value returned by begin plus element_size. Only begin is writeable; end is a past-the-end value. Using pointers beginning at begin and ending before end allows one to iterate through the chunks in a memory block. The next pointer area may be accessed through the member function next_ptr, and the next size area through the member function next_size. Both of these are writeable. They may both be read or set at the same time through the member function next.
list: This is the list of memory blocks that have been allocated by this Pool object. It is not the same as the list of free memory chunks (exposed by simple segregated storage as first).

store: These are convenience functions, used to return the base simple segregated storage object.

requested_size: The first argument passed into the constructor. Represents the number of bytes in each chunk requested by the user. The actual size of the chunks may be different; see alloc_size, below.

next_size: The number of chunks to request from the UserAllocator the next time we need to allocate system memory. See the extensions descriptions, above.

find_POD: Searches through the memory block list, looking for the block that chunk was allocated from or may be allocated from in the future. Returns that block if found, or an invalid value if chunk has been allocated from another Pool or may be allocated from another Pool in the future. Results for other values of chunk may be wrong.

is_from: Tests chunk to see if it has been allocated from the memory chunk at i with an element size of sizeof_i. Note that sizeof_i is the size of the chunk area of that block, not the total size of that block. Returns true if chunk has been allocated from that memory block or may be allocated from that block in the future. Returns false if chunk has been allocated from another block or may be allocated from another block in the future. Results for other values of chunk may be wrong.

alloc_size: Returns the calculated size of the memory chunks that will be allocated by this Pool. For alignment reasons, this is defined to be lcm(requested_size, sizeof(void *), sizeof(size_type)).

Revised 05 December, 2006

Copyright © 2000, 2001 Stephen Cleary (scleary AT jerviswebb DOT com)

Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at)
Specification link: 15.4 The listener element

This test checks that an element with the name 'listener' is not interpreted as an SVG element, and that it is not the same as an element named 'listener' in the XML Events namespace. The test has passed if the handlers are implemented (the handler with xml:id="passhandler" has run) but the handler with xml:id="failhandler" has not run. If the failing handler is run, the text in the testcase will show "Test failed: 'listener' is not an svg element.". The pass condition is indicated by two rects that must both be green after running the test; if either of the rects is red, the test has failed.
Python offers itself not only as a popular scripting language, but also supports the object-oriented programming paradigm. Classes in Python describe data and provide methods to manipulate that data, all encompassed under a single object. Furthermore, classes allow for abstraction by separating concrete implementation details from abstract representations of data. Code utilizing classes is generally easier to read, understand, and maintain.

Classes in Python: Introduction to classes

A class functions as a template that defines the basic characteristics of a particular object. Here's an example:

   class Person(object):
       """A simple class."""              # docstring
       species = "Homo Sapiens"           # class attribute

       def __init__(self, name):          # special method
           """This is the initializer. It's a special
           method (see below).
           """
           self.name = name               # instance attribute

       def __str__(self):                 # special method
           """This method is run when Python tries to cast
           the object to a string. Return this string when
           using print(), etc.
           """
           return self.name

       def rename(self, renamed):         # regular method
           """Reassign and print the name attribute."""
           self.name = renamed
           print("Now my name is {}".format(self.name))

There are a few things to note when looking at the above example.

- The class is made up of attributes (data) and methods (functions).
- Attributes and methods are simply defined as normal variables and functions.
- As noted in the corresponding docstring, the __init__() method is called the initializer. It's equivalent to the constructor in other object-oriented languages, and is the method that is first run when you create a new object, or new instance of the class.
- Attributes that apply to the whole class are defined first, and are called class attributes.
- Attributes that apply to a specific instance of a class (an object) are called instance attributes.
They are generally defined inside __init__(); this is not necessary, but it is recommended (since attributes defined outside of __init__() run the risk of being accessed before they are defined).

Every method included in the class definition passes the object in question as its first parameter. The word self is used for this parameter (usage of self is actually by convention, as the word self has no inherent meaning in Python, but this is one of Python's most respected conventions, and you should always follow it).

Those used to object-oriented programming in other languages may be surprised by a few things. One is that Python has no real concept of private elements, so everything, by default, imitates the behavior of the C++/Java public keyword. For more information, see the "Private Class Members" example on this page.

Some of the class's methods have the following form: __functionname__(self, other_stuff). All such methods are called "magic methods" and are an important part of classes in Python. For instance, operator overloading in Python is implemented with magic methods. For more information, see the relevant documentation.

Now let's make a few instances of our Person class!

Instances

   >>> kelly = Person("Kelly")
   >>> joseph = Person("Joseph")
   >>> john_doe = Person("John Doe")

We currently have three Person objects: kelly, joseph, and john_doe.

We can access the attributes of the class from each instance using the dot operator. Note again the difference between class and instance attributes:

Attributes

   >>> kelly.species
   'Homo Sapiens'
   >>> john_doe.species
   'Homo Sapiens'
   >>> joseph.species
   'Homo Sapiens'
   >>> kelly.name
   'Kelly'
   >>> joseph.name
   'Joseph'

We can execute the methods of the class using the same dot operator:

Methods

   >>> john_doe.__str__()
   'John Doe'
   >>> print(john_doe)
   John Doe
   >>> john_doe.rename("John")
   Now my name is John

Classes in Python: Bound, unbound, and static methods

The idea of bound and unbound methods was removed in Python 3.
In Python 3, when you declare a method within a class, you are using a def keyword, thus creating a function object. This is a regular function, and the surrounding class works as its namespace. In the following example we declare method f within class A, and it becomes a function A.f:

Python 3.x Version ≥ 3.0

   class A(object):
       def f(self, x):
           return 2 * x

   >>> A.f
   <function A.f at ...>

In Python 2 the behavior was different: function objects within the class were implicitly replaced with objects of type instancemethod, which were called unbound methods because they were not bound to any particular class instance. It was possible to access the underlying function using the .__func__ property.

Python 2.x Version ≥ 2.3

   >>> A.f
   <unbound method A.f>
   >>> A.f.__class__
   <type 'instancemethod'>
   >>> A.f.__func__
   <function f at ...>

The latter behaviors are confirmed by inspection: methods are recognized as functions in Python 3, while the distinction is upheld in Python 2.

Python 3.x Version ≥ 3.0

   >>> import inspect
   >>> inspect.isfunction(A.f)
   True
   >>> inspect.ismethod(A.f)
   False

Python 2.x Version ≥ 2.3

   >>> import inspect
   >>> inspect.isfunction(A.f)
   False
   >>> inspect.ismethod(A.f)
   True

In both versions of Python, the function/method A.f can be called directly, provided that you pass an instance of class A as the first argument.

   >>> A.f(1, 7)
   Python 2: TypeError: unbound method f() must be called with
             A instance as first argument (got int instance instead)
   Python 3: 14
   >>> a = A()
   >>> A.f(a, 20)
   Python 2 & 3: 40

Now suppose a is an instance of class A; what is a.f then? Well, intuitively this should be the same method f of class A, only it should somehow "know" that it was applied to the object a. In Python this is called a method bound to a.
The nitty-gritty details are as follows: writing a.f invokes the magic __getattribute__ method of a, which first checks whether a has an attribute named f (it doesn't), then checks the class A whether it contains a method with such a name (it does), and creates a new object m of type method which has the reference to the original A.f in m.__func__, and a reference to the object a in m.__self__. When this object is called as a function, it simply does the following: m(...) => m.__func__(m.__self__, ...). Thus this object is called a bound method because when invoked it knows to supply the object it was bound to as the first argument. (These things work the same way in Python 2 and 3.)

   >>> a = A()
   >>> a.f
   <bound method A.f of <__main__.A object at ...>>
   >>> a.f(2)
   4

Note: the bound method object a.f is recreated every time you access it:

   >>> a.f is a.f
   False

As a performance optimization you can store the bound method in the object's __dict__, in which case the method object will remain fixed:

   >>> a.f = a.f
   >>> a.f is a.f
   True

Finally, Python has class methods and static methods, which are special kinds of methods. Class methods work the same way as regular methods, except that when invoked on an object they bind to the class of the object instead of to the object. Thus m.__self__ = type(a). When you call such a bound method, it passes the class of a as the first argument. Static methods are even simpler: they don't bind anything at all, and simply return the underlying function without any transformations.

   class D(object):
       multiplier = 2

       @classmethod
       def f(cls, x):
           return cls.multiplier * x

       @staticmethod
       def g(name):
           print("Hello, %s" % name)

   >>> D.f
   <bound method type.f of <class '__main__.D'>>
   >>> D.f(12)
   24
   >>> D.g
   <function D.g at ...>
   >>> D.g("world")
   Hello, world

Note that class methods are bound to the class even when accessed on the instance:

   >>> d = D()
   >>> d.multiplier = 1337
   >>> (D.multiplier, d.multiplier)
   (2, 1337)
   >>> d.f
   <bound method type.f of <class '__main__.D'>>
   >>> d.f(10)
   20

It is worth noting that at the lowest level, functions, methods, staticmethods, etc. are actually descriptors that invoke the __get__, __set__, and optionally __delete__ special methods.
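The lookup machinery described above can be verified directly in Python 3. This short sketch shows that the dotted lookup a.f is just the result of the descriptor call A.f.__get__(a, A), and that the bound method object is rebuilt on every access unless you cache it yourself:

```python
class A(object):
    def f(self, x):
        return 2 * x

a = A()

# a.f is produced by the descriptor protocol on the plain function A.f:
bound = A.f.__get__(a, A)
print(bound(10) == a.f(10) == 20)   # True

# The bound method object is recreated on each dotted lookup...
print(a.f is a.f)                   # False

# ...until we cache it in the instance __dict__:
a.f = a.f
print(a.f is a.f)                   # True
```

The same mechanism explains why `a.f = a.f` is a valid optimization: it copies one particular bound method object into the instance dictionary, where step 2 of the lookup finds it before the class is consulted.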
For more details on classmethods and staticmethods:

- What is the difference between @staticmethod and @classmethod in Python?
- Meaning of @classmethod and @staticmethod for beginner?

Classes in Python: Basic inheritance

(Subtopics: inheritance, built-in functions that work with inheritance, and monkey patching.)

Monkey Patching

In this case, "monkey patching" means adding a new variable or method to a class after it's been defined. For instance, say we defined class A as

   class A(object):
       def __init__(self, num):
           self.num = num

       def __add__(self, other):
           return A(self.num + other.num)

But now we want to add another function later in the code. Suppose this function is as follows.

   def get_num(self):
       return self.num

But how do we add this as a method in A? That's simple: we just essentially place that function into A with an assignment statement.

   A.get_num = get_num

Why does this work? Because functions are objects just like any other object, and methods are functions that belong to the class. The function get_num shall be available to all existing (already created) as well as to new instances of A. These additions are available on all instances of that class (or its subclasses) automatically. For example:

   foo = A(42)
   A.get_num = get_num
   bar = A(6)

   foo.get_num()  # 42
   bar.get_num()  # 6

Note that, unlike some other languages, this technique does not work for certain built-in types, and it is not considered good style.

Classes in Python: New-style vs. old-style classes

Python 2.x Version ≥ 2.2.0

New-style classes were introduced in Python 2.2 to unify classes and types. They inherit from the top-level object type. A new-style class is a user-defined type, and is very similar to built-in types.

   # new-style class
   class New(object):
       pass

   # new-style instance
   >>> new = New()
   >>> new.__class__
   <class '__main__.New'>
   >>> type(new)
   <class '__main__.New'>
   >>> issubclass(New, object)
   True

Old-style classes do not inherit from object. Old-style instances are always implemented with a built-in instance type.
   # old-style class
   class Old:
       pass

   # old-style instance
   >>> old = Old()
   >>> old.__class__
   <class __main__.Old at ...>
   >>> type(old)
   <type 'instance'>
   >>> issubclass(Old, object)
   False

Python 3.x Version ≥ 3.0.0

In Python 3, old-style classes were removed. New-style classes in Python 3 implicitly inherit from object, so there is no need to specify MyClass(object) anymore.

   class MyClass:
       pass

   >>> my_inst = MyClass()
   >>> type(my_inst)
   <class '__main__.MyClass'>
   >>> my_inst.__class__
   <class '__main__.MyClass'>
   >>> issubclass(MyClass, object)
   True

Class methods: alternate initializers

Class methods present alternate ways to build instances of classes. To illustrate, let's look at an example. Let's suppose we have a relatively simple Person class:

   class Person(object):
       def __init__(self, first_name, last_name, age):
           self.first_name = first_name
           self.last_name = last_name
           self.age = age
           self.full_name = first_name + " " + last_name

       def greet(self):
           print("Hello, my name is " + self.full_name + ".")

It might be handy to have a way to build instances of this class specifying a full name instead of first and last name separately. One way to do this would be to have last_name be an optional parameter, and assume that if it isn't given, we passed the full name in:

   class Person(object):
       def __init__(self, first_name, age, last_name=None):
           if last_name is None:
               self.first_name, self.last_name = first_name.split(" ", 2)
           else:
               self.first_name = first_name
               self.last_name = last_name
           self.full_name = self.first_name + " " + self.last_name
           self.age = age

       def greet(self):
           print("Hello, my name is " + self.full_name + ".")

However, there are two main problems with this bit of code:

- The parameters first_name and last_name are now misleading, since you can enter a full name for first_name. Also, if there are more cases and/or more parameters that have this kind of flexibility, the if/elif/else branching can get annoying fast.
- Not quite as important, but still worth pointing out: what if last_name is None, but first_name doesn't split into two or more things via spaces?
We have yet another layer of input validation and/or exception handling...

Enter class methods. Rather than having a single initializer, we will create a separate initializer, called from_full_name, and decorate it with the (built-in) classmethod decorator.

   class Person(object):
       def __init__(self, first_name, last_name, age):
           self.first_name = first_name
           self.last_name = last_name
           self.age = age
           self.full_name = first_name + " " + last_name

       @classmethod
       def from_full_name(cls, name, age):
           if " " not in name:
               raise ValueError
           first_name, last_name = name.split(" ", 2)
           return cls(first_name, last_name, age)

       def greet(self):
           print("Hello, my name is " + self.full_name + ".")

Notice cls instead of self as the first argument to from_full_name. Class methods are applied to the overall class, not an instance of a given class (which is what self usually denotes). So, if cls is our Person class, then the returned value from the from_full_name class method is Person(first_name, last_name, age), which uses Person's __init__ to create an instance of the Person class. In particular, if we were to make a subclass Employee of Person, then from_full_name would work in the Employee class as well.

To show that this works as expected, let's create instances of Person in more than one way without the branching in __init__:

   In [2]: bob = Person("Bob", "Bobberson", 42)
   In [3]: alice = Person.from_full_name("Alice Henderson", 31)
   In [4]: bob.greet()
   Hello, my name is Bob Bobberson.
   In [5]: alice.greet()
   Hello, my name is Alice Henderson.

Other references: Python @classmethod and @staticmethod for beginner?

Classes in Python: Multiple Inheritance

Python uses the C3 linearization algorithm to determine the order in which to resolve class attributes, including methods. This is known as the Method Resolution Order (MRO). Here's a simple example:

   class Foo(object):
       foo = 'attr foo of Foo'

   class Bar(object):
       foo = 'attr foo of Bar'  # we won't see this.
       bar = 'attr bar of Bar'

   class FooBar(Foo, Bar):
       foobar = 'attr foobar of FooBar'

Now if we instantiate FooBar and look up the foo attribute, we see that Foo's attribute is found first:

   >>> fb = FooBar()
   >>> fb.foo
   'attr foo of Foo'

Here's the MRO of FooBar:

   >>> FooBar.mro()
   [<class '__main__.FooBar'>, <class '__main__.Foo'>,
    <class '__main__.Bar'>, <class 'object'>]

It can be simply stated that Python's MRO algorithm is depth first (e.g. FooBar then Foo) unless a shared parent (object) is blocked by a child (Bar), and no circular relationships are allowed. That is, for example, Bar cannot inherit from FooBar while FooBar inherits from Bar. For a comprehensive example in Python, see the wikipedia entry.

Another powerful feature in inheritance is super. super can fetch parent class features.

   class Foo(object):
       def foo_method(self):
           print "foo Method"

   class Bar(object):
       def bar_method(self):
           print "bar Method"

   class FooBar(Foo, Bar):
       def foo_method(self):
           super(FooBar, self).foo_method()

With multiple inheritance, when every class has its own __init__ method, only the __init__ method of the class that is inherited first gets called. For the example below, only the Foo class's __init__ method is called; the Bar class's __init__ is not:

   class Foo(object):
       def __init__(self):
           print "foo init"

   class Bar(object):
       def __init__(self):
           print "bar init"

   class FooBar(Foo, Bar):
       def __init__(self):
           print "foobar init"
           super(FooBar, self).__init__()

   a = FooBar()

Output:

   foobar init
   foo init

But it doesn't mean that the Bar class is not inherited. An instance of the final FooBar class is also an instance of the Bar class and the Foo class.

   print isinstance(a, FooBar)
   print isinstance(a, Foo)
   print isinstance(a, Bar)

Output:

   True
   True
   True

Classes in Python: Properties

Python classes support properties, which look like regular object variables, but with the possibility of attaching custom behavior and documentation.
   class MyClass(object):
       def __init__(self):
           self._my_string = ""

       @property
       def string(self):
           """A profoundly important string."""
           return self._my_string

       @string.setter
       def string(self, new_value):
           assert isinstance(new_value, str), \
               "Give me a string, not a %r!" % type(new_value)
           self._my_string = new_value

       @string.deleter
       def string(self):
           self._my_string = None

Objects of class MyClass will appear to have a property .string; however, its behavior is now tightly controlled:

   mc = MyClass()
   mc.string = "String!"
   print(mc.string)
   del mc.string

As well as the useful syntax above, the property syntax allows validation or other augmentations to be added to those attributes. This could be especially useful with public APIs, where a level of help should be given to the user.

Another common use of properties is to enable the class to present 'virtual attributes': attributes which aren't actually stored but are computed only when requested.

   class Character(object):
       def __init__(self, name, max_hp):
           self._name = name
           self._hp = max_hp
           self._max_hp = max_hp

       # Make hp read-only by not providing a set method
       @property
       def hp(self):
           return self._hp

       # Make name read-only by not providing a set method
       @property
       def name(self):
           return self._name

       def take_damage(self, damage):
           self._hp -= damage
           self._hp = 0 if self._hp < 0 else self._hp

       @property
       def is_alive(self):
           return self.hp != 0

       @property
       def is_wounded(self):
           return self.hp < self._max_hp if self.hp > 0 else False

       @property
       def is_dead(self):
           return not self.is_alive

   >>> bilbo = Character('Bilbo Baggins', 100)
   >>> bilbo.hp
   100
   >>> bilbo.hp = 200
   AttributeError: can't set attribute

The hp attribute is read-only.
   >>> bilbo.is_alive
   True
   >>> bilbo.is_wounded
   False
   >>> bilbo.is_dead
   False
   >>> bilbo.take_damage(50)
   >>> bilbo.hp
   50
   >>> bilbo.is_alive
   True
   >>> bilbo.is_wounded
   True
   >>> bilbo.is_dead
   False
   >>> bilbo.take_damage(50)
   >>> bilbo.hp
   0
   >>> bilbo.is_alive
   False
   >>> bilbo.is_wounded
   False
   >>> bilbo.is_dead
   True

Classes in Python: Default values for instance variables

If the variable contains a value of an immutable type (e.g. a string), then it is okay to assign a default value like this:

   class Rectangle(object):
       def __init__(self, width, height, color='blue'):
           self.width = width
           self.height = height
           self.color = color

       def area(self):
           return self.width * self.height

   # Create some instances of the class
   default_rectangle = Rectangle(2, 3)
   print(default_rectangle.color)  # blue

   red_rectangle = Rectangle(2, 3, 'red')
   print(red_rectangle.color)  # red

One needs to be careful when initializing mutable objects such as lists in the constructor. Consider the following example:

   class Rectangle2D(object):
       def __init__(self, width, height, pos=[0, 0], color='blue'):
           self.width = width
           self.height = height
           self.pos = pos
           self.color = color

   r1 = Rectangle2D(5, 3)
   r2 = Rectangle2D(7, 8)
   r1.pos[0] = 4
   r1.pos  # [4, 0]
   r2.pos  # [4, 0]  r2's pos has changed as well

This behavior is caused by the fact that in Python default parameter values are evaluated once, at function definition, and the same object is then shared by every call that omits the argument. To get a default instance variable that's not shared among instances, one should use a construct like this:

   class Rectangle2D(object):
       def __init__(self, width, height, pos=None, color='blue'):
           self.width = width
           self.height = height
           self.pos = pos or [0, 0]  # default value is [0, 0]
           self.color = color

   r1 = Rectangle2D(5, 3)
   r2 = Rectangle2D(7, 8)
   r1.pos[0] = 4
   r1.pos  # [4, 0]
   r2.pos  # [0, 0]  r2's pos hasn't changed

See also Mutable Default Arguments and "Least Astonishment" and the Mutable Default Argument.
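One caveat with the pos or [0, 0] idiom shown above: because an empty list is falsy, a caller who deliberately passes pos=[] would silently get [0, 0] instead. A common alternative (an illustrative variant, not from the original text) is an explicit None check:

```python
class Rectangle2D(object):
    def __init__(self, width, height, pos=None, color='blue'):
        self.width = width
        self.height = height
        # Explicit None test: a deliberately passed empty list is kept,
        # and each instance still gets its own fresh default list.
        self.pos = pos if pos is not None else [0, 0]
        self.color = color

r1 = Rectangle2D(5, 3)
r2 = Rectangle2D(7, 8)
r1.pos[0] = 4
print(r1.pos)  # [4, 0]
print(r2.pos)  # [0, 0] -- unaffected

r3 = Rectangle2D(1, 1, pos=[])
print(r3.pos)  # [] -- the empty list is preserved, unlike with `or`
```

The None sentinel keeps the "one fresh list per instance" behavior while respecting any falsy value the caller passes on purpose.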
Class and instance variables

Instance variables are unique to each instance, while class variables are shared by all instances.

   class C:
       x = 2  # class variable

       def __init__(self, y):
           self.y = y  # instance variable

   >>> C.x
   2
   >>> C.y
   AttributeError: type object 'C' has no attribute 'y'
   >>> c1 = C(3)
   >>> c1.x
   2
   >>> c1.y
   3
   >>> c2 = C(4)
   >>> c2.x
   2
   >>> c2.y
   4

Class variables can be accessed on instances of this class, but assigning to the attribute through an instance will create an instance variable which shadows the class variable:

   >>> c2.x = 4
   >>> c2.x
   4
   >>> C.x
   2

Note that mutating class variables from instances can lead to some unexpected consequences.

   class D:
       x = []

       def __init__(self, item):
           self.x.append(item)  # note that this is not an assignment!

   >>> d1 = D(1)
   >>> d2 = D(2)
   >>> d1.x
   [1, 2]
   >>> d2.x
   [1, 2]
   >>> D.x
   [1, 2]

Classes in Python: Class composition

Class composition allows explicit relations between objects. In this example, people live in cities that belong to countries. Composition allows people to access the number of all people living in their country:

   class Country(object):
       def __init__(self):
           self.cities = []

       def addCity(self, city):
           self.cities.append(city)

   class City(object):
       def __init__(self, numPeople):
           self.people = []
           self.numPeople = numPeople

       def addPerson(self, person):
           self.people.append(person)

       def join_country(self, country):
           self.country = country
           country.addCity(self)
           for i in range(self.numPeople):
               Person(i).join_city(self)

   class Person(object):
       def __init__(self, ID):
           self.ID = ID

       def join_city(self, city):
           self.city = city
           city.addPerson(self)

       def people_in_my_country(self):
           x = sum([len(c.people) for c in self.city.country.cities])
           return x

   US = Country()
   NYC = City(10).join_country(US)
   SF = City(5).join_country(US)

   print(US.cities[0].people[0].people_in_my_country())
   # 15

Listing All Class Members

The dir() function can be used to get a list of the members of a class:

   dir(Class)

For example:

   >>> dir(list)
   ['__add__', '__class__', ...]

It is common to look only for "non-magic" members.
This can be done using a simple comprehension that lists members with names not starting with __:

   >>> [m for m in dir(list) if not m.startswith('__')]
   ['append', 'clear', 'copy', 'count', 'extend', 'index',
    'insert', 'pop', 'remove', 'reverse', 'sort']

Caveats: Classes can define a __dir__() method. If that method exists, calling dir() will call __dir__(); otherwise Python will try to create a list of members of the class. This means that the dir function can have unexpected results. Two quotes of importance from the official Python documentation:

- If the object does not provide __dir__(), the function tries its best to gather information from the object's __dict__ attribute, if defined, and from its type object.
- The resulting list is not necessarily complete, and may be inaccurate when the object has a custom __getattr__().

Singleton class

A singleton is a pattern that restricts the instantiation of a class to one instance/object. For more info on Python singleton design patterns, see here.

   class Singleton:
       def __new__(cls):
           try:
               it = cls.__it__
           except AttributeError:
               it = cls.__it__ = object.__new__(cls)
           return it

       def __repr__(self):
           return '<{}>'.format(self.__class__.__name__.upper())

       def __eq__(self, other):
           return other is self

Another method is to decorate your class with a Singleton decorator class (following the example from this answer). To get the singleton instance, use the Instance method; trying to instantiate the decorated class directly through __call__ will result in a TypeError being raised. Other than that, there are no restrictions that apply to the decorated class.

   @Singleton
   class Single:
       def __init__(self):
           self.name = None
           self.val = 0

       def getName(self):
           print(self.name)

   x = Single.Instance()
   y = Single.Instance()
   x.name = "I'm single"
   x.getName()  # outputs I'm single
   y.getName()  # outputs I'm single

Descriptors and Dotted Lookups

Descriptors are objects that are (usually) attributes of classes and that have any of the __get__, __set__, or __delete__ special methods.
Data descriptors have either of __set__ or __delete__. These can control the dotted lookup on an instance, and are used to implement functions, staticmethod, classmethod, and property. A dotted lookup (e.g. instance foo of class Foo looking up attribute bar, i.e. foo.bar) uses the following algorithm:

1. bar is looked up in the class, Foo. If it is there and it is a data descriptor, then the data descriptor is used. That's how property is able to control access to data in an instance, and instances cannot override this.
2. If a data descriptor is not there, then bar is looked up in the instance __dict__. This is why we can override or block methods being called from an instance with a dotted lookup. If bar exists in the instance, it is used.
3. If not, we then look in the class Foo for bar. If it is a descriptor, then the descriptor protocol is used. This is how functions (in this context, unbound methods), classmethod, and staticmethod are implemented.
4. Else it simply returns the object there, or there is an AttributeError.
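As an illustration of the lookup rules above, here is a minimal data descriptor (a hypothetical Positive class, not from the original text). Because it defines __set__, step 1 of the algorithm means it takes precedence over the instance __dict__:

```python
class Positive:
    """A data descriptor: defines __get__ and __set__, so it wins
    over the instance __dict__ during dotted lookup."""

    def __set_name__(self, owner, name):   # Python 3.6+
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self                    # accessed on the class itself
        return obj.__dict__.get(self.name, 0)

    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError("%s must be positive" % self.name)
        # Store directly in the instance dict; the descriptor still
        # intercepts all future dotted reads and writes.
        obj.__dict__[self.name] = value

class Account:
    balance = Positive()

acct = Account()
acct.balance = 50
print(acct.balance)        # 50
try:
    acct.balance = -1      # rejected by __set__
except ValueError as e:
    print(e)               # balance must be positive
```

Note that even though the value lives in acct.__dict__['balance'], reading acct.balance still goes through the descriptor, exactly because data descriptors are consulted before the instance dictionary.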
The latest version of the book is P (13-Sep-07)

Paper page: 1
It would be easier to find the samples in the download if they were ordered by Chapter Number. (JMHO) --TW Scannell

- Reported in: P3.0 (14-May-07), PDF page: 13
in_place_select_editor is missing some options that are in: See the rdoc for JavaScriptMacrosHelper.html. This seems to work:

   def in_place_select_editor(field_id, options = {})
     function = "new Ajax.InPlaceSelectEditor("
     function << "'#{field_id}', "
     function << "'#{url_for(options[:url])}'"

     js_options = {}
     js_options['cancelText'] = %('#{options[:cancel_text]}') if options[:cancel_text]
     js_options['okText'] = %('#{options[:save_text]}') if options[:save_text]
     js_options['loadingText'] = %('#{options[:loading_text]}') if options[:loading_text]
     js_options['savingText'] = %('#{options[:saving_text]}') if options[:saving_text]
     js_options['rows'] = options[:rows] if options[:rows]
     js_options['cols'] = options[:cols] if options[:cols]
     js_options['size'] = options[:size] if options[:size]
     js_options['externalControl'] = "'#{options[:external_control]}'" if options[:external_control]
     js_options['loadTextURL'] = "'#{url_for(options[:load_text_url])}'" if options[:load_text_url]
     js_options['ajaxOptions'] = options[:options] if options[:options]
     js_options['evalScripts'] = options[:script] if options[:script]
     js_options['callback'] = "function(form) { return #{options[:with]} }" if options[:with]
     js_options['clickToEditText'] = %('#{options[:click_to_edit_text]}') if options[:click_to_edit_text]
     js_options['selectOptionsHTML'] = %('#{escape_javascript(options[:select_options].gsub(/\n/, ""))}') if options[:select_options]
     function << (', ' + options_for_javascript(js_options)) unless js_options.empty?
function << ')' javascript_tag(function) end--Ian Connor - Reported in: P3.0 (27-Apr-09) PDF page: 13 Scriptaculous in_place_editor does not work with Rails 2.1 as described in the book.--Tim DeBaillie - Reported in: P3.0 (14-Sep-06) PDF page: 32 "we put relative path to our JavaScript action" -- it's an absolute path, no?--Mark Meves - Reported in: P3.0 (18-May-10) PDF page: 45 Not sure whether the code here is correct in Rails 2.3.x What I have working so far is: module ApplicationHelper class TabularFormBuilder < ActionView::Helpers::FormBuilder (field_helpers - %w(check_box radio_button hidden_field)).each do |selector| define_method "#{selector}" do |field, *options| @template.content_tag("tr", @template.content_tag("td", field.to_s.humanize + ":") + @template.content_tag("td", super)) end end end end --Patrick Mulder - Reported in: P3.0 (13-Aug-07) PDF page: 46 I tried the code, and the class CodeStatistics was not known. So I had to include require 'code_statistics' to get it to work.--Markus Liebelt - Reported in: P3.0 (07-Jul-06) PDF page: 53 I'm not sure of the version of the text used in what I saw, because it is the pdf extract for "Rails Without a Database" (I do not currently have an actual copy of the book, or the full pdf). Anyway, on page 53 the example DatabaselessApplication/lib/tasks/clear_database_prerequisites.rake does not really work. If you use the example code given, rake test_units will not run any of your unit tests. The task which actually needs to have its prerequisites cleared is test:units, not test_units. Clearing test_units prereqs causes the unit tests to not be run at all. Here is code I'm using in my database-less rails app to make all the rake test_* tasks work: [:'test:units', :'test:functionals', :'test:recent', :'test:uncommitted', :'test:integration'].each do |name| Rake::Task[name].prerequisites.clear end --Nick
- Reported in: P3.0 (19-Sep-06) PDF page: 57 In the final sentence of the first paragraph: ... all of its child classes (your application-specific) models. should read: ... all of its child classes (your application-specific models). right? Thanks. --Mark Meves - Reported in: P3.0 (19-Sep-06) PDF page: 61 In the second sentence of the second paragraph, "You'd" should not be capitalized. --Mark Meves - Reported in: P3.0 (31-May-07) PDF page: 71 Recipe 19, "Tagging Your Content", uses a plugin that is no longer available. Attempting to follow the book fails at the first step (script/plugin install acts_as_taggable). The plugin is now considered legacy, as per DHH comment on dev.rubyonrails.org. The book needs to instruct on how to retrieve a legacy plugin, or (preferred) be updated with one of the community-supported tagging systems (agilewebdevelopment.com/plugins/search?search=acts_as_taggable) - Reported in: P3.0 (04-Apr-07) Paper page: 75 3rd paragraph: "Now we can find and destroy records by their primary keys:" An example follows which is wrapped in <![[CDATA ]]> tags by mistake. (These tags probably originate from some error in the XML source of the book.)--Sergei Yakovlev - Reported in: P3.0 (01-Mar-08) PDF page: 78 script/plugin discover script/plugin install acts_as_versioned --- This doesn't work chapter.version --- This doesn't work either As I got the book specifically for acts_as_versioned - I have to say I was quite dissapointed--Ran Moshe - Reported in: P1.0 (29-Jun-06) PDF page: 80 In the 2nd example of versioning, the one that introduces chapter.revert_to(2), I don't understand the final version number. The example starts with 3 versions, then revert_to is supposed to add a new version according to the text, and then the title is modified and chapter.save is called. This should then bump up the versions again, so the final chapter.version should be 5.
Ok, if that's not correct, then perhaps you could clarify the explanation a little :-).--Thorsten - Reported in: P3.0 (07-Apr-07) Paper page: 86 4th paragraph: "--Sergei Yakovlev - Reported in: P3.0 (03-Jun-08) Paper page: 87 Instead of "detail.tags.collect.." should the following be "contact.tags.collect.." ? <%= text_field_tag "tag_list", detail.tags.collect{|t| t.name}.join(" "), :size => 40 %> The partial is called 'detail' while the model obj is 'Contact' Typo or not, the book is great!--norbert.ryan3@gmail.com - Reported in: P3.0 (11-Apr-07) PDF page: 101 I found the GradeFinder example very useful... but it took me a while to figure out how to reuse GradeFinder for the base class. For example, while Student.find(1).grades.below_average works well, I can't readily use Grades.below_average So, it seems to me, that in order to reuse the GradeFinder logic, I can just write "extend GradeFinder" in the class definition for Grade (just under the "class Grade < ActiveRecord::Base" statement). For a Ruby novice such as myself, it would be nice to have that in your book.--Zenon - Reported in: P1.0 (21-Sep-06) Paper page: 112 Don't forget to save the changes to the database: >> address.save--Manu Cammaert - Reported in: P3.0 (26-Sep-06) PDF page: 119 look at middle at page: We can now simplify the signin action to look like this: def signin if request.post? .... Now look at bottom of page: def signin session[:user] = User.authenticate(params[:username], params[:password]).id .... IMHO, you forget add "if request.post?" in second time. So, user never saw login\password form and will be receive "Username or password invalid" --Viacheslav Kaloshin - Reported in: P3.0 (05-Sep-06) PDF page: 130 Halfway down the page, you have the simplified signin action. At the bottom of the page, you pring the the entire controller, but the signin method there is missing the "if request.post?" 
test.--Eric Wagoner - Reported in: P3.0 (16-Aug-06) PDF page: 134 Your date route is spot-on, except for: "Defaulting to the current day" --Fred Alger - Reported in: P1.0 (24-Jul-08) Paper page: 136 Don't know is this is Rails 2.x specific, but self.password_salt, self.password_hash = salt, Digest::SHA256.hexdigest(pass+salt) should be self[:password_salt] = salt self[:password_hash] = Digest::SHA2.hexdigest(pass+salt) --Christopher Villalobos - Reported in: P3.0 (02-Oct-06) Paper page: 139 "if request.post?" is missing in the signin action of the full AdminController code listing (the last code piece on the page).--Hans-Eric Gr - Reported in: P3.0 (30-Oct-06) PDF page: 140 model_class = Object.const_get(table.classify) should be model_class = Object.const_get(table.to_s.classify) Otherwise you get an error NoMethodError: undefined method `classify' for :contacts:Symbol if you pass in :contacts as shown in the example --Jim Morris - Reported in: P3.0 (05-Feb-08) Paper page: 142 At least using Rails 2.0.2, I get an "unknown table roles_users" message when I run this recipe. I had to add a new migration to rename roles_users to users_roles in order to get the code to execute without errors. So I think that the AddRolesAndRightsTables migration may need an update.--Ben Kimball - Reported in: P3.0 (30-Jul-07) Paper page: 146 Please give page numbers on ALL pages; not putting page numbers on blank pages and on the receipe front pages means that you can go several pages without a page number, making it hard to locate sections from the index. 
E.g., pages 146-149 have no page numbers.--Jon Seidel - Reported in: P3.0 (30-Jul-07) Paper page: 149 In the description of the solution, you say that you will use an "after_filter", but then you actually implement a before_filter.--Jon Seidel - Reported in: P3.0 (30-Jul-07) Paper page: 149 The calculation in def session_expiry: @time_left = (session[:expires_at] - Time.now).to_i fails with the following error message: NoMethodError (You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.-): when the expiry time is exceeded. This method actually requires code such as: if Time.now < session[:expires_at] @time_left = (session[:expires_at] - Time.now).to_i else reset_session render '/session/redirect' end--Jon Seidel - Reported in: P3.0 (17-Mar-08) Paper page: 150 session_expiry() doesn't invoke the RJS template, this is the error it produces: You called render with invalid options : /application/redirect --Sigfrid Dusci - Reported in: B1.0 (04-Apr-07) Paper page: 151 Render CSV Example code gives error: NameError in ExportController#orders Need to add require "CSV" - Reported in: P1.0 (20-Dec-06) Paper page: 155 Mild grammatical : "We have multiple ways to achieve this affect, ..." should that be "effect" ? --John Simmonds - Reported in: P3.0 (12-Mar-07) PDF page: 173 The setup() method suggested doesn't appear to work with current Rails (1.2) -- namely the last line. An update to this recipe would be nice in future editions.--James H - Reported in: P3.0 (09-May-07) PDF page: 197 It would seem to me that Recipe 49: "Dealing with Time Zones" ignores daylight saving time. To have that included would be immensely helpful!--Sebastian Winkler - Reported in: P3.0 (18-Jul-07) PDF page: 198 The @headers instance variable is deprecated in Rails 1.2. Instead, simply remove the "@" symbol so that the headers setter method is called. 
--Trevor Harmon - Reported in: P3.0 (18-Jul-07) PDF page: 199 If you get an error when calling CGI.rfc1123_date saying "undefined method `gmtime'", you are probably passing a Date object (instead of the required Time object) as a parameter. Simply use my_date_variable.to_time to convert it to a type that CGI can use.--Trevor Harmon - Reported in: P1.0 (10-Sep-07) PDF page: 219 Image#save_fullsize should be using write, not puts to write out the binary data.--Ryan Davis - Reported in: P3.0 (10-Sep-07) PDF page: 228 I couldn't find this is the errata. In recipe 57 "Processing Uploaded Images", in Image#save_fullsize, you should be using write, not puts.--Ryan Davix - Reported in: P1.0 (20-May-06) PDF page: 234 Globalization example does not work. Since I'm still waiting for the printed book and have no access to the beta, I'm referering to the excerpt (so this error might have been fixed already). Before you can switch the locale and create a translation (here in ar-LB) you need to explicitely save and reload the record with the base locale. For more info see here: --Martin Bernd Schmeil - Reported in: P3.0 (02-Mar-09) Paper page: 243 In the example "SecretURLs/app/models/inbox.rb" the line @attributes['access_key'] = MD5.hexdigest((object_id + rand(255)).to_s) should read @attributes['access_key'] = Digest::MD5.hexdigest((object_id + rand(255)).to_s) the class also needs to require 'Digest' at some point. --Scott Grimmett - Reported in: P1.0 (25-May-06) PDF page: 254 It seems that if you call the view for the HTML part "multipart_alternative", as is suggested in the manual, Rails will use this as the first part of the email, as well as the HTML part. This produces a three part email, instead of the desired two, and Mail at least presumes that the first part is the plain text alternative, which causes some problems. 
When I changed the name of the view being called to "multipart_alternative_rich" this solved the problem.--James Doy - Reported in: P3.0 (04-Jul-07) PDF page: 272 It should be more clear than "The default generated model, Delivery, is sufficient for our needs on the Ruby side of the house." that the reader should be running ./script/generate model Delivery early on on this page.--Edward Ocampo-Gooding - Reported in: P3.0 (24-Oct-06) PDF page: 276 Appendix A - 'tis' should be 'this'--Brian Riggs - Reported in: P1.0 (22-Aug-06) Paper page: 283 The code in the book for whitelists doesn't work. But the new code posted in the resources section: does. I'm not sure why.--Jason L Michael (railsnoob) - Reported in: P3.0 (02-Jun-07) PDF page: 287 Paper page: 321 A small error in the first sentence in paragraph A.2: The source code in tis book... Maybe right: The source code in this book...--Dave - Reported in: P2.0 (05-Mar-07) Paper page: 296 The code from notifier.rb as it stands doesn't work in the latest version of Rails because Rails automagically inserts the contents of multipart_alternative first (but without @name resolving) and then inserts the two parts as coded. This can be fixed by renaming multipart_alternative to multipart_alternative_html and altering the render_message accordingly. This whole setup should have been tested on more than one (Mac-based) email agent. Google mail only displays the plain text section; Outlook only displays the HTML section.--Old Dog Stuff To Be Considered in the Next Edition - Reported in: B1.3 (08-Apr-06) PDF page: 1 For the section on html email. I saw in your source files that you do not have a css file or any images included. A slightly more advanced example will be immediately required by almost any reader. Complete paths are required for everything. Also the <link> and <head> tags have issues with email readers. The use of layouts and templates are tricky too. 
See the Agile Rails book errata web page for a suggestion for page 415 (and 413). see this link for details about what works in email readers Something like <html> <body> <style type="text/css"> @import url(); </style> <div id="main"> <img src="" /> <div id="popupMain"> <%= @content_for_layout %> </div> </div> </body> </html> --Peter Michaux - Reported in: P1.0 (23-Jun-06) PDF page: 6 Since you make a note stating to write your own actions for in-place edits, it would be nice to see at least one sample one. I'm having trouble locating a solution for this, and I'm not sure how to return a successful value or a custom error. Another reason I mention this is because if someone uses an in-place edit field and updates it to an empty string, it becomes impossible to edit the field again, since there's nothing to 'click' to activate the edit mode. - Reported in: B1.5 (14-May-06) PDF page: 15 Hi Using innerHTML here to add options to a select box won't work in, for example, IE6/win. I would recommend adapting the recipe to use the new Ajax.InPlaceCollectionEditor (as of script.aculo.us version 1.5.3). I've got some code if you want it: dave@textgoeshere.org.uk Best, D--Dave Nolan - Reported in: B1.5 (08-May-06) PDF page: 31 I suggest adding a Discussion entry to this recipe to explain why lists were used instead of tables. Due to browser incompatibilities sorting in tables isn't supported by Script.aculo.us (table row sorting works in Firefox for example but not in Safari). Thomas Fuchs could probably go into more detail about this issue.--Sean Mountcastle - Reported in: B1.5 (21-Apr-06) PDF page: 51 A suggestion for the discussion section of recipe 12 (Creating a custom form builder): In this section you mention meta-programming in Ruby and show some really cool examples of it. It suprised me because I never thought to even try stuff like that before. 
As a programmer who is learning Ruby through Rails, it would be helpful if the discussion included where to find out more information on meta-programming in Ruby and what it means. Or perhaps how one can get "under the hood" of Rails and try something similar to what you've shown.--Brian Chamberlain - Reported in: B1.5 (29-Apr-06) PDF page: 77 I'd like to see included in this recipe how one would prevent a person from adding oneself as a friend. For example, if Chad were to become a friend of Erik's, but Chad is such a nice guy that he instantly becomes a friend of all of Erik's friends, Chad would also become a friend to himself. Person.find_by_name("Chad").friends << Person.find_by_name("Erik").friends Is there a way to prevent that in the model without raising an exception with a before_add callback and thus pushing it off the controller to prevent? Thanks!--Victor Cosby - Reported in: B1.2 (09-Mar-06) PDF page: 90 Recipe Suggestion: How to use LDAP/Active Directory for authentication only - integrate LDAP into Recipe 15 (Authentication) but also have it play nicely with Recipe 16 (Role-Based Authorization)--Tom Brice - Reported in: B1.1 (21-Feb-06) PDF page: 143 Near the bottom of the page, "|procmail" in an italic font makes the "|" look like a "/". (Dave says: tricky to deal with this. Let me think about it)--Rob Biedenharn - Reported in: P1.0 (25-May-06) PDF page: 264 As the default behaviour of Rails in test mode is to append Email to an array, not actually send it, it might be an idea to briefly cover Email Configuration at the start of this chapter. Otherwise people might be confused when their HTML email never arrives in their inbox.--James Doy - Reported in: B1.5 (02-May-06) PDF page: 268 Unfortunately the method described for graceful degradation of emails will not work for certain handheld devices, including the Blackberry's direct email inbox, which cannot handle multiple content types in the same email. You may want to mention this drawback. 
(Chad says: I don't know the landscape here, and these are clients that don't support the standard properly.)--Henry - Reported in: B1.5 (06-May-06) PDF page: 281 Your example only produces a plain text e-mail. What if the user wants to send either an HTML email or a multipart/alternative email along with the attachment?--Dylan Markow
https://pragprog.com/titles/fr_rr/errata
CC-MAIN-2017-09
refinedweb
3,188
58.28
Art shader, or load from content browser?

On 30/08/2018 at 02:01, xxxxxxxx wrote: Hi, First off, I'm new here and also very new to Python. I'm trying to create a pretty basic material using Python, but it seems not so easy. I have two questions: How can I add the Art shader to a channel? It's not listed in the SDK shader list. So it's probably not like I'm just missing the right word here, is it? > matt[c4d.MATERIAL_USE_LUMINANCE] = True > > Art = c4d.BaseList2D(c4d.X...something.....) > > matt.InsertShader( Art ) > > matt[c4d.MATERIAL_LUMINANCE_SHADER] = Art I also want it to load an image in the Art shader, so I figured, why don't I just create a preset folder in the content browser and load the material from that folder. Turns out it's quite tricky to import materials from the content browser too. > > c4d.documents.LoadFile(fn) If I use anything other than LoadFile, it will tell me to use a base document in the console. From my understanding, I have to make a virtual base document and then import it from that. What does that look like in code? I only need one to work, but I am very interested in both! Cheers, Tim

On 30/08/2018 at 23:11, xxxxxxxx wrote: I figured out how to load the material from the content browser: > doc = c4d.documents.GetActiveDocument() > > > c4d.documents.MergeDocument(doc, location, c4d.SCENEFILTER_MATERIALS) But I prefer creating the material in Python rather than loading it, so if anyone knows how to access and apply the Art shader, let me know ;) - Tim

On 31/08/2018 at 04:07, xxxxxxxx wrote: Hi Myosis, first of all, welcome to the plugincafe community! Even if a shader didn't get a proper symbol name, each BaseList2D gets an ID, and symbols only refer to this ID. So if you want to know the current ID of an Art shader, simply create a shader in a material, then drag and drop the channel where the shader is into the console. Press enter and it will print you the ID. For the Art shader, it's 1012161.
For the bitmap, use the same technique: simply drag and drop the parameter into the console to get the parameter ID. Then you can create it using the following script:

import c4d

def main():
    mat = doc.GetActiveMaterial()
    if not mat:
        return

    # Create the bitmap shader
    bmpShader = c4d.BaseShader(c4d.Xbitmap)
    bmpShader[c4d.BITMAPSHADER_FILENAME] = "YourTexturePath"

    # Create an Art shader and insert our bitmap shader into it
    artShader = c4d.BaseShader(1012161)
    artShader.InsertShader(bmpShader)
    artShader[c4d.ARTSHADER_TEXTURE] = bmpShader

    # Then insert our Art shader into the active material, and use it in the color channel
    mat.InsertShader(artShader)
    mat[c4d.MATERIAL_COLOR_SHADER] = artShader

    c4d.EventAdd()

if __name__ == '__main__':
    main()

As you already figured out, in Python the only way to load stuff from the content browser is to use LoadFile/MergeDocument. If you have any questions, please let me know! Cheers, Maxime!
https://plugincafe.maxon.net/topic/10942/14402_art-shader-or-load-from-content-browser-/
CC-MAIN-2020-05
refinedweb
496
65.52
Looking for PHP web developer to complete my code Budget $10-30 USD This will not take more than a few hours for an experienced person. It is my incomplete code, for which I have given step-by-step instructions. Do you have experience with OOP, namespaces, MVC, controllers, final classes, cookies and sessions? It is incomplete code where all the classes are present; we need to write the functions under them according to the functionality. Detailed instructions are also given for every class and function. The project is a very simple todo application. There are two database tables, account and table. The homepage will show the login form and create-account form, and after login a session will start for that account holder and show their todo list. They can create, update and delete their todos and log out. Login will be done through verification; the password will be bcrypt-hashed and matched against the database. The URLs will also be built according to the functions. Let me know how much time you need for this. I have attached a picture of how it looks right now; no styling is required, but we need to code the functions and follow the instructions. Let me know how and by when you can do this. 16 freelancers are bidding an average of $33 for this job. Relevant Skills and Experience Java, Javascript, MySQL, PHP, Software Architecture Proposed Milestones $30 USD - .
https://www.my.freelancer.com/projects/php/looking-for-php-web-developer-15794447/
CC-MAIN-2019-22
refinedweb
232
61.87
#include <pow/aserti32d.h>
#include <chain.h>
#include <consensus/activation.h>
#include <consensus/params.h>
#include <validation.h>
#include <atomic>

Definition at line 164 of file aserti32d.cpp.

Returns a pointer to the anchor block used for ASERT. As anchor we use the first block for which IsAxionEnabled() returns true. This block happens to be the last block which was mined under the old DAA rules. This function is meant to be removed some time after the upgrade, once the anchor block is deeply buried, and behind a hard-coded checkpoint. Preconditions: - pindex must not be nullptr

Definition at line 39 of file aserti32d.cpp.

Definition at line 19 of file aserti32d.cpp.

Definition at line 90 of file aserti32d.cpp.

Compute the next required proof of work using an absolutely scheduled exponentially weighted target (ASERT). With ASERT, we define an ideal schedule for block issuance (e.g. 1 block every 600 seconds), and we calculate the difficulty based on how far the most recent block's timestamp is ahead of or behind that schedule. We set our targets (difficulty) exponentially. For every [nHalfLife] seconds ahead of or behind schedule we get, we double or halve the difficulty.

Definition at line 108 of file aserti32d.cpp.

ASERT caches a special block index for efficiency. If block indices are freed then this needs to be called to ensure no dangling pointer when a new block tree is created. (this is temporary and will be removed after the ASERT constants are fixed)

Definition at line 15 of file aserti32d.cpp.

Definition at line 13 of file aserti32d.cpp.
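The exponential schedule described above can be sketched in a few lines. This is a simplified floating-point illustration of the idea only — the real aserti32d code uses fixed-point integer arithmetic and an anchor-height offset — and the function and parameter names here are made up:

```python
def asert_next_target(anchor_target, time_diff, height_diff,
                      halflife=2 * 3600, ideal_block_time=600):
    """Exponentially scheduled target: double/halve per half-life off schedule.

    time_diff   -- seconds elapsed since the anchor block's timestamp
    height_diff -- blocks mined since the anchor block
    A positive exponent (blocks behind schedule) raises the target,
    i.e. lowers the difficulty; a negative one does the opposite.
    """
    exponent = (time_diff - ideal_block_time * height_diff) / halflife
    return anchor_target * 2 ** exponent
```

With a 600-second ideal block time, being exactly on schedule leaves the target unchanged, while being one half-life (here two hours) behind schedule doubles it.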
https://bitcoindoxygen.art/ABC/aserti32d_8cpp.html
CC-MAIN-2021-49
refinedweb
274
68.67
17 April 2007 09:54 [Source: ICIS news] SHANGHAI (ICIS news)--Polyethylene (PE) prices were forecast to decline in the next three years in tandem with falling crude prices, a global trader said on Tuesday. “But margins for PE producers are likely to improve as PE prices are unlikely to fall as steeply as ethylene,” said Aaron Yap, a senior trader at Integra. He was speaking at the 3rd ICIS Asian Polymers conference. The waning influence of PE on feedstock ethylene and vice-versa coincided with the increasing correlation between crude and PE, he said. “However, polypropylene and propylene are still tied very closely together,” he added. Despite increasing PE supply since 2005, prices were not expected to fall very steeply as demand was expected to remain strong, he said.
http://www.icis.com/Articles/2007/04/17/9020919/pe-prices-to-soften-but-margins-may-grow-analyst.html
CC-MAIN-2014-52
refinedweb
137
53.55
3.2.3 Initialization

When the DFS server is started:

- It MUST notify the SMB server for reasons as specified in [MS-CIFS] section 3.3.4.3 and [MS-SMB2] section 3.3.4.8. The exact means of how this is accomplished is outside the scope of this specification.
- It MUST initialize HomeDomain with the domain name of the domain to which it is joined, if any.
- It MUST initialize DFSNamespaceList to the list of domain-based and stand-alone DFS namespaces that it hosts. This list can be obtained from DFS metadata (as specified in [MS-DFSNM]), a configuration file, a configuration store, or from other implementation-defined means.
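As a rough illustration only, the MUST-initialize requirements can be modeled as a tiny abstract-state sketch. The type and function names below are invented for illustration and are not part of the protocol specification; the SMB-server notification is omitted here, as its mechanism is out of scope in the spec as well:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DfsServerState:
    home_domain: str = ""                 # HomeDomain in the spec
    dfs_namespace_list: List[str] = field(default_factory=list)

def initialize(joined_domain, load_namespaces):
    """Initialize HomeDomain and DFSNamespaceList on server start.

    load_namespaces stands in for the implementation-defined source
    (DFS metadata, a configuration file, a configuration store, ...).
    """
    state = DfsServerState()
    state.home_domain = joined_domain or ""   # empty if not domain-joined
    state.dfs_namespace_list = list(load_namespaces())
    return state
```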
https://msdn.microsoft.com/en-us/library/cc227043.aspx
CC-MAIN-2017-22
refinedweb
113
54.73
Things used in this project

Story

Here I'm showing how to control home appliances via xBee and Arduino Uno.

Here you can see there is a GAS SENSOR and beside that there is a FLAME SENSOR (the library can be downloaded from), and the xBee is connected to the Arduino through its Tx and Rx pins, while the lamps and the DC motor are connected to the supply voltage through relays. Each relay has one pin connected to the Arduino and the other to ground. If a relay is on, then that component is connected to the supply, and vice versa.

Here Is Your Control System

Here you can see that there is another Arduino (in another Proteus project) and there is also an LCD to show the sensor data coming from the FLAME and GAS sensors (in the Home System, different from this project). The second xBee (in the Control System) is connected to the first xBee (in the Home System) through the VIRTUAL SERIAL PORT DRIVER (you can download this from). This driver helps you to connect the ports virtually, like this:

Here you can see that I'm in the VSPD, and on the left side you can see that my COM1 and COM2 are virtually connected. Similarly, you can connect as many virtual ports as you like by pressing the ADD PAIR button on the right side.

In the picture above, you can see that for the Control System xBee the Physical port is set to COM1; in a similar way, you have to change the Physical port of the 2nd xBee by going to its properties. Write COM2 there so that both xBees are connected through VSPD and can share data with each other.
Schematics

Code

Code for Home System (Arduino)

#include <LiquidCrystal.h>

LiquidCrystal lcd(A0, A1, A2, A3, A4, A5);

void setup() {
  Serial.begin(9600);
  for (int i = 2; i <= 7; i++) {
    pinMode(i, OUTPUT);
  }
  pinMode(9, INPUT);
  pinMode(10, INPUT);
}

int i, state = 0;
int high9 = 0, high10 = 0;

void loop() {
  if (Serial.available() > 0) {
    i = Serial.parseInt();
    changeState(i);
  }
  if (digitalRead(9) == 1 && high9 == 0) {
    Serial.print(91);
    high9 = 1;
  } else if (digitalRead(9) == 0 && high9 == 1) {
    Serial.print(90);
    high9 = 0;
  } else if (digitalRead(10) == 1 && high10 == 0) {
    Serial.print(101);
    high10 = 1;
  } else if (digitalRead(10) == 0 && high10 == 1) {
    Serial.print(100);
    high10 = 0;
  }
}

void changeState(int i) {
  state = digitalRead(i + 2);
  digitalWrite(i + 2, !state);
  Serial.print(12);
}

Code for Control System (Arduino)

#include <Keypad.h>
#include <LiquidCrystal.h>

LiquidCrystal lcd(A0, A1, A2, A3, A4, A5);
// NOTE: the Keypad keymap and row/column pin definitions were lost from
// this copy of the listing; a Keypad object named "keypad" must be
// declared here for loop() below to compile. Serial.begin(9600) is
// restored in setup() since loop() uses Serial.

void setup() {
  Serial.begin(9600);
  for (int i = 2; i < 9; i++) {
    pinMode(i, OUTPUT);
  }
}

void loop() {
  char key = keypad.getKey();
  if (key != NO_KEY) {
    Serial.print(key);
  }
  if (Serial.available() > 0) {
    int k = Serial.parseInt();
    if (k == 91) {
      lcd.setCursor(0, 0);
      lcd.print("GAS DETECTED");
    } else if (k == 90) {
      lcd.setCursor(0, 0);
      lcd.print("GAS NOT DETECTED");
    } else if (k == 101) {
      lcd.setCursor(0, 1);
      lcd.print("FLAME DETECTED");
    } else if (k == 100) {
      lcd.setCursor(0, 1);
      lcd.print("FLAME NOT DETECTED");
    }
  }
}
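The two sketches talk over a tiny one-digit serial protocol: the control side sends an appliance index i, the home side toggles pin (i + 2) and acknowledges with 12, and sensor transitions are reported as 91/90 (gas detected/cleared) and 101/100 (flame detected/cleared). This is a hypothetical host-side Python model of those rules for reasoning about the codes, not code from the project:

```python
def handle_command(pin_states, i):
    """Home-side handler: toggle the output on pin (i + 2), ack with 12."""
    pin = i + 2
    pin_states[pin] = not pin_states.get(pin, False)
    return 12

def sensor_events(prev, cur):
    """Codes the home sketch prints on gas/flame sensor transitions."""
    codes = []
    if cur["gas"] != prev["gas"]:
        codes.append(91 if cur["gas"] else 90)
    if cur["flame"] != prev["flame"]:
        codes.append(101 if cur["flame"] else 100)
    return codes
```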
https://www.hackster.io/kalyan-prusty/home-automation-using-xbee-and-arduino-7ba5f4
CC-MAIN-2018-09
refinedweb
595
57.67
VTK/FAQ From KitwarePublic General information and availability What is the Visualization Toolkit? The Visualization ToolKit (vtk) is a software system for 3D Computer Graphics and Visualization. VTK includes a textbook published by Kitware Inc. ([ The Visualization Toolkit, An Object-Oriented Approach to 3D Graphics]), a C++ class library, and Tcl, Python and Java implementations based on the class library. For more information, see and. What is the current release? The current release of vtk is 5.4.0 (released on 2009-3-26). This release is available for download from: Nightly development releases are available at: Can I contribute code or bug fixes? We encourage people to contribute bug fixes as well as new contributions to the code. We will try to incorporate these into future releases so that the entire user community will benefit from them. See for information on contributing to VTK. For some ideas take a look at some of the entries in the "Changes to the VTK API" FAQ section, for example: What changes are being considered for VTK We now have a bug tracker that allows keeping track of any bug you might find. See BugTracker. You'll need an email address to report a bug. To improve the chance of a bug being fixed, do not hesitate to add as many details as possible; demo sample code + sample data is always a good idea. Providing a patch almost guarantees that your patch will be incorporated into VTK. Can I contribute money? Please don't send money. Not that we think you're going to send in unsolicited money. But if you were thinking about it, stop. It would just complicate our lives and make for all sorts of tax problems. (Note: if you are a company or funding institution, and would like to fund features or development, please contact Kitware .) Is there a mailing list or Usenet newsgroup for VTK?
There is a mailing list: vtkusers@vtk.org To subscribe or unsubscribe to the mailing list, go to: To search the list archives go to: There is also a newsgroup that mirrors the mailinglist. At this point it seems that mirror is down. Mail to the mailinglist used to be posted the newsgroup, but posts on the newsgroup were not sent to the mailinglist. The newsgroup was located at: news://scully.esat.kuleuven.ac.be/vtk.mailinglist is a bidirectional mail-to-news gateway that carries the vtkusers mailing list. Its located here: news://news.gmane.org/gmane.comp.lib.vtk.user or here:. vtkusers mails have been archived since April 2002 and they never expire. You can read and send mails to the vtkusers list but sent mail will bounce back without having subscribed to the list first. Is the VTK mailing list archived anywhere? The mailing list is archived at: You can search the archive at: Are answers for the exercises in the VTK book available? Not anymore. The answers to the exercises of the textbook used to be maintained by Martin Stoufer (kudos), and will be made available by Kitware in the near future. Is VTK regression tested on a regular basis? Can I help? Yes, it is. You can view the current regression test results at: VTK uses Dart to perform builds, run tests, and generate dashboards. You can find more information about Dart at: You can help improve the quality of VTK by supplying the authors with Tcl scripts that can be used as or turned into regression tests. A good regression test will: - Cover code that is not already covered. - Illustrate a bug that is occuring now or that has occurred in the past. - Use data that is on the 2nd Edition book CDROM or use "small" data files or use no data at all. - Optionally, produce an interesting result. Currently almost all regression tests are written in Tcl. Please send your Tcl regression tests to: mailto:wlorens1@mail.nycap.rr.com Bill will evaluate them for applicability and integrate them into the nightly test process.). 
- To learn the innards of VTK, you can attend a VTK course or sponsor a VTK course at your site through Kitware.
- Buy Bill a beer and get him talking about VTK.

How should I ask questions on the mailing lists?
The best online resource for this question is Eric S. Raymond's excellent guide on the topic, titled [How to ask questions the smart way]. [Getting Answers] is a good starting point too. Please do read it and follow his advice. Thanks!
Please also remember the following when you post your messages to the VTK mailing lists.
- Mention the version of VTK you are using and the version of the compiler or scripting language you are using.
- Mention your platform, OS and their versions.
- Include hardware details if relevant.
- Include all relevant error messages (appropriately trimmed, of course).
- The lists have a very large number of subscribers (in the thousands), so please keep messages to the point.
- Avoid HTML emails.
- Use a sensible and descriptive subject line.
- Do NOT post large data files or images to the list. Instead, put them on your web page and mention the URLs.
- Quote the messages you reply to appropriately. Remove unnecessary details.
When asking a question or reporting a problem, try to include a small example program that demonstrates the problem. Make sure that this example program is as small as you can make it, simple (using VTK alone), complete, and that it demonstrates the problem adequately. Doing this will go a *long way* towards getting a quick and meaningful response.
Sometimes you might not get any acceptable response. This happens because the others think the question has already been answered elsewhere (the archives, the FAQ and Google are your friends), or believe that you have not done enough homework to warrant their attention, or they don't know the answer, or simply don't have the time to answer. Please do be patient and understanding. Most questions are answered by people volunteering their time to help you. Happy posting!
How NOT to go about a programming assignment
This is really a link you should read before posting to the mailing list. [This article is an attempt to show these irrational attitudes in an ironical way, intending to make our students aware of bad habits without admonishing them.]

Accessing VTK CVS from behind a firewall
Use the sourceforge project: Just download the script and type something like:
cvsgrab -rootUrl -packagePath VTK -destDir . -proxyUser xxx -proxyPassword xxx -proxyHost xxx -proxyPort xx
(Thanks to Ingo H. de Boer)
cvsgrab also supports the following option to access a particular branch:
-tag <version tag> [optional] The version tag of the files to download
For example, to get the latest 4.4 branch:
cvsgrab -rootUrl -packagePath VTK -destDir . -proxyUser xxx -proxyPassword xxx -proxyHost xxx -proxyPort xxx -tag release-4-4

Where can I obtain test and sample datasets?
See this page for details on downloading datasets that VTK can read.

Language bindings

Are there bindings to languages other than Tcl?
Aside from C++ (which it is written in) and Tcl, VTK is also bound into Java as of JDK 1.1, and Python 1.5, 1.6 and 2.x. All of the Tcl/Java/Python wrapper code is generated from some LEX and YACC code that parses our classes and extracts the required information to generate the wrapper code.

What version of Tcl/Tk should I use with VTK?
Currently we recommend that you use Tcl/Tk 8.2.3 with VTK. This is the best-supported version combination at this time. VTK has also been tested with Tcl/Tk 8.3.2 and works well. Tcl/Tk 8.3.4 has been tested to a limited extent but seems to have more memory leaks than Tcl 8.3.2 has. Tcl/Tk 8.4.x seems to work well with VTK too, but you might have to change a couple of configuration settings depending on the version of VTK you are using. See the "Does VTK support Tcl/Tk 8.4?" entry below.

Where can I find Python 2.x binaries?
All of the Python binaries available on the kitware site are built for Python 1.5.2.
This includes the official release VTK 3.2 and the nightly builds (as at 2001-07-16). For Python 2.x binaries, you will have to compile your own from source. It is worth checking the mailing list archives for comments by others who have been through this process. There are some user-contributed binaries available at other sites. Check the mailing list archives for possible leads. Some win32 binaries for Python 2.1 are available at; YMMV...

Why do I get the Python error -- ValueError: method requires a VTK object?
You just built VTK with Python support and everything went smoothly. After you install everything and try running a Python-VTK script, you get a traceback with this error:
ValueError: method requires a VTK object.
This error occurs if you have two copies of the VTK libraries on your system. These copies need not be in your linker's path. The VTK libraries are usually built with an rpath flag (under *nix). This is necessary to be able to test the build in place. When you install VTK into another directory in your linker's path and then run a Python script, the Python modules remember the old path and load the libraries in the build directory as well. This triggers the above error, since the object you passed the method was instantiated from the other copy.
So how do you fix it? The easiest solution is to simply delete the copy of the libraries inside your build directory or move the build directory to another place. For example, if you built the libraries in VTK/bin, then move VTK/bin to VTK/bin1 or remove all the VTK/bin/*.so files. The error should no longer occur.
Another way to fix the error is to turn the CMAKE_SKIP_RPATH boolean to ON in your CMakeCache.txt file and then rebuild VTK. You shouldn't have to rebuild all of VTK; just delete the libraries (*.so files) and then re-run cmake and make. The only trouble with this approach is that you cannot have BUILD_TESTING set to ON when you do this.
Alternatively, starting with recent VTK CVS versions (post Dec.
6, 2002) and with VTK versions greater than 4.1 (i.e. 4.2 and beyond), there is a special VTK-Python interpreter built as part of VTK called 'vtkpython' that should eliminate this problem. Simply use vtkpython in place of the usual python interpreter when you use VTK-Python scripts and the problem should not occur. This is because vtkpython uses the libraries inside the build directory. (2002, by Prabhu Ramachandran)

Does VTK support Tcl/Tk 8.4?
Short answer: yes, but it might require some adjustments, depending on the VTK and CMake versions you are using.
- The VTK 4.x CVS nightly/development distribution supports Tcl/Tk 8.4 as long as you use a release version of CMake > 1.4.5. Since VTK 4.2 will require CMake 1.6, the next release version will support Tcl/Tk 8.4.
- The VTK 4.0 release distribution does not support Tcl/Tk 8.4 out-of-the-box.
In either case, the following solutions will address the problem. This basically involves setting two definition symbols that will make Tcl/Tk 8.4 backward compatible with previous versions of Tcl/Tk (i.e. discard the "const correctness" and Tk_PhotoPutBlock compositing rule features):
a) Edit your C/C++ flags: run your favorite CMake cache editor (i.e. CMakeSetup, or ccmake), display the advanced values and add the USE_NON_CONST and USE_COMPOSITELESS_PHOTO_PUT_BLOCK definition symbols to the end of any of the following CMake variables (if they exist): CMAKE_CXX_FLAGS, CMAKE_C_FLAGS.
Example: on Unix your CMAKE_CXX_FLAGS will probably look like:
-g -O2 -DUSE_NON_CONST -DUSE_COMPOSITELESS_PHOTO_PUT_BLOCK
On Windows (Microsoft MSDev nmake mode):
/W3 /Zm1000 /GX /GR /YX /DUSE_NON_CONST /DUSE_COMPOSITELESS_PHOTO_PUT_BLOCK
b) or a more intrusive solution: edit the top VTK/CMakeLists.txt file and add the following lines at the top of this file:
ADD_DEFINITIONS( -DUSE_NON_CONST -DUSE_COMPOSITELESS_PHOTO_PUT_BLOCK )

When I try to run my program with Java-wrapped VTK, why do I get "java.lang.NoClassDefFoundError: vtk/vtkSomeClassName"?
The file vtk.jar is not in your CLASSPATH in your execution environment.

When I try to run my program with Java-wrapped VTK, why do I get "java.lang.UnsatisfiedLinkError: no vtkSomeLibraryName"?
Some or all of the library (e.g., dll) files cannot be found. Make sure the files exist and that the PATH environment variable of your execution environment points to them.

When I try to run my program with Java-wrapped VTK, why do I get Exception in thread "main" java.lang.UnsatisfiedLinkError: GetOutput_2 at vtk.vtkPolyDataAlgorithm.GetOutput_2(Native Method)?

Using VTK

The C++ compiler cannot convert some pointer type to another pointer type in my little program
For instance, the C++ compiler cannot convert a vtkDataSet * type to a vtkImageData * type. It means the compiler does not know the relationship between vtkDataSet and vtkImageData. This relationship is actually inheritance: vtkImageData is a subclass of vtkDataSet. The only way for the compiler to know this relationship is to include the header file of the subclass, that is:
#include "vtkImageData.h"
If you wonder why the compiler did not complain about an unknown type, it is because somewhere (probably in a filter header file) there is a forward class declaration, like:
class vtkImageData;

Accessing a pointer in Python
If you use VTK code with Python and need to pass some VTK data to it, there are two approaches to wrap your code:
- First, you can use the VTK wrapper (already used for the wrapping of VTK code).
- Second, you can use SWIG, which results in a light-weight module.
In the second case, you will need to convert some VTK data, say a vtkPolyData, to a void pointer (no, it is not sufficient to just pass the object). For that, you can use the __this__ member variable in Python for the VTK data - see the mailing list archives:

What object/filter should I use to do ???
Frequently, when starting out with a large visualization system, people are not sure what object to use to achieve a desired effect.
The most up-to-date information can be found in the VTK User's Guide (). Alternative sources of information are the appendix of the book, which has nice one-line descriptions of what the different objects do, and the VTK man pages (). Additionally, the VTK man pages feature a "Related" section that provides links from each class to all the examples or tests using that class (). This information is also provided in each class man page under the "Tests" or "Examples" sub-section. Some useful books are listed at

What 3D file formats can VTK import and export?
The following table identifies the file formats that VTK can read and write. Importer and Exporter classes move full scene information into or out of VTK. Reader and Writer classes move just geometry.
† See the books [The Visualization Toolkit, An Object-Oriented Approach to 3D Graphics] or the User's Guide for details about structured grid and poly data file formats.
‡ The class vtkGenericEnSightReader allows the user to read an EnSight data set without a priori knowledge of what type of EnSight data set it is (among vtkEnSight6BinaryReader, vtkEnSight6Reader, vtkEnSightGoldBinaryReader, vtkEnSightGoldReader, vtkEnSightMasterServerReader, vtkEnSightReader).
For any other file format you may want to search for a converter to a known VTK file format; more info at:

Why can't I find vtktcl (vtktcl.c)?
In versions of VTK prior to 4.0, VTK Tcl scripts would require a:
catch {load vtktcl}
so that they could be executed directly from wish. In VTK 4.0 the correct mechanism is to use:
package require vtk
For people using versions earlier than 4.0, vtktcl is a shared library that is built only on the PC. Most examples used the "catch" notation so that they will work on UNIX and on the PC. On UNIX you must use the vtk executable/shell, which should be in vtk/tcl/vtk.

Why does this filter not produce any output? e.g. GetPoints()==0
This is a very common question for VTK users.
VTK uses a pipeline mechanism for rendering, which has multiple benefits, including the fact that filters that aren't used don't get called. This means that when you call a function such as x->GetOutput()->GetPoints(), it will return 0 if the filter has not yet been executed. Just call x->Update() beforehand to make the pipeline update everything up to that point, and it should work. -timh

Problems with vtkDecimate and vtkDecimatePro
vtkDecimate and vtkDecimatePro have been tested fairly heavily, so all known bugs have been removed. However, there are three situations where you can encounter weird behavior:
- The mesh is not all triangles. Solution: use vtkTriangleFilter to triangulate polygons.
- The mesh consists of independent triangles (i.e., not joined at vertices), so no decimation occurs. Solution: use vtkCleanPolyData to link triangles.
- Bad triangles are present: e.g., triangles with duplicate vertices such as (1,2,1), (100,100,112), (57,57,57), and so on. Solution: use vtkCleanPolyData.

How can I read DICOM files?
Starting with VTK 4.4, you can use the vtkDICOMImageReader class to read DICOM files. Note however that DICOM is a huge protocol, and vtkDICOMImageReader is not able to read every DICOM file out there. If it does not meet your needs, we suggest you look for an existing converter before coding your own. Some of them are listed in The Medical Image Format FAQ (Part 8).

GDCM
For a more elaborate DICOM library that supports more image formats, you might try GDCM. Specifically: vtkGDCMImageReader & vtkGDCMImageWriter. If GDCM is too complex to integrate in your environment, you can also consider simply using the command line converter gdcmconv to convert an unsupported DICOM file into something that vtkDICOMImageReader can support. Typically you would want:
gdcmconv --raw compressed_input.dcom uncompressed_output.dcom

dicom2
Sebastien BARRE wrote a free DICOM converter, named dicom2, that can be used to convert medical images to raw format.
This tool is a command line program and does not provide any GUI at the moment. There is a special section dedicated to VTK: , then "Convert to raw (vtk)". The following page also provides links to several other DICOM converters:

vtkVolume16Reader
When searching the vtkusers mailing list, a lot of posts are still using vtkVolume16Reader to read in DICOM files. It will work in the following case:
- You know the dimensions (cols & rows) of your image
- You know the spacing of your image
- You know the pixel type (pixel type & #components) of your image
- You know Pixel Data (7fe0,0010) is the last element in the image
- You know Pixel Data (7fe0,0010) was sent in uncompressed format (not encapsulated)
All those requirements are a stronger set of requirements than those of vtkDICOMImageReader, so it is encouraged to use vtkDICOMImageReader instead.

The spacing in my DICOM files is wrong
Image Position (Patient) (0020,0032) is the only attribute that can be relied on to determine the "reconstruction interval" or "space between the centers of slices". If the distance between the Image Position (Patient) (0020,0032) of two parallel slices along the normal to Image Orientation (Patient) (0020,0037) is not the same as whatever happens to be in the DICOM Spacing Between Slices (0018,0088) attribute, then (0018,0088) is incorrect, without question. This is a known bug in some scanners. When Slice Thickness (0018,0050) + Spacing Between Slices (0018,0088) equals the computed reconstruction interval, then chances are the modality implementor has made the obvious mistake of misinterpreting the definition of (0018,0088) to mean the distance between edges (gap) rather than the distance between centers.
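The "distance along the normal" computation described above can be sketched in plain Python. The six direction cosines from Image Orientation (Patient) (0020,0037) give the row and column directions of the image; their cross product is the slice normal, and the spacing is the projection of the displacement between two Image Position (Patient) (0020,0032) values onto that normal. All numeric values below are made up for illustration:

```python
# Sketch of the slice-spacing check; the attribute values are invented.

def cross(a, b):
    # Cross product of two 3-vectors.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def slice_spacing(orientation, ipp1, ipp2):
    """Distance between two slice centers along the slice normal.

    orientation: the six direction cosines of (0020,0037)
    ipp1, ipp2:  the (0020,0032) positions of two parallel slices
    """
    row, col = orientation[:3], orientation[3:]
    normal = cross(row, col)
    delta = tuple(p2 - p1 for p1, p2 in zip(ipp1, ipp2))
    return abs(dot(delta, normal))

# Two made-up axial slices, 1.25 mm apart along z:
orientation = (1.0, 0.0, 0.0, 0.0, 1.0, 0.0)  # identity row/col cosines
spacing = slice_spacing(orientation, (0.0, 0.0, 10.0), (0.0, 0.0, 11.25))
print(spacing)  # 1.25
```

If this value disagrees with Spacing Between Slices (0018,0088), the computed value is the one to trust, which is exactly the policy the GDCM library follows below.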
Further, one should never use Slice Location (0020,1041) either, an optional and purely annotative attribute, though chances are that the distance between the Slice Location (0020,1041) values of two slices will match the distance along the normal to the orientation derived from the position. The GDCM library simply discards any information present in the (0018,0088) tag and instead recomputes the spacing as the distance between two consecutive slices (along the normal).
GDCM 1.x:
typedef std::vector<gdcm::File *> FileList;
FileList l;
gdcm::SerieHelper sh;
sh.OrderFileList(l); // calls ImagePositionPatientOrdering()
zspacing = sh.GetZSpacing();
GDCM 2.x:
IPPSorter ipp;
ipp.Sort( filenames );
zspacing = ipp.GetZSpacing();

How to handle large data sets in VTK
One of the challenges in VTK is to efficiently handle large datasets. By default, VTK is tuned towards smaller datasets. For large datasets there are a couple of changes you can make that should yield a much smaller memory footprint (less swapping) and also improve rendering performance. The solutions are to:
- Use ReleaseDataFlag
- Turn on ImmediateModeRendering
- Use triangle strips via vtkStripper
- Use a different filter or mapper
Each of these is discussed below.

Using ReleaseDataFlag
By default VTK keeps a copy of all intermediate results between filters in a pipeline. For a pipeline with five filters this can result in having six copies of the data in memory at once. This can be controlled using ReleaseDataFlag and GlobalReleaseDataFlag. If ReleaseDataFlag is set to one on a data object, then once a filter has finished using that data object, it will release its memory. Likewise, if GlobalReleaseDataFlag is set on ANY data object, all data objects will release their memory once their dependent filter has finished executing.
For example, in Tcl and C++:
# Tcl
vtkPolyDataReader reader
[reader GetOutput] ReleaseDataFlagOn
// C++
vtkPolyDataReader *reader = vtkPolyDataReader::New();
reader->GetOutput()->ReleaseDataFlagOn();
or
// C++
vtkPolyDataReader *reader = vtkPolyDataReader::New();
reader->GetOutput()->GlobalReleaseDataFlagOn();
While turning on the ReleaseDataFlag will reduce your memory footprint, the disadvantage is that none of the intermediate results are kept in memory. So if you interactively change a parameter of a filter (such as the isosurface value), all the filters will have to re-execute to produce the new result. When the intermediate results are stored in memory, only the downstream filters would have to re-execute. One hint for good interactive performance: if only one stage of the pipeline can have its parameters changed interactively (such as the target reduction in a decimation filter), only retain the data just prior to that step (which is the default) and turn ReleaseDataFlag on for all other steps.

Use ImmediateModeRendering
By default, VTK uses OpenGL display lists, which results in another copy of the data being stored in memory. For most large datasets you will be better off saving memory by not using display lists. You can turn off display lists by turning on ImmediateModeRendering. This can be controlled on a mapper-by-mapper basis using ImmediateModeRendering, or globally for all mappers in a process by using GlobalImmediateModeRendering. For example:
# Tcl
vtkPolyDataMapper mapper
mapper ImmediateModeRenderingOn
// C++
vtkPolyDataMapper *mapper = vtkPolyDataMapper::New();
mapper->ImmediateModeRenderingOn();
or
// C++
vtkPolyDataMapper *mapper = vtkPolyDataMapper::New();
mapper->GlobalImmediateModeRenderingOn();
The disadvantage to using ImmediateModeRendering is that if memory is not a problem, your rendering rates will typically be slower with ImmediateModeRendering turned on.

Use triangle strips via vtkStripper
Most filters in VTK produce independent triangles or polygons, which are not the most compact or efficient to render. To create triangle strips from polydata, you can first use vtkTriangleFilter to convert any polygons to triangles (not required if you only have triangles to start with), then run the result through a vtkStripper to convert the triangles into triangle strips. For example, in C++:
vtkPolyDataReader *reader = vtkPolyDataReader::New();
reader->SetFileName("yourdatafile.vtk");
reader->GetOutput()->ReleaseDataFlagOn();
vtkTriangleFilter *tris = vtkTriangleFilter::New();
tris->SetInput(reader->GetOutput());
tris->GetOutput()->ReleaseDataFlagOn();
vtkStripper *strip = vtkStripper::New();
strip->SetInput(tris->GetOutput());
strip->GetOutput()->ReleaseDataFlagOn();
vtkPolyDataMapper *mapper = vtkPolyDataMapper::New();
mapper->ImmediateModeRenderingOn();
mapper->SetInput(strip->GetOutput());
The only disadvantage to using triangle strips is that they require time to compute, so if your data is changing every time you render, it could actually be slower.

Use a different filter or mapper
This is a tough issue. In VTK there are typically a couple of ways to solve any problem. For example, an image can be rendered as a polygon for each pixel, or it can be rendered as a single polygon with a texture map on it. For almost all cases the second approach will be much faster than the first, even though VTK supports both. There isn't a single good answer for how to find the best approach. If you suspect that it is running more slowly than it should, try posting to the mailing list or looking for other ways to achieve the same result.

VTK is slow, what is wrong?
We have heard people say that VTK is really slow. In many of these cases, changing a few parameters can make a huge difference in performance.
If you find that VTK is slower than other visualization systems running the same problem, first take a look at the FAQ section dealing with large data: How to handle large data sets in VTK. Many of its suggestions will improve VTK's performance significantly for many datasets. If you still find VTK slow, please let us know and send us an example (to mailto:kitware@kitware.com). In the past there have been some filters that simply were not written to be fast. When we come across one of these, we frequently can make minor changes to the filter that will make it run much more quickly. In fact, many changes in the past couple of years have been this type of performance improvement.

Is VTK thread-safe?
The short answer is no. Many VTK sources and filters cache information and will not perform as expected when used in multiple threads. When writing a multithreaded filter, the developer has to be very careful about how she accesses data. For example, GetXXX() methods which return a pointer should only be used to read. If the pointer returned by these methods is used to change data in multiple threads (without mutex locks), the result will most probably be wrong and unpredictable. In many cases, there are alternative methods which copy the data referred to by the pointer. For example:
float* vtkDataArray::GetTuple(const vtkIdType i);
is thread-safe only for reading, whereas:
void vtkDataArray::GetTuple(const vtkIdType i, float * tuple);
copies the requested tuple and is thread-safe even if tuple is modified afterwards (as long as the same pointer is not passed as the argument tuple simultaneously by different threads). Unfortunately, only very few methods are clearly marked as thread-(un)safe and, in many situations, the developer has to dig into the source code to figure out whether an accessor is thread-safe or not. vtkDataSet and most of its sub-classes are well documented and almost all methods are marked thread-safe or not thread-safe. This might be a good place to start.
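The difference between the two GetTuple() overloads above can be illustrated with a small Python sketch. The class and method names below are invented for the example and are not VTK API; the point is only the contrast between handing out internal storage and copying into a caller-owned buffer:

```python
# Invented illustration, not VTK API: contrast an accessor that returns a
# reference to internal storage with one that copies into a caller-owned
# buffer, mirroring the two GetTuple() overloads discussed above.

class DataArray:
    def __init__(self, tuples):
        self._tuples = [list(t) for t in tuples]

    def get_tuple_ref(self, i):
        # Like float* GetTuple(i): hands back internal storage. Safe for
        # concurrent reads only; writing through it races with other threads.
        return self._tuples[i]

    def get_tuple_copy(self, i, out):
        # Like GetTuple(i, tuple): copies into the caller's buffer, so each
        # thread can modify its own 'out' without touching shared state.
        out[:] = self._tuples[i]

arr = DataArray([(1.0, 2.0, 3.0)])
buf = [0.0] * 3
arr.get_tuple_copy(0, buf)
buf[0] = 99.0                   # mutates only the caller-owned copy
print(arr.get_tuple_ref(0)[0])  # 1.0 -- shared data is unchanged
```

Had the caller written through the reference returned by get_tuple_ref() instead, the shared data would have changed under every other thread's feet, which is exactly the hazard the pointer-returning overload poses.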
Most of the filters in imaging and some filters in graphics (like vtkStreamer) are good examples of how a multi-threaded filter can be written in VTK. However, if you are not interested in developing multithreaded filters but want to process some data in parallel using the same (or a similar) pipeline, your job is much easier. To do this, create a different copy of the pipeline on each thread and execute them in parallel on different pieces of the data. This is best accomplished by using vtkThreadedController (instead of vtkMultiThreader). See the documentation of vtkMultiProcessController and vtkThreadedController and the examples in the parallel directory for details on how this can be done. Also, note that most OpenGL libraries are not thread-safe. Therefore, if you are rendering to multiple render windows from different threads, you are likely to get into trouble, even if you have mutex locks around the render calls.

Can I use STL with VTK?
As of VTK version 4.2, you can use the STL. However, see the VTK Coding Standards for limitations.
Here's an example (from vtkInterpolatedVelocityField). In the .h file, forward declare the PIMPL class:
class vtkInterpolatedVelocityFieldDataSetsType;

class VTK_COMMON_EXPORT vtkInterpolatedVelocityField : public vtkFunctionSet
{
private:
  vtkInterpolatedVelocityFieldDataSetsType* DataSets;
};
In the .cxx file, define the class (here deriving from the STL vector container):
#include <vtkstd/vector>
typedef vtkstd::vector< vtkSmartPointer<vtkDataSet> > DataSetsTypeBase;
class vtkInterpolatedVelocityFieldDataSetsType: public DataSetsTypeBase {};
In the .cxx file, construct and destruct the class:
vtkInterpolatedVelocityField::vtkInterpolatedVelocityField()
{
  this->DataSets = new vtkInterpolatedVelocityFieldDataSetsType;
}
vtkInterpolatedVelocityField::~vtkInterpolatedVelocityField()
{
  delete this->DataSets;
}
And in the .cxx file, use the container as you would any STL container:
for ( DataSetsTypeBase::iterator i = this->DataSets->begin(); i != this->DataSets->end(); ++i)
{
  ds = i->GetPointer();
  ....
}

What image file formats can VTK read and write?
The following table identifies the image file formats that VTK can read and write.
† A typical example of use is:
# Image pipeline
reader = vtkImageReader()
reader.SetDataByteOrderToBigEndian()
reader.SetDataExtent(0,511,0,511,0,511)
reader.SetFilePrefix("Ser397")
reader.SetFilePattern("%s/I.%03d")
reader.SetDataScalarTypeToUnsignedShort()
reader.SetHeaderSize(5432)

Printing an object
Sometimes when debugging you need to print an object to a string, either for logging purposes or, in the case of Windows applications, to display in a window. Here is a way to do this:
std::ostringstream os;
//
// "SomeVTKObject" could be, for example,
// declared somewhere as: vtkCamera *SomeVTKObject;
//
SomeVTKObject->Print(os);
std::string str = os.str();
//
// Process the string as you want

Writing a simple CMakeLists.txt
If you get something that looks like:
undefined reference to `__imp___ZN13vtkTIFFReader3NewEv'
collect2: ld returned 1 exit status
you almost certainly forgot to pass a library to your executable. The easiest way to fix this is to use a CMakeLists.txt file. For example, the minimal project is:
FIND_PACKAGE(VTK)
IF (VTK_FOUND)
  INCLUDE (${VTK_USE_FILE})
ENDIF (VTK_FOUND)
ADD_EXECUTABLE(tiff tiff.cxx)
TARGET_LINK_LIBRARIES(tiff vtkRendering)
This works since vtkRendering links against all the other VTK libraries, except if you are building VTK with Hybrid or Parallel, in which case you need to explicitly specify which library you want to link against.

Testing for VTK within a configure script
VTK uses CMake as its build tool, but if your VTK-based application wants to use autoconf and/or automake, then you will find useful an M4 macro file which detects, from your configure script, the presence or absence of VTK on the user's system. VTK won't add such a file to the official distribution, but you can always write your own, as I did. Look at the VTK_Autoconf page for more info.

How do I get my C++ code editor to do VTK-style indentation?
If you are writing code with VTK, you may want to follow the VTK Coding Standards. This is particularly important if you plan to contribute back to VTK. Most C++ code editors will help you with indenting, but the indenting may differ significantly from that prescribed by the VTK Coding Standards. Fortunately, most editors have enough options to allow you to change the indentation enough to get at least close to the VTK style. Below is a list of C++ editors and some suggestions on getting the indentation VTK compliant. If you use a popular editor that is not listed here, please feel free to contribute.

Microsoft Visual C++ .NET indentation
Under the "Tools" menu, select "Options". Go to the options under "Text Editor" and then "C/C++". Click the "Tabs" options. Set "Indenting" to "Smart", "Indent Size" to 2, and select "Insert spaces".
Click the "Formatting" options and enable "Indent braces". This will make most of the indentation correct. However, it will indent all of the braces. In VTK classes, most of the braces are indented, but those starting a class, method, or function are typically flush left. You will have to correct this on your own.

Emacs indentation
Place the Elisp Code for VTK-Style C Indentation in your .emacs file.

Vim indentation
Andy Cedilnik has some information on following the VTK coding guidelines using vim. You may place the following in your ~/.vimrc file:
set tabstop=2     " Tabs are two characters
set shiftwidth=2  " Indents are two characters too
set expandtab     " Do not use tabs
set cinoptions={1s,:0,l1,g0,c0,(0,(s,m1
" Keep tabs in makefiles as they are significant:
:autocmd BufRead,BufNewFile [Mm]akefile :set noexpandtab

How to display transparent objects?
(keywords: alpha, correct, depth, geometry, object, opacity, opaque, order, ordering, peel, peeling, sorting, translucent, transparent.)
When opaque geometry is rendered, there is no need to sort it, because the depth buffer (or z-buffer) is used and the sorting is done automatically by keeping the geometry closest to the viewpoint at a given pixel. (It is easy because it is a MAX/MIN calculation, not a real sort.) With translucent geometry, the final color of a pixel is the contribution of all the geometry primitives visible through that pixel. The color of the pixel is the result of a blending operation between the colors of all visible primitives. Blending operations themselves are usually order-dependent (i.e. not commutative). That's why depth sorting is required. There are two ways to fix the ordering in VTK:
- 1. Append all your polygonal geometry with vtkAppendPolyData and pass it to vtkDepthSortPolyData. See this tcl example. Depth sorting is done per centroid of geometry primitives, not per pixel. For this reason it is not exact, but it solves most of the ordering problems and usually gives results that are good enough.
- 2.
If the graphics card supports it, use "depth peeling". It performs per-pixel sorting (better result) but it is really slow.
It has been tested on Suns, SGIs, HPs, Alphas, RS6000s and many Windows and Mac workstations.

What Graphics Cards work with VTK
VTK uses OpenGL to perform almost all of its rendering, and some graphics cards/drivers have better support for OpenGL than others. This is not a listing of which cards perform well; it is a listing of which cards actually produce correct results. Here is a list of cards and their status, roughly in best-to-worst order.
- Any Nvidia desktop card on Windows -- 100% compatible
- Any ATI desktop card on Windows -- 100% compatible
- Mesa -- most releases pass all VTK tests
- Microsoft Software OpenGL -- passes all VTK tests but does have a couple of bugs
- Mac graphics cards -- these usually pass all VTK tests. Older cards may have some issues; for example, the ATI Rage 128 Pro does not support textures larger than 1024x1024.
- Non-linux UNIX cards (Sun, HP, SGI) -- these generally work
- Any Nvidia card under linux -- these usually pass all VTK tests but have some issues
- Any ATI card under linux -- these usually pass all VTK tests but have some issues
- Nvidia laptop graphics cards under Windows -- known to have some issues; newer cards pass all tests
- ATI laptop graphics cards under Windows -- known to have some issues; newer cards pass all tests (e.g. ATI Mobility Radeon 9600)
- Intel Extreme Graphics -- fails some VTK tests

How do I build the examples on the PC running Windows?
Since building the C++ examples on the PC isn't all that easy, here are some instructions from Jack McInerney.
Steps for creating a VTK C++ project (8/14/96). This is based on what I learned creating a project to run the Mace example. These steps allowed me to successfully build and run this example.
- Create a console project (File, New, then select Console application).
- Add the files of interest to the project.
(e.g., Mace.cxx)
- Under Build, select Update all Dependencies. A long list of .hh files will show up under dependencies. For this to work, Visual C++ needs to know where to look to find the include files. In my case they are at C:\VTK\VTK12SRC\INCLUDE. To tell Visual C++ to look there, go to Tools, Options. Select the tab Directories. Under the list for Include files add: C:\VTK\VTK12SRC\INCLUDE
- Compile the file Mace.cxx. This will lead to many warnings about data possibly being lost as double variables are converted to float variables. You can get rid of these by going to Build, Settings, and selecting the C++ tab. Under the General category, set Warning Level to 1 (instead of 3).
- Before linking, some additional settings must be modified. Go to Build, Settings, and select the Link tab. In the General category, add the libraries opengl32.lib and glaux.lib to the Object/Library Modules. Put a space between each file name. Then select the C++ tab and the Category: Code Generation. Under Use Run-Time Library, select Debug Multithreaded DLL. Select OK to exit the dialog box. The above libraries are available from Microsoft's Web site as a self-extracting archive which contains these files. Simply place them in your Windows system directory.
- Link the code by selecting Build, Build MaceProject.exe. I still get one warning when I do this, but it appears to be harmless.

When you go to run the program, it will bomb out unless it can find 2 DLLs: Opengl32.dll and Glu32.dll. These need to be located either in the project directory or the C:\WINDOWS directory. These files are supplied on the vtk CD-ROM (in the vtk\bin directory).

How do I build the Java examples on the PC running Windows?

One common issue building the examples is missing one or all of the vtkPanel, vtkCanvas and AxesActor classes. For whatever reason these are not in the vtk.jar (at least for 4.2.2).
But you can get them from the source distribution (just unzip the source and extract the needed .java files, and point your Java compiler to them). Another common issue appears to be class loading dependency errors. Make sure the directory with the .dll files is in your classpath when you run (the default location is C:\Program Files\vtk42\bin\). Yet this still seems insufficient for some of the libraries. One possible solution is to copy the Java awt.dll to this directory as well.

64-bit System Issues

VTK builds on 64-bit systems, that is, systems where sizeof(void*) is 64 bits. However, parts of the VTK codebase are not 64-bit clean, and so runtime problems are likely if that code is used.

General

VTK binary files are not compatible between 32-bit and 64-bit systems. For portability, use the default file type, ASCII, for vtkPolyDataWriter, etc. You may be able to write a binary file on a 64-bit system and read it back in.

Mac OS X Specific

Mac OS X 10.3 and earlier have no support for 64 bit. On Mac OS X 10.4, VTK cannot be built as 64 bit because it requires Carbon, Cocoa, or X11, none of which are available to 64-bit processes. On Mac OS X 10.5, Cocoa is available to 64-bit processes, but Carbon is not. VTK is known to work reasonably with 64-bit Cocoa.

Windows Specific

todo

What size swap space should I use on a PC?

Building VTK on the PC requires a significant amount of memory (at least when using Visual C++)... but the final product is nice and compact. To build VTK on the PC, we recommend setting the min/max swap space to at least 400MB/500MB (depending on how much RAM you have... the sum of RAM and swap space should be roughly 500+ MB).

Are there any benchmarks of VTK and/or the hardware it runs on?

Take a look at the "Simple Sphere Benchmark". It is not a "real world" benchmark, but it provides synthetic results comparing different hardware running VTK.

Why is XtString undefined when using VTK+Python on Unix?

This is a side effect of dynamic linking on (some?)
Unix systems. It appears often on Linux with the Mesa libraries at least. The solution is to make sure your Mesa libraries are linked with the Xt library. One way to do this is to add "-lXt" to MESA_LIB in your user.make file.

How do I get the Python bindings to work when building VTK with Borland C++?

If you've built VTK with the freely downloadable Borland C++ 5.5 (or its commercial counterpart) and you're using prebuilt Python binaries, you'll note that when you try to run a VTK Python example you get something similar to the following error message:

from vtkCommonPython import *
ImportError: dynamic module does not define init function (initvtkCommonPython)

This is because BCC32 prepends an underscore ("_") to all exported functions, so (in this case) vtkCommonPython.dll contains a symbol _initvtkCommonPython which Python does not find. All kits (e.g. Rendering, Filtering, Patented) will suffer from this problem. The solution is to create a Borland module definition (.def) file in the VTK binary (output) directory, in my case VTK/bin. You have to do this for all kits that you are planning to use in Python. Each .def file must have the same basename as the DLL, e.g. "vtkCommonPython.def" for vtkCommonPython.dll, and it must be present at VTK link time. The def file contains an export alias, e.g.:

EXPORTS
initvtkCommonPython=_initvtkCommonPython

The Borland compiler will create an underscore-less alias in the DLL file and Python will be able to load it as a module.

How do I build Python bindings on AIX?

There is a problem with dynamic loading on AIX. Old AIX did not have dlopen/dlsym; it used a load mechanism instead, and Python still reflects this. VTK is, however, not compatible with the old load mechanism. The following patch to Python 2.2.2 makes Python use dlopen/dlsym on AIX 5 or greater.

How to build VTK for offscreen rendering?

[this section is obsolete.
Mangled Mesa is not supported anymore in VTK>=5.2] (not sure about 5.0)

I struggled a few hours to get VTK to do offscreen rendering. I use it to batch process medical images. Without actually producing output on the screen, I still print resulting images in a report to easily review the results of an experiment. Here is how I solved this problem for VTK version 4.2.2.

1. Download the Mesa-4.0.4 source. Modify the following vars in the 'linux:' target of Mesa-4.0.4/Make-config:

GL_LIB = libVTKMesaGL.so
GLU_LIB = libVTKMesaGLU.so
GLUT_LIB = libVTKMesaglut.so
GLW_LIB = libVTKMesaGLw.so
OSMESA_LIB = libOSVTKMesa.so

In Mesa 6.2.1 you need to edit Mesa/configs/default instead:

# Library names (base name)
GL_LIB = VTKMesaGL
GLU_LIB = VTKMesaGLU
GLUT_LIB = VTKMesaglut
GLW_LIB = VTKMesaGLw
OSMESA_LIB = VTKMesaOSMesa

And then export this env var (note the spaces between the flags):

export CFLAGS="-O -g -ansi -pedantic -fPIC -ffast-math -DUSE_MGL_NAMESPACE -D_POSIX_SOURCE -D_POSIX_C_SOURCE=199309L -D_SVID_SOURCE -D_BSD_SOURCE -DUSE_XSHM -DPTHREADS -I/usr/X11R6/include"

Then, for Mesa 4.0.4:

make -f Makefile.X11 linux
cp Mesa-4.0.4/lib/* /data/usr/mesa404/lib/

In Mesa 6.2.1:

make linux-x86
make install

(I generally use /opt/VTKMesa/*.) I use the 'VTKMesa' name extension to avoid conflicts with my RH9.0 libs (especially the OSMesa lib in XFree!). I'm using shared libraries, because that allows me to use dynamic libs from VTK and not the vtk program itself without explicitly having to load VTKMesaGL with my app. I copied the 'VTKMesa' libs into /data/usr/mesa404/lib/, but any odd place probably will work. Avoid /usr/lib and /usr/local/lib for now.

2. Follow the normal instructions to get a proper working VTK, then ccmake with the following options: test using the /data/prog/VTK-4.2.2/Examples/MangledMesa/Tcl scripts.

If you're doing things on UNIX, you should also look at VTK Classes. It has links to RenderWindow objects that are probably easier to use than rebuilding VTK with Mesa.

How to get keyboard events working on Mac OS X?
On Mac OS X, there are (at least) two kinds of executables: - Application Bundles - plain UNIX executables For a program to be able to display a graphical interface (that is, display windows that allow mouse and keyboard interaction) it really should be an Application Bundle. If a plain UNIX executable tries, there will be various bugs, such as keyboard and mouse events not working reliably. Many, but not all, of the example VTK applications are built as plain UNIX executables, and thus have these problems. This is VTK bug 2025. When you build your own VTK application, it is best to make it in the form of an Application Bundle. With CMake 2.0.5 or later, simply add the following to your CMakeLists.txt file: IF(APPLE) SET(EXECUTABLE_FLAG MACOSX_BUNDLE) ENDIF(APPLE) If for some reason you cannot build as an Application Bundle (perhaps because your app needs command line parameters) you might be able to avoid the above problems by adding an __info_plist section to your Mach-O executable. If you succeed, please post to the VTK list. Can VTK be built as a Universal Binary on Mac OS X? For VTK 5.0.4 and older, the short answer is "no". For VTK CVS the short answer is "mostly". You need to set CMAKE_OSX_ARCHITECTURES to the architectures you want and CMAKE_OSX_SYSROOT to a Mac OS X SDK that supports Universal builds. The usual settings are: CMAKE_OSX_ARCHITECTURES=ppc;i386 CMAKE_OSX_SYSROOT=/Developer/SDKs/MacOSX10.4u.sdk This will result in a Universal build. However, there may be runtime bugs due to VTK's use of TRY_RUN. Work is being done to improve this situation. How can I stop Java Swing or AWT components from flashing or bouncing between values? While not strictly a VTK problem, this comes up fairly often when using Java-wrapped VTK. 
Try the following two JRE arguments to stop the Swing/AWT components flashing:

-Dsun.java2d.ddoffscreen=false
-Dsun.java2d.gdiblit=false

Note that these are classified as "unsupported properties," so they may not work on all platforms or installations (in particular, ddoffscreen refers to DirectDraw and, as such, is specific to Windows).

How can a user process access more than 2 GB of RAM in 32-bit Windows?

By default on Windows, the most memory that a user process can access is 2 GB, no matter how much RAM you have installed in your system. With Windows XP Professional you can make it possible for a process to use up to 3 GB of memory by doing two things:

1) Modify the boot parameters in boot.ini (on my 32-bit WinXP Pro machine, it's in "C:\boot.ini") to tell the operating system that you want user processes to have access to up to 3GB of RAM (this is a really important file, and if you don't know what you are doing, stop reading this and go back to work!). This is done by adding the /3GB flag to the line of the file that tells the boot loader where the operating system is. My boot.ini file looks like:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /3GB

This is a very bad file to make mistakes on, so don't - it may be very difficult to repair your computer to boot if you mess up this file. There is a nice description of this in the Microsoft article Memory Support and Windows Operating Systems.

2) The other thing that you need to do is make your executable LARGEADDRESSAWARE. Assuming that you have a Windows binary that you want to try this on, you can use the 'editbin' utility that comes with Visual Studio to change the setting of one bit (the IMAGE_FILE_LARGE_ADDRESS_AWARE bit) in the image header of the executable.
For a program 'prog.exe' you can make the change with:

editbin /LARGEADDRESSAWARE prog.exe

Of course, depending on how your program handles memory, you might find that it crashes when you try to use the extra memory, but that's a separate issue. If you are compiling your program with a version of Visual Studio you should be able to find the switch to make your program /LARGEADDRESSAWARE.

Suppose you have built a shared build of VTK and have set it up such that there is a path to the release version of VTK in your PATH statement. If you then debug a project that uses QVTKWidget, you will come across a problem: when debugging a debug build, the application depends upon the debug version of QVTK.dll, which will depend upon (and load) the debug Qt libraries such as QtGuid4.dll. But because the release version of QVTK.dll is in the path, the release QtGui4.dll will also be loaded, preventing the application from running. You will get a "QWidget: Must construct a QApplication before a QPaintDevice" error.

The solution to this problem is to set the path to the correct build of VTK in the "Debugging" properties of your project. Right click on your project, bring up the properties dialog, and select "Debugging" from the list on the left. There should be an "Environment" line. You can add variables here using key=value pairs. For example, add the following line:

PATH=<Path To VTK>\bin\$(OutDir);%PATH%

You can then add the same line to other configurations, such as the release one, by selecting them from the top left drop-down box labelled Configuration. $(OutDir) will be set by Visual Studio to either Debug or Release, depending upon what configuration you have selected. Make sure that ;%PATH% is appended so that Qt and other files can be appended to the PATH statement.

Changes to the VTK API

What is the policy on changes to the API?

Between patch releases, maintain the API unless there is a really strong reason not to.
Between regular releases, maintain backwards compatibility of the API with prior releases of VTK when doing so does not increase the complexity or decrease the readability of the current VTK, or when the benefits of breaking the API are negligible. Clearly these statements have a lot of wiggle room. For example, in vtkLightKit "BackLight" and "Headlight" were used and released. Now "BackLight" and "HeadLight" might make more sense and probably would be easier for non-native English speakers, but is it worth breaking the API for? Probably not. Another factor is how long the API has been around and how widely used it is. These all indicate how painful it will be to change the API, which is half of the cost/benefit decision.

Change to vtkIdList::IsId()

vtkIdList::IsId(int id) used to return 0 or 1 to indicate whether the specified id is in the list. Now it returns -1 if the id is not in the list, or a non-negative number indicating the position in the list.

Changes to vtkEdgeTable

vtkEdgeTable had two changes. The constructor now takes no arguments, and you use InitEdgeInsertion() to tell the class how many points are in the dataset. Also, IsEdge(p1,p2) now returns -1 if the edge (defined by points p1,p2) is not defined; otherwise a non-negative integer value is returned. These changes were made to support the association of attributes with edges.

Changes between VTK 4.2 and VTK 4.4 (and how to update)

We have removed the CVS date, revision, and the language from the copyright on all the files. This information wasn't being used much and it created extra work for developers. For example: you edit vtkObject.h, rebuild all of VTK, check in your change, then you must rebuild all of VTK again, because committing the header file causes it to be changed by CVS (because the revision number changed). This change will also make it easier to compare different branches of VTK, since these revision number differences will no longer show up.
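Going back to the vtkIdList::IsId()/vtkEdgeTable::IsEdge() change above: both now follow a "find-or-minus-one" convention. A small self-contained sketch (plain C++, no VTK required; the function name is hypothetical) shows why callers must now compare against -1 rather than treating the result as a boolean:

```cpp
#include <vector>

// Mimics the post-change vtkIdList::IsId() contract: return the position
// of `id` in the list, or -1 if it is not present. (Illustrative only;
// this is not the actual VTK implementation.)
int FindId(const std::vector<int>& ids, int id)
{
  for (int i = 0; i < static_cast<int>(ids.size()); ++i)
  {
    if (ids[i] == id)
    {
      return i; // non-negative position in the list
    }
  }
  return -1; // id is not in the list
}
```

Note that position 0 is a valid hit, so old code of the form `if (list->IsId(x))` must become `if (list->IsId(x) >= 0)` after this change.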
The CVS revision number is still in the cxx file in the RevisionMacro. You don't need to make any changes to your code for this. The DataArray classes now use a templated intermediate class to share their implementation. Again, there is no need for you to make changes to your code.

Legacy code has been removed. Specifically, none of the old-style callbacks are supported; observers should be used instead. So where you used filter->SetStartMethod(myFunc), you should do filter->AddObserver(vtkCommand::StartEvent, myCommand). Usually this will require you to create a small class for the observer. vtkImageOpenClose3D.cxx has an example of using an observer, and there are a few other examples in VTK. If you switch to using observers, your code should also work with versions of VTK from 3.2 onward, since observers have been in VTK since VTK 3.2.

Many functions that previously took or returned float now take or return double. To change your code to work with VTK 4.4 or later you can just replace float with double for the appropriate calls and variables. If you want your code to work with both old and new versions of VTK you can use vtkFloatingPointType, which is defined to be double in VTK 4.4 and later and float in VTK 4.2.5. In versions of VTK prior to 4.2.5 you can use something like:

#ifndef vtkFloatingPointType
#define vtkFloatingPointType vtkFloatingPointType
typedef float vtkFloatingPointType;
#endif

at the beginning of your code. That will set it to the correct value for all versions of VTK, old and new.

Use of New() and Delete() now enforced (vs. new & delete)

Constructors and destructors in VTK are now protected. This means you can no longer use little "new" or "delete" to create VTK instances. You'll have to use the methods ::New() and ::Delete() (as has been standard practice for some time). The reason for this is to enforce the use of New() and Delete().
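The pattern being enforced here can be sketched without VTK. The following is a minimal, self-contained illustration of protected constructors plus reference-counted New()/Delete() (the class name is hypothetical; the real vtkObject does much more):

```cpp
// Minimal sketch of the VTK-style New()/Delete() pattern: the constructor
// and destructor are protected, so instances can only be created through
// New() and released through Delete(), which drives a reference count.
class MyObject
{
public:
  static MyObject* New() { return new MyObject; }

  void Register() { ++this->ReferenceCount; }

  void Delete()
  {
    if (--this->ReferenceCount <= 0)
    {
      delete this; // destroyed only when the last reference goes away
    }
  }

  int GetReferenceCount() const { return this->ReferenceCount; }

protected:
  MyObject() : ReferenceCount(1) {} // protected: `new MyObject` fails outside
  ~MyObject() {}                    // protected: `delete obj` fails outside

private:
  int ReferenceCount;
};
```

Because the destructor is protected, a plain `delete obj;` outside the class is a compile error, which is exactly how the enforcement works.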
Not using New() and Delete() can lead to bad mojo, mainly reference counting problems, or not taking advantage of special procedures incorporated into the New() method (e.g., selecting the appropriate hardware interface at instance creation time). If you've used New() and Delete() in your code, these changes will not affect you at all. If you're using little "new" or "delete", your code will no longer compile and you'll have to switch to New() and Delete().

Changes between VTK 4.4 and VTK 4.6

Collection Changes

Collections have had some small changes (originally started by Chris Volpe) to better support reentrant iteration. Specifically, all the collections have InitTraversal(sit) and GetNextFoobar(sit) methods (where Foobar is what the collection contains, for example GetNextActor(sit)). The argument to both of these methods is a vtkCollectionSimpleIterator. Most of the collection use in VTK has been modified to use these new methods. The advantage is that these new methods support having the same collection be iterated through in a reentrant-safe manner. In the past this was not true and led to a number of problems. In the future, for C++ class development, please use this approach to iterating through a collection. These changes are fully backwards compatible and no old APIs were harmed in the making of these changes. So in summary, for the future, where you would have written:

for (actors->InitTraversal(); (actor = actors->GetNextActor());)

you would now have:

vtkCollectionSimpleIterator actorIt;
for (actors->InitTraversal(actorIt); (actor = actors->GetNextActor(actorIt));)

Changes in VTK between 3.2 and 4.0

- Changes to vtkDataSetAttributes, vtkFieldData and vtkDataArray: All attributes (scalars, vectors...) are now stored in the field data as vtkDataArrays. vtkDataSetAttributes became a sub-class of vtkFieldData.
For backwards compatibility, the interface which allows setting/getting the attributes the old way (by passing in a sub-class of vtkAttributeData such as vtkScalars) is still supported, but it will be removed in the future. Therefore, developers should use the new interface, which requires passing in a vtkDataArray to set an attribute. vtkAttributeData and its sub-classes (vtkScalars, vtkVectors...) will be deprecated in the near future; developers should use vtkDataArray and its sub-classes instead. We are in the process of removing the use of these classes from VTK filters.
- Subclasses of vtkAttributeData (vtkScalars, vtkVectors, vtkNormals, vtkTCoords, vtkTensors) were removed. As of VTK 4.0, vtkDataArray and its sub-classes should be used to represent attributes and fields. A detailed description of the changes and utilities for upgrading from 3.2 to 4.0 can be found in the package:
- Added special methods to data arrays to replace methods like "tc SetTCoord i x y 0" or "vc SetVector i vx vy vz" in interpreted languages (Tcl, Python, Java). Use: "tc SetTuple2 i x y" or "vc SetTuple3 i vx vy vz"
- Improved support for parallel visualization: vtkMultiProcessController and its sub-classes have been re-structured and mostly re-written. The functionality of vtkMultiProcessController has been re-distributed between vtkMultiProcessController and vtkCommunicator. vtkCommunicator is responsible for sending/receiving messages, whereas vtkMultiProcessController (and its subclasses) is responsible for program flow/control (for example, processing RMIs). New classes have been added to the Parallel directory. These include vtkCommunicator, vtkMPIGroup, vtkMPICommunicator, vtkSharedMemoryCommunicator, vtkMPIEventLog... There is now a Tcl interpreter which supports parallel scripts. It is called pvtk and can be built on Windows and Unix. Examples for both Tcl and C++ can be found in the examples directories.
- vtkSocketCommunicator and vtkSocketController have been added.
These support message passing via BSD sockets. They are best used together with input-output ports.
- Since it was causing very long compile times (it essentially includes every VTK header file) and it was hard to maintain (you had to add a line whenever you added a class to VTK), vtk.h was removed. You will have to identify the header files needed by your application and include them one by one.
- vtkIterativeClosestPointTransform has been added. This class is an implementation of the ICP algorithm. It matches two surfaces using the iterative closest point (ICP) algorithm. The core of the algorithm is to match each vertex in one surface with the closest surface point on the other, then apply the transformation that modifies one surface to best match the other (in a least-squares sense).
- The SetFileName, SaveImageAsPPM and related methods in vtkRenderWindow have been removed. vtkWindowToImageFilter combined with any of the image writers provides greater functionality.
- Support for reading and writing PGM and JPEG images has been included.
- Methods with parameters of the form "type param[n]" are wrapped. Previously, these methods were only wrapped if the array was declared 'const'. The Python wrappers will allow values to be returned in the array.
- The directory structure was completely reorganized. There are now subdirectories for Common (core common classes), Filtering (superclasses for filtering operations), Imaging (filters and sources that produce images or structured points), Graphics (filters or sources that produce data types other than ImageData and StructuredPoints), IO (file IO classes that do not require Rendering support), Rendering (all actors, mappers, annotation and rendering classes), Hybrid (typically filters and sources that require support from Rendering or from both Imaging and Graphics), Parallel (parallel visualization support classes), Patented (patented classes), Examples (documented examples) and Wrapping (support for the language wrappers).
In many directories you will see a Testing subdirectory. The Testing subdirectories contain tests used to validate VTK's operation. Some tests may be useful as examples, but they are not well documented.
- The build process for VTK now uses CMake. This replaces pcmaker on Windows and configure on UNIX. This resolves some longstanding problems and limitations we were having with pcmaker and configure, and unifies the build process into one place.

Changes to VTK between 4.0 and 4.2

- Use of macros to support serialization, standardize the New method, and provide the Superclass typedef.
- Subclassing of VTK classes in the Python wrappers (virtual method hooks are not provided).
- vtkImageWindow, vtkImager, vtkTkImageWindowWidget and their subclasses have been removed to reduce duplicated code and enable interaction in image windows. People should now use vtkRenderer and vtkRenderWindow instead. vtkImageViewer still works as a turnkey image viewing class, although it now uses vtkRenderWindow and vtkRenderer internally instead of vtkImageWindow and vtkImager.
- New class: vtkBandedPolyDataContourFilter. Creates solid colored bands (like you find on maps) of scalar value.
- Event processing: Several new events were added to VTK (see vtkCommand.h). Also, event processing can now be prioritized and aborted. This allows applications to manage who processes which events, and to terminate the processing of a particular event if desired.
- 3D Widgets: A new class vtkInteractorObserver was added to observe events on vtkRenderWindowInteractor. Using the new event processing infrastructure, multiple 3D widgets (subclasses of vtkInteractorObserver) can be used simultaneously to process interactions.
Several new 3D widgets have been added, including:
- vtkLineWidget
- vtkPlaneWidget
- vtkImagePlaneWidget
- vtkBoxWidget
- vtkSphereWidget
- Besides providing a representation, widgets also provide auxiliary functionality such as providing transforms, implicit functions, plane normals, sphere radius and center, etc.
- New class: vtkInstantiator provides a means by which one can create an instance of a VTK class using only the name of the class as a string.
- New class: vtkXMLParser provides a wrapper around the Expat XML parsing library. A new parser can be written by subclassing from vtkXMLParser and providing a few simple virtual method implementations.
- The TIFF reader is now implemented using libtiff, which makes it capable of reading almost all available TIFF formats. libtiff is also available internally as vtktiff.
- New method (all sub-classes of vtkObject): Added a virtual function called NewInstance to vtkTypeMacro. NewInstance creates and returns an object of the same type as the current one. It does not copy any properties. The returned pointer is of the same type as the pointer the method was invoked with. This method should replace all the MakeObject methods scattered through VTK.
- The vtkSetObjectMacro is deprecated for use inside VTK. It is still a valid construct in projects that use VTK. Instead, use vtkCxxSetObjectMacro, which does the same thing.
- vtkPLOT3DReader has been improved. It now supports:
  - multigrid (each block is one output)
  - ASCII
  - Fortran-style byte counts
  - little/big endian
  - i-blanking (partial)
- A new vtkTextProperty class has been created, and duplicated text APIs have been obsoleted accordingly. Check the "Text properties in VTK 4.2" FAQ entry for a full description of the change.

How do I upgrade my existing C++ code from 3.2 to 4.x?

This is (a corrected version of) an email that was posted to vtkusers. Please feel free to correct or add anything.
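The NewInstance() method mentioned above is essentially the "virtual constructor" pattern: asking an object for a fresh, default-constructed instance of its own concrete type without knowing that type statically. A self-contained sketch of the idea (class names are hypothetical; this is not VTK code):

```cpp
#include <string>

// Sketch of the "virtual constructor" idea behind vtkObject::NewInstance():
// each concrete class overrides NewInstance() to return a new object of its
// own type, so callers holding only a base-class pointer can still clone
// the concrete type. No properties are copied, matching the VTK behavior.
class Shape
{
public:
  virtual ~Shape() {}
  virtual Shape* NewInstance() const = 0;
  virtual std::string GetClassName() const = 0;
};

class Circle : public Shape
{
public:
  Shape* NewInstance() const override { return new Circle; }
  std::string GetClassName() const override { return "Circle"; }
};
```

Given a `Shape*` that actually points at a `Circle`, calling `NewInstance()` yields a brand-new `Circle`, which is exactly what lets NewInstance replace the old MakeObject methods.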
What is the release schedule for VTK?

VTK has a formal release every eight to sixteen months. VTK 4.0 was cut in December 2001 and released in March 2002. VTK 4.2 was released in February 2003. VTK 4.4 (which was an interim release) was released at the end of 2003. VTK 5.0 was released in January 2006, 5.0.1 in July 2006, 5.0.2 in September 2006, 5.0.3 in March 2007, and 5.0.4 in January 2008.

Roadmap: What changes are being considered for VTK?

This is a list of changes that are being considered for inclusion into VTK. Some of these changes will happen, while others we would like to see happen but may not, due to funding or time issues. For each change we try to list what the change is, when we hope to complete it, and whether it is actively being developed. Detailed discussion on changes is limited to the vtk-developers mailing list.

- Modify existing image filters to use the new vtkImageIterator etc. Most simple filters have been modified to use the iterator in VTK 4.2. It would be nice to have some sort of efficient neighborhood iterators, but so far we haven't come up with any.
- Rework the polydata and unstructured grid structures (vtkMesh ??). Related ideas include:
  - Make UnstructuredGrid more compact by removing the cell point count from the vtkCellArray. This will reduce the storage required by each cell by 4 bytes.
  - Make vtkPolyData an empty subclass of vtkUnstructuredGrid. There are a number of good reasons for this, but it is a tricky task and backwards compatibility needs to be maintained.
- More parallel support, including parallel compositing algorithms
- Algorithms like LIC (line integral convolution), maybe a couple of terrain-decimation algorithms
- Further integration of STL and other important C++ constructs (like templates)

VTK 4.4 (intermediate release, end of 2003)
- convert APIs to double (done)
- remove old callbacks (done)
- blanking
- ref count observers (done)
- switch collections to use iterators (done)
- improve copyright (done)

VTK 5.0 (major release, early 2005)
- new pipeline mechanism (see Pipeline.pdf)
- time support
- true AMR support

Changes to Interactors

The interactors have been updated to use the Command/Observer events of VTK. The vtkRenderWindowInteractor now has ivars for all the event information. There is a new class called vtkGenericRenderWindowInteractor that can be used to set up the bindings from other languages like Python, Java or Tcl. A new class vtkInteractorObserver was also added. It has a SetInteractor() method. It observes the keypress and delete events invoked by the render window interactor. The keypress activation value for a widget is now 'i' (although this can be programmed). vtkInteractorObserver has the state ivar Enabled. All subclasses must have the SetEnabled(int) method. Convenience methods like On(), Off(), EnabledOn(), and EnabledOff() are available. The state of the interactor observer is obtained using GetEnabled(). The SetEnabled(1) method adds observers to watch the interactor (appropriate to the particular interactor observer); SetEnabled(0) removes the observers. There are two new events, EnableEvent and DisableEvent, which are invoked by the SetEnabled() method. The events also support the idea of priority now. When you add an observer, you can specify a priority from 0 to 1. Higher values will be called back first. An observer can also tell the object not to call any more observers. This way you can handle an event and stop further processing.
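The priority-and-abort dispatch just described can be sketched in plain C++ without VTK (all names here are hypothetical; the real mechanism lives in vtkObject/vtkCommand):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Sketch of prioritized event dispatch with an abort mechanism, in the
// spirit of the VTK observer scheme described above (not the real VTK code).
// Observers with higher priority run first; an observer whose callback
// returns true "consumes" the event and stops further processing.
struct Observer
{
  double priority;                 // 0.0 .. 1.0, higher runs first
  std::function<bool()> callback;  // return true to abort further dispatch
};

class EventSource
{
public:
  void AddObserver(double priority, std::function<bool()> cb)
  {
    this->Observers.push_back({priority, cb});
    // Keep highest priority first (stable sort, so ties keep insertion order).
    std::stable_sort(this->Observers.begin(), this->Observers.end(),
                     [](const Observer& a, const Observer& b)
                     { return a.priority > b.priority; });
  }

  int InvokeEvent() // returns how many observers actually ran
  {
    int ran = 0;
    for (const Observer& o : this->Observers)
    {
      ++ran;
      if (o.callback())
      {
        break; // event handled; skip the remaining observers
      }
    }
    return ran;
  }

private:
  std::vector<Observer> Observers;
};
```

With this scheme, a high-priority observer that consumes the event prevents every lower-priority observer from being called, which is how a handler added from a wrapped language can override an InteractorStyle binding.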
In this way you can add handlers to InteractorStyles without sub-classing, and from wrapped languages. For more information see: vtkGenericRenderWindowInteractor, vtkRenderWindowInteractor, vtkInteractorObserver.

Header files and vtkSetObjectMacro

On some platforms, such as MS Visual Studio .NET, the compiler cannot handle very large input files. Some VTK files, with all their includes, do become big enough to overwhelm the compiler. The solution is to minimize the number of includes. This especially goes for header files, because they propagate to other files. Every class header file should include only the parent class header file. If there is no other alternative, you should put a comment next to the include explaining why the file has to be included.

A related issue is with vtkSetObjectMacro. This macro calls methods on an argument class, which implies that the argument class header file has to be included. The result is header-file bloat. The solution is to use vtkCxxSetObjectMacro instead of vtkSetObjectMacro. The difference is that vtkCxxSetObjectMacro goes in the cxx file and not in the header file. Example: Instead of

#include "vtkBar.h"
class vtkFoo : public vtkObject
{
...
vtkSetObjectMacro(Bar, vtkBar);
...
};

Do:

class vtkBar;
class vtkFoo : public vtkObject
{
...
virtual void SetBar(vtkBar*);
...
};

and add the following line to vtkFoo.cxx:

vtkCxxSetObjectMacro(vtkFoo,Bar,vtkBar);

Text properties in VTK 4.2

A new vtkTextProperty class has been added to VTK 4.2. This class factorizes text attributes that used to be spread out and duplicated in many different classes (mostly 2D actors and text mappers). Among those attributes, font family, font size, bold/italic/shadow properties, horizontal and vertical justification, line spacing and offset have been retained, whereas new attributes like color and opacity have been introduced.
We tried to make sure that you can use a vtkTextProperty to modify text properties in the same way a vtkProperty can be used to modify the surface properties of a geometric object. In that regard, you should be able to share a vtkTextProperty between different actors, or assign the same vtkTextProperty to an actor that offers multiple vtkTextProperty attributes (vtkXYPlot for example). Here is a quick example:

vtkTextActor *actor0 = vtkTextActor::New();
actor0->GetTextProperty()->SetItalic(1);
//
vtkTextProperty *tprop = vtkTextProperty::New();
tprop->SetBold(1);
//
vtkTextActor *actor1 = vtkTextActor::New();
actor1->SetTextProperty(tprop);
//
vtkTextActor *actor2 = vtkTextActor::New();
actor2->SetTextProperty(tprop);

Backward compatibility issues:

1) Color and Opacity: The text color and text opacity settings are now controlled by the vtkTextProperty Color and Opacity attributes instead of the corresponding actor's color and opacity attributes. In the following example, those settings were controlled by the attributes of the vtkProperty2D attached to the vtkActor2D (vtkTextActor). The vtkTextProperty attributes should be used instead:

vtkTextActor *actor = vtkTextActor::New();
actor->GetProperty()->SetColor(...);
actor->GetProperty()->SetOpacity(...);

becomes:

actor->GetTextProperty()->SetColor(...);
actor->GetTextProperty()->SetOpacity(...);

To make migration easier for a while, we have set the vtkTextProperty default color to (-1.0, -1.0, -1.0) and the default opacity to -1.0. These "magic" values are checked by the underlying text mappers at rendering time. If they are found, the color and opacity of the 2D actor's vtkProperty2D are used, just as in VTK 4.1.
Moreover, if one class had different text elements (say, for example, the title and the labels of a scalar bar), there was no way to modify the text properties of these elements separately. The vtkTextProperty class has been created to address both issues, by obsoleting those duplicated attributes and methods and providing a unified way to access text properties, and by allowing each class to associate different vtkTextProperty objects with different text elements. Migrating your code basically involves using the old API on your actor's vtkTextProperty instead of the actor itself. For example:

  actor->SetBold(1);

becomes:

  actor->GetTextProperty()->SetBold(1);

When a class provides different vtkTextProperty objects for different text elements, the TextProperty attribute is usually prefixed with that element type. Example: AxisTitleTextProperty, or AxisLabelTextProperty. This allows you to set a different aspect for each text element. If you want to use the same properties, you can either set the same values on each vtkTextProperty, or make both attributes point to the same vtkTextProperty object. Example:

  actor->GetAxisLabelTextProperty()->SetBold(1);
  actor->GetAxisTitleTextProperty()->SetBold(1);

or:

  vtkTextProperty *tprop = vtkTextProperty::New();
  tprop->SetBold(1);
  actor->SetAxisLabelTextProperty(tprop);
  actor->SetAxisTitleTextProperty(tprop);

or:

  actor->SetAxisLabelTextProperty(actor->GetAxisTitleTextProperty());
  actor->GetAxisTitleTextProperty()->SetBold(1);

The following list specifies the names of the text properties used in the VTK classes involving text.

vtkTextMapper:
- you can still use the vtkTextMapper + vtkActor2D combination, but we would advise you to use a single vtkTextActor instead; this will give you maximum flexibility.
- has 1 text prop: TextProperty, but although you have access to it, do not tweak it unless you are using vtkTextMapper with a vtkActor2D. In all other cases, use the text prop provided by the actor (see below).

vtkTextActor:
- has 1 text prop: TextProperty.
vtkLabeledDataMapper:
- has 1 text prop: LabelTextProperty.

vtkCaptionActor2D:
- has 1 text prop: CaptionTextProperty.

vtkLegendBoxActor:
- has 1 text prop: EntryTextProperty.

vtkAxisActor2D, vtkParallelCoordinatesActor, and vtkScalarBarActor:
- have 2 text props: TitleTextProperty, LabelTextProperty.

vtkXYPlotActor:
- has 3 text props: TitleTextProperty (plot title), AxisTitleTextProperty and AxisLabelTextProperty (title and labels of all axes).
- the legend box text prop (i.e. entry text prop) can be retrieved through actor->GetLegendBoxActor()->GetEntryTextProperty()
- the X (or Y) axis text props (i.e. title and label text props) can be retrieved through actor->GetX/YAxisActor2D->GetTitle/LabelTextProperty(), and will override the corresponding AxisTitleTextProperty or AxisLabelTextProperty props as long as they remain untouched.

vtkCubeAxesActor2D:
- has 2 text props: AxisTitleTextProperty, AxisLabelTextProperty (title and label of all axes).
- the X (Y or Z) axis text props (i.e. title and label text props) can be retrieved through actor->GetX/Y/ZAxisActor2D->GetTitle/LabelTextProperty(), and will override the corresponding AxisTitleTextProperty or AxisLabelTextProperty props as long as they remain untouched.

Forward declaration in VTK 4.x

Since VTK 4.x, all classes have been carefully inspected to include only the necessary headers and to use what is called 'forward declaration' for all other needed classes. Thus, when you compile a project using a filter that takes a dataset as input and you are passing a vtkImageData, you need to explicitly include vtkImageData in your implementation file. This is true for all data types. For example, if you get this error:

  no matching function for call to `vtkContourFilter::SetInput(vtkImageData*)'
  VTK/Filtering/vtkDataSetToPolyDataFilter.h:44: candidates are: virtual void vtkDataSetToPolyDataFilter::SetInput(vtkDataSet*)

it means you need to add the following to your code:

  #include "vtkImageData.h"

Using Volume Rendering in VTK

I recently updated my VTK CVS version.
My C++ code that used to work fine is now complaining about:

  undefined reference to `vtkUnstructuredGridAlgorithm::SetInput(vtkDataObject*)'
  undefined reference to `vtkUnstructuredGridAlgorithm::GetOutput()'

There is now a new subfolder and a new option to enable building the VolumeRendering library. You have to turn VTK_USE_VOLUMERENDERING to ON in order to use it. Also make sure that your executable is linking properly to this new library:

  ADD_EXECUTABLE(foo foo.cxx)
  TARGET_LINK_LIBRARIES(foo vtkVolumeRendering)

API Changes in VTK 5.2

vtkProp::RenderTranslucentGeometry() is gone

vtkProp::RenderTranslucentGeometry() has been broken down into 3 methods:
- HasTranslucentPolygonalGeometry()
- RenderTranslucentPolygonalGeometry()
- RenderVolumetricGeometry()

Here is what to change in a vtkProp subclass:
- If RenderTranslucentGeometry() was used to render translucent polygonal geometry only, override HasTranslucentPolygonalGeometry() and RenderTranslucentPolygonalGeometry(). Just renaming RenderTranslucentGeometry() to RenderTranslucentPolygonalGeometry() is not enough!
- If RenderTranslucentGeometry() was used to render translucent volumetric geometry only, override RenderVolumetricGeometry(). In this case, just renaming RenderTranslucentGeometry() to RenderVolumetricGeometry() is OK.
- If RenderTranslucentGeometry() was used to render both translucent polygonal geometry and translucent volumetric geometry, override all 3 methods.

The reason for this change is that HasTranslucentPolygonalGeometry() is used to decide whether an expensive initialization of the new rendering algorithm for translucent polygonal geometry (depth peeling) is necessary. RenderTranslucentPolygonalGeometry() is called multiple times during the rendering of the translucent polygonal geometry of the scene. RenderVolumetricGeometry() is called in an additional pass, after depth peeling.
For this reason, RenderTranslucentGeometry() could not just be marked as deprecated but had to be removed from the API.

vtkImagePlaneWidget action names changed

The action names changed from:

  enum { CURSOR_ACTION = 0, SLICE_MOTION_ACTION = 1, WINDOW_LEVEL_ACTION = 2 };

to:

  enum { VTK_CURSOR_ACTION = 0, VTK_SLICE_MOTION_ACTION = 1, VTK_WINDOW_LEVEL_ACTION = 2 };

GetOutput() now returns vtkDataObject for some algorithms

The following algorithms now work on vtkGraph as well as vtkDataSet, so GetOutput() no longer returns vtkDataSet. To obtain the dataset, use vtkDataSet::SafeDownCast(filter->GetOutput()):
- vtkArrayCalculator
- vtkAssignAttribute
- vtkProgrammableFilter

API Changes in VTK 5.4

- empty right now.

API Changes in VTK 5.5

vtkStreamTracer:

Changed

  enum Units { TIME_UNIT, LENGTH_UNIT, CELL_LENGTH_UNIT }

to

  enum Units { LENGTH_UNIT = 1, CELL_LENGTH_UNIT = 2 }

Changed OUT_OF_TIME = 4 to OUT_OF_LENGTH = 4 in enum ReasonForTermination.

Changed LastUsedTimeStep to LastUsedStepSize.

Changed
- MaximumPropagation
- MaximumIntegrationStep
- MinimumIntegrationStep
- InitialIntegrationStep
from type IntervalInformation to type double.
Added a member variable to the class:
- int IntegrationStepUnit

The following APIs were removed from the class:
- void SetMaximumPropagation(int unit, double max)
- void SetMaximumPropagationUnit(int unit)
- int GetMaximumPropagationUnit()
- void SetMaximumPropagationUnitToTimeUnit()
- void SetMaximumPropagationUnitToLengthUnit()
- void SetMaximumPropagationUnitToCellLengthUnit()
- void SetMinimumIntegrationStep(int unit, double step)
- void SetMinimumIntegrationStepUnit(int unit)
- int GetMinimumIntegrationStepUnit()
- void SetMinimumIntegrationStepUnitToTimeUnit()
- void SetMinimumIntegrationStepUnitToLengthUnit()
- void SetMinimumIntegrationStepUnitToCellLengthUnit()
- void SetMaximumIntegrationStep(int unit, double step)
- void SetMaximumIntegrationStepUnit(int unit)
- int GetMaximumIntegrationStepUnit()
- void SetMaximumIntegrationStepUnitToTimeUnit()
- void SetMaximumIntegrationStepUnitToLengthUnit()
- void SetMaximumIntegrationStepUnitToCellLengthUnit()
- void SetInitialIntegrationStep(int unit, double step)
- void SetInitialIntegrationStepUnit(int unit)
- int GetInitialIntegrationStepUnit()
- void SetInitialIntegrationStepUnitToTimeUnit()
- void SetInitialIntegrationStepUnitToLengthUnit()
- void SetInitialIntegrationStepUnitToCellLengthUnit()
- void SetIntervalInformation(int unit, double interval, IntervalInformation& currentValues)
- void SetIntervalInformation(int unit, IntervalInformation& currentValues)
- void ConvertIntervals(double& step, double& minStep, double& maxStep, int direction, double cellLength, double speed)
- static double ConvertToTime(IntervalInformation& interval, double cellLength, double speed)
- static double ConvertToLength(IntervalInformation& interval, double cellLength, double speed)
- static double ConvertToCellLength(IntervalInformation& interval, double cellLength, double speed)
- static double ConvertToUnit(IntervalInformation& interval, int unit, double cellLength, double speed)

The following APIs were added to the class:
- int GetIntegrationStepUnit()
- void SetIntegrationStepUnit(int unit)
- void ConvertIntervals(double& step, double& minStep, double& maxStep, int direction, double cellLength)
- static double ConvertToLength(double interval, int unit, double cellLength)
- static double ConvertToLength(IntervalInformation& interval, double cellLength)

vtkInterpolatedVelocityField:

Added a new member variable and two associated functions:
- bool NormalizeVector
- vtkSetMacro(NormalizeVector, bool)
- vtkGetMacro(NormalizeVector, bool)

OpenGL requirements

Terminology
- A software component using OpenGL (like VTK) requires some minimal version of OpenGL and some minimal set of OpenGL extensions at runtime. At compile time, it requires an OpenGL header file (gl.h) compatible with some minimal version of the OpenGL API.
- An OpenGL implementation (software, like Mesa, or hardware, i.e. a combination of a graphics card and its driver) supports some OpenGL versions and a set of extensions.

How do I check which OpenGL versions or extensions are supported by my graphics card or OpenGL implementation?

Linux/Unix

Two ways:
- General method:

  $ glxinfo

- Vendor-specific tool: if you have an nVidia card and nvidia-settings installed, run it and go to the OpenGL/GLX Information item under the X Screen 0 item.

Windows

You can download and use GLview.

Mac OS X

With Xcode installed: Macintosh HD -> Developer -> Applications -> Graphic Tools -> OpenGL Driver Monitor.app -> Monitors -> Renderer Info -> <name of the OpenGL driver> -> OpenGL Extensions

VTK 5.0

What is the minimal OpenGL version of the API required to compile VTK 5.0?

The gl.h file provided by your compiler/system/SDK has to define at least the OpenGL 1.1 API. (Note: the functions and macros defined in higher OpenGL API versions or in other OpenGL extensions are provided by glext.h, glxext.h and wglext.h. Those 3 files are official files already part of the VTK source tree.)

What is the minimal OpenGL version required by VTK 5.0 at runtime?
All the VTK classes using OpenGL require an OpenGL implementation (software or hardware) >= 1.1, except for vtkVolumeTextureMapper3D.

VTK 5.2

What is the minimal OpenGL version of the API required to compile VTK 5.2?

Same answer as for VTK 5.0.

What is the minimal OpenGL version required by VTK 5.2 at runtime?

All the VTK classes using OpenGL require an OpenGL implementation (software or hardware) >= 1.1, except for vtkVolumeTextureMapper3D, vtkHAVSVolumeMapper, vtkGLSLShaderProgram, depth peeling and some hardware offscreen rendering using framebuffer objects (FBO).

If you want to use vtkHAVSVolumeMapper, the following extensions or OpenGL versions are required (at runtime):
- OpenGL >= 1.3
- GL_ARB_draw_buffers or OpenGL >= 2.0
- GL_ARB_fragment_program
- GL_ARB_vertex_program
- GL_EXT_framebuffer_object
- either GL_ARB_texture_float or GL_ATI_texture_float

The following extension or OpenGL version is used by vtkHAVSVolumeMapper if provided (at runtime), but it is optional:
- GL_ARB_vertex_buffer_object or OpenGL >= 1.5

If you want to use vtkGLSLShaderProgram, the following extensions or OpenGL versions are required (at runtime):
- OpenGL >= 1.3
- GL_ARB_shading_language_100 or OpenGL >= 2.0
- GL_ARB_shader_objects or OpenGL >= 2.0
- GL_ARB_vertex_shader or OpenGL >= 2.0
- GL_ARB_fragment_shader or OpenGL >= 2.0

Depth peeling (see VTK Depth Peeling for more information) has additional runtime requirements.

Hardware-based offscreen rendering using a framebuffer object (FBO) will be used as the default offscreen method if the following extensions or OpenGL versions are available (at runtime):
- GL_EXT_framebuffer_object, and either
- GL_ARB_texture_non_power_of_two or OpenGL >= 2.0, or
- GL_ARB_texture_rectangle

In addition, if the framebuffer needs a stencil buffer, the extension GL_EXT_packed_depth_stencil is required. Even if all those extensions are supported, the chosen FBO format might not be supported by the card; in this case, this method of offscreen rendering is not used.
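The version thresholds above are checked at runtime by parsing the string the implementation reports. As a minimal, GL-free sketch (a hypothetical helper, not VTK's actual code; the input would come from glGetString(GL_VERSION), which OpenGL guarantees begins with "<major>.<minor>"):

```cpp
#include <cassert>
#include <cstdio>

// Return true when the reported OpenGL version string (e.g.
// "2.1.0 NVIDIA 96.43") satisfies a requirement like "OpenGL >= 1.1".
// Hypothetical helper for illustration only; no GL headers needed,
// since all the GL-specific work is producing the string itself.
bool MeetsGLVersion(const char* versionString, int requiredMajor, int requiredMinor)
{
    int major = 0;
    int minor = 0;
    // The GL spec guarantees the string starts with "<major>.<minor>".
    if (std::sscanf(versionString, "%d.%d", &major, &minor) != 2)
    {
        return false;  // unparseable string: treat as unsupported
    }
    if (major != requiredMajor)
    {
        return major > requiredMajor;  // a newer major implies all older minors
    }
    return minor >= requiredMinor;
}
```

Extension checks work the same way in spirit: the implementation reports a space-separated extension list, and the component searches it for the names it needs before enabling a code path.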
Miscellaneous questions

Can't you split up the data file?

The data is now in one file that is about 15 megabytes. This is smaller than the original data files VTK used, and we hope that this size is no longer a problem for people. If it is, please let us know.

VTK version 4.0 and later supports both shared and static libraries on almost all platforms. For development we typically use shared libraries since they are faster to link when making small changes. You can control how VTK builds by setting the BUILD_SHARED_LIBS option in CMake.

Legal issues

Is VTK FDA-approved?

Given the fact that VTK is a software toolkit, it cannot be the subject of FDA approval as a medical device. We have discussed this topic on several occasions and received advice from FDA representatives that can be summarized as follows: VTK is to be considered an off-the-shelf (OTS) product that is used for supporting a higher-level medical application/product. The developer of such an application/product will be responsible for performing the validation processes described in FDA published guidelines for the development of software-related medical devices. For more details see the page FDA Guidelines for Software Development.

What are the legal issues?

The Visualization Toolkit software is provided under the following copyright. We think it's pretty reasonable. We do restrict the distribution of modified code. This is primarily a revision control issue. We don't want a bunch of renegade vtks running around without us having some idea why the changes were made and giving us a chance to incorporate them into the general release. The text of the VTK copyright is available here.

What is the deal with the patents?

As the copyright mentions, there are some patents used in VTK. If you use any code in the Patented/ directory for a commercial application you should contact the patent holder and obtain a license.
As of VTK 4.0 the following classes are known to use algorithms patented by General Electric Company: vtkDecimate, vtkMarchingCubes, vtkMarchingSquares, vtkDividingCubes, vtkSliceCubes and vtkSweptSurface. The GE contact is:

  Carl B. Horton
  Sr. Counsel, Intellectual Property
  3000 N. Grandview Blvd., W-710
  Waukesha, WI 53188
  Phone: (262) 513-4022
  E-Mail: Carl.Horton@med.ge.com

As of VTK 4.0 the following classes are known to use algorithms patented by Kitware, Inc.: vtkGridSynchronizedTemplates3D, vtkKitwareContourFilter, vtkSynchronizedTemplates2D, and vtkSynchronizedTemplates3D. The Kitware contact is:

  Ken Martin
  Kitware
  28 Corporate Drive, Suite 204
  Clifton Park, NY 12065
  Phone: 1-518-371-3971
  E-Mail: kitware@kitware.com

Can VTK be used as part of a project distributed under a GPL license?

Short answer

Yes, it is fine to take VTK code and include it in a project that is distributed under a GPL license.

Long answer

Terms: let's call "project X" the larger project that:
- will include source code from VTK (in part or as a whole)
- will be distributed under a GPL license

Note in particular that:
- The copyright notices in VTK files must be kept.
- If VTK files are modified by the developers of project X, that fact must be clearly indicated.
- Only the modifications of VTK files made by the developers of project X will be covered by a GPL license. The original VTK code remains covered by the VTK license.
- The collection of copyrighted works (project X in this case) that includes VTK (in part or as a whole) will be covered by a GPL license.

Details: the VTK license is a variation of the Modified BSD license, to which only the following term has been added:

  Modified source versions must be plainly marked as such, and must not be misrepresented as being the original software.

Since the Modified BSD license is itself compatible with the GPL, the VTK license is also compatible with the GPL license.
The terms of the GPL license do not preclude the additional term of the VTK license from being followed.

NOTE: The licenses are only compatible one way.
- You can use VTK code inside a GPL-licensed project.
- You cannot use GPL-licensed code inside VTK.

That is the reason why there are no GPL third-party libraries in VTK. Having GPL third-party libraries in VTK would prevent closed-source projects from being built against VTK.

Common problems and their solutions

There are some problems that arise frequently while building VTK that have very straightforward solutions. Here they are.
http://www.paraview.org/Wiki/VTK_FAQ
In most cases, the client sends a request to the server when the following methods of an item are executed:

In this case the client sends to the server the ID of the item's task, the ID of the item, the type of the request and its parameters. On receiving the request, the server, based on the passed IDs, finds the task (it can be a Project task or an Application builder task) and the item on the server, executes the corresponding method with the passed parameters, and returns the result of the execution to the client. The server method can trigger events that can modify its default behavior. Every item of the task tree has the environ and session attributes that store the context of the current request. The most common server events are:

Note

Please always remember that events triggered by server methods can be executed in parallel threads or processes. Be very careful when modifying the attributes of the items of the task tree. You can use the copy method to create a copy of an item. This copy is an exact copy of the item at the time the task tree was created. It is not added to the task tree and will be destroyed by the Python garbage collector when no longer needed. Example:

def on_generate(report):
    cust = report.task.customers.copy()
    cust.open()
    report.print_band('title')
    for c in cust:
        firstname = c.firstname.display_text
        lastname = c.lastname.display_text
        company = c.company.display_text
        country = c.country.display_text
        address = c.address.display_text
        phone = c.phone.display_text
        email = c.email.display_text
        report.print_band('detail', locals())
http://jam-py.com/docs/programming/server/index.html
I am trying to build the sample from Chapter 7, the SimulatedQuadDifferentialDrive, but I am having problems using the command line as mentioned in the book. I write the code shown in the book, replacing Microsoft Robotics Studio (1.5) with Microsoft Robotics Dev Studio R3, but it is not recognizing the /i: option. I see the help shown after it is not recognized, but there is no /i listed. Any suggestions? Here is what I am typing using the DSS command prompt:

dssnewservice /s:SimulatedQuadDifferentialDrive /i:"\Microsoft Robotics Dev Studio R3\bin\RoboticsCommon.dll" /alt: /namespace:"ProMRDS.Simualtion.QuadDifferentialDrive" /year:"2007" /month:"07"

The error I get is that it does not recognize the command.

I think this was answered elsewhere, but you don't need the /i anymore.

Trevor
https://social.msdn.microsoft.com/Forums/en-US/76e2544a-952f-42ca-8ae6-398e22c45c50/promrds-chapter-6-unable-to-connect-to-default-quad-differential-drive-service?forum=roboticsdss
paul@... wrote:
> Anyway, if you take a look at my Web site () you'll find the almost-latest installment of my XMLForms package (the successor to my XForms package). It contains form descriptions in XML and some interesting applications of hidden fields in order to provide client-side persistence without cookies. Whilst I'm sure your time is precious, you may want to investigate some of the concepts - it's really quite interesting, honest! ;-)

Yes, it works without problems (I uncommented the context stuff in __init__.py to use the examples, and in products.py and sites.py I had to use

  from WebKit.Examples.ExamplePage import ExamplePage

Strangely enough, in another Webware version it worked without these changes. But my reference right now is Webware-0.5.1rc3.tar.gz and Python-2.1 (Webware needs some small tweaks to be completely 2.1 compliant; regex -> re changes mostly). XMLForms incorporates a lot of interesting ideas. It is not really fast, though, but it's the first time that PyXML/Python and some application works without any problems for me :-) I have the impression that a lot of the data handling can be done with Cans, but the more approaches out there the better. I'm still looking for my favorite way to write Webware applications (I haven't found it yet :-))

-- Tom Schwaller tschwaller@...

> -----Original Message-----
> From: webware-discuss-admin@... [mailto:webware-discuss-admin@...] On Behalf Of Tavis Rudd
> Sent: 2. maí 2001 16:48
> To: Sasa Zivkov; webware-discuss@...
> Subject: Re: [Webware-discuss] Er, parsing?
>
> > It seems you can:
> > >>> import re
> > >>> re.findall("""\$\(\w* \w*="[^"]*"\)""", a)
> > ['$(foo bar=")")']
>
> yes, but what about:
> """ $(functionName( $anotherFunc(1234)))"""
> ?

I just answered Chuck's question :-) Did not carefully read the whole discussion and did not know about all possible cases.
Any way, nested parenthesis can not be expressed with a regular expression if my memory works well. - Sasa > It seems you can: > >>> import re > >>> >>> re.findall("""\$\(\w* \w*="[^"]*"\)""", a) > > ['$(foo bar=")")'] > yes, but what about: """ $(functionName( $anotherFunc(1234)))""" ? At 08:10 AM 5/2/2001 -0700, Mike Orr wrote: >What's the difference between webware-discuss and webware-devel in terms >of what to post where? [snip] It's a good question and I think the answer is still being formed as we go. One thing I notice is that detailed discussions of the implementation of some feature (such as templates) usually result in many messages. On those days that we have a lot of these messages, between 1 and 3 people drop off of webware-discuss (I get notified when that happens). In general, I think of "webware-discuss" as ordinary users and "webware-devel" as people who are "lifting up the hood" and tinkering with the engine. I think ordinary users are concerned with installation, usage, future directions, posting feedback on API or behavior, etc. I hope that helps in some fashion. -Chuck What's the difference between webware-discuss and webware-devel in terms of what to post where? It seems like since Webware is a development platform, most of the discussion is about development anyway. And when talking about generic modules which may or may not be included in Webware someday, it becomes difficult to decide whether you're "using Webware for development" or "developing Webware", because they both merge into each other. The only thing I can tell is that basic installation issues and discussion about third-party applications which will never be part of Webware belong in webware-discuss. -- -Mike (Iron) Orr, iron@... (if mail problems: mso@...) English * Esperanto * Russkiy * Deutsch * Espan~ol It seems you can: >>> import re >>>>> re.findall("""\$\(\w* \w*="[^"]*"\)""", a) ['$(foo bar=")")'] >>> - Sasa > -----Original Message----- > From: webware-discuss-admin@... 
> [mailto:webware-discuss-admin@...]On Behalf Of Chuck > Esterbrook > Sent: 30. apríl 2001 15:38 > To: webware-discuss@... > Subject: [Webware-discuss] Er, parsing? > > > How do you parse something like: > > $(foo bar=")") > > with a regex? My impression is that you can't. > > > _______________________________________________ > Webware-discuss mailing list > Webware-discuss@... > > Yesterday I installed 0.5.1 rc#3 and migrated my app to it. A few problems I could workaround have disappeared. Fine! But I saw that I still have problems with the OneShot.cgi on NT4/IIS4 (OneShot.exe on IIS4, made with py2exe 0.2.6). Until now I always have killed the AppServer and restarted it, if I did changes in underlying modules of my app. But to be honest, I long for OneShot.cgi, because it would make life a bit easier. I tried it on 2 different machines. The problem seems to lie within PlugIn.py (around line 71): # Make a directory for it in Cache/ cacheDir = os.path.join(os.path.dirname(__file__),'Cache', self._name) if not os.path.exists(cacheDir): os.mkdir(cacheDir) __file__ has the value "<WebKit\PlugIn from archive>" so cacheDir becomes "<WebKit\Cache\COMKit" which raises an exception, which is not valid. Or did I miss something? Best regards Franz Geiger > -----Ursprüngliche Nachricht----- > Von: webware-discuss-admin@... > [mailto:webware-discuss-admin@...]Im Auftrag von Chuck > Esterbrook > Gesendet: Mittwoch, 02. Mai 2001 01:22 > An: webware-discuss@...; > webware-devel@... > Betreff: [Webware-discuss] Cut final 0.5.1? > > > Any objections to cutting the final release of 0.5.1 sometime on > Wednesday > and then announcing it to the world? > > The only outstanding recent problems I'm aware of: > > - OS/2 has URL path issues > > - Someone had issues setting multiple cookies > > > I believe in both cases, the ball has been in the user's court to > try out a > patch or send back additional info. 
> > > -Chuck > > > _______________________________________________ > Webware-discuss mailing list > Webware-discuss@... > > At 08:32 PM 4/30/2001 -0500, Ian Bicking wrote: >Anyway, I have two classes, one which is the container for another: > >Portfolio contains Pieces, (as in .pieces()), and Pieces have a >reference to portfolio (as in .portfolio()). > >If I move a piece between portfolios, the .pieces() will be >inaccurate, since it seems to cache these backreferences as >self._pieces. Yeah, I never dealt with moving objects between 2 lists before. e.g., not in the test suite. Seems like we need a removeFromPieces() >Are these objects unique? Like, if I fetch the same object from the >store twice, will they be equal (i.e., portfolio1 is portfolio2)? Yes. This is called "uniquing" and MK does it. >If so, should the generated code be such that: > >class Portfolio: > def _removePiece(self, piece): > """Semi-private because it must be called along with > setPortfolio or self._pieces will be incorrect""" > if self._pieces is not None: > self._pieces.remove(piece) > >class Piece: > def setPortfolio(self, portfolio): > if self._portfolio is not None: > self._portfolio._removePiece(self) > self._portfolio = portfolio > >Except for all the asserts, and (I guess?) something where you can use >objectRefs instead of the actual objects...? Maybe there needs to be >a weak-fetch from the store, so that Piece can fetch its actual >portfolio if it's been instantiated (in which case it may have an >invalid cache), but if it hasn't then it doesn't matter. I'd have to think about it some more. Geoff and I, in private discussions, reworked the design for MKs list support. We need to enhance that with moving an object between lists and then post it as a WEP for review. My feeling is it needs to be tackled as a mini-project so that we nail down all the semantics simultaneously and back them up with regression tests. 
>And on a slightly related note -- if when I edit a Piece I run this >method: > > def changePiece(self): > piece = self._piece > piece.setTitle(self.field('title')) > piece.setName(self.field('name')) > piece.setDescription(self.field('description')) > piece.setDisplayOrder(self.field('displayOrder')) > piece.setPortfolio(self.store().fetchObject('Portfolio', > self.field('portfolio'))) > self.store().saveChanges() > self.write('Changes saved.<p>\n') > >And I get this error from MySQLdb: > Warning: Rows matched: 1 Changed: 1 Warnings: 1 > >I don't know why MySQLdb even bothers giving an error message this >lame (well Warning/error message)... but any idea what the warning is >about, or how I can fix it? I'm not sure. I spent several hours earlier trying to figure out how to get MySQL to give me details for a warning. I read a bunch of docs, joined the mailing list, posted my question, etc. I could never find out. BTW You could rework some of your code like so: for name in 'title name description displayOrder'.split(): piece.setAttr(name, self.field(name)) Make sure you are using Webware CVS for this. Obviously this only works for "simple" attributes. setAttr() will actually call your setFoo(), setBar(), etc. (Previous setAttr() was called _set()) -Chuck
http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200105&viewday=2
Qt Quick Controls provides a set of controls that can be used to build complete interfaces in Qt Quick. The module was introduced in Qt 5.7. Qt Quick Controls comes with a selection of customizable styles. See Styling Qt Quick Controls for more details. The QML types can be imported into your application using the following import statement in your .qml file:

  import QtQuick.Controls 2.15

When building from source, ensure that the Qt Graphical Effects module is also built, as Qt Quick Controls requires it. The Qt Image Formats module is recommended, but not required. It provides support for the .webp format used by the Imagine style. Qt Quick Controls 2.0 was introduced in Qt 5.7. Subsequent minor Qt releases increment the import version of the Qt Quick Controls modules by one, until Qt 5.12, where the import versions match Qt's minor version. The experimental Qt Labs modules use import version 1.0.

© The Qt Company Ltd. Licensed under the GNU Free Documentation License, Version 1.3.
https://docs.w3cub.com/qt~5.15/qtquickcontrols-index
IJulia

An IJulia frontend for Sublime Text 3

Sublime-IJulia

Successor to the Sublime-Julia project, now based on the IJulia backend. Julia is a new, open-source technical computing language built for speed and simplicity. The IJulia project built an IPython kernel for Julia to provide the typical IPython frontend-backend functionality such as the popular notebook, qtconsole, and regular terminal. Sublime-IJulia builds on these efforts by providing a frontend to the IJulia backend kernel within the popular text editor, Sublime Text. All within Sublime, a user can start up an IJulia frontend in a Sublime view and interact with the kernel. This allows for rapid code development through REPL testing and debugging without ever having to leave our favorite editor. This project is still in beta, so please be patient and open issues liberally.

ZMQ/IJulia Installation

Before installing the Sublime-IJulia package, you must first ensure you have added and successfully built the ZMQ Julia package from within Julia itself. You will also need the IJulia package to be added, though not necessarily successfully built (the reason is that IJulia requires IPython to be installed, while Sublime-IJulia does not require IPython itself to be installed). Simply adding the IJulia package will ensure the needed files are installed, whether or not it can be used with IPython (though the use of IPython notebooks is highly encouraged for code presentation!). These steps can be done by running the following from within Julia:

Pkg.add("ZMQ")    # Needs to install and build successfully
Pkg.add("IJulia") # Needs to install, but not necessarily build successfully

See the IJulia page for additional help.

Sublime-IJulia Installation

The Sublime-IJulia project requires Sublime Text 3 with build version > 3019. You can get the latest version here.
It also requires a version of the ZMQ library >= 2.0 (the default installation through Julia brings in a working version, so this is only an issue when trying to use system ZMQ libraries).

- Within Sublime Text 3, install the Package Control package from here
- With Package Control successfully installed (you may need to restart Sublime), run Ctrl+Shift+p (Cmd+Shift+p on OSX) to open the Sublime command palette and start typing "Install Package", then select "Package Control: Install Package".
- From the list of packages that are then shown, start typing "IJulia" and then select the "IJulia" package. This installs the IJulia package into your Sublime packages directory.
- From the menu bar, open Preferences => Package Settings => Sublime-IJulia => Settings - Default
- Then, from the menu bar, open Preferences => Package Settings => Sublime-IJulia => Settings - User
- Copy everything from the Settings - Default file into the Settings - User file
- Now, in the Settings - User file, scroll down to your platform; you should typically not have to change the zmq_shared_library field value. These are the expected standard installation locations when installing/building the ZMQ package from within julia, so they should work out of the box. Note, however, for Linux or OSX, that if the ZMQ library was already installed via apt/yum/homebrew, the default path will probably not be correct. The easiest way to locate your ZMQ library (on any platform) is to run the following commands from within julia: using ZMQ; ZMQ.zmq. Sometimes, all that is returned is libzmq, which obviously isn't that helpful. In any case, if you do a file system search for libzmq, you should be able to locate the absolute path to the library, which is needed for your zmq_shared_library settings value. PLEASE NOTE: When setting your path, you MUST specify the library extension as well.
/path/to/zmq/libzmq.so or /path/to/zmq/libzmq.so.3 for Linux, /path/to/zmq/libzmq.dylib for OSX, and /path/to/zmq/libzmq.dll on Windows. This is by far the toughest step to manage because of the cross-platform issues and non-standard installation locations, but be willing to try a few different paths and restart Sublime in between each attempt. Another tip is to specify the absolute path to the ZMQ library (instead of using ~/... or relative paths). If you're still having issues, please open an issue as mentioned above.
- Now change the value of the "julia": "julia" field to the absolute path to your julia executable. If julia is on your path, this may not involve changing anything (i.e. if you can type julia or julia-readline from the command line from any directory). Otherwise put the full path to your julia executable (i.e. /usr/home/julia/usr/bin/julia).
- With the above two values properly set in the settings file (you should not have to change the "ijulia_kernel" value), you can now run Ctrl+Shift+p to open the command palette, start typing "Open IJulia", and select "Sublime-IJulia: Open New IJulia Console". If all goes well, a new view should open up in Sublime, titled *IJulia 0*, and the julia banner should display shortly (2-5 seconds). Success!
- If an error message pops up, it's probably because Sublime can't find your ZMQ library or julia executable; return to step 7/8. If ***Kernel Died*** shows up in a new view, there's been some kind of error in your julia command, so return to step 7. In any case, please go back over the steps to ensure everything was followed, restart Sublime, and if the results are the same, please open an issue here and I'm more than happy to help troubleshoot the installation.

Using Sublime-IJulia

- Commands can be entered directly in the IJulia console view, pressing shift+enter to execute.
- A newline can be entered without executing the command by typing Enter for multi-line commands.
- Pressing the up and down arrows in the console view will navigate through command history (if any); escape will clear the current command
- All other regular Sublime features should work as normal in the console view (multiple cursors, macros, etc.)

From a julia file (extension .jl), you also have the ability to "send" code to the console to be evaluated.
* Shift+enter without any code selected will send the current line to the console to be executed
* Shift+enter with code selected will send the selected text to the console to be executed
* Ctrl+shift+enter will send the entire file's contents to the console to be executed

Other Sublime-IJulia Package Features

- A suggestion when working with .jl julia files is to have your Sublime tab settings set to spaces, since this is the preferred style. You can do this by having "translate_tabs_to_spaces": true, in your Preferences => Settings – User file.
- Auto-completion: Most of the stdlib julia functions can be auto-completed from the console and julia (.jl) files. Just start typing a function name and press tab to auto-complete with the expected arguments.
- Syntax: Syntax highlighting is available for julia files (.jl); you can set it manually by clicking in the lower right-hand side of Sublime (there will be "Text" or some other language displayed) and selecting "Julia" from the list
- You can also automatically apply the Julia syntax by typing Ctrl+Shift+p, start typing "Apply Julia syntax", and select that command. This will automatically apply the Julia syntax to all open (.jl) files.
- Build: A basic build file is provided, but will probably have to be manually tweaked to provide the path to your julia executable. This can be done by opening your Sublime packages folder, going to the IJulia directory and opening the julia-build.sublime-build file. Then you just have to change the "julia" in "cmd": ["julia", "$file"], to the same value you set in your settings file for julia_command (i.e.
absolute path to julia, etc.).
- Multiple julia commands can be set in the settings file. This is done by adding another "command" object to the "commands" array of your platform. If you were on Windows, your entire platform key-value would be:

    "windows": {
        "zmq_shared_library": "~/.julia/v0.3/ZMQ/deps/usr/lib/libzmq.dll",
        "commands": [
            {
                "command_name": "default",
                "julia": "julia-readline.exe",
                "julia_args": "",
                "ijulia_kernel": "~/.julia/v0.3/IJulia/src/kernel.jl"
            },
            {
                "command_name": "myfork",
                "julia": "C:/Users/karbarcca/myfork/usr/bin/julia-readline.exe",
                "julia_args": "-p 4",
                "ijulia_kernel": "~/.julia/v0.3/IJulia/src/kernel.jl"
            }
        ]
    }

Note the comma , after the original default command. Another command is then copied down. The "julia": field is changed to a separate julia executable (in this case, a separate branch of julia, but it could also be a past version of julia or whatever). There are also some arguments passed to the julia executable by "julia_args": "-p 4", meaning to start the julia executable with 4 additional processes. With the above commands set, when I go to open a console with ctrl+shift+p and type "Open IJulia", a second popup will show me a list of default and myfork from which I can choose which julia I want to launch.

Cheers!
-Jacob Quinn
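As an aside to the library-location step above ("do a file system search for libzmq"), that search can be done with find. The snippet below fabricates a dummy library file in a throwaway directory so it is self-contained and runs anywhere; on a real system you would point find at ~/.julia, /usr/lib, /usr/local/lib, or wherever your package manager installs libraries.

```shell
# Create a throwaway directory with a dummy library file so this example
# is self-contained; substitute your real library directories in practice.
tmp=$(mktemp -d)
touch "$tmp/libzmq.so.3"

# Search by name; the same pattern matches libzmq.dylib / libzmq.dll too.
found=$(find "$tmp" -name 'libzmq*')
echo "$found"

rm -rf "$tmp"
```

Remember that whatever absolute path this turns up, including the extension, is what goes into the zmq_shared_library settings value.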
https://packagecontrol.io/packages/IJulia
CityDesk comment in JOS about "cleaner" HTML

Joel comments on CityDesk: "In the next release of CityDesk we're doing really heroic amounts of work ..."

tk
Tuesday, July 08, 2003

Good tea leaf reading :) I finally got it mostly working. It's really unbelievable how much work it took, and it's only 99% done as we speak. Here's the story.

I had the new source preservation feature working nicely, using the DHTML editor (from IE) in "source preserving" mode. In CD 1.0, editing templates and HTML files is done in source preserving mode while editing articles is done in non-source-preserving mode. Unfortunately we then discovered that the DHTML editor had a bug when working in "source preserving" mode: if you wrote

    This is a sentence.

then selected the word sentence and made it bold, you would get the following HTML:

    This is a <strong>sentence</strong> .

Looks good, right? Look closer. There's an unwanted space before the dot. This wee tiny bug was completely non-work-aroundable. It's a bug in IE and we had no way to fix it. Instead, I wrote a complete HTML source preservation system basically from scratch, so that your indenting and line breaks are preserved in the WYSIWYG editor. Basically, before we shove your HTML into IE for editing, we find all the whitespace and replace it with a custom tag that looks like this: <cd:preserve xxx>, where xxx is an encoded version of the whitespace that used to be there; for example, CLCL means two crlf's and a tab. This custom tag is completely invisible in the editor but IE does preserve it for us. When the HTML comes back from IE we search for those things and replace them with the original whitespace. Then I incorporated the source code from TidyLib (a.k.a.
HTML Tidy) to clean up the code that comes back from IE in non-source-preserving mode and make it a little less offensive:

* all tags will be closed
* everything will be cleaned up into valid xhtml
* attributes will be quoted
* unclosed tags are done the xhtml way: <br> becomes <br />
* everything is lowercased
* etc.

All the xhtml stuff, now built in. Effectively this gave us an xhtml-compliant wysiwyg editor, but there was more work to do. The next bug I wanted to work around was the fact that IE tends to sprinkle extra &nbsp;s in your code. Why? Because if you type foo, space, space, bar, IE wants to preserve both spaces so it has to convert the first one to an &nbsp;. Then if you use the cursor to delete the SECOND space you are left with foo&nbsp;bar. IE doesn't think this is wrong. I do. So now there is an incredible amount of logic in CityDesk to convert these &nbsp;s back to regular spaces. It's complicated, because any *real* &nbsp;'s -- either the ones you put there yourself, or ones that are necessary to get several blank spaces in a row -- must not be tampered with. Hard.

While I was at it, there was another problem. Look at this:

    <ul>
    {$ forEach ... $}
    <li>{$ x.headline $}</li>
    </ul>

The forEach and next statements are located in a place where it's illegal to have any text. The only thing legal in a <ul> is a <li>. So IE was moving them around, either losing them or putting them before the <ul> or at the end of the page or some such travesty. To fix that problem we had to figure out all the places you might put CityScript that the IE HTML editor wouldn't be happy about, and protect it for you. Places like:

* in a <ul> outside a <li>
* in a <table> outside a <tr>
* in a <tr> outside a <td>

and a few others. In all those cases we now "protect" any CityScript we find in those places by wrapping it in a temporary tag which we strip out on the way back. It's absurdly complicated but the good news is It Just Works.
All this, because I don't think it's OK for CityDesk to be inserting extra spaces in front of your .'s where you didn't want them...!

Joel Spolsky
Friday, July 11, 2003

Joel - if people don't tell you this enough, YOU RAWK. Kudos to everyone @ Fog Creek for putting out an excellent product and continuing to provide top notch support & development.
Friday, July 11, 2003

Thanks Joel! You say you incorporated the source code from TidyLib - HTML Tidy. Does this mean the garbage html you get when pasting from MS Word will be cleaned up too?
Paul Iliano
Saturday, July 12, 2003

Hip, Hip, Hooray!!!
David Burch
Saturday, July 12, 2003

The improved editor sounds great. Joel said: "unclosed tags are done the xhtml way: <br> becomes <br />" Will that validate as HTML 4? If we had been creating valid HTML 4 transitional with CityDesk v1, are we looking at switching to XHTML 1 transitional if/when we upgrade to CD v2 (if we want to keep validating)?
Pete Riis
Sunday, July 13, 2003

<br> </br> won't validate as HTML 4. I imagine you'll have to change the html in your templates to become xhtml compliant, but that shouldn't be a big job. My only concern is all the articles that CityDesk 1.x has already created. The thought of opening every single one and saving them to trigger the html tidying could be quite a repetitive task... Still, it's all good fun!
John C
Monday, July 14, 2003

It would be great if we had the option of staying with 4.01 transitional. I seem to remember the problem with XHTML was with unclosed <p> tags as a result of CityScript loops, which are OK in HTML 4 but not in XHTML.
Pete Riis
Monday, July 14, 2003

It would be great if CD read the doctype and used html if the doctype didn't say xhtml.
Joel Goldstick
Monday, July 14, 2003

In general, our approach is going to be xhtml only. We don't have the resources to produce both HTML 4.0 and xhtml 1.0 valid code, and the more popular choice for people who like to produce validating web sites has long been xhtml.
Joel Spolsky
Monday, July 14, 2003

I can see that. Now where did I put that doctype list?....... oh, here it is:
Joel Goldstick
Tuesday, July 15, 2003

Oh, my goodness. I really can feel the pain, since I've had my own quota of pain working with DHTMLEdit and now MSHTML.
Leonardo Herrera
Wednesday, July 16, 2003

Are you also converting & in a link to &amp;?
Phillip Harrington
Saturday, July 19, 2003

Speaking of the DHTML Control and MSHTML, when the hell is Microsoft going to get around to rewriting (or replacing) this component -- the guts of IE -- so it doesn't produce such god-awful code? At this point in the evolution of IE and W3C standards I'd say it's about time... [PS: Would this problem fall within the scope of the System.CodeDom and System.CodeDom.Compiler namespaces of the .NET Framework? Perhaps that's where they're headed -- a fully managed (X)HTML browsing and editing implementation in Longhorn...]
Chris Weed
Wednesday, November 26, 2003

FYI -- an XHTML 1.1 compliant ActiveX editing component: There is a freeware 'Lite' version and a 'Pro' version.
Chris Weed
Sunday, November 30, 2003
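[Editor's note] The whitespace round-trip Joel describes (encode runs of whitespace into an inert custom tag before handing the HTML to the editor, then decode on the way back) can be sketched in a few lines of Python. The tag attribute and the N/T/S codes below are made up for illustration; the thread only hints at CityDesk's real encoding (its C/L codes).

```python
import re

# Hypothetical whitespace codes (CityDesk's real scheme isn't shown in full):
# N = newline, T = tab, S = space.
ENC = {"\n": "N", "\t": "T", " ": "S"}
DEC = {v: k for k, v in ENC.items()}

def protect(html):
    # Replace whitespace runs that sit between tags -- where layout lives --
    # with an inert custom tag a WYSIWYG editor will carry through unchanged.
    def enc(m):
        return '<cd:preserve c="%s">' % "".join(ENC[c] for c in m.group())
    return re.sub(r"(?<=>)[ \t\n]+(?=<)", enc, html)

def restore(html):
    # Decode the custom tags back into the original whitespace.
    def dec(m):
        return "".join(DEC[c] for c in m.group(1))
    return re.sub(r'<cd:preserve c="([NTS]*)">', dec, html)

src = "<ul>\n\t<li>{$ x.headline $}</li>\n</ul>"
roundtrip = restore(protect(src))
```

The point of the round trip is that restore(protect(src)) gives back src unchanged: indenting and line breaks survive a pass through an editor that would otherwise normalize them away.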
http://discuss.fogcreek.com/CityDesk/default.asp?cmd=show&ixPost=7870&ixReplies=15
I have tried every example I could find on the web for setting the C++ standard (to C++20, or at least C++17) in a CMake project in VS 2019 (v16.0.3), yet no matter what I try, ReSharper C++ seems to consider the standard to remain at C++14. I know that for standard VS projects, setting the compiler option /std:c++latest would do the trick, but this is a CMake "open folder" scenario and not a VS "project". My simple test is to code up a C++17-style nested namespace (i.e. namespace x::y), resulting in ReSharper highlighting the ::y part and stating that it's a C++17 feature. What I really want is to set the standard to C++20 (2a) so I can get access to the new ReSharper support for the forthcoming Concepts TS (or working draft); however, at the very least, C++17 is a must. Is there a proper method for setting the C++ standard in CMake "open folder" scenarios such that ReSharper accepts the features of that C++ standard?

Hello! The modern way to set the target language standard in CMake is to use target_compile_features (see the CMake documentation for more details). If what you are saying is you have a CMake project that compiles but R++ shows red code inside it, please share the project with us and we'll investigate. R++ 2019.1 does not support concepts yet, and neither does MSVC. Hopefully we'll add concepts support in 2019.2. Thanks!
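For completeness, here is a minimal CMakeLists.txt sketch of the two usual approaches (the project and target names are illustrative; cxx_std_20 requires CMake >= 3.12 and a compiler that actually supports the standard):

```cmake
cmake_minimum_required(VERSION 3.12)
project(demo CXX)

add_executable(app main.cpp)

# Per-target (preferred): request the C++17 feature set for this target.
target_compile_features(app PUBLIC cxx_std_17)

# Or globally, set before any targets are defined:
# set(CMAKE_CXX_STANDARD 17)
# set(CMAKE_CXX_STANDARD_REQUIRED ON)
```

Either form should cause CMake to pass the appropriate standard flag to MSVC, which is what tooling layered on top of the compiler command line keys off.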
https://resharper-support.jetbrains.com/hc/en-us/community/posts/360003534559-Unable-to-set-C-standard-in-a-Visual-Studio-2019-cmake-open-folder-style-project
MySQLdb compiled -- Import issue
Discussion in 'Python' started by Kurian Thayil, Mar 25,
http://www.thecodingforums.com/threads/mysqldb-compiled-import-issue.718685/
Hi Suresh,

I am writing a proposal for the monitoring tool. The monitoring tool is based on a pub-sub model (ws-messenger). While writing the proposal, I have to back it with technical detail that tells how we can achieve our purpose. As this monitoring tool is supposed to be web based, we are thinking along the lines of developing it in JavaScript. I was looking into JavaScript libraries that can be used with ws-messenger in the monitoring module. Please correct me if I am wrong. I came across some of these libraries:

- jQuery custom events<>
- AmplifyJS Pub/Sub <>
- PubSubJS <>
- js-signals <>

Please tell me, am I thinking in the right direction?

Regards
Vijayendra

On Wed, May 1, 2013 at 5:30 PM, Suresh Marru <smarru@apache.org> wrote: > Hi Shameera, > > This is great, I appreciate you sharing it, I realize this is still > working document, but I want other students to start seeing it and model > their proposals in a similar way. > > Airavata Mentors, > > Please provide feedback directly on the melange site and uncheck the > "private" box when you comment. > > Suresh > > On May 1, 2013, at 7:52 AM, Shameera Rathnayaka <shameerainfo@gmail.com> > wrote: > > > Hi Suresh and All, > > > > Of course I am very much happy to share my proposal with everybody, > > actually i was going to update this thread with the melange link in few > > hours once i have done writing all the sections in the proposal. I haven't > > yet added the milestone plan into it and now working on it. > > > > The sub area i am going to work from the Master project is ' Implementing > > a JSON interface to Airavata Client side and Registry component'. Here is > > the link > > > > > . > > > > > > Please note that i haven't completed everything in this and still doing > > modifications. Therefore proposal content may be changed a bit, need to add > > more technical details of the approach which explains it well.
> > > > I would like to know the feedback from all of you regarding the proposal > > and will be modifying it if there is anything to be done. Also please > > contact me if you need any help and i am very much willing to share my > > thoughts with all. > > > > Thanks! > > Shameera > > > > > > > > On Wed, May 1, 2013 at 4:51 PM, Suresh Marru <smarru@apache.org> wrote: > > > >> Hi Shameera, > >> > >> Excellent proposal, great job. Would you mind to make your proposal > >> public and post the link here? Your proposal should help others to look > at > >> it and learn from. > >> > >> Again I emphasize to all students, please don't feel you will be > competing > >> with each others. If all of you write good proposals, there is a good > >> chance all of you will be selected. But without a good proposal, we > cannot > >> > >> Suresh > >> > >> > >> On Apr 23, 2013, at 1:22 PM, Shameera Rathnayaka < > shameerainfo@gmail.com> > >> wrote: > >> > >>> Hi, > >>> > >>> Yes it is not easy to solve all problems, But defining our own standard > >> or > >>> adhere to any standard > >>> provided by third party library will solve the problem to some extend. > >>> > >>> Here i see two possible approaches, > >>> > >>> 1. Use existing third party library(we can find which is best) adhere > to > >> it > >>> standard and see how we change the > >>> backend to be inline with it. > >>> > >>> 2. Use our own convention with help of XMLSchema (The way i suggest). > >>> > >>> As Suresh mentioned we can do a POC with both approaches to compare > >>> performance > >>> and changes need to be done in server side. Then select the best one. > >>> > >>> Another question was, can we works with graph data in JSON format. > >>> There are few JS graph framworks[1] which provide that functionality. > >>> we can use one of them to show airavata monitoring data as graphs > >>> > >>> Thanks, > >>> Shameera. 
> >>> > >>> [1] jqPlot <> , D3 <> , > >>> Processing.js <> , Sencha > >>> Charts<> > >>> > >>> > >>> On Tue, Apr 23, 2013 at 5:44 PM, Suresh Marru <smarru@apache.org> > wrote: > >>> > >>>> Hi Vijeyandra, > >>>> > >>>> Airavata Messaging is based on a pub-sub model and the events > themselves > >>>> are xml (WS-Eventing [1]). > >>>> > >>>> The Messenger paper [2] should give you more information. > >>>> > >>>> Hi All (Especially those at WS02): > >>>> > >>>> Here is an old effort from a Morotuwa undergrad project, you may want > to > >>>> read through these papers and chat with the authors to get > experiences: > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >> > > >>>> > >>>> Suresh > >>>> [1] - > >>>> [2] - > >>>> > >> > > >>>> > >>>> On Apr 23, 2013, at 6:20 AM, Vijayendra Grampurohit < > >>>> vijayendra.sdm@gmail.com> wrote: > >>>> > >>>>> Hi Suresh > >>>>> > >>>>> I wanted to know more about the monitoring tool . > >>>>> Currently from where does the monitoring tool gets data . Is it from > >>>>> workflow interpreter ? or Is it from the WS Messenger ( that might > >>>> continuously > >>>>> send messages to monitoring tool as to tell how much is the progress > >>>>> and what are the variables getting changed) ? > >>>>> > >>>>> Again the how is the data being exchanged. I guess it must be xml ? > >>>>> It must be one way data exchange . I mean the data is TO the > >>>>> monitoring module. > >>>>> Then monitoring Tool is sending back this > >>>>> data to Xbaya for displaying to the user ? Please correct me if I am > >>>> wrong > >>>>> > >>>>> I have downloaded the source code from the trunk . can you please > point > >>>>> me which part of code should I be code at for this module. 
> >>>>> > >>>>> Regards > >>>>> Vijayendra > >>>>> > >>>>> > >>>>> On Tue, Apr 23, 2013 at 3:16 PM, Vijayendra Grampurohit < > >>>> vijayendra.sdm@gmail.com> wrote: > >>>>> Hi > >>>>> > >>>>> What i am suggesting is, we send the JSON message directly to > Airavata > >>>>> Backend(or Registry) > >>>>> When the message gets build after the transport phase, convert JSON > >>>>> to SOAP(XML). > >>>>> From that point message will treated as SOAP message. > >>>>> > >>>>> If we look at the JSON <--> XML conversion there are set of third > party > >>>>> libraries we > >>>>> can use for. But before selecting a one we need to think about > problems > >>>>> having > >>>>> > >>>>> with JSON <--> XML and how these libraries handle those issues. > Because > >>>> we > >>>>> need a robust > >>>>> way to do this conversions. > >>>>> > >>>>> > >>>>> > >>>>> Shameera what you are suggesting is sending the JSON message directly > >> to > >>>> Registry. > >>>>> when the message gets built after the transport phase , convert it to > >>>> SOAP . > >>>>> > >>>>> If you are suggesting Registry will have JSON data instead of WSDL , > >>>> Then this might > >>>>> complicate the things for us . > >>>>> The workflow interpreter needs wsdl(xml) to interpret the workflows > and > >>>> for other details . > >>>>> Which means we might again have to do some changes with workflow > >>>> interpretor . > >>>>> Yesterday from what I heard in discussion is that , they do not want > to > >>>> mess with workflow > >>>>> interpreter atleast for GSOC projects. > >>>>> > >>>>> What I want to suggest is , why carry the JSON data till Regisrty . > >>>> Build a interface > >>>>> before (Apache server API) where we can do the necessary conversion > >>>> (JSON to SOAP). > >>>>> In this way we can avoid messing up with Airavata server as a whole. > >>>>> Client ( using a we browser) is interacting with JSON (web service) > but > >>>> the Apache server > >>>>> is interacting with SOAP. 
> >>>>> > >>>>> > >>>>> > >>>>> Secondly yesterday Suresh was speaking about validating the > connections > >>>> of the workflow. > >>>>> for example , the workflow is expecting a file as input > >>>>> but user is giving a sting or int . > >>>>> > >>>>> Here what I suggest is , while creating wsdl in the registry for a > >>>> particular > >>>>> workflow , we can add extra information in the form of > >>>>> annotation as the kind of input/ output the workflow is accepting. > >>>>> Then we will be able to check these against users entry during > >> execution. > >>>>> Please correct me if I am wrong. > >>>>> > >>>>> Regards > >>>>> Vijayendra > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> On Tue, Apr 23, 2013 at 1:13 PM, Subho Banerjee <subs.zero@gmail.com > > > >>>> wrote: > >>>>> Well exactly, as long as you can define standard way of > communication. > >>>> That > >>>>> is, you can define in advance what should be a string, array and what > >>>>> should be a integer etc. We have no problem. > >>>>> > >>>>> So, when you look at problems, with JSON <-> XML or the other way > >> round, > >>>>> they talk of the very general case (where you no nothing about the > data > >>>> you > >>>>> are converting other than it is valid XML/JSON). There are a myriad > of > >>>>> problems in that case, which you pointed out. > >>>>> > >>>>> But when there is standard, there is only one way of doing things, > and > >>>> not > >>>>> several. I think that is the way forward. So what I am proposing is > >> maybe > >>>>> we all discuss and define this standard within the first week of GSoC > >>>>> starting and then actually move into coding. So as long as we work > with > >>>> the > >>>>> presumption that this will be done, we really dont have to worry a > lot > >>>>> about this. > >>>>> > >>>>> Cheers, > >>>>> Subho. 
> >>>>> > >>>>> > >>>>> On Tue, Apr 23, 2013 at 11:52 AM, Shameera Rathnayaka < > >>>>> shameerainfo@gmail.com> wrote: > >>>>> > >>>>>> Hi, > >>>>>> > >>>>>> On Tue, Apr 23, 2013 at 2:25 AM, Subho Banerjee < > subs.zero@gmail.com> > >>>>>> wrote: > >>>>>> > >>>>>>> Some of these problems are very specific to what the XML is > >>>>>> representing, > >>>>>>> it might not be an actual problem in Airavata, > >>>>>>> maybe some one more experienced with the codebase can point this > out. > >>>>>>> > >>>>>> > >>>>>> All issues pointed out in the paper is not directly valid to our > >>>>>> conversion, I didn't list the issues actually need to address in > this > >>>> case > >>>>>> because thought it is worth to read that introduction part which > >>>> explain > >>>>>> the all the issues we have with this conversion and give us a solid > >>>>>> background of that. > >>>>>> > >>>>>>> 1. Anonymous values, Arrays, Implicit Typing, Character sets -- I > >>>>>> really > >>>>>>> dont see these as problems, as long as you can agree that all > >>>> parts of > >>>>>>> airavata will treat the JSON in a standard (probably we have to > >>>> define > >>>>>>> this) way. > >>>>>>> > >>>>>> > >>>>>> > >>>>>> The issue with JSON array only comes when we try to convert XML to > >>>> JSON not > >>>>>> the other way. If we map with JSON, inputparameters and > >>>> outputparameters in > >>>>>> the ServiceDescription.xsd will map with JSON Arrays. Therefore we > >>>> need to > >>>>>> solve this issue. > >>>>>> > >>>>>> JSON XML JSON > >>>>>> {"inputs":["test"]} --> <inputs>test<inputs> --> > {"inputs":["test"]} > >>>> // > >>>>>> correct one > >>>>>> --> {"inputs":"test"} // incorrect one > >>>>>> > >>>>>> 2. Namespaces, Processing Instructions -- Is this required? > >>>>>> > >>>>>>> Are separate namespaces used in Airavata? 
Only place I can see > >>>> this > >>>>>>> being > >>>>>>> used is probably in the WSDL, but if we can agree on another way > >>>>>>> of communicating registered applications' I/O parameters to the > >>>> front > >>>>>>> end > >>>>>>> (JSON based), then maybe we can work around this (minor) problem. > >>>> Are > >>>>>>> custom processing instructions to the Xbaya XML parse even used? > >>>>>>> 3. Attributes -- Again, this can be fixed easily > >>>>>>> > >>>>>> > >>>>>> Yes,attributes convertion will not be a big issues we can solve it. > As > >>>>>> Lahiru mentioned in Hangout session namesapce handling is not a big > >>>> issue > >>>>>> with Airavata. > >>>>>> > >>>>>> > >>>>>> > >>>>>>> > >>>>>>> <array name="abc"> > >>>>>>> <element>1</element> > >>>>>>> <element>2</element> > >>>>>>> <element>3</element> > >>>>>>> <element>4</element> > >>>>>>> </array> > >>>>>>> > >>>>>>> Can become > >>>>>>> > >>>>>>> { > >>>>>>> > >>>>>>> abc : ['1', '2', '3', '4'] > >>>>>>> > >>>>>>> } > >>>>>>> > >>>>>> > >>>>>> With this example it show us we need to change the XML message > format > >>>> of > >>>>>> server side, which require to change the all schemas, If we are > going > >>>> to > >>>>>> change the schemas then we need to change the way it process it in > >>>> Ariavara > >>>>>> core. We have dropped our initial major requirement, which is keep > the > >>>>>> Airavata Server side as it is. > >>>>>> > >>>>>> with this conversion we only deal with json strings, yes we can send > >>>> JSON > >>>>>> request with other formats supported by JSON like boolen, null, > Number > >>>>>> etc.. But there is no way to get the same JSON from XML as XML only > >>>> deal > >>>>>> only with Strings. I think it is good if we can consume a this > >> features > >>>>>> with JSON. 
> >>>>>> > >>>>>> let say i need to send a integer or float to the server using JSON > >> then > >>>>>> proper way is to send {"<name>":123.45} this will works fine but > >>>> problem is > >>>>>> how we get the same output ? > >>>>>> > >>>>>> Thanks, > >>>>>> Shameera. > >>>>>> > >>>>>> > >>>>>>> > >>>>>>> > >>>>>>> Cheers, > >>>>>>> Subho. > >>>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> -- > >>>>>> Best Regards, > >>>>>> Shameera Rathnayaka. > >>>>>> > >>>>>> > >>>>> > >>>>> > >>>> > >>>> > >>> > >>> > >>> -- > >>> Best Regards, > >>> Shameera Rathnayaka. > >>> > >> > >> > > > > > > -- > > Best Regards, > > Shameera Rathnayaka. > > > >
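[Editor's note] The topic-based pub/sub pattern the thread opens with (and that the listed JavaScript libraries all implement) can be sketched in a few lines of plain JavaScript. Names here are illustrative, not from any of the libraries mentioned:

```javascript
// Minimal topic-based pub/sub: subscribers register callbacks per topic;
// publish fans a message out to every callback registered for that topic.
function createBus() {
  const topics = new Map();
  return {
    subscribe(topic, handler) {
      if (!topics.has(topic)) topics.set(topic, []);
      topics.get(topic).push(handler);
    },
    publish(topic, message) {
      (topics.get(topic) || []).forEach((handler) => handler(message));
    },
  };
}

// e.g. a monitoring view listening for hypothetical workflow status events
const bus = createBus();
const log = [];
bus.subscribe("workflow.status", (msg) => log.push(msg));
bus.publish("workflow.status", { node: "A", state: "RUNNING" });
```

A library such as PubSubJS adds niceties on top of this core idea (unsubscribe tokens, async delivery, hierarchical topics), which is why the thread is comparing them rather than rolling a custom bus.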
http://mail-archives.apache.org/mod_mbox/airavata-dev/201305.mbox/%3CCAF-nVS=YFhVrwJF28dynH6Wvfa0n-pqvTLJAsKqxORL3-8NM5A@mail.gmail.com%3E
Reinforcement Learning - Part 1

Introduction

I'm going to begin a multipart series of posts on Reinforcement Learning (RL) that roughly follow an old 1996 textbook, "Reinforcement Learning: An Introduction" by Sutton and Barto. From my research, this text still seems to be the most thorough introduction to RL available. The Barto & Sutton text is itself a great read and is fairly approachable even for beginners, but I still think it's worth breaking down even further.

It still amazes me how most of machine learning theory was established decades ago, yet we've seen a huge explosion of interest and use in just the past several years, largely due to dramatic improvements in computational power (e.g. GPUs) and the availability of massive data sets ("big data"). The first implementations of neural networks date back to the early 1950s! While really neat results have been achieved using supervised learning models (e.g. Google's DeepDream), many consider reinforcement learning to be the holy grail of machine learning. If we can build a general learning algorithm that can learn patterns and make predictions with unlabeled data, that would be a game-changer. Google DeepMind's Deep Q-learning algorithm that learned to play dozens of old Atari games with just the raw pixel data and the score is a big step in the right direction. Clearly, there is much to be done. The algorithm still struggles with long-timespan rewards (i.e. taking actions that don't result in reward for a relatively long period of time), which is why it failed to learn how to play Montezuma's Revenge and similar games. Q-learning is something that was first described in 1989, and while DeepMind's specific implementation had some novelties, it's largely the same algorithm from way back then.

In this series, I will be covering major topics and algorithms in RL mostly from the Barto & Sutton text, but I will also include more recent advances and material where appropriate.
My goal (as with all my posts) is to help those with limited mathematical backgrounds grasp the concepts and be able to translate the equations into code (I'll use Python here). As a heads-up, the code presented here will (hopefully) maximize readability and understandability, often at the expense of computational efficiency and quality. That is, my code will not be production-quality and is just for enhanced learning. My only assumptions for this series are that you're proficient with Python and Numpy and have at least some basic knowledge of linear algebra and statistics/probability.

n-armed bandit problem

We're going to build our way up from very simple RL algorithms to much more sophisticated ones that could be used to learn to play games, for example. The theory and math build on each preceding part, so I strongly recommend you follow this series in order, even though the first parts are less exciting. Let's consider a hypothetical problem where we're at a casino, in a section with some slot machines. Let's say we're at a section with 10 slot machines in a row and a sign that says "Play for free! Max payout is \$10!" Wow, not bad, right? Let's say we ask one of the employees what's going on here, because it seems too good to be true, and she says "It's really true, play as much as you want, it's free. Each slot machine is guaranteed to give you a reward between 0 and \$10. Oh, by the way, keep this on the down low, but those 10 slot machines each have a different average payout, so try to figure out which one gives out the most rewards on average and you'll be making tons of cash!" What kind of casino is this?! Who knows, but it's awesome. Oh, by the way, here's a joke: what's another name for a slot machine? ... A one-armed bandit! Get it? It's got one arm (a lever) and it generally steals your money! Huh, well I guess we could call our situation a 10-armed bandit problem, or an n-armed bandit problem more generally, where n is the number of slot machines.
Let me restate our problem more formally. We have n possible actions (here n = 10) and at each play (k) of this "game" we can choose a single lever to pull. After taking an action $a$ we will receive a reward $R_k$ (the reward at play k). Each lever has a unique probability distribution of payouts (rewards). For example, if we have 10 slot machines, slot machine #3 may give out an average reward of \$9 whereas slot machine #1 only gives out an average reward of \$4. Of course, since the reward at each play is probabilistic, it is possible that lever #1 will by chance give us a reward of \$9 on a single play. But if we play many games, we expect that on average slot machine #1 is associated with a lower reward than #3. Thus in words, our strategy should be to play a few times, choosing different levers and observing our rewards for each action. Then we want to only choose the lever with the largest observed average reward. Thus we need a concept of the expected reward for taking an action $a$ based on our previous plays; we'll call this expected reward $Q_k(a)$ mathematically. $Q_k(a)$ is a function that accepts action $a$ and returns the expected reward for that action. Formally, $$Q_k(a) = \frac{R_1 + R_2 + \dots + R_{k_a}}{k_a}$$ That is, the expected reward at play k for action $a$ is the arithmetic mean of the $k_a$ rewards we've received so far for taking action $a$. Thus our previous actions and observations influence our future actions; we might even say some of our previous actions reinforce our current and future actions. We'll come back to this later. Some keywords for this problem are exploration and exploitation. Our strategy needs to include some amount of exploitation (simply choosing the best lever based on what we know so far) and some amount of exploration (choosing random levers so we can learn more). The proper balance of exploitation and exploration will be important to maximizing our rewards.
So how can we come up with an algorithm to figure out which slot machine has the largest average payout? Well, the simplest algorithm would be to select the action $A_k$ for which $$Q_k(A_k) = \max_a Q_k(a)$$ This rule says: at the current play k, take the action whose observed average reward is the largest. In other words, we apply our reward function $Q_k(a)$ to all the possible actions and select the one that returns the maximum average reward. Since $Q_k(a)$ depends on a record of our previous actions and their associated rewards, this method will not select actions that we haven't already explored. Thus we might have previously tried lever #1 and lever #3, and noticed that lever #3 gives us a higher reward, but with this method, we'll never think to try another lever, say #6, which, unbeknownst to us, actually gives out the highest average reward. This method of simply choosing the best lever that we know of so far is called a "greedy" method. Obviously, we need to have some exploration of other levers (slot machines) going on to discover the true best action. One simple modification to our above algorithm is to change it to an $\epsilon$ (epsilon)-greedy algorithm, such that with probability $\epsilon$ we will choose an action $a$ at random, and the rest of the time (probability $1-\epsilon$) we will choose the best lever based on what we currently know from past plays. So most of the time we play greedy, but sometimes we take some risks and choose a random lever and see what happens. This will of course influence our future greedy actions. Alright, I think that's an in-depth enough discussion of the problem and how we want to try to solve it with a rudimentary RL algorithm. Let's start implementing this with Python.
    #imports, nothing to see here
    import numpy as np
    from scipy import stats
    import random
    import matplotlib.pyplot as plt
    %matplotlib inline

    n = 10
    arms = np.random.rand(n)
    eps = 0.1

Per our casino example, we will be solving a 10-armed bandit problem, hence n = 10. I've also defined a numpy array of length n filled with random floats that can be understood as probabilities. The way I've chosen to implement the reward probability distribution for each arm/lever/slot machine is this: each arm has a probability, e.g. 0.7, and the maximum reward is \$10. We set up a for loop to 10, and at each step it adds +1 to the reward if a random float is less than the arm's probability. Thus on the first iteration, it makes up a random float (e.g. 0.4). 0.4 is less than 0.7, so reward += 1. On the next iteration, it makes up another random float (e.g. 0.6) which is also less than 0.7, thus reward += 1. This continues until we complete 10 iterations and then we return the final total reward, which could be anything from 0 to 10. With an arm probability of 0.7, the average reward of doing this to infinity would be 7, but on any single play, it could be more or less.

    def reward(prob):
        reward = 0
        for i in range(10):
            if random.random() < prob:
                reward += 1
        return reward

The next function we define is our greedy strategy of choosing the best arm so far. This function will accept a memory array that stores, in a key-value sort of way, the history of all actions and their rewards. It is a $k \times 2$ matrix where each row holds an index reference to our arms array (1st element) and the reward received (2nd element). For example, if a row in our memory array is [2, 8] it means that action 2 was taken (the 3rd element in our arms array) and we received a reward of 8 for taking that action.
    #initialize memory array; has 1 row defaulted to a random action index
    #(randint's upper bound is exclusive, so this picks a valid arm index 0..n-1)
    av = np.array([np.random.randint(0, n), 0]).reshape(1, 2) #av = action-value

    #greedy method to select best arm based on memory array (historical results)
    def bestArm(a):
        bestArm = 0 #just default to 0
        bestMean = 0
        for u in a:
            avg = np.mean(a[np.where(a[:, 0] == u[0])][:, 1]) #calc mean reward for each action
            if bestMean < avg:
                bestMean = avg
                bestArm = u[0]
        return bestArm

And here is the main loop for each play. I've set it to play 500 times and display a matplotlib scatter plot of the mean reward against plays. Hopefully we'll see that the mean reward increases as we play more times.

    plt.xlabel("Plays")
    plt.ylabel("Avg Reward")
    for i in range(500):
        if random.random() > eps: #greedy arm selection
            choice = bestArm(av)
            thisAV = np.array([[choice, reward(arms[choice])]])
            av = np.concatenate((av, thisAV), axis=0)
        else: #random arm selection
            choice = np.where(arms == np.random.choice(arms))[0][0]
            thisAV = np.array([[choice, reward(arms[choice])]]) #choice, reward
            av = np.concatenate((av, thisAV), axis=0) #add to our action-value memory array
        #calculate the percentage the correct arm is chosen (you can plot this instead of reward)
        percCorrect = 100*(len(av[np.where(av[:, 0] == np.argmax(arms))])/len(av))
        #calculate the mean reward
        runningMean = np.mean(av[:, 1])
        plt.scatter(i, runningMean)

As you can see, the average reward does indeed improve after many plays. Our algorithm is learning; it is getting reinforced by previous good plays! And yet it is such a simple algorithm. I encourage you to download this notebook (scroll to the bottom) and experiment with different numbers of arms and different values for $\epsilon$. The problem we've considered here is a stationary problem, because the underlying reward probability distributions for each arm do not change over time. We certainly could consider a variant of this problem where this is not true, a non-stationary problem.
In this case, a simple modification would be to weight more recent action-value pairs more heavily than distant ones; thus if things change over time, we will be able to track them. Beyond this brief mention, we will not implement this slightly more complex variant here.

Incremental Update

In our implementation we stored each action-value (action-reward) pair in a numpy array that just kept growing after each play. As you might imagine, this is not a good use of memory or computational power. Although my goal here is not to concern myself with computational efficiency, I think it's worth making our implementation more efficient in this case, as it actually turns out to be simpler. Instead of storing each action-value pair, we will simply keep a running tab of the mean reward for each action. Thus we reduce our memory array from virtually unlimited in size (as plays increase indefinitely) to a hard limit of a 1-dimensional array of length n (n = # of arms/levers). The index of each element corresponds to an action (e.g. the 1st element corresponds to lever #1) and the value of each element is the running average reward of that action. Then whenever we take a new action and receive a new reward, we can simply update our running average using this equation: $$Q_{k+1} = Q_k + \frac{1}{k}[R_k - Q_k]$$ where $Q_k$ is the running average reward for action $a$ so far, $R_k$ is the reward we received right now for taking action $a$, and $k$ is the number of times we have taken action $a$ so far.
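To convince yourself that this incremental rule really does track the ordinary arithmetic mean before trusting it in the main loop, here is a quick standalone check (the function and variable names are mine, not from the post):

```python
import random

def incremental_mean(rewards):
    """Apply Q_{k+1} = Q_k + (1/k) * (R_k - Q_k) one reward at a time."""
    q = 0.0
    for k, r in enumerate(rewards, start=1):
        q = q + (1.0 / k) * (r - q)
    return q

# Compare against the batch mean on a random reward stream.
rewards = [random.randint(0, 10) for _ in range(1000)]
batch = sum(rewards) / len(rewards)   # ordinary arithmetic mean, O(k) memory
incr = incremental_mean(rewards)      # running update, O(1) memory
assert abs(batch - incr) < 1e-9
print(round(batch, 4), round(incr, 4))
```

The two numbers agree to floating-point precision, which is why the single running average per arm is a safe replacement for the ever-growing memory array.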
    n = 10
    arms = np.random.rand(n)
    eps = 0.1
    av = np.ones(n) #initialize action-value array
    counts = np.zeros(n) #stores counts of how many times we've taken a particular action

    def reward(prob):
        total = 0
        for i in range(10):
            if random.random() < prob:
                total += 1
        return total

    #our bestArm function is much simpler now
    def bestArm(a):
        return np.argmax(a) #returns index of element with greatest value

    plt.xlabel("Plays")
    plt.ylabel("Mean Reward")
    for i in range(500):
        if random.random() > eps:
            choice = bestArm(av)
        else:
            choice = np.where(arms == np.random.choice(arms))[0][0] #randomly choose an arm (returns index)
        counts[choice] += 1
        k = counts[choice]
        rwd = reward(arms[choice])
        old_avg = av[choice]
        new_avg = old_avg + (1/k)*(rwd - old_avg) #update running avg
        av[choice] = new_avg
        #have to use np.average and supply the weights to get a weighted average
        runningMean = np.average(av, weights=np.array([counts[j]/np.sum(counts) for j in range(len(counts))]))
        plt.scatter(i, runningMean)

This method achieves the same result, getting us better and better rewards over time as it learns which lever is the best option. I had to create a separate array, counts, to keep track of how many times each action is taken in order to properly recalculate the running reward averages for each action. Importantly, this implementation is simpler and more memory/computationally efficient.

Softmax Action Selection

Imagine another type of bandit problem: a newly minted doctor specializes in treating patients with heart attacks. She has 10 treatment options, of which she can choose only one to treat each patient she sees. For some reason, all she knows is that these 10 treatments have different efficacies and risk profiles for treating heart attacks, and she doesn't know which one is the best yet.
We could still use our same $\epsilon$-greedy algorithm from above; however, we might want to reconsider our $\epsilon$ policy of completely randomly choosing a treatment once in a while. In this new problem, randomly choosing a treatment could result in patient death, not just losing some money. So we really want to make sure not to choose the worst treatment, while still having some ability to explore our options to find the best one. This is where softmax selection might be the most appropriate. Instead of just choosing an action at random during exploration, softmax gives us a probability distribution across our options. The option with the largest probability would be equivalent to our best-arm action from above, but then we also have some idea about what the 2nd and 3rd best actions are, for example. This way, we can randomly choose to explore other options while avoiding the very worst options. Here's the softmax equation: $$P(a) = \frac{e^{Q_k(a)/\tau}}{\sum_{i=1}^{n} e^{Q_k(i)/\tau}}$$ where $\tau$ is a temperature parameter that controls how evenly probability is spread across the actions (this is exactly what the softmax function computes in the code below). When we implement the slot machine 10-armed bandit problem from above using softmax, we don't need our bestArm() function anymore. Since softmax produces a weighted probability distribution across our possible actions, we will just randomly (but weighted) select actions according to their relative probabilities. That is, our best action will get chosen most often, because it has the highest softmax probability, but other actions will be chosen at random at a lesser frequency.
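Before the full implementation, here is a tiny standalone sketch (my own illustration, not from the original post) of how the temperature $\tau$ reshapes the action probabilities for a fixed set of estimated values:

```python
import numpy as np

def softmax_probs(av, tau):
    # Higher tau spreads probability more evenly (more exploration);
    # lower tau concentrates it on the best arm (more exploitation).
    e = np.exp(np.asarray(av) / tau)
    return e / e.sum()

q = np.array([1.0, 5.0, 9.0])   # made-up running mean rewards for 3 arms
for tau in (0.5, 1.12, 10.0):
    print(tau, np.round(softmax_probs(q, tau), 3))
```

With a small $\tau$ nearly all the mass sits on the best arm, while a large $\tau$ approaches a uniform distribution, which is why tuning this parameter matters so much below.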
    n = 10
    arms = np.random.rand(n)
    av = np.ones(n) #initialize action-value array, stores running reward mean
    counts = np.zeros(n) #stores counts of how many times we've taken a particular action
    #stores our softmax-generated probability ranks for each action
    av_softmax = np.zeros(n)
    av_softmax[:] = 0.1 #initialize each action to have equal probability

    def reward(prob):
        total = 0
        for i in range(10):
            if random.random() < prob:
                total += 1
        return total

    tau = 1.12 #tau was selected by trial and error

    def softmax(av):
        probs = np.zeros(n)
        for i in range(n):
            softm = np.exp(av[i] / tau) / np.sum(np.exp(av[:] / tau))
            probs[i] = softm
        return probs

    plt.xlabel("Plays")
    plt.ylabel("Mean Reward")
    for i in range(500):
        #select random arm using weighted probability distribution
        choice = np.where(arms == np.random.choice(arms, p=av_softmax))[0][0]
        counts[choice] += 1
        k = counts[choice]
        rwd = reward(arms[choice])
        old_avg = av[choice]
        new_avg = old_avg + (1/k)*(rwd - old_avg)
        av[choice] = new_avg
        av_softmax = softmax(av) #update softmax probabilities for next play
        runningMean = np.average(av, weights=np.array([counts[j]/np.sum(counts) for j in range(len(counts))]))
        plt.scatter(i, runningMean)

Softmax action selection seems to do at least as well as epsilon-greedy, perhaps even better; it looks like it converges on an optimal policy faster. The downside to softmax is having to manually select the $\tau$ parameter. Softmax here was pretty sensitive to $\tau$, and it took a while of playing with it to find a good value. Obviously with epsilon-greedy we had the parameter $\epsilon$ to set, but choosing that parameter was much more intuitive.

Conclusion

Well, that concludes Part 1 of this series. While the n-armed bandit problem is not all that interesting, I think it does lay a good foundation for more sophisticated problems and algorithms. Stay tuned for Part 2, where I'll cover finite Markov decision processes and some associated algorithms.
References:
- "Reinforcement Learning: An Introduction", Andrew Barto and Richard S. Sutton, 1996
http://outlace.com/rlpart1.html
How to Deal With Exceptions

Learning to deal with exceptions can be tough, but it will greatly benefit you as a developer. Read on to get one Java dev's advice on the topic.

I recently had a discussion with a friend, who is a relatively junior but very smart software developer. She asked me about exception handling. The questions were pointing to a tips-and-tricks kind of path, and there is definitely a list of them. But I am a believer in context and motivation behind the way we write software, so I decided to write my thoughts on exceptions from such a perspective. Exceptions in programming (using Java as a stage for our story) are used to notify us that a problem occurred during the execution of our code. Exceptions are a special category of classes. What makes them special is that they extend the Exception class, which in turn extends the Throwable class. Being implementations of Throwable allows us to "throw" them when necessary. So, how can an exception happen? Instances of exception classes are thrown either from the JVM or in a section of code using the throw statement. That is the how, but why? I am sure that most of us cringe when we see exceptions occur, but they are a tool we can use to our benefit. Before the inception of exceptions, special values or error codes were returned to let us know that an operation did not succeed. Forgetting (or being unaware of the need) to check for such error codes could lead to unpredictable behavior in our applications. So yay for exceptions! There are 2 things that come to mind as I write the above. Exceptions are a bad event, because when they are created we know a problem occurred. Exceptions are a helpful construct, because they give us valuable information about what went wrong and allow us to behave properly in each situation.
Trying to distil the essence of this design issue: a method/request is triggered to do something, but it might fail. How do we best notify the caller that it failed? How do we communicate information about what happened? How do we help the client decide what to do next? The problem with using exceptions is that we "give up", and not just that, we do it in an "explosive" way, and the clients/callers of our services have to handle the mess. So my first piece of advice when it comes to exceptions, since they are a bad event, is to try to avoid them. In the sections of software under your control, implement a design that makes it difficult for errors to happen. You can use features of your language that support this behavior. I believe the most common exception in Java is the NullPointerException, and Optional can help us avoid it. For instance, let's say we want to retrieve an employee with a specified id:

    public Optional<Employee> tryGetEmployee(String employeeId) {
        return Optional.ofNullable(employeeService.getEmployee(employeeId));
    }

So much better now. But besides the features of our language, we can design our code in a way that makes it difficult for errors to occur. If we consider a method which can only receive positive integers as an input, we can set our code up so that it is extremely unlikely for clients to mistakenly pass invalid input.
First, we create a PositiveInteger class:

    public class PositiveInteger {

        private Integer integerValue;

        public PositiveInteger(Integer inputValue) {
            if (inputValue <= 0) {
                throw new IllegalArgumentException("PositiveInteger instances can only be created out of positive integers");
            }
            this.integerValue = inputValue;
        }

        public Integer getIntegerValue() {
            return integerValue;
        }
    }

Then, we make a method that can only take a positive integer as an input:

    public void setNumberOfWinners(PositiveInteger numberOfWinners) { … }

These are of course simple examples, and I did argue that the heart of the issue is that occasionally problems occur and then we have to inform clients about what happened. So let's say we retrieve a list of employees from an external back-end system and things go wrong. How can we handle this? We can set our response object to a GetEmployeesResponse, which would look something like this:

    public class GetEmployeesResponse {

        private Ok ok;
        private Error error;
        …

        class Ok {
            private List<Employee> employeeList;
            ...
        }

        class Error {
            private String errorMessage;
            ...
        }
    }

But let's be realists: you do not have control over every part of your codebase, and you are not going to change everything either. Exceptions do and will happen, so let's start with some brief background information on them. As mentioned before, the Exception class extends the Throwable class. All exceptions are subclasses of the Exception class. Exceptions can be categorized into checked and unchecked exceptions. That simply means that some exceptions, the checked ones, require us to specify at compile time how the application will behave in case the exception occurs. The unchecked exceptions do not mandate compile-time handling from us. To create such exceptions, you extend the RuntimeException class, which is a direct subclass of Exception.
An old and common guideline when it comes to checked vs. unchecked is that runtime exceptions are used to signal situations which the application usually cannot anticipate or recover from, while checked exceptions are situations that a well-written application should anticipate and recover from. Well, I am an advocate of only using runtime exceptions. And if I use a library that has a method with a checked exception, I create a wrapper method that turns it into a runtime exception. Why not checked exceptions then? Uncle Bob, in his "Clean Code" book, argues that they break the Open/Closed Principle, since a change in the signature with a new throws declaration could have effects on many levels of our program calling the method. Now, checked or unchecked, since exceptions are a construct that gives us insight into what went wrong, they should be as specific and as informative as possible about what happened. So try to use standard exceptions, as other developers will understand what happened more easily. When seeing a NullPointerException, the reason is clear to anyone. If you make your own exceptions, make them sensible and specific. For example, a ValidationException lets me know a certain validation failed, while an AgeValidationException points me to the specific validation failure. Being specific allows one both to diagnose what happened and to specify a different behavior based on what happened (the type of exception). That is the reason why you should always catch the most specific exception first! So here comes another common piece of advice that instructs us not to catch on "Exception." It is valid advice, which I occasionally do not follow. At the boundaries of my API (let's say the endpoints of my REST service) I always have generic catch Exception clauses. I do not want any surprises, or something that I did not manage to predict or guard against in my code to potentially reveal things to the outside world.
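The "wrap a checked exception into a runtime one" habit mentioned above can be sketched like this; the method names and the config-reading scenario are illustrative, not from the article:

```java
class UncheckedWrap {

    // Stand-in for a library method that declares a checked exception.
    static String readConfig(String path) throws java.io.IOException {
        if (path == null) throw new java.io.IOException("no path given");
        return "config-for:" + path;
    }

    // Wrapper: callers no longer have to declare or catch IOException,
    // but the original exception survives as the cause.
    static String readConfigUnchecked(String path) {
        try {
            return readConfig(path);
        } catch (java.io.IOException e) {
            throw new java.io.UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readConfigUnchecked("app.yml")); // prints "config-for:app.yml"
    }
}
```

The JDK's own UncheckedIOException exists precisely for this pattern; for other checked exceptions a custom RuntimeException subclass taking the original as a constructor argument plays the same role.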
Be descriptive but also provide exceptions according to the proper level of abstraction. Consider creating a hierarchy of exceptions that provide semantic information in different abstraction levels. If an exception is thrown from the lower levels of our program, such as a database related exception, it does not have to provide the details to the caller of our API. Catch the exception and throw a more abstract one, that simply informs callers that their attempted operation failed. This might seem like it goes against the common approach of “catch only when you can handle,” but it is not. Simply, in this case, our “handling” is the triggering of a new exception. In these cases, make the whole history of the exception available from throw to throw by passing the original exception to the constructor of the new exception. The word “handle” was used many times. What does it mean? An exception is considered to be handled when it gets “caught” in our familiar catch clause. When an exception is thrown, first it will search for exception handling in the code where it happened, and, if none are found, it will go to the calling context of the method in which it is enclosed and so on until an exception handler is found or the program will terminate. One nice piece that I like, from Uncle Bob again, is that the try-catch-finally blocks define a scope within the program. And besides the lexical scope, we should think of its conceptual scope, and treat the try block as a transaction. What should we do if something goes wrong? How do we make sure to leave our program in a valid state? Do not ignore exceptions! I am guessing many hours of unhappiness for programmers were caused by silent exceptions. The catch and finally block are the place where you will do your cleaning up. Make sure you wait until you have all the information to handle the exception properly. This can be tied to the throw early-catch late principle. 
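Before unpacking that principle, the advice above about catching a low-level exception and rethrowing a more abstract one, while keeping the whole history, can be sketched as follows (every class name here is hypothetical):

```java
class ExceptionTranslation {

    // Hypothetical higher-level exception exposed at the API boundary.
    static class OperationFailedException extends RuntimeException {
        OperationFailedException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Hypothetical low-level failure from a persistence layer.
    static class DatabaseException extends RuntimeException {
        DatabaseException(String msg) { super(msg); }
    }

    static String loadRow() {
        throw new DatabaseException("connection pool exhausted");
    }

    static String loadEmployee() {
        try {
            return loadRow();
        } catch (DatabaseException e) {
            // Translate to the caller's abstraction level, passing the
            // original as the cause so no information is lost.
            throw new OperationFailedException("could not load employee", e);
        }
    }

    public static void main(String[] args) {
        try {
            loadEmployee();
        } catch (OperationFailedException e) {
            // prints "could not load employee <- connection pool exhausted"
            System.out.println(e.getMessage() + " <- " + e.getCause().getMessage());
        }
    }
}
```

Callers see only OperationFailedException, while logs and debuggers can still walk the full cause chain down to the database detail.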
We throw early so we don't perform operations that we'd have to revert later because of the exception, and we catch late in order to have all the information to correctly handle the exception. And, by the way, when you catch exceptions, only log when you resolve them, or else a single exception event will cause clutter in your logs. Finally, for exception handling, I personally prefer to create an error-handling service that I can use in different parts of my code to take appropriate actions in regards to logging, rethrowing, cleaning up resources, etc. It centralizes my error-handling behavior, avoids code repetition, and helps me keep a more high-level perspective of how errors are handled in the application. So now that we have enough context, paradoxes, rules and their exceptions, let's summarize:

- Try to avoid exceptions. Use the language features and proper design in order to achieve it.
- Use runtime exceptions; wrap methods with checked exceptions and turn them into runtime exceptions.
- Try to use standard exceptions.
- Make your exceptions specific and descriptive.
- Catch the most specific exception first.
- Do not catch on Exception.
- But do catch on Exception at the boundaries of your API. Have complete control over what goes out to the world.
- Create a hierarchy of exceptions that matches the layers and functionalities of your application.
- Throw exceptions at the proper abstraction level. Catch an exception and throw a higher-level one as you move from layer to layer.
- Pass the complete history of exceptions when rethrowing by providing the original exception in the constructor of the new one.
- Think of the try-catch-finally block as a transaction. Make sure you leave your program in a valid state when something goes wrong.
- Catch exceptions only when you can handle them.
- Never have empty catch clauses.
- Log an exception when you handle it.
- Have a global exception-handling service and a strategy for how you handle errors.

That was it! Go on and be exceptional!
Published at DZone with permission of Tasos Martidis, DZone MVB. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/how-to-deal-with-exceptions
Opened 7 years ago
Closed 6 years ago

#14505 closed (invalid)

Multiple namespaces and reverse lookup do not work as advertised.

Description

### urls.py

    urlpatterns = patterns('',
        (r'^butter/', include(milkpost.urls, namespace='butter', app_name='milkpost')),
        (r'^newsletter/', include(milkpost.urls, namespace='newsletter', app_name='milkpost')),
    )

### template

    {% url milkpost:myview %}

Both urls (/butter/myview/ and /newsletter/myview/) print out /newsletter/myview/. As the following page points out, two instances of the same app should be distinguishable. This does not happen, though, and can be very frustrating!

Change History (2)

comment:1 Changed 7 years ago by

The docs aren't super clear, but after a quick read, shouldn't your template just use {% url myview %}? Then it will resolve properly inside each app. By using 'milkpost:myview' you're actually telling it to use the last installed instance (i.e. /newsletter/) here.

comment:2 Changed 6 years ago by

This is working as designed. As Frank notes, 'milkpost:myview' is referencing the app namespace using the default name. If you want to differentiate them, you need to either use 'butter:myview' or 'newsletter:myview' to differentiate the instances, *or* pass in a current_app as part of the context when rendering the view. As for the docs not being clear... I thought they were. They describe the step-by-step process and give an example as well. Suggestions on how to clarify the docs are welcome.
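For readers on current Django versions, the same two-instance setup would be written roughly as below. This is an untested configuration sketch following the ticket's app and view names, not code from the ticket itself:

```python
# urls.py (modern Django equivalent of the setup in the ticket)
from django.urls import include, path

urlpatterns = [
    # Same app mounted twice: 'milkpost' is the app namespace,
    # 'butter' / 'newsletter' are the instance namespaces.
    path("butter/", include(("milkpost.urls", "milkpost"), namespace="butter")),
    path("newsletter/", include(("milkpost.urls", "milkpost"), namespace="newsletter")),
]

# {% url 'butter:myview' %}     -> /butter/myview/
# {% url 'newsletter:myview' %} -> /newsletter/myview/
# {% url 'milkpost:myview' %}   -> resolves via current_app when it is set,
# otherwise falls back to the last installed instance, matching the
# behavior described in comment 2.
```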
https://code.djangoproject.com/ticket/14505
import "go.chromium.org/luci/common/sync/cancelcond"

Package cancelcond implements a wrapper around sync.Cond that responds to context.Context cancellation.

Cond is a wrapper around a sync.Cond that overloads its Wait method to accept a Context. This Context can be cancelled to prematurely terminate the Wait().

New creates a new Context-cancellable Cond.

Wait wraps sync.Cond's Wait() method. It blocks, waiting for the underlying Cond to be signalled. If the Context is cancelled prematurely, Wait() will signal the underlying Cond and unblock it. Wait must be called while holding the Cond's lock. It yields the lock while it is blocking and reclaims it prior to returning.

Package cancelcond imports 2 packages and is imported by 2 packages. Updated 2018-08-14.
https://godoc.org/go.chromium.org/luci/common/sync/cancelcond
In this tutorial you will learn how rendering works in an ASP.NET MVC application: how to use the RenderBody, RenderSection and RenderPage methods to design your webpage, how to set different layouts for different pages, and how to set up sections for rendering script and CSS files.

What is a Layout in ASP.NET MVC?

A layout is basically a default master page. When we create an ASP.NET MVC project using Visual Studio, a default layout template is created in the Shared folder; the file name is "_Layout.cshtml". When we add any new view, that default master page (layout) is automatically applied (you may not see any additional code added in your view). In case you add a new view but don't want any default layout applied, you can set the Layout property to null in your view. This is how you disable the layout in ASP.NET MVC, just by setting Layout to null:

    @{
        Layout = null;
    }

You can also set a different layout for any view in your project, i.e. a custom layout:

    @{
        Layout = "~/Views/Shared/_LayoutCustom.cshtml";
    }

How do RenderBody, RenderSection, RenderPage and Html.Partial work in ASP.NET MVC? Layouts are used to maintain a consistent look and feel across multiple views; a layout is like a master page. Let's see how the rendering life cycle works and look at each method and its use.

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8" />
        <meta name="viewport" content="width=device-width" />
        <title>@ViewBag.Title</title>
        @RenderSection("metatags", required: false)
        @Styles.Render("~/Content/css")
        @Scripts.Render("~/bundles/modernizr")
    </head>
    <body>
        @Html.Partial("HeaderBar")
        @RenderBody()
        @Html.Partial("bottomBar")
        @Scripts.Render("~/bundles/jquery")
        @RenderSection("scripts", required: false)
    </body>
    </html>

The RenderBody method exists in the layout page to render the child page/view. It is just like the ContentPlaceHolder in a master page. A layout page can have only one RenderBody method.
This is how you define a RenderSection placeholder in your ASP.NET MVC layout page:

@RenderSection("metatags", required: false)

We can have any number of @RenderSection calls in a layout, but each key (e.g. "metatags") has to be unique. "required: false" means the section is optional for the views that consume this layout; otherwise the section must be defined in every view. Here is an example of how you add a section in your view:

@section metatags {
    <meta name="description" content="How Layouts, RenderBody, RenderSection, RenderPage, Html.Partial work" />
}

Scripts.Render is used to render a bundle of script files by emitting script tag(s) for the script bundle defined in BundleConfig. Styles.Render is used to render a bundle of CSS files defined in BundleConfig.cs. Let's look at the code (the StyleBundle contents here are a minimal reconstruction of the original, which was garbled):

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/jqueryval").Include(
            "~/Scripts/jquery.unobtrusive*",
            "~/Scripts/jquery.validate*"));

        bundles.Add(new StyleBundle("~/Content/themes/base/css").Include(
            "~/Content/themes/base/jquery.ui.theme.css"));
    }
}

Scripts.Render and Styles.Render generate multiple script and style tags, one for each item in the bundle, when optimizations are disabled. When optimizations are enabled, they generate a single style or script tag pointing to a version-stamped URL that represents the entire bundle.

Html.Partial("") is just like calling a user control (partial view) in the main page:

@Html.Partial("bottomBar")

RenderPage("") is just like calling another page from the main page. Note: when calling it you have to specify the ".cshtml" extension.
https://www.webtrainingroom.com/aspnetmvc/layout-render
Basic auth

The basic auth guard uses HTTP basic authentication to authenticate requests. There is no concept of an explicit login and logout with basic auth. The credentials for authentication are sent on every request, and you can validate them using the auth.authenticate method.

- If the user credentials are incorrect, the auth package will deny the request with a WWW-Authenticate header.
- If the credentials are correct, you will be able to access the logged-in user's details.

The basic auth guard relies on the underlying user provider to look up and validate the user credentials.

import Route from '@ioc:Adonis/Core/Route'

Route.get('posts', async ({ auth }) => {
  await auth.use('basic').authenticate()
  return `You are logged in as ${auth.user!.email}`
})

You can also make use of the auth middleware to guard routes using the basic auth guard.

import Route from '@ioc:Adonis/Core/Route'

Route.get('posts', async ({ auth }) => {
  return `You are logged in as ${auth.user!.email}`
}).middleware('auth', { guards: ['basic'] })
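Under the hood, the credentials the guard validates arrive in a standard form: HTTP basic auth (RFC 7617) sends base64("user:password") in the Authorization header on every request. As a rough sketch of the client side (this is generic HTTP, not part of the AdonisJS API; "user" and "secret" are placeholder credentials):

```typescript
// Generic sketch of what a client sends for HTTP basic auth (RFC 7617).
// Not AdonisJS-specific; "user" and "secret" are placeholder credentials.
function basicAuthHeader(user: string, password: string): string {
  // base64-encode "user:password" and prefix it with the "Basic" scheme
  const token = Buffer.from(`${user}:${password}`).toString('base64')
  return `Basic ${token}`
}

// Every request to a basic-auth guarded route carries this header, e.g.
// fetch('/posts', { headers: { Authorization: basicAuthHeader('user', 'secret') } })
```

Because the credentials travel with every request, basic auth should only be used over HTTPS.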
https://docs-adonisjs-com.pages.dev/guides/auth/basic-auth-guard
There. Apparently I’m not allowed to point out errors, and BEST isn’t allowed to correct any before release, such as the six incorrectly spelled citations of the Fall et al 2011 paper I pointed out to BEST a week earlier, which they couldn’t be bothered to fix. And then there’s the issue of doing a 60-year study on siting, when we only guaranteed 30. Even NOAA’s Menne et al paper knew not to make such a stupid mistake. Making up data where there isn’t any is what got Steig et al into trouble in Antarctica, and they got called on it by Jeff Id, Steve McIntyre, and Ryan O’Donnell in a follow-on peer-reviewed paper.

But I think it’s useful to note here (since I know some other bloggers will just say “denier” and be done with it) what I do in fact agree with and accept, and what I don’t. They wanted an instant answer, before I had a chance even to read the other three papers. Media outlets were asking for my opinion even before the release of these papers, and I stated clearly that I had only seen one and I couldn’t yet comment on the others. That didn’t matter; they lumped that opinion on the one I had seen into an opinion on all four.

What I agree with:

- The Earth is warmer than it was 100-150 years ago. But that was never in contention – it is a straw man argument. The magnitude and causes are what skeptics question.
- From the BEST press release, “Global Warming is real” …see point one. Notably, “man-made global warming” was not mentioned by BEST, and in their findings they point out explicitly that they didn’t address this issue, as they state in this screencap from the press release:
- As David Whitehouse wrote: “…overstated.” Here’s a screencap from that paper:
- The unique BEST methodology has promise. The scalpel method used to deal with station discontinuity was a good idea and I’ve said so before.
- The findings of the BEST global surface analysis match the findings of other global temperature metrics. This isn’t surprising, as much of the same base raw data was used.
There’s a myth that NASA GISS, HadCRUT, NOAA’s, and now Berkeley’s source data are independent of one another. That’s not completely true. They share a lot of common data from GHCN, administered by NOAA’s National Climatic Data Center. So it isn’t surprising at all that they would match.

What I disagree with:

1. The way they dealt with my surfacestations data in analysis was flat-out wrong, and I told them so days ahead of this release. They offered no correction, nor even an acknowledgement of the issue. The issue has to do with the 60-year period they used. Both peer-reviewed papers on the subject, Menne et al 2010 and Fall et al 2011, used 30-year periods. This is a key point because nobody knows (not me, not NOAA, not BEST) what the siting quality of weather stations was 30-60 years ago. Basically they did an analysis on a time period for which metadata doesn’t exist. I’ve asked simply for them to do it on 30 years as the two peer-reviewed papers did, an apples-to-apples comparison. If they do that and the result is the same, I’m satisfied. OTOH, they may find something new when done correctly; we all deserve that opportunity.

2. The UHI study seems a bit strange in its approach. They write in their press release that:

They didn’t adequately deal with that 1%, in my opinion, by doing a proper area weighting. And what percentage of weather stations were in that 1%? While they do have some evidence of the use of a “kriging” technique, I’m not certain it has been done properly. The fact that 33% of the sites show a cooling is certainly cause for a much harder look at this. That’s not something you can easily dismiss, though they attempt to. This will hopefully get sorted out in peer review.

3. The release method they chose, of having a media blitzkrieg of press release and writers at major MSM outlets lined up beforehand, is beyond the pale. While I agree with Dr.
Muller’s contention that circulating papers among colleagues for wider peer review is an excellent idea, what they did with the planned and coordinated release (and make no mistake, it was coordinated for October 20th; Liz Muller told me this herself) is not only self-serving grandiosity, but quite risky if peer review comes up with a different answer. A lie of omission is still a lie, and I feel that I was not given the true intentions of the BEST group when I met with them.

So there you have it: I accept their papers, and many of their findings, but disagree with some methods and results, as is my right. It will be interesting to see if these survive peer review significantly unchanged. One thing we can count on that WON’T normally be transparent is the peer review process, and if that process includes members of the “team” who are well versed enough to review but are already embracing the results, as Phil Jones has done, then the peer review will turn into “pal review”. The solution is to make the names of the reviewers known.

Since Dr. Muller and BEST wish to upset the apple cart of scientific procedure, putting public review before peer review, and because they make this self-assured and most extraordinary claim in their press release, I say: if BEST and Dr. Muller truly believe in a transparent approach, as they state on the front page of their website… …let’s make the peer review process transparent so that there is no possibility of “pal review” to ramrod this through without proper science being done. Since Dr. Muller claims this is “one of the most important questions ever”, let’s deal with it in as open a manner as possible. Ensuring that these four papers get a thorough and non-partisan peer review is the best way to get the question answered. Had they not made the claim I highlighted above, of it passing peer review and being in the next IPCC report before any of that is even decided, I would never think to ask for this.
That overconfident claim is a real cause for concern, especially when the media blitzkrieg they launched makes it difficult for any potential reviewing scientist not to notice and read these studies and news stories ahead of time, thus becoming biased by media coverage. We can’t just move the “jury pool” of scientists to the next county to ensure a fair trial now that it’s been blathered worldwide, can we?

171 thoughts on “BEST: What I agree with and what I disagree with – plus a call for additional transparency to prevent "pal" review”

Thanks. Point 3 of what you disagree with: should that say October 20th?

I nominate Steve McIntyre and William M. Briggs as peer reviewers.

Third line: ” they only one I got to review” should be “the only one…”

What I disagree with, Item #1: “This is a key point because we nobody knows …” should be “This is a key point because nobody knows…” Just tryin’ to help.

Why does the data in the graphic stop at 2006? Anyone know why?

Willis is onto something vis-à-vis the UHI work. Crazy like a fox… I posted this at Bishop Hill’s. Like many moves in this game, the comparison may very well show the butcher’s thumb on the scale.

> Since Dr. Muller claims this is “one of the most important questions ever”, …

I get the strong sense that the BEST team “knows” they’re good, knows they’re right, and knows they are addressing “one of the most important questions ever.” Therefore it’s logical that they announce their results far and wide as soon as they’re available. And they probably didn’t even consider someone might refer to their study as “non-peer reviewed.” At least Pons & Fleischmann held their press conference in large part to establish primacy after getting wind that Steven Jones at BYU (I think that’s right) was working on an interesting cold fusion paper himself. (It turned out to be interesting and orthogonal – about tritium in volcanic gases. I forget where that research ended up.)
Perhaps BEST just likes the attention, perhaps they’re trying to lead the hype to Durban (Nov 28 to Dec 9). Perhaps they’ll learn that pride goeth before the fall.

Anthony, I truly admire your ethical and dignified stance in the midst of this farce! Surely any editor who has received these BEST papers should resign now and apologise to Kevin Trenberth, given the precedent caused by Wagner?

ROFL. BEST got $623,097 funding for this. Imagine what Anthony and his volunteers could have done with that amount of funding. Imagine if that amount was spent on actually fixing the bad sites.

“The fact that 33% of the sites show a cooling is certainly cause for a much harder look at this. That’s not something you can easily dismiss, though they attempt to.” I was taking a look at that thing some time ago (it’s not new information that 1/3 of stations shows cooling) and my conclusion was that it’s caused by the fact that the amplitude of global warming is comparable with (actually about half of) the amplitude of statistical error. Or in other words, the amplitude of chaotic temperature changes is about two times the amplitude of deterministic temperature changes (i.e. warming).

There is something strange about how this was all rushed out before it was ready – spelling mistakes, unusual errors such as the 60-year time-line, an open and transparent database that no one can download yet – the complete 39,000-station dataset trend is only produced on their website versus included in the papers – peer review not done yet. Judith Curry comments that they didn’t want to get scooped. Scooped by whom, scooped how exactly? I note the NCDC has done some scooping before. What is the back-story to how this was rolled out in such an unplanned way? I understand BEST was trying to get this out earlier in the year, and maybe deadlines crept up faster than expected, but there is something about this that we don’t know yet.
Here’s their stated goal according to the BEST project website: “Our aim is to resolve current criticism of the former temperature analyses, and to prepare an open record that will allow rapid response to further criticism or suggestions…” It sounds like they set out to vindicate analyses of the current temperature data and blunt any future criticism. Am I misreading their intentions?

Did not Roy Spencer pre-publish for PAL review on this very site?

I just watched the video clip of temp anomalies since the early 1800’s thru 2009. Does it strike anyone else as rather odd that there is plenty of variation from the blues to the reds till roughly 2000, when everything became mostly red to very red?

The approach to the UHI work seems reasonable to me. The question that we want the answer to is “what is the impact of UHI on the global trend?” By attempting to remove urban stations from the dataset (about 1/2 of the stations) and then comparing the resulting rural subset with the entire set, you get a good idea of the impact on the overall trend. Since the overall effect of the UHI on the dataset is basically 0, confirming other studies, it really wasn’t worth looking at any further. However, the code and data is all available, knock your socks off. Just remember that the important question is not the absolute temperature effects, but only the effect of the UHI on temperature anomaly trends. Urban heat islands have a large effect on absolute temperature; the question is whether there is much effect on the trend.

Thanks for laying out your PsOV, Anthony.

“Since Dr. Muller and BEST wish to upset the apple cart of scientific procedure, putting public review before peer review…” You will therefore recommend upsetting the apple cart of peer review even further by removing the anonymity of reviewers? An alternative reading here is that we get to see the papers before and after review, and be witness to how they are corrected.
They will let their mistakes be aired to the public also. In purely scientific terms, this is much more open than the usual process. There’s no doubt in my mind that the media blitz of their pre-reviewed papers is wrong. It appears to have forced you, unfortunately, to call for a further abandoning of the proper process. This may be good politics, but it puts you in opposition to the normal peer-review process, which you have otherwise maintained should be upheld – having learned to your cost the perils of circumventing it (per your comments in the other thread). I propose an alternative. The papers should pass through two peer-review processes. One should be the normal anonymous review, satisfying the journal’s obligations, and the other should be an open peer review. Care should be taken that they are not created or seen as competitive, but complementary. Because scientific imprimatur is given if the journal selects its reviewers, the journal should choose all six reviewers, three anonymous and three open. This not only satisfies Wattians and mainstreamers, it also lends twice the assistance to strengthening the papers. Does this seem equitable?

REPLY: Interesting, but only if the journal would be bound by the idea that the paper has to pass both reviews to be published; otherwise they just walk right by it while thumbing their nose at open public review. – Anthony

Their behavior thus far is actually the same method of dealing with the media that the IPCC uses. They release the (pre-review conclusions) SPM before the full report so that the masses see the conclusions. After that, who cares what is found in the actual report by experts? Everyone already believes them, right? After all, they’re some of the experts, and they wouldn’t let their work be misrepresented in the media? The killer is the fact that they can actually say that with a straight face, since an un-reviewed paper isn’t their final work.
Actually, what struck me was the absurd amount of smearing early on. I mean, 1/3rd of the U.S. is covered by their first thermometer? Did they adjust their resolution later? If so, what determines their resolution? Do they have a function wrt time for it?

Rattus Norvegicus says: October 21, 2011 at 6:28 pm

Surely you see the logical problem with that claim, Rattus? Hint. There was a time before the urban temperature rose … w.

CTD says: October 21, 2011 at 7:05 pm

The UHI paper shows the “very-rural” sections with a significantly greater warming trend. Obviously global warming must be thwarted by paving those areas.

So, to prevent CAGW, all we need to do is pave paradise and put up a parking lot.

Frank Lasner has developed a worldwide, unadjusted, rural temperature index. A guest post at Jo Nova with implications for the rural/UHI question. I know this should be in Tips but I feel it is relevant to your argument if you have not had a chance to see it yet. No surprises: coastal stations match GISS ocean data. However, inland, rural stations have behaved very differently over the instrument record (1880 to date), and Lasner correlates them with terrestrial glacier advance and retreat, Greenland glacier melt rates and seal level variation over the instrument period. With a tip to Willis vs Grinsted, Lasner’s correlation goes up and down with the sea level, unlike that naughty trace gas.

Russ Steele asked, “Why does the data in the graphic stop at 2006?” I’m assuming you’re referring to this graphic; there was a similar one published with The Economist article. The curves are 10-year moving averages, which are plotted at the midpoint of the 10-year interval. The BEST results (here) include monthly data up to and including May 2010. The last 10-year sliding window is centered at May 2005.
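The windowing arithmetic behind that answer is easy to check. This is an illustrative centered moving average, not the BEST team’s actual smoothing code: a 10-sample window over annual data ending in 2010 has its last full window spanning 2001-2010, whose midpoint falls in mid-2005.

```typescript
// Illustrative sketch of a centered moving average (not BEST's code).
// A window of width w only exists where a full window fits, so roughly
// w/2 points are lost at each end of the smoothed series.
function centeredMovingAverage(values: number[], window: number): number[] {
  const out: number[] = []
  for (let i = 0; i + window <= values.length; i++) {
    const win = values.slice(i, i + window)
    out.push(win.reduce((a, b) => a + b, 0) / window)
  }
  return out // out[k] is centered on the midpoint of values[k..k+window-1]
}

// 131 annual values spanning 1880..2010 leave 131 - 10 + 1 = 122 windows;
// the final window covers 2001..2010, so its midpoint is mid-2005.
const years = Array.from({ length: 131 }, (_, i) => 1880 + i)
const smoothed = centeredMovingAverage(years, 10)
const lastWindowStart = years[smoothed.length - 1] // 2001
```

This is why a smoothed curve drawn from data ending in 2010 appears to "stop" around 2005: the endpoints simply have no full window.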
Ooops, typo, but I guess if sea level is changing then so is the seal level.

“The rush to judgment they fomented before science had a chance to speak is worse than anything I’ve ever seen.” Consider the conclusion of Watts, 2009, in a well-publicised but not-peer-reviewed publication: “The conclusion is inescapable: The U.S. temperature record is unreliable. And since the U.S. record is thought to be “the best in the world,” it follows that the global database is likely similarly compromised and unreliable.” Looks to me like a rush to judgment, combined with publicity-seeking, before “science had a chance to speak”; in fact, before there was much of any mathematical testing to support the conclusion. Care to comment?

REPLY: Sure, the booklet wasn’t destined for peer review and the conclusions weren’t supposed to be mathematical for it, but qualitative, and the recent GAO report agrees with my conclusions in that booklet. Clearly, THEY took it seriously, even if you and your rabbet friends do not. Of course people like yourself that operate in the shadows behind fake names would much rather just ignore such problems and sweep them under the rug and not deal with them. The fact that NOAA has also systemically followed our survey and closed dozens of stations (or removed the thermometer while retaining the rain gauge) also shows that they know the station(s) are unreliable. I recall Tom Peterson of NCDC writing a big hullaballoo report claiming how Marysville (the station that started this all) was just fine. Guess what? Yep, NOAA closed it.

Finally, the biggest proof of the USHCN network being unreliable is the fact that NCDC commissioned and built the Climate Reference Network, so that they would have a truly accurate network. In their own words from the Climate Reference Network Handbook in 2002: Source: You don’t build a second independent network if the primary is doing just fine, now do you?
Read some of the money pleadings for that one and you’ll see why NCDC knew there was a real problem; they just hoped nobody would notice. Too late. So not only does the GAO think there’s a problem, but so do NOAA and NCDC, where their actions speak louder than words. Oh, and we’ve already surveyed part of GHCN, and it has even worse problems. Now scurry off to the hole you live in, rodent. – Anthony

Why does the “Decadal land-surface average temperature” graph stop at 2005? This is almost 2012; why have they dropped 7 years of data?

Yeah, I’m with Willis on that one. This is fairly silly to put in the same sentence considering there was a time when urban centers did not contribute to the overall temperature, and our urban centers have been growing.

This sounds very much like an issue I discussed some time ago with a friend (a scientific technician) who was involved in setting up experiments for scientists at a certain major Australian university. It was a quite common response, when the “first run” of the experiment gave favourable results, for them to say “that’s enough, stop”, but as the technician explained to them, there was little cost in actually running the process repeatedly over the next 48 hours, as the main cost had been absorbed in the set-up; therefore it would be silly to abandon it after one run. As he explained to them, the ability to repeat the result would give their paper much more credibility, so they reluctantly yielded and waited anxiously for the confirmed results. Of course some “scientists of worth” might consider themselves sufficient authority and excellence above all else to ignore technical or mathematical advice; after all, they know what they want!! We were discussing the mad rush to get poor results into the media and the sloppy work that can result. Perhaps nervous desire dominated this research?

Willis, quite frankly, I don’t see a problem with the approach.
Increasing urbanization is accounted for by not including sites ranked as urban in the “very-rural” set. This means that the trend due to increasing urbanization is included in the full dataset and factored out in the subset.

Why on earth do you think that ‘peer review’ consists only of getting comments from the journal referees? It is a much wider process that continues after publication, and it is not confined to the initial referees’ comments. These papers are undergoing ‘peer review’ right here, right now, and clearly some of you do not like them. The ultimate ‘peer review’, however, consists of those subsequent papers which are published commenting, favorably or unfavorably, on the original. So go publish something.

REPLY: Why on Earth do you think it is OK to list conclusions to the media prior to the papers being released to the public for that “extra review” you claim, and prior to publishing peer review? That’s what they did. Papers went out to MSM days before October 20th, and I got calls to comment on 3 papers I had not seen. Explain how that is OK. Of course if it were “I” that had “published something” and done it that way, I’d be excoriated for doing so, because we all know there are two sets of rules:

People saving the planet rule: “end justifies the means”

Those rotten anti-science skeptics rule: “Peer review with impossibly high bar”

So go get a box of scruples or something – Anthony

Rattus: The data is actually not all available. To assess their categorization of urban and rural you need access to MODIS 500 data, and you need a list of stations they categorized as rural. It takes some doing, but the dataset is available; you have to do some footwork to get it. I’m pretty sure, given what I went through to get it, that most people would not be able to. There are some other points to be made here.
I’ll save them for now.

Anthony: Apparently, you are not aware of this: Muller & Associates, Richard Muller, President and Chief Scientist. That’s what they say; here is what you said:

Now do you begin to understand? Look above, their website: they call it greenGov, sounds like a mixture of green and government to me; sound like skeptics to you? They want to “help” you avoid “climate change” (their words) to avoid “carbon dioxide” (their words), sound like skeptics? They want to help you with “clean energy” and to be “sustainable” (their words), sound like skeptics? They have a prior agreement to appear in the IPCC report (that’s sure what it sounds like), sound like skeptics to you?

Sooo, why might they have a media blitz? It’s rather obvious, actually: Muller & Associates wants business, and the BEST project assures that they will get it. Having a media blitz, and actually being in the IPCC report, sets them up as “the experts”, you know, the ones to call if you need “help” (P.S., bring cash). Does it begin to make sense now? And they want you associated with them, you and Judith Curry, and as many other bigger-name skeptics as they can get. That way they can say, “see, even the skeptics agree with us, we are that good”. That way, they can even get business with people who are somewhat skeptical, which is at least half of them now. Hey, double the business of all those other companies, who wouldn’t want that?

You say that some people are going overboard (“There’s lots of hay being made by the usual romminesque flaming bloggers”), suspecting the BEST people of bad motives, then you come here and provide practically definitive proof that, yes, they are doing exactly that. Look at their own website, figure it out for yourself. Well, that’s what it looks like to me. I would sure love to be proven wrong, but if this thing gets pal-reviewed, and gets into the next IPCC report, that’s pretty much an open and shut case.
As soon as someone has their dataset for how they classified temp sensors in Southern California, let me know. I’ve got a couple of good GPS units and a Thomas Guide. I’ll do what they were unwilling to do: photograph the sites they chose as “rural”. Based on that map in one of their papers, I’m nearly certain there are some good candidates for questioning.

Anthony, did I say that I approved of the way the BEST papers were pre-published? No? Then do not make assumptions, please. The simplest way to publish something is as a comment to the same journal. As soon as the papers officially appear, submit a criticism. You actually have an advantage – you have already seen these papers.

Anthony, I have a favor to ask: I have been following up on Willis Eschenbach’s work presented here: I am getting results that are unexpected. Could you please send him my email and ask if he would be willing to send me his data and R code. Alternatively, I could send him my data, SAS code, and results, but most likely I can read his R code better than he can read my SAS code. Actually, anything he would care to communicate to me I would be willing to read. I thought that he had put up his data and R code for that post, but in searching I have not found it. Yours truly, Matthew R. Marler, PhD (statistics, Carnegie-Mellon University); most of my work has been in nonstationary behavioral and biological time series.

REPLY: Done – Anthony

Ummm, let’s not lose our focus here. Temperatures are not, in the U.S. (or as measured in the developing world, with their long and accurate data), rising at a rate exceeding natural variability. There are political and capitalistic forces looking to expand their reach by exploiting any method to drive opinion. The thing they fear most is the voter.

Mosher, I’d be happy to see that list of rural stations. Is it possible for someone to provide a list or a link to the list?

Jeremy: I’ll do what they were unwilling to do, photograph the sites they chose as “rural”.
You’ll have to get in line…. : < )

I think that last word, ‘openly’, is not what you want. I think you mean “innocently” or “fully.”

I realize that the discussion on this thread is very focused, but I think that eventually we must look at these issues in a wider view. I did a presentation last year that looked at the land use of the earth and found some interesting land use information: forests 32%, pastures 26%, arable land 10.6%, urban areas 2.4% and other 29% (of which deserts make up more than half of this “other” category). I’ve lifted the few slides from the Urban part of the presentation. It mostly shows that the definition of urban areas is still in flux and is often controversial. I wanted to see, if we used a fairly liberal definition of an urban area, what percent of the earth would be represented. I suspect that historical surface temperature measurement sites might be encompassed by or very near these urban areas, especially as these areas have been spreading out to surround the older sites. As controversial as definitions might be, I suspect that defining 1% of earth’s land area as urban is too low and does not represent the present reality.
•Surface of the Earth is water 70.8% (361.132 million km2 ) and land 29.2% (148.94 million km2 ) DISCUSSION OF URBAN ISSUES •Until recently most studies have used 1.5% of global land as urban (CIA Factbook included) •The definition of urban seems to wander between the use of population density (preferred), light pixels at night, and/or surface disruption characterizations •With the advent of more detailed satellite image analysis new values of land use seem to favor a higher value of perhaps 3% •Some reliable sources claim that 3% has been fudged too high and that 2.4% is probably closer to the correct number •2.4% = 3.575 million km2 •There is some controversy on this point but I expect to see more effort and greater accuracy of this value in the next few years •In 2010 about half the world’s population lived in urban areas (3.5 billion) while about 3.4 billion lived in rural areas •I found a study that estimated that roads and parking lots covered between 1.5 to 2% of the world’s land surface •I am assuming that a significant portion of roads and parking lots are already included in the 2.4% value of urban land use CONCLUSION (Relating to the Urban part of the presentation) •Land use questions grew out of climate issues and questions about where surface temperatures were being measured (mostly in urban areas?) •Does surface temperature measurement (especially if it is mostly done in cities) tell us much about global climate? Bernie “They didn’t adequately deal with that 1% [Earth’s urban regions] in my opinion, by doing a proper area weighting. And what percentage of weather stations were in that 1%?” On this question, Warmista have done nothing but duck, weave, and, if necessary, sit down on the mat. There is a crucial ambiguity in the claim that UHI does not contribute to average land temperature rise. One interpretation of the claim is that the heat generated by urban regions does not contribute to average land temperature rise. 
That claim is most likely true but is logically independent of the other interpretation. The other interpretation is the claim that UHI does not disproportionately affect the thermometers that are used to measure the temperatures that are raw data for calculations of average land temperature rise. That second claim about the impact of urban growth on thermometers has not been investigated except in Anthony’s work, and the data for it stretches back only 30 years. That claim defies the highly educated and focused common sense of a multitude of sceptics who are residents of metropolitan areas. For example, the claim that thermometers in the Atlanta area have not been overwhelmed by UHI is preposterous. Thermometers in Douglas, Paulding, Cobb, Forsyth, and Gwinnett counties were all rural 30 years ago but are all urban now. All those counties are north or west of Fulton and DeKalb counties, which made up the Atlanta metropolitan area in 1968. In addition, the claim that encroaching urban areas cause a one-time jump in the record of each thermometer is preposterous. The growth of urban areas is incremental and continues for decades. Cobb County has warmed incrementally from urbanization since I first moved there in 1968.

Anthony, you state that the GAO agreed with your conclusions that “The U.S. temperature record is unreliable.” But GAO did not assess that. This is from the conclusions in the GAO report. It is unscientific to assume NOAA has failed to winnow a reliable temperature record from problematic data. This is where quantitative analysis must be done, not qualitative. You also cite NOAA’s own 2002 directive on the USCRN. Here again, this points to shortcomings with the data and siting, but is not a ‘conclusion’ that NOAA’s temperature record, which tries to overcome problems with the data, is unreliable. The latest quantitative assessment shows that there is very close agreement between USHCN and USCRN.
This is not proof positive that the temp record is reliable – although it does corroborate – but certainly there is little evidence from proper quantitative analyses that the record is unreliable. It is important not to forgo accuracy in the heat of this debate. GAO does not support your conclusion on the (non)reliability of the US temp record. It does support your observations on siting issues. You were right in the other thread to say that your previous, non-peer reviewed conclusions were hasty. Don’t let your enemies, real or imagined, encourage you to muddy objective analysis with politics. (I have offered the same advice to the ‘other side’, most recently at Deltoid. If this game of tribes is to abate it will require discipline from all participants, whatever the extenuating circumstances) In my view, Richard Muller has lost all credibility. In fact, I’d go so far as to call him a vile hypocrite. To quote him, “Some people lump the properly sceptical in with the deniers and that makes it easy to dismiss them, because the deniers pay no attention to science. But there have been people out there who have raised legitimate issues.” Deniers pay no attention to science? Really? In this case, he has deliberately, consciously chosen to circumvent the scientific process, and yet he has the nerve to claim that a “denier” such as me ignores the science. Bunk. Well, if he has the audacity to label me a denier and claim that I ignore the science, I think I certainly ought to be able to label him a vile hypocrite. And, I’d like to point out, he did a lot more than ignore the science. He deliberately, knowingly, and very consciously circumvented the scientific process. My speculation is that he ignored the scientific process simply because he knows that he can get away with it. He knows the global warming true believers will never call him on it. He knows the news media certainly won’t call him on it. 
He knows that his university’s administration will never call him on it (and will very likely reward him, as this will no doubt bring in research money and good publicity). He knows that his fellow physics faculty will absolutely never call him on it. He knows that he’s been tenured a very long time, so the usual standards and rules don’t apply to him. (Just imagine what would happen to a younger, freshly-minted Ph.D. in his department, who deliberately circumvented peer review! His career would be over instantly.) He knows he’s privileged, and he’s using his status to deliberately destroy the scientific process. I used to respect this man, and I enjoyed his first book. No longer. Juanslayton: I would like to see the list as well. The problem is we don’t have the list of stations they counted as rural. While I have the Modis500 data I would need their complete list and the ones they counted as very rural to double check. There are NON TRIVIAL issues working with Modis data. It’s very good data. In fact it’s the best at doing this work. However, there are two tricky issues. Yes, it is now incumbent upon BEST to ask that the names of their reviewers be revealed. BEST has given us reason to believe that they are receiving PAL review. They should be willing to go on record asking the editor to reveal the names. jimmi_the_dalek says: October 21, 2011 at 8:05 pm “Why on earth do you think that ‘peer review’ consists only of getting comments from the journal referees? It is a much wider process that continues after publication, and it is not confined to the initial referees’ comments.” You are making up your own meanings on the fly. Peer review is managed by a journal editor to serve the purposes of the journal editor. The journal editor will usually accept the recommendations of his peer reviewers though he is not obligated to do so. Once the journal editor has decided that peer review is finished for a particular submission then it is.
There is nothing else in the world of scientific journals that qualifies as “peer review.” Once an article has been published, the authors have received all professional benefits that can flow from that article. Most authors will not read published replies to an article unless they are written by someone who is prominent in the field. To talk of “peer review” after publication is silly romanticism. I really hate to say this… but given your experience with NOAA were you not… never mind. “First time… shame on you; second time… shame on me.” Third time will be the previously mentioned association with Kevin Trenberth which, unless I’m mistaken, will yield you nothing but more heartache. REPLY: It has been cancelled, see the thread on it – Anthony Nick Stokes says: October 21, 2011 at 5:23 pm Not only should the names be public, but the text of the review as well, and that text you do have, so in the name of transparency, publish it here. I do not buy the lame argument that the text is the property of the reviewers and cannot be published without their consent. barry, good advice. Steve Mosher and Nick Stokes. Muller may be [snip] crazed with fulfilling his political agenda without any weather facts whatsoever. That is my opinion. This post from Jay Currie earlier sums up so much about the total lack of understanding about the temperature record and how it is calculated. In ‘Imagine’, Jay tries to present an example of the impact of rural vs non-rural temps based on AVERAGING their temperatures. In this he is reflecting a profound misunderstanding about how the temperature record is calculated. And this has an important bearing on the surfacestations.org results, the ‘march of the thermometers’ argument etc. Simple take-home message: the temperature record IS NOT CALCULATED BY AVERAGING TEMPERATURES!!! That would be mathematical nonsense!
If you want to understand why, you could read the following series of posts I published at SkS earlier this year: ‘Of Averages and Anomalies – Part 1A. A Primer on how to measure surface temperature change’. If you are reluctant to read them, ask yourself this question: do I understand the implications of the difference between averaging temperatures and averaging temperature anomalies? Do you really know the difference between the two? If you don’t, how can you be certain that all this kerfuffle isn’t a storm in a thimble? An earlier poster commented that they were going to look at some stations in California. Someone else suggested there would be a queue. Simple question. Why? Do you think that looking at a few stations provides any relevant information? I repeat my point. Do you think that the temperature record is calculated by averaging temperatures, so that if a station is warmer for some local reason that biases the average? If so you are wrong! Not because a falsely warm station wouldn’t bias a calculation based on averages of temperatures. Of course it would. But the temp records ARE NOT CALCULATED BY AVERAGING TEMPERATURES. Precisely because averaging temperatures would be a mathematically ‘fragile’ approach – just plain Dumb, NOBODY DOES IT THAT WAY. NOBODY. Not GISS, not NOAA, not HadCRU, not JMA, nor the Dutch or the Australian BoM or anyone else. Averaging temperatures is mathematical nonsense!! So nobody does that. If you don’t understand why averaging temperatures is a SERIOUSLY dumb way of calculating a temperature record, perhaps doing some research before you comment might be a worthwhile activity. So now we’re back on the Conspiracy Train? That didn’t take too long, now did it? There is an article mentioning you here. The deal’s done: come up with the ‘right results’ and peer review and IPCC acceptance are in the bag, and they get the full resources of the ‘Team’ and friends to support them.
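The averaging-temperatures-versus-averaging-anomalies point made above can be shown with a toy example. This is a deliberately invented two-station illustration, not any group's actual method: both stations have zero real trend, but the cold one drops out halfway through the record, which fools the raw average while leaving the anomaly average untouched.

```python
# Toy illustration (invented numbers) of why averaging raw temperatures
# is fragile when the station mix changes, while averaging anomalies is not.
# Two stations with different climatologies; both have ZERO real trend.
# The cold station stops reporting in 2005.
years = list(range(2000, 2010))
warm = {y: 25.0 for y in years}               # tropical station, 25 C every year
cold = {y: 5.0 for y in years if y < 2005}    # alpine station, drops out in 2005

def mean(xs):
    return sum(xs) / len(xs)

# Method 1: average the raw temperatures of whatever stations report.
raw = [mean([s[y] for s in (warm, cold) if y in s]) for y in years]
print(raw[0], raw[-1])    # 15.0 -> 25.0: a spurious 10 C "warming"

# Method 2: average each station's anomaly from its own baseline first.
base = {id(s): mean(s.values()) for s in (warm, cold)}
anom = [mean([s[y] - base[id(s)] for s in (warm, cold) if y in s]) for y in years]
print(anom[0], anom[-1])  # 0.0 -> 0.0: no spurious trend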
AGW has long been a dirty game which has more to do with PR than science , BEST has merely joined in under the rules of the game. Anthony, I greatly admire your transparently honest and gentlemanly approach to this. Many other commenters are also trying to engage with the “scientific” basis of BEST’s 4 reports. Not being so “gentlemanly”, I suggest that all you need to know is:- (1) BEST got $623,097 funding for this. (2) 20 October (before Durban) was scheduled for a media storm “demonstrating” that the “deniers” had been decisively trounced and cAGW was “confirmed”. (3) Muller used “tarbaby” tactics to try to cover you and other sceptics with the tar of the media storm. (4) They have a pre-ordained slot reserved in the IPCC’s AR5 and no doubt the commentary there has already been sketched out by Schmidt, Trenberth, Santer, Jones & Mann. Personally, I have no doubt that all this was written into the Contract with Muller before he started work. The reports he produced were produced purely to fulfil the agenda set out above. If (as seems improbable) Muller ever had any integrity, he sold it for a mess of pottage. But, if the “Team” seriously believes that this will solve anything, then they are being very naive indeed. Sooner or later the truth will out. “The thing they fear most, is the voter” Not in the UK, where we are ruled by the EU, the decisions of which are made by unelected persons while the elected persons board the gravy train. “Four papers that have not been peer-reviewed yet, and they KNOW they’ll pass peer review and will be in the next IPCC report?” Wahl & Amman managed it! Anthony I do think you are over reacting to BEST, but that aside, for me the real story of BEST is the reasons behind the need for BEST. Muller is on record as saying he had lost faith in GISS temp & CRU because of the behaviour of Jones, & Hansen. Muller said he couldn’t trust their ‘science’ any more and because of this BEST was necessary. 
I posted a comment to this effect on Richard Black’s column in the BBC but it was deleted. I deliberately didn’t include names of people & data sets in the post, but still they deleted it anyway… I also thought it was perverse that the BBC used a picture of the hockey stick to promote Black’s story. Again the hockey stick was something that Muller said offended him. Anthony, I for one would like to thank you for your effort and time not principally for the surface stations but for this blog. You may not sense it yet but the effect of this site opening up the discussion of climate change for scientists and non-scientists alike has been instrumental in getting to this point. The discussion being open is why the blog achieves awards and has such a high hit rate. If the peer review process finds anything different from this open discussion then the reviewers will not have contributed to the open discussion, and the next question would be why did they not participate, or if they did then why do their views differ from the views of the open consensus? The movement towards open scientific discussion has to continue in this; it’s revolutionary. Again, thank you. It would appear to me that this so-called new interpretation of the already mashed and mangled data is actually the first stage in the proof that CO2 is not the main driver of climate. I have heard from Dr Santer that CO2 is considered to be well mixed in the atmosphere, yet BEST is showing an enhanced hockey stick for the “Land only” component of global temperature, about 1 degree in 60 years. As they themselves seem to imply that their next great PR exercise will be the release of the sea-based air temperature record and they expect that to show much less warming (to balance the figure with their mates it will have to be about 0.3 of a degree), does this not imply that a consistent mix of CO2 somehow increases temperature more rapidly over the land than the sea?
Surely a more logical explanation of any land warming would be that induced by the increasing use of land for agriculture and building. Therefore whilst Man’s use of the land area may well be linked to a small amount of temperature change, it is not the CO2 that causes it but the change in albedo. My second thought is the use of the word “Island” in UHI. This seems to imply that the land area is in a constant state of still air when in fact the movement of warmed air over rural and urban areas is constant and sometimes dramatic, especially in the corn belt and central tornado corridor. My third thought is that they are using a set of thermometers put in place at the end of runways to check whether the wings need de-icing, and sea temperatures taken by ships designed to assist Chief Officers in their assessment of the relative humidity in the ships’ holds, not to interpolate to the “Nth” degree for political reasons. I have other thoughts but the cat is walking across my computer keyboard and needs feeding. A skeptical position is reasonable; there are many reasons to be skeptical about the political bandwagon and gravy trains that surround the issue. That’s why people like Muller are worth listening to, as they try to point out where the science is, and also where the reality of CO2 output is (i.e. CO2 output of the developing world is the real problem). Anyway, worth a watch: Also, several important points about the BEST work:
1) Even though the raw data they use is largely the same as others have used, they have returned to the original data as much as possible, rather than using ‘adjusted’ data (which has always been one of the complaints about the other data sets).
2) Their methods allow much more of the data to be used, including incomplete sequences.
3) They publish both their data & their programs freely for others to see & use. (No more need for FOI requests!)
From the screen shot of their text: “The UHI effect is locally large and real …” Hang on, I thought they came to the rather surprising conclusion it was negative. So the delinquent teenager is growing up quickly and is now finding new and more sophisticated ways of getting its own way. In the past, the cabal that has been bullied into submission no longer provides the legitimacy it needs, so a new ally is sought. Step forward leading sceptics, who get a few promises that this time things will be done properly and get involved with the delinquent’s friends. On the platform of open and transparent conduct these friends decide to confirm the answer to a question no one disputes and get rewarded with access to the teenager’s exclusive club where fortunes are to be made. The delinquent teenager smiles to itself in the mirror, realising that no matter what it does it will always get its own way. If one assumes that there are only three reasons for such discontinuities:
1. Change in weather station siting or instrumentation.
2. Change in environment surrounding the weather station site.
3. Rapid climatic change (such as the 1977 PDO shift in oceanic temperatures near Alaska)
you will presumably be able to stress test your temperature graphs based on eliminating the effects of such discontinuities through simple subtraction or re-addition of the size of the discontinuity from all subsequent records? It would again be a major point of bringing concordance to the discussion by eliminating a valid concern, not through saying that the adjustments are valid or not, but highlighting what the effect of such adjustments would be were they incorporated.
The conclusion that would be drawn would either be that it has little if any effect; it has a significant effect in explaining away the increase in the ‘global temperature index’; it eliminates the current effect; or it amplifies the current effect by in fact highlighting a bias toward changes which led to colder temperature records being erroneously recorded. I have no clue what the outcome would be, but it is clearly in the interests of all public officials to know the outcome of that stress test. I find this statement a bit odd: “Apparently I’m not allowed to point out errors”. I am sure that lots of people will be doing exactly that; in fact I posted a link to someone who has already done so on the first thread. At least they think it is an error? Sun Spot says: “Why does the “Decadal land-surface average temperature” graph stop at 2005? This is almost 2012; why have they dropped 7 years of data?” Worried me too until I guessed that they have used the midpoint of the ten-year average. This gets rid of the (awkward for them) 1998 max. Presumably if temperatures really do decline over the next decade they can use a twenty-year average to “hide the decline”. You can’t call for changes in the review process just because this is a very important topic to us. The process works quite well and any “howlers” in these new papers will be exposed. It isn’t right that there should be a subjective sliding scale of review because, as we all say, “the proof of the pudding is in the eating”. Our cause is just and it doesn’t need special pleading. “Serious claims belong in a serious scientific paper.” “If you have a serious new claim to make, it should go through scientific publication and peer review before you present it to the media.” […] – Ben Goldacre. This.
When Professor Dorothy Bishop raised concerns, Professor Greenfield responded: “It’s not really for Dorothy to comment on how I run my career.” My next thought was that if one third of the temperatures are cooling and the other two thirds are warming over only 25% of the planet’s surface, surely the word “Global” is not in any sense of the word applicable. So why do we continue to be bombarded with this fallacious statement when everyone already knows and agrees that there are areas of increasing temperatures and areas of decreasing temperatures? Rising sea levels and falling sea levels, accumulating sea ice and receding sea ice. No one has yet, to my knowledge, demonstrated that any supposed warming is any more than the noise in the signal. Statistical hoop jumping does not improve the historical record. When there is 60 or 90 years of satellite data you can dig up my corpse and etch “you were wrong” onto my bleached bones. Until such time as that, I will continue to see projects such as BEST as the interpretation of garbage using flawed computer models and confirmation bias for political purposes. Why do I always misread the word “Our Finding” as “Our Funding” in all these PR exercises? Is it that the two words are so inextricably linked that the first cannot be written without an eye to the second? I don’t know about you guys, but I think that their averaging approach is a huge advance over what was done in the past. They are quite open about the fact that this is still early days in the analysis, in the paper. I didn’t read the press release. This is such an improvement over gridding by dividing the lat and longitude by 5. The temperature adjustments don’t seem outrageous to me. What is interesting is the graph that Anthony didn’t show, where they took the GHCN data, not Hansen’s or Jones’s, and ran it back to 1800. They divided it into five subsets and graphed them, so the results are somewhat robust, and it shows that 1800 was approximately the same temp as today.
Anthony, when you wrote which premise did you mean? It’s being quoted in a number of places, e.g., here with a presumption that it’s quite clear what ‘my premise’ refers to, but I can’t be sure from the blog post. Why not go all the way and run “rurality” as a factor in the regression? And test for trend! NotTheAussiePhilM says: October 22, 2011 at 3:24 am. Show us one of these “deniers” of global warming. Sure, you’ll find some who will show that it hasn’t warmed in the last 10 years or so, but they will also state that we have been warming since the end of the LIA. Muller using the “denier” label is more of a hateful, mean-spirited usage than anything else, especially when he allows the listener/reader to think that most of the “skeptics” are these “deniers”. Muller loses a lot of respect and acceptance within the skeptic position by his words and now his BEST actions. That you, or anyone, accepts this and would excuse it shows how effective he and other users of “denier” are. Tainted Jury Science. The media eventing is deliberate and intentional. Peer review will now be irrelevant as it was intended to be, and the process is indicative of contempt for the peer review process (not that that’s necessarily a bad thing). The paper(s) will not be read, or more importantly, understood, by the very vast majority of those who will receive the [press] “news”. There is nothing about science in this – it’s simply about gaming the system. If the science publishing world had any integrity, the press gaming before review would disqualify the paper from publication in any journal for any reason. But then, it’s never really about the science. McLuhan would be proud. There are two levels here. Firstly there are the papers. Then there is the PR campaign. The problems in the papers are of less concern to me. I trust that these will be addressed and dealt with by the scientific process … eventually. The real problem as I see it is the PR blitz, which is purely propaganda.
I find the massive straw man arguments particularly offensive. Efforts to address issues with the papers themselves really do nothing to address the PR campaign. The propagandists that wrote this drivel obviously don’t care what the papers say, since they massively misquote and misrepresent them. I suspect they really couldn’t care less if the papers are revised. The papers are not being reported on; they are merely being used as an excuse for mounting an independent propaganda blitz. The PR blitz needs a direct response in my opinion. It isn’t just that the project states that human contributions may be somewhat overstated. Look at the numbers on page 12, 2nd to last paragraph before Acknowledgments. There is a feature that won’t let me copy and paste, and you want to read the whole paragraph, but the last words are, “In that case, the human component of global warming may be somewhat overestimated,” as stated above. These words come right after showing that the Atlantic Multidecadal Oscillation (AMO), which they previously showed to be tightly correlated with temperature changes up or down, rose sharply since 1975 by 0.55 degrees, while global temps as they calculate them rose by 0.8 degrees. That would mean by simple subtraction that in the last 35 years, temperature forcings other than the AMO (e.g., CO2, methane, ozone, black carbon, deforestation, and reduced sulfate) caused temps to increase by 0.25 degrees in 35 years. That is a rate of less than a degree per century. Back of the envelope, yes, but very encouraging. I’m sure Richard Lindzen will be quite happy. The link to the study: The reason for the media blitz on Oct 20 is right here: “And what percentage of weather stations were in that 1%?” Very valid question, and it is the first the statisticians should have dealt with. It answers the question of how representative the sample of measurements is.
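The back-of-the-envelope subtraction in the comment above is straightforward to reproduce. The inputs below are the figures quoted from the paper by the commenter, not independently verified:

```python
# Check of the AMO back-of-the-envelope arithmetic quoted above.
total_rise = 0.80   # deg C, global land rise since ~1975 (as quoted)
amo_rise = 0.55     # deg C, AMO-correlated component (as quoted)
years = 35

residual = total_rise - amo_rise          # non-AMO forcings (CO2 etc.)
rate_per_century = residual / years * 100
print(round(residual, 2))          # 0.25 deg C over 35 years
print(round(rate_per_century, 2))  # ~0.71 deg C per century -- under 1 C/century
```

So the commenter's "less than a degree per century" follows directly from the two quoted numbers, whatever one thinks of attributing the 0.55 degrees to the AMO in the first place.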
Interesting to note that the discrepancies between HadCRU and Berkeley are bigger than the statistical uncertainty interval. Why is there no comparison to UAH (for the period ’79–now)? Would this not make sense? Local weather wonk Paul Douglas… completely AGW bought off, “interprets” Muller as a “skeptic who now believes”. (Amen Brothers and Sisters and Hallelujah! We have a convert…) Of course this is NOT the case at all. Muller has never been a “skeptic” in the sense of the rising of temperatures… shall we say, since the early 1800’s. This is, again, the “control” issue of “controlling the language” and defining the debate “as WE tell you WE will!” That’s why I’m happy that Anthony has made it clear that the argument over whether the “climate is changing” (it is, always has been, always will), or the SURFACE TEMPERATURES ARE TRENDING UPWARD (they may be… I have some doubts as to the significance of certain measures; I personally find the fact that the RURAL temps in the USA are pretty much DEAD LEVEL over the last 70 years of GOOD data…), or that the Arctic ice is diminishing (while the Antarctic is going up!)… is not an argument. The central question is the influence of CO2 on the atmospheric HEAT balance. (Again, the masses have a terrible time differentiating between “ENERGY” and “TEMPERATURE”, “HEAT” and “WARMING”!) I cannot help but think of a highway warning sign I saw recently: “TRAFFIC FROM THE LEFT DOES NOT STOP.” Yes, that is true. To Glenn Tamblyn! You are 100% correct when you address issues on averaging temperature data. Even well-known sceptics jump happily into this trap of “validating” GHCN temperature data and the like by averaging blindly exactly as they are supposed to do, and then afterward can be quoted: see ex1a,b,c … to ex4. And Glenn, I’m happy that I’m not the only sceptic that’s aware of this essential problem.
We are being fooled like complete [I snip this myself] by a wolf dressed like a math teacher 🙂 and end up not seeing the banal overwhelming issues in temperature data. SELECTING temperature data and periods for these is a MORE effective way to control the resulting temperature trend than adjusting – and then it’s “clean”, you did not adjust…! Just average so that warm trends occur where there were none in the real world. K.R. Frank JohnWho: He’s saying that out-and-out deniers are as bad as AGW-exaggerators (e.g. Gore) – they don’t help the debate – as Monckton says, the actual debate is not whether there is Global Warming, or even if human burning of fossil fuels is a contributing factor – the actual debate is how much warming we will get from a doubling of CO2, the uncertainty coming from the total contribution of the feedbacks. Muller even states that if the cloud cover increases by 2% when the CO2 is doubled, this will nullify the effects of AGW… Here’s another video where Muller gives his thoughts on CO2 emissions. To me, Muller seems to be a scientist who is not afraid to state his opinion, and not afraid to point out flaws in the science behind GW, or even point out flaws with the IPCC. Couldn’t all the disagreement with BEST and Dr. Muller have been kept private with them rather than posting on the WUWT blog? That would have attracted far less attention and wouldn’t have publicly detracted from Dr. Muller and the BEST team’s reputation as much. If the results changed following peer review, then one could write an op-ed or get a retraction from the Economist or other news outlets, etc. That would seem to be the quiet professional’s route, imo. Not so. The title of the paper is all you need to know that what you state is incorrect. “Decadal” means that they removed the long-term trend and only looked at “Decadal Variations” in the residual time series.
Meaning 2–15 year variations, as they state in the last sentence of their abstract: “Variations in the flow of the Atlantic Meridional Overturning Circulation may be responsible for some of the 2–15 year variability observed in global land temperatures.” I don’t get this. For the past two years, skeptics were claiming that the Climategate faux “scandal” and other alleged malfeasances proved a conspiracy of scientists and faked or manipulated data to exaggerate, perhaps even fabricate, the warming trend of the last half century. How can both of these contradictory positions be true at the same time? The mistake was in thinking that a U.C. Berkeley effort of this importance and this visibility would be politically neutral. Imagine what pressure BEST is getting from virtually everyone else at Berkeley to get “the right answer.” People respond logically to the incentive and feedback system they find themselves in. No external incentive or feedback will be enough to counter the one-sided pressure from within Berkeley. Steve Piet says: October 22, 2011 at 7:59 am “The mistake was in thinking that a U.C. Berkeley effort of this importance and this visibility would be politically neutral. Imagine what pressure BEST is getting from virtually everyone else at Berkeley to get “the right answer.”” ____________________ Well, if that is the case, then you can run their code when they put it on the website. As I understand it, all the work (and maybe the source code I heard) will be put on the website. Or, you could develop your own code, analyze the data, and then publish a peer-reviewed rebuttal to the BEST team’s results if they are biased to get the “right answer”. Slightly amending Legatus on the Keenan thread False Flag – Ally, Neutralize, and!” Legatus says that these are standard Communist tactics, and that Berkeley is a Marxist bastion.
peter stone says: October 22, 2011 at 7:53 am “How can both of these contradictory positions be true at the same time?” __________________________ That brings up a good point. I guess the datasets themselves are vindicated from accusations of manipulation now? peter stone says: “I don’t get this.” Apparently you don’t. BEST is nothing but alarmist propaganda, intended to reinforce the current narrative. There is no skeptical scientist as co-chair of BEST, and the rest of the BEST propagandist team are all climate alarmists. So do you actually believe their story? And if so, why? With no spokesmen from the scientific skeptics’ camp, you are just being spoon-fed alarmist propaganda. Please tell us you’re smarter than that. peter stone says: October 22, 2011 at 7:53 am. I guess you missed my post above which ends with the following paragraph: Smokey says: October 22, 2011 at 8:11 am “BEST is nothing but alarmist propaganda, intended to reinforce the current narrative.” ____________________________ Is there any evidence to show that their funding sources were aiming to put pressure on them, or that there was any political pressure put on the group? Weren’t Dr. Muller and Dr. Curry anything but climate alarmists? Who on the team specifically are climate alarmists that would make them untrustworthy? If you don’t like their results, you can use the same base data and come up with your own code when the stuff is published. Goodness gracious… “the movement” is back in full swing. They only need to prove warming because everyone already believes that only human activity can cause warming. NotTheAussiePhilM says: October 22, 2011 at 7:31 am “To me, Muller seems to be a scientist who is not afraid to state his opinion, and not afraid to point out flaws in the science behind GW, or even point out flaws with the IPCC.” In the corporate world, people like Muller quickly earn the label: LCOD (loose cannon on deck).
You can get away with that nonsense in academia, where 39 is the last year of adolescence. Some have speculated that Muller is behaving as he does to draw attention to his geo-engineering firm. Apparently, he is unaware that his behavior is the kiss of death for corporate types. Maybe he is angling for a Nobel Prize. It appears that BEST has posted their monthly Land Temperature data in a text file (which covers the full dataset) at: So, here is the Monthly Land Temperature chart and the 12-Month Moving Average going back to 1800, which you have not seen as yet. It is highly variable. Here is the 12-month Moving Average and the 5-year Moving Average. A couple of notes: the data ends in May 2010. The moving averages were appropriately cut off at the end of the chart (but not at the beginning, which signals they were using data prior to 1800 to be able to maintain the moving averages in January 1800). April 2010 has an anomaly of -1.035C (which carries through the moving average values so is not a typo). The overall trend over the whole period is 0.059C per decade. Starting in 1976, it is 0.28C per decade. Starting in 2000, it is 0.176C per decade. Starting in 2005, it is down -0.68C per decade. [two of the three sites referenced are blocked. . . are you aware of this?] @Bill Illis 8:22 Bill Thanks. There’s a delicious error in assumption here on the part of BEST in using such old data, pre-1800; note none of the other metrics use such old data, and there is a very, very good reason. I’ll have a post on it later. – Anthony I am not clear as to why they are dismissing the UHI effect on the temperature record. This appears to be a relevant factor affecting the data. Perhaps someone can put it in terms we can all understand. Or are they just claiming it is irrelevant without explanation? It is quite clear to me that the surfacestations project is showing that the data is compromised at acquisition. This stinks to high heaven.
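For readers who want to reproduce per-decade trend figures like the ones quoted above from the posted text file, a generic least-squares slope is the usual tool. The sketch below is a standard OLS fit (not necessarily the exact method the commenter used), demonstrated on synthetic data with a known trend:

```python
# Minimal OLS-slope sketch of the "trend per decade" figures quoted above:
# fit a straight line to anomalies against time and scale the slope to
# degrees per decade. The choice of start year matters a great deal.
def trend_per_decade(times, anoms):
    """Ordinary least-squares slope of anoms vs times, in degrees per decade."""
    n = len(times)
    mt, ma = sum(times) / n, sum(anoms) / n
    num = sum((t - mt) * (a - ma) for t, a in zip(times, anoms))
    den = sum((t - mt) ** 2 for t in times)
    return num / den * 10.0   # slope is per year; x10 gives per decade

# Synthetic monthly series with a built-in 0.02 C/yr trend and no noise.
times = [2000 + m / 12 for m in range(120)]
anoms = [0.02 * (t - 2000) for t in times]
print(round(trend_per_decade(times, anoms), 3))   # 0.2 C/decade, as built in
```

Run against the real file, restricting `times` to different start years (1976, 2000, 2005) is what produces the very different per-decade figures quoted in the comment.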
Elmer says: October 22, 2011 at 6:55 am Very well said. In addition, there are two egregious errors in the Muller analysis. The first is that one can draw lines among the stations and label them Rural, Very Rural, and so on, as if the labels applied to static groups. Urban sprawl grows all the time and, for that reason, UHI effects must be treated as dynamic and local. They need to work with something like Rate of Change From Urban to Rural and similar rates. The second egregious error is their claim that UHI has a one-time effect on a weather station, bumping it up by a constant number. That is ridiculous and a typical example of Warmista loathing for experience. UHI comes incrementally and has its effects for decades. Lucy Skywalker says: October 22, 2011 at 8:09 am Great post, M’lady !! Bill Illis says: October 22, 2011 at 8:22 am …. [two of the three sites referenced are blocked. . . are you aware of this?] ————————- My charts are not making it through from imageshack? That is the point of the post. Anyone else not getting them? @ Bill Illis 8:51 Whatever issue that poster has, it is within his own PC/network or the paranoiaware installed on his PC; the links are fine – Anthony [snip] Funny the comments that get through the moderation policy, isn’t it? And yet Anthony will use the excuse of a fake email address in order not to publish dissenting opinions. [Reply: Use a legitimate email address and there’s no problem. Suggest you read the site Policy. ~dbs, mod.] This post is all about back-pedaling, going back on your own words and being unable to admit you are utterly wrong. Add all that up and try to find a word to describe it and one quickly comes to mind: [snip]. Best Science Blog? In the comedy category perhaps. REPLY: Well if you believe what you say, have the courage to stand behind your words by putting your name to it as I do, otherwise shut up. As for the offending comment, I didn’t see it as I don’t approve all comments. But it is removed now.
Will you argue equally vociferously to have the post removed where I’m accused of raping farm animals for daring to ask for a correction in a libelous article, or does that sort of thing seem acceptable to you? – Anthony Glen Tamblyn: Do you think that looking at a few stations provides any relevant information? Well, yes, as a matter of fact I do. On-site developments can swamp larger scale trends, from global all the way down to UHIs. Observed anomaly trends can come from any level; the immediate vicinity is surely relevant. Or perhaps you think that not looking provides some relevant information? : > ) This might be picking at nits, but I question the extent to which bias is absent from the BEST group’s methods and results when they saw fit to put this sentence in the “Berkeley Earth Temperature Averaging Process” paper: “As described in the preceding section, the existing global temperature analysis groups use a variety of well-motivated algorithms to generate a history of global temperature change.” Why include “well-motivated”? Why do the temperature anomalies look to be higher now than in the 1930s? I don’t think you need to worry about “pal review”. The papers are being reviewed all over the world as we write. When the published versions appear, everyone who wants to will be able to see the differences between the submitted and published versions. UHI should be called thermal emissions, as the use of the word island suggests that heat goes directly into space and there is no greenhouse effect at all. These numbers, if you use watts/m2 for certain countries, are very significant in theory. I worked them out for some countries based on energy use, counting 70% as thermal emissions, for 2009, as follows: USA 0.20 watts/m2, China 0.19 watts/m2, France 0.38 watts/m2, Germany 0.74 watts/m2, United Kingdom 0.75 watts/m2. Another point: the graph shown above ends in 2006 and is a 10 year moving average, so as not to show relatively stable temperatures for the last 12 years or so.
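The country-level watts/m2 figures quoted above follow from simple arithmetic: take annual primary energy use, assume 70% of it ends up as thermal emissions, and spread the resulting average power over the country’s land area. A sketch with round illustrative inputs; the energy-use and land-area numbers below are my own rough assumptions, not the commenter’s actual 2009 inputs:

```python
# Sketch: waste-heat flux = 0.7 * annual primary energy / (seconds per year * land area).
# The energy-use and land-area figures below are rough illustrative assumptions.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def waste_heat_flux(energy_ej_per_year, land_area_km2, thermal_fraction=0.7):
    """Average anthropogenic heat flux in W/m^2 over a country's land area."""
    watts = thermal_fraction * energy_ej_per_year * 1e18 / SECONDS_PER_YEAR
    return watts / (land_area_km2 * 1e6)

countries = {  # (primary energy, EJ/yr), (land area, km^2) -- assumed values
    "USA":            (95.0, 9.83e6),
    "Germany":        (13.5, 3.57e5),
    "United Kingdom": (9.0,  2.44e5),
}
for name, (ej, km2) in countries.items():
    print(f"{name}: {waste_heat_flux(ej, km2):.2f} W/m^2")
```

With these assumed inputs the sketch lands in the same ballpark as the quoted figures: small dense countries come out several times higher per square metre than the USA, which is the commenter’s point.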
Matt says: October 22, 2011 at 1:01 am Utter and total BS. Peer Review serves the journal editor and no one else. Authors are entirely welcome to circulate articles to colleagues apart from peer review. But leaking information about your work to the press when it is in peer review earns you the title Loose Cannon on Deck. Muller seems to have earned that title more than once. EFS_Junior says: October 22, 2011 at 12:43 am “So now we’re back on the Conspiracy Train? That didn’t take too long, now did it?” Judith Curry calls for disbanding the IPCC. Is she on the conspiracy train? You do not have to believe in a conspiracy to believe that a group of people are behaving as if they were conspirators. It is called GroupThink. Anthony, accepting your applied caveats, do you accept the BEST results? Would you be any more accepting if a follow-up analysis limited itself to your preferred 30-year period? Equally, will you be as resolute in your criticisms of skeptic ‘revelations’ that get pumped up into the mainstream, sans peer review? @ peetee, are you unable to read and comprehend this post? All your questions about BEST acceptance and time periods are answered. You’re just looking for a money quote so you can post it elsewhere. Sorry chump, seen your kind too many times. Go fish. The issue with peer review is that they told me (when they asked for my data) that they’d be going through the standard peer review process. No mention of a media blitzkrieg. If I had known they were going to do that, I would not have agreed to share my data. A lie of omission is still a lie. In case no one else is getting the charts, this is a different link to the BEST monthly anomaly. I’m also starting to wonder how NCDC, GISS and Crutemp can come up with such stable Land temperature data month-to-month when BEST, using 39,000 datapoints, has such high variability from month-to-month.
BEST temperatures vary from Crutemp3 by +/- 0.5C and from the NOAA by +/- 0.4C on a consistent basis month to month. They also have a higher trend than either dataset. (It also looks like BEST has an error in their database for April, 2010, which should be +1.035C rather than -1.035C – it is such an outlier compared to the trend and to other datasets – and that means all their moving averages have to be recalculated as well.) If the 1/3 of the stations that are showing cooling are all very-rural, it could be argued that mankind is saving the planet by burning fossil fuels. Elmer says: October 22, 2011 at 9:38 am “If the 1/3 of the stations that are showing cooling are all very-rural it could be argued that mankind is saving the planet by burning fossil fuels.” The supreme arrogance that isolates Warmista from all common experience also makes their language meaningless. For example, Warmista might claim that Rural stations show the most warming of all and, in addition, that fact shows that the effects of UHI are not important. What they do not tell you is that the Rural stations show the most warming because they are well into the process of becoming Urban stations. The same holds for Very-Rural stations that are in the process of becoming Rural stations. Their language is designed to defeat attempts to use empirical observations in criticism of their conclusions. The error bars on the reconstruction seem implausibly tight for an honest ‘instrumental’ approach instead of a ‘proxy’ approach. Saying that a thermometer in a field is representative of the true energy content of the meter of surface air across the entire gridcell is an assumption, and the stated error bars are ludicrous. It’s fine when you call it a proxy – “Hey, this is all we have.” But “instrumental” is claiming some skill at -actually- measuring the quantity of interest. And calculating and propagating errors from actual calibrations of instruments.
These instruments are generally calibrated as -point-source-observers-. I have never yet seen one calibrated as a -gridcell- observer. Coming at this a different way: Pick a tiny gridcell that’s -far- smaller than those typical of these reconstructions: 10km x 10km. From where I sit, that would include a long windy ridgeline, a couple of hills, two streams and a large flat area near a lake. Even correcting for elevation differences, these locations do -not- have the same temperatures – a three-degree temperature difference exceeds the ‘0.1C’ error limit on the physical thermometers by more than an order of magnitude. “CRN1” siting and instrumentation at these locations wouldn’t change that. The “microsite” issues and the (separate) UHI issue are -not- the same problem as converting a point measurement into an estimate for a distributed quantity. otter17 says: October 22, 2011 at 8:19 am “If you don’t like their results, you can use the same base data and come up with your own code when the stuff is published.” Septic Matthew says: October 22, 2011 at 9:16 am “I don’t think you need to worry about “pal review”. The papers are being reviewed all over the world as we write. When the published versions appear, everyone who wants to will be able to see the differences between the submitted and published versions.” In the meantime, Muller and friends are happily spreading claims that might prove to be false in a matter of days when peer review is finished. They have succeeded in the media handsomely. You do not understand peer review. It serves journal editors and has no other function whatsoever. You are confusing peer review and the general discussion of new papers. Muller himself seems to accept that the surface temperature record as a whole is suspect when he says “it is nonetheless possible to find long time series with both positive and negative trends from all portions of the United States.” “The Earth is warmer than it was 100-150 years ago.
But that was never in contention – it is a straw man argument. The magnitude and causes are what skeptics question.” Yes. Yes, it was…including by you. Stop trying to whitewash history, and you might start getting some of the respect you think you’ve earned. UC Berkeley got a 500 million dollar grant for green energy from British Petroleum. BP is owned by the English Crown/Rothschilds banking consortium. The Koch brothers got ‘outbid’. UC Berkeley get most of their research money from the Government, and UC Berkeley is once again out pushing their massively flawed nuclear reactor technology to the world, among other scams they have going, funded by the taxpayers. It is estimated the carbon trading market is worth 2 trillion a year if they can make it ‘stick’ – CO2 taxes – with falsified, bad science. Prince Charles, Sir Albert Gore, Baron Rothschild, etc. are the principals in setting up the trading exchange for Carbon, to be based out of London, and have hedge funds to invest in green energy as well. Prince Charles is sort of the behind-the-scenes mover on ‘green’ energy (and we all see how ‘green’ Fukushima is), and Al Gore is the notorious ‘science’ front man even though he was a liberal arts major and a C law school student/graduate. England’s North Sea is in decline and all of England is finance based or service based around finance. The English invented the CDS, which is currently being used to collapse/manipulate global markets, and that market is coming under strict regulation in the USA and Europe. This global warming/carbon tax is their ‘next’ act in England. And not only that: if they can levy carbon trading and carbon taxes globally, many of Al Gore’s (Prince Charles’ 12th cousin) and Rothschild’s ‘green’ energy firms will benefit as well. After Gore let his literal cousin, W. Bush, who is on the record admitting he is a relative of Prince Charles as well, like Al Gore, take the election in 2000 – and some say the Bush family bribed the supreme court to do so – Al Gore was put on the board of directors of Kleiner Perkins, the big Silicon Valley , by the City of London, as ‘compensation’, and got 1 dollar series A Google stock, which made him a billionaire and front man for the anthropogenic global warming hoax. All of this centers around an ethos the English Royals postulated by Disraeli: that the English Elites should oppress the middle class with heavy taxes and send that money down to the lower classes, and use that to eliminate competitors coming out of the middle class to challenge the English elites. It is one of the greatest hoaxes of the world by the English royals that their empire has been disbanded. The PM of England must kneel and swear an oath of loyalty to the English crown and kiss the Crown’s rings to take power. The English queen can dissolve parliament in New Zealand, Canada, Australia, and the UK at any time. The PM of the UK and Finance minister must meet weekly with the English crown’s privy council (bankers and house of lords types) to advise them and receive advice. The USA was ‘recaptured’ by traitorous elite families with ties to the English crown, like the Al Gore and Bush families, by the creation of the Federal Reserve Bank – see Dr. Murray Rothbard’s book, ‘The Case Against the Fed’, and his history of banking in the USA. The Fed is privately owned and its owners are all part of the City of London ‘Royal’ banking cartel. No one disputes that by virtue of Obama’s father being an English citizen at the time of Obama’s birth he is a dual English-USA citizen. Obama must know who has the ‘power’, as his first meeting was not in Canada, as is tradition for American presidents, but in England for private meetings with Prince Charles and his banking pals. Queen Elizabeth is retired, even though she has not told her people, and Prince Charles is heading the English monarchy day to day, and his father was head of the WWF, a highly political outfit for stealing indigenous land rights. It looks like the Queen and Prince Philip have an early onset of Alzheimer’s, as their memory is shot. The Queen has announced she is on her last world tour. English Commonwealth countries control the majority of the votes at the UN. This anthropogenic global warming hoax, like the peak oil theory hoax, has ties right to MI6 and the English crown. The English empire is still growing; they just took Libya back again. Qaddafi was put in power by the English, is a graduate of their military schools, and was partnered with Prince Andrew, Nat Rothschild and Tony Blair, and when he stopped cooperating with the Crown’s NWO agenda in Libya the English crown led the effort to war to remove him. The English created the modern nation of Libya and they have always been ‘in charge’ and in business with its various leaders since all that oil was discovered there. I used to work as a petroleum engineer for a large multinational; that is where you sort of pick up how the world really works. I’ve always favored LNG, clean coal, and solar over oil, and oil by far over nuclear power. I studied nuclear engineering at UC and thought it was a ‘scam’ to advance the nuclear power goals of the military. There is no way a nuclear power plant can be called ‘clean’ energy and no way any of the designs can be really fail safe or fool proof. When I was last in the oil industry, these same people in London were trying to blame ‘global’ cooling for the world’s ills. It seems every generation they change the story and no one remembers what lies London told the last generation. I do. Theo Goodwin says: October 22, 2011 at 9:55 am Still, if BEST can be proven wrong after the publication of their method/code or whatever, that would make Muller look all the more incorrect. Now, not to say that this pre-release is kosher in the climate science realm, but if Dr. Muller is used to doing it in the physics realm, why demonize him as an alarmist?
He may have made an honest mistake doing what he is used to doing for his physics results/publications. Theo Goodwin: You do not understand peer review. It serves journal editors and has no other function whatsoever. You are confusing peer review and the general discussion of new papers. Let me try again: When the published versions appear, everyone who wants to will be able to see the differences between the submitted and published versions. Here’s another reason I can’t buy the belated assertion the skeptics “knew all along” the temperature reconstructions were robust and credible, and that they merely have problems with attribution. This belated assertion is contradicted by what has been said routinely – and recently – on skeptics’ blogs. AW himself said this in March on his own blog. He stated he didn’t know if the Berkeley results would show warming or cooling. This clearly suggests uncertainty about the trend of global surface temperature. And he implied the NOAA and GISS temperature records were all mucked up…. “madness” was his word. How, therefore, can it be true that skeptics “knew all along” that the temperature records were credible and robust – as now confirmed by a prominent Berkeley team that was sympathetic to skeptics; indeed had their support? ********************************************************************************************************************** AW on the Berkeley Project: “But here’s the thing: I have no certainty nor expectations in the results. Like them, I have no idea whether it will show more warming, about the same, no change, or cooling in the land surface temperature record they are analyzing….. However, I can say that having examined the method, on the surface it seems to be a novel approach that handles many of the issues that have been raised.” “And, I’m prepared to accept whatever result they produce, even if it proves my premise wrong. I’m taking this bold step because the method has promise. …..
If any of those advocating extreme dangerous warming would like to discuss the paleoclimatic record and current observations rather than name calling, I am more than willing. I would be curious: “What is the scientific case that the judges will be asked to decide on?” Milankovitch’s theory does not explain the paleoclimatic record. The paleoclimatic record shows evidence of a massive serial pseudo-cyclic forcing function. It is this unknown mechanism that drives the glacial/interglacial cycle. With Milankovitch’s mechanism and assumed positive amplification, planetary temperature should cyclically track insolation at the critical 60 degrees North. That is not observed. Interglacial periods end abruptly, not gradually. There are at least six fundamental observations that cannot be explained by Milankovitch’s theory. Detailed analysis of top of atmosphere radiation balance in response to ocean temperature changes shows the planet’s feedback response to a change in forcing is negative, not positive. The planet resists temperature changes rather than amplifies them. The glacial/interglacial cycle is caused by what causes the abrupt climate change events in the paleoclimatic record. 1) 100,000-year problem 2) 400,000-year problem 3) Stage 5 problem 4) Effect exceeds cause 5) The transition problem 6) Identifying dominant factor. I have a serious question and maybe someone could help me. If BEST claims 1/3 of the stations show cooling and 2/3 warming, have they included the mass elimination of sites by 2/3, like Ross McKitrick shows here? In original from GISS here (Fig. 2 at the top). I’m just wondering about the “1/3” and “2/3” coincidence. Dr. Muller’s job was to provide the results for which Charles and David Koch so well paid him. Dr. Muller’s job was NOT – I repeat NOT – to think for himself. Until we Americans learn to follow our job creators without question, we will never truly be a free nation. Global warming does not exist. Climate change does not exist.
Only government impedes right and just firms from acquiring the rich amounts of oil and natural gas that lie inches beneath the ground all over the United States. “peter stone says: October 22, 2011 at 7:53 am I don’t get this. How can both of these contradictory positions be true at the same time?” The beliefs of people that do not believe in CAGW vary. However, I do not see any contradiction in believing we are warming due to natural causes, since we are coming out of the LIA, while at the same time disagreeing with the huge effect man-made CO2 allegedly has. “For the past two years, skeptics were claiming that the Climategate faux “scandal” and other alleged malfeasances proved a conspiracy of scientists and faked or manipulated data to exaggerate, perhaps even fabricate, the warming trend of the last half century.” This does not necessarily reflect on BEST or HADCRUT, but may be a reflection of the hockey stick that did away with the MWP, thereby exaggerating the “warming trend of the last half century” relative to 1000 AD. Otter17: That brings up a good point. I guess the datasets themselves are vindicated from accusations of manipulation now? ********************************************************************************************************** I would say so. If the Berkeley project confirms and replicates that the HadCrut, NASA and NOAA temperature reconstructions are scientifically credible, then I presume the AEU scientists, Phil Jones, and Michael Mann are owed apologies. Their temperature reconstructions are consistent with the reconstruction the Berkeley team did. And the Berkeley team had the blessings of prominent skeptics. BTW, regarding the claim that skeptics “knew all along” that the temperature reconstructions were showing global warming, and that their only objection was with regard to attribution, here’s another gem. This recent “knew all along” assertion, sadly, just isn’t cutting it.
************************************************************************************************************* stone’s reading comprehension appears to be almost non-existent. He seems to believe that “significant” means zero. The planet has naturally warmed over the past century and a half, from 288K to 288.8K. That is hardly significant. The planet has been warming along the same trend line since the LIA. There has been no acceleration in the warming trend, despite a ≈40% increase in [harmless, beneficial] CO2. What does that tell you? Perhaps we need an FOIA request to Berkeley asking for any emails between members of the BEST team and the IPCC, or any representatives of the IPCC, concerning inclusion of these papers in the next IPCC report. peter stone says: October 22, 2011 at 11:49 am You continue to miss the point that BEST pulled a Bait and Switch using a 60 year record instead of Anthony’s 30 year record. They did not address the station siting issue, which is the topic that Anthony was talking about. The Earth is warmer than it was 100-150 years ago. But that was never in contention – it is a straw man argument. The magnitude and causes are what skeptics question. ———- Well, I have seen lots of commenters say that there has been no warming at all. But of course that is no indication of the proportion of climate skeptics who do actually believe that. They are just the noisy ones. I am of the view that the many attempts to discredit both these temperature series and the scientists who produced them were based on this belief. I think it’s time to find out what proportion of climate skeptics do actually believe the world has been warming over the 200 year instrumental record period. I think one of Anthony’s web surveys is the best approach. Multiple choice of course. Something like: 1. Are you a climate skeptic yes/no 2. What temperature unit do you prefer C/F 3. Do you believe the temperature fall/rise over the last 200 years has been: -1.0/-0.75……..
0.75/1.0 degrees Celsius (unit changes depending on the answer to the previous question) 4. What country do you live in? The Australian Bureau of Meteorology has a set of “high quality sites”, some of which are classified as urban. “Urban sites have some urban influence during part or all of their record, hence are excluded from the annual temperature analyses.” Unfortunately, I could not find a list of the “high quality sites”, only a map with them marked on it, so it would be quite time-consuming to find out just which these stations all are. I went through just the ones in Western Australia as that is the largest state by area. Here are the ones not classified as urban, ie. the ones that are not excluded from the annual temperature analyses: Derby Aero 3032 Broome Airport 3003 Halls Creek Airport 2012 Port Hedland Airport 4032 Roebourne 4035 Marble Bar Comparison 4020 closed 2006 Newman Aero 7076 Carnarvon Airport 6011 Giles Meteorological Office 13017 Meekatharra Airport 7045 Geraldton Airport 8051 Kalgoorlie-Boulder Airport 12038 Southern Cross 12074 Ceased temp. obs. 2007 Merredin 10092 Kellerberrin 10073 York 10311 Rottnest Island 9193 Wandering 10917 Cape Naturaliste 9519 Jarrahwood 9842 Bridgetown Post Office 9510 Katanning Comparison 10579 Esperance 9789 Cape Leeuwin 9518 Albany Airport 9741 28 stations. 10 at airports. 1 at a Post Office (Post Offices are typically near the centre of a town). 2 now closed. Not very encouraging. And it is likely that many of the others would be decidedly non-rural. For example Katanning Comparison 10579 – here is a Google Map centred on the given location of the site: (downloaded in January 2011). How on earth can BOM claim that the airports, the post office, and stations like Katanning 10579 do not have “urban influence during part or all of their record”? PS. All the Google maps I downloaded in Jan 2011 are in here: http:\\members.westnet.com.au\jonas1\RSelectedStationsGoogleMaps.pdf (large file 10.7mb).
8 of them are in the above list (3003, 4020, 12038, 12074, 10073, 9510, 10579, 9518). There were only 3 of these which could be classified as rural (4020, 10073, 9518). PPS. I don’t know whether these BOM sites were used by BEST, but it does seem likely. LazyTeenager says: October 22, 2011 at 6:04 pm [survey] In the abstract a good idea. But before starting, you have to define what “warming” is. I am, for example, not the one who disputes a little temperature increase over the last 200 years, but I rather prefer to call it a little fluctuation. It’s the alarmists who seem not to be able to look over the rim of a tea cup. For most of them, it even seems, Earth hasn’t existed way back before those 200 years. For example, if you take the Vostok ice core temperature proxy, you’ll notice a trend of -0.3°C from the beginning of the holocene period until now. It’s merely a point of view, and in the alarmists’ case, of a certain (cherrypicked) period of 200 years of steady incline. The 200 years before that 200 years were warmer and declining. Overall however, there was not much change in temperature. That’s why I can’t understand that whole anxiety. It’s simply mother nature at work. Typo in my last comment – 25 stations not 28. Mike Jonas says: October 22, 2011 at 6:26 pm Mike, you can freely download the GHCN v3 data here (ghcnm.tavg.latest.qca.tar.gz) and look for your stations. For Katanning WMO #94629: 50194629000 -33.6800 117.5500 311.0 KATANNING 324R -9HIxxno-9x-9WARM FOR./FIELD B It is a (R)ural station, and it seems they don’t know the population. According to that data, it is not at an airport, the surrounding vegetation type is WARM FOR./FIELD, and it is suburban by satellite night lights. If that doesn’t match reality, feel free to contribute all abnormal stations to Peter O’Neill’s project. He would appreciate any helping hand. It’s the chicanery. The AGW crowd never seems to be able to conduct everything above board and let the science lead the discussion.
They just can’t let go of the tricks. Makes me willing to double down on spreading the properly skeptical message. We simply can’t have those who know what’s best for us running the show. @Rhys Jaggar: Rhys, I would assume that all processing happens after discontinuities in station records have been considered. Either because such a discontinuity has been identified and perhaps some correction applied to account for, say, a change in altitude of the station. Or the station was rejected from the record because of the discontinuity. Or because the possible discontinuity has not been noticed and made its way into the record. Of your 3 examples, 1 & 2 are things you would hopefully find and resolve in some way. Case 3, rapid climate change AT THAT STATION, is something we would want to include. If that station saw a rapid real climate change then that is something we want to include in the analysis. As to stress testing the data, a number of individuals on the net such as Tamino have done just that, taking the unadjusted data from GHCN and the adjusted data and comparing the results. No significant difference. This post at SkS has links to quite a range of independent studies looking at temp records. A range of people have looked at aspects of this question quite independently. Follow the links to see what others say. To your alternative possible scenarios: 1. ‘That it has little if any effect’. This is what I would expect, because I would expect the various impacts of all sorts of faults in the data record to be at least substantially random and thus tend to cancel out. 2. ‘It has a significant effect in explaining away the increase in the “global temperature index”’. This seems very unlikely to me for 2 reasons. First, for the reason I gave in case 1. And secondly, the ‘global temperature index’ is land and ocean, and 70% of the earth is ocean, so any problems with land based data only impact 30% of the index anyway. 3.
For the same reasons, I wouldn’t expect that any ‘issues’ would add a cooling bias either. Oops, copy & paste error. Ignore the last 2 lines. Often, rural sites attract more development funds than urban sites because people prefer them for work and home life. Likewise, developmental changes that result in lower temperatures would also average the same over time for rural and urban sites, so one would indeed find some urban sites with short term cooling outcomes until other subsequent warming developments offset it once more. So the problem here is the false distinction between rural and urban. That distinction matters not a jot, because what matters most is the actual changes that occur at each site, whether urban or rural, and the splitting of the sites into those two groups does nothing to separate out sites that are affected by incremental (rather than absolute) development from sites that are not affected. Anthony, I will take you up on your advice to ‘go fish’… given BEST, why, given Fall et al, is the assertion any more credible? Just sayin’. A. Watts 2010: “Instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and uni-directionally tampered with that it cannot be credibly asserted there has been any significant “global warming” in the 20th century.” Theo, One of the four papers is devoted to that subject: Earth Atmospheric Land Surface Temperature and Station Quality in the United States.
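The “random faults tend to cancel out” argument a few comments up can be illustrated numerically: if station errors are independent and zero-mean, the error of the network average shrinks roughly as 1/sqrt(N). A purely synthetic sketch, not BEST’s actual method; every number in it is an illustrative assumption:

```python
# Sketch: independent zero-mean station errors shrink in the network mean
# roughly as 1/sqrt(N). Purely synthetic; illustrates the argument, not BEST.
import random

random.seed(42)
TRUE_ANOMALY = 0.5  # degrees C: the signal every station is trying to measure

def network_mean_error(n_stations, error_sd=1.0, trials=2000):
    """RMS error of the n-station average across many random trials."""
    sq = 0.0
    for _ in range(trials):
        mean = sum(TRUE_ANOMALY + random.gauss(0, error_sd)
                   for _ in range(n_stations)) / n_stations
        sq += (mean - TRUE_ANOMALY) ** 2
    return (sq / trials) ** 0.5

for n in (10, 100, 1000):
    print(n, round(network_mean_error(n), 3))  # approaches error_sd / sqrt(n)
```

The caveat, which is the skeptics’ whole point in this thread, is that this cancellation only holds for errors that really are independent and zero-mean; a shared bias such as creeping urbanization does not shrink with N at all.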
You cannot criticise BEST for not observing normal peer review process when most of what passes for science in contrarian quarters is never peer reviewed and never published in reputable, specialist, scientific journals (i.e. Energy and Environment, Nature, and Science do not count). Can you honestly say that you can look at that portion of the BEST instrumental record since 1960 and not be in any way concerned about the clear accelerating warming trend? Rather than thrashing around like fish out of water trying to prove some causal link between this and sunspots, solar flares, cosmic rays, volcanoes, water vapour, etc., etc., why can you people not accept that the most obvious explanation is the right one? I’ll give you a clue: Peter Jacques et al (2008) ‘The organisation of denial: Conservative think tanks and environmental scepticism’, in Environmental Politics volume 17(3), provides the answer. Dr Muller may not yet have admitted that he was wrong to accuse MBH98 and the Hockey Stick Graph of being a fraud (i.e. “hide the decline” referred to the removal of post-1960 tree ring data that declined when the instrumental record showed temperatures to be rising), but at least he has now accepted that global warming is accelerating, not slowing down. He has therefore knocked over the first of what I will call, for the purposes of passing moderation on this website, The Six Pillars of Climate “Contrarianism”: 1. Global warming is not happening. 2. Global warming is not man-made. 3. Global warming is not significant. 4. Global warming is not necessarily bad. 5. Global warming is not a problem. 6. Global warming is not worth fixing. (See Henson, 2007, p.257). Anthropogenic climate change is happening; it is not a false alarm, scam, or hoax. It is not anti-Western, anti-Capitalist, anti-progress, or anti-human; and it is not a UN-WMO-IPCC conspiracy to install worldwide socialist government.
However, it is a consequence of unrestrained and ill-considered fossil fuel consumption and of unsustainable development. As such, delaying doing something about it is utterly short-sighted and counter-productive. In the long run, the later we try to tackle the problem, the harder it will be to fix. It is perfectly analogous to getting into debt and not making any attempt to pay off your creditors. The sooner you people accept this, the better it will be for all of the Earth’s inhabitants. REPLY: I went through peer review with Fall et al before announcing the final results of our siting analysis. So did O’Donnell et al with refuting Steig’s Antarctic warming statistical fabrication. Both papers were published in reputable journals, so your demeaning claim about skeptics not publishing in “reputable, specialist, scientific journals” is falsified. Why didn’t BEST put peer review before PR? Simple question. They told me when I visited that they would “do it by the book”, which is why I embraced it. Then they threw out the book in favor of a PR blitz. You seem OK with that, and I think that says more about you than it does me, especially when you complain about skeptics publishing. As for being concerned about the trend since 1960, have a look at the trend since 1800; since BEST went further back than any of the global temperature metrics, they have made a unique window. How would the trend from 1800 to 1900 be so strong prior to the CO2 forcing everyone is looking at? It is very important to note that BEST ascribes NO CAUSE to the trend, and specifically avoids the AGW question in their papers. Also, why does the data for land diverge from the oceans if UHI and other land based human influences have no effect?
– Anthony, peetee, and all recipients of grants for Obama’s “green jobs training program”, of course including those already occupying these transformational “green” jobs but paid for by the other usual sources: I, too, hope that the climate system is warming, I really do, because the alternatives place us in a position closer to global cooling! But to support my wishes, I’m not going to merely repeat the same faux science and “latest, same as the earliest” unhinged propaganda tactics and memes intended to deliver the world’s populace into the maw of the same kind of greedy, looting and controlling Totalitarian throwbacks who are apparently paying you for your “green job” role in the repetition of this “latest” meme. Because their/your alleged cure is obviously worse than your alleged disease, and when it comes to further evaluating the climate according to this “alleged disease-alleged cure” metric where the rubber meets the road, I think it’s pretty clear by now that “2.5 billion Chinese and Indians can’t be wrong”. In other words, I prefer real scientific “credibility” and “significance” to support my desire for the world’s climate system to be warming, and thus to possibly wake up someday surrounded by Palm trees and Girls Gone Wild. So if you could please just get Muller and the BEST program to finally practice real scientific method and principle science instead, it would be of great help to me! Anthony, from your reply to Martin Lack above: “Also, why does the data for land diverge from the oceans if UHI and other land based human influences have no effect? – Anthony” LOL!!! Because land air temperature is EXPECTED to warm faster than ocean surface temps! This is Meteorology 101, Anthony! A. As the GH Effect increases due to AGW, more back radiation occurs. So land and ocean absorb more radiation, and thus could warm more. However, additional energy being absorbed by the ocean can be (and is!) transported into deeper water away from the surface.
In contrast, heat conduction down through the land is orders of magnitude smaller, so the land surface isn’t able to sequester away additional energy, and as a result this heat is available to heat the air over the land more. So if total radiation striking the Earth’s surface increases, land air temps must increase faster. B. One of the central aspects of the Climate/Weather systems is that energy is transported from tropical regions to higher latitudes via the major circulation patterns in the atmosphere – Hadley Cells etc. So if the total heat in the system is increasing due to AGW, this will tend to be concentrated more at higher latitudes. And more of the Earth’s land is at higher latitudes in the Northern Hemisphere. So we expect to see greater warming there. So a fairly obvious question comes to mind for me, Anthony. Did you start the whole surfacestations.org thing, discussions about UHI etc. because you thought greater warming over land than oceans NEEDED explaining? Great Caesar’s Ghost, Anthony! It is one thing to be critical of something – that is your right. But surely if you are going to find faults in something you need first to ADEQUATELY understand the thing you are critical of. Particularly if you are then going to set yourself up as a commentator on the subject. REPLY: Oh please, seriously? You are reading way too much into one offhand question dashed off to a troll, as if that question was the holy grail. There’s a whole bunch of stuff you left out yourself. Point is that my current data (which Muller does not have) shows UHI has an effect on trends, and I’m very close to proving it. Unlike Muller et al, I don’t blitzkrieg the press before the paper gets peer reviewed. Check back in a few months, and no, I’m not going to explain it to you now. – Anthony So, Anthony. That would be ‘I am in the habit of flinging off incorrect statements when I don’t like people who put me on the spot’. Troll indeed? How about someone simply calling you to task?
Radical thought Anthony: People who point out that you might be wrong about many things may not be ‘Trolls’. They may actually be offering you a way out of the corner you seem to have spent several years painting yourself into. But the regulars here, they just love it. “Keep on painting Anthony, we’re here in the corner with you man!” REPLY: I don’t need a way out. Like I said, check back when the paper is published, I look forward to your comments then. – Anthony [SNIP: The world wondered when TCO would reappear. If you want to be a contributor, fine. Snark like this will be snipped. Your choice. -REP] Given the zero or declining temps since 1900 at 600 continuous US sites shown in the guest post by Michael Palmer, University of Waterloo, Canada, that just went up, you might want to begin to perhaps possibly consider maybe tempering your de rigueur pro forma protestations that you accept that (significant) warming has occurred since the LIA. The author’s record shows a bump in the 1920s to 1940s, and another one recently, but that’s it, and the net change/trend is zero to negative. Anthony, in replying to my earlier comment, I am not sure what you think re-posting the graph achieves. However, if you are rebutting my focus on the steep incline on the graph since 1960 by questioning the less steep and oscillating part of the graph prior to that, then I am afraid I am bound to conclude that, as a former TV weatherman, you are now getting desperate to defend your “anything but CO2” hypothesis.
As I have said in response to Pat Frank and Smokey’s comments on your previous post, you people seem to lack (no pun intended) the ability or willingness to see what we are doing to our planet in its proper geological context: When everyone from the American Association of Petroleum Geologists to the Zoological Society of London agrees that anthropogenic climate change is happening, is serious, and needs to be minimised, I am afraid that you have to be a fantasist, conspiracist, or Supreme Being to believe that they are all wrong, or lying to you, and/or that you know better. P.S. Is everyone who challenges your belief system on this website automatically labelled as a “Troll”? REPLY: No, just condescending writers of denial handbooks that refer to others as “you people”. Sheesh. And please, as a book writer you must be getting desperate when you say I have an “anything but CO2” hypothesis. Citation required or retract. WUWT has plenty of articles on the effects of CO2. You’ll be in the troll bin (with extra moderation applied) unless you can provide a citation where I claim such a theory. I don’t generally extend full privilege to someone who puts words in my mouth that I have not said or written. – Anthony Is that the Stephen Wilde who is a fellow resident of Cheshire in the UK and a fully-qualified Solicitor; now world-famous for posting non-peer-reviewed critiques of conventional climate science on websites such as Climate Realists? As they say, Stephen, “don’t give up your day job…” Anthony, by putting “anything but CO2” in quotations I was not actually implying you have ever said this. I was (as I am sure you realise) just characterising your position as seeking to explain what is happening by any other means than accepting human activity as the main cause.
As for the remainder of your comments, I am not sure what you are on about, as I have not yet written any books (just an MA dissertation on “Climate Change Scepticism in the UK”), but thanks for crediting me as capable of such – I will take it as a compliment on my writing ability. Martin Lack is a clueless person who doesn’t understand the first thing about the null hypothesis, which falsifies the CAGW nonsense he and his deluded Believers believe. There is absolutely nothing unusual about today’s climate. Nothing. It is completely normal. Lack has to change his underwear several times a day because he’s scaring himself spitless over something that exists only in his fevered imagination. There is no evidence whatever for CAGW. None at all. Smokey, In your haste to call me “clueless” (yet again), you clearly did not read what I said: “When everyone from the American Association of Petroleum Geologists to the Zoological Society of London agrees that anthropogenic climate change is happening, is serious, and needs to be minimised… you have to be a fantasist, conspiracist, or Supreme Being to believe that they are all wrong, or lying to you, and/or that you know better.” (emphasis now added to clarify the point being made) No matter how much you may wish that a simple statement of fact such as this (and its implication) may not be true, it almost certainly is. Unless, that is, you have been personally informed by the Almighty that it is not. In which case, such a revelation would trump all of those received by Moses, Jacob, the witnesses to the Transfiguration of Jesus, and the apocalyptic vision of St John combined… Martin Lack, Your appeals to authority mean nothing. You’re just avoiding facing the fact that the null hypothesis has never been falsified, which means that the alternate hypothesis – CAGW – is falsified. Sorry to rain on your parade, but CAGW exists only in your imagination. Smokey, I can’t help it if you have a singular lack of imagination.
However, your demand that we falsify your null hypothesis (before it becomes worthwhile taking any mitigating action) does not seem to wash with the majority of members of just about every professional body there is; and it cuts no ice with the vast majority of peer-reviewed climate scientists. If I was a betting man, I know whom I would put my money on being right, and it would not be you… and neither would any reasonable jury find in your favour! All of this ignores the simple question, however, of exactly what evidence would convince you that your null hypothesis had been falsified? You are like a frog in a pan of water being heated on a stove; you will never jump out because the rate of temperature change is never great enough to be sufficient cause for alarm… I see that Martin Lack would love to consecrate himself as the jury, and he would, if he wasn’t so impotent. He sounds just like the climate alarmist Kevin Trenberth when he snivels and complains about the null hypothesis. To answer Lack’s question, it is very easy to falsify the null hypothesis: simply show where the pre-industrial parameters of the Holocene are currently being exceeded. Neither Lack nor Trenberth can provide any such evidence. Therefore, nothing unusual is happening. The climate is normal. That is why Trenberth demands that science do away with the null hypothesis; he knows that it falsifies his alternative CAGW hypothesis. Lack is no doubt a true believer in his own doomsday fantasies. But the science is proving him wrong. The truth is tough for him to swallow. But that’s the scientific method in action. Smokey, I appreciate that your horizons may be severely restricted but, it is still not clear to me whether you are just ignoring what I say, or simply incapable of taking it on board? Also, why do you keep referring to me in the third person? Are you trying to appeal to the audience to back you up? You are on the losing side of this argument.
Maybe not on this particular website but, you will lose, nonetheless. The only question is, will it be by 4, 5, or 6 degrees Celsius (by the end of the Century)? Quite literally, only time will tell… So, to get to the point, I know I am wasting my time quoting James Hansen et al to you but, irrespective of whether you accept their actual numbers, the key point I keep trying to impress upon you is that it is not the last few thousand years that is important; it is the last million years that matter (unless of course you believe the Earth to be flat and/or only 6,000 years old – in which case we have a much bigger problem)… Hansen et al (2008) – see especially Figure 2 on page 5 – point out that, in the context of the evolution of complex life on Earth, what we are now doing to the planet steps outside of the conditions that made the emergence of human beings possible. The fact that this does not concern you brings into question the effectiveness of evolution itself but, then again, amoebas are still here as well… Anthony – Exactly when are you going to address the point that, even as recently as January 2010, you yourself said, “Instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and uni-directionally tampered with that it cannot be credibly asserted there has been any significant ‘global warming’ in the 20th century…” D’Aleo, J. and Watts, A. (2010), published via SPPI without peer review (how could it have been otherwise?). REPLY: In our new upcoming peer reviewed paper we’ll have some things that directly address your concerns, and no, I won’t talk about them now. I look forward to your comments then when it passes peer review, and also your comments when and if all four BEST papers pass peer review significantly unchanged. And by the way, the bulk of that SPPI paper you hate has been reviewed and published by the respected science publisher Elsevier.
I didn’t have time to work on it for that publication so only Joe D’Aleo’s name appears on it. See here: chapter 3, “A critical look at surface temperature records”. In the meantime I’m not wasting any more time on the concerns of a person who labels me and others a “denier” and is trying to sell a book filled with such ugliness. – Anthony Anthony – do you still not get it? I am not Robert Henson, I have not written any books; and I am not trying to sell his book. You can get as self-righteous as you want; all I am asking for is intellectual honesty! If you are going to continue to insist that climate change is not being caused by humans, you must have a defensible alternative. This you have not got. You are just refusing to accept what we have got: a workable hypothesis that fits the evidence we have got – it’s a bit like evolution in fact (only we can see it happening)… REPLY: So many people in the AGW camp use fake names and noms de plume when they attack me, because they don’t have integrity, I thought your “Lack” was a nom de plume to fit “lackofenvironment”. Still you push a book on deniers, and I find that repulsive that you’d embrace the word by pushing the book. Like I said, check back in a few months when we’ve published and then you’ll be able to see very clearly why I’m not the least bit concerned about your opinions. – Anthony Martin Lack has problems. Please put him in the troll bin post haste. I’m fine with him viewing it as a badge of honour. What book are you talking about? There is only one book that exists; and that is the very-well respected, and entirely rational Rough Guide to Climate Change by Robert Henson… My website is named after me (not the other way around); and the image that appeared on my blog last week is just my spoof on the front cover of Henson’s book. So, do you see, “my” book does not exist; and my six pillars of [you know what] are just my own simplification of page 257 of Henson’s book. Do you get it now?
I look forward to reading what you have to say in your book; hopefully it will make more sense than your total sense of humour failure and misconstrual of all that I have been trying to say here. In the meantime, I would try and get some rest, you are clearly not working at optimum efficiency! Finally, with all due respect to you and your knowledge of meteorology – and your quest to prove AGW to be a hoax (or whatever it is you think you’re fighting for) – I think you should be concerned about the opinions of others; especially those that know more about a wider range of subjects than you do. You never know, you might actually learn something from them! REPLY: I’m only trying to demonstrate that the way surface measurements have been done affects the record. Check back again on the upcoming paper. As for the rest? So you made a spoof book cover of a book written by somebody else, then PUT YOUR OWN NAME ON IT (as seen below) …and now you are upset it is misinterpreted and suggest I have health problems? OK, we’re done. Get off my blog, you fabricating liar. -Anthony Anthony, I know you will probably block my IP address now but, for the record, I have not lied about anything; you just misunderstood. That is not my fault. I changed the name of the book, publisher, and author (all 3 if you look carefully enough) – just to avoid any accusation of Copyright infringement. I think you are getting far too sensitive about all this. REPLY: Putting your name on a book cover that somebody else wrote isn’t lying and “not your fault”? Bullshit. I’d be excoriated for pulling such a stupid stunt. I wonder what respected author James Lovelock thinks about his “endorsement” of you? I suppose we’ll find out as I’ve sent him an email. No more from you, we are done. -Anthony Thank you, Mr Watts! I really could not have tolerated much more garbage from Mr Lack. Happily his surname is appropriate to his brain power.
When Dr Muller feels he has performed enough alarmist shenanigans to ensure the funding stream for BEST, and secured his salary for a few more years, I wonder if his conscience will bring him back to science by peer review rather than by press review. As a participating member in the actual collection of some of the data which he has so expertly massaged, I have sincere doubts that it will ever be possible to use such data for a coherent climate record. As far as I can see we have a more or less flat line with huge amounts of noise on either side caused by all the naturally varying factors. I would have hoped for an increase of temperature since the LIA to have been very clear by now, as without it I fear we are in for some cold dark days ahead. I believe that the historical anecdotal and archeological record is more reliable than the numerical temperature record. Let’s hope that the satellite data will be less manipulated and more informative. Frank Lansner says: October 22, 2011 at 7:29 am To Glenn Tamblyn! You are 100% correct when you address issues on averaging temperature data. Even well-known sceptics jump happily into this trap of “validating” GHCN temperature data and the like by averaging blindly, exactly as they are supposed to do, and then afterward can be quoted: _______________________________________________ I strongly urge everyone to look at the work Frank has done. It is the true science that BEST should have done and DID NOT! Joanne Nova also put it on her blog. One of her commenters, “Pointman”, also has a blog that NAILS Muller & co. as Pathological Scientists. This is very nasty propaganda, folks, and we cannot get lost in the details and ignore the main objective: to fleece the ordinary people the world over. Reread Anthony’s post on just what these MONSTERS are really up to. otter17 says: October 22, 2011 at 7:39 am ……..Couldn’t all the disagreement with BEST and Dr.
Muller have been kept private with them rather than posting on the WUWT blog? __________________________________ Muller declared war with yesterday’s media blitz. Why should Anthony continue to act the gentleman??? It is MULLER, not Anthony, that blew off the “confidentiality agreement”. He is the one that went public. LazyTeenager says: October 22, 2011 at 6:04 pm Well I have seen lots of commenters say that there has been no warming at all… ____________________________________________ The answer depends on the time scale. And that is why it is so easy to lie and pull the wool over the eyes of the ordinary Joe. Last decade??? No warming. Last six decades??? A slight warming trend. Last 8000 years??? Cooling trend. Last 0.03 million years??? Warming trend. Anthony, here is a proper question for you. I ask because I think you are in danger of sidelining a very important issue for the sake of maintaining the existence of your site. IS GLOBAL WARMING REAL OR NOT AND ARE WE IN DANGER? Please don’t over-respond, a simple yes or no will suffice (if you wish to answer). If the answer is no, then I will accept it and would ask you to please keep on going with your magnificent work. But if the answer is “Yes”, then please close this site down and open a new one that deals more with “solutions” to our problem and less with the actions, findings or results of others while all you do is suck some glory from their very existence. Please don’t take this as a complaint, I’ve read much of your stuff and I think you have done much to help others comprehend the issue of Global Warming. But enough is enough, find solutions, mate, not whinging material 🙁 Donald says, “here is a proper question for you”: “IS GLOBAL WARMING REAL OR NOT AND ARE WE IN DANGER?” That’s two questions. Can’t you count? Ask them one question for me. If you find Ocean temperatures have been decreasing, will you make a press release that leads off with this statement: “Global Warming is false”?
I am very curious to hear their answer.
Repo provides a mechanism to hook specific stages of the runtime with custom Python modules. All the hooks live in one git project which is checked out by the manifest (specified during repo init), and the manifest itself defines which hooks are registered. These are useful to run linters, check formatting, and run quick unittests before allowing a step to proceed (e.g. before uploading a commit to Gerrit). A complete example can be found in the Android project. It can be easily re-used by any repo based project and is not specific to Android. When a hook is processed the first time, the user is prompted for approval. We don't want to execute arbitrary code without explicit consent. For manifests fetched via secure protocols (e.g. https://), the user is prompted once. For insecure protocols (e.g. http://), the user is prompted whenever the registered repohooks project is updated and a hook is triggered. For the full syntax, see the repo manifest format. Here's a short example from Android. The <project> line checks out the repohooks git repo to the local tools/repohooks/ path. The <repo-hooks> line says to look in the project with the name platform/tools/repohooks for hooks to run during the pre-upload phase.

```xml
<project path="tools/repohooks" name="platform/tools/repohooks" />
<repo-hooks in-project="platform/tools/repohooks" enabled-list="pre-upload" />
```

The repohooks git repo should have a Python file with the same name as the hook. So if you want to support the pre-upload hook, you'll need to create a file named pre-upload.py. Repo will dynamically load that module when processing the hook and then call the main function in it. Hooks should have their main accept **kwargs for future compatibility. Hook return values are ignored. Any uncaught exceptions from the hook will cause the step to fail. This is intended as a fallback safety check though, rather than the normal flow. If you want your hook to trigger a failure, it should call sys.exit() (after displaying relevant diagnostics).
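To make the failure convention concrete, here is a minimal sketch of what a pre-upload.py hook could look like. The check itself (rejecting project names that contain whitespace) is purely hypothetical, a stand-in for a real linter or test run; only the `main` entry point and the sys.exit() failure convention come from the description above.

```python
import sys


def run_checks(project_list):
    # Hypothetical check: flag any project whose name contains whitespace.
    # A real hook would run linters or quick unit tests here instead.
    return [p for p in project_list if " " in p]


def main(project_list, worktree_list=None, **kwargs):
    """Entry point called by repo; the name "main" is required."""
    failures = run_checks(project_list)
    if failures:
        # Display diagnostics first, then signal failure with a non-zero
        # exit, since return values are ignored and uncaught exceptions
        # are only a fallback safety check.
        for name in failures:
            print("check failed for project: %s" % name, file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main(["platform/tools/repohooks"])
    print("all checks passed")
```

Because return values are ignored, the only signals a hook has are its output and its exit status, which is why the failure path above prints to stderr before exiting non-zero.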
Output (stdout & stderr) are not filtered in any way. Hooks should generally not be too verbose. A short summary is nice, and some status information when long running operations occur, but long/verbose output should be used only if the hook ultimately fails. The hook runs from the top level of the repo client where the operation is started. For example, if the repo client is under ~/tree/, then that is where the hook runs, even if you ran repo in a git repository at ~/tree/src/foo/, or in a subdirectory of that git repository in ~/tree/src/foo/bar/. Hooks frequently start off by doing an os.chdir to the specific project they're called on (see below) and then changing back to the original dir when they're finished. Python's sys.path is modified so that the top of the repohooks directory comes first. This should help simplify the hook logic to easily allow importing of local modules. Repo does not modify the state of the git checkout. This means that the hooks might be running in a dirty git repo with many commits and checked out to the latest one. If the hook wants to operate on specific git commits, it needs to manually discover the list of pending commits, extract the diff/commit, and then check it directly. Hooks should not normally modify the active git repo (such as checking out a specific commit to run checks) without first prompting the user. Although user interaction is discouraged in the common case, it can be useful when deploying automatic fixes. If the hook is written against a specific version of Python (either 2 or 3), the script can declare that explicitly. Repo will then attempt to execute it under the right version of Python regardless of the version repo itself might be executing under. Here are the shebangs that are recognized:

#!/usr/bin/env python & #!/usr/bin/python: The hook is compatible with Python 2 & Python 3. For maximum compatibility, these are recommended.

#!/usr/bin/env python2 & #!/usr/bin/python2: The hook requires Python 2.
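The chdir-and-restore pattern described above can be wrapped in a small helper so a hook cannot forget to change back, even when its checks raise. This is an illustrative sketch, not part of repo itself; the `working_dir` name is our own.

```python
import contextlib
import os
import tempfile


@contextlib.contextmanager
def working_dir(path):
    """Temporarily chdir into a project's worktree, then restore the
    original directory even if the body of the with-block raises."""
    orig = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(orig)


# Demonstration: after the with-block we are back where we started.
start = os.getcwd()
with working_dir(tempfile.mkdtemp()):
    pass  # run per-project checks here
print(os.getcwd() == start)
```

Using a context manager here is a design choice: the `finally` clause guarantees the restore happens on both the success and failure paths, which matters because an uncaught exception in a hook fails the whole step.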
Version specific names like python2.7 are also recognized.

#!/usr/bin/env python3 & #!/usr/bin/python3: The hook requires Python 3. Version specific names like python3.6 are also recognized.

If no shebang is detected, or it does not match the forms above, we assume that the hook is compatible with both Python 2 & Python 3, as if #!/usr/bin/python was used. Here are all the points available for hooking.

pre-upload

This hook runs when people run repo upload. The pre-upload.py file should be defined like:

```python
def main(project_list, worktree_list=None, **kwargs):
    """Main function invoked directly by repo.

    We must use the name "main" as that is what repo requires.

    Args:
        project_list: List of projects to run on.
        worktree_list: A list of directories. It should be the same length as
            project_list, so that each entry in project_list matches with a
            directory in worktree_list. If None, we will attempt to calculate
            the directories automatically.
        kwargs: Leave this here for forward-compatibility.
    """
```
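A sketch of how a hook body might walk the arguments repo passes in, pairing each project with its worktree and visiting each one. The per-project check is a hypothetical placeholder, and the fallback used when worktree_list is None is our own guess for illustration, not repo's actual directory-resolution logic.

```python
import os


def check_project(project, worktree):
    # Hypothetical per-project check; a real hook might chdir into
    # `worktree` and run a linter over the pending commits there.
    return os.path.basename(worktree) != ""


def main(project_list, worktree_list=None, **kwargs):
    """Pair each project with its worktree and run a check on each."""
    if worktree_list is None:
        # Assumed fallback: treat each project name as a path under the
        # repo client root (repo's real resolution is more involved).
        worktree_list = [os.path.join(os.getcwd(), p) for p in project_list]
    results = {}
    for project, worktree in zip(project_list, worktree_list):
        results[project] = check_project(project, worktree)
    return results


print(main(["foo", "bar"], worktree_list=["/src/foo", "/src/bar"]))
# → {'foo': True, 'bar': True}
```

The doc guarantees that worktree_list, when given, is the same length as project_list, which is what makes the zip() pairing safe.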