The problem “Form minimum number from given sequence” states that you are given a pattern consisting only of I’s and D’s, where I stands for increasing and D stands for decreasing. The problem asks you to print the minimum number that satisfies the given pattern. Only the digits 1-9 may be used, and no digit may be repeated.

Example

Input: DIID
Output: 21354

Explanation: The first digit is 2; since the pattern starts with D (decrease), the next digit is 1. Then I (increase) makes the next digit 3, the minimum unused digit greater than 1. We must increase again, but the character after that is D, so we have to leave room to decrease without repeating a digit. Increasing to 4 would not work: after decreasing we would need one of 1, 2, or 3, and all of those are already used. So we increase to 5 and then decrease to 4, and the output becomes 2 1 3 5 4.

Algorithm to form minimum number from given sequence

- Check if the given sequence length is greater than or equal to 9; if true, return -1.
- Declare a char array of size n+1 and set the value of count to 1.
- Start traversing the pattern from index 0 to n (inclusive).
- Check if i is equal to n or the string’s current character is equal to 'I'; if true, then:
  - Traverse backwards from the previous index (i-1) down to -1.
  - At each step, store the value of count in the output array at index j+1, then increase count.
  - If j is greater than or equal to 0 and the character of the string at index j is 'I', break out of the inner loop.
- Return the result.

Explanation

Given a string of I’s and D’s only, we are asked to print the minimum number that matches the given pattern, where I means the next digit must be larger and D means it must be smaller.
Suppose the pattern is D. D stands for decreasing, so we form 21: the minimum number in which the first digit is followed by a smaller one. The digits cannot be repeated, and the number may only contain the digits from 1-9. Now suppose the pattern is DI: after forming 21, the I tells us to form a minimum increasing number. Since 2 is already in use, 1 is followed by 3, the smallest unused digit greater than 1, giving 213. With all of these concepts and ideas, we are asked to print the minimum number that follows and satisfies the given conditions.

We use an output character array in which we store our result. We also add some conditions for data validation: if we find the length of the string greater than or equal to 9, we return -1 as the result, because we are strictly ordered to use the digits 1-9 with no repetition. We then traverse the string we received as input. Whenever the current index is equal to the length of the string or the current character is equal to 'I', we proceed further: we start traversing from the index previous to the current one (i-1) down to -1. Inside this loop we keep increasing the value of count and storing it into the output array at index j+1. Then we check if the value of j is greater than or equal to 0 and if the character at index j is 'I'. If true, we break the loop and proceed to scan for more characters.
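The traversal just described can be sketched directly in Python (a straightforward port of the idea; the function name is my own):

```python
def get_minimum_number_seq(pattern):
    """Build the smallest digit string (digits 1-9, no repeats)
    matching a pattern of 'I' (increase) and 'D' (decrease)."""
    n = len(pattern)
    if n >= 9:
        return "-1"
    output = [''] * (n + 1)
    count = 1
    for i in range(n + 1):
        # At the end of the pattern, or at an 'I', fill the pending
        # positions backwards so the preceding run of 'D's descends.
        if i == n or pattern[i] == 'I':
            for j in range(i - 1, -2, -1):
                output[j + 1] = str(count)
                count += 1
                if j >= 0 and pattern[j] == 'I':
                    break
    return ''.join(output)

for p in ["DIID", "ID", "II", "DI", "DDII", "IDID", "IDIDID"]:
    print(p, "->", get_minimum_number_seq(p))
# DIID -> 21354, ID -> 132, II -> 123, DI -> 213, ...
```

The backward fill is the heart of the trick: a run of D's is satisfied by writing consecutive counts in reverse order.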
Code

C++ code to form minimum number from given sequence

#include <iostream>
using namespace std;

string getMinimumNumberSeq(string str)
{
    int n = str.length();
    if (n >= 9)
        return "-1";
    string output(n + 1, ' ');
    int count = 1;
    for (int i = 0; i <= n; i++)
    {
        if (i == n || str[i] == 'I')
        {
            for (int j = i - 1; j >= -1; j--)
            {
                output[j + 1] = '0' + count++;
                if (j >= 0 && str[j] == 'I')
                    break;
            }
        }
    }
    return output;
}

int main()
{
    string inputs[] = { "DIID", "ID", "II", "DI", "DDII", "IDID", "IDIDID" };
    for (string input : inputs)
        cout << getMinimumNumberSeq(input) << "\n";
    return 0;
}

Output:
21354
132
123
213
32145
13254
1325476

Java code to form minimum number from given sequence

class minimumNumberID {
    static String getMinimumNumberSeq(String str) {
        int n = str.length();
        if (n >= 9)
            return "-1";
        char output[] = new char[n + 1];
        int count = 1;
        for (int i = 0; i <= n; i++) {
            if (i == n || str.charAt(i) == 'I') {
                for (int j = i - 1; j >= -1; j--) {
                    output[j + 1] = (char) ((int) '0' + count++);
                    if (j >= 0 && str.charAt(j) == 'I')
                        break;
                }
            }
        }
        return new String(output);
    }

    public static void main(String[] args) {
        String inputs[] = { "DIID", "ID", "II", "DI", "DDII", "IDID", "IDIDID" };
        for (String input : inputs) {
            System.out.println(getMinimumNumberSeq(input));
        }
    }
}

Output:
21354
132
123
213
32145
13254
1325476

Complexity Analysis

Time Complexity: O(N), where N is the length of the query string. We enter the nested loop only when we have reached the end of the string or the current character is I. The nested loop runs in the backward direction and exits as soon as it comes across an I, so it only works over indices that hold a D. Since each index holds either I or D, every character is traversed only once, and the time complexity is linear.

Space Complexity: O(N), because we create an output character array to store the result. The space complexity for the problem is also linear.
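If you want to convince yourself that the greedy construction really produces the minimum number, it can be cross-checked against a brute-force search over digit permutations (a verification sketch, not part of the original article):

```python
from itertools import permutations

def matches(number, pattern):
    """True if adjacent digits of `number` follow the I/D pattern."""
    return all(
        (a < b) if c == 'I' else (a > b)
        for a, b, c in zip(number, number[1:], pattern)
    )

def brute_force_min(pattern):
    """Smallest digit string over 1..9 (no repeats) matching pattern."""
    n = len(pattern) + 1
    # Permutation tuples compare lexicographically, which for
    # equal-length digit strings is the same as numeric order.
    best = min(p for p in permutations("123456789", n)
               if matches(p, pattern))
    return ''.join(best)

for p in ["DIID", "DDII", "IDID"]:
    print(p, "->", brute_force_min(p))
# DIID -> 21354, DDII -> 32145, IDID -> 13254
```

The brute force is exponential in the pattern length, so it is only a sanity check, but for patterns of length up to 8 it agrees with the O(N) greedy answers above.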
https://www.tutorialcup.com/interview/string/form-minimum-number-from-given-sequence-2.htm
Today, you will learn how to perform distributed load testing in Locust. If you are not familiar with Locust, let’s refresh the basics and then see how to perform distributed execution. Locust is an open source load testing tool, and you describe all of your tests in Python code.

Locust File Creation

Create a Python file using the code below.

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000

Note: if a file has an HttpLocust class, it will be considered a locustfile. In the above code we have two classes: UserBehavior, a TaskSet which is used to declare all of your tests, and WebsiteUser, an HttpLocust subclass which represents one user and must have a task_set attribute.

Executing the Locust File

Use the command below to run your locustfile.

locust -f locust-examples/locustfile.py --host=

Note: as per the above command, our locustfile is placed inside a folder (i.e. locust-examples). Once you execute the above command, the Locust web monitor will start, and you can start the load execution from the web monitor.

Locust Web Monitor

You can launch the web monitor in your browser. Now, click the Start Swarming button after entering the number of users and the hatch rate.

Distributed Execution

To start Locust in master mode:

locust -f my_locustfile.py --master

And then on each slave (replace 192.168.0.14 with the IP of the master machine):

locust -f my_locustfile.py --slave --master-host=192.168.0.14
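Beyond a single @task, Locust lets you weight tasks: decorating with @task(3) makes a task three times as likely to be picked as a @task(1) task. The effect of such weights can be illustrated with a self-contained sketch (this uses random.choices as a toy model of weighted selection; it is not Locust’s actual scheduler code, and the task names are invented):

```python
import random
from collections import Counter

# Hypothetical task names with Locust-style weights 3:1.
tasks = ["index", "about"]
weights = [3, 1]

random.seed(42)  # deterministic for the demo
picks = Counter(random.choices(tasks, weights=weights, k=10_000))
print(picks)
# "index" is chosen roughly three times as often as "about".
```

In a real locustfile the same ratio would emerge across simulated users hitting the weighted endpoints.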
https://codoid.com/distributed-load-testing-in-locust/
Imagine a door with an unusual handle. The handle is five feet off the ground and rotates upwards to open the door. The door has no lock. Is this a good door design? Sorry. That’s not an answerable question. The purpose of almost every door is to prevent something from going through it without preventing everything from going through it. Some doors exist only to mitigate heat loss while allowing anything else through. Some doors exist to keep everyone out except authorized personnel. The goodness or badness of a door design depends entirely on how well or poorly it performs the task of preventing undesired access and allowing desired access. If I were describing a jail cell door or bank vault door, that would be an absurdly bad design. But for the teacher’s supply room in a kindergarten classroom, it’s pretty good. It doesn’t prevent access to any adults, but the kids are unlikely to be able to get through without adult supervision. Purpose matters.

Last time I talked a bit about what makes a book title better or worse. These ideas can be extended to naming just about anything, but it is important when considering the goodness or badness of a name to keep in mind the purpose of the name. Frequently when we name types and methods, it’s like my book example. The purpose of the code, like the purpose of the book, is to provide a benefit to the consumer. The purpose of the name is usually (*) to make it easy for the potential consumer to find the thing, and then quickly and accurately evaluate whether it is likely to provide the desired benefit.

With those purposes in mind, here are some guidelines that I use when trying to come up with a public name for something:

- Everything in my Bad Names post.
- The name should describe what the thing is (classes) or does (methods) or is used for (interfaces), as opposed to how it achieves any of those things.
- Names should describe the unchanging aspects of the nature of the thing.
A small example of how I got this wrong recently: the style that I use for adding responses to comments in this blog is called “yellowbox”. If I ever decide to change the look of the blog and want to make it blue, I’m going to look pretty silly having a style called “yellowbox” that draws a blue box. I should have called it “response”; it’s always going to be for responses.

- The name should use the vocabulary of the consumer, not the jargon of the mechanisms used in the implementation.
- The name should be as unique as possible, so that searching for the words in it rapidly narrows the field down.
- The name should be as precise as possible. How many “HandleSomething” methods have you seen? Usually these do something a lot more specific than “handling”. It might be important to the consumer to have a better idea of what you’re doing in there.
- The name should not have any non-standard abbreviations; FindCustRec is unlikely to be found by searching for “customer” or “record”.

Those are just a few off the top of my head. What are some of the criteria you use to come up with good names for types and methods?

**************************

(*) There are unusual cases where names are deliberately chosen to work against those typical goals. Sometimes code is very special-purpose, designed to be used very rarely and then only by subject experts. In those cases, you do not want the code to be accidentally found by people who are unlikely to need it, and unlikely to use it correctly. In those cases, it’s desirable for the name to be laden with the jargon of the experts who will be using it. Doing so sends the message to potential users “if you do not know what these words mean, you probably should not be using this code”. Again, purpose matters.

>> I should have called it “response”; it’s always going to be for responses.

That might be true for an individual blog page, but it’s a dangerous statement to make when dealing with any site of reasonable complexity.
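As a concrete before-and-after for these guidelines, here is a small sketch in Python; all names are invented for illustration:

```python
# Before: abbreviated, vague names that hide what the function does
# and would never be found by searching for "customer" or "record".
def FindCustRec(h, d):
    return [r for r in d if r["name"] == h]

# After: full words in the consumer's vocabulary, describing what the
# function does rather than how it does it.
def find_customers_by_name(name, customers):
    return [c for c in customers if c["name"] == name]

customers = [{"name": "Ada"}, {"name": "Grace"}, {"name": "Ada"}]
print(find_customers_by_name("Ada", customers))
# [{'name': 'Ada'}, {'name': 'Ada'}]
```

Both functions behave identically; only the discoverability and precision of the names differ, which is exactly the point of the guidelines.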
What if I write something really profound and thought-provoking, and you decide to apply your special style to it to make it stand out for other readers? Also, the name “response” is just too generic – response to *what*? The text I’m typing right now is a response to the article, but that’s not the same thing as the text you might write in response to this comment. I would probably go with “highlightedCommentText”, but the flexible and reusable nature of a CSS style makes it really hard to choose good names. If you later decide to apply the style to certain sections of article text, then highlightedCommentText would become a bad name, too.

@Joe: In keeping with the “vocabulary of the consumer” guideline, “response” specifically refers to the author’s responses to comments. Sure, from your point of view the “comments” are responses to the blog, but only Eric sees that style name; he’s the consumer, so the vocabulary comes from his point of view.

A guideline I follow for naming is “Don’t be afraid to make a name too long.” It’s far easier, to my mind, to later identify an overly-long name and think of something better than it is to later find an overly-short name and try to figure out what it means. If you really must have short and cryptic names for things, you can always run an obfuscator over it later 😉

Personally, I 100% agree with everything said about the good names: they should describe the purpose of the thing, using the consumer’s vocabulary. I can envision someone offering an objection that sometimes, some ‘things’ have multiple purposes from the consumer’s perspective, but I believe this can always be worked around, using clever phrasing and a broader view of the ‘thing’s’ purpose. The fun starts, however, in the following situations:

– When the champions of the ‘old-school’ C++/UNIX school clash with the ‘pro-Microsoft’ crowd: i_am_sick_of_code_wars() vs.
IAmSickOfCodeWars();
– When the developers don’t speak English; in that case, the difference between FUNCBREC and StatementBindingContext is not clear, to put it gently: both are just some barbaric nonsense. 🙂

Also, what about Hungarian notation? I know it was useful in the early Windows days, when half of the system was written in assembly language and the name was the only way to know the ‘thing’s’ type; later, in the MFC days, we saw constructions like m_memberName; now, in the days of SharePoint, we see, for example, SPAlert for a class name… Along with its namespace prefix, it would look quite good as simply ‘Microsoft.SharePoint.Alert’ to me, but without the namespace prefix..? How many various ‘Alerts’ can there be in the code? And if we go ‘SharePointAlert’, then how long before we get ‘VeryLongNameThatTakesAgesToTypeAlert’? On the other hand, the prefix ‘SP-’ may suggest ‘SharePoint’ to some people and, say, ‘Starting Point’ to others… In short, I believe the art of naming is a bit like the martial arts: don’t use the techniques, use the principles. And, as I have already said, I agree with the principles from the post. Thanks.

For methods I use a rule of thumb that says, “Given a method’s name, signature, and return type, you should be able to write a test without looking at the code.” I’ll be pointing other developers to this and the Bad Names post for guidelines on how to get the name right. Thanks for both posts!

I live in Venezuela, and I found this blog almost by accident. I have a coworker who totally adores cryptic names that mean something only to her. For instance: the id of the user that writes this blog: string bidusuye; what about their name? string busun; …. I mean.. What?
I have argued so many times not to use those horrible names, but she won’t let go of her insane nomenclature. I don’t even want to touch the matter of the method names; they are insane… I, for one, like to use names that are descriptive, for example UserRegistration() for a method. But I like your guidelines and I think I’m going to print these, translate them into Spanish (that’s my main language; no one speaks English 4 miles from me), put them in big font letters, and pray they become a kind of rule. Of course, only if you don’t mind, mister. May I?

Names of functions should generally be action-oriented. Functions do things, and their names should tell what they do. For example, instead of UserRegistration(), it would be clearer to call it RegisterUser(). That also forces you to think about the actual steps involved in registering a user.

On another note, I think boolean variables are one of the toughest kinds to name well. Having a good handle on grammatical tenses helps a lot. Booleans describe a state. Understanding what state a variable has had, has right now, or will have in the future determines its use. I think booleans work best when they are used in the present tense. E.g. bool keepGoing vs bool gasPedalIsPressed.

I recently rewrote a small portion of code. It changed the way I thought about the way it worked, and I ended up refactoring it as well. I was able to spot the deficiencies just by trying to give the variables more meaningful names. (I wrote about it here:)

I like to avoid any of the items found here. Although one day I will figure out what the following evaluates to.

marypoppins = ( superman + starship ) / god;

@Denis, while you’re on the right track with your Sharepoint.Alert, I’ve seen people create entities called Image or File, which really mucks you around when you need to import System.IO or System.Drawing.Image and System.Web.UI.WebControls.Image. Namespaces are good, but it’s sometimes quite handy to create a convention.
6 of one, half dozen of the other; you get burned either way.

@AC, that’s what I mean: use the principles, not the techniques. Unique, precise, descriptive, easy to find, humanly readable (of reasonable length and in a language all potential readers understand). How exactly do we achieve that? It depends… and there is no point going balls-to-the-wall over it (I’ve actually seen a couple of barely-prevented fist fights in the office over naming conventions 🙂 ). In my experience, the more detailed, specific, and restrictive the conventions get, the more people resent them and make it ‘a matter of honour’ to bend or break them (for example, enforcing the American ‘color’ over the ‘colour’ commonly used here in Australia has caused protests along the lines of ‘What?! Shall we speak only with the “deep South” American accent during our team meetings now!?’ 🙂 ). And if the people around you don’t speak English (as in the case of @Luis Robles), and you are sure your code will never be read and modified by any foreign nationals ‘out there’, I don’t have any problem with using names in the vernacular, either. Naming is one of the few parts of modern software development which is still an art (most of it has long become a service industry, standardised through and through)… but I only say that because I don’t work for Microsoft! 🙂

“as opposed to how it achieves any of those things.” Putting the HOW in a method name can make sense for private functions, if, for example, a particular job is accomplished differently depending on context.

I believe that vocabulary should be dictated by, or strongly connected with, the problem domain. For instance, if we describe the door, “lock” is a good term: it resides in the door’s problem/purpose domain. On the other hand, we could name it “shiny metal thing”: descriptive, but taken from a much broader domain. In big software projects it is often hard to find adequate names that were not used before.
The scope of the problem domain narrows the word usage. This makes developers use prefixes and suffixes to spawn new words: BasicAuthentication, ComplexAuthentication, or BTree, BTreeEx, etc.

@AC:
using WebImage = System.Web.UI.WebControls.Image;
using SysImage = System.Drawing.Image;
using MyImage = MyCorp.Other.Graphics.Image;

I hate to be too literal, but I chuckle when I see a door with a sign on it that says “This door must remain closed at all times”. At “ALL times”??? Why is it a door, then, and not a wall? If it must remain closed at all times, the door doesn’t need to be there at all, and it should be plastered over!

You won’t look silly if you change your stylesheet to have .purpleText {color:#000;}

Speeling off nams iis verry inportamt! Seriously, many years ago I implemented a dictionary as part of “Lint” checks. This was prompted by working on an international project where “color”/“colour” type mismatches were occurring all of the time, often with both pointing to different information from within the same context. On the other hand, I am fairly liberal with adding “words” to this dictionary if there is a consensus about their meaning.
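Several of the comments above (action-oriented method names, present-tense booleans) can be condensed into a small Python sketch; all names here are invented for illustration:

```python
class RegistrationForm:
    def __init__(self):
        # Present-tense boolean: describes the state right now.
        self.terms_are_accepted = False

    def accept_terms(self):
        # Verb phrase: the method does something.
        self.terms_are_accepted = True

    def register_user(self, name):
        # "RegisterUser", not "UserRegistration": action-oriented.
        if not self.terms_are_accepted:
            raise ValueError("terms must be accepted before registering")
        return f"registered {name}"

form = RegistrationForm()
form.accept_terms()
print(form.register_user("Luis"))
# registered Luis
```

Reading the call site aloud ("form, accept terms; form, register user") is a quick test of whether the names carry their weight.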
https://blogs.msdn.microsoft.com/ericlippert/2009/04/06/good-names/
It’s easy to use the Twilio API to send and receive SMS using Python and Bottle. What if we convert that traditional web application into a serverless application using Amazon Web Services’ API Gateway and AWS Lambda? Interested in learning how to create a serverless Python application? Great, because we’re going to build a serverless API for Twilio SMS throughout this post.

What Problem Are We Trying to Solve?

If we were to build a production-ready service for our next million dollar startup, a Bottle web server running locally on a laptop with Ngrok tunneling would not scale. That’s a no-brainer. But if a startup were to allow multiple users to use the same number, say Sales and Marketing, how would they control access to the API calls? They’d want to monitor and throttle usage for each department so they don’t break the bank. They would need a service that allows multiple users to call the same Twilio service and that is scalable, secure, and hopefully easy to manage. As a bonus, we can enable them to control the usage for each user. Let’s build it.

Old Way of Solving this Problem

Traditionally, this is what we’d need to solve the problem:

- A web server such as Nginx or Apache.
- A web framework to expose the endpoints (what the Bottle framework was doing in the original SMS using Python and Bottle post). In the traditional Model-View-Controller (MVC) paradigm we don’t really need the View part, but the framework comes as a bundle, so we just throw away the parts that are not needed.
- Functions implementing what each endpoint should do.
- Optionally, it would probably be easiest if the server can be accessed publicly, like the Ngrok tunneling in Matt’s post. This means the server needs a public IP with DNS set up.
- Some type of authentication mechanism to control access, depending on which framework was used.
- Optionally, we should monitor the API usage.

Even after all that work, that is just one server.
When the business grows beyond a single server, more servers are needed. We could put the service on AWS EC2 instances with auto-scaling, but that would only solve #4 above, still leaving the rest of the tasks to be worked on.

There is a Better Way

Luckily, we live in the age of cloud computing. By combining the AWS API Gateway service with Lambda, we can solve most of the problems above. Here is an overview of the flow:

Each component above represents a service from AWS, not servers or VMs. Here is the workflow:

- The API Gateway will create publicly accessible URI endpoints.
- Each user will be assigned a key to identify the user and assign proper usage.
- The API request will be passed on to the AWS Lambda service.
- The Lambda function will in turn make a call to the Twilio API and send the text message.

I will show you how to set up this flow by extending Matt’s code into Lambda. Truth be told, both the API Gateway and Lambda services have extensive features that we are only scratching the surface of. Luckily, they are both well documented; here are some links that can be helpful if you want to learn more about them: Amazon API Gateway, Amazon Lambda. Both services can work in conjunction with Twilio’s subaccount services, which allow you to separate the billing and API keys per subaccount.

Tools Needed

- Your AWS account. If you don’t already have one, it is highly recommended to register a free account. Many of the services have a free tier (see the note about Lambda below) that you can use for free.
- A Twilio account and the API key that you set up in the post Getting Started with Python, Bottle, and Twilio SMS / MMS.

Step 0. Before We Begin:

- If you are brand new to AWS, there is a quick start guide broken down into solutions for each industry that you might find interesting.
- If you are brand new to Lambda, it might be beneficial to watch AWS’s introduction video about the serverless model.
- If you are brand new to the Amazon API Gateway service, it might be helpful to quickly glance through the high level points here.
- At the time of publishing in June 2017, there is a free tier for Lambda of up to 1 million requests per month, which is more than enough for getting started with the service. Please check the pricing page for the latest AWS free tier details.
- The Amazon API Gateway service is currently NOT offered in the free tier. However, there are no minimum fees or upfront commitments with the usage-based service. Pricing is based on usage and your region of choice. For example, in US-East, the price is $3.50 per million API calls received, plus the cost of data transfer out. My total cost of API services when I built this use case was $0.01, but your mileage might vary.
- In AWS, not all regions are created equal. We will be using the us-east-1 region in N. Virginia to build both services; make sure you select it in the top right hand corner. This is sometimes overlooked and causes issues down the line, since the services have to exist in the same region.

Step 1. Create IAM Role

For security best practices, we will create a new role using Amazon Identity and Access Management (IAM) that will later be assigned to the Lambda function.

- Create a free account or sign in to your AWS console. You should land on a page where you can create a new account or log in to your existing account.

1. Once logged in, click on “Services” in the top left hand corner and choose IAM under Security, Identity & Compliance.
2. Choose the Roles option in the left panel.
3. Create a new role and give it a name.
4. Select AWS Lambda for the role type.
5. Click on Attach Policy and attach the following pre-made policies to the role.

Step 2. Create New Amazon API Endpoint

In our simple design, we will use a single API endpoint: POST to /sms. In the body of the POST message, we will construct three JSON key-value pairs: to_number, from_number, and message.
1. From the Services drop down, choose API Gateway under Application Services.
2. If this is your first time on the API Gateway page, click on the “Get Started” button, or click on “Create API” if you have existing API gateways.
4. If there is pre-populated sample code, close it and choose New API. Use twilioTestAPI as the name; you can put anything you want for the description.
5. Use the Actions drop down to create a resource; name it SMS.
6. Click on Actions again to create a method, then from the drop down menu create a POST method.
7. Select Lambda Function as the integration type. A menu to select the Lambda region and the name of the function will appear.
8. At this point we need to go create the Lambda function before we can link the API to the correct resource, so let’s do that. I find it easier to leave this tab open in the browser so we can come back to it, but it is up to your personal preference.

Step 3. Create Twilio Lambda Function

- Make a new directory somewhere on your computer for the code you will write for the Lambda function.
- Change to the newly created directory and install the Twilio helper library locally via pip in the same directory. It is important to install any helper libraries you need into a single directory: in later steps we will zip all the modules in this directory into one zip file and upload it to Lambda.

2016-09-15 11:25:54 ☆ MacBook-Air in ~/Twilio/AWS_API_Demo
○ → pip install twilio==5.5.0 -t .
Collecting twilio
  Downloading twilio-5.5.0-py2.py3-none-any.whl (262kB)
    100% |████████████████████████████████| 266kB 998kB/s
Collecting pytz (from twilio)
  Using cached pytz-2016.6.1-py2.py3-none-any.whl
Collecting six (from twilio)
  Using cached six-1.10.0-py2.py3-none-any.whl
Collecting httplib2>=0.7 (from twilio)
Installing collected packages: pytz, six, httplib2, twilio
Successfully installed httplib2 pytz-2015.7 six-1.10.0 twilio

Note:
If you are using a Python version that was installed with Homebrew, instead of pip install -t you would need to use pip install --prefix; see the pip documentation for more information. In general, we have seen some issues with Homebrew-installed Python due to installation location issues. If you can, use a standard Python install for these examples.

3. Create a new file named lambda_function.py and construct the code below. Replace the placeholders with your Twilio Account SID and Auth Token, which you can get from the Twilio Console. The lambda_function.lambda_handler() function will be our entry point when the function is called.

from __future__ import print_function
from twilio import twiml
from twilio.rest import TwilioRestClient
import json

client = TwilioRestClient("<your twilio account sid>", "<your auth token>")

def lambda_handler(event, context):
    print("this is the event passed to lambda_handler: " + json.dumps(event))
    print("parameters", event['to_number'], event['from_number'], event['message'])
    client.messages.create(to=event['to_number'],
                           from_=event['from_number'],
                           body=event['message'])
    return "message sent"

4. Zip everything in the directory into a file called twilioLambda.zip:

zip -r twilioLambda.zip *

5. Go back to your browser for the AWS portal, click on Services, and choose Lambda under Compute.
6. Create a new function.
7. Choose Blank Function.
8. Use the drop down box to choose API Gateway as your trigger.
9. Choose twilioTestAPI as the API name.
10. Add a name for your function, such as twilioAPILambda, choose Python 2.7 as the Runtime, and choose “Upload a .ZIP file” for the Code entry type. Click on “Upload” and upload the previously created zip file.
11. Scroll down to the Lambda function handler and role. Leave the handler name as lambda_function.lambda_handler, choose an existing role, and select the role you created in Step 1.
12. Review and choose Create Function.
13.
Please note, you might get a trigger error at this point, but that is ok; we will wire it up in the next step.

Step 4. Wire up the API Service with the Lambda Function

- Head back to the AWS API Gateway page from Step 2, substep 7. Refresh your page and choose Lambda Function, the us-east-1 region (or another region if that is where you created your Lambda function), and fill in the name of the Lambda function that you just created. Then click on Save.

2. Click on “OK” when prompted to grant the API access to the Lambda function.
3. At this point you will see them wired up. You can click on the Test button to test the connection.
4. Enter your to_number, from_number, and message according to your account in the request body and click on Test. Below is an example of the JSON request body:

{
  "to_number": "<your to number>",
  "from_number": "<your Twilio from number>",
  "message": "hello from API"
}

5. You should get a 200 status and “message sent” in the response body. If it does not work, delete the resource and recreate it.
6. You can also test it from the Lambda function itself on the corresponding Lambda page on AWS: use the Test button and fill in the same fields.
7. You should now see the message on your phone.

Step 5. Create Security Authentication and a Usage Plan

- Create an API key; after creation, take note of the API key.

2. Create a usage plan and associate it back to the API.
3. Set “API Key Required” to true for the POST method.
4. From the Actions drop down, choose Deploy API.
5. Take note of the Invoke URL.
6. Associate the usage plan with the API stage that you just created.

Step 6.
At Last, Test out the Public API

Here is a quick Python script for testing the URI. Notice that I changed the message to indicate it was sent from the Python script:

import requests, json

url = ""  # paste your Invoke URL from Step 5 here
auth_header = {'content-type': 'application/json',
               'x-api-key': '<your API key>'}
data = {
    "to_number": "<phone number>",
    "from_number": "<phone number>",
    "message": "Hello from Python Requests Test to Prod!"
}
r = requests.post(url, data=json.dumps(data), headers=auth_header)
print(str(r.status_code) + " " + r.content)

Here is the result when we execute the above code:

○ → python pythonTest.py
200 "message sent"

Let’s try again with a bogus key in the header:

auth_header = {'content-type': 'application/json', 'x-api-key': 'bogus key'}

As expected, here is the forbidden message:

○ → python pythonTest.py
403 {"message":"Forbidden"}

Conclusion

Whew, we did it! Hopefully you got a sense of the power of a serverless setup. I believe the combination of Amazon Lambda and the Amazon API Gateway service is one of those disruptive technologies that allows for quick and secure service deployments that previously could not be done. Please feel free to reach out to me on Twitter @ericchou or by email eric@pythonicneteng.com if you have any questions regarding this post or my new book, Mastering Python Networking.
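As a local sanity check before deploying, the handler logic from Step 3 can be exercised with a stub standing in for TwilioRestClient (a testing sketch; FakeClient and the phone numbers are invented placeholders, not part of the Twilio or AWS SDKs):

```python
class FakeMessages:
    """Records calls instead of sending real SMS."""
    def __init__(self):
        self.sent = []

    def create(self, to, from_, body):
        self.sent.append({"to": to, "from_": from_, "body": body})

class FakeClient:
    def __init__(self):
        self.messages = FakeMessages()

client = FakeClient()

def lambda_handler(event, context):
    # Same shape as the real handler, minus the actual Twilio client.
    client.messages.create(to=event["to_number"],
                           from_=event["from_number"],
                           body=event["message"])
    return "message sent"

event = {"to_number": "+15551230001",      # placeholder numbers
         "from_number": "+15551230002",
         "message": "hello from API"}
print(lambda_handler(event, None))
print(client.messages.sent)
```

This kind of stub test catches key-name mismatches in the event payload (to_number vs from_number, etc.) without spending any Twilio or AWS credits.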
https://www.twilio.com/blog/2017/06/build-serverless-api-amazon-web-services-api-gateway.html
C library function - exit()

Description

The C library function void exit(int status) terminates the calling process immediately. Any open file descriptors belonging to the process are closed, any children of the process are inherited by process 1, init, and the process's parent is sent a SIGCHLD signal.

Declaration

Following is the declaration for the exit() function.

void exit(int status)

Parameters

status -- This is the status value returned to the parent process.

Return Value

This function does not return.

Example

The following example shows the usage of the exit() function.

#include <stdio.h>
#include <stdlib.h>

int main ()
{
   printf("Start of the program....\n");
   printf("Exiting the program....\n");
   exit(0);
   printf("End of the program....\n");
   return(0);
}

Let us compile and run the above program; this will produce the following result:

Start of the program....
Exiting the program....
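By convention, the status passed to exit() is what the parent process sees (a shell reads it back as $?, with 0 meaning success). A tiny hypothetical helper makes the mapping explicit:

```c
#include <stdlib.h>

/* Hypothetical helper: translate a success flag into the conventional
   status value to pass to exit(). EXIT_SUCCESS is 0; EXIT_FAILURE is
   a non-zero value (commonly 1). */
int exit_status_for(int succeeded)
{
    return succeeded ? EXIT_SUCCESS : EXIT_FAILURE;
}
```

A caller would then write exit(exit_status_for(ok)); and a shell script could branch on the 0 or non-zero value it sees in $?.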
http://www.tutorialspoint.com/c_standard_library/c_function_exit.htm
The following example shows how you can define a FileReference object in MXML by defining a custom namespace for the flash.net package. Full code after the jump.

<?xml version="1.0" encoding="utf-8"?>
<!-- -->
<mx:Application
    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;
            private const DOWNLOAD_URL:
    <mx:Button
</mx:Application>

Could you fix the Flash plugin validator? I tried IE7 and FF3 with player 9.0r115 and got a request to install Flash Player 9.

Yevgen, yeah, sorry. I believe that the Flex Builder 3.0.1 update targets Flash Player 9,0,124,0 by default. So you may need to update your Flash Player to 9,0,124,0, or I may have to set the target Flash Player to 9,0,115,0 and republish each of the samples from the last week or so. In the future I'll try to target a lower version of the Flash Player. Peter

Thanks. I updated my player. But you didn't account for another problem: your visitors are in most cases developers, and your "download player" link pointed to the regular player, not the debug version. Yevgen

Yevgen, sorry, it is the default Flash Player detection code from Flex Builder 3.0.1. Peter

Sorry, I have a question about ... When the user clicks the download button, the application doesn't open a dialog box for the user to select the save path; instead, the application directly downloads the file and saves it to a specific path. That is my case. I don't know how to solve this question; can you give me some ideas? Thank you.
http://blog.flexexamples.com/2008/08/25/creating-a-filereference-object-using-mxml-in-flex/
# Dark code-style academy: line breaks, spacing, and indentation

![](https://habrastorage.org/r/w1560/getpro/habr/upload_files/414/735/211/414735211af7e003a0b936b32960e744.png)

Hey guys! Let me walk you through the next part of our dark-style code academy. In this post, we will discover some more ways to slow down the reading speed of your code. The following approaches will help you decrease maintainability and increase the chance of getting a bug into your code. Ready? Let's get started.

### Line breaks, spacing, and indentation may kill

How do people read books? From top to bottom, from left to right (at least the majority). The same happens when developers read code. One line should give only one thought; therefore, every line should contain only one command. If you want to confuse other developers, you'd better violate those principles. And I will show you how.

#### Case #1

Look at this piece of code. One idea per line. It is so clean it makes me sick.

```
return elements
    .Where(element => !element.Disabled)
    .OrderBy(element => element.UpdatedAt)
    .GroupBy(element => element.Type)
    .Select(@group => @group.First());
```

We could join all the statements into one line, but that would be too easy. That way, the developer's brain will understand that something is wrong and will parse your statements from left to right. Easy-peasy! The better way is to leave a couple of statements per line and put some on their own separate lines. The best scenario is when a developer doesn't even notice some statements, which leads to misunderstanding and eventually to a bug. Failing that, we will merely slow down his understanding of this code with all the parsing and "WTF" screaming.

```
return elements.Where(e => !e.Disabled)
    .OrderBy(e => e.UpdatedAt).GroupBy(e => e.Type)
    .Select(g => g.First());
```

How cool is that? You can add some indentation so other developers will be aligning your code for decades if they need to rename the `elements` variable.
```
return elements.Where(e => !e.Disabled)
                       .OrderBy(e => e.UpdatedAt).GroupBy(e => e.Type)
              .Select(g => g.First());
```

Send me a postcard if this approach passed code review in your team.

> Lesson: Leave a couple of statements per line and put some on their own lines.

#### Case #2

Absolutely the same idea here. But you see this code more often.

```
var result = (condition1 && condition2) || condition3 || (condition4 && condition5);
```

The action items are the same as well. Split lines to confuse as much as you can. Play a little with the line breaks to choose the best option.

```
var result = (condition1 && condition2) ||
    condition3 ||
    (condition4 && condition5);
```

And add some indentation to make it look like normal code.

```
var result = (condition1 && condition2) ||
                 condition3 ||
             (condition4 && condition5);
```

Remember, you have to keep a balance between the unreadability of your code and the believability of your excuse.

> Lesson: Play with line breaks to choose the best option.

#### Case #3

What about this one?

```
if (isValid)
{
    _unitOfWork.Save();
    return true;
}
else
{
    return false;
}
```

Same problem, in another shade. Here the best move is to join all the statements into one line and, of course, keep the curly braces.

```
if (isValid) { _unitOfWork.Save(); return true; } else { return false; }
```

This approach will only work when you don't have many statements in the `then` and `else` blocks. Otherwise, your code may be rejected at the code-review stage.

> Lesson: Join small `if/for/foreach` statements into one line.

#### Case #4

80 characters per line is a standard that is still preferable even today. It keeps a developer concentrated while reading your code. Moreover, you can open two documents side by side on one screen when you need to, with room left for your solution explorer.
```
bool IsProductValid(
    ComplexProduct complexProduct,
    bool hasAllRequiredElements,
    ValidationSettings validationSettings)
{
    // code
}
```

The easiest way to slow down reading your code is to make other developers scroll it horizontally. Just ignore the rule of 80 characters.

```
bool IsProductValid(ComplexProduct complexProduct, bool hasAllRequiredElements, ValidationSettings validationSettings)
{
    // code
}
```

It is super easy to forget what came before you started scrolling, or to miss the line where you started. Nice trick.

> Lesson: Ignore the rule of 80 characters on purpose.

#### Case #5

An empty line in the **right** place is a powerful instrument to group your code and increase reading speed.

```
ValidateAndThrow(product);

product.UpdatedBy = _currentUser;
product.UpdatedAt = DateTime.UtcNow;
product.DisplayStatus = DisplayStatus.New;

_unitOfWork.Products.Add(product);
_unitOfWork.Save();

return product.Key;
```

An empty line in the **wrong** place, along with the other tips from these lessons, may help you to save your job. Which empty line do you prefer?

```
ValidateAndThrow(product);
product.UpdatedBy = _currentUser;

product.UpdatedAt = DateTime.UtcNow;
product.DisplayStatus = DisplayStatus.New;
_unitOfWork.Products.Add(product);

_unitOfWork.Save();
return product.Key;
```

> Lesson: Place empty lines randomly.

#### Case #6

When you commit your code to a repository, there is a teeny-tiny possibility that you take a look at what you are committing. DON'T DO THIS! It is OK if you added an extra empty line like this.

```
private Product Get(string key)
{
    // code
}


private void Save(Product product)
{
    // code
}
```

Or, which is better, some extra spaces on the empty line here (just select line 5).

```
private Product Get(string key)
{
    // code
}
    
private void Save(Product product)
{
    // code
}
```

Why do you need it? The code is still working (but that's not for sure). You still understand your code.
Another developer will understand your code less. You can't just add extra spaces over all the methods in your project at once (code review is our enemy), but using this practice you will get a nice mess after a couple of weeks of active development. One additional benefit of extra spaces per line is that when other developers commit some related functionality, their IDE may automatically fix the file formatting, and on code review they will see thousands of identical red and green lines. If you know what I mean ;) For the same reason, you can set up tabs in your IDE if your project uses spaces, and vice versa.

> Lesson: Do not look at the code before you commit.

#### Case #7

Bypass those developers who can see the extra space in this code. They are dangerous for your career.

```
product.Name = model.Name;
product.Price =  model.Price;
product.Count = model.Count;
```

> Lesson: Know your enemy.

Hard work will make your code unmaintainable. Once you gather lots of small issues, they will keep growing without your input. Junior developers will write their code from your template. Then, on a wonderful day during your code review, you will hear "WTF?" from your team lead, and you will get the opportunity to use the famous phrase: "What? We always do it like this." And then you can point him to a thousand places like this one. Enjoy.
https://habr.com/ru/post/517690/
On Thu 19 Jul 2007 09:11, Michael Niedermayer pondered:
> On Thu, Jul 19, 2007 at 07:35:55AM -0400, Marc Hoffman wrote:
> > We would be me ++ folks using Blackfin in real systems that are
> > waiting for better system performance.
>
> doing the copy in the background like you originally did requires
> a few more modifications than you did, that is you would have to add
> checks to several points so that we dont read the buffer before the
> specfic part has been copied, this sounds quite hackish and iam not
> happy about it

Architecture-specific optimisations are never a happy thing. I would think that with the proper defines

#ifdef USE_NONBLOCKINGCPY
extern void non_blocking_memcpy(void *dest, const void *src, size_t n);
extern void non_blocking_memcpy_done(void *dest);
#else
#define non_blocking_memcpy(dest, src, n) memcpy(dest, src, n)
#define non_blocking_memcpy_done(dest)
#endif

it could be made less "hackish" - and still provide the optimisation.

> is mpeg4 encoding speed on blackfin really that important?

There are lots of people waiting for it to get better than it is. (Like me)

> cant you just optimize memcpy() in a compatible non background way?

memcpy is already as optimized as it can be - it is already in assembly, doing int (32-bit) copies when possible. The loop comes down to:

MNOP || [P0++] = R3 || R3 = [I1++];

which is a read/write in a single instruction cycle (if things are all in cache). This, coupled with zero-overhead hardware loops, makes things as fast as they can be. The things that slow this down are cache misses, cache flushes, and external memory page open/close - things you can't avoid. If we could be doing compute at the same time, it could make up for some of these stalls.

Based on our profiling, the single most executed instruction is the above read/write in the libc memcpy - about ~10% of the total CPU load (depending on the codec). This is pretty high, and a good candidate for the kind of optimisation that Marc is talking about.

This again is multiplied by the fact that the Blackfin architecture (as well as others) has a non-cached L1 that runs at core clock speed (like cache) but has no cache tags, and is therefore cheaper to implement - but harder to write software for :) This non-cached L1 area is where Marc is doing a lot of the video processing: copy things into non-cached L1, run computations on them, and store the answer back to L3. Using the existing memcpy pollutes the data cache with the memory reads, whereas the non_blocking version (since it would use DMA) would not.

-Robin
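The compatibility defines sketched in the message above can be written out compilably. This is only my reading of the idea: the non-blocking names are the thread's hypotheticals, and with USE_NONBLOCKINGCPY undefined everything falls back to a plain memcpy:

```c
#include <string.h>

#ifdef USE_NONBLOCKINGCPY
extern void non_blocking_memcpy(void *dest, const void *src, size_t n);
extern void non_blocking_memcpy_done(void *dest);
#else
/* Fallback: a plain blocking copy; "done" becomes a no-op. */
#define non_blocking_memcpy(dest, src, n) memcpy((dest), (src), (n))
#define non_blocking_memcpy_done(dest) ((void)0)
#endif

/* Copy a block, leaving room for unrelated computation to overlap
   with the copy on platforms that provide a DMA-backed version. */
void copy_block(void *dest, const void *src, size_t n)
{
    non_blocking_memcpy(dest, src, n);
    /* ... unrelated computation could run here while DMA copies ... */
    non_blocking_memcpy_done(dest); /* wait before reading dest */
}
```

On a platform without the DMA implementation this compiles to a straight memcpy, which is the "compatible" path Michael asked about.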
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-July/029663.html
This is my first time making video games, and I decided to use Unity, mainly because it is free. I am learning about terrain, but my main problem is how to make the character move and how to make the camera follow him/her. Can someone help me out and explain it very simply? Because, like I said, this is my first time.

There is a prefab called "First Person Controller" inside Unity. Search "First Person Controller" in the Project panel, drag it into the Scene view, and play the game.

Thank you so much, that helped a lot.

Thanks, I am new and was having trouble with that too.

Thank you, that really helped me too, because I am new to Unity.

Thanks, used this and it worked great; added some up-and-down code as well.

Answer by Wentzel · Nov 02, 2011 at 08:48 PM

var MoveSpeed : float = 0.5;

function Update () {
    if (Input.GetKey(KeyCode.UpArrow)) {
        transform.Translate(MoveSpeed, 0, 0);
    }
}

I couldn't test it as I'm using my phone. This is just a simple way of moving your object; if you change "Translate" into "Rotate" you can, well, rotate your object. I'm new myself :P I would recommend this site since you're new; it is a great place to start.

Answer by Penaloza12 · Jul 28, 2015 at 09:51 PM

This should work for you.

using UnityEngine;
using System.Collections;

public class Movement : MonoBehaviour {

    public int movementspeed = 100;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        if (Input.GetKey (KeyCode.A)) {
            transform.Translate (Vector3.left * movementspeed * Time.deltaTime);
        }
        if (Input.GetKey (KeyCode.D)) {
            transform.Translate (Vector3.right * movementspeed * Time.deltaTime);
        }
    }
}

Answer by CreativeStorm · Nov 02, 2011 at 08:48 PM

YouTube can become a good friend to you while learning Unity - there are lots of tutorials ;) A quick search brought up this: there are many more, and maybe some better; watching a couple of them will point you in the right direction.
You can find tutorials everywhere on the web; most are really simple, quick, and a nice way to get familiar with Unity ;)
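None of the answers above covers the second half of the question (making the camera follow the character), so here is a minimal follow-camera sketch in Unity C#. The class name, offset, and smoothing values are made-up defaults and would need tuning for your scene:

```csharp
using UnityEngine;

// Attach this to the Main Camera and drag the player into the "target" slot.
public class FollowCamera : MonoBehaviour
{
    public Transform target;                       // the character to follow
    public Vector3 offset = new Vector3(0, 3, -6); // hypothetical: above and behind
    public float smoothSpeed = 5f;                 // hypothetical smoothing factor

    // LateUpdate runs after the character has moved this frame.
    void LateUpdate()
    {
        if (target == null) return;
        Vector3 desired = target.position + offset;
        // Ease toward the desired position a little each frame.
        transform.position = Vector3.Lerp(transform.position, desired,
                                          smoothSpeed * Time.deltaTime);
        transform.LookAt(target);
    }
}
```

Using LateUpdate rather than Update avoids jitter, since the camera only moves once the followed object has settled for the frame.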
http://answers.unity3d.com/questions/182391/how-to-make-your-character-move.html
Opened 7 years ago
Closed 6 years ago

#12483 closed New feature (wontfix)

Error message when Model.__unicode__() returns None

Description

I have already run into the situation several times where the __unicode__ method of one of my models returned None in some case. This leads to a "TypeError: coercing to Unicode: need string or buffer, NoneType found" traceback. That's okay, but sometimes it is not easy to guess *which* model is wrong. So I took django/db/models/base.py and added the lines marked '+' below:

    def __repr__(self):
        try:
            u = unicode(self)
        except (UnicodeEncodeError, UnicodeDecodeError):
            u = '[Bad Unicode data]'
+       except TypeError, e:
+           raise TypeError("%s: %s" % (self.__class__, e))
        return smart_str(u'<%s: %s>' % (self.__class__.__name__, u))

Attachments (1)

Change History

comment:7: I'm -1 on this; if you dislike the error message, I highly recommend taking this up with upstream CPython (or your VM of choice, I suppose). As written, this will catch the issue in only some contexts (notably those that try to turn the model into a str() rather than unicode() directly).

comment:8: Closing as Alex requested.

New patch with tests and everything. I'm not completely sure whether raising a new exception on TypeError is worth it though -- maybe we lose precious traceback information...
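The failure mode is easy to reproduce outside Django. Python 3 has no unicode(), but __str__ returning None raises the same kind of opaque TypeError; the sketch below (class and helper names are mine, not Django's) shows the shape of the fix the patch proposes, re-raising with the offending class attached:

```python
class BrokenModel:
    """Mimics a model whose string method returns None in some case."""
    def __str__(self):
        return None  # bug: should return a string


def safe_repr(obj):
    # Sketch of the patch's idea: re-raise with the class attached,
    # so you can tell *which* class is wrong.
    try:
        return "<%s: %s>" % (type(obj).__name__, str(obj))
    except TypeError as e:
        raise TypeError("%s: %s" % (type(obj), e))


try:
    safe_repr(BrokenModel())
except TypeError as e:
    print(e)  # the message now names BrokenModel
```

Without the wrapping, the interpreter's own message mentions only NoneType, which is exactly the "which model is wrong?" problem the ticket describes.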
https://code.djangoproject.com/ticket/12483
There are plenty of continuous functions f such that f(f(x)) = x. Besides the trivial examples f(x) = x and f(x) = -x, one can take any equation that is symmetric in x and y and has a unique solution for one variable in terms of the other. For example: x^3 + y^3 = 1 leads to f(x) = (1 - x^3)^(1/3).

I can't think of an explicit example that is also differentiable, but implicitly one can be defined by x^3 + x + y^3 + y = 0, for example. In principle, this can be made explicit by solving the cubic equation for y, but I'd rather not.

At the time of writing, I could not think of any diffeomorphism f such that both f and f^(-1) have a nice explicit form. But Carl Feynman pointed out in a comment that the hyperbolic sine f(x) = sinh(x) has the inverse f^(-1)(y) = ln(y + sqrt(y^2 + 1)), which certainly qualifies as nice and explicit.

Let's change the problem to f(f(x)) = 4x. There are still two trivial, linear solutions: f(x) = 2x and f(x) = -2x. Any other? The new equation imposes stronger constraints on f: for example, it implies f(4x) = f(f(f(x))) = 4 f(x).

But here is a reasonably simple nonlinear continuous example: define

f(x) = 2^x for 1 <= x <= 2,    f(x) = 4 log2(x) for 2 <= x <= 4,

and extend to all x by f(4x) = 4 f(x) and f(-x) = -f(x). The result looks like this, with the line y = 2x drawn in red for comparison.

To check that this works, notice that 2^x maps [1,2] to [2,4], which the function 4 log2(x) maps to [4,8], and of course 4 log2(2^x) = 4x.

From the plot, this function may appear to be differentiable for x > 0, but it is not. For example, at x = 2 the left derivative is 4 ln 2 ≈ 2.77 while the right derivative is 2/ln 2 ≈ 2.89. This could be fixed by picking another building block instead of 2^x, but it's not worth the effort. After all, the property f(f(x)) = 4x is inconsistent with differentiability at 0 as long as f is nonlinear.

The plots were made in Sage, with the function f defined thus:

def f(x):
    if x == 0:
        return 0
    xa = abs(x)
    m = math.floor(math.log(xa, 2))
    if m % 2 == 0:
        return math.copysign(2**(m + xa/2**m), x)
    else:
        return math.copysign(2**(m+1) * (math.log(xa, 2)-m+1), x)
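The identity f(f(x)) = 4x can be checked numerically by composing the function above with itself; the snippet below just reproduces f with a plain math import so that it runs outside Sage:

```python
import math

def f(x):
    if x == 0:
        return 0
    xa = abs(x)
    m = math.floor(math.log(xa, 2))
    if m % 2 == 0:
        return math.copysign(2**(m + xa/2**m), x)
    else:
        return math.copysign(2**(m+1) * (math.log(xa, 2) - m + 1), x)

# f(f(x)) should equal 4x for every x, up to floating-point error.
for x in [0.5, 1.3, 2.0, 3.7, 5.0, -1.5, -10.0]:
    print(x, f(f(x)), 4*x)
```

The agreement is exact to floating-point precision, including at the non-differentiable gluing points like x = 2.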
https://calculus7.org/tag/inverse/
Simple Application to Zip and UnZip Files in C# Using J# Libraries

Introduction

Recently, I had to build a private online photo album application. I planned to give users the option to upload images as a zip file. I needed this option to be implemented in ASP.NET 1.1 because my hosting service provider didn't have .NET 2.0 support. An online search led me to a blog with the wonderful idea of using the zip support in J#, and it really attracted me. I wanted to try it out, and the results were pretty cool. Although this sample application was written in .NET 2.0, note that the .NET Framework 2.0 also has its own libraries for compressing and decompressing files. I will soon write an article illustrating the use of the .NET 2.0 libraries for compression and decompression.

In this article, I will explain the usage of the zip functionality of J# from C# code. The code in this application has been designed to be reused in a copy/paste fashion and not as a library.

Background

This application consumes J# classes internally. For this, you first must reference the J# .NET library. Physically, it resides in a file named vjslib.dll. If you are not sure how to reference a library in your project, follow these steps:

1. Right-click your project in Solution Explorer and click "Add Reference."
2. Select the .NET tab.
3. Scroll down and select "vjslib".
4. Click OK and you are there.

Now you can refer to the Java library classes within your application. In fact, this was the first time I tried to use the J# classes, and personally, it was a moment I will never forget in my programming life. I was thrilled by the prospect of using the whole power of the Java language within my C# programs (should the need arise).

Import the following namespaces for ease of coding:

using java.util;
using java.util.zip;
using java.io;

The java.util.zip namespace contains the classes and methods to implement the compress and decompress functionality within your code.
The main classes used from the above namespaces are these:

- ZipFile
- ZipEntry
- ZipOutputStream
- Enumeration

Programmatically, a ZipFile object can be considered equivalent to a physical zip file. A ZipFile can contain multiple ZipEntry objects apart from the actual content of the zipped files. In fact, each ZipEntry object is the metadata about a zipped file. The ZipOutputStream class represents a writable stream pointing to a zip file. This stream can be used to write ZipEntry objects and content to the zip file. Enumeration enables iteration through each element in a collection.

Using the Code

Following is a code listing to create a zip file.

private void Zip(string zipFileName, string[] sourceFile)
{
    FileOutputStream filOpStrm = new FileOutputStream(zipFileName);
    ZipOutputStream zipOpStrm = new ZipOutputStream(filOpStrm);
    FileInputStream filIpStrm = null;
    foreach (string strFilName in sourceFile)
    {
        filIpStrm = new FileInputStream(strFilName);
        ZipEntry ze = new ZipEntry(Path.GetFileName(strFilName));
        zipOpStrm.putNextEntry(ze);
        sbyte[] buffer = new sbyte[1024];
        int len = 0;
        while ((len = filIpStrm.read(buffer)) >= 0)
        {
            zipOpStrm.write(buffer, 0, len);
        }
        filIpStrm.close();
    }
    zipOpStrm.closeEntry();
    zipOpStrm.close();
    filOpStrm.close();
}

The above Zip() method accepts two parameters:

- zipFileName: the zip file name, including the path, and
- sourceFile: a string array of the file names that are to be zipped.

The FileOutputStream class is capable of writing content to a file. Its constructor accepts the path of the file to which you want to write. The FileOutputStream object is then supplied to an instance of the ZipOutputStream class as a parameter. The ZipOutputStream class represents a writable stream to a zip file. The foreach loops through each file to be zipped, creates corresponding zip entries, and adds them to the final zip file. Taking a deeper look into the code, a FileInputStream object is created for each file to be zipped.
The FileInputStream object is capable of reading from a file as a stream. Then a ZipEntry object is created for each file to be zipped. The constructor of the ZipEntry class accepts the name of the file; Path.GetFileName() returns the file name and extension of the specified path string. The newly created ZipEntry object is added to the ZipOutputStream object by using its putNextEntry() method. In fact, a ZipEntry merely represents the metadata of the file entry; you still need to add the actual content into the zip file. Therefore, you need to transfer data from the source FileInputStream to the destination ZipOutputStream. This is exactly what the while loop does in the piece of code above: it reads content from the source file and writes it into the output zip file. Finally, the closeEntry() method of the ZipOutputStream class is called, and this causes the physical creation of the zip file. All the other streams created are also closed.

private void Extract(string zipFileName, string destinationPath)
{
    ZipFile zipfile = new ZipFile(zipFileName);
    List<ZipEntry> zipFiles = GetZipFiles(zipfile);
    foreach (ZipEntry zipFile in zipFiles)
    {
        if (!zipFile.isDirectory())
        {
            InputStream s = zipfile.getInputStream(zipFile);
            try
            {
                Directory.CreateDirectory(destinationPath + "\\" +
                    Path.GetDirectoryName(zipFile.getName()));
                FileOutputStream dest = new FileOutputStream(
                    Path.Combine(destinationPath + "\\" +
                        Path.GetDirectoryName(zipFile.getName()),
                        Path.GetFileName(zipFile.getName())));
                try
                {
                    int len = 0;
                    sbyte[] buffer = new sbyte[7168];
                    while ((len = s.read(buffer)) >= 0)
                    {
                        dest.write(buffer, 0, len);
                    }
                }
                finally
                {
                    dest.close();
                }
            }
            finally
            {
                s.close();
            }
        }
    }
}
The Extract() method accepts two parameters: the zip file name (including the path) to be extracted, and the destination path where the files are to be extracted. It then creates a ZipFile object and retrieves the entries in the zip file using the GetZipFiles() method, which will be discussed later in this article.

The foreach loop iterates through all the entries in the zip file and, in each iteration, the entry is extracted to the specified folder. The code in the foreach loop executes only if the entry is not a folder. This condition is verified by using the isDirectory() method of the ZipEntry object. Each entry is read into an InputStream using the getInputStream() method of the ZipFile object. This InputStream acts as the source stream. The destination stream is a FileOutputStream object that is created based on the specified destination folder. Here, you use the getName() method of the ZipEntry object to get the file name (including the path) of the entry. During the extraction, the original folder structure is maintained. In the while loop that follows, content from the source InputStream is written to the destination FileOutputStream. The source stream is read into a buffer using the read() method: it reads up to 7 KB at a time into a temporary buffer, and then the write() method of the destination FileOutputStream writes that content out. It is in the finally block that follows that the destination FileOutputStream is closed and the content is physically written to disk.

private List<ZipEntry> GetZipFiles(ZipFile zipfil)
{
    List<ZipEntry> lstZip = new List<ZipEntry>();
    Enumeration zipEnum = zipfil.entries();
    while (zipEnum.hasMoreElements())
    {
        ZipEntry zip = (ZipEntry)zipEnum.nextElement();
        lstZip.Add(zip);
    }
    return lstZip;
}

The GetZipFiles() method returns a generic List of ZipEntry objects, taking a ZipFile object as an argument. The method creates a generic collection of the ZipEntry type. Now comes the use of an interesting feature of the Java language: the use of Enumeration. Note that it's not the enum type that you have in C#. An object that implements the Enumeration interface generates a series of elements, one at a time. Successive calls to the nextElement() method return successive elements of the series.
The hasMoreElements() method returns a boolean value indicating whether the Enumeration contains more elements. Here, the entries() method of the ZipFile class returns an Enumeration of ZipEntry objects. The code then iterates through the Enumeration and populates the List. Finally, the populated List is returned.

Apart from the above listed code, the downloadable source code for the sample application contains some extra code to handle the UI part of the application; in other words, the entries made to the ListBox and the handling of the progress bar. I haven't included them in this article because I didn't want to lose focus on the main objective of the article. The code is comprehensive, but the UI controls can be handled in better ways, keeping performance and usability in mind. One good option might be to keep your zip functionality separate from the UI thread, so that interactivity is well maintained.

Points of Interest

One interesting fact to note while using the Java library is the difference in naming conventions, especially the naming of methods; to C# coders, the method names look like variable names. The way the classes are named is also a distinguishing factor. Even though you can't use the features of Java beyond a certain extent, because the runtime decides the main advantages of a platform, you can still take advantage of the Java libraries. That gives C# an upper hand over Java. I wish Microsoft would continue supporting J#.

Password Protected Unzip
Posted by prashantww on 07/07/2008 08:25am

Hi, thanks for the code above to unzip a zip file, but how about unzipping a password-protected zip file?
Thanks, Prashant
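Under the covers these are the standard java.util.zip classes, so the same round trip can be sketched in plain Java for comparison. This is a minimal in-memory version of my own (the class and method names are made up, not code from the article's download):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipRoundTrip {
    // Compress one named entry into an in-memory zip archive.
    static byte[] zip(String entryName, byte[] content) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry(entryName));
            zos.write(content);
            zos.closeEntry();
        }
        return bos.toByteArray();
    }

    // Read the first entry back out of the archive.
    static byte[] unzipFirst(byte[] archive) throws IOException {
        try (ZipInputStream zis =
                 new ZipInputStream(new ByteArrayInputStream(archive))) {
            zis.getNextEntry();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[1024];
            int len;
            while ((len = zis.read(buffer)) >= 0) {
                out.write(buffer, 0, len);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] archive = zip("hello.txt", "hello zip".getBytes("UTF-8"));
        System.out.println(new String(unzipFirst(archive), "UTF-8"));
    }
}
```

The structure mirrors the C# code above entry for entry: putNextEntry(), a read/write loop, closeEntry(), and closing the streams to flush the archive.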
http://www.codeguru.com/csharp/.net/net_general/netframeworkclasses/article.php/c13593/Simple-Application-to-Zip-and-UnZip-Files-in-C-Using-J-Libraries.htm
Thank you for your answer. We decided, on our side, not to go further with the migration process and to stick with Axis1 for now, because:

- the session: the Flash web service classes do not support WS-Addressing (and we cannot add a custom SOAP envelope).
- complex beans with arrays would have made us use wsdl2java, which would have meant a major rewrite of the app (a dozen services, maybe 50 operations per service, a lot of beans, ...), and we lack the time to do all of this; we are sadly a small structure and cannot afford these kinds of major changes.

Thanks anyway.
Tom

On 5 avr. 08, at 16:43, Anne Thomas Manes wrote:

> I gather that your original WSDL is using RPC/encoded.
>
> Axis2 does not support SOAP encoding, and it is more rigorous about
> SOAP and WSDL conformance than Axis. According to the specifications,
> faults MUST be defined using document/literal; therefore, the fault
> message parts MUST reference elements rather than types.
>
> Try this:
>
> In the bindings, change all use="encoded" attributes to use="literal".
>
> In your types section, add an element definition for each fault
> message and define its type as the type used in the part definition,
> e.g., for this fault:
>
> <wsdl:message
> <wsdl:part
> </wsdl:message>
>
> define the following element:
>
> <xsd:element
>
> And modify the fault message definition like so:
>
> <wsdl:message
> <wsdl:part
> </wsdl:message>
>
> Anne
>
> On Tue, Apr 1, 2008 at 10:21 AM, Thomas Burdairon
> <tburdairon@entelience.com> wrote:
>> Thank you for your advice. I'm trying to generate java classes from
>> my old wsdl files, and while the java files are generated almost
>> without problems (I need to remove some use="encoded" so the
>> operation is OK), I cannot manage to generate java files from WSDL
>> with operations declared with faults.
>> ex :
>>
>> <wsdl:message
>> <wsdl:part
>> </wsdl:message>
>>
>> -> we get: Part 'fault' of fault message
>> '{urn:our.package.SoapGeography}OurCustomException' must be defined
>> with 'element=QName' and not 'type=QName'
>>
>> -> change to
>> But at this time we get some
>> Exception in thread "main"
>> OurCustomException with namespace
>> urn:our.package.SoapGeography
>>
>> that prevents the java classes from being generated, and I can't see
>> what's wrong.
>>
>> The command line used is:
>> ./build/axis2-1.3/bin/wsdl2java.sh -d xmlbeans -uri
>> ~/Desktop/wsdls/Geography.wsdl -ss -g -sd -o output -p our.package
>>
>> Another question while here:
>> In Axis 1.4, we were using the class
>> org.apache.axis.handlers.SimpleSessionHandler to manage the session.
>> In my first attempts, I used a soapsession for my services, so my
>> messages now contain some WS-Addressing parts.
>> The problem is that our clients are written in Flash ActionScript,
>> and webservice support is very limited on this platform; the
>> addressing namespace does not seem to be declared.
>>
>> Is there any simpler solution?
>> Has anybody ever used Flash with axis2 and session management?
>>
>> Thanks for your time!
>> Tom
>>
>> On 29 mars 08, at 14:14, Anne Thomas Manes wrote:
>>
>>> If you want to keep the WSDL the same, then I suggest you use the
>>> WSDL-first approach rather than the POJO approach. Take the WSDL
>>> from your Axis 1.4 service and use WSDL2Java to generate a new
>>> service skeleton.
>>>
>>> Anne
>>>> - Beans present in WSDL
>>>> Some of our services return different objects depending on the
>>>> parameters. To do so, we have a simple inheritance schema that
>>>> looks like:
>>>> interface A
>>>> object B implements A, object C implements A, ...
>>>> All the methods in the service return A, so by default the
>>>> generated WSDL only contains the definition of A.
>>>> In Axis1, there is an extraClasses parameter in the WSDL that we
>>>> were using to declare objects B and C. I couldn't find an
>>>> equivalent in Axis2 (in the service.xml). I've read
>>>> jira/browse/AXIS2-1056, but it seems to fix only java2wsdl and
>>>> this isn't what I am looking for. Is there any way to do it? Or,
>>>> in your opinion, would a patch be easy to write?
>>>>
>>>> - Beans description in WSDL for List: Some of the javabeans sent
>>>> or received by our webservices contain List.
>>>> One nice feature in axis1 was that java.util.List were converted
>>>> in the WSDL as type="impl:ArrayOf_xsd_anyType". Now, it looks like
>>>> I tried to convert some of them into arrays, just to see, and I get
>>>> maxOccurs="unbounded" minOccurs="0", which seems weird since it is
>>>> not explicitly said it is an array, but why not ...
>>>> The problem is I used to use List<Number> as type and it's
>>>> strangely deserialized. Is the Number type supported by Axis2? Is
>>>> there an official list of supported java types?
>>>>
>>>> thanks for your time and your answers
>>>>
>>>> ---------------------------------------------------------------------
>>>>
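Anne's suggested change earlier in the thread can be illustrated with a small WSDL fragment. This is only a sketch: the element/message names and the tns prefix are placeholders reconstructed from the error message quoted above, since the attributes of the original snippets were lost.

```xml
<!-- 1. In <wsdl:types>, add an element whose type is the one the
        fault part used to reference directly. -->
<xsd:element name="OurCustomException" type="tns:OurCustomException"/>

<!-- 2. Make the fault message part reference that element
        (element=QName) instead of the type (type=QName). -->
<wsdl:message name="OurCustomException">
  <wsdl:part name="fault" element="tns:OurCustomException"/>
</wsdl:message>
```

Together with switching the binding from use="encoded" to use="literal", this gives the document/literal fault definition that wsdl2java expects.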
http://mail-archives.apache.org/mod_mbox/axis-java-user/200804.mbox/%3C80E51EF1-B3FE-4D40-BB01-C78364330580@entelience.com%3E
RF 4.0 M4 SEVERE problem
Ilya Sorokoumov Jan 6, 2011 4:59 PM

This is the code of my xhtml page:

<h:form >
<h:inputText ></h:inputText>
<a4j:commandLink >
</a4j:commandLink>
</h:form>
<h:panelGroup >
<h:form >
<h:commandButton >
<h:commandLink >
</h:form>
</h:panelGroup>

And this is my TestBean class:

/**
 * @author sorokoumov
 */
@SessionScoped
@ManagedBean
public class TestBean {

    private String input;

    public String getInput() {
        return input;
    }

    public void setInput(String input) {
        this.input = input;
    }

    public String doSomething() {
        System.out.println("doSomething");
        return null;
    }
}

When I input something into h:inputText it shows me the second form. But when I press the button or the link on the second form, the method of testBean is not getting called and the page is just rerendered. If I press the button or the link a second time it works fine and calls the testBean method. Is it a bug in a4j or maybe I'm missing something?
P.S. I'm using GlassFish 3.0.1 and RF 4.0 M4.

1. Re: RF 4.0 M4 SEVERE problem
Ilya Sorokoumov Jan 6, 2011 5:30 PM (in response to Ilya Sorokoumov)

I face the same problem if I replace the a4j usage with f:ajax:

<h:commandLink >
<f:ajax >
</h:commandLink>
<h:commandLink >
<f:ajax >
</h:commandLink>

So I guess my problem is deeper. But I still can't find out where it is.

2. Re: RF 4.0 M4 SEVERE problem
Ilya Sorokoumov Jan 8, 2011 4:50 PM (in response to Ilya Sorokoumov)

I've also posted this problem on the java.net forum. The links follow below:

3. RF 4.0 M4 SEVERE problem
Ilya Sorokoumov Jan 11, 2011 10:55 AM (in response to Ilya Sorokoumov)

Also posted this problem here:

4. Re: RF 4.0 M4 SEVERE problem
Ilya Sorokoumov Jan 13, 2011 4:33 AM (in response to Ilya Sorokoumov)

Finally I got the answer on but it seems I don't like it. =) It looks like I can't completely re-render one form from another if it was not rendered on the page before. If I place all elements into one form it should work fine. It looks like a JSF 2.0 limitation. But at least I know how to avoid this problem.
If someone has any other thoughts, please, write them here.

5. Re: RF 4.0 M4 SEVERE problem
Jay Balunas Jan 12, 2011 1:54 PM (in response to Ilya Sorokoumov)

I think that the person at coderanch explained this well. Thanks for posting the link/info here.

6. RF 4.0 M4 SEVERE problem
Ilya Sorokoumov Jan 15, 2011 1:28 PM (in response to Jay Balunas)

Hi all,
Today I tried to troubleshoot this problem and found that this stuff <input type="hidden" name="javax.faces.ViewState" is causing the problem, because it is not present in a re-rendered form. I also found a couple of similar issues in the JSF JIRA:
So I hope that this is a bug and it's going to be fixed in future JSF 2.x releases.
Regards,
Ilya.

7. RF 4.0 M4 SEVERE problem
Cobus Stroebel Jan 21, 2011 6:19 AM (in response to Ilya Sorokoumov)

Hi Ilya
Any news yet? Have you found a workaround for this problem?
Regards

8. Re: RF 4.0 M4 SEVERE problem
Ilya Shaikovsky Jan 21, 2011 6:44 AM (in response to Cobus Stroebel)

workaround in "correct answers"
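The workaround accepted in the thread (keep everything in one form, so the re-rendered region always carries the current javax.faces.ViewState hidden field) can be sketched as follows. All ids are hypothetical; only the structure matters.

```xml
<h:form id="mainForm">
    <h:inputText id="input" value="#{testBean.input}">
        <f:ajax event="keyup" render="secondPart" />
    </h:inputText>
    <!-- The re-rendered region lives INSIDE the same form, so the
         ViewState hidden field is still present after the ajax update
         and the first click on the button reaches the bean method. -->
    <h:panelGroup id="secondPart">
        <h:commandButton value="Do something"
                         action="#{testBean.doSomething}"
                         rendered="#{not empty testBean.input}" />
    </h:panelGroup>
</h:form>
```

The key design point is that JSF 2.0 does not re-emit the ViewState field when a whole second form is ajax-rendered from outside it, which is exactly the symptom described above.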
https://developer.jboss.org/thread/160831?tstart=0
Ticket #20 (closed defect: Invalid)
Mode parameter seems to be ignored

Description

import sqlite
conn = sqlite.Connection('mydb', mode=0600)

> ls -l mydb
-rw-r--r-- 1 thor thor 0 Jun 11 14:21 mydb
>

No matter what I put for mode, the access permissions for file 'mydb' are always -rw-r--r--. It doesn't matter whether I populate the database, or if the file exists before the connection is created.

Both sqlite and sqlite-python were downloaded today:

> cat sqlite/VERSION
2.8.3
> ...
> ls pysqlite-0.4.3/
>

I run a recent Mandrake on an IBM T30 laptop:

> cat /etc/issue
Mandrake Linux release 9.1 (Bamboo) for i586
Kernel 2.4.21-0.13mdk on an i686 / \l
>

Change History

Note: See TracTickets for help on using tickets.
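The ticket concerns the old pysqlite 0.4 API, but the usual workaround (my own sketch, not part of the ticket) is to create the database file with the desired permissions before SQLite touches it, or simply to chmod it afterwards. A sketch using the modern sqlite3 module on a POSIX system:

```python
import os
import sqlite3
import stat
import tempfile

# Hypothetical path; the ticket uses 'mydb' in the current directory.
path = os.path.join(tempfile.mkdtemp(), "mydb")

# Create the file ourselves with mode 0600 before connecting;
# SQLite keeps the permissions of an already-existing database file.
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
os.close(fd)

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()
conn.close()

# Alternatively, fix the mode after the fact:
os.chmod(path, 0o600)

print(oct(stat.S_IMODE(os.stat(path).st_mode)))
```

Either step is enough on its own; the os.chmod call also makes the result independent of the process umask.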
http://oss.itsystementwicklung.de/trac/pysqlite/ticket/20
First published on TechNet on Feb 21, 2017 Hey Everybody! I am Jose Blasac, a Microsoft Premier Field Engineer, here with my first post on the world famous ASK PFE Platforms blog! I am super excited! I spend a lot of time working with System Center Configuration Manager and Windows 10. If you have done any work with Config Manager and Windows 10 Servicing, you will have noticed some of the pre-requisites like Heartbeat Discovery and WaaS Deferral GPOs (More on this later). By default, all Windows 10 systems are discovered as CB or Current Branch. (If you are not familiar with WaaS Concepts like CB or CBB, head over to our WaaS Quick Start Guide) Starting with Config Manager 1511 a pair of new attributes have been added to the DDR or the Heartbeat Data Discovery Record of Config Manager Clients. For our purposes, we are concerned with the OS Readiness Branch attribute as highlighted below. The OS Readiness Branch properties of a Computer Object can display the following values: So where does the Client discover this information? As I stated earlier, these attributes are now part of the DDR that is inventoried on Clients and copied up to the Management Point for Processing by the Primary Site. We can trace these activities on the client side via ConfigMgr Log files. The Client logs Discovery actions in the InventoryAgent.log found in %windir%\CCM\Logs folder. After manually initiating a DDR cycle let's follow the action. If we drill down through the InventoryAgent.log to see what items were discovered and inventoried, we can see the following WMI query with a particular property of Interest! So, what is the OSBranch property all about and what values are we potentially looking for? If we launch the good old Wbemtest utility, we can test this WMI query for ourselves! Right Click on Start, Run, Type in Wbemtest and Launch the Utility. Hit Connect and Attach to the Root\ccm\invagt namespace. We can take part of the query above to peek into the OSBranch Property. 
As you can see above, we have an integer value of 1. This system is considered a CBB client. The OSBranch Property has the following possible integer values: As we continue to piece this together, what is the Client Discovery routine looking for to decide what value to set the OSBranch Property to? Now I happened to have read the documentation on configuring Windows Update for Business which is here. So technically I already know what Registry Keys need to be set. (I am doing all my testing in this blog on Windows 10 1607) If we scroll down the page to the section titled "Configure Devices for Current Branch (CB) or Current Branch for Business (CBB)" we can see the Release branch policies and how to configure them for either Windows 10 1607 or Windows 10 1511. Here is a snippet of that Table. With that said, we still have the ability to trace this for ourselves and observe the system behavior. Let's resort to one of my favorite tools, Process Monitor. Chances are you have used this in the past, but just in case you can go over to and grab it! Prior to initiating a DDR discovery cycle I will launch Process Monitor. The DDR cycle runs quickly, so I will pause the trace after approx. 30 seconds. Then I begin to search for key terms. In this case I used the term "Branch". Bingo!! The first hit takes us right to the relevant Registry key. We can see the RegQuery being performed by the WMI Provider host process, but let's dig deeper and see who is initiating the actions. Double click on the highlighted line item and pull up the Event Properties dialog box. Let's go to the Stack tab to view this thread's stack activity. Without getting too nerdy, we can see some Config Manager activity once we walk up the stack. The ddrprov.dll belongs to the Config Manager Client DDR Provider as detailed below. Phewww, ok so now what?
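As a small side aid (my own sketch, not from the original post), the branch integers seen in WMI and in the site database can be mapped to readable names. The 0 = CB and 1 = CBB values come from the walkthrough above; the LTSB value of 2 is my assumption based on the product documentation.

```python
# OSBranch / OS Readiness Branch integer values, as discussed above.
OS_BRANCH = {
    0: "Current Branch (CB)",
    1: "Current Branch for Business (CBB)",
    2: "Long-Term Servicing Branch (LTSB)",  # assumed value
}

def branch_name(value):
    """Translate an OSBranch integer into its WaaS branch name."""
    return OS_BRANCH.get(value, "Unknown")

# The wbemtest query above returned OSBranch = 1 for a CBB client:
print(branch_name(1))
```

The same lookup works for the integer stored in the v_R_System SQL view shown later in the post.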
Knowing how Config Manager Discovers and identifies WaaS in Windows 10 can be very helpful once we start to play with things like Windows 10 Servicing Plans or trying to make sense of the Servicing Dashboard. Etc We can create collections based on some of these Attributes and create Deployment 'Rings'. For example, you could create collections based on OS Build or OS Readiness Branch. Some of the possible integer values when setting up your Query. We could also run SQL reports and queries against the Config Manager database to identify systems. The SQL view of interest is v_R_System. This contains the attributes like OS Build and OS Readiness Branch. Here is an example query and result: As you can see the Branch value is also an integer in the Database. As we should have all mastered by now, Current Branch or CB is 0 and so on and on…. Well, I hope you have enjoyed this little exercise on identifying WaaS systems in your environment using System Center Configuration Manager. Till next time!! Jose Blasac
https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/identifying-waas-systems-using-config-manager/ba-p/258989
Opened 3 years ago
Closed 3 years ago

#22594 closed Bug (fixed)

Content Type framework does not trigger Cascade delete

Description

I've posted this question in StackOverflow () and given it looks like a bug, I've decided to post it here. Suppose the following models:

class DeltaCheck(models.Model):
    logs = generic.GenericRelation('Log')
    title = models.CharField(max_length=50)
    owner = models.ForeignKey(User)

class Log(models.Model):
    title = models.CharField(max_length=50)
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')

If I create a DeltaCheck and a couple of Logs and then delete the DeltaCheck, the Logs are deleted as well:

In [7]: Log.objects.count()
Out[7]: 10

In [8]: DeltaCheck.objects.get().delete()

In [9]: Log.objects.count()
Out[9]: 0

BUT if I delete the User (the field owner), the DeltaCheck gets deleted BUT not the Logs, look:

In [14]: Log.objects.count()
Out[14]: 10

In [15]: DeltaCheck.objects.get().owner.delete()

In [16]: DeltaCheck.objects.all()
Out[16]: []

In [17]: Log.objects.count()
Out[17]: 10

Is that proper behavior? I don't think so.

Change History (5)

comment:1 Changed 3 years ago by

comment:2 Changed 3 years ago by

I'm able to reproduce the same issue with a similar database schema.
- django 1.6.1
- postgresql 9.1

As a workaround I've defined a pre_delete signal (but NOT empty as Jose says):

from django.db.models.signals import pre_delete
from django.dispatch import receiver

@receiver(pre_delete, sender=User)
def pre_delete_receiver(sender, instance, **kwargs):
    ## CODE TO DELETE RELATED OBJECTS
    for delta in instance.deltachecks.all():
        delta.logs.all().delete()

comment:3 Changed 3 years ago by

Hi,
I can also reproduce this issue (and indeed, having an empty signal fixes the issue). For reference, cascade deletion of generic foreign keys is documented and only works if you have a reverse GenericRelation [1] (last two paragraphs).
The key to this issue probably lies in the fast_delete method [2] (note for example the line 126 that explains why things behave differently if there is a signal listening or not). Thanks. [1] [2] Wait... If I define an empty signal receiver, it works! (????) I've just added: And now the deletion is performed...
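The fast_delete observation above can be illustrated with a schematic, pure-Python toy (my own sketch, not Django's actual code): the collector only takes the row-level fast path when no pre_delete receiver is listening, and the fast path never walks generic relations, which is why an even empty receiver changes the outcome.

```python
# Toy model of the delete-collector decision described above. With a
# pre_delete receiver attached, objects are collected individually,
# which also pulls in their generic children; the fast path issues a
# bulk delete of the row only.

class ToyCollector:
    def __init__(self, receivers):
        self.receivers = receivers      # registered pre_delete receivers
        self.deleted = []

    def can_fast_delete(self, obj):
        # Mirrors the idea that listening signals disable the fast path.
        return not self.receivers

    def delete(self, obj, generic_children):
        if self.can_fast_delete(obj):
            self.deleted.append(obj)    # bulk delete; children skipped
        else:
            for receiver in self.receivers:
                receiver(obj)           # fire pre_delete
            self.deleted.extend(generic_children)
            self.deleted.append(obj)

# Without a receiver, the generic children ("logs") survive:
c = ToyCollector(receivers=[])
c.delete("user", generic_children=["log1", "log2"])
print(c.deleted)                        # ['user']

# With an (even empty) receiver, they are collected too:
c = ToyCollector(receivers=[lambda obj: None])
c.delete("user", generic_children=["log1", "log2"])
print(c.deleted)                        # ['log1', 'log2', 'user']
```

This is only the shape of the behaviour, not its implementation; the real logic lives in Django's Collector.can_fast_delete.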
https://code.djangoproject.com/ticket/22594
Master of Science (MSc) in Engineering Technology Electronics-ICT

Proceedings of MSc thesis papers electronics-ict

Academic year

Content

INTRODUCTION
JAVA-BASED LOCAL STREAMING OF MULTIMEDIA FILES - BY BEL SANDER
0.7ΜM DIGITALLY CONTROLLED SWITCHED CAPACITOR OSCILLATOR FOR A RESONANCE TRACKING ULTRASOUND TRANSMITTER BY BLOCKX BERT
MOTION CONTROL OF AN ARM ORTHOSIS BY EMG SIGNALS THROUGH ANN ANALYSIS BY BUYLE JOACHIM
DESIGN OF A DISCRETE TUNABLE ULTRA-WIDEBAND PULSE GENERATOR FOR IN VIVO DOSIMETRY BY COOLS KRIS
POSITIONING TECHNIQUES FOR LOCATION AWARE PROGRAMMING ON MOBILE DEVICES - BY DE MAEYER MATTHIAS
FIXED-SIZE KERNEL LOGISTIC REGRESSION: STUDY AND VALIDATION OF A C++ IMPLEMENTATION BY GEUKENS ALARD
UNIFORMITY TESTS AND CALIBRATION OF A SCINTILLATOR COUPLED CCD CAMERA USING AN AMBE NEUTRON SOURCE BY HAMBSCH LORENZ
ULTRA-WIDEBAND ANTIPODAL VIVALDI ANTENNA ARRAY WITH WILKINSON POWER DIVIDER FEEDING NETWORK BY JANSSEN KAREL
IDENTIFICATION OF DIRECTIONAL CAUSAL RELATIONS IN SIMULATED EEG SIGNALS - BY KEMPENEERS KOEN
REINFORCEMENT LEARNING USING PREDICTIVE STATE REPRESENTATION BY KLESSENS GREG
A 1.8V 12-BIT 50MS/S PIPELINED ADC BY LIEVENS BART
ACCESSING ENTERPRISE BUSINESS LOGIC IN A MOBILE WAREHOUSE MANAGEMENT SYSTEM USING WCF RIA SERVICES BY MOLENBERGHS XANDER AND JONAS RENDERS
AN OPTIMAL APPROACH TO ADDRESS MULTIPLE LOCAL NETWORK CLIENTS - BY MOTMANS TIM
AN ESTIMATE OF THE PATIENT'S WEIGHT ON AN ANTIDECUBITUS MATTRESS USING PIEZORESISTIVE PRESSURE SENSORS BY MOUJAHID ABDERAHMAN
PRACTICAL USE OF ENERGY MANAGEMENT SYSTEMS BY REYNDERS JEROEN AND SPELIER MATTHIAS
EXECUTION TIME MEASUREMENT OF A MATHEMATIC ALGORITHM ON DIFFERENT IMPLEMENTATIONS BY SALAERTS FREDERIC
NORMALIZATION AND ANALYSIS OF DYNAMIC PLANTAR PRESSURE DATA - BY SCHOTANUS BERND
EVOLUTION TO A PRIVATE CLOUD USING MICROSOFT TECHNOLOGIES BY SELS CARSTEN
CREATION OF 3D MODELS BY MATCHING ARBITRARY PHOTOGRAPHS BY SOLBERG STEVEN
CONCEPTUAL DESIGN OF A RADIATION TOLERANT INTEGRATED SIGNAL CONDITIONING CIRCUIT FOR RESISTIVE SENSORS BY STERCKX JEF
REINFORCEMENT LEARNING WITH MONTE CARLO TREE SEARCH - BY VALGAEREN KIM
STRING COMPARISON BY CALCULATING ALL POSSIBLE PARTIAL MATCHES BY VAN DEN BOSCH RUUD
BEAMFORMING AND NOISE REDUCTION FOR SPEECH RECOGNITION APPLICATIONS BY VAN DEN BROECK BERT
REINFORCEMENT LEARNING: EXPLORATION AND EXPLOITATION IN A MULTI-GOAL ENVIRONMENT BY VAN HOUT PETER
SERVICE-ORIENTED ARCHITECTURE IN AN AGILE DEVELOPMENT ENVIRONMENT - BY VAN KETS NIELS
DETECTION OF BODY MOVEMENT USING OPTICAL FLOW AND CLUSTERING BY VAN LOOY WIM
A COMPARISON OF VOLTAGE-CONTROLLED RING OSCILLATORS FOR SUBSAMPLING RECEIVERS WITH PS RESOLUTION BY VAN ROY STIJN
TESTING AND INTEGRATING A MES BY VANDYCK GERT
NOSQL BEYOND SCALABILITY: EVALUATION OF COUCHDB FOR ADMINISTRATIVE APPLICATIONS - BY VERSCHUEREN MICHIEL
MODEL TREES IN REINFORCEMENT LEARNING BY WOLPUT JAN

Introduction

We are proud to present you this second edition of the Proceedings of M.Sc. thesis papers from our Master students in Engineering Technology: Electronics-ICT. Thirty-two students report here the results of their research. This research was done in companies, research institutions and our department itself. The results are presented as papers and collected in this text, which aims to give the reader an idea about the quality of the student-conducted research. Both theoretical and application-oriented articles are included. These proceedings can be downloaded from menu item onderzoek and Wetenschappelijke papers.

Our research areas are:
- Electronics
- ICT
- Biomedical technology

We hope that these papers will give the opportunity to discuss with us new ideas in current and future research and will result in new ways of collaboration.
The Electronics-ICT team
Patrick Colleman, Tom Croonenborghs, Joan Deboeck, Guy Geeraerts, Peter Karsmakers, Paul Leroux, Vic Van Roie, Bart Vanrumste, Staf Vermeulen

Java-based local streaming of multimedia files (June 2011)

S. Bel

When playing media from a remote device, different solutions exist at the day of writing. Streaming is one of those possibilities and can be implemented without high demands on hardware and network structures. Thanks to the variety of streaming, different streaming protocols can be used to suit multiple situations, like in this case HTTP for the local area network. Implementing this concept can be done using Java and an available library found in the large online community supporting this programming language. The concerning library is called vlcj and is well known due to the popular VLC media player, of which it is a descendant.
Since this includes choosing a programming language, a requisite will be the ability to develop one program that suites to all (or almost all) different platforms existing the day of writing. Furthermore the application needs to be simple in layout and be able to reach a large audience. This requisite can be fulfilled by using the streaming concept. With this you build up a structure of different peers, with by example one peer holding the required multimedia file to play along the different other peers. Streaming allows you to host the media with a range of different protocols (by example HyperText Transfer Protocol, HTTP) and this stream can be opened by the other peers. An easy solution that can be implemented with the technologies described in this paper. When combining the two requisites and the streaming concept, the research points us to the direction of Java in combination with streaming libraries. In the sections below Java bindings for VLC media player, or vlcj, will be discussed, because this is the most suitable answer to the questions posed in this research topic. This topic will be discussed first with a short explanation about the prerequisites of the network structure and what is included into the research and what is considered implementing the solution. Afterwards the choice of the programming language is defended with the advantages Java offers. This will be followed by the streaming concept, how streaming in a local area network takes place and which protocols that can be used. Based upon this section the paper continues with the Java bindings for VLC media player (vlcj), the used streaming library. To conclude then, the proposed solution is tested by using a test package provided by vlcj and a global conclusion is made. II. PREREQUISITES Before starting to describe the problems and solutions of the research, it is important to know what is included in research area of this paper and what is not. 
The objective is to provide an application that is able to host a multimedia file and to play this file remotely on another peer in the network. When we do the comparison with the different layers in the OSI (Open Systems Interconnection) model, we can conclude that the objective of this paper covers only the top layers of the communication process. The bottom layers will be outside the boundaries of the research area and are setup and maintained by protocols that will not be discussed here. This contains the setup of the different connections, which in most cases will be done by TCP/IP, the 1 8 maintaining of the connections between the clients, etc. The network structure containing different switches and peers needs to be in full working order and will not be a requirement of the developed solution. Aside from all the advantages of the Java programming language itself, the online community supporting Java is very large. Thus developing in Java can be made a lot easier when consulting these resources, especially when using developed libraries, which has been eagerly have used in this paper. In a manner of fact, the utilized library in this paper is namely an example of this. The library will be addressed in the following sections. Fig. 1. The local area network structure that makes implementation of streaming flexible, no centralization of data is necessary, because no high performance server is mandatory for setting up the streaming concept. III. PROGRAMMING LANGUAGE An important fact in the development of streaming applications is that it needs to be cross-platform. The multimedia files provided will be hosted on different peers and possible with different Operating Systems. To overcome this obstacle the choice for the programming language will be determined by the cross-platformability. A different range of languages exist that have the possibility to develop applications that are cross-platform. 
Java is an example of such a language and will be chosen, because it not only provides cross-platformability, it also maintains the stability of the applications, what can be an issue with this kind of programming. Java utilizes the Java Virtual Machine (JVM) which includes WORA or write once, run anywhere and is well-implemented and even features automatic exception handling, which provides information about the error independent of the source, making debugging a lot easier. The JVM can execute CLASS or JAR files and moreover nowadays Just-In-Time (JIT) compiling is used to achieve greater speed and overcome the overhead mentioned in the previous section. Also resource utilization is kept low with using Java. Applications written in Java have an automatic garbage collection system while executing, so keeping needed resources down and eliminate the need to dispose created objects or pointers. Another advantage is the small footprint of a Java application, for the developed solution the size is less than 100 kilobytes. For more information about the Java programming language, consult [1] [2]. IV. STREAMING CONCEPT For playing multimedia that is stored remotely streaming is a solid solution. With streaming one is the streaming provider and the other side are the streaming receivers. Since one stream can be opened by multiple receivers it has become a popular solution for digital video internet applications. Aside from this usage, it is also very suitable for the local area network. Streaming multimedia files can be described as opening a file and making it accessible for others by using different protocols. Supported protocols are HTTP, Real Time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP), User Data Protocol (UDP), etc. The advantage of the different supported protocols is that you are able to choose the protocol that is suitable for each situation. In the case of local area network streaming, the utilized protocol will be HTTP. 
HTTP in namely well supported and as you ll see later on the chosen Java library support this as well. Fig. 2. Simple visualization of the streaming concept with the HTTP protocol. When opening a stream, the streaming provider needs to convert the multimedia file in a (HTTP) stream that can be opened by others. This conversion does not require much processing power and every notebook or desktop is capable for doing this conversion. However when the multimedia file 2 9 uses a rare codec, one that is not supported by the streaming provider application, the streaming provider needs to convert this file on the fly to a format supported by the application and demanding a lot of resources. This problem can be encountered when using the DLNA protocol [3] [4]. V. JAVA BINDINGS FOR VLC MEDIA PLAYER When using streaming in combination with the Java programming language, the possibilities for streaming libraries are numerous. One example is the Java Media Framework (JMF), a big heavy-weight library that includes a lot of extra components that are not needed in this research. Another drawback is that JMF relays on native libraries, which in case of a mismatch need to be installed on the host machine [5]. After researching the different available libraries, one solution looked very appealing. The Java bindings for VLC media player are namely based upon the existing VLC media player, which is a very popular media player the day of writing. A couple of advantages made this solution the most suitable: good codec support, flexible and lightweight, streaming support with multiple protocols (HTTP included) and a large online community. All supported codecs can be found on the VLC media player website [6]. For development and testing purposes vlcj proposes a test package with most features of the libraries included. With this solution developers are able to evaluate the libraries before implementing them. The testing of the solution will also be based upon this available test package. 
VI. PRACTICAL IMPLEMENTATION The vlcj libraries will be implemented for the streaming provider and the streaming receiving side using the HTTP protocol. The provider side requires the most development time due to the fact that the multimedia files needs to be selected and the streaming needs to be set up. The receiving side will require only a few lines of codes considering the few actions which need to be implemented. The receiver side will only need to open of the broadcasted stream. The streaming provider will mainly be using two vlcj classes: HeadlessMediaPlayer and MediaPlayerFactory. The following example code contains the setup of the stream. 1 String[] libvlcargs = {<<arguments of VLC>>}; 2 MediaPlayerFactory mediaplayerfactory = new MediaPlayerFactory(libvlcArgs); 3 HeadlessMediaPlayer mediaplayer = mediaplayerfactory.newmediaplayer(); Listing 1. Streaming provider code for setting up the necessary classes. MediaPlayerFactory is used to create the class containing media player, together with arguments concerning the play back of media. After the MediaPlayerFactory the creation of HeadlessMediaPlayer is realised, this will be the media player class able to stream the multimedia file String mediaopt = :sout=#duplicate{dst=std{ access=http,mux=ts,dst=<<ip:portnr>>}} ; mediaplayer.setstandardmediaoptions(mediaopt); mediaplayer.playmedia(<<media file>>); Listing 2. Streaming provider code for opening the multimedia file and initiating the stream using the HTTP protocol. Now the HeadlessMediaPlayer class has been giving the media options to stream the multimedia file using the HTTP protocol and with mediaplayer.playmedia() the streaming is started. The IP address that is used as a media option will determine who can open the network stream, it is possible to use a broadcast address, so that the stream can be opened by multiple receivers. The port number decides on which port the stream will be available and this port must be known by the receiver side. 
The provider side of the streaming is setup and if the media options are set correctly, it is now easy for the receiving side to open the stream with the vlcj library String[] libvlcargs = {<<arguments of VLC>>}; MediaPlayerFactory mediaplayerfactory = new MediaPlayerFactory(libvlcArgs); HeadlessMediaPlayer mediaplayer = mediaplayerfactory.newmediaplayer(); String mediaopt = <<options for playback>> ; mediaplayer.setstandardmediaoptions(mediaopt); mediaplayer.playmedia(<<ip:portnr>>); Listing 3. Streaming receiver code to open the HTTP stream setup by the streaming provider. At first the client sets up the required class to be able to open the stream, the same as the server, except for the media options. The last code line is the most important here, it opens the streamed multimedia file. The provider and the receiver side are implemented in an easy to understand way using the vlcj library. When implementing the local streaming with different protocol options, the information can be found on the vlcj developer s website [7] [8]. This paper will now continue with the testing of the Java bindings for VLC media player and finish off with a conclusion. VII. TESTING AND CONCLUSION The previous section talked about the usage of the vlcj library, this section will show the results of the test and conclude the research. As testing methods, the standard VLC Media Player is used in combination with the test package provided by vlcj. Like mentioned in the previous section vlcj has a vlcj test package that includes most of the features available in the vlcj library. Since streaming with HTTP is included in this package as well, the testing will be based upon this provided solution. The test setup is a Windows 7 streaming provider together with a Windows XP Virtual Machine (using Microsoft Virtual PC). The Windows 7 machine will be used to set up the streaming of an.mp4 file encoded with x264 (Matroska) and MPEG AAC for audio. 
The virtual Windows XP will be using the vlcj test package to open the stream with the HTTP protocol. The communication between the 2 applications will 3 10 be done over the local area network, using a virtual network adapter. In figure 3 the setup of the test approach can be seen. At the left it is showing the VLC Media Player on the Windows 7 machine that is streaming the multimedia file and on the right the Windows XP Virtual Machine with the Java test application that will open the stream using HTTP. The results fulfilled the expectation of vlcj. The requirements posed in the first section were local streaming, large codec support and low processing power. Local streaming was successful and verified by testing with the test application. Large codec support can be confirmed by using a more unusual codec to stream over the local network. Also the processing power was very low when streaming the multimedia file. The processor was an Intel T6670 Dual Core and stayed lower than 7% for streaming the multimedia file over a period of 40 minutes. Java together with the vlcj library is a solid streaming solution that can be used in the local area network. Even when implementing this across the World Wide Web is possible with HTTP as a streaming protocol. For playing multimedia files on a remote computer, streaming is a flexible way of solving this problem. The streaming provider is not required to have high performance hardware, since the streaming only demands low processing power. The streaming receiver opens the streams when it is allowed access to the stream and when it knows the IP address and port number of the streamed media. Java as programming language made it possible to program for multiple platforms together and creating an application that is cross-platform. This in combination with the JIT compiling and the JVM, Java was stable, high in performance and versatile at the same moment. 
When implementing an application with Java, libraries are available from the large online Java community. The Java bindings for VLC media player were found in this community and gave great results. First of all, the codec support covers almost all codecs available nowadays, and secondly the implementation went without problems. After using the test package provided by vlcj, the conclusion was very clear: the combination of streaming and Java, while making use of vlcj, makes a solid base for streaming multimedia along the local area network.

REFERENCES
[1] Tim Lindholm and Frank Yellin, "The Structure of the Java Virtual Machine," in The Java Virtual Machine Specification, 2nd ed., Addison-Wesley, 1999, ch. 3.
[2] Oracle Technology Network - Java. Available online (accessed June).
[3] About Digital Living Network Alliance - White Paper. Available online (accessed June).
[4] Why do I hate DLNA protocol so much? Available online (accessed August).
[5] Osama Tolba, Hector Briceño and Leonard McMillan, "Pure Java-based Streaming MPEG Player," unpublished.
[6] VLC Features Formats - Codec Support. Available online (accessed November).
[7] Java Bindings for VLC Media Player. Available online (accessed June).
[8] vlcj - Java bindings for the VLC Media Player. Available online (accessed June).

Fig. 3. Streaming test setup using a Windows 7 machine (host machine) and a Windows XP Virtual Machine. At the left, the VLC media player in the Windows 7 system, which streams the multimedia file. At the right, the vlcj test package in the Windows XP Virtual Machine opening the stream with the HTTP protocol and using the IP address and port number of the streaming provider.

0.7µm Digitally Controlled Switched Capacitor Oscillator for a resonance tracking ultrasound transmitter
B. Blockx 1, W. De Cock 2, P.
Leroux 3,4
1 Katholieke Hogeschool Kempen, Departement Industriële en biowetenschappen, Geel, Belgium
2 SCK-CEN, Belgian Nuclear Research Centre, Advanced Nuclear Systems Institute, Mol, Belgium
3 Katholieke Hogeschool Kempen, Departement Industriële en biowetenschappen, Geel, Belgium
4 Katholieke Universiteit Leuven, Departement ESAT-MICAS, Leuven, Belgium

Abstract - A digitally controlled oscillator is presented in this paper. The oscillator is part of a robust PLL transmitter which is used for ultrasonic visualization in the MYRRHA reactor. It has a quiescent frequency of 5 MHz, a tuning range of 4-6 MHz and a power consumption of 15 mW.

Index Terms - Oscillator, DCO, Band-pass filter, PLL

I. INTRODUCTION

Oscillators are used in a wide variety of applications. The oscillator presented here is designed for use in a robust PLL stabilized transmitter for ultrasonic visualization in the MYRRHA reactor (Multi-purpose hYbrid Research Reactor for High-tech Applications). The reactor serves as a research tool to study the effectiveness of the transmutation of minor actinides. Because the liquid cooling in the reactor is done with lead-bismuth (LBE) at high temperatures (200 °C), which is opaque, optical inspection of the reactor is not possible. Therefore visual inspection is done using ultrasonic waves. The transducers [1] used to send and receive the ultrasonic waves, which are specially designed for use in harsh conditions (high temperature, the corrosive nature of LBE and strong gamma radiation), have a resonant frequency of 5 MHz. In a regular ultrasound transmitter [2] the transducer is driven by a short Dirac pulse, to which the transducer responds with an oscillation burst at the resonance frequency (its impulse response). The aim is to detect and digitize this pulse with high accuracy in a band-pass Delta-Sigma receiver centered around this resonance frequency.
The receiver bandpass frequency needs to be tunable as the transducer may suffer rapid aging under the harsh conditions presented above, which may affect the resonance frequency. In order to be able to automatically tune this receiver, in our transmitter design the transducer will be driven with a pulse at its resonance frequency. This resonance frequency will be measured continuously using a Phase Locked Loop. A copy of the Digitally Controlled Oscillator in the PLL can be used as resonator in the Delta-Sigma loop which ensures accurate tuning if both VCO and resonator are controlled with the same digital control word. This work will focus on the design of this oscillator. The oscillator is implemented with switched capacitors. This technique has several advantages [3] [4] [5] in the design of programmable oscillators and filters. They can be fully integrated, have high accuracies and stability, operate over a wide frequency range, are small in size and have good temperature stability. The primary advantage is that the quality factor Q and center frequency are independently variable [6]. The oscillator presented in this paper consists of a band-pass filter with a hard limiting feedback [7]. Frequency and amplitude of the resulting oscillation can be controlled separately. This is explained in the next section. As mentioned earlier the oscillator needs to work under high temperature conditions. This puts certain constraints on the design of the operational amplifiers. The use of Complementary Metal-Oxide-Semiconductor (CMOS) allows fabrication on a single die and offers low power consumption. At high temperatures conventional bulk CMOS has many drawbacks including increased junction leakage, reduced mobility and unstable threshold voltages [8]. Excess leakage current is the most serious problem which can reduce circuit performance and eventually can destroy the chip due to latchup. 
Despite these issues, an instrumentation amplifier in bulk CMOS has been demonstrated at temperatures reaching 300 °C [9]. When MOS is compared with other structures like JFETs or BJTs, the insulated gate makes MOS devices more suitable for high temperatures. The gate in a JFET and the base in a BJT are formed by a junction. Each of these junctions is in effect a diode, resulting in leakage currents and degraded biasing.

II. PLL CONTROLLED ULTRASOUND TRANSCEIVER

Because of the harsh conditions inside the reactor, the transducer suffers from rapid aging, with shifting of the resonant frequency as a result. The efficiency of the transducer is reduced under these conditions, resulting in a reduced measurement distance or an increased power requirement. To counter this problem the resonant frequency of the transducer is measured and tracked with a PLL. In the resonance frequency region the ultrasonic transducer can be described using the Butterworth-Van Dyke model [10] in figure 1. This model describes the mechanical part (Rs, Ls, Cs) and the electrical part (the clamping capacitor C0) of the transducer.

Figure 1: Butterworth-Van Dyke model

The derived ultrasonic transducer input impedance, the motional branch in parallel with the clamping capacitor, is given by

Z(s) = (Rs + sLs + 1/(sCs)) (1/(sC0)) / (Rs + sLs + 1/(sCs) + 1/(sC0))    (1)

Figure 2: Modulus and phase of the input impedance of the equivalent transducer model

The PLL uses this natural behavior of the transducer to measure and track the resonant frequency of the transducer. This concept is translated into the block diagram in figure 3.
From the previous equation we get the resonance frequencies for the serial and the parallel resonance (or anti-resonance):

Serial resonance frequency: fs = 1 / (2π √(Ls Cs))    (2)
Parallel resonance frequency: fp = 1 / (2π √(Ls Cs C0 / (Cs + C0)))    (3)

In [11] component values are proposed for an equivalent transducer model with an operating frequency of several MHz. With a serial resonance frequency of 5 MHz the following component values are calculated: Ls = 1 µH, Cs = 1 nF, Rs = 200 mΩ, C0 = 100 nF. These component values result in a serial resonant frequency of 5.03 MHz and a parallel frequency of 5.058 MHz. The simulation in figure 2 displays the modulus and the phase of the transducer input impedance in (1). This figure clearly shows that when the transducer resonates, the phase becomes almost zero. At this resonance frequency the reactances of Ls and Cs cancel each other out. The parallel resonance frequency is due to the clamping capacitor.

Figure 3: PLL block diagram

When resonating, the current through the transducer is theoretically in phase with the output voltage of the oscillator. This current is converted into a voltage, which is then compared with the output voltage of the oscillator in the phase detector. The difference between them is integrated by the loop filter, which drives the controlled oscillator. The oscillator adjusts the output frequency until the phase difference between the current through the transducer and the output signal of the oscillator is zero. The phase detector is a three-state phase detector which has a linear range of 2π radians. The loop filter is implemented as a charge pump type integrator with a finite zero gain, which results in a second order, type 2 PLL. The VCO has a quiescent frequency of 5 MHz, an input sensitivity of 250 kHz/V and an output amplitude of 1 V. Figure 4 shows the simulation results of the block diagram in figure 3. At the beginning of the simulation the current through the transducer is not in phase with the output signal of the oscillator.
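As a numerical cross-check of the Butterworth-Van Dyke relations, the following Python sketch evaluates the impedance model and the two resonance formulas. The component values Ls = 1 µH, Cs = 1 nF, Rs = 0.2 Ω and C0 = 100 nF are illustrative assumptions, chosen here because they reproduce the 5.03 MHz and 5.058 MHz resonances stated in the text; this is not the paper's simulation code.

```python
import numpy as np

# Illustrative (assumed) component values of the transducer model.
L_s, C_s, R_s, C_0 = 1e-6, 1e-9, 0.2, 100e-9

# Series and parallel resonance frequencies, equations (2) and (3).
f_series = 1.0 / (2 * np.pi * np.sqrt(L_s * C_s))
f_parallel = 1.0 / (2 * np.pi * np.sqrt(L_s * C_s * C_0 / (C_s + C_0)))

def transducer_impedance(f):
    """Equation (1): motional branch in parallel with the clamping capacitor."""
    s = 2j * np.pi * f
    z_motional = R_s + s * L_s + 1.0 / (s * C_s)
    z_clamp = 1.0 / (s * C_0)
    return z_motional * z_clamp / (z_motional + z_clamp)

# Sweep the impedance modulus around resonance, as in figure 2.
freqs = np.linspace(4.9e6, 5.2e6, 20001)
mag = np.abs(transducer_impedance(freqs))
f_min = freqs[np.argmin(mag)]  # impedance minimum near the series resonance
f_max = freqs[np.argmax(mag)]  # impedance maximum near the parallel resonance
```

The impedance magnitude minimum falls near the series resonance and the maximum near the parallel resonance, which matches the modulus behavior described for figure 2.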
Under this condition the output of the phase detector is continuously negative, which results in a rising output of the loop filter and a rising oscillation frequency of the oscillator. After 10 µs both input signals of the phase detector are in phase and the oscillator settles around its quiescent oscillation frequency, which is also the series resonance frequency of the transducer.

Figure 4: PLL simulation

III. CIRCUIT DESCRIPTION

The controlled oscillator is implemented with switched capacitors. The advantages of this technique have already been mentioned. The basic concept of the oscillator is a band-pass filter with a hard limiting positive feedback [7]. The band-pass filter in figure 5 has a center frequency of 5 MHz. After applying the bilinear transformation to the discrete-time transfer function of the filter, the resulting analog transfer function contains a second order term in the numerator. This term is negligible whenever the sampling frequency is considerably higher than the oscillation frequency, so that the transfer function reduces to the well-known band-pass transfer function

H(s) = (ω0/Q) s / (s² + (ω0/Q) s + ω0²)

where ω0 is the center frequency and Q the quality factor, and from which the capacitor values can be calculated. In this paper the values used are 2, 5 and 10. The sample frequency at which the transmission gates are switching is 100 MHz (a period of 10 ns). This is fast enough so that the second order term in the transfer function of the band-pass filter can indeed be neglected. In order to obtain a low-distortion oscillator, it is necessary for the band-pass filter to have a relatively high Q-factor. Suppose now that a square wave with a frequency equal to the center frequency of the band-pass filter is applied at the input of the filter. Under the high-Q assumption, it is clear that the outputs of the operational amplifiers will be approximately sinusoidal due to the filtering action.
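The claim that the discrete-time and analog responses agree when the 100 MHz switching rate is far above the 5 MHz center frequency can be illustrated by running the bilinear transform on a generic band-pass prototype and checking where the discretized response peaks. The values f0 = 5 MHz, Q = 10 and fs = 100 MHz are assumptions for illustration, not the paper's filter coefficients.

```python
import numpy as np

f0, Q, fs = 5e6, 10.0, 100e6      # assumed prototype values
w0 = 2 * np.pi * f0
K = 2 * fs                         # bilinear constant, s = K(1 - z^-1)/(1 + z^-1)

# Discrete-time coefficients obtained by substituting s in
# H(s) = (w0/Q)s / (s^2 + (w0/Q)s + w0^2) and clearing (1 + z^-1)^2.
b = np.array([(w0 / Q) * K, 0.0, -(w0 / Q) * K])
a = np.array([K**2 + (w0 / Q) * K + w0**2,
              2 * w0**2 - 2 * K**2,
              K**2 - (w0 / Q) * K + w0**2])

# Evaluate the magnitude response on a frequency grid and locate the peak.
f = np.linspace(1e5, 2e7, 4000)
z = np.exp(2j * np.pi * f / fs)
H = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
f_peak = f[np.argmax(np.abs(H))]
peak_mag = np.abs(H).max()
```

The peak lands near 4.96 MHz rather than exactly 5 MHz because of the frequency warping inherent in the bilinear transform; with fs twenty times f0 the warp is under one percent, and prewarping ω0 would remove it entirely.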
Figure 5: a) Band-pass filter with switched capacitors b) equivalent signal flow diagram

The foregoing arguments demonstrate that the circuit of figure 5, extended with a hard limiting positive feedback, is a good basic oscillator. Yet, a significant drawback is that the output amplitude is a direct function of the saturation limits of the positive feedback. A solution for this drawback [7] is illustrated in figure 6. The output of the comparator generates logic signals X and X̄, which are used to select the input path.

Figure 6: Oscillator with switched capacitors

Figure 7 shows the simulation results of the circuit in figure 6. The transmission gates are implemented as ideal voltage controlled switches. The operational amplifiers are ideal voltage controlled voltage sources with a voltage gain of 25 dB. The minimum voltage gain that guarantees proper operation of the oscillator is 12 dB. Figure 7 shows the outputs of the first and the second operational amplifier. While the output of either operational amplifier is usable, the output of the second operational amplifier is preferred because it has less distortion due to its low-pass response, at least in a non-ideal oscillator. The quiescent oscillation period of 200 ns is clearly visible.

Figure 7: Ideal oscillator output signal

Figure 8 shows the normalized frequency spectrum of the ideal oscillator in figure 6. This image clearly shows the oscillation frequency of 5 MHz. The sample frequency plus and minus the oscillation frequency is also clearly visible.

Figure 8: Normalized frequency spectrum of the ideal oscillator output signal

a. Operational amplifier implementation

The implementation of each of the two operational amplifiers presented in figure 5 and figure 6 consists of an operational transconductance amplifier followed by a buffer output stage. The implementation of the op amp is shown in figure 9.

Figure 9: Low level implementation of the operational amplifiers

The current mirror of the first stage is biased so that the current drawn by the first stage of the op amp is 450 µA; the bias current is 10 times smaller, and the current mirror multiplies it with a factor of 10. This results in the overall voltage gain of the amplifier and determines the maximum output swing. Figure 10 displays the bode diagram of the voltage gain and the phase margin. The voltage gain is 62 dB with a bandwidth of 3 MHz. The gain at 5 MHz, the quiescent frequency of the oscillator, is 56 dB. The gain bandwidth is 1.1 GHz, which is more than the minimum GBW of 120 MHz. The buffer avoids large capacitive loads on each of the output nodes of the two operational amplifiers and increases the output swing; its voltage gain is approximately unity. The buffer is biased with a current of 200 µA, which is multiplied by its current mirror with a factor of 10. The operational amplifier is supplied with an asymmetric voltage so that the output signal is symmetrical: the supply voltage is +3 V (VDD) and -2 V (VSS).

Figure 10: Voltage gain of the operational amplifier

Figure 11 shows the slew rate of the operational amplifier with a capacitive load of 10 pF.

Figure 11: Slew rate of the operational amplifier

b. Comparator implementation

The comparator [12] generates the logical signals X and X̄ which are used to inject the proper amount of charge into op amp 1. The low-level implementation of the comparator used in figure 6 is shown in figure 12. The comparator consists of a preamplifier, the actual comparator and two inverters. To reduce the delay of the comparator, the input signal is preamplified.
The comparator is supplied with +2.5 V and -2.5 V. The outputs of the pre-amplifier drive the inputs of the comparator. When one input branch is on, the gates of M5b and M6b are pulled low, causing that output voltage to be at the positive supply, and vice versa; the same reasoning applies to the complementary output. Eventually the outputs of the comparator drive the inputs of the inverters.

Figure 12: Comparator

The inverters avoid capacitive loads at the output of the comparator and produce a square wave. Figure 13 shows the input and output signals of the comparator.

Figure 13: a) Input signal of the comparator b) output signal of the comparator c) inverted output of the comparator

The input signal has an amplitude of 200 mV. The output range of the comparator spans the supply voltages. The delay of the comparator is 7 ns; without the preamplifier the delay is 25 ns.

c. Frequency control

The oscillator frequency can be made variable when the capacitance of one or more capacitors changes. Here one capacitor is made variable because it has the most influence on the oscillation frequency. The capacitor is made variable by placing capacitors in parallel, each controlled separately by a transmission gate in series with it. These transmission gates, shown in figure 14, are controlled by a digital control word. The phase-frequency detector in a digital PLL generates a digital number which is integrated in a counter. The depth of that counter determines the number of capacitors used to control the oscillation frequency. Figure 14 shows an example of the oscillator controlled by a 4-bit digital word with binary-weighted capacitors 8C, 4C, 2C and C. When the number of bits in the digital word is increased, more capacitors can be placed in parallel, which increases the accuracy with which the oscillation frequency can be controlled.
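The binary-weighted selection can be sketched as follows. The helper name and the unit capacitance are hypothetical; the point is only that a 4-bit control word selects one of 16 equally spaced capacitance steps.

```python
def effective_capacitance(word, c_unit=1.0):
    """Effective variable capacitance selected by a 4-bit control word.

    Each bit switches a binary-weighted capacitor (8C, 4C, 2C, C) in
    parallel; c_unit is the hypothetical unit capacitance C.
    """
    if not 0 <= word <= 0b1111:
        raise ValueError("control word must fit in 4 bits")
    return c_unit * sum(1 << i for i in range(4) if word & (1 << i))

# With a 4-bit word the capacitance is controllable in 16 equal steps.
steps = [effective_capacitance(w) for w in range(16)]
```

Increasing the word width simply adds more (larger) binary-weighted capacitors, doubling the number of steps per added bit, which is the accuracy argument made above.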
Figure 14: Frequency controlled oscillator

IV. OSCILLATOR SIMULATION RESULTS

Figure 15 displays the output of the oscillator in figure 6. The operational amplifiers and the comparator are implemented as shown in figure 9 and figure 12. The quiescent oscillation frequency of the oscillator corresponds to the required oscillation frequency. The peak-to-peak voltage of the output signal is 2 V.

Figure 15: Transient analysis of the oscillator output signal

Figure 16 shows the normalized frequency spectrum of the oscillator oscillating at 5 MHz. The oscillation frequency of 5 MHz is clearly dominant. The signal-to-noise ratio of the oscillator output signal is 11.1 dB.

Figure 16: Normalized frequency spectrum of the oscillator output signal

Figure 17 displays the normalized frequency spectrum of the output signals generated by the oscillator in figure 14. The spectrum consists of 15 different oscillation frequencies. The range of the oscillator is 5.5 MHz. To obtain a range of 2 MHz the digital word can be limited, starting from 0110. The oscillation range of 2 MHz, from 4 MHz to 6 MHz, is then controllable in 6 steps.

Figure 17: 4-bit controlled oscillator output frequencies

Figure 18 displays the output signal of the oscillator simulated at different temperatures. The amplitude drift is less than 40 mV over a temperature range of 100 °C.

Figure 18: Amplitude drift of the oscillation output signal
The circuit is simulated in a standard low-cost 0.7 µm CMOS technology. Frequency control is performed by a digital control word used to adjust the feedback capacitor. The DCO has a center frequency of 5 MHz, consumes only 15 mW, has an SNR of 10.7 dB and is stable over a wide temperature range.

REFERENCES
[1] R. Kazys, L. Mazeika, E. Jasiuniene, A. Voleisis, R. Sliteris, H. A. Abderrahim and M. Dierckx, "Ultrasonic Evaluation of Status of Nuclear Reactors Cooled by Liquid Metal," ECNDT.
[2] C. Shen and P. Li, "Harmonic Leakage and Image Quality Degradation in Tissue Harmonic Imaging," IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, vol. 48, no. 3, May.
[3] D. J. Allstot, R. W. Brodersen and P. R. Gray, "An Electrically Programmable Analog NMOS Second-Order Filter," ISSCC 1979.
[4] B. C. Douglas and L. T. Lyon, "A Real-time Programmable Switched Capacitor Filter," ISSCC.
[5] M. Verbeck, C. Zimmermann and H.-L. Fiedler, "A MOS switched-capacitor ladder filter in SIMOX technology for high-temperature applications up to 300 °C," IEEE J. Solid-State Circuits, vol. 31, no. 7, p. 908, July.
[6] B. C. Douglas, "A Digitally Programmable Switched-Capacitor Universal Active Filter/Oscillator," IEEE Journal of Solid-State Circuits, vol. SC-18, no. 4, April.
[7] P. E. Fleischer, "A Switched Capacitor Oscillator with Precision Amplitude Control and Guaranteed Start-up," IEEE Journal of Solid-State Circuits, vol. SC-20, no. 2, April.
[8] M. Willander and H. L. Hartnagel (eds.), High Temperature Electronics, Chapman & Hall, London.
[9] P. C. de Jong, G. C. M. Meijer and A. H. M. van Roermund, "A 300 °C dynamic-feedback instrumentation amplifier," IEEE J. Solid-State Circuits, vol. 33, no. 12, Dec.
[10] L. Svilainis and G. Motiejunas, "Power amplifier for ultrasonic transducer excitation," ULTRAGARSAS, no. 1(58).
[11] A. Arnau (2004). Piezoelectric Transducers and Applications.
Berlin: Springer-Verlag.
[12] K. Uyttenhove and M. Steyaert, "A 1.8-V, 6-bit, 1.3-GHz CMOS Flash ADC in 0.25 µm CMOS," ESSCIRC.

Motion control of an arm orthosis by EMG signals through ANN analysis
Joachim Buyle 1, Ekaitz Zulueta 2
1: Katholieke Hogeschool Kempen, Department of Industrial Engineering, Geel, Belgium
2: Universidad del País Vasco, University College of Engineering, Vitoria-Gasteiz, Spain
1 June 2011

ABSTRACT

This paper briefly describes the research method used to attempt to improve the motion or speed control of an arm orthosis by EMG signals. Data obtained from EMG sensors is processed by calculating its auto-regressive coefficients, using the Yule-Walker method. These coefficients are fed into an artificial neural network and validated to check whether or not the AR coefficients correspond to the desired output.

Key words: EMG, Arm Orthosis, ANN, Yule-Walker, AR coefficients, MATLAB

I. INTRODUCTION

This research project is carried out in order to improve the control and movement of a robotic arm prosthesis or orthosis. The purpose of this orthosis is to support a person who has lost strength in his arm in lifting objects. The orthosis is controlled by electromyographic or myoelectric signals, captured at the arm's biceps and triceps muscles using specialized surface EMG sensors and equipment. These EMG signals correspond to the body's emitted electric signals in the nervous system, which produce movement. This kind of application could help handicapped or disabled persons to carry out a specific job or operation. These man-machine interfaces have become very important in recent years and will become even more important in the future. Each year new groups of students contribute to this research project. The last group of students mainly focused on improving the system's signal processing to control the orthosis in a more natural way. They implemented a fuzzy logic controller, which processes the EMG signals using fuzzy rules.
The outcome of this method indicated the most important EMG signal for a certain input. They also implemented the use of a goniometer. This device makes it possible to measure the angle of the orthosis and adds more feedback and possibilities to control the orthosis in a better way. This year I joined two local Spanish students, Marcos Albaina Corcuera and Enrique Kike de Miguel Blanco, who continued the project by updating the control diagram in Simulink in order to make the arm's motion more natural. We found a way to improve the control diagram using the Fast Fourier Transform (FFT) and to control the orthosis in a real-time environment. Besides that, the main focus of my research was to look for an offline or non real-time method to improve the arm's movement by using a more non-conventional or more intelligent system to extract and analyze the data. We opted to analyze the EMG signals with the Yule-Walker Auto-Regression (AR) coefficients. For a certain subset of data these transfer function coefficients could indicate which Motor Unit Action Potential (MUAP) is active. These coefficients would serve as input for an Artificial Neural Network (ANN). Using a training set, derived from the goniometer's signal, we could train the neural network to indicate whether the corresponding AR coefficients are correctly classified as the arm moving up, moving down or not moving. At a later stage the idea is to combine this learned knowledge with a control diagram in Simulink.

II. ELECTROMYOGRAPHY

Electromyography or EMG is a technique for evaluating and recording electrical activity produced by skeletal muscles, i.e. muscles under control of the nervous system, for example the biceps muscle in the upper arm. EMG sensors detect the electrical potential of the muscle cells when these cells are electrically activated by the nervous system. These EMG signals can be analyzed to detect and help treat medical abnormalities, or used for biomechanics, like in this project.
There are many applications for the use of EMG. EMG is used clinically for the diagnosis of neurological and neuromuscular problems. It is used diagnostically by laboratories and by clinicians trained in the use of biofeedback or ergonomic assessment. EMG is also used in many types of research laboratories, including those involved in biomechanics, motor control, neuromuscular physiology, movement disorders, postural control, and physical therapy. Electromyography is the recording of electrical discharges in skeletal muscles, for example muscle activation patterns during functional activities like running or weight lifting. Muscle fibers have to be stimulated to initiate muscle contraction. They are innervated by motor neurons located in the spinal cord or brainstem. Motor neurons are connected to the skeletal muscle by nerve fibers, called axons. Each fiber of a muscle receives its innervation from one single motor neuron. Conversely, one single motor neuron can innervate more than one muscle fiber. A motor neuron and its associated fibers are defined as a motor unit. Feinstein described that one motor unit controls between three and 2000 muscle fibers, depending on the required fineness of control. When a motor unit is activated by the central nervous system, an impulse is sent along the axon to the motor end plates of the muscle fibers. As a consequence of the stimulus, the motor end plates release neurotransmitters that interact with receptors on the surface of the muscle fibers. This results in a reduction of the electrical potential of the cells, and the released action potential spreads through the muscle fibers. Such a depolarization is called an end plate potential. The combined action potentials of all muscle fibers of a single motor unit are called the "Motor Unit Action Potential" (MUAP). As the tissue around the muscle fibers is electrically conductive, this combined action potential can be observed by means of electrodes.
The repetitive firing of a motor unit creates a train of impulses known as the "Motor Unit Action Potential Train" (MUAPT). These MUAPTs can be measured from the body's skin through SEMG sensors. SEMG or surface electromyography is used in this project to measure muscle activity from the skin. Through SEMG, the combination of electrical activity or action potentials from the numerous muscle fibers that contribute to a muscle contraction can be collected and analyzed.

III. ARTIFICIAL NEURAL NETWORKS

A way to improve the movement of the orthosis is by analyzing the EMG signals with an Artificial Neural Network. An ANN can be trained to verify whether the calculated auto-regressive coefficients are correct by validating that input against the goniometer's signal, which indicates the produced motion of the arm orthosis. An artificial neural network or ANN is a computer algorithm created to mimic biological neural networks. Even with today's complex computing power standards, there are certain operations a microprocessor cannot perform. An ANN is a nonlinear system used to classify data, which makes it a very flexible system. ANNs are used in a wide range of applications such as control, identification, prediction and pattern recognition. ANNs are considered to be a relatively new and advanced technology in the field of digital signal processing, and they apply to many more fields than just engineering. An ANN has two main functions: pattern classification and non-linear adaptive filtering. An ANN is also an adaptive system, just like its biological counterpart. Adaptive means that its parameters can be changed during operation. This is known as the training phase. An ANN works by a step-by-step procedure which optimizes a criterion, known as the learning rule. It is crucial to choose a proper set of training data to determine the optimal operating point.
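As a minimal, hypothetical illustration of such a training phase (not the network used in this project, which was built in MATLAB), the sketch below trains a single neuron with the classic perceptron rule on a toy AND problem: the output is compared with the desired target, the error is fed back, and the weights are adjusted until the output is acceptable.

```python
import random

random.seed(1)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
lr = 0.1  # learning rate of the learning rule

# Toy training set: learn the logical AND of two inputs.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(50):  # training phase: repeat until the output is acceptable
    for (x1, x2), target in samples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output        # compare with the desired response
        weights[0] += lr * error * x1  # feed the error back into the system
        weights[1] += lr * error * x2
        bias += lr * error

predictions = [1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
               for (x1, x2), _ in samples]
```

The loop converges because the toy problem is linearly separable; choosing the training set, learning rate and number of passes is exactly the design tuning the text refers to.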
There are many different architectures for NNs with different types of algorithms, and although some ANNs have a very complex nature, the concept of a NN is relatively simple. Basically an ANN is a system that receives input data, processes that data and delivers an output. The input data is usually an array of elements, which can be any representable data. The output will be compared with a desired target response and an error is composed from the difference. This error will be fed back into the system to adjust the parameters of the learning rule. This feedback process continues until the desired output is acceptable. Since there is no way to determine exactly what the best design is, you'll have to play with the design characteristics in order to get the best possible results. So it's pretty difficult to refine the solution when the system doesn't do what it is intended for. However, ANNs are very efficient in terms of development, time and resources. ANNs can provide real solutions that are difficult to match with other technologies.

Collecting EMG sensor data

The first step is to take a series of samples of the EMG sensors. We take a series of different, controlled movements and collect data from the biceps, triceps and goniometer. The test subjects are my project partners Kike and Marcos. They are two persons with different physical characteristics who will provide different EMG patterns. Both execute the same controlled movements, which include moving the arm up and down, fast or slow, with or without extra weight added. When extra weight is held in the hand, in this case a heavy bag, EMG signals tend to be clearer, as more force is needed to move the arm. We try to locate the sensors in the exact same position every time. Each sample is recorded for nine seconds and then saved as a .mat file. We try to find a clear pattern of distinction in muscle movement and EMG activity for both the biceps and triceps muscles.
From each test subject one sample will be chosen for further investigation with an ANN. After recording more than 10 different samples of both test subjects, sample 5 of Marcos, marcos5, provided the best wave patterns for further investigation. Marcos5 was created with the following movements: the arm starts down, rested next to the body, is then moved up quickly against the shoulder (fully contracted) and then moved back down again. This move is repeated four times, with no extra weight added.

Figure 1: A plot of the biceps, triceps and goniometer signals of sample marcos5

Sample 5 of Marcos (marcos5.mat) and sample 6 of Kike (kike6.mat) were chosen to be further investigated because when moving the arm the graph shows a clear, sudden increase in EMG muscle activity. Sample 5 of Marcos is clearly the best sample, as you can see which muscle is active at what time or movement: a sudden increase of EMG activity in the biceps signal when the arm moves up, and increased activity in the triceps signal when the arm moves down.

There are numerous reasons why the other samples are not that easy to distinguish. The main reason is that surface sensors produce much more noise and capture more EMG signals at a time than the more precise intra-muscular or needle EMG sensors. Physical characteristics of the test subject also matter, as one person can have more or less body tissue and a different muscle structure than the other. A third thing to consider is the placement of the sensors: it is quite difficult to place the sensors in the exact same position, and surface sensors also tend to move a bit when the arm is moved, making them more susceptible to noise and errors. Finally, EMG signals tend to be easier to distinguish when the arm is moving in fast motion.

IV. IMPLEMENTATION

For the implementation of the Artificial Neural Network we used MATLAB. We begin with loading the MATLAB file that contains the EMG sample data. The data contains 4 signals: biceps, triceps, goniometer and the time. The next step after loading the EMG data into MATLAB is the feature extraction. We have raw EMG data, but the question is how we will use this data to produce a meaningful input for the ANN. We will use an input signal of Yule-Walker coefficients and compare it to the training set, which will be based on the goniometer signal.

Yule-Walker

Because an EMG signal is a composition of different EMG impulses, we need to find a way to decompose the signal into multiple signals or inputs. We need to know which muscle fiber or Motor Unit Action Potential (MUAP) is active at the time of movement, and the amplitude of the strength. One way of doing that is to estimate an autoregressive (AR) all-pole model using the Yule-Walker method. This method returns the transfer function coefficients of the input signal, which can be an indication of which MUAP in the muscle is active.

In statistics and signal processing, an autoregressive (AR) model is a type of random process which is often used to model and predict various types of natural phenomena. The autoregressive model is one of a group of linear prediction formulas that attempt to predict an output of a system based on the previous outputs. The notation AR(p) indicates an autoregressive model of order p. The AR(p) model is defined as:

X_t = c + Σ_{i=1..p} φ_i · X_{t−i} + ε_t

where φ_1, …, φ_p are the parameters of the model, c is a constant (often omitted for simplicity) and ε_t is white noise. White noise is a random signal (or process) with a flat power spectral density; in other words, the signal contains equal power within a fixed bandwidth at any center frequency. An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response (IIR) filter whose input is white noise. The model is based on the parameters φ_i, where i = 1, …, p.
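To make the AR(p) definition, and the Yule-Walker estimation of its parameters discussed in the following paragraphs, concrete, here is a small NumPy sketch. This is illustrative only (the paper itself used MATLAB); the function names and the AR(2) test coefficients are our own choices:

```python
import numpy as np

def simulate_ar(phi, n, sigma=1.0, seed=0):
    """Generate X_t = sum_i phi_i * X_{t-i} + eps_t (c = 0), eps_t white noise."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    x = np.zeros(n + p)                       # p zero start-up samples
    eps = rng.normal(0.0, sigma, n + p)
    for t in range(p, n + p):
        x[t] = np.dot(phi, x[t - p:t][::-1]) + eps[t]
    return x[p:]

def yule_walker(x, p):
    """Estimate AR(p) parameters from the autocorrelation (Yule-Walker) equations."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r_0 .. r_p
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # Toeplitz system R * phi = [r_1 .. r_p]  (the m > 0 equations)
    R = r[np.abs(np.subtract.outer(np.arange(p), np.arange(p)))]
    phi = np.linalg.solve(R, r[1:p + 1])
    sigma2 = r[0] - np.dot(phi, r[1:p + 1])   # the m = 0 equation
    return phi, sigma2

x = simulate_ar([0.6, -0.3], n=5000)
phi_hat, sigma2_hat = yule_walker(x, 2)       # phi_hat comes out close to [0.6, -0.3]
```

Estimating the coefficients of a simulated process and recovering the values used to generate it is a quick sanity check that the equations were set up correctly.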
There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule-Walker equations.

The Yule-Walker method, also called the autocorrelation method, is used to fit a p-th order autoregressive (AR) model to the windowed input signal x by minimizing the forward prediction error in the least-squares sense. This formulation leads to the Yule-Walker equations, which are solved by the Levinson-Durbin recursion. x is assumed to be the output of an AR system driven by white noise. Vector a contains the normalized estimate of the AR system parameters, A(z), in descending powers of z. Because the method characterizes the input data using an all-pole model, the correct choice of the model order p is important.

The Yule-Walker equations are the following set of equations:

γ_m = Σ_{k=1..p} φ_k · γ_{m−k} + σ_ε² · δ_{m,0}

where m = 0, …, p, yielding p + 1 equations. Here γ_m is the autocorrelation function of X, σ_ε is the standard deviation of the input noise process, and δ_{m,0} is the Kronecker delta function. Because the last term is non-zero only if m = 0, the equations are usually solved by representing them as a matrix for m > 0, which yields all φ_k. The m = 0 equation then allows us to solve for σ_ε². The Yule-Walker equations thus provide one route to estimating the parameters of an AR(p) model, by replacing the theoretical covariances with estimated values. One way of specifying the estimated covariances is equivalent to a calculation using least squares regression of values X_t on the p previous values of the same series.

The EMG data of the biceps and triceps are extracted for an interval of 1 second. The order of the AR model is 9, which results in 10 coefficients, since the first AR coefficient is always equal to one. We calculate the Yule-Walker coefficients for both the biceps and triceps EMG signals after a white noise filter is applied.

Figure 2: A plot of the biceps and triceps Yule-Walker AR coefficients (9th order)

Training Set

A neural network has to be trained so that a set of inputs produces the desired set of outputs (target). Teaching patterns (the training set) are fed into the neural network and change the weights of the connections according to a learning rule. So in order to validate the input data of the ANN, the AR coefficients of the EMG data, we must first generate a training set. This training set will be generated from the information of the third sensor, the goniometer. The training set sets the target for matching the input data. Because the goniometer registers movement in the arm, we can see if the Yule-Walker coefficients were correctly predicted. Moving the arm upwards corresponds to +1, moving the arm downwards corresponds to -1, and 0 means the arm isn't moving. Before we generate the training set, the goniometer signal is filtered with an 8th order low-pass filter to reduce noise and smooth out the signal.

Envelope function

Next, we will put the biceps and triceps EMG signals through an envelope function. The idea is to further smooth out the signal in order to prevent noise in the output and get more stable results. Each peak in EMG activity is directly followed by a negative peak. This is also the case for the noise: when no or low EMG activity is measured, we still see the signal move up and down rapidly. This pattern could confuse the ANN and affect the output in a negative way. To overcome this problem we filter the biceps and triceps EMG signals with an envelope or enclosure function. This function follows the positive spikes of the signal and reduces sudden variations in the signal.
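In code, this envelope filter (defined formally just below) is a short recursion; a Python sketch with the decay constant a as a parameter:

```python
import numpy as np

def envelope(y, a=0.99):
    """Follow the positive spikes of |y|: keep the new absolute value when it
    exceeds the running envelope, otherwise let the envelope decay by factor a."""
    s = np.empty(len(y))
    prev = 0.0
    for i, v in enumerate(np.abs(y)):
        prev = v if v > prev else a * prev
        s[i] = prev
    return s

# A single unit spike decays geometrically afterwards: 1, 0.99, 0.9801, 0.970299
decayed = envelope(np.array([1.0, 0.0, 0.0, 0.0]))
```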
The envelope function S(t) with input signal Y(t) is mathematically defined as follows:

S(t) = |Y(t)|      if |Y(t)| > S(t−1)
S(t) = a · S(t−1)  otherwise

Since we look for a positive rise in the signal's amplitude, we take the absolute value of the current signal value Y(t) and compare it to the previous outcome S(t−1). If the current absolute value |Y(t)| is bigger than the previous envelope value S(t−1), the current absolute value is stored as the new envelope value S(t). In case |Y(t)| is smaller, we multiply the previous envelope value S(t−1) with a constant a. This constant defines how quickly the slope of the envelope decreases. Low values, for example 0.1 or 0.3, make the slope decline very fast; this is not desirable, as the result would resemble the original signal a lot. We set a to the maximum value we use, 0.99, in order to produce a smoother output signal. In the following figure you can see an example of the envelope signal: in red the original EMG signal and in blue the filtered envelope signal. As you can see, there are far fewer sudden variations in the signal.

Figure 3: an example of the envelope function

Linear regression

Before we calculate the AR coefficients we will determine a target set or training set from the goniometer signal. The target set will be based on the movement of the arm, so we can then link arm movement to increased EMG activity. We just want to know when the arm moves up or down, so the training set should indicate upward movement with +1 and downward movement with -1, while 0 indicates no movement. By calculating linear regression, which makes use of the slope or gradient of each point, we can determine the movement of the arm. By using a sliding window with a certain interval for linear regression we can try to indicate where exactly the arm moves. To keep it simple, linear regression fits the best possible straight line through a series of points, in this case the values of the goniometer signal within each window. Linear regression is defined as follows:

Y = Φ · P
P = (Φᵀ · Φ)⁻¹ · Φᵀ · Y

If y is a linear function of x, then the coefficient of x is the slope of the line created by plotting the function. Therefore, if the equation of the line is given in the form y = mx + b, then m is the slope. This form of a line's equation is called the slope-intercept form, because b can be interpreted as the y-intercept of the line, the y-coordinate where the line intersects the y-axis. The slope is:

m = (y2 − y1) / (x2 − x1)

Figure 4: A plot of the (filtered) goniometer signal in red, the linear regression signal in blue and the target signal in magenta.

ANN Training

Once we have all the input data ready, it is time to start training the ANN and to look at the results. We create a feed-forward network with a single hidden layer and a varying number of neurons (3 to 7). In feed-forward neural networks the data propagates from input to output over multiple processing units, called hidden layers. There are no feedback connections present; a feedback connection means that the output of one unit is fed back into an input of a unit of the same or a previous layer. More neurons require more computation and have a tendency to overfit the data when their number is set too high, but they allow the network to solve more complicated problems. More layers also require more computation, but their use might result in the network solving complex problems more efficiently.

Comparison of different ANN setups

The network architecture for this research consisted of an input layer, one hidden layer and an output layer. A hidden layer in an ANN consists of a number of neurons. Several trainings and validations were executed using different numbers of neurons and the two types of sample processing: with and without the envelope function.
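Before comparing setups, the sliding-window slope labelling described under Linear regression can be sketched as follows. The window length and slope threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def movement_target(gonio, window=50, thresh=1e-3):
    """Label arm movement from the goniometer signal: +1 up, -1 down, 0 still.
    A least-squares line is fitted in each sliding window; the sign of its
    slope (when large enough) gives the label at the window's end point."""
    t = np.arange(window)
    labels = np.zeros(len(gonio), dtype=int)
    for i in range(window, len(gonio) + 1):
        slope = np.polyfit(t, gonio[i - window:i], 1)[0]
        if abs(slope) > thresh:
            labels[i - 1] = 1 if slope > 0 else -1
    return labels

ramp_up = np.linspace(0.0, 1.0, 200)   # arm flexing steadily upwards
labels = movement_target(ramp_up)      # +1 once the first window is filled
```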
I compared the EMG samples of marcos5 and kike6 with different ANN parameters and reviewed the results to see whether or not we can find an optimal configuration for classifying Yule-Walker AR coefficients as correct movement or not. In total we test 20 different setups: four sample sets (marcos5 with and without envelope function, and kike6 with and without envelope function), where each sample set is trained 5 times, each time with a different number of neurons, ranging from 3 to 7. We compare the results graphically in a plot with the target output in red and the actual output in blue. The blue pattern has to match the red target as well as possible. The following graphs are an example of such a test set:

Figure 6: marcos5 with 5 neurons and envelope filter

With 5 neurons applied, both positive and negative movements are more or less properly classified within their target. The envelope function gets rid of a lot of noise. This result is quite acceptable.

V. CONCLUSION

After examining the results I've come to the following conclusions. The use of the envelope function seems to reduce the noise or interference in most cases. Mostly it reduces the amplitude in the -0,5 to +0,5 V range, making it easier to spot the spikes, which approach a +1 or -1 value. It also seems that negative movements (stretching of the arm) are classified far more easily than positive movements (flexing of the arm): in all samples we see that negative spikes around the target are far more abundant than positive spikes. The use of an envelope tends to filter out some of the positive spikes, but in general we can say the envelope function works. However, the use of the Yule-Walker AR method to classify surface EMG signals does not look like a very stable method. It works more or less, but acceptable results depend heavily on the setup of the ANN, on the sample used and on the test subject.
The amount of neurons doesn't seem to be conclusive either: for Marcos' samples we got better results starting from 5 neurons, while for Kike's results the opposite happened. In general we can say that this method of analyzing surface EMG signals with an ANN and Yule-Walker AR coefficients didn't give the result we hoped for.

Figure 5: marcos5 with 5 neurons and without envelope filter

REFERENCES

Roberto Merletti, Philip A. Parker, Electromyography - Physiology, Engineering and Noninvasive Applications, IEEE Press/Wiley-Interscience, 2004.
Mark Hudson Beale, Martin T. Hagan, Howard B. Demuth, Neural Network Toolbox 7 - User's Guide, MathWorks Inc., 2010.
MATLAB Help Offline Documentation, MathWorks Inc., 2010.
MathWorks Support Online MATLAB Documentation, available at:
ScienceDirect, available at:
Web of Knowledge (WoK), available at:

Design of a Discrete Tunable Ultra-Wideband Pulse Generator for In Vivo Dosimetry

K. Cools, M. Strackx, P. Leroux

Abstract - This work presents the design of a tunable discrete UWB pulse generator. Basic pulse generation architectures are reused in a new design for testing a novel in vivo dosimetry technique. Simulations predict the same or a smaller minimum monocycle duration than other published results on a design with discrete components. Depending on the smallest possible stub length, the obtained results are 300 ps for 5 mm or 215 ps for 4 mm (time between 20%-20% of the maximum). The reason for this research is that the present in vivo dosimetry methods don't offer a real-time, non-invasive, in situ measurement. The new envisioned method does all that, and it measures the irradiated tissue directly.

Index Terms - In vivo dosimetry, IR-UWB, Pulse generator, tunable circuits and devices, Ultra-wideband (UWB)

I. INTRODUCTION

Nowadays cancer treatment is applied to an increasing number of people, regretfully.
A commonly known part of most treatments is radiotherapy, used for curative or adjuvant cancer treatment or, in the worst cases, to control or slow down the cancer. For irradiation of living human beings, in vivo dosimetry, which is the measurement of the absorbed radiation dose in living tissue, is required in order to optimize the irradiation cycles and minimize damage to healthy tissue. In vivo dosimetry can be done in many different, cumbersome ways. Different physical and chemical effects are used to measure the dose, e.g. luminescence or conductivity. All present measuring techniques still involve time-consuming labor like individual placement, individual calibration or imaging. Some techniques also tend to be invasive, causing great discomfort to the patient. Furthermore, none of the current techniques combine real-time measurement with the opportunity to really measure reactions and changes within the tissue during irradiation. For these reasons there is a strong need for better and easier ways to do in vivo dosimetry.

Kris Cools is a last year's Master Electronics student at the IBW department at Katholieke Hogeschool Kempen (Association KU Leuven), Geel, Belgium. M. Strackx is with the Katholieke Hogeschool Kempen, Geel, Belgium. P. Leroux is with the Katholieke Hogeschool Kempen, Geel, Belgium; he is also with SCK CEN, the Belgian Nuclear Research Centre, Mol, Belgium.

A novel non-invasive concept to tackle the in vivo dosimetry problem is presented in this work, using Impulse-Radio UWB (IR-UWB) signals and Time Domain Reflectometry (TDR). UWB is already being implemented in other areas of medicine [1] (e.g. heart rate or respiration monitoring). Another field of great use is high-speed data communication, but this implements other kinds of UWB.
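The common UWB definition, also used later in this paper (a -10 dB bandwidth of at least 500 MHz, or at least 20% of the center frequency, whichever occurs first), can be captured in a small helper; a hedged sketch:

```python
def is_uwb(f_low, f_high):
    """True if the band qualifies as UWB: -10 dB bandwidth >= 500 MHz,
    or a fractional bandwidth >= 20% of the center frequency."""
    bw = f_high - f_low
    fc = (f_high + f_low) / 2
    return bw >= 500e6 or bw / fc >= 0.20

wide = is_uwb(3.1e9, 10.6e9)     # the FCC indoor UWB band qualifies
narrow = is_uwb(5.0e9, 5.1e9)    # a 100 MHz band around 5 GHz does not
```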
Our concept is based upon the fact that the permittivity (ε, polarization grade) and conductivity of matter change when it is irradiated [2], and this change can then be related back to the absorbed dose. With this technique the absorbed dose inside the irradiated tissue may be measured, instead of only the entrance and/or exit dose, which is the case for many other methods. In principle the combination of TDR and UWB allows investigation of the absorbed dose at any specific depth of interest. TDR is a way to observe discontinuities in a transmission path; in the case of dosimetry applications, the discontinuities are the multiple transitions between the different layers of tissue in a human. UWB is defined as a signal with a bandwidth equal to or greater than 20% of the center frequency or 500 MHz, whichever occurs first, so a variety of frequencies is available to derive information from. The UWB bandwidth is measured at the -10 dB points. The propagation depth depends on a combination of the permittivity and the conductivity, which are frequency dependent. Not every frequency can penetrate equally deep. This means that in order to measure deeper in tissue there are fewer frequencies available to derive information from, which could mean less precise information. The lower frequencies tend to penetrate deeper into a human body and have a significant power reflection. Changes after irradiation are also most noticeable at the lower frequencies [2]. Another benefit of the proposed technique is the real-time monitoring of tissue during irradiation. In order to develop a UWB dosimetry test setup, a UWB pulse generator must be made which is capable of generating multiple pulse shapes and thus possessing different frequency spectra. In the next section the basics of UWB are discussed. Next, the different subparts of the generator are discussed based on their specific purpose, beginning with the creation of a sharp, high-frequency-containing signal edge.
The second is pulse creation, followed by pulse enhancement and, last, pulse shaping. Some tuning possibilities will also be mentioned.

II. UWB SIGNALS AND REGULATIONS

In order to generate a wide frequency band signal, a short pulse is used in IR-UWB. Ideal would be a Dirac impulse, which contains every frequency, all equal in magnitude. In reality there is always a finite pulse time, and the most ideal signal is a Gaussian pulse. A special property of this pulse is that it is Gaussian in both the time and the frequency domain, as shown by the Fourier transform pair in (1) [3]:

e^(−π·t²) ⇌ e^(−π·f²)    (1)

Figure 1 shows some commonly used waveforms, and it can be seen that the smoother the signal is in time, the smaller the side lobes, i.e. the less energy there is outside the bandwidth [4]. This shows that the Gaussian pulse is the most energy efficient.

Fig. 1. Different pulse signals with the same bandwidth; the smoothest time signal has the least energy outside of its bandwidth.

Fig. 2. EC (past ) & medical imaging FCC UWB frequency masks

The regulations on UWB are still evolving, and different regions may have different regulations for different fields of application like indoor, handheld, etc. The US has the FCC as regulator, and countries that are a member of the EU have the European Commission (EC) as main regulator. The common Equivalent Isotropically Radiated Power (EIRP) maximum for indoor use is -41,3 dBm/MHz. In practice an EIRP measurement happens by integrating average RMS measurements over 1 ms [5]. The formula to calculate the EIRP for a practical antenna is given in (2), where P_trans is the generated transmitter power in dBm, P_loss is for example the cable loss to the antenna in dB, and G_antenna is the antenna gain in dBi:

EIRP = P_trans − P_loss + G_antenna    (2)

An example of FCC and EC frequency masks is shown in Figure 2 [6][7]. A commonly used term in UWB is the Pulse Repetition Frequency (PRF).
The PRF gives the number of pulses sent in a second, expressed in Hz. Sampling in the frequency domain is a property of a periodic signal, as shown in Figure 3. In order to keep the noise-like characteristic, the UWB spectrum has to be continuous. A way of keeping it continuous is to add a noise generator to the oscillator circuit responsible for the PRF.

Fig. 3. Periodic in time means sampling in the frequency domain

III. SHARP EDGE CREATION

The first thing to know is how much power the UWB signal needs to possess to be useful for the envisioned application. In most cases this is enough to determine the component to be used to generate a sharp edge. There is, however, a trade-off between the transmit power and the smallest possible rise time. Typically the rise time is measured between the 10% and 90%, or the 20% and 80%, points of the maximum amplitude. Something all these edge generating components have in common is their nonlinear behavior. If rise time is more important, one can also achieve a higher power signal by summing signals, which requires multiple identical circuits and an accurate combiner circuit, or by using an RF amplifier; both methods raise costs.

A. High Power

Here the voltage starts from above 100 V. The components used are the avalanche transistor and the Drift Step Recovery Diode (DSRD). The first uses the avalanche effect, which is a very fast transition to the conducting state caused by a chain reaction of one free carrier hitting and freeing multiple similar carriers. The operation of the latter is comparable with that of a similarly named component in the mid-power class; however, the dimensions and composition are different. These two can produce rise times in the order of hundreds of picoseconds. The possible PRF is limited out of caution for high temperatures inside the component, causing it to break down, and depends on the surrounding circuit components, which for example need to recharge.
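The EIRP formula (2) from Section II is plain logarithmic bookkeeping; a small sketch with hypothetical numbers (not values from the paper):

```python
def eirp_dbm(p_trans_dbm, p_loss_db, g_antenna_dbi):
    """EIRP = transmitter power - cable/path losses + antenna gain (log scale)."""
    return p_trans_dbm - p_loss_db + g_antenna_dbi

# Hypothetical example: a -45 dBm/MHz source, 1.3 dB cable loss, 5 dBi antenna
level = eirp_dbm(-45.0, 1.3, 5.0)   # -41.3, exactly at the common indoor limit
```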
Some of the application areas where this high power type is used are radar and Ground Penetrating Radar (GPR).

B. Low Power

This category contains a pulse generator capable of delivering amplitudes up to ±250 mV. The component utilized is a tunnel diode. These can give rise times of a few picoseconds. Figure 4 shows a typical tunnel diode I-V graph compared to that of a regular diode. This diode uses a quantum effect called tunneling: the penetration of a potential barrier by electrons whose energy (by classical reasoning) is insufficient to overcome the height of the barrier. Because of the use of very highly doped materials, the gap between electrons of the N-material and holes of the P-material is much narrower than in regular PN junctions. So when forward biased, conduction starts immediately until a peak current is reached. Then the gap becomes bigger, too big for tunneling, and the current decreases until the valley current is reached. Afterwards the tunnel diode functions as a normal diode. This decrease in current can happen in a very short period of time, making it possible to create steep edges.

Fig. 4. Typical I-V graph of a tunnel diode

C. Mid Power

Between the two previously mentioned voltage levels, the generation may be performed by the use of a Step Recovery Diode (SRD) or digital gates, optionally followed at the end of the circuit by a broadband amplifier, where Monolithic Microwave Integrated Circuit (MMIC) amplifiers are best suited. A much recalled UWB radio using the SRD is the Micropower Impulse Radar (MIR), which features a low PRF (= 2 MHz) [8], but this can be adequate for many applications. Rise times in the order of tens of picoseconds are possible with SRDs. The interesting part of the operating principle is when the SRD ceases to conduct. Some diodes keep conducting for a short period of time when their polarization switches from the conducting to the blocking state, due to a stored charge in the intrinsic layer (PIN diodes). A special property of the SRD is that when it stops conducting, it stops in a much more abrupt way than other kinds of diodes. This is due to its layer structure and the materials used. Figure 5 illustrates this difference.

Fig. 5. Ideal reverse recovery transient for a SRD (right) compared to a typical PIN diode recovery transient

In order to tune the edge duration, the easiest way is to adjust the amplitude of an input sine. The fastest digital gates are of the Emitter Coupled Logic (ECL) kind, but they can only achieve rise times in the order of hundreds of picoseconds and they drain a constant current. Our focus was mostly on this power class, which is also the most commonly used. As a result the following sections are applicable to this class, but may not be limited to it.

IV. PULSE CREATION

For the digital gates, pulse creation can be done by a combination of two gates. Examples of this are shown in Figure 6. The pulse time can be regulated by placing a capacitor to the ground or adding a delay line behind the inverter.

Fig. 6. Pulse creation with digital gates

Other options, which are also usable with the SRD, are a high-pass filter or a stub terminated to the ground. The latter gives a better symmetric shape than the former, so a better Gaussian-like pulse is generated by combining a sharp edge creating device with a stub terminated to the ground. The operating principle is that at the point of the stub a signal splits in two equal parts. One half travels directly towards the output, while the other half takes a detour along the stub and reflects back with inverted polarity to join the other half with a certain time delay dependent on the stub length. Figure 7 shows the principle, starting off with the edge at the diode and ending up as a pulse on the right.

Fig. 7. Pulse creation with a stub

The stub length tuning can be achieved by two different methods, namely line switching and bend switching. In the case of line switching there are multiple spots along a line where it can get connected to the ground, whereas with bend switching there is only one ground at a single point; it is the path towards it that can be changed. Both layouts are depicted in Figures 8 and 9. Because of the use of diodes as switching elements, every switching section has to be DC isolated by coupling capacitors, and this means extra deformation of the pulse. This can be resolved by replacing the diodes with MESFETs. With line switching only one section can be activated at a time, while with bend switching the sections can be switched in a binary fashion.

V. PULSE ENHANCEMENT

Besides the pulse there is also some smaller unwanted signal forming. To separate the pulse from this, another component has to be added. This component is used to generate a threshold higher than this unwanted signal. There are two possibilities. One is a diode with the right voltage drop; preferably a Schottky diode, because of its fast operation and minimal reverse recovery. The voltage drop can be regulated by biasing the diode with a DC source. The biasing has to be separated from the rest of the circuit by two coupling capacitors. The second possibility is the use of a MESFET, where V_g or V_s can serve to adjust the threshold. By using V_s one less coupling capacitor has to be used, which results in less signal deformation. This last possibility also has the added benefit of amplification, making it possible to place the threshold higher than strictly necessary; by doing so, the pulse time can be made even shorter while still obtaining the wanted amplitude.

VI. PULSE SHAPING

Pulse shaping is done to make a signal fit in a frequency mask, to modulate data or for other reasons. UWB pulses have different names according to their form.
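The stub principle from Section IV, and the derivative relation between the pulse shapes discussed next, can be sketched numerically. The sample counts and delays below are arbitrary illustration values:

```python
import numpy as np

def stub_pulse(edge, delay):
    """At the stub junction the edge splits: one half goes straight to the
    output, the other returns from the shorted stub inverted and delayed.
    Their sum turns a step edge into a pulse of width `delay` samples."""
    reflected = np.zeros_like(edge)
    reflected[delay:] = -edge[:-delay]
    return 0.5 * (edge + reflected)

edge = np.zeros(100)
edge[20:] = 1.0                      # idealized sharp edge (e.g. from the SRD)
pulse = stub_pulse(edge, delay=10)   # rectangular pulse over samples 20..29
monocycle = np.gradient(pulse)       # differentiating gives one positive and one negative lobe
```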
You have the Gaussian pulse, the doublet, the monocycle and the polycycle. The latter two are derivatives of the first, with the monocycle being the first derivative. Examples of these pulses and their spectra are shown in Figures 10 and 11. Derived forms can be attained by sending a pulse through a high-pass filter; even a UWB antenna can be used. Another technique to get all these waves is summing multiple pulses together with a certain delay. It is also possible with a stub, as mentioned in the pulse creation paragraph. For good derivative forms the forward and reflected waves have to overlap [9].

Fig. 8. Line switching
Fig. 9. Bend switching
Fig. 10. Different waveforms shown with their derivative form
Fig. 11. Spectra of the waveforms in Figure 10

VII. DESIGN & SIMULATION RESULTS

By using a well thought-out combination of the above techniques, our own design was made; it is presented in Figure 13. Dependent on the chosen spacing between the stub start and the first grounded point, the minimum monocycle duration will differ. The minimum distance depends upon the maximum capacitive coupling accepted. For traces on a PCB the rule of thumb for minimal capacitive coupling is a distance of 10 times the conductor width [10]. Also, the less two traces are parallel, the less coupling they will have with each other. It is assumed that capacitive coupling will be negligible in our case, mainly because the MESFETs are kept as close as possible to the stub, so that with a practical minimal ground distance of 4 mm or 5 mm the capacitive coupling with any MESFET connection is still very small. The resulting signals and spectra for 4 mm and 5 mm distance without coupling are shown in Figure 12.

Fig. 13. Our own discrete tunable UWB pulse generator

VIII. CONCLUSION

Different approaches to a solution were found.
Through finding the best performing and most suited approaches, a well thought-out combination could be made. As sharp edge creator an SRD was chosen; regulating the edge time can be done through the input voltage. Transforming the edge into a pulse was done with a shorted stub, which is tunable in length with MESFETs. The pulse enhancing was also done by a MESFET, after which the pulse can be changed into a monocycle or doublet by a second tunable stub. This all leads to a new discrete tunable UWB pulse generator design which, according to simulations, promises excellent results.

Fig. 12. Predicted monocycle output of our own designed circuit

ACKNOWLEDGMENT

I would like to thank Ing. Maarten Strackx and Dr. Ir. Paul Leroux for their guidance and for giving me the chance to participate in their research.

REFERENCES

[1] E. M. Staderini, "UWB radars in medicine," IEEE Aerospace and Electronic Systems Magazine, vol. 17, no. 1, Jan. 2002, pp.
[2] T. Tamura, M. Tenhunen, T. Lahinen, T. Repo and H. P. Schwan, "Modelling of the Dielectric Properties of Normal and Irradiated Skin," Phys. Med. Biol. 39, 1994, pp.
[3] M. J. Roberts, Signals and Systems, International Edition 2003, New York: McGraw-Hill, 2004, pp.
[4] K. Siwiak and D. McKeown, Ultra-Wideband Radio Technology, West Sussex, England: John Wiley & Sons Ltd, 2004, pp. 64.
[5] S. K. Jones. (2005, May 11). The Evolution of Modern UWB Technology: A Spectrum Management Perspective. Available:
[6] United States of America, Federal Communications Commission (FCC), Memorandum Opinion and Order and Further Notice of Proposed Rule Making, Washington, D.C., 12 March 2003, section. Available:
[7] European Union, Commission of the European Communities, Decisions - Commission, Official Journal of the European Union, Brussels, 21 Apr. 2009, section 1.1. Available: 3:EN:PDF
[8] J. D. Taylor, Ultra-Wideband Radar Technology, Boca Raton, Florida: CRC Press LLC, 2001, ch. 6.
[9] I. Oppermann, M. Hämäläinen and J. Iinatti, UWB Theory and Applications, West Sussex, England: John Wiley & Sons Ltd,
[10] P. Colleman, Digitale Technieken: EMC [Course], Geel, Belgium: Campinia Media VZW, 2008, pp. 25.

Positioning Techniques for Location Aware Programming on Mobile Devices

M. De Maeyer

To implement location aware programs for mobile devices, two main techniques exist for defining the location: relative defined positioning techniques and absolute defined positioning techniques. Where the relative techniques rely on the limited range of connection standards like Wi-Fi and Bluetooth, the absolute technique uses a GPS sensor and extra formulas to draw conclusions. A combination of both can give interesting results because of the combined advantages of both techniques.

I. INTRODUCTION

These days people use their phones for all different kinds of purposes; smartphones are getting more and more powerful and are able to use a big set of sensors. This all leads to a whole new world of opportunities for developers. Using their smartphones, people are connected all day long. They receive messages, phone calls and e-mails, they retrieve information from the internet, and they connect with each other using different social networks. The information that a person can receive and use on these devices is overwhelming. But while the amount of information that is available is enormous, the problem shifts from offering information to letting a person find the right information. This information can be a lot of different things: it can be a search that a person wants to execute, it can be messages sent by other persons, advertisements, and so on. To get the right information to a person, different techniques can be used and are already used nowadays.
These techniques include filtering information by date, language, tags or interests, but filtering based on a person's friends, or even on the interests of other persons, is also used. Another interesting approach to filtering content is the use of the location of a certain person. For example, in advertising, a person who is in a library would probably be more interested in seeing advertisements about books than advertisements about fishing. But also a person who is looking at what his friends are doing on Facebook can be more interested in the friends that are close to his location. It even creates opportunities to send messages in a whole new way: by sending messages to people in a location instead of to people you know. In this paper we will look into the techniques that can be used to fetch the location of a mobile device and to draw conclusions using this knowledge. Firstly we will describe the work related to this paper, and next we will describe the two main techniques that can be used to fetch a location. After that we will look into the extra knowledge that is needed to use this information. Then we will compare the two different techniques and form a conclusion about them.

II. RELATED WORK
A lot of research has already been done on how to use the location of a person or device in advertising [1]. In this usage scenario a lot of money is invested and earned. In general, much has been written about location based services and how they can be implemented [2]. Other research has been done on how to implement location aware services on mobile devices. A good example of this is the PLASH platform: A Platform for Location Aware Services with Human Computation [3]. This platform describes a complete framework on which developers can implement location aware services. The platform is extremely broad and detailed, which makes it complete but also too big to use for smaller projects.
But then again it can give a good idea of a structure for location aware projects. The PLASH platform is not dependent on the technologies used for the connection; another platform, developed by Qualcomm and called AllJoyn [4], does prescribe the connection standard that should be used. This is a platform that works using Bluetooth or TCP over Wi-Fi or Ethernet, and it is a lot smaller than the PLASH platform. There has also been some research in the area of so-called geographic routing, which makes use of a different type of routing protocol with a slightly larger packet header. This header adds 16 extra bytes that hold information on the user's physical location, split up into 8 bytes for the user's X coordinate and 8 bytes for the user's Y coordinate [5].

III. TECHNOLOGIES
There are two different ways of defining the location of a certain mobile device. The first is the relative defined positioning technique, where a mobile device does not get a coordinate that places it on a specific point on earth, but is positioned near another device. The second technique is the absolute defined positioning technique, where a mobile device is positioned in a 2D coordinate space covering the earth. In this way it is possible to define the exact location of a device. In this part we will explain the different possibilities for these two techniques and shortly describe their characteristics. Afterwards we will describe some extra knowledge that is needed to use these techniques. Then we will compare the techniques and draw conclusions.

A. Relative Defined Positioning Techniques
In this paragraph we will describe different techniques that can be used to define the location of a mobile device relative to that of another device. To do this one can use different kinds of connection techniques that do not have a large range. The first connection technique that one could use is Bluetooth.
This connection technique is available in every mobile device these days; most of them use version 2, whereas devices already supporting version 3 are still not widespread. Apart from this division there are also 3 different classes used for Bluetooth, and the one that is mostly supported in mobile devices is Class 2, which stands for connection distances up to around 10 meters [6]. Using this connection some projects have already been made, like the BEDD software, a "social software and flirt tool". It is a localized chatting application with some extra functionality [7]. Another technique that can be used is ad hoc Wi-Fi, which uses the Wi-Fi antenna that can be found in a lot of mobile devices nowadays. It makes a direct connection between different mobile devices without the need for any infrastructure. It is also known as peer-to-peer mode and has a range of 400 meters [8]. Other, newer techniques are also possible, like Wi-Fi Direct [9] or FlashLinq [10]. These are both still new techniques and are not yet supported by a large range of mobile devices. They both take care of easier connection set-up between multiple devices and offer a bigger range than the solutions mentioned earlier. We can get a bigger range using the previous techniques by using every mobile device as a node in a bigger network topology. In this way every node works as an end station and as a routing device, to be able to route traffic through the network. But using this technique, also called a Mobile Ad Hoc Network (MANET), the topology can change a lot and is unpredictable. For instance, if the mobile device X in fig. 1 disappears or moves, then the connection between part A and part B will be lost.

B. Absolute Defined Positioning Techniques
To have an absolute defined position we need to place a mobile device at a certain location on the earth.
The most logical method would be to use the GPS sensor that is available in most smartphones today, but there are also other techniques available, like the home location register (HLR) in cellphones or the Geolocation API [11]. To get the current location of a mobile device one could use its GPS sensor; for this we can use the Geolocation API in HTML5 to fetch the coordinates of a certain device. It is important to note that the accuracy of a GPS position is around 10 meters under ideal circumstances.

Fig. 1. Unexpected splitting of one MANET into two different ones

The use of the HLR is not available without the carrier's permission, so we won't look into this any further. The same applies to the use of the Geolocation API without the GPS sensor on the mobile device, because the results are unsatisfying and depend on a network location provider server.

IV. EXTRA KNOWLEDGE
In this paragraph we will describe the extra knowledge that is needed to use the absolute defined positioning technique. To be able to use the location in a good way, a mobile device does not need its absolute position, but wants to know what is in its neighborhood. In this way the device can act according to what is happening around it. For the relative defined positioning technique no further knowledge is needed, because the devices in the neighborhood are already known and do not need to be retrieved anymore. But when we use the absolute defined positioning technique we need to know the relation between device A and device B using their coordinates. When we have the coordinates of a certain device A (with latitude A_lat and longitude A_long) and another device in an unknown location B (with latitude B_lat and longitude B_long) we can find the distance D between these two devices:

D = sqrt( (A_lat - B_lat)^2 + (A_long - B_long)^2 )   (1)

But this will give us a distance in degrees, which is hard to work with.
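Formula (1) amounts to a plain Euclidean distance on raw coordinates; a minimal sketch in C++ (the function name is ours, for illustration only):

```cpp
#include <cmath>

// Distance D, in degrees, between device A and device B,
// following equation (1): D = sqrt((Alat-Blat)^2 + (Along-Blong)^2).
double degreeDistance(double aLat, double aLong, double bLat, double bLong) {
    double dLat = aLat - bLat;
    double dLong = aLong - bLong;
    return std::sqrt(dLat * dLat + dLong * dLong);
}
```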
For example, if someone makes a chat client that displays messages only if people are closer than a hundred meters to each other, this won't give a good solution. Therefore we use a square with a certain size X (the length of the side of the square). In this way we can say that all the mobile devices in this square (B) around the sending device (A) are in the interest zone, and the devices (C) are not in our interest zone (fig. 2 is a visual representation of this). Here we define the interest zone as a certain physical area which covers devices that are closer to the sending device than others. For this we use the following formulas:

NLat = asin( sin(latitude) * cos(D/R) + cos(latitude) * sin(D/R) * cos(angle) )   (2)
NLong = longitude + atan2( sin(angle) * sin(D/R) * cos(latitude), cos(D/R) - sin(latitude) * sin(NLat) )   (3)

Fig. 2 Interest zone
Fig. 3 Finding formulas for T, U, V and W

These formulas are used to find the coordinates of a point (NLat, NLong) which lies a certain distance (D) from a point with known coordinates (latitude, longitude). The variable R is the radius of the earth and angle is the direction in which we measure the distance D. To be able to make a distinction between the mobile devices in the interest zone and those outside of it, we need to find the coordinates of the vertices P, Q, R and S in figure 3. But if we have the coordinates of the points P and R, or Q and S, this will be enough to define the square. This is the same as saying that we need the latitude of points T and V and the longitude of points W and U. (The latitude of T and V is the same as the latitude of respectively P and Q or S and R; the longitude of W and U is the same as the longitude of respectively S and P or Q and R.)
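Formulas (2) and (3) can be sketched as follows; the earth radius value and the names are assumptions for illustration. Choosing the angle as 0, 90, 180 or 270 degrees then yields the edge points T, U, V and W used below.

```cpp
#include <cmath>

// Assumed mean earth radius in meters (the text only names R, not a value).
const double kEarthRadius = 6371000.0;

// Destination point (NLat, NLong), in radians, at distance d (meters) and
// bearing `angle` (radians) from a known point (lat, lon), following
// formulas (2) and (3).
void destinationPoint(double lat, double lon, double d, double angle,
                      double& nLat, double& nLong) {
    double dr = d / kEarthRadius;  // angular distance D/R
    nLat = std::asin(std::sin(lat) * std::cos(dr) +
                     std::cos(lat) * std::sin(dr) * std::cos(angle));
    nLong = lon + std::atan2(std::sin(angle) * std::sin(dr) * std::cos(lat),
                             std::cos(dr) - std::sin(lat) * std::sin(nLat));
}
```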
To find the latitude of T and V and the longitude of W and U we use formulas for the points T, U, V and W:

TLat = asin( sin(latitude) * cos(D/R) + cos(latitude) * sin(D/R) )   (4)
VLat = asin( sin(latitude) * cos(D/R) - cos(latitude) * sin(D/R) )   (5)
ULong = longitude + atan2( sin(D/R) * cos(latitude), cos(D/R) * cos^2(latitude) )   (6)
WLong = longitude + atan2( -sin(D/R) * cos(latitude), cos(D/R) * cos^2(latitude) )   (7)

Here we find the latitude of T (TLat) and V (VLat) and the longitude of U (ULong) and W (WLong). For this we used formula 2 with 0 as the angle to get formula 4 and with 180 to get formula 5, and we used formula 3 to get formulas 6 and 7 with respectively 90 and 270 as the value for the angle. We also know that for formulas 6 and 7 the latitude and NLat stay the same, because the points are on the same horizontal as point A. So now we have found easy-to-use formulas to decide whether a certain mobile device is in the interest zone of a sender. Using this we can draw conclusions on whether a mobile device should receive certain information or not, or whether we should rank certain information higher than other information.

V. COMPARISON
In this paragraph we will compare the two techniques that one can use to define the position of a mobile device and look into their differences. Using the relative defined positioning techniques, all the connected devices are in the same region by definition, because of the limits on the range of the connection techniques. This limited range is the biggest disadvantage of this technique, but it is also the biggest advantage: unlike the absolute defined positioning technique, the relative technique needs no further calculations to draw conclusions about the interest zone. Using the absolute defined positioning technique we can easily draw conclusions about very big interest zones; for example, when making a localized chatting service we can easily use this for a location as big as a whole city or province. On the other hand, this won't work very well in a small area like a room or even a building. To send a message in a square with a side of 10 meters, the difference in degrees between the sender (A) and an edge point (T, U, V, W) will only be and this, together with the accuracy of a GPS, will be too small to draw good conclusions about an interest zone.

VI. CONCLUSION
We looked into two positioning techniques for location aware programming: relative defined positioning techniques and absolute defined positioning techniques. For the relative defined positioning technique we explained that it is easy to use, without further calculations needed, because of the limit on the range. For this technique one can use ad hoc Wi-Fi or Bluetooth connections these days, but new and promising techniques are coming in the near future. Using the absolute defined positioning technique, the location of a mobile device is determined using the GPS sensor in the device. With this technique we need to make calculations using the coordinates of different devices to be able to draw conclusions about interest zones. It is possible to use this technique for big locations; on the other hand it is impossible to use it in smaller locations like rooms or even buildings. We can conclude that, for good results in every use case, a combination of both relative and absolute defined positioning techniques is the better solution. In this way software is able to send messages at short range using the relative technique and at long range using the absolute technique.

ACKNOWLEDGMENT
The authors would like to express their gratitude to everyone who supported them during the period in which they wrote this paper, in particular Patrick Colleman, Tony Larsson and the Erasmus Programme.

REFERENCES
[1] B. Kölmel, "Location Based Advertising", 2002.
[2] J. H. Schiller, A. Voisard, Location Based Services. San Francisco: Elsevier.
[3] Y. H. Ho, Y. C. Wu, M. C. Chen, "PLASH: A Platform for Location Aware Services with Human Computation", IEEE Communications Magazine, December 2010.
[4] Qualcomm Innovation Center Inc., AllJoyn.
[5] J. C. Navas, T. Imielinski, "GeoCast: geographic addressing and routing", MobiCom '97: Proceedings of the 3rd Annual ACM/IEEE International Conference on Mobile Computing and Networking, 1997.
[6] Bluetooth Special Interest Group, "Building with the Technology".
[7] HiWave, "BEDD Social Software and Flirt Tool".
[8] Wi-Fi Alliance, "Windows Tips and Techniques for Wi-Fi Networks".
[9] Wi-Fi Alliance, "Wi-Fi CERTIFIED Wi-Fi Direct".
[10] Qualcomm Incorporated, "FlashLinq: Discover your wireless sense".
[11] A. Popescu, "Geolocation API Specification", Feb.

Fixed-size kernel logistic regression: study and validation of a C++ implementation
Alard Geukens 1, Peter Karsmakers 1,2 and Joan De Boeck 1
1 IBW, K.H. Kempen, B-2440 Geel, Belgium
2 ESAT-SCD/SISTA, K.U.Leuven, B-3001 Heverlee, Belgium

Abstract
This paper describes an efficient implementation of Fixed-Size Kernel Logistic Regression (FS-KLR) suitable for handling large scale data sets. Starting from a verified implementation in MATLAB, CPU load and memory usage are optimized using the C++ language. Since most of the computation time is spent on performing linear algebra, a number of existing linear algebra libraries are first summarized and empirically compared to each other. Finally, the MATLAB and C++ implementations of FS-KLR are compared in terms of speed and memory usage.

I. Introduction
In machine learning, classification is assigning a class label to an input object, mostly a vector.
This classification rule is found using a set of input vectors (x_1, y_1), ..., (x_N, y_N), where x is the input vector and y is a label denoting one of C classes. This is called training the model. Logistic Regression (LR) and its non-linear, kernel-based extension Kernel Logistic Regression (KLR) are well-known methods for classification that determine a-posteriori probabilities of membership in each of the classes based on a maximum likelihood argument [1]. Because large data sets need to be handled, a fixed-size variant of KLR is used to save resources. This is called Fixed-Size KLR (FS-KLR). An algorithm for FS-KLR is already implemented in MATLAB. The disadvantage of MATLAB is that you do not have full control of the internal workings, which leads to an implementation with suboptimal computational performance and memory usage. For this reason, an implementation in C++ is chosen here for better performance in terms of speed and memory usage. Because C++ is one of the most used programming languages, it is widely supported. It has very advanced compilers that can do many optimizations to produce efficient machine code, and because C++ does not use a garbage collector, there is much more control over the memory usage. It is known that most of the computation time in the FS-KLR algorithm is spent on performing linear algebra. This is why we search for the best linear algebra library for C++ and use it in the implementation. By doing this, we hope to outperform the MATLAB implementation. This paper is organized as follows. In Section II, a short overview of kernel-based learning is given. Next, the large scale KLR method is briefly reviewed for the binary case. Section IV summarizes the available linear algebra libraries. Then, in Section V, some implementation related issues are discussed. In the experimental section the MATLAB and C++ implementations are compared in terms of speed and memory usage. Section VII discusses the results.
Finally, we conclude in Section VIII and mention future work.

II. Kernel-based learning
Kernels are used to solve problems linearly in a feature space, since this is more efficient than solving them non-linearly in the original space. For example, Figure 1 shows a set of data points, each belonging to one of two classes, that needs to be separated. In the original input space this can only be done non-linearly, but via a feature map ϕ(.) the points are mapped to a different space where they can then be linearly separated.

Fig. 1. From input space to feature space.

If the learning problem is defined such that data vectors only appear in a dot product, then the feature map can be defined implicitly using a kernel function, which defines a kernel matrix Ω_ij = K(x_i, x_j) = ϕ(x_i)^T ϕ(x_j), i, j = 1, ..., N, where x ∈ R^D are the input vectors and ϕ(x) ∈ R^{D_ϕ} are the vectors mapped into feature space. Any valid kernel function K : R^D x R^D -> R corresponds with an inner product in a corresponding feature space as long as the function K is positive semi-definite [2]. In the experiments, the Radial Basis Function (RBF) kernel

K(x, x') = exp( -||x - x'||_2^2 / σ^2 )   (1)

is used, where σ is a tuning parameter.

III. Fixed-size kernel logistic regression
In order to make it easier for the reader, FS-KLR is explained for the binary case, so C = 2. A similar derivation for the multi-class case is given in [1]. The aim of logistic regression is to produce an estimate of the a-posteriori probability of membership in each of the classes for a given vector. Suppose we have a random variable (X, Y) ∈ R^D x {1, ..., C}, where D is the dimensionality of the input vectors and C is the number of classes. The posterior class probabilities for the binary case are estimated by a logistic model given as

P(Y = -1 | X = x; w) = 1 / (1 + exp(w^T x + w_0))
P(Y = 1 | X = x; w) = exp(w^T x + w_0) / (1 + exp(w^T x + w_0))   (2)

where w ∈ R^D. For a non-linear extension, the inputs x are first mapped to a different space.
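As an aside, the kernel matrix Ω for the RBF kernel (1) can be sketched in plain C++; this is a naive illustration for clarity, not the optimized BLAS-backed implementation discussed later:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Kernel matrix Omega_ij = K(x_i, x_j) for the RBF kernel of equation (1),
// K(x, x') = exp(-||x - x'||^2 / sigma^2), using plain nested loops.
std::vector<std::vector<double>> kernelMatrix(
        const std::vector<std::vector<double>>& X, double sigma) {
    std::size_t n = X.size();
    std::vector<std::vector<double>> omega(n, std::vector<double>(n));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            double sq = 0.0;  // squared Euclidean distance ||x_i - x_j||^2
            for (std::size_t d = 0; d < X[i].size(); ++d) {
                double diff = X[i][d] - X[j][d];
                sq += diff * diff;
            }
            omega[i][j] = std::exp(-sq / (sigma * sigma));
        }
    return omega;
}
```

Note that the resulting matrix is symmetric with a unit diagonal, as required for a positive semi-definite kernel.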
Hence x in (2) is replaced by ϕ(x). In [1] it is shown that the feature map can be approximated by

φ̂_i(x') = (1 / sqrt(λ_i^s)) Σ_{j=1}^{M} (u_i)_j K(x_j, x'),  i = 1, ..., M   (3)

to save computer resources, where the λ_i^s are the eigenvalues and the u_i the eigenvectors of the kernel matrix Ω. M is the size of a subsample of the N input vectors, called prototype vectors (PVs), which are randomly selected in the experiments. In the sequel, the φ̂(x) are augmented with a 1, such that the intercept term w_0 is incorporated in the parameter vector w for notational convenience. Equation (2) now becomes

P(Y = -1 | X = x; w) = 1 / (1 + exp(w^T φ̂(x)))
P(Y = 1 | X = x; w) = exp(w^T φ̂(x)) / (1 + exp(w^T φ̂(x)))   (4)

By choosing y_i ∈ {-1, 1}, (4) can be rewritten as

P(Y = y_i | X = x_i; w) = 1 / (1 + exp(-y_i w^T φ̂(x_i)))   (5)

which gives the same result as (4). The parameter w is inferred by maximizing the log likelihood

max_w l(w) = max_w Σ_{i=1}^{N} ln P(Y = y_i | X = x_i; w)   (6)

Because very flexible functions are considered, a penalized negative log likelihood (PNLL) is used, which adds a regularization term and results in

min_w l^ν = min_w (ν/2) w^T w - Σ_{i=1}^{N} ln P(Y = y_i | X = x_i; w)   (7)

where ν is a regularization parameter that needs to be tuned. Specializing (7) for KLR leads to the global objective function, which can be written as

min_w l^ν_K = min_w (ν/2) w^T w - Σ_{i=1}^{N} ln[ 1 / (1 + exp(-y_i w^T φ̂(x_i))) ]
            = min_w (ν/2) w^T w + Σ_{i=1}^{N} ln(1 + exp(-y_i w^T φ̂(x_i)))   (8)

In order to solve (8) a Newton trust region optimization method can be used. Here a tentative solution is iteratively updated by a step s^(k) as follows:

w^(k) = w^(k-1) + s^(k)   (9)

This step s^(k) is obtained by minimizing a second order Taylor approximation of l^ν_K, subject to a trust region constraint, which results in the following optimization problem:

a^(k) = min_{s^(k)} g^(k)T s^(k) + (1/2) s^(k)T H^(k) s^(k)  such that ||s^(k)|| <= Δ^(k)   (10)

at iterate w^(k) and with trust region Δ^(k), where H is the Hessian and g the gradient of the objective function (8).
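The posterior (5) and the PNLL objective (8) can be sketched as follows; the names and the plain-loop style are ours, for illustration only:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Posterior of the observed label, P(Y = y | X = x; w) from (5),
// for a mapped (and 1-augmented) feature vector phi.
double posterior(const std::vector<double>& w,
                 const std::vector<double>& phi, int y) {
    double a = 0.0;  // a = w^T phi
    for (std::size_t k = 0; k < w.size(); ++k) a += w[k] * phi[k];
    return 1.0 / (1.0 + std::exp(-y * a));
}

// Penalized negative log likelihood of (8):
// (nu/2) w^T w + sum_i ln(1 + exp(-y_i w^T phi_i)).
double pnll(const std::vector<double>& w,
            const std::vector<std::vector<double>>& phi,
            const std::vector<int>& y, double nu) {
    double obj = 0.0;
    for (std::size_t k = 0; k < w.size(); ++k) obj += 0.5 * nu * w[k] * w[k];
    for (std::size_t i = 0; i < phi.size(); ++i) {
        double a = 0.0;
        for (std::size_t k = 0; k < w.size(); ++k) a += w[k] * phi[i][k];
        obj += std::log(1.0 + std::exp(-y[i] * a));
    }
    return obj;
}
```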
The direction s^(k) is accepted if ρ^(k), the ratio of the actual reduction to the predicted reduction of the objective function, is large enough. To solve the constrained minimization problem of (10) a conjugate gradient method is used [1]. The gradient of (8) is given by

g = -Σ_{i=1}^{N} y_i φ̂(x_i) (1 - P(Y = y_i | X = x_i; w)) + νw   (11)

If we define Φ = [φ̂(x_1) φ̂(x_2) ... φ̂(x_N)]^T, p = [P(Y = y_1 | X = x_1; w); ...; P(Y = y_N | X = x_N; w)]^T and y = [y_1, ..., y_N]^T, then we can write

g = Φ^T q + νw   (12)

where q_i = y_i (p_i - 1), i = 1, ..., N. The Hessian of (8), in case i equals j, is given by

∂²l^ν_K(w) / ∂w_i ∂w_j = Σ_{i=1}^{N} φ̂(x_i) φ̂(x_i)^T P(Y = y_i | X = x_i; w)(1 - P(Y = y_i | X = x_i; w)) + ν   (13)

for i, j = 1, ..., N. If i is not equal to j, the same result is obtained but without the term ν. Define v_i = P(Y = y_i | X = x_i; w)(1 - P(Y = y_i | X = x_i; w)) and V = diag(v_1, ..., v_N). The Hessian is then given by

H = Φ^T V Φ + νI   (14)

Details about the derivations of g and H are given in Appendix A. Algorithm 1 summarizes the FS-KLR trust region algorithm.

Algorithm 1 FS-MKLR
1: Input: training data D = (x_i, y_i)_{i=1}^N, M
2: Parameters: w^(k)
3: Output: probabilities Pr(Y = y_i | X = x_i; w_opt), i = 1, ..., N, where w_opt is the converged parameter vector
4: Initialize: k := 0, w^(0) = 0_CM
5: Define: g^(k) and H^(k) according to resp. (12) and (14)
6: PV selection
7: compute features Φ
8: repeat
9:   k := k + 1
10:  compute Pr(Y = y_i | X = x_i; w^(k-1)), i = 1, ..., N
11:  calculate g^(k)
12:  min_{s^(k)} g^(k)T s^(k) + (1/2) s^(k)T H^(k) s^(k) such that ||s^(k)|| <= Δ^(k)
13:  compute ρ^(k)
14:  w^(k+1) = w^(k) + s^(k)
15:  obtain Δ^(k+1)
16: until convergence

The features Φ in step 7 are the mapped input vectors computed by (3). The model hyper-parameters ν and σ are tuned by cross-validation with a grid search method [1].
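The gradient in the form of (12) and the Hessian-vector product implied by (14) can be sketched as follows; this is a naive illustration with our own names, whereas the actual implementation relies on BLAS:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;  // one row per mapped input vector (Phi)

// Gradient of (8) in the form of (12): g = Phi^T q + nu*w,
// with q_i = y_i (p_i - 1) and p_i = P(Y = y_i | X = x_i; w) from (5).
Vec gradient(const Mat& phi, const std::vector<int>& y, const Vec& w, double nu) {
    Vec g(w.size(), 0.0);
    for (std::size_t i = 0; i < phi.size(); ++i) {
        double a = 0.0;
        for (std::size_t k = 0; k < w.size(); ++k) a += w[k] * phi[i][k];
        double p = 1.0 / (1.0 + std::exp(-y[i] * a));  // p_i
        double q = y[i] * (p - 1.0);                   // q_i
        for (std::size_t k = 0; k < w.size(); ++k) g[k] += phi[i][k] * q;
    }
    for (std::size_t k = 0; k < w.size(); ++k) g[k] += nu * w[k];
    return g;
}

// Hessian-vector product H d = Phi^T (V (Phi d)) + nu*d from (14),
// computed without ever forming H; v holds the diagonal v_i = p_i (1 - p_i).
Vec hessVec(const Mat& phi, const Vec& v, double nu, const Vec& d) {
    Vec t(phi.size(), 0.0), out(d.size(), 0.0);
    for (std::size_t i = 0; i < phi.size(); ++i) {  // t = V * (Phi * d)
        for (std::size_t k = 0; k < d.size(); ++k) t[i] += phi[i][k] * d[k];
        t[i] *= v[i];
    }
    for (std::size_t i = 0; i < phi.size(); ++i)    // out = Phi^T * t
        for (std::size_t k = 0; k < d.size(); ++k) out[k] += phi[i][k] * t[i];
    for (std::size_t k = 0; k < d.size(); ++k) out[k] += nu * d[k];
    return out;
}
```

Computing H d as Φ^T(V(Φ d)) + νd never forms H explicitly, which is exactly the property exploited for large-scale multi-class data discussed next.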
Specifically in the multi-class case this Newton trust region approach is useful, since in that case the size of the Hessian, H ∈ R^{(D_ϕ+1)C x (D_ϕ+1)C}, is proportional to C and to the feature vector length D_ϕ + 1 [1]. For large scale multi-class data this matrix is too large to be stored. In a Newton trust region algorithm the Hessian is always used in a product with a vector d. This fact can be used to exploit the structure of the Hessian, so that storage of the full Hessian is not needed [1].

IV. Linear algebra libraries
Since the FS-MKLR method is mostly based on linear algebra, a short survey of the available C++ libraries is given.

A. BLAS and LAPACK
BLAS (Basic Linear Algebra Subprograms) is a set of standard routines for basic vector and matrix operations. The operations are subdivided into 3 levels: Level 1 BLAS are the routines for scalar and vector operations, Level 2 for matrix-vector and Level 3 for matrix-matrix operations. A higher level makes use of the lower levels; for example, a Level 3 operation also uses Level 1 and Level 2 BLAS. LAPACK (Linear Algebra Package) is a standard for routines such as the eigenvalue decomposition, which we'll need for our implementation. LAPACK makes use of the BLAS as much as possible, so the faster the BLAS routines are, the faster LAPACK will work. Both BLAS and LAPACK only define the functionality and interfaces; the actual implementation is not standardized [3], [4]. A reference BLAS written in Fortran77 and a reference LAPACK written in Fortran90 are available online. These do not support multithreading.

B. Optimized BLAS
There exist many implementations of the BLAS for different architectures, which all support multithreading. ATLAS (Automatically Tuned Linear Algebra Software) creates an optimized BLAS by automatically choosing the best algorithms for the architecture at compile time. ATLAS also contains a few LAPACK routines [5].
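To make the three BLAS levels concrete, here are naive counterparts of typical Level 1, 2 and 3 routines; real implementations such as DAXPY, DGEMV and DGEMM are heavily optimized, and these loops only illustrate the shape of each operation:

```cpp
#include <cstddef>
#include <vector>

using DVec = std::vector<double>;
using DMat = std::vector<DVec>;

// Level 1: vector operation, y := a*x + y (cf. DAXPY)
void axpy(double a, const DVec& x, DVec& y) {
    for (std::size_t i = 0; i < x.size(); ++i) y[i] += a * x[i];
}

// Level 2: matrix-vector product, y := A*x (cf. DGEMV)
DVec gemv(const DMat& A, const DVec& x) {
    DVec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j) y[i] += A[i][j] * x[j];
    return y;
}

// Level 3: matrix-matrix product, C := A*B (cf. DGEMM)
DMat gemm(const DMat& A, const DMat& B) {
    DMat C(A.size(), DVec(B[0].size(), 0.0));
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t k = 0; k < B.size(); ++k)
            for (std::size_t j = 0; j < B[0].size(); ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}
```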
GotoBLAS tries to use the memory as efficiently as possible. The library is optimized for the architecture it is compiled on, and the basic subroutines are developed in assembly language [6]. The Intel Math Kernel Library (MKL) contains Intel's implementation of BLAS and LAPACK, which is highly optimized for Intel processors. This library is not free, but an evaluation version can be downloaded. ACML (AMD Core Math Library) contains AMD's implementation of the BLAS and LAPACK; good performance is expected on AMD processors.

C. Other Libraries
Many other linear algebra libraries exist, such as uBLAS, Seldon and Lapack++, which all use an implementation of BLAS and LAPACK with a C interface. GPU implementations of BLAS can be found too, which can be much faster than the CPU implementations [7]. We will not use these because specific video cards are needed.

D. MATLAB
MATLAB uses BLAS for, among others, matrix-vector and matrix-matrix multiplication, and LAPACK for the eigenvalue decomposition. The release notes of MATLAB mention that, for an optimized BLAS on Windows systems, MATLAB uses MKL on Intel processors and ACML on AMD processors. The LAPACK used since MATLAB version 7.6 is a reference implementation, so it is not optimized.

V. FS-MKLR Implementation
It is known from analyzing the implementation in MATLAB that almost all time in the algorithm is spent in the eigenvalue decomposition and the matrix-vector and matrix-matrix multiplications. That is why, specifically for these operations, the fastest libraries are searched for and used in our implementation. MEX-files [8] are used to have a MATLAB interface. This way data can easily be entered from MATLAB. It also makes it easier to test MATLAB and C++ with the same test data and compare the results.
The disadvantage is that the MATLAB software still needs to run the program, which takes some extra memory. The training of the model and the tuning of the hyper-parameters are put in two MEX-files. In the tuning function, multithreading is used at a high level: for different values of the hyper-parameter σ in the grid search, different threads are used, up to the number of cores the processor has. In the training function, multithreading is used in the BLAS and LAPACK libraries.

VI. Experiments
First, an empirical evaluation of BLAS and LAPACK libraries is performed in order to choose the fastest library for a set of individual operations. Later on, these libraries are used when comparing the MATLAB and C++ FS-KLR implementations in terms of computation speed and memory usage. Note that all measurements are averaged over 5 runs. Each of the experiments is performed on an Intel Core 2 Duo E GHz with 2 GB RAM, Windows XP SP2 and MATLAB R2009a (version 7.8). To compile the C++ code, the Visual C compiler is used. More experiments (including those on another architecture) can be found in [9].

A. BLAS and LAPACK
The tested BLAS libraries are: MKL, ACML, the reference BLAS, GotoBLAS version 1.26, Lapack++, uBLAS and the BLAS library of MATLAB R2009a (version 7.8). Note that ATLAS is not tested, as we did not succeed in compiling it on Windows. The BLAS function for matrix-vector multiplication in double precision is DGEMV; for matrix-matrix multiplication it is DGEMM. The tested LAPACK libraries are: MKL, ACML, the reference LAPACK with GotoBLAS, and the LAPACK library of MATLAB R2009a (version 7.8). For the eigenvalue decomposition, the routine DSYEV is used. This function performs the eigenvalue decomposition of a symmetric matrix, which will always be the case in the FS-KLR algorithm; MATLAB uses this function automatically when the matrix is symmetric. All the matrices in the tests are square, and the time of the operations is measured for different matrix sizes. The libraries are only tested single-threaded, as we wish to multithread at a higher level in the implementation.

B. FS-MKLR Implementation
The training set of the data set a9a is used for the time results. It consists of input vectors with 123 attributes that belong to 2 different classes. The time is measured while tuning and training with 1000, 2000, ... input vectors. The memory usage is measured while tuning and training on the training set of isolet, which consists of 6238 input vectors with 617 attributes belonging to 26 classes. For both implementations, the memory that the MATLAB software itself uses is not measured. In the future, when the implementation in C++ is totally independent of MATLAB, extra memory will be saved compared to MATLAB. In both tests the number of prototype vectors is 500 and 5 folds are used in the cross-validation.

VII. Results and Discussion
A. BLAS and LAPACK
The results of the tests on the BLAS and LAPACK libraries are shown in Figures 2, 3 and 4.

Fig. 2. Time results of matrix-vector product.
Fig. 3. Time results of matrix-matrix product.
Fig. 4. Time results of eigenvalue decomposition.

For this architecture MKL has the best BLAS and LAPACK library. The BLAS library of MATLAB performs almost the same, because it uses MKL too. But because MATLAB only uses a reference LAPACK, time can be won in the implementation for the eigenvalue decomposition. It is remarkable that the GotoBLAS library does not perform any better than the reference BLAS for the matrix-vector multiplication, though it performs only a bit worse than MKL for the matrix-matrix multiplication. It is obvious that Lapack++, Seldon and uBLAS are not optimized; that is also the reason why they were not tested anymore for the eigenvalue decomposition. ACML does well for the matrix-vector multiplication, but is almost four times slower than MKL for the matrix-matrix multiplication. For this architecture, MKL is chosen as the BLAS and LAPACK library. But as is indicated in [9], a different architecture can give other results. The best thing you can do is test for yourself which one performs best; using another library hardly needs any changes.

B. FS-MKLR Implementation
The time results are shown in Figures 5 and 6.

Fig. 5. Time results of tuning the hyper-parameters.
Fig. 6. Time results of training the model.

For a small number of input vectors, MATLAB is a bit faster single-threaded. But the more input vectors there are, the faster C++ gets in proportion to MATLAB; from 4000 input vectors on, the implementation in C++ is already faster. This is because the eigenvalue decomposition gets more important the more input vectors there are. With input vectors the tuning is already 7% faster and the training of the model 11%. With multithreading (using 2 threads) the tuning performs around 35% better than single-threaded. In MATLAB, the multithreading is done in the BLAS library for tuning and training. Because threads need to be created and deleted every time the BLAS library is called, the performance improvement is only around 15% for tuning. That is also the reason why the training of the model does not improve that much for the implementations in C++ (around 20%) and MATLAB (around 10%). The memory results for tuning are shown in Figure 7.

Fig. 7. Memory usage of tuning the hyper-parameters.

To keep the figure clear, only the first 10% of the tuning is shown; the further progress is about the same. The implementation in C++ needs on average 24 MB less than MATLAB and has a peak of MB; MATLAB has a memory peak of MB. The memory results for training the model are less good (Figure 8). Here C++ needs on average 13 MB more than MATLAB, with a peak of 142 MB.
The implementation in MATLAB has a peak of 124 MB.

Fig. 7. Memory usage of tuning the hyper-parameters.

Fig. 8. Memory usage of training the model.

VIII. Conclusion and Future Work

We can conclude that our C++ implementation outperforms that of MATLAB in terms of speed. The gain in speed of the C++ implementation over the MATLAB alternative increases as the number of training data points increases. The best libraries depend on the architecture they run on. Because MATLAB only uses the reference LAPACK, a better library is likely to be found on each architecture.

The memory usage is something that needs to be improved in the future. This can be done by removing matrices from memory as soon as they are no longer used. The next step is to make the implementation completely independent of MATLAB, so that the MATLAB software no longer needs to run, which saves memory. With a profiler, other slow operations in the implementation can be located and, where possible, further optimized.

Appendix A. Derivation of gradient and Hessian

The gradient of (8) is given by

g = \frac{\partial l_K^{\nu}(w)}{\partial w}
  = -\sum_{i=1}^{N} \frac{y_i \hat{\varphi}(x_i)\,\exp(-y_i w^T \hat{\varphi}(x_i))}{1 + \exp(-y_i w^T \hat{\varphi}(x_i))} + \nu w
  = -\sum_{i=1}^{N} \frac{y_i \hat{\varphi}(x_i)}{1 + \exp(y_i w^T \hat{\varphi}(x_i))} + \nu w   (15)
  = -\sum_{i=1}^{N} y_i \hat{\varphi}(x_i)\,\bigl(1 - P(Y = y_i \mid X = x_i; w)\bigr) + \nu w.

Differentiating once more and using y_i^2 = 1 gives

\frac{\partial^2 l_K^{\nu}(w)}{\partial w_i \partial w_j}
  = \sum_{i=1}^{N} \frac{\exp(y_i w^T \hat{\varphi}(x_i))\, y_i^2\, \hat{\varphi}(x_i)\hat{\varphi}(x_i)^T}{\bigl(1 + \exp(y_i w^T \hat{\varphi}(x_i))\bigr)^2} + \nu
  = \sum_{i=1}^{N} \hat{\varphi}(x_i)\hat{\varphi}(x_i)^T\, \frac{\exp(y_i w^T \hat{\varphi}(x_i))}{1 + \exp(y_i w^T \hat{\varphi}(x_i))} \cdot \frac{1}{1 + \exp(y_i w^T \hat{\varphi}(x_i))} + \nu
  = \sum_{i=1}^{N} \hat{\varphi}(x_i)\hat{\varphi}(x_i)^T\, P(Y = y_i \mid X = x_i; w)\bigl(1 - P(Y = y_i \mid X = x_i; w)\bigr) + \nu   (17)

for i, j = 1, ..., N. If i is not equal to j, the same result is obtained but without the term \nu.
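The closed form (15) is easy to verify numerically. The sketch below is our own illustration, not the FS-MKLR code; it assumes the loss behind (8) has the usual form l(w) = sum_i log(1 + exp(-y_i w^T phi_i)) + (nu/2) w^T w, and evaluates the gradient as -sum_i y_i phi_i (1 - P_i) + nu*w, which can be checked against a finite-difference derivative of the loss:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Regularized KLR loss, assuming l(w) = sum_i log(1 + exp(-y_i w^T phi_i))
// + (nu/2) w^T w, with labels y_i in {-1, +1}.
double klr_loss(const std::vector<double>& w,
                const std::vector<std::vector<double>>& phi,
                const std::vector<int>& y, double nu) {
    double l = 0.0;
    for (std::size_t i = 0; i < phi.size(); ++i) {
        double u = 0.0;
        for (std::size_t d = 0; d < w.size(); ++d) u += w[d] * phi[i][d];
        l += std::log(1.0 + std::exp(-y[i] * u));
    }
    for (double wd : w) l += 0.5 * nu * wd * wd;
    return l;
}

// Analytic gradient as in (15): g = -sum_i y_i phi_i (1 - P_i) + nu*w,
// where P_i = P(Y = y_i | X = x_i; w) = 1/(1 + exp(-y_i w^T phi_i)).
std::vector<double> klr_gradient(const std::vector<double>& w,
                                 const std::vector<std::vector<double>>& phi,
                                 const std::vector<int>& y, double nu) {
    std::vector<double> g(w.size(), 0.0);
    for (std::size_t i = 0; i < phi.size(); ++i) {
        double u = 0.0;
        for (std::size_t d = 0; d < w.size(); ++d) u += w[d] * phi[i][d];
        double P = 1.0 / (1.0 + std::exp(-y[i] * u));
        for (std::size_t d = 0; d < w.size(); ++d)
            g[d] -= y[i] * phi[i][d] * (1.0 - P);
    }
    for (std::size_t d = 0; d < w.size(); ++d) g[d] += nu * w[d];
    return g;
}
```

Comparing the analytic gradient with central differences of the loss confirms (15) on any small test problem.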
References

[1] Karsmakers P., Sparse kernel-based models for speech recognition, PhD thesis, Faculty of Engineering, K.U.Leuven (Leuven, Belgium), May 2010, 216 p., Lirias number:
[2] M. Aizerman, E. Braverman, L. Rozonoer, Theoretical foundations of the potential function method in pattern recognition, Automation and Remote Control, Vol. 25, pp. , 1964.
[3] L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, and R. C. Whaley, An Updated Set of Basic Linear Algebra Subprograms (BLAS), pp. .
[4] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, third edition.
[5] R. C. Whaley and A. Petitet, Minimizing development and maintenance costs in supporting persistently optimized BLAS, Software: Practice and Experience, Volume 35, Issue 2, 2005, pp. .
[6] K. Goto, R. A. Van De Geijn, Anatomy of High-Performance Matrix Multiplication, ACM Transactions on Mathematical Software, Vol. 34, 2008.
[7] Francisco D. Igual, Gregorio Quintana-Ortí, and Robert van de Geijn, Level-3 BLAS on a GPU: Picking the Low Hanging Fruit, FLAME Working Note 37, Universidad Jaume I, Depto. de Ingeniería y Ciencia de Computadores.
Technical Report DICC .
[8] Bai, Y., Using MEX-files to Interface with Serial Ports, The Windows Serial Port Programming Handbook, Auerbach Publications, USA, 2004, pp. .
[9] Geukens, A., Fixed-Size Kernel Logistic Regression, 2010.

The Hessian of (8) is defined by

H = \begin{bmatrix}
\frac{\partial^2 l_K^{\nu}(w)}{\partial w_0 \partial w_0} & \cdots & \frac{\partial^2 l_K^{\nu}(w)}{\partial w_0 \partial w_{D_\varphi}} \\
\vdots & & \vdots \\
\frac{\partial^2 l_K^{\nu}(w)}{\partial w_{D_\varphi} \partial w_0} & \cdots & \frac{\partial^2 l_K^{\nu}(w)}{\partial w_{D_\varphi} \partial w_{D_\varphi}}
\end{bmatrix}   (16)

Starting from equation (15), using d\left(\frac{f}{g}\right) = \frac{g\,df - f\,dg}{g^2} and the chain rule, we derive the Hessian for i = j:

\frac{\partial^2 l_K^{\nu}(w)}{\partial w_i \partial w_j} = \frac{\partial}{\partial w_j} \left( -\sum_{i=1}^{N} \frac{y_i \hat{\varphi}(x_i)}{1 + \exp(y_i w^T \hat{\varphi}(x_i))} + \nu w \right)

Uniformity tests and calibration of a scintillator coupled CCD camera using an AmBe neutron source

L. J. Hambsch 1,2, J. De Boeck 2, P. Siegler 1, R. Wynants 1, and P. Schillebeeckx 1
1 European Commission JRC-IRMM, Retieseweg 111, B-2440 Geel, Belgium
2 Katholieke Hogeschool Kempen, Kleinhoefstraat 4, B-2440 Geel, Belgium

Abstract — A neutron sensitive scintillator coupled to a CCD camera based image system has been tested with an AmBe neutron source. Using polyethylene slabs as moderator for the neutrons and a Cd foil as a collimator, a well defined circular neutron spot was created. By scanning different positions on the scintillator, the response of the whole system was measured and analyzed with respect to uniformity over the entire sensitive area. The results show that, due to the large amount of noise induced by the internal electronics, the camera is not suitable for experiments resulting in very low measurable intensities, but it has advantages compared to traditional photo plates when using stronger sources such as a linear accelerator.

I. INTRODUCTION

At IRMM a linear electron accelerator with a maximal energy of 150 MeV is used to produce neutrons with a white spectrum in a rotary, depleted uranium target. This target is placed in the center of a heavily shielded concrete hall with openings in a horizontal plane. Neutrons leave the target hall via evacuated aluminium tubes leading to the experimental areas.
Since those neutron beams travel long distances through the flight tubes, collimators and experimental equipment, it is important to determine the position of the neutron beam as well as the neutron density distribution of the beam spot. These neutron beam profiles are usually measured by exposing traditional photo plates to the gamma-ray flux of the beam. This is a rather time consuming process and gives only indirect and limited qualitative information on the neutron density distribution. The use of a scintillator coupled to a CCD camera to capture images of the neutron beam directly could be envisaged to replace the traditional photo plates. First of all, capturing a beam profile with a CCD camera is generally faster than with a photo plate, and no special processing such as wet chemical development is needed. Secondly, the biggest advantage of a scintillator coupled CCD camera is that the neutrons themselves are registered, whereas the photo plate registers only the gamma radiation presumably originating from the same source as the neutrons. This enables us to obtain images of beams where thick high-Z material filters are applied to fully suppress gamma radiation while still allowing neutrons to penetrate. Using photo plates, the measurement would require disassembling the beam line to remove the filter before taking a picture of the beam position. Finally, a CCD camera can be used multiple times and provides digital data directly to the user, making a quantitative analysis of the image possible. Still, the question remains whether the CCD camera is actually more accurate at capturing beam profiles, since the use of electronics automatically brings additional sources of noise to the data, which are nonexistent with photo plates. To investigate the impact of these sources on the final result, we designed an experiment to verify the uniformity of the sensitivity of the camera.
In the following sections we demonstrate why the use of scintillator coupled CCD cameras in neutron beam profiling has advantages over the use of photo plates, but we also point out the limitations discovered during our experiments and the impact they have on our results, such as the noise introduced by the internal electronics.

II. MEASUREMENT SETUP

A lens coupled Coolview IDI neutron imager from Photonic Science was used in the present experiment. It provides an image with a resolution of 1392x1040 pixels covering an input area of 267 x 200 mm². The readout of the CCD has a depth of 12 bits, allowing the image to have theoretically 4096 different intensities. The scintillator used is a ⁶LiF/ZnS:Ag screen, 0.4 mm thick and protected by a 500 µm aluminium membrane. This membrane also shields the scintillator from natural light. The light from the scintillator then passes a 45 degree high reflectivity mirror and a close focus lens. This enables placement of the electronics outside of the beam to minimize scattering of incident neutrons. Before reaching the CCD chip, the light passes an intensifier which can be gated by an external TTL signal [2]. Figure 1 shows a sketch of this setup.
Figure 1. Schematic drawing of the insides of the used neutron camera system.

Since the calibration of the scintillator requires a constant intensity and a homogeneous distribution of the incoming neutron beam, the use of an AmBe neutron source was preferred over the IRMM Geel Electron Linear Accelerator (GELINA). The strongest source available at the laboratory was an AmBe source with an intensity of 37 GBq resulting in neutrons/s. This intensity already required careful safety measures concerning the neutron and gamma radiation. The AmBe source has by definition a constant neutron intensity, whereas an accelerator can show fluctuations of the neutron intensity as a function of time.

The problem with a source setup is that the emission of radiation is isotropic rather than a linear beam. This means that radiation is present everywhere and can cause a lot of scattering. To cope with this, the source was shielded by B₄C and lead on the back side, whereas on the front, towards the camera, a 12 cm thick paraffin layer was placed. This created a thermalized and homogeneous neutron flux. A 1 mm thick cadmium plate with a hole of 6 mm diameter was placed in front of the camera to create a small neutron beam. Figure 2 shows the experimental setup.

In order to perform a scan of the scintillator area of the camera, the source had to be moved across the sensitive area of the camera. Since the source with its shielding weighs more than 200 kg, we chose to move the camera instead. To achieve this, we designed a custom made table that allowed us to move the camera horizontally and vertically without changing the source configuration (Figure 2). With this setup, an array of 30 different positions was measured, resulting in a 5x6 point dataset. The points are separated 4 cm from each other horizontally and 3 cm vertically, covering an area of 24x15 cm² on the scintillator screen.

III. ANALYSIS

For the camera control and readout a LabVIEW program was developed. Starting from the basic image acquisition functions, the graphical user interface allows the user to control the camera settings and process the received data. The raw CCD images are corrected with dark frames in order to eliminate the static noise on the acquired images, consisting of amplifier glow and hot pixels. An algorithm was also implemented to eliminate random hot pixels showing up during acquisition. To extract quantitative data from the images, the possibility to add circular areas to the standard cursors has been implemented. This is necessary to reduce the statistical fluctuations between data points and to obtain an average intensity over an area instead.
The possibility to add or subtract different data sets and save the results has also been implemented, giving the user the basic tools to analyze and compare different datasets. The results of a 20 minute acquisition session from our test series can be seen in Figure 3. The spot resulting from our neutron source is barely noticeable near the center of the image, just above the noise level.

Figure 2. Experimental setup on a custom made table allowing movement of the camera while leaving the source setup untouched.

Figure 3. 1200 s acquisition time raw image of the AmBe beam spot. Intensities of the amplifier glow in the lower part reach over 800 counts/pixel, whereas the beam spot (inside the red circle) barely stands out from the background noise.

The intensity values are averaged over the data points that are added together. This was necessary to minimize the influence of statistical fluctuations present in the data.

Table 1. List of the 5x6 matrix with the values of the measured intensities in a radius of 50 pixels, covering the complete region of interest.

Figure 4. Image data after dark frame subtraction. Now the beam spot is clearly visible, showing an average of 40 counts/pixel.

In Figure 3 one can clearly notice the strong amplifier glow [3] located in the bottom corners, showing more than 800 counts/pixel compared to the unaffected region of only 200 counts/pixel. This has a large impact on our results, as the AmBe source provides only a small neutron intensity, requiring relatively long exposure times. In Figure 4, the dark frame has been subtracted and the hot pixels have been filtered out. The resulting image shows the beam spot much more clearly than the raw image and enables us to extract numerical data from it. Figure 6 shows a plot of the intensities for the 5x6 matrix scan based on the values of Table 1. Examining the graph, one can clearly notice a steep drop of the measured intensity at all the edges except the upper one.
This drop in intensity is attributed to the dark frame subtraction performed on the images to remove the amplifier glow induced on the image by internal components of the control electronics of the CCD chip. When comparing the raw image in Figure 3 with the intensity graph in Figure 6, the similarity between areas of noise and areas of low neutron intensity is evident.

Figure 6. Graphical representation of the intensity data presented in Table 1, clearly showing the drop in intensity at the edges.

Figure 5. Superposition of all data points placed next to each other in their correct positions.

Figure 5 shows a superposition of the full scan of the scintillator. The measured spot intensities of the scan are given in Table 1. Each dataset has an exposure time of 1200 seconds. To read out the numerical intensities, a circular area with a radius of 50 pixels was used.

Looking at these results from a different angle, one can conclude that with a lot of amplifier glow on the sides where the noise producing components are located, the small intensity of the captured neutrons is drowned in the noise of the amplifiers. Considering that the dark frame has intensities of more than 400 counts/pixel, whereas the AmBe neutron source intensities only reach a value of 40 counts/pixel (or 0.03 counts/pixel/s) at best, the camera is quite insensitive to low neutron intensities at the edges where the amplifiers are located.

Figure 7. LINAC beam image with 600 s exposure time and no filters in place, at a distance of 30 m from the source.

IV. DISCUSSION

Considering the relatively weak AmBe source used when compared to a linear accelerator beam, the camera shows in general a good sensitivity even to weak neutron sources. During our measurements it was shown that the camera produces a considerable amount of background noise, and this certainly needs to be taken into account when looking at low intensity results with respect to quantitative analysis of beam profiles.
However, for its main purpose, beam profiling, the intensities will be high enough that one can neglect the influence of the noise after image processing. As can be seen in Figure 7, the intensity was about 2.4 counts/pixel/s in the center of the collimation, and this after only 10 minutes of exposure with a beam from GELINA at 30 m distance from the source. Looking at Figures 4 and 7, we can compare the neutron intensities of the AmBe source with the neutron beam from GELINA at 30 m. From a quantitative analysis of the images we conclude that the AmBe source delivers 78 times fewer neutrons/cm²/s at the camera than the accelerator. Considering this, we can conclude that the camera serves its purpose well, as long as the neutron intensities are high enough to overcome the background noise induced by the camera. Hence, wherever a photo plate could be used in the past, the intensities are surely large enough to get a clear image with the new scintillator coupled CCD camera.

V. CONCLUSION

The linearity measurements with the AmBe source have given satisfactory results for the use of the neutron camera as a tool for beam profiling. The influence of noise, especially at low neutron intensities, implies limitations especially at the lower edges of the neutron sensitive area. As stated in the previous paragraph, the results of our experiment show that the small intensities produced by the AmBe source are drowned in noise and thus are strongly affected by the amplifier glow. In conclusion, the camera is best used with strong neutron sources such as those available at the GELINA beam, in order to obtain high intensities that stand out clearly from the background noise after image processing. For low neutron intensities, one should focus on the upper center of the sensitive screen to achieve the best results, because there the amplifier glow has the least influence.
For general beam profiling or object positioning, the camera is perfectly suited and has big advantages over traditional photo plates, especially because the time needed to get results is very short, being only a few minutes with the GELINA beam on. In the future, an activation analysis method using thin gold foils will be performed in parallel with the scintillator coupled CCD camera in order to compare the results of the measured neutron flux from the GELINA beam with the image intensities of the camera. This will allow for an absolute calibration of the measured neutron intensities.

REFERENCES
[1] Put, S. (2005, February 2). Accelerators and time-of-flight experiments. Geel, Belgium: BNEN.
[2] Photonic Science. (n.d.). Lens Coupled Coolview IDI Neutron Imager User Manual. United Kingdom: Photonic Science.
[3] Martinec, E. (2008, May 22). Noise, Dynamic Range and Bit Depth in Digital SLRs. Retrieved February 22, 2011, from
[4] Hambsch, L. J. (2011). Neutron Imaging using a CCD-Coupled Scintillator. Geel: KHK.

Ultra-Wideband Antipodal Vivaldi Antenna Array with Wilkinson Power Divider Feeding Network

K. Janssen 1, M. Strackx 2,3, E. D'Agostino 3, G. A. E. Vandenbosch 2, P. Reynaert 2

Abstract — This paper investigates a two element antipodal Vivaldi antenna array powered by an Ultra-Wideband (UWB) Wilkinson power divider feeding network. Firstly, a comparison is made between the performances of a FR4 and a Rogers RO3003 substrate. Next, a stacked antenna configuration is defined to improve the directivity. Furthermore, a feeding network is developed to obtain a uniform power split. These performances are analyzed by means of simulations and measurements on a standard planar technology. Finally, beam steering is simulated with a three element stacked antenna configuration.

Keywords — Ultra wideband (UWB), Vivaldi antenna array, Wilkinson power divider, beamsteering

I.
INTRODUCTION

Since the FCC authorized the use of Ultra-Wideband (UWB) technology in the 3.1-10.6 GHz range, there has been a wide research effort. UWB antennas are an important aspect of this research. Two main types of antennas can be considered, from a directional or omni-directional point of view. Communication systems provide messaging and controlling opportunities, and RADAR systems are used to navigate and to detect objects. Previous research distinguished UWB Vivaldi antennas from other short range communication methods [1, 2]. They mainly offer a large bandwidth and good impedance matching, and they are characterized by a wide pattern bandwidth and a usable gain. This research extends an UWB dual elliptically tapered antipodal slot antenna (DETASA), or enhanced antipodal Vivaldi antenna [2]. We present an UWB Vivaldi antenna array with a well-improved, small pattern bandwidth. This makes practical use in radar applications possible. Besides that, the antenna is suitable for analyzing building constructions [3] and for ground penetrating radar (GPR) [4]. The performance of this array construction also enables medical applications; remote measurement of heartbeats, breathing and subcutaneous bleeding are a few examples [5]. This wireless healthcare simplifies human monitoring. Based on [6], we have developed and measured an UWB Wilkinson power divider. Despite promising simulation results, this divider was not measured before. The design was altered to work on a Rogers RO3003 substrate. It operates with acceptable S-parameters in the UWB band and is characterized by matched output ports with high isolation. Furthermore, this study analyzes the scan performance of a Vivaldi array over the UWB frequency range. This paper investigates the simulation and development of an UWB Vivaldi array. First, Section II shows the results of an improvement of the antipodal Vivaldi antenna and an implementation on a Rogers substrate.
Section III describes an enhancement and realization of a two-way Wilkinson power divider. After that, Section IV defines a construction of a two-element UWB antenna array using two antipodal Vivaldi antennas in a stacked configuration. Results are shown for both measurements and simulations. Next, Section V investigates beam steering of a three element Vivaldi antenna array. Finally, conclusions are drawn in Section VI.

II. SINGLE-ELEMENT VIVALDI IMPROVEMENT

Over the last few years many designs and improvements have been published on UWB antennas [7, 8]. We define an antenna as UWB if it conforms to the following considerations. It requires an UWB signal with a fractional bandwidth greater than 20% of the center frequency or a bandwidth greater than 500 MHz [9]. The FCC regulated medical applications to achieve an S11 of -10 dB between 3.1 GHz and 10.6 GHz [9]. In this band an acceptable radiation pattern is assured. Next, the signal has to propagate like a short impulse over the frequency band of interest with minimum distortion. Furthermore, the FCC specified a power emission limit of -41.3 dBm/MHz [10]. UWB antennas can be specified in terms of geometry and radiation characteristics and are classified in two- or three-dimensional and directional or omnidirectional designs [11]. This research investigates the DETASA, which can be classified as a two-dimensional and directional antenna. It is characterized by a stable directional pattern and consistent impedance matching, which makes a point-to-point communication system possible [2]. This antenna mainly differs from other UWB antennas, like patch and horn antennas, by its large operational bandwidth, gain performance and size constraint. We consider that the minimal frequency limit is related to the antenna size, and the gain is directly related to the size of the antenna. We obtain radiation when a half wavelength is configured between the conducting arms, which determines the width of the antenna.
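The UWB definition quoted above reduces to a one-line test. A small sketch (the function name is ours):

```cpp
// UWB test per the definition above: fractional bandwidth greater than
// 20% of the centre frequency, or an absolute bandwidth above 500 MHz.
bool is_uwb(double f_low_hz, double f_high_hz) {
    double bw = f_high_hz - f_low_hz;
    double fc = 0.5 * (f_low_hz + f_high_hz);
    return bw / fc > 0.20 || bw > 500e6;
}
```

The regulated 3.1-10.6 GHz band has a fractional bandwidth of roughly 109%, far beyond the 20% threshold.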
The length can be specified as follows: a shorter length results in a steeper tapering, which implies a more abrupt impedance change and hence more reflections. The DETASA is a modified version of the antipodal tapered slot antenna (ATSA), also known as the Vivaldi antenna. It differs from the ATSA by the elliptically tapered inner and outer edges of the slotline conductors. This slotline has semicircular loadings at the ends, which improve the radiation performance and decrease the lowest operating frequency [11, 8]. The antipodal Vivaldi antenna contains a tapered radiating slot and a feeding transition. The same dimensions as reported in [2] were used, with the exception of the feeding strip length. We extended the feeding strip by 3.5 mm in order to take the connector dimensions into account, which results in an improved return loss. We simulated and measured the antipodal Vivaldi antenna on a Rogers RO3003 substrate with a thickness of 0.762 mm and a relative permittivity of 3. Note that the characteristic impedance of the feed depends on the substrate height and permittivity. The width of the strip line is calculated and optimized to 1.596 mm with CST Microwave Studio to achieve an excellent match for the S-parameters in the UWB band, which requires a minimum of -10 dB return loss. The simulation includes a 50Ω surface mounted assembly (SMA) connector instead of a waveguide port to feed the radiator. This realization integrates the effects of discontinuities and results in a better resemblance with the measurements. Furthermore, the legs of the connector are connected to each other by two vias soldered to the pads on the substrate. This reduces the interference of the connector and improves the realized gain at 7 GHz. Using a RO3003 substrate instead of FR4 improves the results. As shown in Fig.
1, both substrates show the same lower -10 dB cutoff frequency at 2.2 GHz, but an improved return loss is achieved with RO3003 for frequencies higher than 9 GHz. Only the RO3003 substrate satisfies the -10 dB requirement at higher frequencies. Furthermore, Fig. 2 shows the measured and simulated realized gain, which accounts for the losses. A higher gain could be achieved for frequencies from 10 GHz on by using the RO3003 substrate.

III. WILKINSON POWER DIVIDER

We also implemented a predefined Wilkinson power divider [6] which was not physically measured before. This component is designed as a 50Ω two-section version with stepped-impedance open-circuited (OC) radial stubs to achieve good operation within the UWB band. We used the same design methodology as described in [6]. However, during simulations it was necessary to increase the length of the transmission line by L0=1.05 cm to broaden the -10 dB bandwidth of S11 and to redefine the radius of the open stub to tune the upper cutoff frequency limit of S22 and S33. Unsatisfactory measurements caused by using through-hole SMA connectors at the divider's outputs required a redesign. This was realized by adding a transmission line at the output, which made the use of end launch SMA connectors possible. The length between the two outputs is altered to achieve the preferred element spacing, which will be explained in the next section. We designed the Wilkinson power divider with end launch SMA connectors on a Rogers RO3003 substrate. Fig. 4 compares the simulated and measured S-parameters. A return loss of the input (Port 1) and outputs (Port 2, Port 3) of less than -12 dB is guaranteed. It also shows an acceptable isolation parameter S23, which reveals a good power split, and a stable insertion loss in the range of -3 dB. The final length and width dimensions of the microstrips are shown in Table I.

Figure 1.
Simulated and measured return loss of the single element Vivaldi antenna on FR4 vs RO3003.

Figure 2. Simulated and measured realized gain of the single element Vivaldi antenna on FR4 vs RO3003.

The design visualized in Fig. 3 consists of a symmetric structure, so an equal power division is guaranteed. A 100Ω SMD805 isolation resistor is used to obtain satisfactory isolation between the output ports. We optimized the resistor microstrip to keep S22 and S33 below -10 dB up to a high operating frequency in the UWB band.

TABLE I. Geometric dimensions of the UWB Wilkinson power divider
(a) [mm]: W = 1.847, 0.836, 0.534, 1.847; L = 10.5, 7.237, 0.346, 1.789, 16.517; R = 0.862
(b) [mm]: W = 0.6, 1.847, 2.357, 0.4572; L = 0.673, 8.627, 5.38, 0.889; R = 0.4826, 0.

Figure 3. Layout of the UWB Wilkinson power divider.

In this configuration, the Wilkinson power divider is proposed to feed two antipodal Vivaldi antennas. This feeding network plays an important role in the performance of the antenna array. First we found an optimum inter-element spacing for the 2x1 antenna array by simulating it in CST Microwave Studio. This optimum results in a constructive interference pattern and determines the -3 dB beamwidth. It specifies the distance between both output lines of the power divider. The Wilkinson power divider measures essentially 49 mm x 30 mm; an extension of the substrate was applied for construction reasons.

IV. VIVALDI ARRAY

Many UWB applications, like radar systems, demand an antenna with a small directive beamwidth. Examples with strong potential use are through-wall imaging, detection of traumatic internal injuries, fall detection and imaging of the human body without direct skin contact [5]. These wideband technologies can be developed with Ultra-Wideband Vivaldi arrays. By implementing an array construction, a high directivity can be achieved. In this study we designed an UWB array with two antipodal Vivaldi antennas.
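For reference, an ideal lossless equal-split two-way divider delivers half the input power to each output, i.e. |S21| = |S31| = 1/sqrt(2), or about -3.01 dB; the measured insertion loss of roughly -3 dB approaches this bound. A quick check (our own sketch):

```cpp
#include <cmath>

// Ideal equal power split of a lossless two-way divider expressed in dB:
// each output carries half the power, so |S21| = 1/sqrt(2).
double ideal_split_db() {
    return 20.0 * std::log10(1.0 / std::sqrt(2.0));
}
```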
To construct an interference pattern with a fixed beam, the spacing between the two elements had to be calculated. To suppress grating lobes and achieve a higher directivity, the element spacing is determined with the parametric optimizer tool of CST. A design goal was set to obtain a maximum integrated realized gain in the frequency range 3-11 GHz, as shown in Fig. 5. This resulted in an inter-element spacing of 38.4 mm.

Figure 4. Simulated and measured S-parameters of the UWB Wilkinson power divider.

Figure 5. Optimization of the element spacing related to the integrated realized gain.

The antipodal Vivaldi antenna array and the UWB Wilkinson power divider were combined as shown in Fig. 6. Measurements depicted in Fig. 7 show a narrowing of the normalized beamwidth at 10 GHz due to the array configuration. The 3 dB beamwidth of the E-plane remains the same, but the 3 dB beamwidth of the H-plane is reduced by half. This results in an improved gain.

Figure 6. Two element antipodal Vivaldi antenna array with UWB Wilkinson power divider.

Figure 7. Measured radiation patterns in E/H-plane of one element vs array at 10 GHz.

V. BEAMSTEERING

Radar and remote sensing applications request a small beam and a high scan angle. We previously studied a two element Vivaldi array, which improved the beamwidth and the directivity. This part investigates the design of an UWB scanning array. To achieve good scan performance, a three element Vivaldi array has been built in the same way as before.
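Grating-lobe-free scanning constrains the element spacing through the standard phased-array criterion d_x <= lambda / (1 + sin(theta0)), evaluated at the upper frequency limit [12]. A quick numerical check (our own sketch; the function name and constants are illustrative):

```cpp
#include <cmath>

// Maximum element spacing that keeps the nearest grating lobe at the
// horizon when scanning to theta0: d_x <= lambda / (1 + sin(theta0)).
double max_spacing_mm(double f_hz, double theta0_deg) {
    const double c = 299792458.0;                 // speed of light [m/s]
    const double pi = 3.14159265358979323846;
    double lambda = c / f_hz;                     // wavelength [m]
    return 1000.0 * lambda / (1.0 + std::sin(theta0_deg * pi / 180.0));
}
```

At f0 = 10.6 GHz and theta0 = 22.5 degrees this evaluates to about 20.5 mm, and for a 90 degree scan the bound reduces to one-half wavelength.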
The element spacing d_x, which determines a fixed beam with the nearest grating lobe at the horizon, follows from the criterion [12]

d_x \le \frac{\lambda}{1 + \sin\theta_0}

at the upper frequency limit (f_0 = 10.6 GHz) and for a scan angle θ_0 = 22.5°. The spacing is limited to one-half wavelength for a maximum scan angle of 90°. This results in a maximum of 20.5 mm. A further reduction of the distance was made in order to eliminate array blindness, resulting in a distance of 10 mm.

Beam steering with an UWB signal is not an obvious task. It is difficult to achieve a high scan range with suppression of side lobes over the whole frequency range (3.1-10.6 GHz). Typically, a phase shift or an increased frequency goes together with an improved directivity, but also with the development of grating lobes. Oversteering results in an asymmetric main lobe and, furthermore, a side lobe with a higher magnitude relative to the main lobe. These issues can be derived from Table II, which gives an overview of the magnitude (Magn.), scan angle (α), beamwidth (θ) and side lobe level, for simulated radiation patterns at 4, 7 and 10.6 GHz with the antenna elements individually phase shifted over 0, 40, 80 and 100 degrees. The given side lobe level is relative to the magnitude of the main lobe.

TABLE II. Beam steering of a three element Vivaldi array at 4, 7 and 10.6 GHz: for each frequency, the columns give Magn. [dB], α [°], θ3dB [°] and sidelobe level [dB], for phase shifts of 0, 40, 80 and 100 degrees.

VI. CONCLUSION

Based on an improved UWB dual elliptically tapered antipodal slot antenna, we showed that it is beneficial to use the Rogers RO3003 substrate for frequencies above 9 GHz. For lower frequencies, a standard FR4 substrate offers equal performance. We implemented a stacked two element configuration and achieved a reduction of the 3 dB beamwidth in the H-plane by half.
The element spacing was optimized to yield a maximum realized gain while limiting crosstalk. The developed Wilkinson power divider feeding network achieved an equal power split, reflection losses lower than -12dB and an insertion loss in the range of -0.15dB to -0.96dB. It is shown that the simulations correspond with the measurements over the UWB band. Finally, we steered the beam of a stacked three element configuration over the UWB range. A satisfactory scan angle of 34° at 10.6GHz was obtained thanks to an accurate element spacing along with sufficient suppression of grating lobes.

REFERENCES
[1] P. J. Gibson, The Vivaldi Aerial, Proc. 9th European Microwave Conf., Brighton, U.K.
[2] E. Li, H. San and J. Mao, The conformal finite-difference time-domain analysis of the antipodal Vivaldi antenna for UWB applications, 7th International Symposium on Antennas, Propagation & EM Theory, Guilin.
[3] I. Y. Immoreev, Practical application of ultra-wideband, Ultrawideband and Ultrashort Impulse Signals, Sevastopol, Ukraine.
[4] Y. J. Park et al., Development of an ultra wideband ground-penetrating radar (UWB GPR) for nondestructive testing of underground objects, Antennas and Propagation Society International Symposium.
[5] C. N. Paulson et al., Ultra-wideband Radar Methods and Techniques of Medical Sensing and Imaging, SPIE International Symposium on Optics East, Boston, MA, United States.
[6] O. Ahmed and A. R. Sebak, A modified Wilkinson power divider/combiner for ultrawideband communications, International Symposium on Antennas and Propagation Society, APSURSI, IEEE, Charleston, SC.
[7] O. Javashvili and D. Andersson, New method for Design Implementation of Vivaldi Antennas to Improve its UWB Behaviour, EuCAP 2010, Sweden.
[8] X. Qing, Z. N. Chen and M. Y. W. Chia, Parametric Study of Ultra-Wideband Dual Elliptically Tapered Antipodal Slot Antenna, Institute for Infocomm Research, Recommended by Hans G.
Schantz, Singapore.
[9] Code of Federal Regulations: Part 15 Subpart F, Ultra-Wideband Operation, Federal Communications Commission.
[10] K. P. Ray, Design Aspects of Printed Monopole Antennas for Ultra-Wide Band Applications, SAMEER, IIT Campus, Hill Side, Powai, Mumbai, India, Recommended by Hans G. Schantz.
[11] B. Allen et al., Ultra-Wideband Antennas And Propagation For Communications, Radar And Imaging, New York: Wiley.
[12] R. J. Mailloux, Phased Array Antenna Handbook, Second edition, Boston, London: Artech House.

Identification of directional causal relations in simulated EEG signals

Koen Kempeneers 1,2, A. Ahnaou 3, W.H.I.M. Drinkenburg 3, Geert Van Genechten 4, Bart Vanrumste 1,5
1 KHKempen, Associatie KULeuven, Kleinhoefstraat 4, 2440 Geel, Belgium
2 Damiaaninstituut VZW, Pastoor Dergentlaan 220, 3200 Aarschot, Belgium
3 Janssen Pharmaceutica NV, Turnhoutseweg 30, 2340 Beerse, Belgium
4 ESI-CIT group, Diepenbekerweg 10, 3500 Hasselt, Belgium
5 KULeuven - ESAT, Kasteelpark Arenberg 10, 3001 Heverlee, Belgium

Abstract: Many researchers rely on standard, multiple and partial coherences to identify causal relations in signal processing. Furthermore, Directed Transfer Functions (DTF) and Partial Directed Coherences (PDC) are used to discriminate directed causal relations in recorded EEG data. In this paper these algorithms for identifying causal relations in neural networks are reviewed. In particular, the effects of volume conduction on the yielded results are examined. When fed simple simulated models generated by software that takes the effects of volume conduction into account, both DTF and PDC show themselves to be susceptible to the effects of volume conduction.

Index Terms: EEG, Causal flow, Coherence, Directed Transfer Functions, DTF, Multiple Coherence, Partial Coherence, Partial Directed Coherence, PDC, Volume Conduction

I.
INTRODUCTION

Electro-encephalography measures field potentials from firing neurons in the brain. In research on brain diseases and medication against these diseases, the electro-encephalogram (EEG) offers valuable diagnostic information with high temporal resolution. EEG signals allow discrimination of the nature and location of a brain defect for a number of illnesses. Some other encephalopathies, such as schizophrenia and Alzheimer's, however, cannot be linked to a single location in the brain. It is believed that the latter diseases are characterized, or might even be caused, by failures in the rhythmic synchronization of neuron clusters. Therefore, research into temporally shifted rhythmic synchronization of EEG signals is carried out. Over time several methods have been used. In 1969, Granger causal analysis [6] offered the first formal method to investigate causal relations in the frequency domain. The concept was widely adopted and used not only for the intended econometric purposes but also in other domains of research. Seth [18] provides a MATLAB toolbox under the GNU General Public License for Granger causal analysis. In 1991, Kamiński and Blinowska defined the Directed Transfer Function [9] based on Granger frequency domain causal analysis. Their adaptation expands Granger causal analysis from the bivariate autoregression process Granger described to a multivariate one. In 2000, Baccalá and Sameshima defined Partial Directed Coherence [2], which has since been widely adopted by researchers [16][21]. In this paper the latter two algorithms for identifying causal relations in neural networks will be reviewed. In particular we will examine whether they are able to withstand the effects of volume conduction and still produce accurate results when stressed. This critical reflection on the usability of these algorithms for effective analysis of EEG data is carried out with data coming from a simulated spherical head model with an electrode system.

II.
METHODS

Standard, multiple and partial coherences, Directed Transfer Functions (DTF) and Partial Directed Coherences (PDC) were developed to discriminate directed causal relations in EEG data. These algorithms rely on multivariate autoregression (MVAR) analysis. In fact, they only differ in the way the Fourier-transformed regression parameters are interpreted. The coherence functions distinguish themselves from DTF and PDC in that they do include spectral power, where DTF and PDC do not. Spectral power is derived from the spectral matrix, calculated from the Fourier-transformed regression parameters and the noise covariance matrix of the residual noise sources.

Standard coherence is the Fourier transform of the normalized time domain cross correlation function and quantifies to what extent two functions relate to each other in the frequency domain. The standard coherence function is a real function of frequency whose result is confined to the range 0 to 1. A result of 0 indicates no coherence whatsoever; a coherence of 1 means the first time series can be written as a linear combination with constant coefficients of the second time series and vice versa. Values in between signify either a contamination of both time series with noise, an other-than-linear relation between both time series, or the fact that the second time series may be written as a combination of the first time series and another input. When dealing with multiple time series, as EEG does, multiple coherence portrays the coherence of any one channel in relation to all analyzed time series. Its result is again confined to the range 0 to 1. Partial coherence, on the other hand, returns the coherence of just two channels in a multichannel system. Partial coherence will only return a result close to 1 if and only if those two time series alone display the same spectral coherence.
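As an illustration of standard coherence being confined to [0, 1], the sketch below estimates it for two noisy, phase-shifted sinusoids by segment-averaged spectra. This is our own toy estimator, not the MVAR-based computation the paper uses; the test signals are hypothetical.

```python
import numpy as np

def coherence(x, y, nseg=8):
    """Magnitude-squared coherence via segment-averaged spectra.
    Returns values in [0, 1]; a value near 1 means y is close to a
    linear function of x at that frequency."""
    n = len(x) // nseg
    win = np.hanning(n)
    Sxx = Syy = Sxy = 0
    for k in range(nseg):
        X = np.fft.rfft(win * x[k * n:(k + 1) * n])
        Y = np.fft.rfft(win * y[k * n:(k + 1) * n])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

fs = 250.0                                  # hypothetical EEG-like sample rate
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
y = 0.8 * np.sin(2 * np.pi * 10 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
C = coherence(x, y)
freqs = np.fft.rfftfreq(t.size // 8, 1 / fs)
print(C[np.argmin(np.abs(freqs - 10))] > 0.9)  # high coherence at 10 Hz
```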
The Directed Transfer Function, first defined by Kamiński and Blinowska [9][10][11], offers supplemental information with respect to the coherence functions. DTF allows discrimination of the direction of causal flow, where coherences simply identify the causal relation. Although DTF allows the identification of causal sources and causal sinks, a causal relation identified by DTF does not necessarily imply that the relation is a direct one. That is where Partial Directed Coherence distinguishes itself from DTF. Like DTF, PDC does not use the spectral power matrix derived from the Fourier transform of the regression coefficients and the noise covariance matrix of the residual noise sources. PDC, however, does differentiate between direct and indirect causal relations in time series.

A. Theoretical considerations

In all of the algorithms under review the model is described using an appropriate multivariate autoregression process, shown in (1) for a bivariate time series:

x_1(t) = \sum_{i=1}^{p} A_{11}(i) x_1(t-i) + \sum_{i=1}^{p} A_{12}(i) x_2(t-i) + E_1(t)
x_2(t) = \sum_{i=1}^{p} A_{21}(i) x_1(t-i) + \sum_{i=1}^{p} A_{22}(i) x_2(t-i) + E_2(t)    (1)

where x_1 are elements of the first time series and x_2 are elements of the second series. When these equations are transformed to the Fourier domain they yield:

X_1(f) = \sum_{i=1}^{p} A_{11}(i) e^{-j2\pi f i} X_1(f) + \sum_{i=1}^{p} A_{12}(i) e^{-j2\pi f i} X_2(f) + E_1(f)
X_2(f) = \sum_{i=1}^{p} A_{21}(i) e^{-j2\pi f i} X_1(f) + \sum_{i=1}^{p} A_{22}(i) e^{-j2\pi f i} X_2(f) + E_2(f)    (2)

Rewriting the equations in matrix form, with \bar{A}_{kl}(f) = \delta_{kl} - \sum_{i=1}^{p} A_{kl}(i) e^{-j2\pi f i}:

[ \bar{A}_{11}(f)  \bar{A}_{12}(f) ] [ X_1(f) ]   [ E_1(f) ]
[ \bar{A}_{21}(f)  \bar{A}_{22}(f) ] [ X_2(f) ] = [ E_2(f) ]    (3)

A(f) X(f) = E(f)    (4)

or, inverting the coefficient matrix,

X(f) = A(f)^{-1} E(f)    (5)

X(f) = H(f) E(f)    (6)

Equation (6) shows that, provided the residual noise sources contain uncorrelated white noise, the Fourier-transformed regression coefficients fully describe the actual data. Therefore all dependencies may be calculated from the system transfer matrix H, where the system transfer matrix H is the inverse of the matrix A.
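Equations (4)-(6) translate directly into code: build the Fourier-transformed coefficient matrix from the lag matrices and invert it. The sketch below is our own, with a hypothetical bivariate AR(1) model in which channel 2 is driven by channel 1.

```python
import numpy as np

def transfer_matrix(coeffs, f, fs):
    """H(f) = A(f)^-1 for an MVAR model x(t) = sum_i A_i x(t-i) + e(t).
    coeffs: lag matrices of shape (p, k, k); f and fs in Hz."""
    p, k, _ = coeffs.shape
    A_f = np.eye(k, dtype=complex)          # delta_kl term of equation (3)
    for i in range(1, p + 1):
        A_f -= coeffs[i - 1] * np.exp(-2j * np.pi * f * i / fs)
    return np.linalg.inv(A_f)               # equations (5) and (6)

# Hypothetical AR(1): influence from channel 1 to 2, none the other way
A1 = np.array([[[0.5, 0.0],
                [0.4, 0.5]]])
H = transfer_matrix(A1, f=10.0, fs=250.0)
print(abs(H[1, 0]) > abs(H[0, 1]))          # causal flow only from 1 to 2
```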
From the system transfer matrix H and the noise covariance matrix V of the white noise sources one may calculate the spectral power matrix S:

S(f) = H(f) V H^{*}(f)    (7)

The asterisk denotes complex conjugate matrix transposition. For a trivariate autoregression process the spectral power matrix is:

       [ S_{11}(f)  S_{12}(f)  S_{13}(f) ]
S(f) = [ S_{21}(f)  S_{22}(f)  S_{23}(f) ]    (8)
       [ S_{31}(f)  S_{32}(f)  S_{33}(f) ]

In the spectral power matrix the diagonal elements contain the respective time series' power spectral densities; the off-diagonal elements contain the cross power spectral densities. Standard coherence is calculated from the squared cross power spectral density G_{xy}, normalized by the product of the respective power spectral densities G_{xx} and G_{yy}:

C_{xy}^2 = |G_{xy}|^2 / (G_{xx} G_{yy})    (9)

It is easily understood that ordinary coherence may also be calculated from the spectral power matrix elements previously calculated:

C_{ij}^2(f) = |S_{ij}(f)|^2 / (S_{ii}(f) S_{jj}(f))    (10)

where S_{ij} is the element of the i-th row and j-th column of the spectral power matrix calculated with (7). Equation (10) is therefore equivalent to (9). Literature states that a squared coherence value larger than 0.5 denotes a significant causal relation [7]. Multiple coherences, on the other hand, are calculated using:

G_i^2(f) = 1 - \det(S(f)) / (S_{ii}(f) M_{ii}(f))    (11)

while partial coherences are calculated using:

\kappa_{ij}^2(f) = |M_{ij}(f)|^2 / (M_{ii}(f) M_{jj}(f))    (12)

where M_{ij} is the minor taken from the spectral power matrix with row i and column j removed. DTF is defined using only the elements of the system transfer matrix H defined in (5)(6). DTF can be explained in terms of Granger causality; in fact, when applied to a bivariate time series, DTF is exactly the same as pairwise spectral Granger causality. The non-normalized DTF is

\theta_{ij}^2(f) = |H_{ij}(f)|^2    (13)

and the corresponding pairwise Granger causality measure can be expressed through the Fourier-transformed coefficient matrix \bar{A}(f) (14). Since non-normalized properties are hard to quantify, normalized DTF was defined.
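Equations (7) and (10) can be sketched as follows; the transfer matrix and unit noise covariance are hypothetical stand-ins.

```python
import numpy as np

def coherence_matrix(H, V):
    """Squared ordinary coherence, eq. (10), from the spectral
    matrix S(f) = H V H*, eq. (7)."""
    S = H @ V @ H.conj().T
    power = np.real(np.diag(S))                      # auto-spectra S_ii
    return np.abs(S) ** 2 / np.outer(power, power)   # |S_ij|^2 / (S_ii S_jj)

H = np.array([[1.0, 0.0],
              [0.8, 1.0]], dtype=complex)   # hypothetical transfer matrix
V = np.eye(2)                               # uncorrelated unit-variance noise
C2 = coherence_matrix(H, V)
print(np.round(C2, 3))                      # diagonal is 1 by construction
```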
Normalized DTF (15) returns spectral Directed Transfer Functions confined to the 0 to 1 range, which is particularly convenient because of the similar property of the coherence functions:

\gamma_{ij}^2(f) = |H_{ij}(f)|^2 / \sum_{m=1}^{k} |H_{im}(f)|^2    (15)

An inconvenience of DTF, however, is that any revealed causal relation does not necessarily imply that the causal relation is a direct one. For that matter, Kamiński and Blinowska define the direct causal influence [10]. This property of the multivariate autoregression process may be viewed as a direct time domain causal property. Unfortunately, literature does not offer algorithms to normalize or correctly quantify the direct causal influence. This leads up to the definition of the last algorithm for identifying directional causal relations under review in this paper. Baccalá and Sameshima define partial directed coherence [2]:

|\pi_{ij}(f)|^2 = |\bar{A}_{ij}(f)|^2 / (\bar{a}_j^{*}(f) \bar{a}_j(f))    (16)

In this expression \bar{a}_j is the source channel column vector of the transformed coefficients. Partial directed coherence is calculated immediately from the Fourier-transformed regression coefficients and is therefore computationally less intensive than DTF and normalized DTF, which require a number of matrix inversions. It is not hard to see that partial directed coherence is closely related to pairwise Granger causality (13). Partial directed coherence plots the influence of a source channel towards the destination channel, normalized by the cumulated influence of that channel towards all channels.

B. Simulation models

Comparative analysis of the properties of the directed causal flow algorithms is carried out using a spherical approximation of the human head. Signals from an EEG electrode setup were simulated using software supplied by Vanrumste [23]. Subsequently, four slightly different models were simulated.
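In code, normalized DTF (15) row-normalizes the transfer matrix H, while PDC (16) column-normalizes the coefficient matrix Ā(f). The sketch below is our own, with a hypothetical coefficient matrix.

```python
import numpy as np

def normalized_dtf(H):
    """gamma_ij^2(f) = |H_ij|^2 / sum_m |H_im|^2, eq. (15)."""
    power = np.abs(H) ** 2
    return power / power.sum(axis=1, keepdims=True)

def pdc_squared(A_f):
    """|pi_ij|^2 = |A_ij|^2 / (a_j* a_j), eq. (16), column-normalized."""
    power = np.abs(A_f) ** 2
    return power / power.sum(axis=0, keepdims=True)

A_f = np.array([[1.0, 0.0],
                [-0.6, 1.0]], dtype=complex)  # hypothetical A(f)
H = np.linalg.inv(A_f)
print(np.round(normalized_dtf(H), 3))  # rows sum to 1
print(np.round(pdc_squared(A_f), 3))   # columns sum to 1
```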
Figure 1. The electrode setup on a human head [24]

Figure 2. The 4 second epoch used for comparative analysis of the directed causal flow algorithms. A 10Hz neuron cluster is located at [ ] in the left occipital lobe, with added noise.

The first simulation model simulates volume conduction from just one firing neuron cluster located at the base of the occipital lobe (Figure 2). The combined signal to noise ratio is 8.6dB.

Figure 3. The 4 second epoch used for the simulation of a second, unrelated neural cluster at a frequency of 30Hz. The second cluster's location is at [ ] and it kicks in after 1 second.

In the second model the electrode potentials from a second, unrelated cluster firing at a frequency of 30Hz are added to the first model. The unrelated cluster is located at [ ]; the combined signal to noise ratio is 9.3dB.

The third model is a simulated causal relation at a frequency of 10Hz (Figure 4). The source is located at [ ] at the base of the left occipital lobe, while the receiver cluster is located at [ ] near electrode P3. The temporal shift in the causal relation is kept at 4 lags at a 250Hz sample frequency (16ms). The fourth and final model features the same simple causal relation expanded with signals from a nearby cluster.

Figure 4. The 4 second epoch used for simulation of a causal relation. A second cluster at 10Hz is located at [ ] near electrode P3 (combined S/N ratio 12.2dB).

Figure 5. The 4 second epoch with a causal relation and a nearby unrelated cluster (combined S/N ratio 12.5dB).

III. RESULTS

Each model was fitted with a multivariate autoregression (MVAR) process; the model order was estimated using the Akaike information criterion [1].
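Order selection by AIC can be sketched as a least-squares MVAR fit whose log residual covariance determinant is penalized by the parameter count. This is our own toy implementation on simulated data, not the authors' code.

```python
import numpy as np

def fit_mvar(x, p):
    """Least-squares MVAR fit; x has shape (channels, samples).
    Returns the residual noise covariance of the order-p model."""
    k, n = x.shape
    past = np.vstack([x[:, p - i:n - i] for i in range(1, p + 1)])  # (k*p, n-p)
    target = x[:, p:]
    coef, *_ = np.linalg.lstsq(past.T, target.T, rcond=None)
    resid = target - coef.T @ past
    return np.cov(resid)

def aic(x, p):
    """Akaike information criterion for an order-p MVAR model."""
    k, n = x.shape
    sigma = fit_mvar(x, p)
    return np.log(np.linalg.det(sigma)) + 2.0 * p * k * k / n

# Simulated true order-1 bivariate system: channel 2 driven by channel 1
rng = np.random.default_rng(0)
e = rng.standard_normal((2, 2000))
x = np.copy(e)
for t in range(1, 2000):
    x[:, t] = np.array([[0.5, 0.0], [0.4, 0.3]]) @ x[:, t - 1] + e[:, t]
orders = range(1, 6)
scores = [aic(x, p) for p in orders]
best = list(orders)[int(np.argmin(scores))]
print(best)
```

For this simulated order-1 system the criterion should bottom out at a low order; on real EEG epochs the selected orders were 2 to 4, as Table I shows.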
Subsequently the residuals were tested for correlated noise, and model consistency according to Ding et al. [4] was calculated.

TABLE I. REALISTIC HEAD MODEL SIMULATION PROPERTIES

Model     MVAR process order   Residuals white noise   Model consistency
Model 1   2                    yes                     93.5%
Model 2   4                    yes                     94.0%
Model 3   2                    yes                     97.3%
Model 4   3                    yes                     96.6%

Subsequently the spectral transfer functions under review were calculated at 20 discrete frequencies in the range 0 to f_s/2. Since especially the causal flow is under investigation, the maxima of those calculations are plotted in a heat map. The columns in the heat map signify source channels; the rows are the receiver channels.

A. Coherence functions

It is no surprise that standard coherence functions reveal synchronized regions in the EEG electrode potentials. Figure 6 shows the heat map for the model 4 simulation. Notice the figure is symmetrical around the main diagonal.

Figure 6. Ordinary coherence maxima using simulation model 4; due to volume conduction there is no possible way to distinguish the O1-P3 causal relation.

In particular, those electrodes that are located close to the firing neuron clusters display significant standard coherence values.

Figure 7. Partial coherence maxima using simulation model 4; the O1-P3 causal relation remains undetected, and O1-T5 displays a near significant value due to the presence of an unrelated firing neuron cluster.

The partial coherence function shows fewer positives; in fact the only positive partial coherence revealed is a false positive due to volume conduction (P3-Pz). The same information is shown when the third model is simulated. That model incorporates the same causal relation but with the third, unrelated cluster removed. This illustrates that both ordinary (standard) coherence and partial coherence suffer from the effects of volume conduction when applied to EEG data recorded from scalp electrodes.

B. Directed transfer functions

Literature states that DTF is rather resistant to the effects of volume conduction [11].
Figure 8. DTF for simulation model 1; volume conduction signals show near significant directional causal relations where there are none.

Figure 8 shows the result of the model 1 simulation: volume conduction causes the algorithm to detect a near significant causal relation where there is none. Roughly the same results are returned by the second model simulation.

Figure 9. DTF for simulation model 2; notice the volume conduction signals show significant directional causal relations where there are none.

When an underlying directional causal system like model 4 is simulated, DTF seems to withstand the effects of volume conduction from unrelated neuron clusters far better. However, remember the model 4 system simulates a causal relation from near electrode O1 towards electrode P3. Figure 10 shows the heat map returned by normalized DTF when model 4 is analyzed: the causal relation is detected, but the detected direction is incorrect. The P3 to O1 directional relation shown in yellow is qualified as stronger than the O1 to P3 relation, which is obviously wrong. When the unrelated nearby cluster is removed, as in simulation model 3, the directional relation is correctly qualified. This leads to the conclusion that normalized DTF is to a certain extent susceptible to the effects of volume conduction.

Figure 10. Normalized Directed Transfer Function for simulated model 4. Notice the directional causal relation from O1 towards P3 is detected, but due to volume conduction of a nearby cluster the direction of the causal relation is incorrect.

C. Partial Directed Coherence

Figure 11. Partial directed coherence plotted for the first simulation model; volume conduction originating from one firing neuron cluster shows near significant directional causal relations.

Figure 12. Partial Directed Coherence maxima returned when analyzing the second model, with two unrelated active neuron clusters.
When the EEG scalp electrode potentials from one firing neuron cluster are analyzed using PDC, these potentials reveal near significant causal relations. As for DTF, the inclusion of a second active cluster nearby just barely amplifies the effect. Analyzing model 4 using PDC reveals comparable results. PDC does, however hardly significantly, identify the causal relation in the model, but fails to properly identify the source cluster of the relation in the presence of a nearby unrelated cluster (Figure 13).

Figure 13. PDC heat map for model 4; the P3 to O1 relation shows, incorrectly, to be more significant than the simulated O1 to P3 relation.

IV. DISCUSSION

This paper reviewed, among others, the coherence functions, the directed transfer functions and partial directed coherence. In particular, the usability of these functions to discriminate causal flow in recorded channels of EEG data was evaluated. We showed that all of these functions, while they do produce the desired results in isolated systems, suffer from the effects of volume conduction. Nevertheless, the partial coherence function showed itself able to distinguish between volume conduction and an underlying causal relation in our simulation with a spherical head model. However, partial coherence does not differentiate between transmitter and receiver in a causal relation and is therefore inadequate for fully identifying causal relationships in EEG data. The directed transfer functions, although they do not allow discrimination between direct and indirect causal relations, do allow the identification of the transmitter and receiver electrodes. But DTF seemed unable to do so in the presence of an underlying, unrelated, firing neuron cluster. Therefore DTF showed itself to be susceptible to the effects of volume conduction.
As for partial directed coherence: at first glance PDC seems to be reliable both in identifying transmitter and receiver electrodes and in discriminating between direct and indirect causal relations. PDC, however, showed itself unable to distinguish between transmitter and receiver electrode in the presence of the effects of volume conduction, thus showing PDC to be more susceptible to the effects of volume conduction than DTF. Nevertheless, our simulations showed both DTF and PDC to be able to identify brain regions involved in a causal relation. If researchers bear in mind that the identified causal flow is tainted by the effects of volume conduction, these functions may be used to identify causal relations. However, they should be aware that identifying a causal relation does not necessarily identify the underlying neural structure. We suggest that further research in this area incorporate blind source separation (BSS) techniques and independent component analysis (ICA), as they are currently being researched for brain computer interfacing (BCI) [8][3]. It is our belief that these techniques might be able to factor out the effects of volume conduction in brain causal analysis.

REFERENCES
[1] Akaike, Hirotugu: A New Look at the Statistical Model Identification (IEEE Transactions on Automatic Control, 1974)
[2] Baccalá, L. A., Sameshima, K.: Partial directed coherence: A new concept in neural structure determination (Biological Cybernetics, 2001)
[3] Delorme, A., Makeig, S.: EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis (Journal of Neuroscience Methods 134, 9-21, 2004)
[4] Ding, M.
et al.: Short-window spectral analysis of cortical event-related potentials by adaptive multivariate autoregressive modeling: data preprocessing, model validation, and variability assessment (Biological Cybernetics 2000; 83: 35-45)
[5] Eshel, Gidon: The Yule Walker Equation for the AR Coefficients
[6] Granger, Sir Clive W.J.: Investigating Causal Relations By Econometric Models And Cross-Spectral Methods (Econometrica 1969; 37)
[7] Hytti, H., Takalo, R., Ihalainen, H.: Tutorial on multivariate autoregressive modeling (Journal of Clinical Monitoring and Computing 2006)
[8] James, C.J., Wang, S.: Blind Source Separation in single-channel EEG analysis: An application to BCI (Signal Processing and Control Group, ISVR, University of Southampton, Southampton, UK)
[9] Kamiński, M., Blinowska, K.: A new method of the description of the information flow in the brain structures (Biological Cybernetics 1991; 65)
[10] Kamiński, M. et al.: Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance (Biological Cybernetics 85, 2001)
[11] Kamiński, M., Liang, H.: Causal Influence: Advances in Neurosignal Analysis (Critical Reviews in Biomedical Engineering 2005; 33(4))
[12] Kemp, B. et al.: A simple format for exchange of digitized polygraphic recordings (Electroencephalography and Clinical Neurophysiology 1992; 82)
[13] Korzeniewska, Anna et al.: Determination of information flow direction among brain structures by a modified directed transfer function (dDTF) method (Journal of Neuroscience Methods 2003; 125)
[14] NI LabVIEW Help, TSA AR Modeling
[15] Roberts, M.J.: Signals and Systems: Analysis Using Transform Methods and MATLAB (McGraw-Hill 2004)
[16] Schelter, B.
et al.: Testing for directed influences among neural signals using partial directed coherence (Journal of Neuroscience Methods, 2005)
[17] Seth, Anil K.: Granger Causal Connectivity Analysis (Neurodynamics and Consciousness Laboratory (NCL), and Sackler Centre for Consciousness Science (SCCS), University of Sussex, Brighton 2009)
[18] Seth, Anil K.: A MATLAB toolbox for Granger causal connectivity analysis (Journal of Neuroscience Methods 2009)
[19] Seth, Anil K.: Causal connectivity of evolved neural network during behavior
[20] Schwarz, Gideon: Estimating the dimension of a model (The Annals of Statistics 1978; Vol. 6)
[21] Takahashi, D. Y., Baccalá, L. A.: Connectivity Inference between Neural Structures via Partial Directed Coherence (Journal of Applied Statistics 2007)
[22] Takalo, Reijo et al.: Tutorial on univariate Autoregressive Spectral Analysis (Journal of Clinical Monitoring and Computing 2005)
[23] Vanrumste, Bart: EEG dipole source analysis in a realistic head model (Universiteit Gent, Faculteit Toegepaste Wetenschappen 2002)
[24]

Reinforcement Learning using Predictive State Representation

Greg Klessens, Tom Croonenborghs
Biosciences and Technology Department, K.H.Kempen, Kleinhoefstraat 4, B-2440 Geel

Abstract: In reinforcement learning an agent makes decisions based on the reward he will receive. In standard reinforcement learning the agent knows exactly where he is and can easily make decisions based on this information. In Predictive State Representation (PSR) the agent only has a set of information which it obtained through e.g. sensors. The most common methods used in reinforcement learning today are direct approaches such as Monte Carlo or Q-Learning, which use the direct output of states to solve a problem. In PSR we first make a model of the environment to see how the environment responds to the actions of the agent and then solve the problem.
We have implemented such a learning method in Matlab, where we focus on learning and modeling the environment.

Index Terms: Predictive State Representation, Reinforcement Learning, Suffix History Algorithm

I. INTRODUCTION

Reinforcement learning is learning what to do, how to map situations to actions, so as to maximize a numerical reward signal [Sutton, Barto, 1998]. In reinforcement learning we deal with methods that allow an agent to learn from interacting with his environment. The agent can use different methods to learn the correct actions [1]. Predictive state representation is a new method of representing the environment to the agent. Unlike traditional reinforcement learning, the agent does not know exactly which state he is in. The agent only receives observations of his environment and not his exact position. This sort of problem has already been looked at using Partially Observable Markov Decision Processes, which use the (possibly entire) history to represent a belief state. In PSR we use predictions of future actions to estimate which state the agent is currently in. This paper is an extension of my master thesis, in which I describe the experiments I conducted as well as how I implemented PSR in Matlab.

II. INTRODUCTION TO REINFORCEMENT LEARNING

A. Reinforcement Learning

Reinforcement Learning is a part of machine learning where an agent has to solve a learning problem by doing actions and receiving rewards for these actions. The agent is not told which actions to take, as in most of machine learning, but instead must discover which actions yield the most reward by trying them [Sutton, Barto, 1998].

Figure 1: Reinforcement Learning Setup.

B. Discovery

The basic setup of reinforcement learning is shown in Figure 1. An agent has a set of actions he can perform in an environment. Based on these actions he will receive a reward and information about the new state he is in.
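The interaction loop of Figure 1 can be sketched in a few lines; the two-state world and its reward below are hypothetical stand-ins, not the environments used later in the paper.

```python
def run_episode(transition, reward, policy, start, steps=10):
    """One agent-environment interaction loop as in Figure 1: the agent
    picks an action, the environment returns the next state, and the
    agent accumulates the reward it receives there."""
    state, total = start, 0.0
    for _ in range(steps):
        action = policy(state)
        state = transition(state, action)
        total += reward(state)
    return total

# Hypothetical two-state world: the only action flips Light/Dark,
# and being in Dark yields reward 1
transition = lambda s, a: "Dark" if s == "Light" else "Light"
reward = lambda s: 1.0 if s == "Dark" else 0.0
policy = lambda s: "go"
print(run_episode(transition, reward, policy, "Light", steps=10))  # → 5.0
```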
With every action the agent takes he should get a better view of his environment and learn which actions in which states result in the biggest reward.

C. Environment

The environment can be presented to the agent in a variety of ways. The most obvious one is that the agent completely knows his environment and his exact position in it; this is known as the Markov property. The agent can then start the discovery process; these kinds of environments can be modeled using a Markov Decision Process [2]. However, there are situations where the environment is only partially known to the agent, so the same methods cannot be used. It has already been proven that these kinds of problems can be solved using Partially Observable Markov Decision Processes [3]. Predictive state representation uses observations received from the environment to model the environment. This is the same way a POMDP works, but where a POMDP uses the history to determine what state the agent is in, a PSR uses predictions of the future. The remainder of this paper will present how PSR works and the results of our research.

III. PREDICTIVE STATE REPRESENTATION

A. System Dynamics Matrix

Predictive State Representation uses pairs of actions a_i ∈ A and observations o_i ∈ O to represent the environment to the agent. There are two different pair groups: histories h_i ∈ H and tests t_i ∈ T. Histories represent actions and observations the agent has done in the past, while tests represent the actions and observations the agent can do from this point on. The relationship between histories and tests is gathered in a matrix called the system dynamics matrix. The rows of the matrix correspond to the histories that have been encountered and the columns correspond to the tests. The intersecting cell holds the probability p(t|h) of a test occurring after a specific history.

Figure 2: Matrix containing histories and tests.

B.
Core Tests

We will be using a linear PSR in this thesis, because this has several advantages when it comes to implementation. A linear PSR typically has columns (rows) that are linearly dependent on other columns (rows) of the System Dynamics Matrix. Therefore we don't need to keep the entire System Dynamics Matrix to know all the possible probabilities. Instead we will look for so-called core tests q_i ∈ Q and core histories h_i ∈ H. The probability of a core test occurring at a core history is noted as p(q|h). Core tests are linearly independent of the other tests. All the other tests can be calculated using the core tests. Finding these core tests is an important part of the PSR, since finding good and correct core tests defines how compactly we can represent our environment. There are several ways to find these core tests. We have used the Suffix History Algorithm for its simplicity.

C. Suffix History Algorithm

The Suffix History Algorithm uses the rank of a matrix as an indication of the linearity of the matrix. We sort our System Dynamics Matrix by the lengths of the tests and the histories. We take the System Dynamics Matrix and check its rank. After taking another action and updating the system dynamics matrix, we check the rank again. If the rank is equal to or less than the previous rank, this indicates that the core tests are in the system dynamics matrix. After this we start an iterative process to find the core tests. We take a sub-matrix from the system dynamics matrix which we increase in size, and we check the rank of this sub-matrix. If the rank is smaller than the size of the sub-matrix, we remove the last added test because it is not linearly independent. It is clear that this does not always give the best results: depending on what information is given in the system dynamics matrix, some core tests may not be found, or some tests may be chosen as core tests while they are not.
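The rank check at the heart of this procedure can be sketched as a greedy scan over the columns of the system dynamics matrix. This simplified variant is our own and ignores the suffix-length ordering described above; the matrix values are hypothetical.

```python
import numpy as np

def core_test_columns(D, tol=1e-10):
    """Greedy scan of the system dynamics matrix D (histories x tests):
    keep a column only if it raises the rank of the columns kept so far,
    i.e. it is linearly independent of them."""
    kept = []
    for j in range(D.shape[1]):
        candidate = D[:, kept + [j]]
        if np.linalg.matrix_rank(candidate, tol=tol) > len(kept):
            kept.append(j)
    return kept

# Hypothetical matrix: column 2 = column 0 + column 1 (linearly dependent)
D = np.array([[0.2, 0.5, 0.7],
              [0.4, 0.1, 0.5],
              [0.6, 0.3, 0.9]])
print(core_test_columns(D))  # → [0, 1]
```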
When tests that are linearly dependent get chosen as core tests, we can't predict our model parameters correctly. This is because we use linear equations to find our model parameters. To find these linear equations we take the inverse of the matrix, and when there is linear dependency in the matrix, the matrix becomes singular, which makes it impossible to invert.

D. Model Parameters

As mentioned before, the probabilities of all the other tests can be calculated from the core tests. This means that for every test t there exists a weight vector m_t such that p(t|h) = p(q|h)ᵀ m_t, so we need to find these parameters as well. We use the one-step extensions p(aoq|h) of the core tests to find the parameters. Because we are working with linear PSRs, we can find these parameters by solving the linear equations [3]:

[p(q|h_1)ᵀ; p(q|h_2)ᵀ; …; p(q|h_n)ᵀ] m_{aoq_i} = [p(aoq_i|h_1); p(aoq_i|h_2); …; p(aoq_i|h_n)]

[p(q|h_1)ᵀ; p(q|h_2)ᵀ; …; p(q|h_n)ᵀ] m_{ao} = [p(ao|h_1); p(ao|h_2); …; p(ao|h_n)]

Using these two sets of parameters we can calculate the probability of any test given any history with the following state update:

p(q_i|aoh) = p(q|h)ᵀ m_{aoq_i} / p(q|h)ᵀ m_{ao}

Our experiment shows how accurately the current setup approaches the actual probabilities at different time steps.

IV. EXPERIMENTS

A. Influence of the size of an environment

We conducted research to find out whether the size of an environment has an immediate and large impact on the accuracy of the predictions made by the PSR.

1) Environments

The environments on which we conducted research for this experiment are very simple, to make the results easier to interpret. In all of the environments used, the agent has only one action, because the focus of this experiment lies on modeling the environment depending solely on its size.

Figure 3: Optimal environment.

The environment in Figure 3 is a straightforward one: the agent always goes from one state to the other. This shows that the setup works and gives the optimal graph to strive for.
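The state update above can be sketched in a few lines, assuming the prediction vector and the weight vectors have already been learned. The shapes and the example values are illustrative:

```python
import numpy as np

def psr_update(p_q, m_ao, m_aoq):
    """One PSR state update after taking action a and observing o:
    p(q_i | aoh) = p(q|h)^T m_{aoq_i} / p(q|h)^T m_{ao}.
    p_q:   current prediction vector p(q|h), shape (k,)
    m_ao:  weight vector of the one-step test ao, shape (k,)
    m_aoq: matrix whose column i is m_{aoq_i}, shape (k, k)
    """
    denom = p_q @ m_ao            # p(ao | h)
    return (p_q @ m_aoq) / denom  # new prediction vector p(q | aoh)
```

The denominator is exactly the probability of the action-observation pair just experienced, which is why a singular (badly chosen) set of core tests breaks this update.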
Figure 4: More complex environment.

The environment in Figure 4 is the same as in Figure 3 but more complex in the way we move from one state to the other. For example, starting in state Light, when the agent takes his single action he has a probability of 0.7 of going to state Dark and a probability of 0.3 of staying in state Light. This gives a view of how complexity alone affects our probabilities.

Figure 5: Bigger and complex environment.

The environment in Figure 5 is bigger and will show how the size of an environment affects our results.

2) Results

Figure 6 and Figure 7 show the results we obtained from running our setup 10 times on the different environments described above. The test we predict is p(dark | Dark Light), the probability that we observe dark after we have seen Dark followed by Light. The values in the graphs are the root mean square errors between the calculated value and the real value. The root mean square error is an indication of how closely the estimated value approaches the real value. The formula is

RMSE = √(E((a − â)²))

where a is the real value and â is the calculated value. We square the errors before averaging and then take the root of the mean; the squaring ensures that negative errors contribute as well. The values in Figure 8 are the average of 10 independent runs, each run consisting of the same fixed number of actions taken. The values are displayed on a logarithmic scale so the differences between the environments are highlighted.

Figure 6: Complex environment results.

Figure 9: p(light | Light Dark) RMSE averages on logarithmic scale.

Here we see that the results in Figure 8 are similar to the results in Figure 9. Both the complex and the big and complex environment give similar results.

Figure 7: Big and complex environment results.

B. Influence of the number of actions in an environment

Next we investigate whether the number of actions in an environment has an immediate and large impact on the accuracy of the predictions made by the PSR.
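The RMSE as defined here takes only a few lines to compute; the function name and the list-based interface are choices of this sketch:

```python
import math

def rmse(real, estimated):
    """Root mean square error between real values and estimates:
    sqrt(mean((a - a_hat)^2)). Squaring makes negative errors
    count the same as positive ones."""
    errors = [(a, a_hat) for a, a_hat in zip(real, estimated)]
    return math.sqrt(sum((a - a_hat) ** 2 for a, a_hat in errors) / len(errors))
```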
1) Environments

The environments we used in our previous experiment represent the first action of the environments in this experiment.

Figure 8: p(dark | Dark Light) RMSE averages on logarithmic scale.

The graph in Figure 8 shows that for this particular test the RMSE values of the bigger environment are similar to the values of the complex environment. For comparison we used the same setup to predict p(light | Light Dark). The results are shown in Figure 9.

Figure 10: Second action for the complex environment.

Figure 12: p(action1 light | action1 Light action2 Dark) RMSE averages on logarithmic scale.

As we can see in Figure 12, the differences between the two environments are clearly larger than the differences seen in Figure 8 and Figure 9. To determine the impact of adding an action, we compare the results of the test p(dark | Dark Light) in the previous experiment with the results of the same test in the environments with two actions.

Figure 11: Second action for the bigger environment.

Figure 10 and Figure 11 show the environments for the second action, which is clearly different from the first action.

2) Results

We use the same setup as in the previous experiment by running the PSR 10 times on the different environments described above. The test we predict is p(action1 light | action1 Light action2 Dark), the probability that we perform action 1 and observe light after we have used action 1 resulting in observation Light, followed by action 2 and observation Dark. The graph shows us the RMSE value on a logarithmic scale.

Figure 13: Comparison of the complex environment with different numbers of actions.

Figure 13 shows the RMSE values of the complex environment with different numbers of actions. The environment that has only one action is clearly more precise in predicting the test.

Figure 14: Comparison of the big environment with different numbers of actions.
Just as with the smaller environment, the difference between the big environment with one action and the same environment with two actions is clearly notable in Figure 14.

V. CONCLUSION

The results of our experiments show that the RMSE values of all environments decrease as more actions are taken, which shows that the PSR works. In environments containing one action, the size of the environment does not have an apparent influence on the predictions. When we add an extra action to the environments, there is a clear difference. When predicting the same test in environments with different numbers of actions, we saw that the RMSE values of the environment with more actions still decrease, but clearly at a slower rate. The differences caused by the size of the environment are more obvious in the environments with two actions than in the environments with one action.

REFERENCES

[1] Peter Van Hout, Tom Croonenborghs, Peter Karsmakers, "Reinforcement Learning: Exploration and Exploitation in a Multi-Goal Environment".
[2] Richard S. Sutton, Andrew G. Barto, Reinforcement Learning: An Introduction. The MIT Press, Cambridge, Massachusetts, 1998.
[3] Michael R. James, "Using Predictions for Planning and Modeling in Stochastic Environments".
[4] Satinder Singh, Michael R. James, Matthew R. Rudary, "Predictive State Representations: A New Theory for Modeling Dynamical Systems".

A 1.8V 12-bit 50MS/s Pipelined ADC

B. Lievens

Abstract — This paper presents the design of a 12-bit 50 MS/s pipelined ADC in the TSMC 180 nm technology. The goal is to achieve an effective resolution of 11 bits or more by using an optimized architecture. This architecture uses multiple bits per stage and digital error correction to reduce the number of stages and the power consumption.

Index Terms — Analog-digital conversion, pipeline ADC, analog integrated circuits

I. INTRODUCTION

The use of Ultra-Wideband (UWB) radar has proven useful in the biomedical industry.
They can be used for monitoring a patient's breathing and heartbeat [1], hematoma detection [2] and 3-D mammography [3]. In [4], [5] a resolution of 11 bits or more is specified for the ADC, together with a sampling clock with a jitter of 1 ps or less. Thus, to digitize the reflected pulse, a high-resolution ADC is needed. In this application, high resolution means a moderate sample rate as well as a relatively high number of bits. To capture the required information of the UWB pulse, a sample must be taken every 10 ps, which translates to a sampling frequency of 100 GHz. A high-resolution ADC with such a sampling frequency has yet to be realized, due to the immense power consumption it would have. To achieve this, [5] proposes equivalent time sampling. Assuming that identical UWB pulses reflecting on the same material give identical reflections, a minimum of only one sample per pulse is needed. One pulse every 20 ns decreases the required ADC sample rate to 50 MS/s. For an equivalent sample rate of 100 GHz, each sample must be taken every 20 ns plus a small additional delay of 10 ps per sample. The subsampling system is based on a low-jitter VCO incorporated in a PLL, which provides the low-jitter sampling frequency, in combination with a medium-resolution ADC. The latter is described in this paper.

II. ADC ARCHITECTURE

The ADC requirements are targeted at a resolution of 12 bits with a sampling frequency of 50 MS/s. Therefore we propose the pipeline architecture [6][7][8][9][10]. This paper proposes the architecture shown in Fig. 1. It consists of two 2.5-bit front-end stages, followed by four 1.5-bit stages and ending with a 4-bit back-end flash ADC. Even without background calibration, a resolution of 12 bits could be achieved in TSMC 180 nm technology with this architecture [11]. Omitting background calibration greatly simplifies the pipeline architecture; however, careful consideration must be given to the mismatch of components in the circuit design.

Fig. 1.
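The equivalent-time sampling numbers quoted above can be checked with a quick calculation using only values from the text:

```python
# Equivalent-time sampling arithmetic from the text:
pulse_period = 20e-9   # one UWB pulse every 20 ns
step_delay = 10e-12    # sampling instant shifted 10 ps per pulse

adc_rate = 1 / pulse_period        # real ADC sample rate (50 MS/s)
equivalent_rate = 1 / step_delay   # effective sample rate on the pulse (100 GHz)
# number of pulses needed to scan one full pulse period
samples_per_waveform = pulse_period / step_delay
```

So a 50 MS/s converter, stepped by 10 ps per pulse, reconstructs the waveform as if it were sampled at 100 GS/s, at the cost of needing 2000 identical pulses per reconstructed period.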
Proposed pipeline architecture.

By using multiple bits per stage in the first two stages, the required accuracy of the residue calculated in a stage drops very quickly. Digital error correction with 0.5-bit overlap is used to reduce the required accuracy of the comparators and thus prevent errors made in the sub-ADCs. By increasing the resolution of the two front-end stages and the back-end stage, the number of stages is reduced. This can increase the linearity of the total ADC and reduce the total power consumption.

III. CIRCUIT IMPLEMENTATION

A. Multiplying digital-to-analog converter

The multiplying digital-to-analog converter (MDAC) is the heart of each stage and is responsible for generating the residue for the next stage. It is a switched-capacitor circuit that functions as digital-to-analog converter, sample-and-hold circuit and residue amplifier. Fig. 2 shows a circuit implementation of a 1.5-bit MDAC. The residue of a 1.5-bit stage is calculated by (1). The circuit works in two non-overlapping phases. During phase one, all the switches marked with f1 are closed. During phase two, the switches marked with f2a close, shortly followed by the switches marked with f2. During the first phase all the capacitors are charged to V_in; the total charge in the circuit at this time is given by (2). During the second phase the feedback capacitors C_f1 and C_f2 are flipped around and connected between the output and the inverting input of the amplifier. Due to the negative feedback created in this way, the negative input of the amplifier can be considered a virtual ground. The total charge in the circuit at this time is given by (3). According to the conservation of charge, (2) is equal to (3). It can then be shown that the output V_res is equal to (4), with C_f1 and C_f2 equal to C_f, and C_s1 and C_s2 equal to C_s. This value is held at the output of the amplifier during phase two, which provides the sample-and-hold function of the MDAC.
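The resulting 1.5-bit residue transfer can be sketched as follows. Equations (1)-(4) are not reproduced in this text, so the decision thresholds at ±V_ref/4 and the residue expression below are the conventional 1.5-bit stage behavior (with C_s = C_f giving a gain of two), assumed rather than taken from the paper:

```python
def residue_1p5bit(v_in, v_ref):
    """Sketch of a 1.5-bit pipeline stage with Cs = Cf: the sub-ADC
    decision d in {-1, 0, +1} uses the conventional thresholds at
    +/- Vref/4 (an assumption here), then the MDAC produces the
    residue V_res = 2*V_in - d*V_ref."""
    if v_in > v_ref / 4:
        d = 1
    elif v_in < -v_ref / 4:
        d = -1
    else:
        d = 0
    return d, 2 * v_in - d * v_ref
```

The half-LSB margin around the thresholds is what the 0.5-bit overlap of the digital error correction absorbs, which is why the comparators can be relatively inaccurate.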
By selecting V_A and V_B at either V_ref or 0 V and taking C_s equal to C_f, the value of the sub-ADC can be created and subtracted from V_in. This provides the DAC function of the MDAC. The gain set by the feedback factor also provides the amplification of the residue.

Fig. 2. Circuit implementation of the 1.5b MDAC

Fig. 3. Circuit implementation of the 2.5b MDAC

The circuit implementation of a 2.5-bit MDAC is shown in Fig. 3. The residue in a 2.5-bit stage is calculated by (5). It works in the same way as described for the 1.5-bit MDAC; however, due to its six sample capacitors, the feedback factor results in a gain of four. The output V_res is given by (6), with all capacitors equal to each other.

In the design of the MDAC it is important that the relative matching of the capacitors C_s and C_f is good enough. Mismatch in these capacitors may result in missing codes at the digital output of the ADC. The size of the switches is also important. The switch resistance is responsible for an RC time constant. As the capacitors need to be fully charged within half of a sampling period, the size of the switches must be large enough. The required switch size is a function of the capacitance that needs to be charged and the sampling frequency. Given that the capacitors only need to be charged to within ¼ LSB, it can be shown that

t/τ ≥ (x + 2)·ln 2    (7)

with t being half a period of the sampling frequency, τ the RC time constant and x the resolution of the total ADC in bits. The maximum switch resistance can then be calculated by (8). Large switches, however, produce large parasitic capacitances in the circuit. The switches can also cause distortion if the input signal is too high. Choosing the reference voltages at 0.3 V and 1.5 V in a design where the supply voltage equals 1.8 V prevents this from happening. Other factors which are important to the functioning and accuracy of the MDAC are the specifications of the OTA used in the circuit.
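Under these assumptions the settling requirement can be turned into a maximum switch resistance, which is the spirit of (8); the relation used below (settling to within ¼ LSB of an x-bit converter) and the 1 pF capacitor value are assumptions of this sketch, not values from the paper:

```python
import math

def max_switch_resistance(f_s, x_bits, c_sample):
    """Settling to within 1/4 LSB of an x-bit converter in half a
    sampling period t = 1/(2*f_s) requires t/tau >= (x+2)*ln(2)
    with tau = R*C, so R_max = t / ((x+2)*ln(2)*C)."""
    t = 1 / (2 * f_s)                          # half a sampling period
    tau_max = t / ((x_bits + 2) * math.log(2)) # largest allowed RC constant
    return tau_max / c_sample

# Illustrative values: 50 MS/s, 12-bit ADC, an assumed 1 pF capacitor
r_max = max_switch_resistance(50e6, 12, 1e-12)  # on the order of 1 kOhm
```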
These specifications are discussed in the next section.

B. Operational transconductance amplifier

The specifications of the OTA used in the MDAC define its settling time and settling accuracy. To achieve a 12-bit resolution, the OTA used as residue amplifier in the front-end MDAC must settle the residue to within ¼ LSB. Since the values of two bits are already calculated in the first stage, the residue must settle within ¼ LSB of a 10-bit resolution. Therefore the OTA in the first stage must have a DC gain of at least 2^14, or 84 dB, for an ADC resolution of 12 bits. The settling time depends on the GBW of the OTA and must be well within half a period (10 ns) of the sampling frequency. It is shown in [12] that the required GBW of the OTA can be calculated by (9). For a sampling frequency of 50 MHz and a feedback factor of 0.25 in the front-end MDAC, we calculate the closed-loop GBW of the OTA at 230 MHz. Considering that the feedback factor of the OTA in a 2.5-bit MDAC is 0.25, the open-loop GBW then should be > 230 MHz / 0.25 = 920 MHz. As the OTA needs a high GBW as well as a high DC gain, a folded cascode architecture is chosen, as shown in Fig. 3. In order to increase the gain of the cascode even more, the gain-boosting technique is applied to the cascode transistors. Because of the limited output swing of a folded cascode amplifier, a second stage is added to increase the output swing. Miller compensation is used to ensure a sufficient phase margin. The second stage is a simple NMOS differential pair with PMOS current-source load and achieves a high swing with a large GBW.

C. Comparator

For the comparators in the sub-ADCs and in the back-end ADC, a differential pair comparator is used, as shown in Fig. 4. In comparison to the Lewis and Gray dynamic comparator introduced in [13], it can achieve a lower offset and is thereby more accurate [14]. It also consumes no DC power. Simulations of this comparator are shown in [Fig] and an offset of 16 mV was measured.
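The two numeric requirements above follow directly from the quoted values; this check uses only numbers stated in the text:

```python
import math

# Open-loop GBW requirement: with feedback factor beta,
# GBW_open = GBW_closed / beta (values from the text).
gbw_closed = 230e6   # closed-loop GBW, Hz
beta = 0.25          # feedback factor of the 2.5-bit MDAC
gbw_open = gbw_closed / beta            # required open-loop GBW, Hz

# DC gain requirement of 2^14 expressed in decibels (~84 dB).
dc_gain_db = 20 * math.log10(2 ** 14)
```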
In a differential circuit with V_ref being 0.6 V and -0.6 V, a 16 mV offset corresponds to an accuracy within 7 bits. As the proposed architecture in this paper uses digital error correction to reduce the needed accuracy of the comparators, an accuracy of 7 bits is enough for all the stages. The comparators in the first two stages only need to be accurate to 4 bits, and the comparators in the four 1.5-bit stages only need to be accurate to 3 bits. Even the 4-bit back-end ADC only needs to be accurate to 5 bits. This comparator will therefore be used throughout the architecture.

(In (9), f_u is the unity-gain frequency, N is the settling accuracy in bits, n is the number of bits resolved in the stage, f_s is the sampling frequency and β is the feedback factor given by (10).)

Fig. 4. Circuit implementation of the differential pair comparator

Fig. 3. Circuit implementation of the OTA in the front-end MDAC

IV. CONCLUSION

The design of a 12-bit 50 MS/s pipeline ADC is presented. By optimizing the architecture, the power consumption is reduced. This is done by using 2.5-bit stages first, 1.5-bit stages in between, and ending with a 4-bit flash.

REFERENCES

[1] I. Immoreev, T. H. Tao, "UWB Radar for Patient Monitoring," IEEE Aerospace and Electronic Systems Magazine, vol. 23, no. 11.
[2] C. N. Paulson et al., "Ultra-wideband Radar Methods and Techniques of Medical Sensing and Imaging," Proceedings of the SPIE, vol. 6007.
[3] S. K. Davis et al., "Breast Tumor Characterization Based on Ultrawideband Microwave Backscatter," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1.
[4] M. Strackx et al., "Measuring Material/Tissue Permittivity by UWB Time-domain Reflectometry Techniques," Applied Sciences in Biomedical and Communication Technologies (ISABEL), 3rd International Symposium.
[5] M. Strackx et al., "Analysis of a digital UWB receiver for biomedical applications," European Radar Conference (EuRAD), 2011, submitted for publication.
[6] T. N.
Andersen et al., "A Cost-Efficient High-Speed 12-bit Pipeline ADC in 0.18um Digital CMOS," IEEE Journal of Solid-State Circuits, vol. 40, no. 7, July.
[7] S. Devarajan et al., "A 16-bit, 125 MS/s, 385 mW, 78.7 dB SNR CMOS Pipeline ADC," IEEE Journal of Solid-State Circuits, vol. 44, no. 12, December.
[8] H. Van de Vel et al., "A 1.2-V 250-mW 14-b 100-MS/s Digitally Calibrated Pipeline ADC in 90-nm CMOS," IEEE Journal of Solid-State Circuits, vol. 44, no. 4, April 2009.
[9] Y. Chiu, P. R. Gray and B. Nikolic, "A 14-b 12-MS/s CMOS Pipeline ADC With Over 100-dB SFDR," IEEE Journal of Solid-State Circuits, vol. 39, no. 12, December 2004.
[10] O. A. Adeniran and A. Demosthenous, "An Ultra-Energy-Efficient Wide-Bandwidth Video Pipeline ADC Using Optimized Architectural Partitioning," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 53, no. 12, December.
[11] J. Li and U. K. Moon, "Background Calibration Techniques for Multistage Pipelined ADCs With Digital Redundancy," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 50, no. 9, September.
[12] I. Ahmed, Pipelined ADC Design and Enhancement Techniques, Springer.
[13] T. B. Cho, P. R. Gray, "A 10-b, 20-Msample/s, 35-mW Pipeline A/D Converter," IEEE Journal of Solid-State Circuits, vol. SC-30, no. 5, March.
[14] L. Sumanen et al., "CMOS dynamic comparators for pipeline A/D converters," Circuits and Systems, 2002, IEEE International Symposium.

Accessing Enterprise Business Logic in a Mobile Warehouse Management System using WCF RIA Services

Jonas Renders, Xander Molenberghs, Chris Bel, and Dr. Joan De Boeck

Abstract — When developing a Warehouse Management System, code reuse can significantly facilitate the coding process. With the different components in the system, such as a PDA or a PC, different techniques of accessing the underlying data storage come into play. Moving towards an N-tier architecture creates opportunities for all the components to access the same business logic.
WCF RIA Services strives towards simplifying the development of N-tier solutions. This paper starts with a global overview of the N-tier architecture concept. The Entity Framework and WCF RIA Services cooperate in order to achieve the N-tier architecture. This is followed by a theoretical approach of how access to the same business logic is gained, either through a PC or a PDA. Based on a study of the shortcomings with respect to updating the underlying database, this paper concludes with some concrete workarounds to the identified problems.

Index Terms — domain services, Entity Data Model, Entity Framework, N-tier, WCF RIA Services

I. INTRODUCTION

Structure is one of the main concerns in developing software applications nowadays. Code reusability, scalability and robustness are key points when it comes to application development. IT teams consist of developers who work on the same project. As this could lead to code overwriting or duplicate code, a well-thought-out software architecture can overcome these issues. An N-tier architecture divides an application into multiple logical tiers. Examples of such tiers are a data access layer, a business logic layer and a presentation layer. With the division into tiers, developers aim at creating a reusable, scalable and robust application. Each developer in the team can focus on the part of the application that best fits his/her skill set.

J. Renders is a Master of Science student at Katholieke Hogeschool Kempen, Geel, 2440 BELGIUM. X. Molenberghs is a Master of Science student at Katholieke Hogeschool Kempen, Geel, 2440 BELGIUM. C. Bel is a manager at Codisys, Olen, 2250 BELGIUM. Dr. J. De Boeck is a university lecturer at the Department of Industrial and Biosciences, Katholieke Hogeschool Kempen, Geel, 2440 BELGIUM.

In this paper we will discuss how to implement this N-tier architecture in a Warehouse Management System application, striving for a maximum of automatic code generation and ease of maintenance.
This Warehouse Management System includes several components: mobile devices or PDAs are used in the warehouse to control the flow of goods, while desktop computers handle the administration of the system. Besides these two, there are other services like mobile printers and reporting services. In order to achieve reusability, the mobile devices in the warehouse and the desktop PCs at the office have to access the same business logic layer in the N-tier architecture. As time is money, automatic code generation could save a lot of it; in the search for technologies to use, this was also an important criterion to take into account. We will start this paper with a discussion of the technologies we used at the different tiers. The Entity Framework [1] and WCF RIA Services [2] are addressed here. Next we will identify the difficulties that arose when trying to update the underlying data storage: the Entity Framework as well as WCF RIA Services had several problems reacting to these changes. Finally, we will propose our solutions to these problems, though some of the problems could not be completely solved and remain bottlenecks.

II. IMPLEMENTING THE N-TIER ARCHITECTURE

Codisys, February 2011

The N-tier architecture needs to be suitable for a mobile Warehouse Management System. Before discussing each component, a global overview of how each component fits in the N-tier architecture can give a better understanding of their tasks and purposes.

The Conceptual layer (.csdl) represents the actual Entity Data Model schema. The Storage layer (.ssdl) represents the database objects that the developer selected to use in the model. The Mapping layer (.msl) maps the entities and relations in the Conceptual layer to the tables and columns in the Storage layer.

Fig. 2. Main parts of the Entity Data Model

These three layers cooperate to form the Entity Data Model.
Because the model handles, among other things, the connection to the database, development time is reduced.

Fig. 1. Global overview of the N-tier architecture in the WMS

A Warehouse Management System depends on a large underlying database, which has to be accessed from within the programming code. With Microsoft's Entity Framework, the need to write a lot of this code has disappeared.

A. Entity Framework 4.0

Microsoft's Entity Framework 4.0 implements Peter Chen's entity-relationship model [3]. This model maps relational database schemas to an Entity Data Model. The Entity Framework lets developers create a data model starting from an existing relational database. The model is created through a wizard and represents all the items in the database, such as tables, views and stored procedures, as objects. In the wizard a database connection is made and the tables, stored procedures and views the developer wants to use are selected. Afterwards the developers can adjust the model so that they get an application-oriented view of the data in the relational database schema. To be able to work with the created objects, the so-called entities, one has to approach the entities within an ObjectContext. The ObjectContext is a class that knows all the entities and is used to create and execute queries. Because of the application-oriented view, it's easier to query for data from the database. The developer doesn't necessarily need to know anything about the structure of the database; the model handles this. Julie Lerman [1] describes the three main parts of which the model consists; these files (the .csdl, .ssdl and .msl layers described above) are created at runtime.

B. WCF RIA Services

Every enterprise application needs business logic. The business logic includes all custom methods that are specific to a particular application. In a classical three-tier architecture the business logic resides in the middle tier. WCF RIA Services implements business logic in so-called domain services.
The domain services are generated by a wizard which uses the ObjectContext and the entities created by the Entity Framework. Though WCF RIA Services interacts seamlessly with the Entity Framework, it is not exclusively bound to the EF; other data layers can be used as well. In our Warehouse Management System, WCF RIA Services adds some key features to the application:

- CRUD methods are automatically created if selected in the wizard.
- A metadata class is generated with each domain service. Validation and authentication methods can be added here.
- Queries are executed transactionally from client to server. In case of an interrupt during query execution, a rollback occurs.
- Perfect integration with Silverlight.

Silverlight was preferred as the technology to use in the presentation layer of our system, as we will see next.

Fig. 3. WCF RIA Services in an Architecture

C. Presentation Layer

The upper part of the N-tier architecture consists of the presentation layer. Our goal at this point is that the presentation layer only has to call methods from the business logic layer; it only serves as a way to present the data, and no business calculations are performed here. As said in the previous section, the Silverlight technology was preferred for the top layer. However, the Windows Mobile 5/6 platforms that run on the mobile devices don't support the Silverlight technology. Research brought us to a solution to this lack of support, which will be explained in a later section.

D. Desktop implementation

One of the criteria was to use Silverlight, at least for the desktop application. Had it been compatible with the Windows Mobile 5/6 platforms, it would also have been preferred for the PDA software; this will be discussed later. Here we will discuss what is needed to set up a Silverlight application which accesses a SQL database and uses WCF RIA Services as service layer with the necessary business logic.
This was tested in the Visual Studio 2010 environment and written in C#. For testing purposes we used the Silverlight Business Application template built into Visual Studio. A basic presentation layer is provided with this template, which sped up the testing process. Once the Silverlight project has been created, a data model should be added to the project. It's important to add the model to the project whose name ends with .Web. By adding a new item and choosing an ADO.NET Entity Data Model, the wizard will ask whether to create an empty model or generate one from an existing database. The latter option is needed in our case. Along the way the wizard offers more options than discussed here; only the relevant ones are covered. Now the database needs to be accessed by the Silverlight application. This is accomplished by adding a new item to the same project and choosing a Domain Service Class. A wizard will pop up where the user can specify which tables should be accessible. By enabling the editing option the user can perform all of the CRUD methods. The associated classes for metadata are generated by default; later on we will discuss the purpose of the metadata. When the Domain Service is created it is crucial to build the project, otherwise none of the entities in the Domain Service will be accessible from the code. The Domain Service file is opened by default and it is now possible to add custom code or business logic. To access the entities from the code (that is, the code-behind file of a XAML page in the other project created with the Silverlight template), a reference is needed from the project with the Silverlight XAML pages to the project with the data model and the Domain Service (<name-of-the-solution>.Web). This can be done by entering a using statement with the name of the project. What is the purpose of the metadata?
A metadata class is a simple class that defines the same properties (or a subset of them) as one of the entity classes of the Domain Service, but those properties have no implementation and won't actually be called at all. They are just a place where attributes can be added. Those attributes will be added to the corresponding properties in the code-generated client entities. Depending on what the attributes are, they will also be used by RIA Services on the server side [4].

E. PDA implementation

As Silverlight is at this time still unavailable for the Windows Mobile 5/6 platforms, another presentation technology had to be chosen here. We chose a web application optimized for mobile use. Obviously, the web application must access the same service layer as the Silverlight application on the desktop. We chose to use a SOAP endpoint to expose the RIA Services. This is accomplished in the following manner. First of all, a .NET reference to the RIA Services project is needed. Next, the SOAP endpoint we are going to use needs to be defined somewhere in the application. This is done in the Web.config file. Every Web.config file contains the following line:

<system.serviceModel>

Any endpoint that needs to be added (this can be a SOAP, OData or JSON endpoint) is added after this line in the following manner (only the SOAP endpoint is shown here):

<domainServices>
  <endpoints>
    <add name="soap"
         type="Microsoft.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory,
               Microsoft.ServiceModel.DomainServices.Hosting,
               Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </endpoints>
</domainServices>

After a successful build, the application can be run for the first time. The application will open the web browser with the application's startup page. By altering the link it is possible to test whether the creation of the Domain Service was successful.
The link generally looks like this: <name-of-the-domain-service>.svc. This link is used as the service reference for the web application project, which can be added to the solution as a service reference. The Domain Service is found automatically and is ready to be accessed in the web application. The following lines of code are needed to address the Domain Service in a proper way:

using ServiceReference1;

var context = new <name-of-the-Domain-Service>SoapClient(
    "BasicHttpBinding_<name-of-the-Domain-Service>Soap");

The using statement should be entered below the namespace declaration. The Page_Load method could be a possible location to define the domain context. Everything is now set to start developing the web application for the PDA.

III. PROBLEMS

A. Updating the Data Model

In section II we explained how a data model is generated from an existing database. All this works very well until changes are made to the database after the data model has been generated. Even for smaller changes, such as adding columns or changing relationships, the data model needs to be regenerated. Microsoft provided an option in the Entity Framework which one would expect to take care of this issue. The option itself is well implemented: when a data model is opened, updating the model from the database is only two clicks away. This shows a procedure in which any changes made to the database since the last update are detected. This procedure works fine if attributes or tables are added. The problems start when an attribute or a table is deleted. The procedure detects the deleted attribute or table but deliberately doesn't delete it from the model. The idea behind this is that the model may contain more information than the database, perhaps for future updates to the database. There are many complaints about this because of the confusion it causes, and there is no option to turn this feature off. It also brings another problem.
When a new table is added with the same name as a table that was deleted before, but with different attributes, the table is added to the model again under a different name (mostly the table name with a number attached to it), because a table with the same name already exists in the model. This may lead to further confusion and errors when building the project solution. This problem is easily resolved by manually deleting the attribute or table from the data model. The downside is that any attribute or table that needs to be deleted from the database also needs to be deleted manually from the data model. This may not be a big problem if the database isn't very large or complex. The same problem exists when an attribute is changed into another type. If we try to build the project, an error occurs because of the incompatibility of the two types. Manually changing the type will resolve this, but note that the attribute types of a SQL database are named differently in the Entity Framework. Then there are the relationships between the tables. These also need to be changed manually in the data model. This is a bit more complex than deleting an attribute or changing the type of an attribute and again gives the user work that has to be done twice. For some people these problems may not be problems at all, but the majority would expect that when there is an option called "Update Model from Database", all the updates would be done automatically without intervention of the user. In a worst case scenario the graphical representation of the model cannot be shown anymore due to manual changes. Altering the data model in XML is then the only alternative besides completely deleting the model. In section IV we propose a possible solution to this problem by means of a third-party tool.

B. Updating the Domain Service

When the former problems are solved without any errors, the next problem arises.
In section II we explained how to make a Domain Service for the Silverlight project. Any change in the database also needs to be made in the Domain Service; otherwise any new attribute or table would not be recognized by the Domain Service and thus would not be addressable in the code. Unlike updating the data model, there is no option to automatically update the Domain Service. Deleting the Domain Service and creating a new one seems to be the only way to update it. This means that any custom code, such as business logic, is lost each time there is a change in the database. The best solution Microsoft provided is to copy the custom code from the old Domain Service and add it to the new one. Custom code is not written at the beginning or the end of a file; it is spread across the whole file, making it difficult to find and copy all of it. If the database is large and complex this would be a very time-consuming job. If for some reason the database needs to be changed regularly, updating the Domain Service every now and then would be too time-consuming. Therefore, the main target of our research is to look for possible and stable workarounds for this problem.

IV. BOTTLENECKS AND WORKAROUNDS

A. Alternative for Updating the Data Model

In section III we identified the problem that exists when updating the data model from a changed database. Here we provide an alternative that facilitates the update process by means of a third-party tool. After some research we found a DBML/EDMX tool from the company Huagati Systems [5]. We tested this tool and found it to be very simple, yet powerful. The Model Comparer is the most powerful feature of this tool. It is an add-in for Visual Studio that adds functionality to the Entity Framework designer. It is capable of more than we are going to discuss here; for this problem the Model Comparer is the most relevant.
The Model Comparer compares the database, the SSDL (storage) layer of the model and the CSDL (conceptual) layer of the model. If any differences between the database and the model layers are found, an overview is shown with the possibility of updating the database or one of the models with a single click. This way the differences are easily and selectively synchronized across the layers, only updating the portions selected by the user and leaving the other portions untouched. With this tool we are able to update the data model from the database with just a few clicks. The relationships between the tables are also easily updated. The Huagati DBML/EDMX tool comes at a price of 150 per user. It's an affordable amount if the database is changed regularly, because it saves the user a huge amount of time and misery.

B. Updating the Domain Service: Workarounds

As we explained in section III, updating a Domain Service is a serious problem if changes to the database are made regularly. Copying and pasting the custom code from the old to the new Domain Service takes a lot of unnecessary time. A small mistake is easily made while doing this and can have serious consequences; in the worst case, even a small mistake can result in a program that does not work anymore. Because of the time-consuming and fault-sensitive nature of this updating process, we did some research into tools or workarounds that may make the updating process easier, less time-consuming, less fault-sensitive and easier to maintain. We only found one tool that provides an automatic update for this process: DevForce [6]. DevForce is a third-party tool which installs as an add-in for Visual Studio. But DevForce does not rely on WCF RIA Services; on the contrary, it can be seen as an alternative which works differently. Because of the high price of DevForce, while WCF RIA Services is free to use, we kept using WCF RIA Services. In the next section we will describe some of the workarounds.
These workarounds are not the best possible solutions, but they can make life a little easier.

1) Use the Existing Wizard

The first workaround [7] uses the existing wizard to regenerate code. There is a concrete procedure for this with very clear steps.

1. Exclude any metadata classes from the project that need to be regenerated.
2. Clean and rebuild the solution. This step is crucial, as it ensures that regeneration will function appropriately.
3. Add a new Domain Service to the project with the same name and a .temp.cs extension.
4. In the resulting dialog, clear the '.temp' from the service name and select the entities you would like to include in the service.
5. Copy the resulting output from the <ServiceName>DomainService.temp.cs file into the <ServiceName>DomainService.cs file. Remember to re-add the partial modifier to the service class.
6. Merge the resulting output from the <ServiceName>DomainService.temp.metadata.cs file into the classes in the Metadata folder.
7. Delete the two new .temp.cs files that were generated.
8. Include all files excluded in step 1.
9. Clean and rebuild.

This workaround is far from ideal because it looks a lot like the solution provided by Microsoft; it is still a lot of work and still quite fault-sensitive. But it gets the job done without the hassle of searching for your own custom code in the Domain Service. Step 4 can be a bottleneck, because all the entities that are needed have to be selected all over again. The partial modifier used in step 6 is also used in the next workaround, where we discuss it further.

2) Use a Domain Service with Partial Classes

The second workaround provides easier maintenance by using a Domain Service with partial classes. With a partial class it is possible to split the definition of a class over two or more source files. Each source file contains a section of the class definition, and all parts are combined when the application is compiled.
In our case, the partial classes will spread a Domain Service across multiple files, with one entity in each partial class file. The implementation [8] of this workaround is fairly simple. Only a few key steps are needed.

1. Add a new Domain Service.
2. Name the file after the entity that you are generating.
3. Modify the Domain Service by adding the partial modifier to the class definition.

Fig. 4. The 'partial' modifier

4. Repeat this process for any additional entities needed in the Domain Service.
5. Remove the attribute shown in the figure below from all other partial class files. It is impossible to set an attribute twice across the partial class files.

Fig. 5. Remove the '[EnableClientAccess()]' attribute

For the moment, this is the best available workaround, but it has a disadvantage. If a lot of entities are needed, you will end up having a lot of partial classes, which can be confusing and may increase the complexity of the application. If it does get confusing, a method exists where the solution of the application is split up into multiple projects. We did not test this method and are not going to discuss it further. Because this updating process is a serious problem, Microsoft has put it on their agenda for future releases of WCF RIA Services.

V. CONCLUSION

The combination of the Entity Framework, WCF RIA Services and Silverlight offers a great way of implementing an N-tier architecture in our Warehouse Management System. Developers don't have to write every piece of code themselves, as some code is automatically generated. The different components can access the same business logic, as WCF RIA Services exposes several endpoints from which the services can be consumed. Other endpoints that haven't been discussed can be exposed in order to give even more components access to the same business layer. Thus we get a very scalable application.
At some points there may remain some bottlenecks, but because of the constant evolution of the technologies used, these issues will probably be solved in the near future.

REFERENCES

[1] J. Lerman, Programming Entity Framework, 2nd edition, O'Reilly.
[2] Microsoft Silverlight, WCF RIA Services.
[3] P.P. Chen, The Entity-Relationship Model: Toward a Unified View of Data, ACM Transactions on Database Systems.
[4] Silverlight Show, WCF RIA Services Part 5: Metadata and Shared Classes, at Metadata-and-Shared-Classes.aspx.
[5] Huagati DBML/EDMX Tools, Features for Entity Framework.
[6] IdeaBlade, IdeaBlade DevForce.
[7] Virtual Olympus Blog, WCF RIA Services Regeneration: Can we get a better story please?, at Services-Regeneration-Can-we-get-a-better-story-please.aspx.
[8] The Elephant and the Silverlight, How to set up your DomainService using partial classes for easy maintenance.

IDE1115

An optimal approach to address multiple local network clients (May 2011)

Motmans Tim, Larsson Tony
Computer Systems Engineering, Högskolan i Halmstad, Box 823, S Halmstad, Sweden

Abstract: In a Local Area Network (LAN), some applications or services running on a computer system, like a client participating in a network, need to inform other hosts, participating in the same network while executing the same applications or services, about their presence, or exchange other relevant data with each other. One could use single-host addressing, called unicasting, when there is only one host to exchange information with. However, when more hosts have to be reached at the same time, one can iterate through sending unicasts or, more commonly, one uses broadcast messaging. This latter approach, however, is not efficient at all, while the first approach can give problems in real-time applications when addressing a big group of hosts.
In this paper we will discuss the different network addressing methods and try to find out which approach should be used to provide efficient and easy addressing of multiple local network hosts.

Index Terms: Broadcasts, Local Area Network, Multicasts, Network Addressing, Unicasts

I. INTRODUCTION

Addressing multiple hosts participating in the same network has been inefficient for a very long time now. Commonly, even at this time of writing, broadcasting is used, which is a good solution to address all hosts at once. However, network addressing methods have evolved as well. A new addressing approach, called multicasting, has come up and looks very promising. It only involves hosts which are interested in getting certain information, whereas broadcasting disturbs all hosts participating in a network, even those not interested in the information being sent. More information about this will be given later on, but first we will discuss all suitable network addressing methods, give an example of how to implement the best solution in a prototype application, and conclude this paper with the results of the prototype implementation.

Motmans T., a Master in Industrial- and Biosciences student at the Katholieke Hogeschool Kempen, Geel, B-2440 Belgium.
Larsson T., a Professor teaching Embedded Systems at the Högskolan i Halmstad, Box 823, S Halmstad, Sweden.

II. BROADCASTING

A. Definition

Broadcasting refers to a method of transferring a message to all recipients simultaneously and can be performed in high- or low-level operations. An application broadcasting over the Message Passing Interface is a good example of a high-level operation, whereas broadcasting on Ethernet is a low-level operation. In practical computer networking, it also refers to transmitting a network packet that will be received by every device on the network, as can be seen in the figure shown below.
The red dot is the broadcasting device, while the green dots are the recipients in the network. Broadcasting a message is also in contrast to unicast addressing, in which a host sends datagrams or packets to another single host identified by a unique IP address.

Fig. 1. Broadcast addressing

B. Broadcasting scope

The scope of a broadcast is limited to a specific broadcast domain. However, by adjusting the Time-To-Live (TTL) value of the broadcast datagram being sent, one can configure how far the packet can travel through a network. The TTL value specifies the number of routers or hops that traffic is permitted to pass through before expiring on the network. At each router, the original specified TTL is decremented by one. When its TTL reaches a value of zero, a datagram expires and is no longer forwarded through the network to other subnets. The optimal TTL value for local networks is four, since otherwise the messages will travel too far and will be subject to eavesdropping by others. The table below shows commonly used TTL values for controlling an addressing scope.

Table I. Commonly used TTL values for controlling the scope

    Scope                           Initial TTL value
    Local segment                   1
    Site, department or division    16
    Enterprise                      64
    World                           128

However, to optimally decide which TTL value should be used, one should use traceroute or tracepath and make a decision based on the output of those commands. This output will contain a list of every hop in the network until the specified host or URL has been reached.

C. Limitations

One limitation is that not all networks support broadcast addressing. For example, neither X.25 nor Frame Relay has a broadcast capability. Moreover, any form of Internet-wide broadcast does not exist at all. Broadcasting is also largely confined to LAN technologies, most notably Ethernet and Token Ring, the latter being less familiar.
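Before moving on, it is useful to see how little code a plain IPv4 LAN broadcast takes; this gives a concrete baseline for the multicast approach of section III. The following Java sketch is our own illustration, not code from the paper; the port number and message are purely illustrative:

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class BroadcastSender {

    // Build a datagram aimed at the IPv4 limited broadcast address.
    public static DatagramPacket buildPacket(String msg, int port) throws Exception {
        byte[] data = msg.getBytes("UTF-8");
        InetAddress broadcast = InetAddress.getByName("255.255.255.255");
        return new DatagramPacket(data, data.length, broadcast, port);
    }

    public static void main(String[] args) throws Exception {
        DatagramPacket packet = buildPacket("Hello LAN", 8717);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true); // enable SO_BROADCAST on the socket
            socket.send(packet);       // every host on the segment receives this
            System.out.println("sent " + packet.getLength() + " bytes");
        } catch (IOException e) {
            // e.g. no usable network interface in a sandboxed environment
            System.out.println("send failed: " + e.getMessage());
        }
    }
}
```

Note the setBroadcast(true) call: without SO_BROADCAST enabled, the operating system will typically refuse to send to the limited broadcast address.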
The successor of IPv4 (Internet Protocol version 4), IPv6, also does not implement the broadcast method, to prevent disturbing all nodes in a network when only a few may be interested in a particular service. Instead it relies on multicast addressing, which will be discussed in section III.

D. In practice

Both Ethernet and IPv4 use a broadcast address to indicate a broadcast packet. Token Ring, however, uses a special value in the IEEE control field of its data frame. In those kinds of network structures the performance impact of broadcasting is not as large as it would be in a Wide Area Network like the Internet, but it is still there and is better avoided.

E. Security issues

Broadcasting can also be abused to perform a DoS attack (Denial-of-Service attack). The attacker sends fake ping requests with the source IP address of the victim computer. The victim's computer will then be flooded by the replies from all computers in the domain.

III. MULTICASTING

A. Definition

Multicast addressing is a conceptually similar one-to-many routing methodology, but differs from broadcasting in that it limits the pool of receivers to those that join a specific Multicast Receiver Group (MRG). It is the delivery of a message or information to a group of destination computers simultaneously in a single transmission from the source, automatically creating copies in other network elements, such as routers, but only when the topology of the network requires it. As shown in Fig. 2, only the interested green hosts, which joined the MRG, will receive the multicast information packets sent by the red multicast host. The uninterested yellow hosts, not inside the MRG, will not be disturbed at all.

Fig. 2. Multicast addressing

Multicast uses the network infrastructure very efficiently by requiring the source to send a datagram only once, even if it needs to be delivered to a large number of receivers.
The nodes in the network take care of replicating the packet to reach multiple interested receivers, but only when necessary. Note that, similar to broadcasting, the scope of a multicast datagram can be configured: it is sufficient to adjust the TTL value for controlling the scope.

B. Limitations

Sadly, no mechanism has yet been demonstrated that would allow a multicast model to scale to millions of transmissions together with millions of multicast groups, and thus it is not yet possible to make fully general multicast applications practical. Another drawback is that not all Wi-Fi access points support multicast addressing; however, this number is increasing quite fast and is facilitating the WiCast Wi-Fi multicast, which allows the binding of data not only to interested hosts or nodes, but also to geographical locations.

C. In practice

Multicast is most commonly implemented as IP multicast in IPv6, applications of streaming media and Internet television. In IP multicast, the implementation of the multicast concept occurs at the IP routing level, where routers create optimal paths for datagrams or multicast packets sent to a multicast destination address. IP multicast is a technique for one-to-many communication over an IP infrastructure in a network. It scales to a large population without requiring knowledge of whom, or how many, receivers there are in the network. The most common transport layer protocol used with multicast addressing is UDP, the User Datagram Protocol. By its nature, UDP is not reliable: messages may be lost or delivered out of order. However, PGM (Pragmatic General Multicast) has been developed to add loss detection and retransmission on top of IP multicast. Nowadays IP multicast is widely deployed in enterprises, stock exchanges and multimedia content delivery networks. A common enterprise use of IP multicasting is for IPTV applications such as distance learning and televised company meetings.
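The loss-detection idea behind PGM can be approximated at application level by numbering the datagrams a sender emits and checking for gaps on the receiving side. The following Java sketch is our own simplified illustration of that principle, not PGM itself; the class and method names are hypothetical:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class SeqCheck {

    // Prefix a payload with a 4-byte sequence number, as a sender would.
    public static byte[] wrap(int seq, byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length).putInt(seq).put(payload).array();
    }

    // Return the sequence numbers missing between the last datagram seen
    // and the one just received; a non-empty result signals packet loss.
    public static List<Integer> detectGap(int lastSeen, byte[] datagram) {
        int seq = ByteBuffer.wrap(datagram).getInt();
        List<Integer> missing = new ArrayList<>();
        for (int s = lastSeen + 1; s < seq; s++) missing.add(s);
        return missing;
    }

    public static void main(String[] args) {
        byte[] d = wrap(5, "data".getBytes());
        System.out.println(detectGap(2, d)); // datagrams 3 and 4 were lost
    }
}
```

A real protocol such as PGM additionally arranges retransmission of the missing datagrams; this sketch only shows how a receiver can tell that something was lost over an unreliable UDP transport.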
D. Security issues

Since multicasting requires joining an MRG in order to receive datagrams sent in the same multicast group, it provides good security. DoS attacks can still be executed within one MRG; however, since the number of hosts is commonly limited, the attacks will have less negative consequences. Moreover, the attacker first has to join the multicast receiver group, which requires knowledge of a special IP address, in order to execute an attack.

IV. IMPLEMENTATION OF MULTICASTS IN A PROTOTYPE APPLICATION

A. Introduction

Implementing multicast addressing in a prototype application is fairly easy. When using Java as the programming language, the java.net package already includes support for multicasts. Not only is the support excellent, the coding itself also requires only a few lines. First, one should create a MulticastSocket object and specify the port number as an argument. Secondly, the creation of an MRG is required, which can be done by instantiating an InetAddress object with a multicast IP address as an argument. Next, one should make the MulticastSocket join the MRG by using the joinGroup() method. Optionally, the TTL can be adjusted to specify the addressing scope by calling the setTimeToLive() method. After this, a DatagramPacket should be instantiated with the same port number as was given to the MulticastSocket. Note that this packet will also contain the data being sent within the receiver scope, the data being a byte array. Finally, it is sufficient to set the MulticastSocket's buffer size to the length of the data with setSendBufferSize() and send the datagram throughout the network using the send() method.

B.
Code example for sending multicast datagrams

    try {
        MulticastSocket mcSocket = new MulticastSocket(8717);
        InetAddress inaAddress = InetAddress.getByName(" ");
        mcSocket.joinGroup(inaAddress);
        mcSocket.setTimeToLive(127);
        byte[] bytMsg = ("Data here!").getBytes();
        DatagramPacket packet = new DatagramPacket(bytMsg,
            bytMsg.length, inaAddress, 8717);
        mcSocket.setSendBufferSize(bytMsg.length);
        mcSocket.send(packet);
    } catch (Exception ex) {
        System.out.println("Error");
    }

Fig. 3. Sample code for sending multicast datagrams

C. Code example for receiving multicast datagrams

    try {
        MulticastSocket mcSocket = new MulticastSocket(8717);
        InetAddress inaAddress = InetAddress.getByName(" ");
        mcSocket.joinGroup(inaAddress);
        DatagramPacket packet = new DatagramPacket(new byte[1024], 1024);
        mcSocket.receive(packet);
        String strMsg = new String(packet.getData(), 0, packet.getLength());
        // Do something with the message here
    } catch (Exception ex) {
        System.out.println("Error");
    }

Fig. 4. Sample code for receiving multicast datagrams

D. Discussion

As can be seen in sections B and C, the implementation of this addressing approach is very straightforward. Note that port number 8717 is used, because port numbers below this value are officially reserved by other applications. Using a port number between and results in avoiding congestion with applications using the same port. The special multicast IP address is chosen with more care, since an MRG has to be specified by a class D IP address. Class D IP addresses are in the range to , inclusive. However, the address is reserved and should not be used. On the receiving part of the prototype implementation, one should set a buffer size for the datagram being received. Note that in the code sample shown above a buffer size of 1 kilobyte is chosen; one should keep in mind to adjust this value to the prototype's requirements.

V.
TEST OR ANALYSIS OF SOLUTION

In the prototype application we created, the implementation of multicast addressing succeeded very well. We tested multicast datagram transmission and reception with up to 20 clients participating in the same multicast receiver group. The results were very good: all datagrams were sent and received successfully. Moreover, we added some more clients to the network, participating in another MRG, to test whether those clients would be disturbed by information sent in another receiver group. Again, the result was successful: only clients in the same MRG received the information they were interested in.

VI. CONCLUSION

To conclude, the optimal approach to address multiple local network clients is to use multicasting. Multicasting uses the network more efficiently, can be implemented easily and, through IPv6, will eventually replace the ineffective broadcast addressing. However, one should keep in mind that multicast addressing is not very well supported yet: not all access points are capable of providing multicast addressing. Since its popularity is growing so fast, though, more and more manufacturers will support it in the near future.

VII. REFERENCES

[1] ARD Digital DE. (2010, 7/23). ARD Digital, Digitales Fernsehen der ARD, Multicast-Adressen. Retrieved 5/1/2011 from ARD Digital DE.
[2] Britannica ORG. (2009, 3/27). Broadcast network, Britannica Online Encyclopedia. Retrieved 4/30/2011 from Britannica.
[3] CompTechDoc ORG. (2010, 4/5). Broadcasting and Multicasting. Retrieved 4/30/2011 from CompTechDoc.
[4] Davidson, P. (2004, 1/29). Local stations multicast multishows. Retrieved 5/1/2011 from USA Today.
[5] EzDigitalTV. (2011, 4/23). What is Multicasting?
Retrieved 4/30/2011 from EzDigitalTV.
[6] Oracle. (2010, 10/19). MulticastSocket (Java 2 Platform SE v1.4.2). Retrieved 5/3/2011 from Oracle.
[7] Parnes, P. (2006, 2/2). Java Multicasting Example. Retrieved 5/6/2011 from LTU.SE.

An estimate of the patient's weight on an anti-decubitus mattress using piezoresistive pressure sensors

Abderahman Moujahid 1, Stijn Bukenbergs 1, Roy Sevit 1, Louis Peeraer 1,2
1 IBW, KHKempen [Association KULeuven], Kleinhoefstraat 4, 2440 Geel, Belgium
2 Faculty of Kinesiology and Rehabilitation Sciences, Katholieke Universiteit Leuven, Belgium

Abstract: Europe's demographic evolution predicts that by the year 2060 the population above 80 will have tripled. Elderly people have a higher risk of severe illness, causing patients to spend most of their time in bed. Because of their medical condition, these bedridden patients are more likely to develop decubitus (pressure ulcers). Decubitus is often treated by using an alternating pressure air mattress (APAM). This work explores the possibility of measuring a patient's weight based solely on pressure variations in the APAM's air cells. Previous studies have shown that weight is an important factor in the development of decubitus, so most present APAMs are pressurized based on the weight of the patient. This configuration is currently done manually, leaving room for error. If the configuration is not done correctly, tissue pressure will not be reduced effectively. The development of an intelligent APAM that can measure the patient's weight and regulate pressure will not only reduce decubitus development, but also increase the patient's comfort.

Index Terms: decubitus; pressure ulcer; weight; interface pressure; cell pressure; alternating mattress; anti-decubitus mattress; pressure mapping; piezoresistive pressure sensor

I. INTRODUCTION

A. General

Decubitus is a global problem. For the whole of Europe, the prevalence is rated at 18.1% [21].
In Belgium, up to 12% of patients with mobility impairment and poor health deal with some sort of decubitus [4]. With the aging of the population, this prevalence percentage will only increase. Fig. 1 shows a population pyramid with predictions for Europe in 2060. The 65-plus population will be more than 30% of the total population in Europe [26].

Fig. 1. Population pyramid Europe

Decubitus is the medical term for pressure necrosis or pressure ulceration. It refers to the dying of tissue under the influence of compressive forces (pressure), shearing forces, friction [2] and micro-climate [3]. Capillaries get pressed shut by these external pressures, resulting in insufficient oxygen and nutrient supply to the tissues. Decubitus can lead to severe complications and permanent tissue damage. The main cause of decubitus development is contact pressure (interface pressure) from the patient's body weight against an underlying support surface. Therefore, APAMs focus their operation on reducing the magnitude and duration of this pressure [5].

II. FACTORS

Adjacent to pressure, there are numerous indirect causes. These factors contribute to the development of pressure ulcers and are classified into two categories: extrinsic and intrinsic factors. The main extrinsic factors are the forces of body weight, shear and friction. These factors can basically be controlled by using proper support surfaces and by following the guidelines issued by the European Pressure Ulcer Advisory Panel (EPUAP) [2]. Pressure, resulting from the patient's weight pressing on the support surface, causes the tissue to be compressed against the bony prominences, resulting in disturbed capillary perfusion. In the 1930s, Landis' research [6] suggested that the amount of pressure needed to achieve capillary occlusion is 32 mmHg. This misconceived number comes from measurements taken at the fingertips of healthy, young students. Later studies [23] have shown that
Later studies [23] have shown that 71 78 the amount of pressure for capillary occlusion varies between individuals and the anatomical location in the human body. The second extrinsic factor is shear [7]. Shear stress is parallel (tangential) force applied to the surface of an object, while the base remains stationary. Shear stress causes deformation on top layers of the skin. A typical example of this occurs when the head of the bed is raised. This will cause an increase in shearing forces on the sacral tissues. The skin is hold in place while the body is pulled down due to gravity, causing the bony prominence to push against the deep, internal tissues. Shear forces increase pressure and thereby cause a reduction in capillary blood flow. Shear forces are mostly combined with friction. Friction occurs when two surfaces move across one another [8]. The protective outer layer of the skin is then stripped off. Patients who are not able to move by themselves or patients with uncontrollable movements (ex. spastic patients) have a higher risk in tissue damage caused by friction [24]. A recent development in friction-reduction is the usage of low-friction materials (ex. polyurethane). These are usually laid over the support surface. The intrinsic factors are factors that can speed up the formation of pressure ulcers. They are related to the medical condition of the patient. There are a lots of these factors for example : mobility, incontinence, body temperature, age, gender, nutrition, A. Alternating systems Like previous mentioned, the best method to successfully overcome pressure ulcers, is to redistribute pressure in combination with providing a comfortable surface for the patient. Support surfaces that redistribute interface pressure fall into two categories: pressure-reducing [17] and pressurerelieving [9,17] surfaces. Interface pressure-reducing systems reduce interface pressure by maximizing the skin s contact area. 
Interface pressure is the result of the weight of the patient causing a deformation of the mattress that adapts to the patient's body contours. An alternating system has a pressure-reducing functionality. It has a series of cells beneath the patient that are inflated by an air pump. The manuals of such devices often contain a table with pressure settings according to the patient's weight, while others give preference to a hand check [10]. In both cases the pump configuration is performed manually. The hand check procedure is issued by the Agency for Health Care Policy and Research (AHCPR) [10]. Inappropriate configuration of the air pump can lead to bottoming out, a situation where the patient is no longer supported by the underlying surface. This situation should be avoided at all times. An alternating system also has a pressure-relieving task. Regions of high pressure are periodically shifted by deflating and inflating consecutive APAM air cells. Usually cycle times vary from 10 to 25 minutes and are manually adjustable. The optimal cycle time between inflation and deflation is still not known.

III. MATERIAL AND METHODS

The experiments conducted in this paper are applied to a low-air-loss alternating mattress that consists of 20 cells. Low-air-loss is a property of an APAM which refers to the escape of air through the micro-perforated pores of the cells for temperature and moisture regulation [3]. Each cell is divided into two compartments:

- active layer
- comfort layer

The seventeen cells in the active layer are connected to the pump through two separate tubes, so that alternating circuits can be formed. An additional tube is connected to the pump to provide a fixed pressure to the comfort layer. The pump can be set to static or dynamic mode. In static mode, the pressure in the cells is held at the configured pump pressure (PP).
When the pump is placed in dynamic mode, the active layer is divided into two separate circuits which are periodically deflated or inflated. The comfort layer is held at a constant pressure. A non-return valve is integrated in this layer to guarantee that the patient is always supported, even if a power failure occurs. This prevents a bottoming-out situation. Only the cells in the active layer are ventilated (see low-air-loss above) to prevent skin degeneration of the patient, which means that over a period of time the cells deflate automatically. The air pump (Q2-03, EEZCARE, Taiwan) produces 8 L/min air flow and works in a pressure range from 15 to 50 mmHg [10].

A. Interface pressure measurements

For measuring interface pressure, a pressure mapping system is needed. An MFLEX Bed ACC 4 Medical system is used. The system consists of a pressure mat containing 1024 (32 x 32) thin, flexible piezoresistive sensors covered with polyurethane-coated ripstop nylon [11], an interface module for the collection of data and a calibration kit. The system is delivered with its own software (Fig 2.) to allow clinicians to view, annotate, file and share the information gathered by the sensors.

Fig. 2. Pressure mapping using MFLEX software

B. Meaning of interface pressure

Theoretically, the interface pressure reflects the weight of a patient (Pascal's law):

P = F / A     (1)

Formula 1 states that the interface pressure equals the force (body weight multiplied by earth's gravitational acceleration) divided by the surface area in contact with that force. This means that the larger the surface area supporting the patient, the lower the tissue interface pressure will become. Interface pressure can be affected by the stiffness of the support surface or the shape of the patient's body [12]. There is some controversy regarding the analysis of IP measurements [25]. Maximum IP is cited in many studies as the most significant parameter.
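Formula 1 can be illustrated numerically; the weight and contact area in the sketch below are hypothetical values chosen only for illustration, not measurements from this study:

```python
# Formula 1: interface pressure P = F / A, converted to mmHg.
# The weight and contact area below are hypothetical example values.

G = 9.81                  # gravitational acceleration, m/s^2
PA_PER_MMHG = 133.322     # 1 mmHg = 133.322 Pa

def interface_pressure_mmhg(weight_kg, area_m2):
    """Mean interface pressure for a weight resting on a contact area."""
    force_n = weight_kg * G            # F = m * g
    pressure_pa = force_n / area_m2    # P = F / A
    return pressure_pa / PA_PER_MMHG

# Doubling the supporting contact area halves the mean interface
# pressure, which is why pressure-reducing surfaces maximize contact.
p_small = interface_pressure_mmhg(70.0, 0.25)
p_large = interface_pressure_mmhg(70.0, 0.50)
assert abs(p_small - 2.0 * p_large) < 1e-9
```

Note that this gives only the mean pressure; the measured IP distribution peaks at the bony prominences, which is why maximum IP is the commonly cited parameter.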
This use is based on the assumption that the maximum IP is the leading factor for the development of pressure ulcers [13]. Vulnerable sites such as heels, elbows, sacrum and head have the highest interface pressure on the body because these are bony surfaces.

C. Goal

It has already been stated that manual configuration of the air pump introduces error. The aim of this study is to minimize human error by means of automatic weight measurements. Proper weight/pressure configuration will optimize APAM functionality.

D. Setting

Pressure sensors are attached to each individual cell and the signals are processed using LABVIEW (National Instruments, Texas).

E. Hardware

An MPX5010GC7U piezoresistive pressure sensor (Freescale Semiconductor, Texas) is used in this setting. The sensor is an integrated silicon pressure sensor with on-chip signal conditioning, temperature compensation and calibration hardware. The measurable pressure ranges from 0 to 10 kPa (approximately 75 mmHg), which is converted to an output-voltage range of 0.2 to 4.7 V. The sensor demands a supply of 5 V. When the sensor is not under load, the voltage offset has a typical value of 200 mV [22]. The transfer function is given below:

Vout = Vs * (0.09 * P + 0.04) ± ERROR     (2)

The output voltage (Vout) is proportional to the source voltage (Vs) and the pressure (P, in kPa). The added error term refers to a temperature compensation factor that must be taken into account when not using the sensor in its normal temperature range of 0 to 85 °C. The analog pressure signals are amplified (two times) before digitization to broaden the input range of the digitization step. Amplification is done with an MC33174, a low-power single-supply OPAMP. For PC signal processing, the raw analog signals need to be converted to digital values. The actual conversion is done by an analog-to-digital converter (A/D) that is part of the data-acquisition system (DAQ, NI USB-634, National Instruments, Texas).
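Inverting the transfer function (Formula 2) recovers the pressure from a measured output voltage. A minimal sketch, ignoring the temperature error term (i.e. assuming operation inside the compensated 0 to 85 °C range):

```python
# Invert Formula 2, Vout = Vs * (0.09 * P + 0.04), to recover the
# pressure P (in kPa) from the sensor output voltage. The temperature
# error term is ignored here (normal 0..85 degC operating range).

V_S = 5.0  # supply voltage, V

def voltage_to_kpa(v_out, v_s=V_S):
    """P = (Vout / Vs - 0.04) / 0.09, valid for 0.2 V <= Vout <= 4.7 V."""
    return (v_out / v_s - 0.04) / 0.09

def kpa_to_mmhg(p_kpa):
    """1 kPa = 7.50062 mmHg."""
    return p_kpa * 7.50062

# Datasheet span as a sanity check: 0.2 V -> 0 kPa, 4.7 V -> 10 kPa.
assert abs(voltage_to_kpa(0.2)) < 1e-9
assert abs(voltage_to_kpa(4.7) - 10.0) < 1e-9
```

A mid-range reading of 2.45 V, for example, works out to (2.45/5 - 0.04)/0.09 = 5 kPa. Since the DAQ sees the two-times amplified signal, a measured voltage must first be divided by the amplifier gain before applying this inversion.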
It has 32 analog inputs, a maximum sampling frequency of 500 kS/s and a 16-bit resolution A/D. The maximum pressure resolution after amplification will be:

sensitivity = full-scale span / pressure range = 9 V / 10 kPa = 900 mV/kPa
(voltage) resolution = 10 V / (2^16 - 1) ≈ 0.1525 mV/bit
(pressure) resolution = (0.1525 mV/bit) / (900 mV/kPa) ≈ 0.00017 kPa/bit ≈ 0.0013 mmHg/bit     (3)

F. Noise considerations

When retrieving the signals through the 16-bit resolution A/D, the influence of noise becomes an important error factor. There are two dominant types of noise in a piezoresistive integrated pressure sensor: shot (or white) noise and 1/f noise (flicker noise) [14]. Shot noise is the result of non-uniform flow of carriers across a junction and is independent of temperature. Flicker noise results from crystal defects due to wafer processing errors. To minimize the effect of noise, a low-pass RC filter with a cutoff frequency of 650 Hz is placed behind the sensor's output pin [14]. The transducer has a mechanical response of about 500 Hz; its noise output extends from 500 Hz to 1 MHz. Another point of importance is the supply voltage. The sensor output is influenced by variation in the supply voltage, meaning that any variation in the supply voltage will appear at the output of the sensor (see Formula 2). This has a direct effect on the sensor accuracy. The developed sensor board is equipped with an LM317 adjustable voltage regulator to supply a stable source of 5 V. The adjustable voltage regulator is chosen instead of a fixed regulator (ex. LM7805) because of the improved line and load regulation, overload protection and reliability [15].

G. Software

The software is written in LABVIEW. The application is split up into two modules (Fig 3). One module has the task of continuously measuring the pressure in the different APAM cells, while the other measures the interface pressure of the patient. Interface pressure is derived from the MFLEX pressure mapping system.

Fig 3.
LABVIEW application

The second module has the task of estimating the contact area of the person on the underlying support surface.

H. LABVIEW: pressure measurement

The cell pressure module of the LABVIEW application runs through several steps. An overview is presented in Fig 4.

Fig 4. Algorithm: measuring cell pressure in LABVIEW

The first step is acquiring the pressure signals. Secondly, an optional offset adjustment can be made. The reason for this is that the offset can differ among the different sensors. For precise measuring it is recommended to apply this offset adjustment to allow all sensors to have the same calibrated output. The third step is the conversion from the actual voltage to the corresponding pressure. This is done in accordance with the above-mentioned transfer function of the pressure sensor (Formula 2). The next step is filtering the input signal. The moving average filter smoothens the data signal, removing additional noise. Fig. 5 shows an example of a raw input signal, while Fig. 6 shows the result after applying the moving average filter.

Fig 5. No filtering technique applied
Fig 6. 3-point moving average filter applied

Calculations of mean, standard deviation, median and variance are made to compare different measurements. The incoming data is saved in a TDM data model. Data includes a time stamp, comfort-layer pressure, pressure of all cells in the active layer and relevant statistics. For optimal conclusions, it is desirable that cell pressure is synchronized with the interface pressure measurement. This is not possible with the original MFLEX software, so custom extensions from the MFLEX SDK are implemented using LABVIEW. A schematic view of the interaction between the MFLEX SDK and the LABVIEW environment is shown in Fig 7.

Fig 7. Algorithm: measuring interface pressure in LABVIEW

The initialization step will ensure that the connection is established between the pressure mat / interface box and the LABVIEW software.
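The 3-point moving average filter used in the filtering step can be sketched as follows; this is a minimal illustration, not the actual LABVIEW implementation:

```python
# A sketch of the 3-point moving average filter used to smooth the
# raw cell-pressure signal (compare Fig 5. and Fig 6.).

def moving_average(samples, window=3):
    """Return the running mean over `window` samples.

    The first window-1 outputs average only the samples seen so far,
    so the output has the same length as the input.
    """
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A noisy, roughly constant pressure trace is pulled back to its mean.
noisy = [20.0, 21.0, 19.0, 20.5, 19.5, 20.0]
smooth = moving_average(noisy)
assert smooth[2] == 20.0          # (20 + 21 + 19) / 3
```

A wider window smooths more aggressively at the cost of responsiveness, which matters when the signal of interest is a slow inflation/deflation cycle.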
Subsequently, reading of pressure values is possible and available for processing.

IV. RESULTS

Before measuring with real-life persons, static objects are used to eliminate all possible interference parameters (e.g. movement of real-life test persons). In the second stage, the results of the static study are matched with the results on human beings.

A. Test 1 - static mode: same weight, different PP

Various pump pressures can be set on the air pump. Two weights are used in this setting: 33±1 kg and 73±1 kg. The mattress is first pumped to a specific PP (ex. 15 mmHg). Then 33±1 kg is placed on the mattress and one static cycle is measured. Afterwards the same procedure is done with 73±1 kg. The PP is then increased by one step and the same weights are applied again. This procedure is repeated until the maximum PP (50 mmHg) is reached. The goal of this test is to find out at which PP a noticeable difference can be seen when two objects with a different weight are applied on the mattress. The following parameters (Fig 9.) are measured:
o Minimum cell pressure (MIN CP)
o Maximum cell pressure (MAX CP)
o Leakage time (LT)

TABLE I. Static measurements of different PPs. Columns: PP (mmHg); per weight: MIN CP (mmHg), MAX CP (mmHg), LT (min:sec).

Result: The difference in maximum and minimum cell pressure at all PPs is merely 0.1 to 0.2 mmHg. At a PP of 15 mmHg, the leakage time for 33 kg is 6m 51s, while an object of 72 kg has a LT of 12m 44s. At a higher PP, for instance 45 mmHg, the LT for 33 kg is 2m 45s and for 73 kg it is 2m 34s. Fig 8 shows different static cycles at different pump pressures. It is noticeable that the blue line is much longer than the dotted line. At the higher pump pressures, the lines coincide. This indicates that at these cell pressures there is no distinction between the two weights.

Fig 8. Comparison between the two weights at different PPs

B.
Test 2 - static: same PP, different weight

The next test focuses on the magnitude of differentiation of the weight. Different weights are applied while the pump pressure is kept at a fixed value. The pump pressure is set to 15 mmHg. The objects are measured for a couple of cycles, and these parameters (Fig 9.) are tested:
o Minimum cell pressure (MIN CP)
o Maximum cell pressure (MAX CP)
o Step-up time of the cell (ST)
o Leakage time of the cell (LT)
o Total cycle time (TT)
Each measurement is repeated 4 times. The average and the standard deviation are shown in TABLE II.

Fig 9. Parameters of a static cycle

The leakage time is the time measured between MAX CP and MIN CP. The results from Fig 8. are covered in TABLE I.

TABLE II. Static measurements of static weights. Columns: weight (kg), MIN CP (mmHg), MAX CP (mmHg), ST (sec), LT (min:sec), TT (min:sec).

Result: There is no significant relationship between weight and minimum or maximum cell pressure in static mode. There is, however, a connection between weight, step-up time, leakage time and total cycle time. The step-up time increases if the weight of the object is raised, but the variation is too small for formulating correct conclusions. The leakage time is a better parameter: between 0 and 33±1 kg there is a difference of 46 s, and between 33±1 kg and 44±1 kg it is already 1 minute 29 seconds. The correlation between leakage time and weight is not linear. The total cycle time is proportional to the leakage time, so both parameters can be used for analysis.

C. Test 3 - static: comfort layer

All cells are attached to the comfort layer. The non-return valve guarantees that air does not escape and stays in the bladders. Measurements have proven that this layer reacts the same as the active layer in static mode. The only difference is a decrease in pressure [18]. This is shown in Fig 10.
The comfort layer is situated directly under the active layer. When a force is applied on the active layer, a certain amount of the energy is passed through to the comfort layer.

Fig 10. Active layer and comfort layer in static mode

D. Test 4 - static: surface area

To test the influence of the body contact area, different known surface areas (0.16 m², 0.32 m², 0.48 m²) have been placed on the mattress carrying the same weight (45±1 kg). The emphasis is laid on the leakage time of the active layer (see Test 1 & 2).

Result: Contact area plays a significant role in the duration of the leakage time. The time span is almost twice as long with a surface area of 0.16 m² (LT: 13m 17s) as with 0.48 m² (LT: 8m 17s). This test gives a clear indication that contact area is important for later investigations.

E. Calculation of human surface area

Formula 1 notes that pressure is force (body weight) divided by the surface area. Body weight is the variable that is wanted; surface area is the missing variable to solve the equation in Formula 1. Surface area can be estimated using the pressure mapping system, but this is only done for research purposes: for the intelligent APAM, the pressure mat cannot be used due to its high price. A good starting point is the body surface area (BSA), the total surface area of the human body. Various calculations have been published to arrive at the BSA without direct measurement. Mosteller proposed the following definition for BSA calculation [19]:

BSA (m²) = √( weight (kg) × height (cm) / 3600 )

DuBois & DuBois [20] reformed this equation to the following:

BSA (m²) = 0.007184 × weight (kg)^0.425 × height (cm)^0.725

Because only one side of the human body is resting on the mattress, the BSA must be divided by a factor of two. This number gives only a rough estimation of the body surface; therefore the BSA is only used to compare the magnitude with the calculated surface area from the pressure mat. The difference between BSA and the actual surface area could be substantial because the total human surface area does not rest fully on the alternating mattress. The actual surface area is calculated in LABVIEW by the following procedure:
1) Count the number of sensors with an IP value greater than a specific threshold
2) Divide this number by the total number of sensors in the mat (in this case 1024)
3) Multiply this by the total sensing area of the mat
The result is a representation of the human surface area that is in contact with the support surface.

F. Test 5 - static: human subjects

Previous tests dealt with static objects, but to accomplish the weight estimation on an APAM, testing on human subjects is vital. For this test both cell pressure and interface pressure were recorded. The experimental subjects are seven healthy volunteers between 20 and 28 years old. The following table (TABLE III) summarizes all test persons.

TABLE III. Description of the experimental group. Columns: person ID, length (cm), weight (kg), BMI.

Procedure taken with subjects:
- Step 1: Inflation of the APAM at a static pump pressure of 15 mmHg.
- Step 2: The test person lies on the mattress and stays as stable as possible.
- Step 3: Minimum and maximum CP (cell pressure) are measured for at least 3 cycles and the standard deviation (SD) is calculated. The minimum, maximum and average interface pressure (IP) is retrieved with the pressure mat. Both PP and IP are expressed in mmHg. The surface area (SA) is then calculated based on sensor counts. Leakage time (LT) was also monitored.

The results are summarized in TABLE IV.

TABLE IV. PP and IP measurements of human subjects. Columns: ID, MIN CP, MAX CP, SD CP, MIN IP, MAX IP, AVG IP, SA, LT.

Fig 12. Alternating PP = 45 mmHg

TABLE V summarizes Fig 11. and Fig 12. Columns: PP (mmHg); MIN CP and MAX CP (mmHg) for the weights 66±1 kg and 90±1 kg.

Result: The minimum cell pressure is between 13.8 mmHg and 14.1 mmHg. It is not at all weight-related.
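The BSA formulas and the sensor-count procedure of section E can be sketched together as follows. The IP threshold and the mat's total sensing area below are illustrative assumptions; the DuBois & DuBois coefficients are the standard published ones.

```python
import math

# Section E sketched in code. The threshold and mat sensing area are
# illustrative assumptions, not values from the paper.

def bsa_mosteller(weight_kg, height_cm):
    """Mosteller: BSA (m^2) = sqrt(weight * height / 3600)."""
    return math.sqrt(weight_kg * height_cm / 3600.0)

def bsa_dubois(weight_kg, height_cm):
    """DuBois & DuBois: BSA (m^2) = 0.007184 * W^0.425 * H^0.725."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

TOTAL_SENSORS = 1024          # the 32 x 32 pressure mat
MAT_AREA_M2 = 0.8             # assumed total sensing area of the mat

def contact_area(ip_values, threshold_mmhg, mat_area_m2=MAT_AREA_M2):
    """Steps 1-3: count loaded sensors, take their fraction of the
    mat, and scale by the mat's total sensing area."""
    loaded = sum(1 for ip in ip_values if ip > threshold_mmhg)
    return loaded / TOTAL_SENSORS * mat_area_m2

# Half the BSA is only a rough upper bound for the measured contact
# area, since just one side of the body rests on the mattress.
half_bsa = bsa_mosteller(75.0, 180.0) / 2.0
readings = [30.0] * 256 + [0.0] * 768    # 256 of 1024 sensors loaded
assert contact_area(readings, threshold_mmhg=5.0) == 0.2
assert contact_area(readings, threshold_mmhg=5.0) < half_bsa
```

The gap between the halved BSA and the sensor-count estimate reflects the point made above: the body never rests fully on the mattress.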
Maximum cell pressure is much more spread, with a minimum value of 29.4 mmHg and a maximum value of 45.6 mmHg. The interface pressures (MIN, MAX, AVG) differ for each individual person. Leakage time for person 1 (62.5 kg) is 11m 44s, while person 7 (103 kg) has a LT of 33m 20s. This is almost 22 min of variation for 41.5 kg.

G. Test 6 - dynamic: cell pressure

The cycle time for an alternating cycle is set to 10 min. Two weights are applied on the mattress: 66±1 kg and 90±1 kg. The goal is to find the pump pressure at which the biggest deviation is seen between the two weights. A low PP of 15 mmHg (Fig 11.) and a high PP of 45 mmHg (Fig 12.) are selected.

Fig 11. Alternating PP = 15 mmHg

TABLE V. Dynamic measurements of different PPs

Result: At a PP of 15 mmHg, the difference in MIN CP between the two weights is 2.0 mmHg. For MAX CP, there is a difference of 1.7 mmHg. At a PP of 45 mmHg, the difference in MIN CP between the two weights is 1.5 mmHg. For MAX CP, there is a difference of 1.2 mmHg. The difference is slightly higher at a PP of 15 mmHg.

H. Test 7 - dynamic: interesting parameters

Test 6 showed two interesting parameters for further investigation: minimum cell pressure in the active layer (Fig 14.) and the mathematical difference between minimum and maximum cell pressure in the comfort layer (Fig 13.).

Fig 13. Difference between max. and min. CP in the comfort layer
Fig 14. Minimum CP in the active layer

Results are summarized in TABLE VI. DP (differential pressure) is the difference between maximum and minimum cell pressure in the comfort layer. Both average and standard deviation are calculated.

TABLE VI. DP and MIN CP in dynamic mode. Columns: weight (kg), DP (AVG ± SD), MIN CP (AVG ± SD).

Result: If no weight is applied, the DP is 7.40 mmHg. For 90 kg, the DP is 2.03 mmHg. DP decreases with weight, but in the transition from 45 kg to 49 kg an increase of DP takes place. This means that accurate weight estimation using this method is not possible. Minimum CP for no weight is 2.74 mmHg and 8.75 mmHg for the highest weight. Minimum CP increases with applied weight, but is also not consistent in its results: a 58 kg object gives a MIN CP of 5.57 mmHg, while 53 kg gives a MIN CP of 6.07 mmHg. It can be noted that for both layers a distinction can be made between 0 kg and 90±1 kg; everything in between cannot be accurately identified. The pressure differences are too low to make any definite conclusions.

V. MEASUREMENT CONCLUSIONS

Test 1 showed a major difference in leakage time for both weights at the lowest pump pressure (15 mmHg). At a PP of 15 mmHg, the difference in LT between 33 kg and 73 kg is 5 min 25 sec. At a PP of 20 mmHg, it is only 34 sec. The decline of LT proceeds towards the higher PPs. Test 2 confirmed that the leakage time is dependent on the weight, but has no linear correlation. Test 3 showed that the comfort layer reacts the same as the active layer; the only difference is a reduction in CP. The next test showed that leakage time is dependent on contact area (Test 4). Test 5 covered measurements with human subjects and, just like the static measurements, showed a positive evolution in leakage time. In the dynamic mode there is no LT; the only parameters that can be discussed are MIN CP and MAX CP. Test 6 showed that when raising the pump pressure, the alternating datasets have more similarity. This is an undesirable effect, so the best way of distinguishing weight is by applying the lowest pump pressure (15 mmHg). This conclusion is also valid for the comfort layer. It must be noted that the shape of the waveform in the comfort layer is completely different with regard to the active layer [18]. This is the opposite of the static mode, where active and comfort layer are shape-wise the same. Two interesting parameters were found in this process: MIN CP in the active layer and the difference between MAX CP and MIN CP (DP) in the comfort layer [18].
Test 7 explored these two parameters: if the weights are close to each other, the possibility of an accurate weight estimation with these parameters is nil. This is due to the very small variations in CP.

VI. GENERAL CONCLUSION

In static mode, the leakage time of the cells gives a good indication for a weight estimation of the patient. To further explore the effectiveness of this parameter, more data from test subjects is required. In dynamic mode, the results of the applied tests are inconclusive. Further steps need to be taken in the dynamic mode. A possibility is to examine the actual shape of one alternation.

VII. FUTURE WORK

The leakage time in static mode is dependent on the surface area (Test 4). The time dilation caused by variation in surface area must be taken into account with the total leakage time. Multiple persons with the same weight and a different surface area are needed and must be subjected to the static tests. The result will show the effect of contact area on leakage time for real persons. Another possibility is to examine the actual shape of one alternation. Conducted tests [18] showed that for each weight, the shape of the waveform is different. A test set could be built with alternations from different kinds of weights varying between 45 and 120 kg. The data set will then include the recordings of alternations of our test subject. The goal is then to compare the data set with the test set (ex. using cross-correlation). The best match between the alternations in the test and data set gives the highest correlation coefficient. Tests were successfully completed with static objects, but not yet with real-life persons. Extra filtering is presumably required to nullify movements of the patient.

REFERENCES

[1] K. Vanderwee, M. H. F. Grypdonck, T.
Defloor, Effectiveness of an alternating pressure air mattress for the prevention of pressure ulcers, March 2005
[2] National Pressure Ulcer Advisory Panel and European Pressure Ulcer Advisory Panel, Prevention and treatment of pressure ulcers: clinical practice guideline, Washington DC: National Pressure Ulcer Advisory Panel, 2009
[3] M. Clark, M. Romanelli, S. Reger, et al., Microclimate in context, in: International review. Pressure ulcer prevention: pressure, shear, friction and microclimate in context, London: Wounds International, 2010
[4] T. Defloor, Studie van de decubitusprevalentie in de Belgische ziekenhuizen: Project PUMAP, 2008, pp 6-27
[5] C. Theaker, Pressure sore prevention in the critically ill: what you don't know, what you should know and why it's important, Intensive and Critical Care Nursing, 19, 2003
[6] E. M. Landis, Micro-injection studies of capillary blood pressure in the human skin, Heart, 15, 1930
[7] S. I. Reger, V. K. Ranganathan, H. L. Orsted, et al., Shear and friction in context, in: International review. Pressure ulcer prevention: pressure, shear, friction and microclimate in context, London: Wounds International, 2010
[8] J. Bridel, Assessing the risk of pressure ulcer, Nursing Standard, 7(25)
[9] K. Vanderwee, M. H. F. Grypdonck, T. Defloor, Alternating pressure air mattresses as prevention for pressure ulcers: a literature review, International Journal of Nursing Studies, 45, 2008
[10] User Manual Q2 support surface series, January 2004, p 9
[11] MFLEX 4.0 User Manual, 1st edition, Vista Medical, 2008
[12] U.S. Department of Health and Human Services, Agency for Health Care Policy and Research, Pressure Ulcers in Adults: Prediction & Prevention, May 1992, p 55
[13] J. T. M. Weststrate, The value of pressure ulcer risk assessment and interface pressure measurements in patients, 2005
[14] A. Reodique, W. Schultz, Noise Considerations for Integrated Pressure Sensors, Freescale Semiconductor, AN1646, Rev 2, 05/2005
[15] U. A. Bakshi, A. P.
Godse, Power Electronics II, 2009
[16] S. Sumathi, P. Surekha, LabVIEW based Advanced Instrumentation Systems, 2007, p 24
[17] National Pressure Ulcer Advisory Panel, Support Surface Standards Initiative, Terms and Definitions Related to Support Surfaces, January 2007
[18] A. Moujahid, Ontwikkeling van een geïnstrumenteerde alternerende matras; statische gewichtsbepaling door drukvariaties, Master thesis, 2011
[19] R. D. Mosteller, Simplified calculation of body-surface area, N Engl J Med, Oct 1987
[20] D. DuBois, E. F. DuBois, A formula to estimate the approximate surface area if height and weight be known, Arch Int Med, 1916
[21] K. Vanderwee, Symposium van de Federale Raad voor de Kwaliteit van de Verpleegkundige Activiteit, Brussel, 3 March 2011
[22] Freescale Semiconductor, Technical Data MPX5010G, Rev 11, 2007
[23] A. C. Burton, S. Yamada, Relation between blood pressure and flow in the human forearm, J Appl Physiol, 1951
[24] M. Moffat, K. Biggs Harris, Integumentary Essentials: Applying the Preferred Physical Therapist Practice Patterns, 2006, p 23
[25] L. Philips, Interface pressure measurement: appropriate interpretation of this simple laboratory technique used in the design and assessment of pressure ulcer management devices
[26] EUROSTAT, EUROPOP2008

Practical use of Energy Management Systems

J. Reynders, M. Spelier, B. Vande Meerssche

Abstract: This paper discusses some systems that could be useful for a global energy management system. The goal of this project is to bring the different energy management systems together in LabVIEW TM so that smart algorithms can later be used to control all the appliances in a more efficient way. It is hereby important to synchronize all the different measurements. An additional purpose of this work is to create a SWOT analysis of different kinds of measuring systems that could be used for energy management.
This will be a comparison of the strengths, weaknesses, threats and opportunities of each individual system.

Index Terms: Energy Management System, LabVIEW TM, Plugwise, Zigbee, Siemens Sentron PAC3200, Modbus, Beckhoff BK9050, Beckhoff KL6041, TwinCAT, Socomec Diris A10, RS485

I. INTRODUCTION

The way we use energy must change. We waste too much energy, consciously and unconsciously, and thereby put enormous pressure on the ecosystem of our earth. Reducing our ecological footprint [6] can be achieved by actively living more economically and changing our habits. But it can also be done more easily by integrating systems like smart grids into our environment. One way to minimize this ecological footprint is to maximize the use of renewable energy. For this, we can use intelligent energy management systems (EMS). These systems try to match the consumption of energy to the amount of renewable energy available in such a way that the consumers are not affected. On the other hand, there is also the smart grid that is being put in place worldwide. A smart grid is a new approach to how an electricity grid works, whereby not only energy flows from the grid to the customer but also communication data. This is a consequence of the move towards decentralized power generation. This evolution means that the energy supply, and, correspondingly, the price, will be much more variable. To keep the lights on, it is important that the power grid can communicate with consumers. This can be achieved with the help of an EMS. Calculating and reducing the ecological footprint and the introduction of a smart grid imply that energy consumption must be measurable and controllable. In addition, the energy consumption has to be as efficient as possible. In Belgium, for production units which produce more than 10 kilowatt, the injection into and the use of power from the grid have to be measured separately.
This results in worse payback periods because the surplus of electricity can only be sold at 30% to 40% of the purchase price. It is therefore important for this type of equipment to minimize power injection into the grid. An EMS offers the necessary solutions for this challenge: it can intervene in an intelligent way. An EMS is a great support for managing the energy consumption of a building. The system is capable of handling a wide range of actors in an independent way.

(J. Reynders, M. Spelier and B. Vande Meerssche are with the Biosciences and Technology Department, KH Kempen University College, Geel, Belgium.)

II. HOME APPLIANCES

In this section, we analyze the way of communicating of two EMS developed for home users. The most important thing to make a system suitable for home users is the simplicity of setting up the connection. It needs to be configurable by a user without technical knowledge.

A. Module using Zigbee communication

The Plugwise [3] module which is used in our test is fully plug and play. It uses Zigbee as communication protocol. Our test system consists of a main controller (MC), a network controller (NC) and eight nodes. The main controller is the connection between the Zigbee network and our energy management server. It utilizes an Ember EM250 chipset. This is a single-chip solution that integrates a 2.4 GHz, IEEE compliant transceiver with a 16-bit XAP2b microprocessor. On this chip the Zigbee PRO stack is implemented. Although the Plugwise stick looks like a USB interface, it actually utilizes a serial protocol; a virtual serial port is provided by an onboard FT232R chip. The main controller communicates directly with the network controller, and the network controller communicates with the other nodes in a mesh network topology. This means that we first have to power up the Circle+ and after that we can plug in the Circles, which connect automatically with the Circle+.

Fig. 1.
Plugwise implementation

Once all the Circles are installed, we used the standard Plugwise software, which was included in the package, to test the connectivity and to get some measurements from our plug. The measurements will later be used to check the accuracy of our LabVIEW TM application. After that we used a serial port sniffer to get information about how the main controller communicates with the network controller. To get the right power information, Plugwise uses a calibration string and a power-information string. We have to use the calibration string in order to get the right measurements. The module uses a CRC16 checksum which is calculated over the full data string. The Plugwise Circle also holds an internal buffer with information about power usage in the past. Since we only need the actual value, the latter will not be discussed in this paper.

B. ModuleX using Powerline communication

ModuleX is a prototype of an EMS that uses Powerline communication to connect with the server. The hardware contains some bugs, which is normal at this stage of development. The protocol used for the prototype is HomePlug Turbo. As can be seen in figure 2, we use a gateway that makes a connection between the Ethernet and Powerline networks. Every appliance we want to manage needs to be connected to a measurement module. This module contains the hardware for measuring the current energy consumption in milliampere and has the ability to switch the devices on and off.

Fig. 2. ModuleX implementation

The measuring module consists of two parts: a communication unit and a logical unit, which communicate with each other using a TCP connection. This means that the logical unit could be directly connected to the Ethernet if necessary. Each module has its own IP address that we can use to start a telnet connection on port 23. The command protocol that is used consists of an ASCII character. These characters are not case sensitive.
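The CRC16 over the full data string, as mentioned above for the Plugwise protocol, can be sketched as follows. The CRC-16/XMODEM parameters used here (polynomial 0x1021, initial value 0x0000) are an assumption about the exact variant the device uses:

```python
# A CRC16 check like the one the Plugwise protocol appends to its
# data strings. The CRC-16/XMODEM parameters (polynomial 0x1021,
# initial value 0x0000) are an assumption about the exact variant.

def crc16_xmodem(data: bytes) -> int:
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# The checksum would be sent as four uppercase hex characters
# appended to the payload before transmission.
payload = b"0012"
checksum = f"{crc16_xmodem(payload):04X}"
```

Verifying a received string then means recomputing the CRC over the payload and comparing it with the trailing four hex characters.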
For example, to ask the current power consumption we need to send the ASCII character 'l' or 'L'. The module will reply with an ASCII string which needs to be converted to get the measurement in milliampere.

III. INDUSTRIAL APPLIANCES

In this section the different sets of rules used by two energy meters created for industrial usage are analyzed. These sets of rules can be used in our LabVIEW TM application. First, a power meter that uses Ethernet (Siemens Sentron PAC3200) to connect with our energy management server is considered. Afterwards we analyze a meter that uses RS-485 (Socomec Diris A10) to communicate with a software PLC (Beckhoff) and a hardware PLC (Siemens).

A. Power meter with Ethernet (Siemens Sentron PAC3200)

The Siemens Sentron PAC3200 is a powerful compact power meter. The device can replace multiple analog meters and monitor over 50 parameters. It has an integrated 10Base-T Ethernet module, which we will use to get the readings from the meter. The meter connects directly with the energy management server. As shown in figure 3, it uses Modbus TCP for the communication between the meter and the server.

Fig. 3. Siemens Sentron PAC3200 implementation

Since we do not need all the values the PAC3200 measures, we make a list containing the commands for the values we are interested in. This list will be used in LabVIEW TM to acquire the needed meter readings. Another thing Wireshark tells us is that TCP port 502 is used to communicate with the meter. In LabVIEW TM we created a TCP connection and used a for-loop with the listed commands to extract the readings. Once we have a proper connection and the readings come through, we start with the synchronization. We write all the received values to an Excel spreadsheet, including a time stamp. In LabVIEW TM we control the synchronization interval, which is five seconds, using a timed loop. Later on we will use this loop to synchronize this meter with the meters described below.
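A Modbus TCP read request, like the ones sent to the PAC3200 on port 502, can be sketched as follows. The start register below is a placeholder, not an address from the PAC3200's actual Modbus register map:

```python
import struct

# Building a Modbus TCP request frame (function 0x03, read holding
# registers) like the ones sent to the meter on TCP port 502.
# The register address used in the example is a placeholder.

def modbus_read_request(transaction_id: int, unit_id: int,
                        start_register: int, count: int) -> bytes:
    """Return the full ADU: 7-byte MBAP header plus 5-byte PDU."""
    pdu = struct.pack(">BHH", 0x03, start_register, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000,
                       len(pdu) + 1, unit_id)  # length counts unit id
    return mbap + pdu

frame = modbus_read_request(transaction_id=1, unit_id=1,
                            start_register=0x0000, count=2)
assert len(frame) == 12
assert frame[7] == 0x03   # function code is the first PDU byte
```

The reply carries the same MBAP header followed by the register bytes, which can be unpacked with `struct` in the same way; sending one such frame per listed value mirrors the for-loop described above.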
To have decent feedback, we add an error notification. Whenever the meter is not communicating, we log this to a text file, using the meter's name and a time stamp, and add a notification to the meter's name in the spreadsheet.

B. Power meter with RS-485 (Socomec Diris A10)

Another meter we use is the Socomec Diris A10. The Diris A10 is a multifunctional meter for measuring electrical values in low-voltage networks in modular format [2]. The Diris A10 has a built-in RS-485 interface, which we use to connect to a Programmable Logic Controller (PLC). In total, we connect three meters to the PLC, all using the same serial line. The PLC constantly polls the meters and stores the information temporarily. This information is available for LabVIEW to extract from the PLC and is logged for later use. Using the Control Vision software and a serial port sniffer, we were able to see which commands the software uses to receive the readings from the meter. In contrast with the PAC3200, only one command is used to extract all readings, meaning we have to split the different values in LabVIEW. We compare two makes and types of PLCs: a software PLC (Beckhoff TwinCAT PLC) and a hardware PLC (Siemens Simatic ET200S).

1) Using the software PLC: Here we use a software PLC from Beckhoff. The TwinCAT Manager connects over Ethernet to a BK9050 Ethernet TCP/IP Bus Coupler, which in turn is connected to a KL6041 Serial Interface RS422/RS485 Bus Terminal. Because none of the modules has a CPU on board, the TwinCAT PLC software controls how they operate.

Fig. 4. Beckhoff PLC implementation

Fig. 5. Siemens PLC implementation

Because we are polling three meters over the same serial line, the PLC polls the meters one by one, causing a delay of ten milliseconds between the readings of the meters. The first thing the PLC program needs to do is send an initialization command to the meters.
This happens only once, in the startup phase of the program. Once initialized, we start the main part of the program and poll the meters for the data they hold. The meters each return one string with all their data. This string is made accessible to LabVIEW, which handles the processing of the received data. The information extracted from the meters is placed in Merker bytes in the PLC. In this way, LabVIEW can access the information through TwinCAT ADS/OCX.

2) Using the hardware PLC: A hardware PLC from Siemens is used to read the connected meters. We use the Ethernet connection of the PLC to send the data to the server. In this setting, we use two meters. Both meters are again connected to the same serial line, which in turn connects to the PLC. We program the PLC using the Siemens Step 7 software. The PLC polls the meters one by one. This time we use the built-in blocks for Modbus communication. The line speed is set to 9.6 kbps, causing a typical delay of six milliseconds between readings. The data received from the meters is stored in two buffers (one for each meter) in the PLC memory. To allow LabVIEW to receive the meter outputs, we add TCP communication to the PLC. We define two commands (one for each buffer) to which the PLC responds. The server can send those commands when needed and receive the meter readings through the Ethernet network.

IV. SYNCHRONIZATION

In order to process all measurements correctly, they have to be taken at the same time. For a meter which is directly connected to the Ethernet, like the Siemens Sentron PAC3200, this is not an issue: every meter has its own direct connection to the server and every meter can be queried at the same time. For a power meter behind a PLC this synchronization can be a problem. Because the meters share a serial connection, they must be queried sequentially. This means that one meter must wait for the other to send its data.
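The LabVIEW timed loops mentioned earlier (five-second interval) drive this synchronization. Their behavior can be approximated in a Python sketch as a drift-free periodic poll; the poll function, period and cycle count are illustrative.

```python
import time

def run_timed_loop(poll, period_s: float = 5.0, cycles: int = 3):
    """Call poll() once per period, scheduling against absolute
    deadlines so that small processing delays do not accumulate."""
    results = []
    next_deadline = time.monotonic()
    for _ in range(cycles):
        results.append(poll())
        next_deadline += period_s
        time.sleep(max(0.0, next_deadline - time.monotonic()))
    return results
```

Running one such loop per meter, all started from the same deadline base, keeps the direct-Ethernet meters aligned; the PLC-attached meters still carry the sequential-polling delay discussed next.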
The readout of the meters by the PLC has to happen as quickly as possible, so that a fresh measurement is available when the software requests one. Reading the buffers from the PLC happens through a 100 Mbps network, which causes practically no delay. To give an indication of the delay between the measurements, we have timed it. Sending a single command and receiving the reply takes an average of 320 milliseconds. The whole cycle for reading three meters takes about 1250 milliseconds. We notice that there is a certain delay in the readout of the measured values. For time-critical measurements this can be a problem. It can be mitigated by connecting fewer meters to one shared line, or by using meters directly connected to Ethernet, like the PAC3200. To achieve decent synchronization in LabVIEW, we use timed loops. All timed loops are connected to each other, which makes them run synchronously. The period of the timed loops is set by a control and is chosen by the user.

V. RESULTS

A. Plugwise (ZigBee)

Plugwise is less suitable for use as an EMS. However, it may serve as an advanced time switch to turn certain appliances on or off at a given time. The product is rather expensive, 41 EUR per plug, given the possibilities the plug offers. However, Plugwise is suitable for energy mapping using the internal memory. The accuracy of the module is good: the measurements are within the specified error margin of 5% and the consumption of the plug does not exceed 1 watt. Moreover, the modules are compact and easy to install. The communication method is a weak point of the Plugwise system, while this is one of the most important specifications of a good EMS. The receiving range is quite low. This limits the possible configurations in buildings, because all the plugs need to be in the receiving range of the ZigBee network. The use of a USB stick as the only connection to the server is a disadvantage.
This is especially so because the position of the stick in the network is a key factor for the speed of communication. The fact that all the plugs have to be read sequentially is also a problem. A plug that is a number of hops away from the stick can take a long time to send its data to the server. In the meantime, the connection is busy and we cannot receive any measurements from the other plugs. Nor can we turn an appliance on or off while waiting for the data from a plug. The company itself does not support the integration of the product in a central EMS; it does not publish the protocol to the general public. Plugwise does have a wide product range that is still being expanded. This is an opportunity to evolve into a system for controlling devices from a central location without the need for extra wiring. However, without changing the communication method, it is not suitable for use as an EMS. If ZigBee later integrates Powerline communication into its protocol, it could be an opportunity to develop a plug that supports not only wireless but also Powerline communication. The SWOT analysis of Plugwise can be found in table I.

TABLE I: SWOT ANALYSIS OF PLUGWISE
Strengths: compact; accurate; large product range; low energy consumption; easy installation.
Weaknesses: range; speed; price; limited possible configurations; sequential readout; stick as only connection.
Opportunities: advanced time switch; ZigBee + HomePlug; expanding the product range.
Threats: not suitable for EMS; protocol is shielded.

B. ModuleX (Powerline)

ModuleX is perfectly usable as an EMS; the module we examined is only a prototype with some hardware errors. ModuleX provides fast communication by using the existing power grid. The module measures the consumption accurately and can switch appliances quickly.
The module shows the current power consumption in milliampere instead of watts, which is what an EMS normally uses. The advantage of this module is that each node has its own IP address. We can therefore address each module separately and do not need to wait for other modules. We must make sure that enough IP addresses are available. In the future, IPv6 could be implemented so that a very large address range can be used. The plug currently uses a fixed IP address for each node; at a later stage, it could use a DHCP server. The module itself consists of a measuring PCB and a communication module. The measuring PCB can also be connected to a standard Ethernet network, and it is interesting to keep it that way. It would also be interesting if a sensor could be connected to the module. There are still some drawbacks to the prototype. These include the size of the devices (13 cm x 7 cm x 4.5 cm), which can be disruptive if there are multiple outlets close to each other. Besides that, the prototype consumes 5 watts per node. For example, with ten appliances, all the units together consume fifty watts. This is a lot for a system that seeks to reduce energy consumption. When testing the prototype we also noticed an annoying beeping noise from the node; this is not disturbing in a kitchen, for example, but it is in a bedroom. These disadvantages will probably be solved once the communication module and the measuring unit become one. The consumption can also be reduced by switching to HomePlug GreenPhy instead of HomePlug Turbo. The SWOT analysis of ModuleX can be found in table II.
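Because the module reports current in milliampere while an EMS normally works in watts, a conversion is needed on the server side. A hedged sketch, assuming a 230 V nominal grid voltage and unity power factor (so this is really apparent power, not true power):

```python
def milliamp_to_watt(current_ma: float, voltage_v: float = 230.0) -> float:
    """Convert a ModuleX current reading (mA) to power (W), assuming
    a nominal grid voltage and a power factor of 1 (apparent power)."""
    return voltage_v * (current_ma / 1000.0)
```

For a real EMS, the actual line voltage and power factor would have to be measured, which is exactly why reporting in watts directly is listed as an opportunity for the module.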
TABLE II: SWOT ANALYSIS OF MODULEX
Strengths: fast communication; use of existing electrical wiring; accurate; individual IP address; minimal configuration required; easy installation; suitable as EMS; fast switching; simple protocol.
Weaknesses: dimensions; energy consumption; bugs; annoying noise.
Opportunities: HomePlug GreenPhy; both Powerline and Ethernet; protocol to the public; price; reading sensors; integration into existing hardware; current consumption in power (W); IPv6 integration; DHCP integration.
Threats: sticking to HomePlug Turbo; unresolved bugs; shielding the protocol; price; interference from other devices; single IP address.

C. Siemens Sentron PAC3200 (Ethernet)

With the Siemens Sentron PAC3200 we use a completely different setup. This meter is directly connected to the data network and has its own IP address. When we have several meters of this type, the size of the address book of the meters grows and, in the worst case, the addressing of the existing network is inadequate. The configuration of the meter is simple and the meter itself remembers its settings. In case of a server failure, we only need the measurement software on another machine to continue reading the values. Processing the data requires less programming effort; the received data does not need to be split, since we retrieve only the valuable values. The Siemens Sentron PAC3200 is a decent EMS. It has a digital input and a digital output, so we can use an external sensor or control a device. The SWOT analysis of the Siemens Sentron PAC3200 can be found in table III.

TABLE III: SWOT ANALYSIS OF THE SIEMENS SENTRON PAC3200
Strengths: energy and power readout; extensive measurements; central programming; simple readout; simple configuration.
Weaknesses: separate IP address; separate readout of measured values.
The opportunities and threats of the Siemens Sentron PAC3200 (table III) are:
Opportunities: one digital input; one digital output; failover.
Threats: integration into the power grid; installation by a professional; IP address shortage.

The SWOT analysis of the Socomec Diris A10 with a Beckhoff PLC can be found in table IV.

TABLE IV: SWOT ANALYSIS OF THE SOCOMEC DIRIS A10 WITH A BECKHOFF PLC
Strengths: energy and power readout; extensive measurements; simple modules; central programming; simple readout; one IP address.
Weaknesses: delays; all values returned at once; large buffer needed.
Opportunities: expansion with other modules; easy to install additional meters; additional functions; failover.
Threats: integration into the power grid; installation by a professional; too many meters on one serial line.

D. Socomec Diris A10 on Beckhoff PLC

When we look at the advantages and disadvantages of this setup, we see that this system is worth using as an EMS. The Socomec Diris A10 is a versatile meter which can be read easily. We get a comprehensive measurement of the power grid and the load. Because we use a software PLC, the programming of the system is completely centralized on the EMS server. This implies that a failover is difficult to build, because the backup server must also run the PLC software and we need to reinitialize the meters when the server fails. The PLC functions as a central repository for the measurements. The benefit of this is that we only need one IP address. When, at a later stage, we add some extra meters, we do not need to provide extra IP addresses. We could add the meters on the same serial line or add a new serial interface module on the Beckhoff PLC. When we use the same serial line, we must carefully consider whether the delays do not increase too much. A Beckhoff PLC system also has the option to add modules that allow us to read sensors and switch devices.
Thanks to the central software programming, these modules could easily be integrated on the same EMS server to execute the necessary algorithms. Looking at the programming of the software, we see some extra work: the meter returns all its measurements in one string. This implies that we need an algorithm to separate all the useful values, and consequently a larger buffer in the PLC.

E. Socomec Diris A10 on Siemens PLC

Because we use the same meter with a similar system, many aspects of the previous section return. The biggest difference is that we are dealing with a hardware PLC. It has its own memory and processor, so the PLC program is executed independently of the server. This results in a more consistent failover if the server fails: we just need to activate the software for retrieving the data from the PLC on another machine. The polling of the meters simply continues and experiences no problems from the server failure. The SWOT analysis of the Socomec Diris A10 with a Siemens PLC can be found in table V.

TABLE V: SWOT ANALYSIS OF THE SOCOMEC DIRIS A10 WITH A SIEMENS PLC
Strengths: energy and power readout; extensive measurements; simple modules; simple readout; one IP address.
Weaknesses: delays; all values returned at once; large buffer needed; no central programming.
Opportunities: expansion with other modules; easy to install additional meters; additional functions; failover.
Threats: integration into the power grid; installation by a professional; too many meters on one serial line.

F. Output

As a result of the measurements from the EMS, we get an energy consumption profile as in figure 6. Such a profile will help us in the research on saving strategies.

Fig. 6. Energy consumption profile

VI. FURTHER WORK

The next step is to use these systems as efficiently as possible by applying energy management strategies. In the first phase there will be ad hoc development, where a separate strategy for each device is examined.
After that, all these strategies need to be combined in a centralized system to manage a supply-demand situation instead of the classic demand-supply situation.

VII. CONCLUSION

There is still a lot of work to do, but we have made a great step forward in the research for environmentally friendly ways to use energy. With the help of this paper, we now know which considerations need to be taken into account if we wish to implement an EMS. We know how to use a few systems with different kinds of communication methods, and their strengths, weaknesses, opportunities and threats. These systems are a model of the current market situation on EMS. For existing buildings we have noticed that ModuleX has great potential for becoming a useful EMS. There is only one condition for ModuleX to succeed: the errors in the hardware need to be corrected. For new homes, it could be interesting to consider a more centralized approach. It is possible to use an energy meter like the Socomec Diris A10 combined with PLC applications such as Beckhoff and Siemens. For specific applications where the centralized approach is not possible and no use can be made of a plug-and-play system, an all-in-one module such as the Siemens Sentron PAC3200 could be considered.

REFERENCES
[1] ZigBee Alliance, ZigBee specification: ZigBee document r13, version 1.1.
[2] B. Vande Meersche, Meer HEB door DSM Request Tetra project.
[3] Plugwise website, [online].
[4] M. Damen and P. W. Daly, Plugwise unleashed, 1st ed.
[5] SmartGrids: European Technology Platform, [online].
[6] Ian Moffat, Ecological footprints and sustainable development.
[7] ZigBee Alliance, [online].
[8] Renesas: Efforts to Implement Smart Grids, [online].
[9] G. Stromberg, T. F. Sturm, Y. Gsottberger and X.
Shi, Low-Cost Wireless Control-Networks in Smart Environments.
[10] Plugwise B.V., How to set up your Plugwise Network.
[11] Ember Corporation, EM250 datasheet.
[12] Future Technology Devices International, FT232R USB UART IC datasheet.
[13] Analog Devices, ADE7753 datasheet.
[14] Silicon Labs, C8051F340 datasheet.
[15] Analog Devices, Evaluation Board Documentation ADE7753 Energy Metering IC.
[16] HomePlug Powerline Alliance, Inc., HomePlug 1.0 Technology White Paper.
[17] Steven Mertens, Modbus Industrieel protocol over RS232.
[18] Socomec, RS485 Bus.
[19] Simon Segers, Code generator voor PLC en SCADA.
[20] Siemens, ET 200S distributed I/O IM151-3 PN interface module.
[21] Siemens, ET 200S distributed I/O IM PN/DP CPU interface module.
[22] Siemens, S7-300 Instruction List.
[23] Siemens, SIMATIC ET 200 For distributed automation solutions.
[24] Siemens, Distributed I/O System ET 200S.
[25] Siemens, Power Monitoring Device SENTRON PAC3200.
[26] Socomec, JBUS Common Table, version 1.01.

Execution time measurement of a mathematic algorithm on different implementations

F. Salaerts 1, B. Bonroy 1,2, P. Karsmakers 1,3
1 IBW, K.H. Kempen [Association KULeuven], Kleinhoefstraat 4, 2440 Geel, Belgium
2 MOBILAB, K.H. Kempen, Kleinhoefstraat 4, 2440 Geel, Belgium
3 ESAT-SCD/SISTA, KULeuven, B-3001 Heverlee, Belgium

Abstract — As the resolution of cameras and the amount of recorded data increase, real-time processing of these signals becomes a challenging job. One way to tackle this problem is to decrease the processing time of computationally intensive tasks. Singular Value Decomposition (SVD) is a computationally intensive algorithm that is used in many application domains of video and signal processing. In this paper, we show how an SVD can be implemented to decrease the processing time. First, we describe MATLAB, OpenCV, CUDA and OpenCL SVD implementations which target the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU) of the computer system.
Next, the different implementations are executed on their target processor and the processing times are measured. Measurements show that the implementations targeting a GPU, with an input matrix of size 3000x3000, achieve a performance gain of a factor 93 compared to our MATLAB reference implementation and a factor 11 compared with the OpenCV implementation. Furthermore, the comparison between CUDA C code and OpenCL C code for the same SVD algorithm shows that CUDA performs best at all tested matrix sizes. We conclude that when there is enough data to process by SVD, the GPU is an appropriate solution. Otherwise the CPU remains a good candidate, especially for small matrix sizes.

Index Terms — CUDA, OpenCL, OpenCV, Singular Value Decomposition, SVD

I. INTRODUCTION

In several data-intensive applications, it is not opportune to wait long for computation results. Many results are only useful if they are computed in real time. An often used computationally intensive algorithm is the Singular Value Decomposition (SVD). SVD is used in video processing for compression [11] and digital image processing [12], in signal processing for filtering [13] and noise removal [14], and in machine learning techniques for data clustering [15]. A drawback of SVD is that when datasets grow, more computational power is needed. There are several initiatives to reduce the required computational power and to make these tasks run faster:
- making existing algorithms more efficient;
- using a dedicated processor:
  o Field Programmable Gate Array (FPGA);
  o floating point co-processor;
- using the Graphics Processing Unit (GPU) of the graphics card.

A. Making an algorithm more efficient

A first possibility to make an SVD more efficient is by making use of cross matrices to compute the SVD. A property of cross matrices is that bigger singular values are more accurate than smaller ones. The Rayleigh coefficient [10] can overcome this problem.
To get accurate small singular values, the smaller and bigger singular values must be separated clearly. This implies an increase in memory usage, because the processing matrix A and A T A must be stored in system memory to limit the amount of elements in A T A. A second example is the one-sided block Jacobi method [21]. This method uses the caches of the CPU and the system memory very well. An improvement can be achieved by using a fast, scaled block rotation technique based on a cosine-sine decomposition [7]. Hereby the floating point operation count of one step of the method can be reduced by at least 40%. This result is obtained by calculating the right singular vectors with the standard block Jacobi algorithm.

B. Using a dedicated processor

A second initiative focuses on co-processors and FPGAs. FPGAs have the advantage that the programmer describes hardware instead of software, which increases the performance: the program connects logical gates in the processor, so the FPGA is programmed to execute only the programmed task. Weiwei Ma et al. [8] make use of this advantage by means of a simple structure of 2x2 processors to calculate the SVD of an NxN matrix. A major disadvantage of an FPGA is the limited amount of fast accessible internal memory, which limits the sizes of usable matrices. Yusaku Yamamoto et al. [6] use a ClearSpeed CSX600 floating point co-processor [23] to compute the SVD. It processes big matrices a lot faster than the Intel Math Kernel Library [24]. A disadvantage is that no performance improvement can be reached for small matrices, because the processed matrix does not have enough rows, or because the ratio between the number of rows and columns is too small.

C. Using the GPU of the graphics card

The last initiative to improve SVD processing time is based on the Graphics Processing Unit (GPU). Zhang Shu et al.
[9] implemented a one-sided block Jacobi method which computes the SVD using the Compute Unified Device Architecture (CUDA) library [17]. This implementation has some shortcomings with the shared memory of the GPU, which limits the size of the processing matrices. An implementation that overcomes this problem is proposed by Sheetal Lahabar [4], where the SVD algorithm consists of two steps: bidiagonalization and diagonalization. The bidiagonalization is executed entirely on the GPU, while the diagonalization is executed on both the GPU and the CPU. Moreover, a hybrid computing system allows each component to execute its part of the algorithm on the best performing processor. This study focuses on performance measurements based on the implementations and libraries described above. The expectation is that implementations which make use of the GPU perform better because of parallel processing. This paper is structured as follows. Chapter 2 first describes the SVD algorithm in general, followed by a description of the libraries and implementations used for the performance measurement; finally it describes the test and development environment. Chapters 3 and 4 display and discuss the results. Chapter 5 concludes this paper.

II. MATERIALS AND METHODS

A. Singular Value Decomposition

Singular Value Decomposition or SVD is a mathematical algorithm used in many application domains such as video processing, signal processing and machine learning techniques. SVD tells how a vector, which is multiplied by a matrix, has changed compared with the initial vector. How this vector is changed can be determined by calculating the SVD as shown in formula 1:

A = USV T (1)

where A is a matrix of size MxN, U is a matrix whose columns are orthonormal eigenvectors of AA T, V is a matrix whose columns are orthonormal eigenvectors of A T A, and S is a diagonal matrix filled with singular values in descending order [5].
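The one-sided Jacobi family referenced above ([9], [21]) can be illustrated with a plain, unblocked sketch: column pairs of A are rotated until they are mutually orthogonal, after which the column norms are the singular values of formula 1. This is our own minimal illustration, not one of the benchmarked implementations, and it returns only the singular values (not U and V).

```python
import math

def jacobi_svd_singular_values(a, sweeps=30, eps=1e-12):
    """One-sided Jacobi SVD: repeatedly rotate column pairs of A so
    their inner product vanishes; once all columns are orthogonal,
    the column norms are the singular values."""
    m, n = len(a), len(a[0])
    a = [row[:] for row in a]            # work on a copy
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = sum(a[i][p] * a[i][p] for i in range(m))
                aqq = sum(a[i][q] * a[i][q] for i in range(m))
                apq = sum(a[i][p] * a[i][q] for i in range(m))
                off = max(off, abs(apq))
                if abs(apq) < eps:
                    continue
                # Jacobi rotation that zeroes the (p, q) inner product
                tau = (aqq - app) / (2.0 * apq)
                t = math.copysign(1.0, tau) / (abs(tau) + math.sqrt(1 + tau * tau))
                c = 1.0 / math.sqrt(1 + t * t)
                s = c * t
                for i in range(m):
                    aip, aiq = a[i][p], a[i][q]
                    a[i][p] = c * aip - s * aiq
                    a[i][q] = s * aip + c * aiq
        if off < eps:                    # converged: columns orthogonal
            break
    sv = [math.sqrt(sum(a[i][j] ** 2 for i in range(m))) for j in range(n)]
    return sorted(sv, reverse=True)
```

For the 2x2 matrix [[2, 1], [1, 2]] this yields the singular values 3 and 1, matching the eigenvalues 9 and 1 of A T A. The blocked and GPU variants in [9] and [21] restructure exactly these column rotations for cache and shared-memory locality.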
For example, figure 1 shows the mapping of a circle to an ellipse. The V T matrix rotates the initial vectors to the coordinate axes, S scales the vectors to a smaller or bigger size, and U rotates the vectors in the opposite way [22]. So the eigenvectors describe the rotations of the ellipse. The scale is determined by the singular values in the matrix S: the bigger the singular value, the more influence it has on the size of the resulting vector [16]. Figure 1 also shows the major axis and minor axis of the ellipse. These correspond to the largest and smallest eigenvalues. The eigenvalues are the squares of the singular values and are related to the eigenvectors.

Fig. 1: mapping a circle to an ellipse by SVD [22]

A lot of computational power is needed for large datasets, because SVD creates result matrices with the following dimensions: A (MxN) = U (MxM) S (MxN) V T (NxN) [4]. In practice, many applications that calculate an SVD use a reduction of the dimensions of the matrices. Taking only the k biggest singular values, with the correspondingly reduced matrices U and V, results in an approximation of the original matrix A. This is a reduced SVD of A, or rank k of A [16][22]. The advantage of this method is that it saves memory and computational power.

B. Implementations

MATLAB

MATLAB, a product of the MathWorks company, is a high-level programming environment that is suitable for implementing and executing several mathematical and scientific tasks in an easy way, such as signal processing, statistics, plotting of mathematical functions and matrix calculations. It is also possible to write new functions in C++ and use them in the MATLAB environment. These functions are connected within MATLAB using the MATLAB EXecutable or MEX interface. This way new functions can easily be called in the Integrated Development Environment (IDE) [19]. In this paper, MATLAB is tested in two ways. First we use the built-in SVD function.
Secondly, we write a C++ function which is connected to our MATLAB model with MEX. This function allows the calculations of the SVD to be performed on the GPU, using an external library that calls CUDA.

OpenCV

OpenCV or Open Source Computer Vision is a library built specifically for real-time image processing and computer vision. The library has more than 500 CPU-optimized functions. A few examples of applications are facial recognition, motion tracking and mobile robotics [2]. It was originally developed by Intel but afterwards adopted by Willow Garage, which releases it under an open source Berkeley Software Distribution (BSD) license. It is cross-platform and available for Microsoft Windows, Apple Mac OS X and GNU/Linux [19]. Just as with MATLAB, the performance comparison uses the built-in SVD function of OpenCV.

CUDA

CUDA or Compute Unified Device Architecture is a toolkit designed by Nvidia [17]. It allows the programmer to communicate with the GPU as if it were a general purpose processor. This technology has existed since the Nvidia GeForce 8800 came on the market [4]. This way the programmer does not need a thorough knowledge of the internal parts of the GPU to process data in parallel. CUDA supports several languages/interfaces [17]:
- CUDA C;
- DirectCompute;
- OpenCL;
- CUDA Fortran.
CUDA can be called in two ways: directly through a CUDA interface, or through an external library. In this paper both ways are implemented. First, the SVD algorithm is programmed in the CUDA C language using the CUDA interface, and secondly with an external library called CULATools. CULATools is a GPU-accelerated linear algebra library created by EM Photonics [20]. This library uses CUDA to communicate with the GPU. Unlike the direct interfaces, CULATools automatically regulates all data traffic to and from the GPU, which implies that the programmer does not need to take GPU-specific optimizations into account.
CULATools is used in this paper with a MATLAB and a C++ implementation.

OpenCL

Open Computing Language or OpenCL is a toolkit designed to execute parallel calculations efficiently on various microprocessors. This requires, nevertheless, that an OpenCL compiler is available for that microprocessor. OpenCL was designed by Apple and submitted to the Khronos Group for standardization. Its strength is that once code is written, it can run on several platforms without changing the code [1][3]. So it is not limited to GPUs like CUDA, which only works on Nvidia graphics cards. Several companies, e.g. AMD, Apple, IBM, Intel, Motorola, Nokia, Nvidia and Samsung, support the development of OpenCL. To be able to compare the different implementations in this paper, the code is written in the standard OpenCL C language. Because no SVD function is available in this language at the moment, the author ported existing code from Zhang Shu et al. [9].

C. Development and test environment

The test platform hardware consists of an AMD Athlon X2 processor, 2 GB of system memory, a Western Digital Raptor 10K RPM 150 GB hard drive and a GeForce GT240 graphics card with 1 GB of video memory. The software consists of a Windows XP Professional operating system with the Nvidia driver, OpenCV 2.1, CUDA toolkit 3.1, CULATools 2.1 and OpenCL 1.0. The development environment is MathWorks MATLAB R2010a and Microsoft Visual C++ Professional. The test data set consists of matrices starting at a size of 100x100, incrementing in steps of 100x100 and ending at 3000x3000. The matrices are filled with random floating-point values from 0.0 to 1.0. The start time is measured before the execution of the SVD algorithm and the end time after it; no pre- or post-processing is taken into account. Because system resources run in the background, each implementation is tested 10 times.
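This measurement loop, starting the clock just before the SVD call and stopping it right after, repeated per implementation and matrix size, can be sketched as follows (the SVD call itself is a placeholder):

```python
import time

def time_algorithm(run_svd, repetitions=10):
    """Time only the algorithm call, excluding pre- and
    post-processing, and average over several runs to smooth out
    background system load."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()   # start time just before the call
        run_svd()
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)
```

Using a high-resolution monotonic timer around only the algorithm keeps I/O and result display out of the measurement, which matters because those differ between the compared implementations.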
Afterwards, the average of the results represents the execution time. The results are also influenced by the optimization flags of the compiler and the way the results are returned on the screen. Some optimization flags are used to limit the binary size of the code, while others are used to speed up the code. The on-screen results can also differ: some implementations show the singular values by means of a vector variable, while others show them by means of a matrix. Because of these differences in showing results, the memory usage and the computational power differ slightly. The OpenCL test is compared with the original CUDA code from the implementation proposed by Zhang Shu et al. [9]. Since the matrices for those implementations are different from the rest, they are tested separately.

III. RESULTS

Figure 2 shows the graph of the four different implementations. The Y-axis is presented on a logarithmic scale: log(1+x). Figure 3 shows the performance differences between CUDA C code and OpenCL C code. As shown in both figures, it is clear that the GPU implementations perform better than the CPU implementations. Figure 2 shows that MATLAB performs worst at huge matrix sizes. Up to a matrix size of 400x400, the MATLAB implementation is faster than the GPU implementations. When more data needs to be processed, e.g. with matrices of 3000x3000, MATLAB needs 2600 seconds to execute the SVD algorithm, where the GPU implementations can do the job in less than 28 seconds. That is a factor of 93 faster. OpenCV performs better in this case: with a matrix of 3000x3000, OpenCV returns a result in 320 seconds. In comparison with MATLAB, this is a factor of 8 faster. Compared to the GPU implementations, the OpenCV implementation is a factor of 11 slower. MATLAB with CUDA and C++ with CUDA calculate the SVD algorithm using the GPU. In figure 2, there is almost no performance difference between MATLAB and C++; both return a common result of 28 seconds.
Figure 3 shows that, e.g. for matrices of size 1760x1760, CUDA C has an execution time of 6.2 seconds while OpenCL returns a result in 10.4 seconds. OpenCL is a factor 1.67 slower than CUDA with the same algorithm and dataset. IV. DISCUSSION Figure 2 shows that MATLAB performs worst at large matrix sizes. In the beginning, the MATLAB implementation is faster than the GPU implementations. This is because data transfers between system memory and the video memory of the graphics card flow through the PCI-express bus. This bus has a maximum bandwidth of 4GB/s, whereas the CPU communicates with system memory at a bandwidth of 12.8 GB/s, which introduces a bottleneck. When more data needs to be processed, e.g. with matrices of 3000x3000, this bottleneck gets compensated by the faster parallel processing time of the GPU. OpenCV performs better in this case. Since MATLAB and OpenCV are both using the CPU, OpenCV is apparently better optimized in its built-in SVD function than MATLAB. MATLAB with CUDA and C++ with CUDA calculate the SVD algorithm using the GPU. Both get a common result of 28 seconds. It is clear that the GPU can use the advantage of parallel data processing. The lines on the graph in figure 2 are less steep than those of the CPU implementations, so the differences become more visible with bigger matrices [4]. Figure 3 shows that CUDA performs better than OpenCL with the same algorithm. This was expected because CUDA, created by Nvidia, is perfectly optimized for the graphics card used for these tests. An advantage of OpenCL code is that it should run on various platforms; however, this also implies that it is not optimized for a specific platform/microprocessor. As future work, a fully functional SVD algorithm can be implemented in OpenCL that works on any size of matrix as dataset. It is interesting to test such a mathematical algorithm on different platforms, for example a platform with a PowerPC processor like the older Apple computer systems.
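The bandwidth figures above allow a rough back-of-envelope estimate of the transfer overhead. The sketch below is an assumption-laden illustration: it takes single-precision data and ignores per-copy latency and driver overhead, which real transfers also pay.

```python
def pcie_transfer_seconds(n, bytes_per_elem=4, bandwidth_gb_s=4.0):
    """Host-to-device copy time for an n x n matrix over PCI-express.

    Uses the 4 GB/s figure from the text; fixed per-transfer latency
    is ignored, so this is a lower bound on the real copy time.
    """
    return (n * n * bytes_per_elem) / (bandwidth_gb_s * 1e9)

# A 3000x3000 single-precision matrix is 36 MB, so the copy itself costs
# only about 9 ms. For large matrices the O(n^3) SVD cost dominates; for
# small matrices the fixed transfer overhead can outweigh the GPU speedup.
for n in (100, 400, 3000):
    print(f"{n}x{n}: {pcie_transfer_seconds(n) * 1e3:.3f} ms")
```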
Another example is the PlayStation 3 from Sony with its Cell processor. Fig. 2: comparison between MATLAB, OpenCV, and CUDA implementations Fig. 3: comparison of processing time between CUDA C and OpenCL C SVD implementations V. CONCLUSION SVD or Singular Value Decomposition requires a lot of computational power, especially for huge data sets. In real-time applications, it is inappropriate to wait long for results. In this study, we compare an often used mathematical algorithm implemented using different, existing libraries on CPU and GPU. The conclusion of this study is that when an SVD algorithm is processed on a GPU, the input dataset must be large enough. This is because the bottleneck formed by the PCI-express bus must be compensated by large data input. When there is not enough data to process on the GPU, the CPU remains a good solution. In comparison with [10], where the smaller and bigger singular values must be separated to get accurate singular values, the tested algorithms in this paper still work accurately on the calculated singular values. Other algorithms such as [8] work well only on small matrices because of the limited internal memory. As in Yusaku Yamamoto et al. [6], the CPU implementations perform well with smaller matrices. If the matrices are small enough, the performance is even better than that of the GPU implementations. The GPU algorithm of [9] has a shared memory issue on its graphics card. The tested GPU implementations that use CULATools do not have any issues with a memory block on the graphics card. This is because the CULATools library is well tested for correctness by various people and companies. Finally, this paper offers test results of many used implementations/libraries, unlike [4], which uses only MATLAB and the Intel Math Kernel Library. This paper also gives an impression of the difference in performance between the CUDA C and OpenCL C languages. [18] OpenCV. In Accessed on September [19] The MathWorks TM.
In Accessed on September [20] CULA programmer's guide 2.1, EM Photonics 2010; Accessed on December [21] Cuenca, J., Gimenez G. Implementation of parallel one-sided block Jacobi methods for the symmetric eigenvalue problem. Parallel computing: fundamentals and applications (D Hollander, Joubert, Peters, Sips, eds.). Proc. Int. Conf. ParCo 99, August 17-20, 1999, Delft, The Netherlands, Imperial College Press 2000, pp [22] Muller, Neil; Magaia, Lourenco; Herbst, B.M. Singular value decomposition, eigenfaces, and 3D reconstructions. Society for Industrial and Applied Mathematics 2004; 46(3): [23] ClearSpeed. In Accessed on December [24] Intel Math Kernel Library. In Accessed on December REFERENCES [1] John E. Stone, David Gohara, Guochun Shi. OpenCL: a parallel programming standard for heterogeneous computing systems. Computing in Science & Engineering 2010; 12(3): [2] Bradski G, Kaehler A. Learning OpenCV: Computer Vision with the OpenCV library, First edition. Sebastopol: O'Reilly, 2008, pp [3] Ryoji Tsuchiyama, Takashi Nakamura, Takuro Iizuka, Akihiro Asahara, Satoshi Miki. The OpenCL programming book: parallel programming for multi-core CPU and GPU, Printed edition. Fixstars Corporation, 2010. [4] Sheetal Lahabar, P J Narayanan. Singular value decomposition on GPU using CUDA. International Symposium on Parallel & Distributed Processing - IPDPS 2009 (2009); [5] Virginia C. Klema, Alan J. Laub. The singular value decomposition: its computation and some applications. IEEE Transactions On Automatic Control 1980; 25(2): [6] Yusaku Yamamoto, Takeshi Fukaya, Takashi Uneyama, Masami Takata, Kinji Kimura, Masashi Iwasaki, Yoshimasa Nakamura. Accelerating the singular value decomposition of rectangular matrices with the CSX600 and the integrable SVD. Lecture Notes in Computer Science 2007; 4671(2007): [7] V. Hari. Accelerating the SVD block-Jacobi method. Computing 2005; 75(1): [8] Weiwei Ma, M. E. Kaye, D. M. Luke, R. Doraiswami.
An FPGA-based singular value decomposition processor. Conference on Electrical and Computer Engineering 2006: [9] Zhang Shu, Dou Heng. Matrix singular value decomposition based on computing unified device architecture. [10] Zhongxiao Jia. Using cross-product matrices to compute the SVD. Numerical Algorithms 2007; 42(1): [11] Prasantha H.S, Shashidhara H.L, Balasubramanya K.N. Image compression using SVD. Conference on Computational Intelligence and Multimedia Applications 2007; 3: [12] Andrews H, Patterson C. Singular value decompositions and digital image processing. IEEE Transactions on Acoustics, Speech and Signal Processing 1976; 24(1): [13] Wei-Ping Zhu, Ahmad, M.O, Swamy, M.N.S. Realization of 2-D linear-phase FIR filters by using the singular-value decomposition. IEEE Transactions on Signal Processing 1999; 47(5): [14] Maj J.-B, Royackers L, Moonen M, Wouters J. SVD-based optimal filtering for noise reduction in dual microphone hearing aids: a real time implementation and perceptual evaluation. IEEE Transactions on Biomedical Engineering 2005; 52(9): [15] Tsau Young Lin, Tam Ngo. Clustering High Dimensional Data Using SVM. Lecture Notes in Computer Science 2007; 2007(4482): [16] SVD and LSI Tutorial: Understanding SVD and LSI. understanding.html. Accessed on September [17] NVIDIA CUDA Programming Guide, Nvidia 2007; _Programming_Guide_1.1.pdf. Normalization and analysis of dynamic plantar pressure data B. Schotanus, T. Croonenborghs and E. De Raeve K.H. Kempen (Associatie KULeuven), Kleinhoefstraat 4, B-2440 Geel, Belgium Abstract The analysis of plantar pressure data is an important part of the diagnostic tools an orthopedist has at his disposal. However, for some of the extracted values there is no acceptable way of comparing between measurements. One of the characteristics we would like to be able to compare is the Centre of Pressure line. Another is the pressure distribution during each moment of the foot roll off.
By aligning and synchronizing the data from our measurements we will be able to directly compare both of these characteristics. I. INTRODUCTION Walking seems like a very trivial movement to most of us. But when we take a closer look at the foot, we will see that it is one of the most complex biomechanical structures of the human body. Our feet contain a quarter of the bones in our body. Because of this complexity there are a large number of potential problems that can occur. And as our feet are supporting our full bodyweight, a small problem can lead to major consequences. To diagnose these problems, a number of techniques were developed to perform measurements on the foot while walking. One of these techniques is called plantar pressure measurement (or pedobarographic measurement). Because of the dynamic aspect of the data it is often analyzed as a time series, using a computer to visualize results. Plantar pressure imaging is a branch of medical imaging that is still in development. At the moment there is no universal way of comparing two plantar pressure images with each other, nor is there a universal technology for obtaining useful plantar pressure images. However, these images have been used for a long time to correct minor and major abnormalities in plantar pressure distribution, like flat feet or problems caused by foot injuries. These corrections have always been done through the insights of professional orthopedists. If we could provide part of the analysis automatically or semi-automatically for these images, we could speed up the process and provide a more objective approach. The information we are looking for is the characteristics of a person's gait and pressure distribution during walking. Using this information it is possible to create shoes that will compensate for deviations from what is considered a normal pressure distribution.
To make a more comprehensive analysis it is extremely useful to be able to compare multiple images with each other to find differences and similarities. A first step to achieve this goal is to be able to compare two images. In a later step, multiple images could be compared with the same template image to achieve normalization for a bigger group of images. In this paper we will try to transform a foot pressure intensity image to optimally overlap another. By aligning two pressure images so that the width, height and rotation are equal, it becomes possible to compare various statistics about the images. For example, the COP-line or Centre of Pressure line can be directly compared with another person's only if both images are correctly aligned. By using only affine transformations we preserve relative position information. To find these optimal transformations we will be using metrics described and tested in Todd Pataky's research [1],[2]. Then we will see if we can use this transformation to examine the differences of the dynamic data, adding the time dimension. While analyzing plantar pressure data the maximum intensity image is very often used, ignoring the dynamic data. In this paper we will not only try to align static images but use the found transformations to create overlays of synchronized dynamic pedobarographic data. All calculations were done on a standard laptop, more specifically a Sony Vaio VGN-NS21Z with a 2.4 GHz dual core processor and 4GB of RAM. II. DATA SET A. Obtaining the data set Our data set was obtained from the exports of the commercial program Footscan, which provides some precalculated statistics about the plantar pressure image. However, these statistics are not very well documented, so we decided to extract only the raw data from this program. The actual data was obtained using a one meter long RS Scan sensor array at a resolution of 64 x 128, each sensor measuring 5.08 x 7.62mm².
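A recording like this is naturally represented as a (time, height, width) array. The hedged sketch below shows how the maximum intensity image and the COP line used later in this paper could be derived from such an array; NumPy stands in for the MATLAB code actually used, and the function name is ours.

```python
import numpy as np

def max_intensity_and_cop(frames):
    """Reduce a pressure recording to its two key summaries.

    `frames` is a (T, H, W) array: T samples (40-160 at 200 Hz in our
    data) of an H x W pressure grid. Returns the maximum intensity image
    and the COP line, i.e. the pressure-weighted centroid per frame.
    Assumes every frame contains some contact pressure (nonzero sum).
    """
    frames = np.asarray(frames, dtype=float)
    max_img = frames.max(axis=0)                     # peak pressure per sensor
    t, h, w = frames.shape
    ys, xs = np.mgrid[0:h, 0:w]                      # row/column coordinate grids
    totals = frames.sum(axis=(1, 2))
    cop_y = (frames * ys).sum(axis=(1, 2)) / totals  # weighted row coordinate
    cop_x = (frames * xs).sum(axis=(1, 2)) / totals  # weighted column coordinate
    return max_img, np.stack([cop_x, cop_y], axis=1)
```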
An average foot was contained in a 37 x 21 matrix of which the foot populated an average of 412 pixels. The sample frequency of the array is 200Hz, which resulted in time series ranging from 40 to 160 samples in length. Our subjects were 18 people with a high, low or normal medial longitudinal arch. B. Limiting Data loss We will need to transform our images in order to optimally overlap them, and because of the very low resolutions involved it is important that we consider the data loss when we transform these images. The metric we will use to assess the amount of data lost is the squared error between the original image and an image that was transformed and then inverse transformed. The techniques we will compare are transformations using nearest neighbor, bilinear and bicubic interpolation. We will also examine the effects of upsampling. We can see clearly in Figure II-1 that nearest neighbor interpolation is not suited for our needs. We can also see that cubic interpolation is too delicate for these very low resolution images: while in most cases it behaves slightly better than bilinear interpolation, it has a very high outlier in this set. The best result is obviously obtained by enlarged bilinear, which is bilinear interpolation transformations on an upsampled image, using the nearest neighbor algorithm for upsampling. After the transformations we downsampled the image again to compare with the original. A lower squared error means that we have retained much more pixel information when we enlarged the image before applying any transformations. We did not consider other algorithms for upsampling because they estimate new values for the new pixels, and since we try to keep as close to the original as we can, we don't want that. Figure II-1 Box plots of the error distribution using different interpolation algorithms III.
NORMALIZATION OF THE IMAGES For the normalization we will use the maximum intensity image for the calculation of the transformations needed to achieve a good alignment between two pressure measurements. Earlier research shows that XOR and MSE are excellent metrics to achieve a good alignment between two images. XOR, or more specifically the overlap error, is calculated by dividing the number of non-overlapping pixels by the number of overlapping pixels. This error is calculated on a binary image: every pixel that contains a value is given a value of one. The mean squared error (or MSE) is calculated by raising the error between the images to the second power and dividing the sum of all these errors by the matrix dimensions. This approach gives extra penalty to large errors. The division by the matrix dimensions eliminates the possibility of drastically reducing one of the image's dimensions to minimize the total error. Both require a decent amount of computation, and if the initial position is far away from the optimal position, optimizing can take up to 25 seconds on my system. To achieve better results an initial guess is required, using the much faster but also less accurate algorithm [2]: principal axis alignment. Because of great differences in the pressure distribution of our subjects we will use this algorithm on a binary image derived from our maximum intensity image using the threshold >0. We implemented this algorithm using Matlab's Image Processing Toolbox; the math required for calculating the principal axes of an image is based on the central moments of area, more information can be found in [3]. Using this addition, our average processing time for optimization went down to 3-5 seconds using the standard optimization algorithm fminsearch, provided by Matlab's Optimization Toolbox. Using principal axis alignment also makes the registration more robust, as it can correct rotations as far as -89 and +89 degrees.
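Both metrics are simple to state in code. A minimal NumPy sketch follows; our implementation was in MATLAB, so this is only illustrative, and the function names are ours.

```python
import numpy as np

def overlap_error(a, b):
    """XOR metric: non-overlapping pixels divided by overlapping pixels.

    Works on binarised images: every sensor with any pressure counts as 1.
    """
    a, b = np.asarray(a) > 0, np.asarray(b) > 0
    return np.logical_xor(a, b).sum() / np.logical_and(a, b).sum()

def mse(a, b):
    """Mean squared error: squared differences summed, then divided by the
    matrix size, so shrinking one image cannot game the score."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return ((a - b) ** 2).sum() / a.size
```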
When only using XOR or MSE these rotations could lead to upside-down registration of the images and very slow convergence towards the minimum. To further increase robustness we flipped all right feet vertically so that all examined feet were presented as left feet. Also, we added a piece of code that would find the center of pressure in the first 20% and the last 20% of the frames so we could check for reverse-direction recorded data and horizontally flip the image when necessary, as shown in Figure III-1. Figure IV-2 Comparison between COP-lines V. SYNCHRONIZATION Now that we have obtained the desired transformation to optimally overlap two feet, we can apply it to the entire time series of the foot roll off, giving us images from two different feet with the same size and orientation. To compensate for the difference in length in time and possibly an entirely different way of walking, we could try to synchronize these images in order to analyze the way they walk. We have tried three different ways of synchronizing these time series, which we will discuss below. Figure III-1 crop and flip as necessary The optimization algorithms from the Optimization Toolbox could not always improve on our initial guess, so we wrote our own algorithm that would operate with boundaries and step sizes that we could more easily manipulate. For each loop our algorithm calculated the MSE or overlap error for one step in six possible directions for horizontal and vertical scaling and rotation. The translation, which is the easiest to calculate, was determined automatically for each step. Our algorithm changed the parameter that would give the best improvement until a minimum was found. The found parameters were then used to transform both images. IV. COMPARISON BETWEEN COP-LINES Because we now have two aligned plantar pressure images, it becomes possible to directly compare their respective COP-lines. Aligning the general shape of these feet also means that the anatomical regions are now aligned.
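The custom search loop described above (six candidate moves per iteration over horizontal scale, vertical scale and rotation, keep the single best improvement, stop at a minimum) can be sketched as follows. The `cost` callable is a placeholder for the MSE or overlap error with the translation resolved internally; names and default step sizes are ours, not the paper's.

```python
def align_greedy(cost, x0=(1.0, 1.0, 0.0), steps=(0.05, 0.05, 2.0), tol=0.0):
    """Greedy search over (h-scale, v-scale, rotation) parameters.

    Each loop evaluates one step in all six directions (+/- each of the
    three parameters) and applies the single change that improves the
    metric most, stopping when no direction improves it.
    """
    params = list(x0)
    best = cost(tuple(params))
    while True:
        candidates = []
        for i, step in enumerate(steps):
            for delta in (-step, +step):
                trial = list(params)
                trial[i] += delta
                candidates.append((cost(tuple(trial)), trial))
        c_best, p_best = min(candidates, key=lambda c: c[0])
        if c_best + tol >= best:      # no direction improves: minimum found
            return tuple(params), best
        params, best = p_best, c_best
```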
Things like the starting angle of the COP-line or the deviations along the horizontal axis now have the same reference and can be compared between subjects. This was not possible between non-normalized images because these values are dependent on the shape and orientation of the individual foot. In Figure IV-1 we can see the result of such a comparison. The contour mask and A. Stretching to same length One approach could be to simply stretch out the shortest time series in a linear fashion to match the other one in length. This will preserve the relative lengths of each phase of the movement compared with the template. It is not possible to compare the weight distributions at a chosen foot stance because this method does not provide any real synchronization. This method is not very useful, as we don't need visual feedback when examining the time differences between phases. B. Synchronizing using Footscan parameters Footscan detects four different phases during foot roll off. The beginnings and ends of these phases are measured by timing the moments of initial and last contact of the various anatomical zones. Using these parameters to synchronize these phase transitions, we now have five points in time which should represent the same foot position for our examined feet at those points in time. The data frames during each phase can be linearly spread along the duration of each phase. This approach gives us a visual representation of the differences in pressure distribution between two datasets for the full duration of the foot roll off. One problem we encountered is that the last phase typically lasts for about 40% of the total time. Because there is not a single synchronization point during this time span, there may be some difference in stance between frames that were matched in this way. In Figure V-1 you can see that during the last phase the synchronization can be less accurate because of the few known synchronized points.
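The phase-based resampling of method B amounts to a piecewise-linear remapping of frame indices onto the template's phase boundaries. A hedged sketch, where `src_marks` and `dst_marks` would come from the Footscan phase detection (the names and nearest-frame choice are ours):

```python
import numpy as np

def synchronize(series, src_marks, dst_marks):
    """Resample `series` (a (T, H, W) recording) so that its phase
    boundaries land on the template's boundaries.

    `src_marks` and `dst_marks` are the five frame indices delimiting the
    four roll-off phases for the source and the template. Frames inside
    each phase are spread linearly along the phase duration; here we pick
    the nearest source frame rather than interpolating pressure values.
    """
    series = np.asarray(series)
    out_len = dst_marks[-1] + 1
    # For every output frame index, find its position between the template
    # marks and map it linearly onto the corresponding source phase.
    src_pos = np.interp(np.arange(out_len), dst_marks, src_marks)
    return series[np.rint(src_pos).astype(int)]
```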
This can best be observed by looking at the progress of the COP-lines: we can see that the blue line is pulling ahead of the red one. Figure V-1 COP-line synchronization As you can see in the figure above, the red and blue lines, which represent the COP-lines, are moving together and we can now demonstrate the differences in pressure distribution at this point in time. VI. CONCLUSION Figure V-1 Synchronisation using Footscan parameters C. Synchronizing using COP-line progress The COP-line gives us a good indication of the foot stance during the foot roll off. By using this knowledge to synchronize between datasets, we now have a very large number of synchronization points. During the registration, our images were aligned to match each other's form. Because of this we can use the vertical coordinate of the COP-line to give an indication of foot roll off progress. As both images now have the same size, position and rotation, the vertical coordinates of the COP-line should now have an identical value for both feet in the same position. Using this approach to synchronization gives us a good view of pressure distribution differences as the upward movement of the centre of pressure progresses. Using an optimization algorithm with a principal axis alignment as an initial guess proves to be an effective way of aligning plantar pressure measurements. As a result we are now able to compare Centre of Pressure (COP) lines directly with each other. This new possibility will be useful to quickly evaluate changes in this line for different kinds of shoes or insoles. Also, with a synchronization based on the COP-line, we can check for differences in pressure distribution during each moment of the foot roll off with both feet in the same position in time of the roll off. This can help orthopedists to get a better insight into the whole roll off process. REFERENCES [1] Pataky, T. C., & Goulermas, J. Y. (2008).
Pedobarographic statistical parametric mapping (pspm): A pixel-level approach to foot pressure image analysis. Journal of Biomechanics, 41 (10), [2] Pataky, T. C., Goulermas, J. Y., & Cromptona, R. H. (2008). A comparison of seven methods of within-subjects rigid-body pedobarographic image registration. Journal of Biomechanics, 41 (14), [3] Prokop, R. J., & Reeves, A. P. (1992). A survey of moment-based techniques for unoccluded object representation and recognition. Graphical Models and Image Processing. Evolution to a private cloud using Microsoft technologies C. Sels, F. Baert, G. Geeraerts Abstract An efficient IT environment is a key factor in today's business plan. Without an optimized IT environment which is easy to manage, businesses won't go far. This is common knowledge nowadays and businesses tend to focus more and more on optimizing their IT infrastructure to gain an advantage over their competitors. Datacenters are created dynamically and the goal is to make use of resources as efficiently as possible so that the environment becomes easily manageable. This is a necessity for most modern infrastructures. Lately, the goal is more and more to create a private cloud, which needs this flexible infrastructure as an underlying basis. This private cloud can help reduce IT costs while increasing agility and flexibility for the company. By building a private cloud, the way IT delivers services and the way users access and consume these services changes. The private cloud provides a more cost-effective, agile way to provide IT services on-demand. The evolution to a private cloud can be achieved through numerous technologies. The focus here is how a company can evolve to a private cloud infrastructure using Microsoft technologies. Index Terms Private cloud, Infrastructure as a Service, Virtualization, Dynamic IT, System Center, Self-Service Portal I.
INTRODUCTION Cloud computing is a concept in IT where computing resources running in the cloud can be delivered on-demand to people who request them. This is often referred to as IT as a service. Cloud computing is defined in The NIST Definition of Cloud Computing and exhibits the following essential characteristics: on-demand self-service, broad network access, resource pooling, elasticity and measured service [1]. Of course, there are several types of clouds defined. A public cloud is a cloud infrastructure which is owned by an organization selling cloud services. It is made available to the general public. A private cloud is a cloud which is dedicated to an organization. The cloud infrastructure can be owned by the organization itself or a 3rd-party hosting company. In this paper, the focus is on achieving a private cloud infrastructure which is owned by the organization itself. With private cloud computing, on-demand self-service is introduced. This way, organizations hope to decrease the costs and the time needed to deliver infrastructure. The datacenter can respond more quickly to changes and the whole environment becomes even more dynamic. It is clear that automation is an important factor in a private cloud. The goal is to automate IT processes and minimize human involvement. A private cloud infrastructure builds upon the company's existing virtualized environment. Thus, it is not a separate product but rather a solution that builds upon existing infrastructure technologies and adds some important aspects. The result is a service-oriented environment which changes the way IT services are delivered. Notice that there are 3 different types of services which can be provided by a cloud [2], [3]: 1. Software as a Service (SaaS) 2. Platform as a Service (PaaS) 3. Infrastructure as a Service (IaaS) In most cases, a private cloud means provisioning Infrastructure as a Service (IaaS) to the users within the organization.
With IaaS, datacenter resources such as hardware, storage, and network are offered as a service from within the cloud. These resources are placed in a pool and an abstraction of the underlying fabric is made. It hides the technical complexity of compute, storage and network from the consumer. Instead, the consumer can select network and storage based on logical names. This way, infrastructure is delivered as a service. This infrastructure can be delivered with virtual machines in which the consumer has to maintain the OS and installed applications, while the underlying fabric is managed by the organization. As mentioned, one of the most important attributes of a private cloud infrastructure is user self-service. Users can obtain infrastructure from the cloud on-demand via self-service portals. This decreases the time required to provision infrastructure to users within the organization and decreases the costs as well. At the same time, capacity can be rapidly and elastically provisioned to the consumer. The overall agility of the datacenter increases. In order to move to a private cloud infrastructure, several steps have to be kept in mind. First, the required overall infrastructure architecture needs to be noted. This involves several key architecture layers. In order to implement these layers in the datacenter, several Microsoft technologies can be used. This paper analyses the private cloud layered architecture and provides the milestones needed to implement these layers. This allows an organization to build the basis for its own private cloud to provide Infrastructure as a Service using Microsoft technologies. II. FROM VIRTUALIZATION TO PRIVATE CLOUD Every organization wants to evolve to an efficient IT infrastructure. For some companies, the ultimate goal is to create a private cloud, which needs this flexible infrastructure as an underlying basis [2].
Notice that the evolution to the cloud will introduce many manageability issues concerning IT operations. Without the proper application manageability, the transformation will most likely become non-manageable and produce a lot of costs. This is why several aspects of the infrastructure have to be taken into consideration when an organization wants to evolve to a private cloud infrastructure. Fig. 1. Logical evolution of the datacenter Nowadays, most companies are aware of the benefits that virtualization technologies have to offer. A lot of companies have moved to a virtualized datacenter, which means the utilization of the datacenter increases significantly compared to the traditional datacenter. Server consolidation is another advantage of the virtualized datacenter. For a lot of organizations, however, the evolution doesn't stop there. Virtualization is a critical layer in the infrastructure which has several advantages. It must be noted, however, that other layers such as automation, management, orchestration and administration are of equal importance when a private cloud infrastructure is to be achieved. Even though an infrastructure is virtualized, its virtual machines or the applications within might still not be monitored; there might not be process automation, etc. This is the reason why other infrastructure-architecture layers are needed [4]. These additional layers, shown in figure 2, are also very important when moving to a private cloud. They are used to enable Infrastructure as a Service, or IaaS, as described earlier. When introducing these architecture layers in the datacenter, the term fabric management is often used. We can define the fabric of a datacenter as the storage, network and hardware in the infrastructure; in other words, the bottom layers in the private cloud layered architecture. A private cloud will use these additional architecture layers to enable abstraction of services from the underlying fabric.
This way, the fabric can be delivered as a service from within the cloud. Thus, infrastructure can be delivered as a service to the organization. This provides a more cost-effective, agile way to provide IT services on demand and is one of the main attributes of a private cloud. The management layer plays an important role in a private cloud infrastructure. This layer consists of a set of components in which each component has its own management function. The management layer must be able to manage and monitor the virtualized environment, allow for service management, and so on. You may also notice the orchestration layer. The orchestration layer makes use of all of the underlying layers and provides an engine which automates IT processes. This is not done by scripts, but by using a graphical interface, mostly workflows. This is referred to as Run Book Automation (RBA). The orchestration layer allows the datacenter management components to be integrated in workflows. Finally, the administration layer is shown as the top layer. Private clouds provide Infrastructure as a Service. In order to do this, it is obvious that self-service portals are required. The administration layer fulfills this role. Thus, the administration layer is a sort of user interface which can be accessed in the organization to request infrastructure as a service. This makes it possible to provision infrastructure on-demand to the organization. By integrating all of these infrastructure-architecture layers, moving to a private cloud is made possible. It can be stated that all of these layers are required to evolve to a private cloud. The next sections show the technologies and best practices that are offered by Microsoft to achieve this layered architecture [5]. Fig. 2. Private cloud layered architecture Notice that virtualization only offers the foundation for moving to a private cloud. It is used as a foundation to introduce the other architecture layers. III.
CREATING THE HYPERVISOR INFRASTRUCTURE A reliable, highly available and scalable infrastructure is needed as a foundation for a private cloud. In order to achieve this, virtualization is a must. Microsoft Hyper-V virtualization technology is used to accomplish this goal. This provides the appropriate virtualization hosts needed. By using virtualization, an abstraction layer is created which hides the complexity of the underlying hardware and software. This is needed in a private cloud. High availability is an important aspect within flexible IT environments. Windows Server 2008 R2 Failover Clustering can be used to provide highly available virtual machines. Failover clusters in Windows Server 2008 provide high availability and scalability for critical services [6]. Each Hyper-V host server is configured as a node in the failover cluster. When a node in the failover cluster fails and shuts down unexpectedly, the failover cluster will migrate all of the virtual machines on the failing node to another node. This allows the virtual machines to keep running. Fig. 3. Failover cluster Failover clustering requires shared storage for the cluster storage. This requires the use of a Storage Area Network (SAN). All of the virtual machines have to be stored in the shared storage. If this step is not done, migration of virtual machines in the failover cluster is not possible. This means that there will be downtime when a virtual machine has to be migrated to another node in the cluster. There are two types of migration possible: 1. Quick Migration 2. Live Migration Quick migration will save the state of a virtual machine, move the virtual machine to another node in the failover cluster, then restore the state of the VM on the new node and run the virtual machine. This means there will be some downtime. With live migration, another mechanism is used.
Live migration transparently moves running virtual machines from one node of the failover cluster to another node in the same cluster. A live migration completes in less time than the TCP timeout for the migrating VM [7]. This means the users working on the virtual machines won't perceive any downtime, and thus the availability of services increases. It is clear that live migration produces less downtime. By implementing and configuring these aspects, the virtualization layer in the private cloud architecture can be realized.

IV. CREATING THE MANAGEMENT INFRASTRUCTURE

The Hyper-V infrastructure and failover cluster form the core of the IT environment. Virtualization is essential as a foundation in the private cloud infrastructure. However, to maintain the availability and flexibility of the datacenter, management software is required. Thus, an important step is the design of a management infrastructure that supports the virtualization hosts and storage infrastructure. This allows the cloud to rapidly scale its resources with location transparency. The management infrastructure will monitor the virtualized environment, manage it and operate it. To achieve this, several Microsoft technologies can be used. System Center is the management suite offered by Microsoft. It consists of several products, each with its own management function within the datacenter. The most important management components in the datacenter are the following.

A. Directory and authentication services

The offering that best meets these needs is Active Directory Domain Services (ADDS) together with the Domain Name System (DNS). An Active Directory environment for the datacenter requires fewer changes than a standard network, as it does not have as many user and computer accounts. This allows for better manageability. To provide flexibility and reliability, the domain controller has to be protected from possible failures.
If the primary server fails, a secondary server can then take over. Thus, at least two domain controllers must be implemented.

B. System Center Virtual Machine Manager (VMM)

Managing a virtualized environment is an important task. By creating the hypervisor infrastructure, the virtualized environment was enabled; it has to be managed as well, though. If this is not the case, the evolution to a private cloud is not possible, since the top layers in the layered architecture of a private cloud make use of these management components to provide user self-service. By implementing System Center Virtual Machine Manager (VMM), the management of the virtualized environment is taken care of [8]. The management of the virtual machines is increasingly automated. This is needed for a cloud to scale rapidly with pooled resources and with location transparency. Thus, VMM can be considered one of the most important management components for evolving to a private cloud infrastructure. This will become clearer when we discuss self-service portals, since they use VMM as an underlying layer.

C. System Center Operations Manager (SCOM)

As opposed to VMM, SCOM monitors the virtualized environment instead of managing it. SCOM is able to monitor thousands of servers, applications and clients. This is very important: without a monitoring solution in place, the infrastructure becomes unmanageable and problems with servers do not get solved. A failing service needs to be detected as soon as possible. By introducing SCOM, information about failing applications and services is kept in a centralized environment, instead of being distributed over all of the servers. It is clear that manageability and flexibility increase significantly, while costs and the time to resolve issues decrease. VMM and SCOM can be tightly integrated with each other using Performance and Resource Optimization (PRO).
PRO is a feature in VMM and helps optimize resources through intelligent placement of virtualized workloads. This means PRO chooses the most optimal host for a VM based on its resources and configuration. This significantly increases the elasticity of the datacenter, which is a very important attribute of private cloud infrastructures. For example, PRO migrates a virtual machine to another host when the host runs out of sufficient resources. The host's need for resources is monitored by SCOM and automatically remediated by VMM.

D. Other management components

Apart from these management servers, other System Center components need to be implemented as well: System Center Service Manager (SCSM), System Center Data Protection Manager (DPM) and System Center Configuration Manager (SCCM). By introducing these management components, the following is provided:
- Data protection and backup solution
- Helpdesk service management
- Patch management and software distribution
- Operating system deployment

By combining all of these management components, the management infrastructure is realized. This increases the availability and resiliency of the datacenter. All of the management components can work together and help automate the datacenter.

V. AUTOMATION AND ORCHESTRATION INFRASTRUCTURE

An important characteristic of a cloud is the minimization of human involvement in IT processes. A well-designed private cloud performs operational tasks automatically, elastically scales capacity, and more [9]. This can be achieved over time by implementing the automation and orchestration layer in the private cloud infrastructure. Opalis Integration Server (OIS) is a Microsoft technology which can be used to create the orchestration layer in the private cloud architecture. Opalis is a management component which is very important in datacenter automation. It can be described as the component that glues all of the System Center products together. Opalis is an automation platform and is used for orchestrating and integrating IT tools and processes. Opalis does this by using workflows, as opposed to scripts. This is called Run Book Automation (RBA). Opalis uses a workflow engine which automates the workflow. Every IT process is represented by a building block; all of these building blocks are combined and orchestrated in an Opalis workflow. This automation method has several advantages. Mainly, because it uses a graphical representation of IT processes, the workflows are self-documenting and easy to understand. This way, it is possible to think about problems at a higher level. Opalis is a very important factor in a private cloud infrastructure. It allows datacenters to respond more quickly to changes by automating scenarios and best practices. By doing this, operational tasks can be performed automatically with minimized human involvement. This is an important attribute of private cloud computing [9].

VI. PRIVATE CLOUD

In the previous sections, it was seen how virtualization is essential to moving to a private cloud. However, other architecture layers were needed as well. By implementing the Microsoft technologies shown before, these layers can be achieved [10], [11]. An overview of the implemented Microsoft technologies can be seen in figure 4.

Fig. 4. Microsoft technologies used in private cloud layered architecture

In the previous sections, the administration layer has not yet been described. This layer provides a user interface and implements self-service in the datacenter. As stated before, this is one of the most essential characteristics of a private cloud. By adding self-service, users can obtain infrastructure from within the cloud on demand. This decreases the time needed to provision infrastructure to users within the organization and decreases costs as well. The overall agility of the datacenter increases.
As seen in figure 4, the Microsoft technologies which provide self-service portals to the datacenter are Virtual Machine Manager Self-Service Portal 2.0 (VMM SSP 2.0) and System Center Service Manager (SCSM). VMM SSP 2.0 plays a very important role in the evolution to a private cloud and allows for many of the attributes that are needed. The architecture can be seen in figure 5.

Fig. 5. VMM SSP 2.0 architecture
Fig. 6. VMM SSP 2.0 building blocks

VMM SSP 2.0 is a solution by Microsoft which is implemented in the administration layer of the private cloud layered architecture and introduces user self-service to the infrastructure [12]. It is implemented on top of the existing IT infrastructure to provide Infrastructure as a Service to business units within the organization. As seen in figure 5, it consists of a web component, a database component, and a server component. VMM SSP 2.0 makes use of a specific mechanism to provision datacenter resources to the organization; by using this mechanism, it implements a private IaaS cloud. It allows datacenter resources to be provisioned in a different way than in traditional datacenters. VMM SSP 2.0 works in the following way. It makes use of several built-in user roles, each with specific rights. First, the datacenter administrator configures the resource pools in VMM SSP 2.0; this way, pools are defined for storage, network, and so on. At the same time, the datacenter administrator imports templates from VMM. These templates can be used by the business units to create virtual machines. Costs are associated with reserving and allocating these resources. After this, a business administrator is able to register his business unit in SSP 2.0 and can then create an infrastructure request with the appropriate services.
If the datacenter administrator approves this request, the business unit users can start creating virtual machines within the approved capacity which was assigned to the business unit (figure 6). The costs associated with these resources are charged back to the business unit. By monitoring chargeback data, the organization can keep track of the resources which are provisioned to the business units, and the cloud solution is kept manageable. VMM SSP 2.0 also provides a dashboard extension to monitor these chargeback costs and the resources which are serviced to business units. Remember that measured service was an essential characteristic of cloud computing. VMM SSP 2.0 uses a specific service-delivery model to deliver Infrastructure as a Service to business units within the company. By using virtual machines, the infrastructure can be provisioned on demand. Once an infrastructure request is approved, business units can create virtual machines on demand within the permitted capacity. With VMM SSP 2.0, the provisioning and deployment of infrastructure is different than in a traditional datacenter. In a private cloud, the focus is more on delivering services than on managing and setting up physical servers [13]. This allows IT to focus more on the business aspect than on the physical hardware associated with the services. Capacity can be rapidly and elastically provisioned. Furthermore, Service Manager (SCSM) can be used in the administration layer of the private cloud architecture. This way, SCSM can be used as a user interface to initiate automation workflows in Opalis. Thus, by integrating SCSM and Opalis, change requests can be automated. When a change request in SCSM is approved by a datacenter administrator, Opalis can detect this and initiate a workflow which automatically resolves the change request. This way, a lot of intermediate steps are removed and human involvement is minimized.
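The approval-and-quota flow just described can be reduced to a toy model: a business unit may only create virtual machines within its approved capacity, and chargeback is computed from the resources it has allocated. This is purely illustrative pseudologic; the class and method names are hypothetical and are not part of the actual VMM SSP 2.0 API:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessUnit:
    """Toy model of an SSP-style business unit with an approved quota."""
    name: str
    approved_cpus: int              # capacity approved by the datacenter admin
    used_cpus: int = 0
    vms: list = field(default_factory=list)

    def create_vm(self, vm_name: str, cpus: int) -> bool:
        # VM creation only succeeds within the approved capacity
        if self.used_cpus + cpus > self.approved_cpus:
            return False
        self.vms.append((vm_name, cpus))
        self.used_cpus += cpus
        return True

    def chargeback(self, cost_per_cpu: float) -> float:
        # Cost is charged for allocated resources (measured service)
        return self.used_cpus * cost_per_cpu

unit = BusinessUnit("finance", approved_cpus=8)
assert unit.create_vm("web01", 4)
assert unit.create_vm("db01", 4)
assert not unit.create_vm("extra", 1)       # request exceeds approved capacity
print(unit.chargeback(cost_per_cpu=10.0))   # 80.0
```

The point of the sketch is only the shape of the mechanism: requests are bounded by an approved quota, and chargeback is derived from allocation rather than from usage of running machines.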
This is an important attribute of a private cloud infrastructure, as mentioned before.

VII. CONCLUSION

We can note that the evolution to a private cloud using Microsoft technologies is possible with current Microsoft offerings. By implementing these offerings, the infrastructure complies with the NIST definition of cloud computing; the implemented solution provides all of these capabilities. However, it must be underlined that a private cloud is not a single solution offered by Microsoft. Rather, a private cloud is a collection of products which work together to offer Infrastructure as a Service. First of all, the layered architecture of a private cloud should be kept in mind. Although virtualization is an essential part of moving to a private cloud, several other layers are of equal importance; they are necessary as well if a private cloud infrastructure is to be achieved. Virtualization provides the foundation for these layers. Thus, it can be stated that private cloud computing really is a logical evolution of the virtualization trend of the last years. By adding layers of new technologies (self-service, chargeback, management, and more) to the existing datacenter system, a private cloud infrastructure can be realized. In this private cloud, the provisioning and deployment of infrastructure is different than in a traditional datacenter: the focus is more on delivering services than on managing and setting up physical servers. This allows IT to focus more on the business aspect. Finally, we can conclude that the evolution to a private cloud consists of many aspects and considerations, which should be well planned. A private cloud is not a single solution; rather, it consists of several steps and products. Microsoft provides the products needed to implement these aspects.

REFERENCES
[1] National Institute of Standards and Technology, NIST Definition of Cloud Computing.
[2] M. Tulloch, Understanding Microsoft Virtualization Solutions: From the Desktop to the Datacenter, 2010.
[3] E. Kassner, Road to a Private Cloud Infrastructure, 2010.
[4] D. Ziembicki, "From Virtualization to Dynamic IT," The Architecture Journal #25.
[5] Microsoft Corporation, Infrastructure Planning and Design Guide Series.
[6] Microsoft Corporation, White Paper: Failover Clustering in Windows Server 2008 R2.
[7] Microsoft Corporation, Hyper-V Live Migration Overview & Architecture.
[8] M. Michael, Mastering Virtual Machine Manager 2008 R2, 2008.
[9] A. Fazio, Private Cloud Principles, 2010.
[10] D. Ziembicki, Government Private Cloud, 2011.
[11] Microsoft Corporation, Microsoft Private Cloud Solutions.
[12] Microsoft Corporation, Hyper-V Cloud Fast Track.
[13] Y. Choud, Choud on Windows Technologies.

Creation of 3D models by matching arbitrary photographs (June 2011)

S. Solberg

Abstract: The presented algorithm matches two photographs taken by any consumer camera. Common features on these photographs are selected by human interaction and are used to create a 3D model of a subject's facial features. The acquired data provides a much more reliable base than a standard plaster cast, as it also contains the subject's facial structure. This approach is very interesting in dental and other medical domains, as it does not rely on expensive hardware, making it possible for a small dental lab to analyze and measure a subject's set of teeth. The goals of this document are to show the theory behind the algorithm and the accuracy of the data it provides.

Index Terms: 3D, model, arbitrary photograph, matching

I. INTRODUCTION

A common way to represent a person's set of teeth is by taking a picture or by making a plaster cast. The problem that arises is the loss of information that occurs.
The cast is sufficient to define the teeth relative to each other, and the picture is sufficient to define the set of teeth relative to the face, but never both at the same time. This might result in wrong crowns and bridges, giving an unsightly result. More importantly, it will damage the reputation of the dentist and may lead to additional costs. The algorithm described in this document provides a solution to this problem. The algorithm is based on elements found on both a frontal photograph and a profile photograph. From here on we assume that the specified features are one of the canines and one of the incisors. These points are chosen for several reasons:

They do not move when the subject opens or closes his mouth.
They are close together and therefore suffer less from distortions caused by camera lenses.
They are relatively easy to define in an indisputable way.

There is also a drawback in using these points. Since the two points are close together, quantization noise will play a significant role in the acquired accuracy.

II. RELATED WORK

A very similar technique for creating 3D models or scenes can be found in stereoscopy [3]. This technique uses human physiology to create a 3D image from two photographs taken from slightly different viewpoints, thus mimicking the views received by the left and right eye. When the two pictures are presented to their corresponding eyes, the viewer sees the image in 3D. This implies that the depth of the scene can be calculated from these two photographs. The technique relies on the correct positioning of the two cameras. The algorithm presented in this document aims to improve the amount of freedom in camera positioning. In contrast with stereoscopy, this algorithm is not intended for full scene modeling.

III. CALIBRATING THE CAMERA

Some important properties of photographs need to be taken into account before explaining the algorithm. Before any measurement can be made, we need to determine the viewing angle of the camera. Since perfect accuracy is not required, this can be done in a practical test. When the size of an object or shape is known, it can be used to calibrate the camera. The easiest way to do this is to use a ruler and position it horizontally, or to draw a line of a known length on a wall. The camera should be positioned so that the ruler or line fits inside the camera frame exactly. The distance between the camera and the ruler can then be used to calculate the viewing angle. This setup can be seen in figure 1.

Fig. 1 Calibrating the camera

When the scene is viewed from above, it can be seen that the camera angle can be calculated by the following formula, with L the length of the ruler and d the distance between camera and ruler (the top view is shown in figure 2):

alpha = 2 * arctan( L / (2d) )    (1)

This formula can then be used to calculate the focal length v of the used camera, with w the width of the image:

v = w / (2 * tan(alpha / 2)) = (w * d) / L    (2)
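Assuming the calibration relations alpha = 2*arctan(L/(2d)) and v = w*d/L described above, the practical test reduces to a few lines. The function names below are my own, not from the paper:

```python
import math

def viewing_angle(ruler_length: float, distance: float) -> float:
    """Camera viewing angle (radians) when a ruler of the given length
    exactly fills the frame at the given distance."""
    return 2.0 * math.atan(ruler_length / (2.0 * distance))

def focal_length(image_width: float, ruler_length: float, distance: float) -> float:
    """Focal length v in the same units as image_width."""
    alpha = viewing_angle(ruler_length, distance)
    return image_width / (2.0 * math.tan(alpha / 2.0))  # simplifies to w*d/L

# Example: a 1 m ruler exactly fills the frame at 0.5 m distance
print(math.degrees(viewing_angle(1.0, 0.5)))  # 90.0 degree viewing angle
```

For an 800-pixel-wide image under the same setup, `focal_length(800, 1.0, 0.5)` gives a focal length of 400 pixels, which matches the simplified form w*d/L.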
A result of this projection is distortion of size and perspective. An object which is out of the center of the image will appear rotated and scaled. Looking into a cardboard box is a good example to illustrate this phenomenon: you will be able to see all four walls of the box, despite the fact that they are perpendicular to the bottom of the box.

C. Defining a ray

In order to simplify the calculations, we will represent the points in 3D space with vectors. Every point in 3D space is projected onto a 2D plane. All these projections pass through the viewpoint. A very similar approach is used in ray tracing [2]. This implies that we can also define each point in 3D space by a vector passing through the origin, as shown in figure 4. An arbitrary point a would then be defined by:

a = o + k * r    (3)

with

r = v + p    (4)

The parameters in these equations are:

The position of the camera o
The viewpoint v relative to the camera position o
The direction vector r
The projected point p relative to the viewpoint v

Fig. 2 Top view of calibration
Fig. 3 Projection of 3D space onto 2D plane
Fig. 4 Vector representation

Note that this formula alone is not sufficient to describe a point in 3D space. The parameter k in this equation is unknown, thus defining a straight line through o; from now on this will be referred to as a ray. This is because a picture does not contain information about depth. The rest of the parameters are either unknown or can be chosen arbitrarily. Since only the distance between the two photos has any significance in the algorithm, one of the origin vectors can be chosen to be coincident with the origin of the coordinate system. The same goes for the rotation of the photos. The remainder of the paper will thus assume that o is chosen (0,0,0) and that v is chosen (0,0,v_z). Depending on the available information regarding the hardware, v_z might also be known.
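The ray of equations (3) and (4) can be written as a one-liner; each value of k picks a different depth along the same line of sight. A small illustrative sketch (function name my own, not from the paper):

```python
import numpy as np

def ray_point(o, v, p, k):
    """Point on the ray a = o + k*r with direction r = v + p (eqs. 3-4)."""
    o, v, p = (np.asarray(x, dtype=float) for x in (o, v, p))
    return o + k * (v + p)

# Frontal plane with o = (0,0,0) and v = (0,0,v_z): every k gives another
# candidate 3D position for the same projected pixel p.
for k in (1.0, 2.0):
    print(ray_point([0, 0, 0], [0, 0, 1.0], [0.2, 0.1, 0.0], k))
```

This makes the text's point concrete: a single photograph constrains a point to a line through the origin, not to a unique position, because k (the depth) stays free.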
D. Defining a point

Since a single picture is not enough to define a point in 3D space, two pictures will be used. These pictures can be taken from any position and with any given rotation. We can define rays for both projection planes. These rays will collide somewhere in 3D space, given by the following equations, where subscript F denotes the frontal picture and subscript S the second picture:

a_F = a_S    (5)
o_F + k * r_F = o_S + l * r_S    (6)
o_F + k * (v_F + p_F) = o_S + l * (v_S + p_S)    (7)

The only known parameter is p; the rest are either unknown or can be chosen arbitrarily. The algorithm assumes that o_F is chosen (0,0,0) and that v_F is chosen (0,0,v_z).

E. Solving the ray equation

Equation (7) can be split up in components, so it defines a system of three separate equations:

o_Fx + k * (v_Fx + p_Fx) = o_Sx + l * (v_Sx + p_Sx)    (8a)
o_Fy + k * (v_Fy + p_Fy) = o_Sy + l * (v_Sy + p_Sy)    (8b)
o_Fz + k * (v_Fz + p_Fz) = o_Sz + l * (v_Sz + p_Sz)    (8c)

With the assumptions above (o_F = (0,0,0), v_Fx = v_Fy = 0 and p_Fz = 0), the system simplifies to:

k * p_Fx = o_Sx + l * (v_Sx + p_Sx)    (9a)
k * p_Fy = o_Sy + l * (v_Sy + p_Sy)    (9b)
k * v_Fz = o_Sz + l * (v_Sz + p_Sz)    (9c)

F. Solving for a model

Section E showed the equations for a single point. It can be seen that this system is not linear and thereby not solvable by using a matrix. Since we can never solve a system which has more independent variables than equations, we define points until the following condition is satisfied:

3n (equations) >= 7 + 2n (parameters), i.e. n >= 7

For every point we define in 3D space we find three more equations, while only two unknown parameters get added. These parameters are the k and l factors. When seven points are defined in 3D space, this provides 21 equations with 21 unknown parameters. This is a solvable problem. The possible rotation of the secondary projection plane causes the equations to become even more complex. The rotation causes the parameters p_Sx, p_Sy and p_Sz to no longer correspond to the distances that can be seen on the photograph. They can however be calculated by using a rotation matrix:

p'_S = T * p_S    (10)

with

T = | t*x^2 + c      t*x*y - s*z    t*x*z + s*y |
    | t*x*y + s*z    t*y^2 + c      t*y*z - s*x |
    | t*x*z - s*y    t*y*z + s*x    t*z^2 + c   |

The x, y and z represent the components of the unit vector around which the rotation is to take place.
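The paper leaves the numerical solution of the full system out of scope. As an isolated illustration of equation (7) only: when the camera parameters of both pictures are already known, the two rays for a single point meet (at least approximately), and k and l can be found by linear least squares. A minimal numpy sketch under those assumptions, with names of my own choosing:

```python
import numpy as np

def intersect_rays(o_f, r_f, o_s, r_s):
    """Least-squares solution of o_f + k*r_f = o_s + l*r_s (cf. eq. 7).
    Returns the midpoint of the two closest points on the rays."""
    o_f, r_f = np.asarray(o_f, float), np.asarray(r_f, float)
    o_s, r_s = np.asarray(o_s, float), np.asarray(r_s, float)
    # Stack the unknowns (k, l): k*r_f - l*r_s = o_s - o_f
    A = np.column_stack([r_f, -r_s])
    b = o_s - o_f
    (k, l), *_ = np.linalg.lstsq(A, b, rcond=None)
    a_f = o_f + k * r_f          # point on the frontal ray
    a_s = o_s + l * r_s          # point on the second ray
    return (a_f + a_s) / 2.0     # midpoint absorbs small noise

# Two rays that intersect exactly in (1, 1, 1)
print(intersect_rays([0, 0, 0], [1, 1, 1], [2, 0, 0], [-1, 1, 1]))
```

Note this only solves the easy sub-problem (known cameras, one point); the paper's actual system additionally treats the camera parameters o_S and v_S as unknowns, which is what forces the seven-point condition above.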
We will represent this vector with R:

R = (v_F x v_S) / |v_F x v_S|

By making the previously mentioned assumptions, the system is reduced to nine unknown parameters: o_Sx, o_Sy, o_Sz, v_Fz, v_Sx, v_Sy, v_Sz, k and l. Despite any information regarding the focal length of the camera, the parameters v_Sx, v_Sy and v_Sz remain unknown due to a possible rotation. With v_F = (0, 0, v_Fz), the cross product simplifies, thus making

R = (-v_Sy, v_Sx, 0) / sqrt(v_Sx^2 + v_Sy^2)

The other parameters in the rotation matrix are derived from the angle theta over which the rotation is to be executed:

c = cos(theta) = (v_F . v_S) / (|v_F| |v_S|)
s = sin(theta)
t = 1 - c

This extra transformation does not add any additional unknown variables, which implies that the system can still be solved. However, the complexity of the system has increased dramatically. It is very unlikely that this system can be solved by a deterministic approach. Therefore numerical or iterative approaches have to be taken into account. This, however, is beyond the scope of this document.

G. Remarks

In section D of this chapter we made some assumptions about the rotation of the pictures. By assuming that o_F is chosen (0,0,0) and that v_F is chosen (0,0,v_z), two rotations of the frontal projection plane are locked. This is not a necessity, but it makes solving the equations easier. There is, however, one very important rotation that is not included in the mathematical approach. Take the frontal projection plane, for instance: when we alter v_Fx, the plane will rotate around the Y-axis; when we alter v_Fy, the plane will rotate around the X-axis. What happens when a picture is rotated around the Z-axis? This will have no effect on v_F, and vice versa. The answer is that it does not matter how the plane is rotated. Instead of considering the picture to be finite in size, consider it to be infinite, while only points contained within the pyramid are projected.
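The axis-angle rotation matrix T of equation (10) (the Rodrigues form, with c = cos(theta), s = sin(theta), t = 1 - c) can be built and sanity-checked in a few lines. This is a generic implementation of that standard matrix, not code from the paper:

```python
import numpy as np

def rotation_matrix(axis, theta):
    """Rodrigues rotation matrix about the unit vector `axis` by angle theta."""
    x, y, z = np.asarray(axis, float) / np.linalg.norm(axis)
    c, s = np.cos(theta), np.sin(theta)
    t = 1.0 - c
    return np.array([
        [t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
        [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
        [t*x*z - s*y, t*y*z + s*x, t*z*z + c  ],
    ])

# Rotating the X unit vector 90 degrees around Z yields the Y unit vector
T = rotation_matrix([0, 0, 1], np.pi / 2)
print(T @ np.array([1.0, 0.0, 0.0]))  # approximately (0, 1, 0)
```

A useful property for checking a reconstruction like this: any correct rotation matrix is orthogonal (T times its transpose is the identity) and has determinant 1.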
An infinite plane has no rotation around its normal. A more practical way of illustrating this is by considering a person tilting his head on a picture; the same effect can be created by tilting the camera. In other words, the rotation of the photograph does not matter. It is the relative position and rotation of the subject and the other photograph that is key in the described problem.

H. Full photo modeling

After solving the system of equations by providing seven points, either by human interaction or by a matching algorithm such as correlation, we can select additional points and run the algorithm for every point on the picture. This enables us to make a full 3D representation of the object shown on the picture.

V. THE ALGORITHM IN SOFTWARE

The mathematical problem shown in the previous section can be greatly simplified by making a few assumptions. This eliminates the solving of a system of variables. A property that can be used to simplify the algorithm can be seen in the vector components shown in (9). These equations show that a line that is parallel with the X-axis in 3D space will also be parallel with the X-axis in 2D space; it will only appear smaller or further away from the center. They also show that when v is relatively large compared to p, a small change of k has only a negligible effect. This can easily be seen in an example. The following equations are valid for the frontal projection plane only:

a_x = o_x + k * p_x    (9a)
a_y = o_y + k * p_y    (9b)
a_z = o_z + k * v_z    (9c)

When the projected point is close to the center, changing the distance of the projected object will not affect p very much. If the intended purpose of the algorithm does not require large changes in the depth of an object, the equations can be reduced to:

a_x = o_x + p_x    (10a)
a_y = o_y + p_y    (10b)
a_z = o_z + k * v_z    (10c)

This assumption has an important benefit in practical use: the physical distance can be indicated on a picture by using a ruler.
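The claim that a small depth change barely moves a point projected near the image centre can be checked with a toy pinhole projection (an illustrative model of my own, not code from the paper):

```python
def project(a, v_z):
    """Pinhole projection of 3D point a onto the plane z = v_z,
    with the camera at the origin."""
    return (v_z * a[0] / a[2], v_z * a[1] / a[2])

# A point near the image centre, then the same point moved 5% further away
near = project((0.05, 0.02, 10.0), v_z=1.0)
far  = project((0.05, 0.02, 10.5), v_z=1.0)
print(near)   # (0.005, 0.002)
print(far)    # the projected coordinates barely move
```

For this near-centre point the projected shift is on the order of a ten-thousandth of the image plane unit, which is the effect the simplification to (10a)-(10b) exploits; a point far from the centre would shift noticeably more.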
Points which are only slightly closer or further away from the projection plane can still be used for measuring purposes. Further simplifications can be made by assuming that the photograph taken from the side of the subject is perpendicular to the frontal photograph; in other words, v_S is chosen (v_x, 0, 0). This way any tilting of the camera around its normal vector can be compensated by software. Figure 5 gives a better insight into this compensation. The basic principle of the algorithm is matching a dimension that is shared by both pictures, i.e. the vertical distance between the selected incisor and canine. When either photograph is rotated, the measured distances y will not be the same in both pictures. This difference depends on the angle of the rotation. Suppose the frontal projection plane is rotated around its normal: d would not change, but y_f would. The rotation could then be found by the following formula:

phi_f = arcsin(y_f / d_f) - arcsin(y_s / d_f)    (7)

An analogous formula can be found for the rotation in the other projection plane:

phi_s = arcsin(y_s / d_s) - arcsin(y_f / d_s)    (8)

Fig. 5 Rotation effects

VI. REFERENCES

[1] Matt Pharr and Greg Humphreys, Physically Based Rendering. San Francisco, United States of America: Elsevier.
[2] Max Born and Emil Wolf, Principles of Optics, 7th ed. Cambridge: The Press Syndicate of the University of Cambridge.
[3] Affonso Beato. (2011, February) Stereo 3D Fundamentals. [Online].

Conceptual design of a radiation tolerant integrated signal conditioning circuit for resistive sensors

J. Sterckx, P. Leroux

Abstract: This paper presents the design of a radiation tolerant configurable discrete time CMOS signal conditioning circuit for use with resistive sensors like strain gauge pressure sensors. The circuit is intended to be used for remote handling in harsh environments in the International Thermonuclear Experimental Reactor (ITER).
The design features a 5 V differential preamplifier using a Correlated Double Sampling (CDS) architecture at a sample rate of 20 kHz and a 24 V discrete time post amplifier. The gain is digitally controllable between 27 and 400 in the preamplifier and between 1 and 8 in the post amplifier. The nominal input referred noise voltage is only 8.5 uV. The circuit has a simulated radiation tolerance of more than 1 MGy.

TABLE I. CIRCUIT SPECIFICATIONS
Presettable voltage gain
Voltage gain accuracy: 2%
-3 dB bandwidth: 1 kHz
Supply voltage: 24 V
Output voltage level: 12 V
Input impedance: > 50 kOhm
Output impedance: < 100 Ohm
Radiation tolerance: 1 MGy
Temperature range: 0 C - 85 C

Index Terms: International Thermonuclear Experimental Reactor (ITER), radiation effects, CDS, signal conditioning

I. INTRODUCTION

Today, the demand for energy keeps on growing, while fossil fuels are being exhausted. The challenge is to produce sustainable clean energy with CO2 emissions as low as possible. Fusion may prove a key technology for meeting global energy demands. The ITER facility in Cadarache, France is an experimental design of a tokamak nuclear fusion reactor aiming to prove its technical feasibility. In this reactor, remote handling is required, as radiation levels are too high for human interventions. In order to reduce the number of cables from the reactor, electronics are needed to locally amplify, digitize and multiplex the large amount of sensor signals. In this work a discrete time instrumentation amplifier is presented for interfacing a pressure sensor. The sensor that is used is an OMEGA PX906 [1]. The sensor preamplifier has been simulated and designed using a SPICE-like circuit simulator. Simulation results will be used to compare the circuit performance with the target specifications (Table 1). Simulations will also include the use of radiation models, based on previous work [2].
A. Preliminary Market Research

Components from several different manufacturers are principally suited for the purpose of amplifying the signals from the Omega PX906 pressure sensor, but only few of them have a specified radiation tolerance. This tolerance is aimed at space applications (kGy range), while this circuit must have a tolerance up to 1 MGy. Hence an application specific integrated circuit needs to be designed.

B. CMOS Radiation Effects

In the envisaged application, the electronics will be exposed to ionizing radiation up to a lifetime ionizing dose of 1 MGy. Ionizing radiation affects the behavior of the transistors mostly through charge generation and trapping in the intrinsic transistor oxides. This results in changes in the device's threshold voltage, mobility and subthreshold leakage current, and may cause inter-transistor leakage [3]. For linear circuits the main degradation effect is in the threshold voltage. For switches and transmission gates, the subthreshold leakage is also of concern. These effects need to be counteracted through radiation hardening by design and layout. For this design in 0.7 um CMOS technology, the radiation dependent SPICE model is based on the data presented in [4].

Jef Sterckx is with the Katholieke Hogeschool Kempen, Geel, Belgium. Paul Leroux is with the Katholieke Hogeschool Kempen, Geel, Belgium. He is also with the Katholieke Universiteit Leuven, Dept. ESAT-MICAS, Heverlee, Belgium and with SCK CEN, the Belgian Nuclear Research Centre, Mol, Belgium.

C. Architecture selection
This is especially important in radiation environments, as the 1/f noise, related to surface defects, tends to increase under radiation because defects are created at the gate oxide-channel interface.

II. CIRCUIT DESIGN AND LAYOUT

A. First stage

Fig. 3: Full Wheatstone bridge for a pressure transducer with millivolt output

Three different circuit architectures may be considered for use with this Wheatstone bridge sensor:
- Continuous time instrumentation amplifier [5] (Fig. 1: example of a continuous time architecture)
- Chopper modulated amplifier [6] (Fig. 2: example of a chopper modulating architecture)
- Discrete time switched capacitor amplifier [7] (Fig. 3: example of a switched capacitor architecture)

In the continuous time instrumentation amplifier architecture, the main drawback is the 1/f noise that can't be distinguished from the actual signal, thereby lowering the SNR. The offset is also amplified together with the signal, which reduces the DC and low-level accuracy of the amplifier. In the chopper stabilized architecture, the signal is chopped by a high frequency signal. This results in a PAM (Pulse Amplitude Modulated) signal. The main drawback of this circuit is its low suitability for full integration.

Fig. 4: Switched capacitor circuit based upon the OTA

The first single-ended differential stage uses an Operational Transconductance Amplifier (OTA) in switched capacitor feedback. The basic concept [8] is to sample and store the offset of the OTA during one phase (reset phase) and subtract this value during the next phase (amplification phase). Because offset and 1/f noise are strongly correlated, both contributions are drastically lowered [9]. The OTA is implemented as a wide-swing folded cascode amplifier in a mainstream low-cost 0.7 µm CMOS technology (fig. 5). The circuit works with a power supply of 5 V and a common mode level of 2.3 V to maximize the dynamic swing. Several measures were taken to make the circuit radiation hard.
First, the nominal supply voltage of 3.3 V is raised to 5 V. This allows more margin to push the transistors into saturation. An additional source follower M11, which doubles the DC voltage at the drain of M4, is added to ensure that both M4 and M6 stay in the saturation region even after possible drops in the NMOS threshold voltage during irradiation. Biasing is realized by a reference current source, which can be based on a radiation tolerant bandgap reference [10], and current mirrors. If under radiation the devices' threshold voltages and/or mobilities change, the current through the stage is not affected and all transistors stay in the correct saturation region.

Fig. 5: internal OTA design of the first stage

B. Second stage

For compatibility with a 24 V supply post amplification stage, an intermediate stage is needed for buffering the post-amplifier input capacitance and for common mode translation. This level shifter translates the common mode level from 2.3 V to 12 V using high voltage DMOS transistors on the same chip. The 12 V input level ensures a high output range in the 24 V post amplifier. Transistors biased with a fixed voltage of 19.15 V replace the current sources. If the threshold voltage of the DMOS transistors shifts, the current through Mls3 will change, but the gate source voltage of Mls1 will stay equal to that of Mls3 (4.85 V), causing a constant DC output voltage of 7.15 V. The bulk effect is avoided by connecting the sources to the n-well bulk. The performance of this voltage mirror is improved by adding cascode transistor Mls5, causing even closer matching between Mls1 and Mls3 as they have identical V_DS. The same is done in the second buffer stage, increasing the DC output to a V_T insensitive 12 V. In this way the output remains exactly at 12 V in spite of V_T shifts up to 0.5 V at a dose up to 1 MGy. Resistors had to be inserted to minimize ringing, typical for a source follower loaded with a large capacitance. As no DC current flows through these resistors, they do not change the level shifter operation.

Fig. 6: Level shifter

C. Third stage

This circuit is built around a DMOS based folded cascode OTA with an open loop gain of 60 dB. The input capacitor of 16 pF is buffered by the level shifter. The controllable gain is enabled by digital selection of the feedback capacitor, which is binary scaled between 2 pF and 8 pF. We use the same internal OTA design as in the first stage. All transistors are pushed 0.5 V deeper into saturation to ensure operation in a radiation environment.

Fig. 7: Last stage switched capacitor circuit based upon the OTA

D. Switch

Fig. 8: Pass-transistor

The digital switches are realized with pass-transistors (transmission gates). The switch requires both a low on-resistance for a sufficiently low settling time, and a high off-resistance to prevent charge leakage. In order to limit the leakage current under radiation, an enclosed gate layout for the NMOS transistors is required [3]. Dimensions are chosen to minimize the effects of charge injection and clock feedthrough [11]. Switching happens at a rate of 20 kHz.

E. Capacitor banks

To make the gain controllable, a single capacitor is replaced by a capacitor bank. The switches are again transmission gates. The capacitors are binary scaled, selectable using a digital control word. The input capacitance is 800 pF, yielding a gain between 400 (2 pF feedback) and 27 (30 pF feedback) for the first stage.

Fig. 9: Binary scaled capacitor bank

III. SIMULATION RESULTS

The simulations discussed in this section are all based on the circuit in figure 10. The circuit consists of the preamplifier and the level shifter. The post stage isn't discussed here because it has minimal impact on the circuit's performance with respect to noise and timing and merely increases the maximum available gain.

Fig.
11: Level shifter output voltage for a 3 mV, 500 Hz input and a gain setting of 400

Fig. 10: Preamplifier and level shifter

A. AC performance

The AC performance is demonstrated with the Bode plot of the open loop voltage gain of the preamplifier OTA. The DC gain is 93 dB and the bandwidth is 700 Hz (the closed loop bandwidth is set by the switched capacitive feedback in this stage). The open loop voltage gain is unchanged after introduction of the maximum V_T shifts.

Fig. 11: Bode plot of the OTA voltage gain under normal operation

B. Transient performance

With a 3 mV input signal at 500 Hz, the transient performance of the amplifier is studied at a gain setting of 400. This maximum gain setting corresponds to the minimum system bandwidth and hence the maximum settling time.

C. Noise performance

Besides thermal noise, 1/f noise is present. SPICE simulations were used to find the total input referred noise PSD of the OTA. The square root of the PSD is shown as the dotted line in figure 12. The noise density increases with decreasing frequency. After the noise cancellation (CDS), the noise is first order high pass filtered with a cutoff frequency of about 4.5 kHz. This yields the corrected noise density shown as the full line in figure 12. Differences with the maximum V_T shifts applied are minimal. All simulations were performed with the maximum gain setting of 400.

Fig. 12: Input referred noise density of the OTA (25 °C)

The input referred noise voltage of the two stages amounts to 78 µV. The main noise stems from the high-frequency noise in the reset phase. This noise originates from the Φ2d switch at the input in figure 10. At high frequency, the noise voltage from this switch is transferred to the inverting input of the OTA with a GHz bandwidth, determined by the low parasitic input capacitance of the OTA. In order to reduce this noise contribution, a capacitor of 120 pF was placed over both input Φ2d switches, effectively shorting their noise voltage at high frequency.
The capacitance is small enough not to affect signal-settling behavior. The total input referred noise voltage density is shown in figure 13, yielding a reduction of the total input noise level from 78 µV to 8.5 µV. The noise specifications of commercial instrumentation amplifiers range from 2.4 nV/√Hz to 55 nV/√Hz, where the presented ASIC design features a noise density of 22 nV/√Hz. Note that the specifications from the other manufacturers do not take into account the noise from the additional external components, and the lower noise level of the COTS components also owes in large part to their higher power consumption (a few hundred mW). In this design the power consumption is only 1 mW, and the noise performance can be improved by increasing the power consumption, as the input referred noise density is inversely proportional to the square root of the current drawn by the amplifier. If the settling at 0 °C is compared to the settling at 85 °C, the settling time is increased due to a reduction of the OTA transconductance and an increase in the switch on-resistance, both owing to a decreased channel mobility.

E. Radiation behavior

The influence of ionizing radiation on the transient settling and noise behavior of the circuit was also simulated. Again, the minimum bandwidth and corresponding maximum settling time occur at the maximum gain setting of 400, so this value is used in the following simulation. The output of the level shifter at room temperature for the same 3 mV, 500 Hz input is shown in figure 15 when the maximum V_T shifts of −500 mV were applied. Simulations up to 85 °C also showed sufficient settling.

Fig. 13: Square root of the total input referred noise voltage PSD after addition of the input capacitors.

Fig. 15: Level shifter output voltage for a 3 mV, 500 Hz input and a gain setting of 400 (zoomed in to show sufficient settling) at room temperature after irradiation (maximum V_T shifts).

D.
Temperature behavior

The circuit must be guaranteed to work between 0 °C and 85 °C (cf. Table 1). The main issues are the transient settling performance during the amplification phase and the noise of the circuit. As the minimum bandwidth and corresponding maximum settling time occur at the maximum gain setting of 400, this value is used in the following simulations.

Fig. 14: Level shifter output voltage for a 3 mV, 500 Hz input and a gain setting of 400 at a temperature of 0 °C (left) and 85 °C (right).

IV. CONCLUSION

The SPICE simulations show that under 1 MGy radiation no significant changes are introduced in the circuit performance. A switched capacitor topology is chosen to reduce offset and 1/f noise errors. The gain is digitally controllable between 50 and 400 in the preamplifier and between 1 and 8 in the post amplifier. The nominal input referred noise voltage is only 8.5 µV. The circuit has a simulated radiation tolerance of more than 1 MGy.

REFERENCES
[1]
[2] M. Van Uffelen, W. De Cock and P. Leroux, Radiation tolerant amplifier for pressure sensors, final report, EFDA TW6-TVR-RADTOL2 Task.
[3] G. Anelli, Conception et caracterisation de circuits integres, Institut National Polytechnique de Grenoble, 2000.
[4] P. Leroux, S. Lens, M. Van Uffelen, W. De Cock, M. Steyaert and F. Berghmans, Design and Assessment of a Circuit and Layout Level Radiation Hardened CMOS VCSEL Driver, IEEE Transactions on Nuclear Science, vol. 54, August.
[5] R. Pallas-Areny and J. G. Webster, Sensors and Signal Conditioning, 2nd edition, Wiley-Interscience.
[6] D. A. Johns and K. Martin, Analog Integrated Circuit Design, Wiley.
[7] R. Gregorian, K. Martin and G. C. Temes, Switched-Capacitor Circuit Design, IEEE Proceedings, vol. 71, no. 8, August.
[8] B. Razavi, Design of Analog CMOS Integrated Circuits, McGraw-Hill.
[9] W. Claes, W. Sansen and R. Puers, Design of Wireless Autonomous Datalogger IC's, Springer.
[10] V.
Gromov et al., A Radiation Hard Bandgap Reference Circuit in a Standard 0.13 µm CMOS Technology, IEEE Transactions on Nuclear Science, vol. 54, no. 6, Dec.
[11] G. Wegmann, E. Vittoz and F. Rahali, Charge injection in analog MOS switches, IEEE Journal of Solid-State Circuits, vol. 22, no. 6, Dec.

Reinforcement Learning with Monte Carlo Tree Search

Kim Valgaeren, Tom Croonenborghs, Patrick Colleman
Biosciences and Technology Department, K.H.Kempen, Kleinhoefstraat 4, B-2240 Geel

Abstract: Reinforcement learning is a learning method where an agent learns from experience. The agent needs to learn a policy that earns a maximum reward in a short time period. He can earn rewards by executing actions in states. Learning algorithms like Q-learning and Sarsa help the agent to learn. We have implemented these learning algorithms and combined them with Monte Carlo Tree Search (MCTS). We experimentally compare an agent using Q-learning with an agent using MCTS.

Index Terms: Monte Carlo Tree Search, Reinforcement Learning, Learning Algorithms

I. INTRODUCTION

REINFORCEMENT LEARNING is a learning method where an agent needs to learn a policy. This policy determines which action to take in a certain state to get a maximum reward. The agent does not know what action is best in a state; he must learn this by trial and error. To let the agent learn by experience we used reinforcement learning algorithms like Sarsa and Q-learning. A very important part of my thesis is the implementation of Monte Carlo Tree Search in reinforcement learning. MCTS gives the agent the opportunity to simulate experience, which should improve the learning rate of the agent. This paper is based on my master's thesis in reinforcement learning [1]. In the thesis you can find the results of all the implementations we have done to improve the agent's learning rate.

II. REINFORCEMENT LEARNING

A. Introduction

Reinforcement learning is the study of how animals and artificial systems can learn to optimize their behavior in the face of rewards and punishments [2]. There are two very important parts in reinforcement learning:

1. The agent: this is the learning part. The agent repeatedly observes the state of its environment, and then chooses and performs an action. Performing the action changes the state of the world, and the agent also obtains an immediate numeric payoff as a result. Positive payoffs are rewards and negative payoffs are punishments. The agent must learn to choose actions so as to maximize a long term sum or average of the future payoffs it will receive.
2. The environment: this determines every possible observation the agent can get and every possible state the agent can reach. In every state or observation the agent can choose from a number of actions. The agent needs to discover the best policy to earn a good payoff.

Figure 1: Reinforcement learning [3]

In most cases the agent needs to learn a policy in order to reach a specific goal in the environment. But not all agents need to reach a goal; if the agent needs to perform a continuous task then there is no main goal. All the steps the agent takes to reach a goal in the environment, starting from the begin state, are called an episode. It is possible that an agent has a goal but cannot reach it. For example, consider an environment that consists of wall states (which the agent cannot cross) and open states (which the agent can cross). If the goal is surrounded by walls, the agent cannot reach it. That is why we define a maximum number of steps the agent can take in an episode. If the agent cannot reach his goal and we do not define a maximum number of steps, the agent keeps taking steps to try to reach his goal, resulting in an episode with infinitely many steps, because the agent does not know that he cannot reach the goal.
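The capped episode loop described above can be sketched as follows; the environment interface (a `step` function) and the toy corridor are my own illustration, not the thesis code:

```python
# Sketch of one capped episode; `step(state, action)` returns
# (next_state, reward, done).  This interface is illustrative only.
import random

def run_episode(step, start_state, max_steps=200):
    """Run one episode with a random policy and return the total reward."""
    state, total = start_state, 0
    for _ in range(max_steps):          # the cap prevents infinite episodes
        action = random.choice(["N", "E", "S", "W"])
        state, reward, done = step(state, action)
        total += reward
        if done:                        # goal reached
            break
    return total

# Toy corridor: only moving east advances; the goal sits at x = 3.
def corridor(state, action):
    x = state + 1 if action == "E" else state
    return x, (100 if x == 3 else -1), x == 3

random.seed(0)
total = run_episode(corridor, 0)
```

Without the `max_steps` cap, an unreachable goal would make this loop run forever, which is exactly the situation the text warns about.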
The sum of all the rewards the agent received during an episode gives us a view of how well the agent performed that episode. This results in the following formula:

R = r_1 + r_2 + ... + r_N

where R is the total reward in an episode, r_t is the reward per step and N is the number of steps in one episode. Rewards that the agent receives in a small number of steps are more important than rewards the agent receives in a lot of steps. For example: it would be better if an agent takes 2 steps and gets a reward of 5 than if he takes many more steps and gets a reward of 10. We can use the discount factor (γ) to make future rewards less valuable. The total reward function then changes to:

R = γ^0 r_1 + γ^1 r_2 + ... + γ^(N-1) r_N

If the number of steps rises, the contribution of later rewards decreases in value because the power of γ shrinks with every step the agent takes.

A very important part of reinforcement learning is the Markov Decision Process (MDP), which can be represented as MDP = <S,A,T,R>, where S is the set of all possible states the agent can reach, A is the set of all possible actions the agent can choose, T is the transition function, which determines the probability of every state the agent can reach if he performs an action in a state, and R is the reward function, which gives the agent a numerical reward when he performs an action in a state. During an episode the agent is always in a state and has the opportunity to choose an action. If the agent performs an action he will get a numerical reward and reach a state determined by the transition function. The policy decides what action the agent should choose in every state.

A lot of reinforcement learning algorithms are based on estimating value functions, which estimate how good it is for the agent to be in a given state. The notion "how good" is defined here in terms of the future rewards the agent can expect. The policy π determines the probability that the agent chooses action a in state s: π(s,a).
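The undiscounted and discounted returns just defined can be sketched as follows (function names are mine):

```python
# Sketch of the returns defined above.
def total_reward(rewards):
    """Undiscounted return: the plain sum of per-step rewards."""
    return sum(rewards)

def discounted_reward(rewards, gamma):
    """Discounted return: the reward at step t is weighted by gamma**t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# A reward of 5 earned quickly beats a reward of 10 earned after many steps:
fast = discounted_reward([0, 5], 0.9)             # 4.5
slow = discounted_reward([0] * 19 + [10], 0.9)    # 10 * 0.9**19, about 1.35
```

This matches the example in the text: the small reward earned in two steps outranks the larger reward earned after many steps once discounting is applied.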
The value of state s when following a policy π, written V^π(s), is the expected total reward when standing in state s and following policy π. The value function is given by the following formula:

V^π(s) = E_π{ Σ_{k=0..∞} γ^k r_{t+k+1} | s_t = s }

where E_π{·} denotes the expected value given that the agent starts in state s and follows policy π, and t is any time step. The value of a terminal state (for example the goal in the environment) is always zero.

If the agent does not know his environment, he does not know how to reach good states. The agent needs to learn the transition function to know how to reach good states. The agent learns this transition function with the use of two counters and a reward array:
- c1[s_t][a_t]: every step the agent takes increases counter c1 by one for state s_t and action a_t.
- c2[s_t][a_t][s_t+1]: every time the agent performs action a_t in state s_t and reaches s_t+1, counter c2 increases by one.
- r[s_t][a_t]: this array saves the reward the agent gets when he is in state s_t and performs action a_t.

If the agent is in state s_t and performs action a_t, the probability that he will reach a certain state s_t+1 can be estimated with the following formula:

T(s_t, a_t, s_t+1) = c2[s_t][a_t][s_t+1] / c1[s_t][a_t]

If you do not want to simulate the environment, you can always use state/action values instead of state values. A state/action value is often called a Q value. If we choose an action a in a state s under a policy π, we denote by Q^π(s,a) the expected total reward when standing in state s, choosing action a and thereafter following policy π. The Q function is given by the following formula:

Q^π(s,a) = E_π{ Σ_{k=0..∞} γ^k r_{t+k+1} | s_t = s, a_t = a }

B. Exploration vs. Exploitation

An agent starts his learning process with no knowledge of his environment. He doesn't know which actions are better than others. The agent needs to explore if he wants to learn which actions are good in certain states. A simple way to explore is to choose random actions. Every action the agent chooses gives him some reward.
At the end of an episode the total reward will help the agent to determine if the action was good or bad. We call this the exploration phase. The agent needs to explore the environment to learn. When the agent has collected a lot of information about the quality of actions in certain states (Q values), he can determine which action is best in which state. The agent is in the exploitation phase when he only chooses the best possible action in every possible state; such an action is called a greedy action. One of the challenges that arises in reinforcement learning and not in other kinds of learning is the trade-off between exploration and exploitation [4]. If an agent knows nothing of his environment, he will start his learning process by exploring it. The agent needs to explore enough until he discovers which path gives the most cumulative reward. When an agent always chooses the greedy action, he will never find a better path. If during the learning process the agent starts to exploit too soon by always taking the greedy action, he may not yet have explored the most rewarding path. If the agent exploits too late, he may lose some chances of a better reward over the whole episode, because he explored too much instead of choosing the most rewarding action. The Boltzmann distribution or ε-greedy can provide a solution for this problem.

ε-greedy

The greedy action is the action with the highest Q value in the current state of the agent. We choose ε so that there is a probability of ε that the agent will choose a random action (for exploration) and a probability of (1 − ε) that the agent will choose the greedy action. If ε is very small, the agent will mostly exploit; if ε is very big, the agent will mostly explore. If the agent uses ε-greedy and needs to select a random action, the agent can choose the second best action or the worst action possible.
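Both selection strategies named above, ε-greedy and the Boltzmann distribution, can be sketched as follows; the dict mapping each action in the current state to its Q value is my own representation:

```python
# Sketch of the two action-selection strategies; `q` maps each action
# available in the current state to its Q value.
import math
import random

def epsilon_greedy(q, epsilon):
    """With probability epsilon explore randomly, otherwise act greedily."""
    if random.random() < epsilon:
        return random.choice(list(q))
    return max(q, key=q.get)            # greedy: highest Q value

def boltzmann_probs(q, tau):
    """P(a) = exp(Q(a)/tau) / sum_b exp(Q(b)/tau); tau is the temperature."""
    weights = {a: math.exp(v / tau) for a, v in q.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}
```

A low temperature τ concentrates nearly all probability on the greedy action, while a high τ approaches a uniform choice; in both cases the worst action keeps the smallest probability, which is exactly the advantage over ε-greedy's uniform random fallback.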
Boltzmann

For some applications it is not acceptable that an agent selecting a random action can pick the worst possible action in a state. The Boltzmann method lets the agent calculate the probability of choosing each action in a certain state. If the agent does this for every possible action in a certain state, he can choose which action to perform, and the worst possible action is given a very low probability of being chosen. The Boltzmann distribution is given by the following formula:

P(a) = e^(Q_t(a)/τ) / Σ_b e^(Q_t(b)/τ)

Q_t(a) determines the quality of action a in the current state (the Q value). The τ parameter is called the temperature. In the limit of the temperature going to zero, the Boltzmann distribution gives the highest probability to the action with the highest Q value. If the temperature is high, another action can be given the highest probability of being chosen.

III. LEARNING ALGORITHMS

An agent uses learning algorithms to learn a policy. I have studied two important learning algorithms: Sarsa and Q-learning.

A. Sarsa

Sarsa generates a value for each state/action pair, called a Q value. This Q value determines the quality of the action in that state. The formula to update the Q value for a state/action pair is:

Q(s_t,a_t) ← Q(s_t,a_t) + α [ r_t+1 + γ Q(s_t+1,a_t+1) − Q(s_t,a_t) ]

where:
- Q(s_t,a_t): the current Q value for that state and action.
- α: the learning rate; this value determines how much of the new estimate is used to update the current Q value. This value needs to be between zero and one.
- r_t+1: the reward the agent gets when he performs action a_t in state s_t.
- γ: the discount factor. This value needs to be between zero and one.
- Q(s_t+1,a_t+1): the Q value for the next state and the next action.

Sarsa is an on-policy algorithm. Each time the agent takes a step, the Q value is updated. For each update, Sarsa has to choose an action (a_t+1) to perform in the new state; this action is chosen by the policy of the agent.
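Both update rules from this section, the Sarsa rule above and the Q-learning rule, can be sketched as short functions; the dict-of-dicts Q table `q[state][action]` is my own layout, not the thesis code:

```python
# Sketch of the two update rules; q is a dict of dicts, q[state][action].
def sarsa_update(q, s, a, r, s_next, a_next, alpha, gamma):
    """On-policy: the target uses the action a_next actually chosen."""
    target = r + gamma * q[s_next][a_next]
    q[s][a] += alpha * (target - q[s][a])

def q_learning_update(q, s, a, r, s_next, alpha, gamma):
    """Off-policy: the target uses the greedy (maximum) Q value in s_next."""
    target = r + gamma * max(q[s_next].values())
    q[s][a] += alpha * (target - q[s][a])
```

The only difference is the bootstrap term: Sarsa follows the policy's actual next action, while Q-learning always backs up the greedy value.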
The following pseudo code shows when to update the Q value when you use online learning with Sarsa:

    Initialize Q(s,a) arbitrarily
    Repeat (for each episode):
        Initialize s
        Choose a from s using the policy derived from Q (ε-greedy)
        Repeat (for each step of the episode):
            Take action a, observe r, s'
            Choose a' from s' using the policy derived from Q
            Q(s,a) ← Q(s,a) + α [ r + γ Q(s',a') − Q(s,a) ]
            s ← s'; a ← a'
        until s is terminal

It is also possible to update all the Q values at the end of the episode instead of at every step the agent takes, if you want to use offline learning.

B. Q-learning

Another way to update the Q values is to use Q-learning. Q-learning uses the maximum Q value over the actions in the next state. This gives the following formula:

Q(s_t,a_t) ← Q(s_t,a_t) + α [ r_t+1 + γ max_a Q(s_t+1,a) − Q(s_t,a_t) ]

The parameters are the same as those of Sarsa; only the max_a Q(s_t+1,a) term is different. This term is the maximum Q value in state s_t+1 over every possible action. Q-learning is an off-policy algorithm. This means that Q-learning does not follow the policy to choose the action used in the update for the new state. Instead it uses the action that has the highest Q value in the new state s_t+1. The Q value update can happen at each step the agent takes, or at the end of an episode. The following pseudo code shows when to update the Q value when you use online learning with Q-learning:

    Initialize Q(s,a) arbitrarily
    Repeat (for each episode):
        Initialize s
        Repeat (for each step of the episode):
            Choose a from s using the policy derived from Q (ε-greedy)
            Take action a, observe r, s'
            Q(s,a) ← Q(s,a) + α [ r + γ max_a' Q(s',a') − Q(s,a) ]
            s ← s'
        until s is terminal

It is also possible to update all the Q values at the end of the episode instead of at every step the agent takes, if you want to use offline learning.

IV. MONTE CARLO

A. Monte Carlo Method

The Monte Carlo method is a way to estimate the value of a state or, if the transition function is not available, the value of a state/action pair (Q value).
The Q function of a state and an action gives the expected cumulative reward the agent will receive in one episode. If the agent made a lot of useless steps during an episode, the estimate of the Q values will not be very good. If the agent wants to know how good an action in a certain state is, he needs to make a better estimation of the Q value for that state/action pair. The agent needs to run multiple episodes in which he always chooses the same action in that certain state. At the end of every episode the agent calculates the Q value for that state/action pair. When the agent completes these episodes, he takes the average of all the calculated Q values for that state/action pair. This is a much better estimation for that state/action pair. If the agent always follows his policy greedily in a deterministic environment, he will only observe returns for one action in every state. We need to estimate the value for every action in each state, not just the one action we currently favor. That is why we must ensure continual exploration, so that the agent also chooses other actions in certain states and not only the ones we currently favor. The Monte Carlo method can slow down the entire learning process, because for every step the agent takes he needs to run multiple episodes to average the calculated Q values for that state/action pair. This takes time. The agent can also save the Q value every episode; this is done with Monte Carlo Tree Search.

B. Monte Carlo Tree Search

Monte Carlo Tree Search (MCTS) can be applied effectively to classic board games (such as Go), modern board games (such as Settlers of Catan), and video games (such as the SPRING RTS game) [5]. MCTS can also be applied to the game Arimaa [6]. MCTS can be used for any game of finite length. MCTS builds a tree of states that starts with the begin state and is expanded towards all possible states the agent can reach.
MCTS uses offline learning: the agent updates all his Q values at the end of an episode. To do this the agent saves all the information of the episode to arrays (all the state/action pairs and the rewards received are saved) and sends this information to the MCTS algorithm. MCTS uses four operations to build a tree [7]:

1. The selection step: start from the root node (begin state) and traverse the tree in a best-fit manner (according to the exploration/exploitation formula) down to a leaf.
2. The expansion step: if a few simulations have already gone through a leaf node, expand that node.
3. The simulation step: simulate one or more episodes with a policy. In our experiments we used a policy that always chooses random actions.
4. The back propagation step: the total result of the episode is propagated back up the Monte Carlo tree.

Figure 2: The four steps of MCTS

The Monte Carlo tree makes good use of the Q function to update Q values. When we used MCTS in our reinforcement learning program, we implemented the Q-learning algorithm to update Q values instead of the normal Q function; Q-learning improves the learning rate of the agent. After a complete episode, all the Q values for all the states the agent reached in that episode are calculated by the Q-learning algorithm. All the calculated Q values are then updated in the Monte Carlo tree during the back propagation step. Monte Carlo Tree Search gives the agent the opportunity to simulate an episode. This is a very powerful advantage of MCTS; it should increase the agent's learning experience enormously. If the agent is to simulate an episode, he must know his (partial) transition function, otherwise the agent doesn't know where he is. We used the two counters and the reward array to learn the (partial) transition function, in the same way as for the value function in chapter II.
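The four steps above can be sketched compactly on a toy chain environment (states 0 to 4, goal at 4, reward +100 at the goal and −1 per step, mirroring the wall-world rewards used later). The structure and names are mine; unlike the thesis, this sketch simulates with the true transition function rather than a counter-learned model, and it backs up mean returns rather than Q-learning updates:

```python
# Toy sketch of the four MCTS steps (selection, expansion, simulation,
# back propagation) on a deterministic chain: states 0..4, goal at 4.
import random

ACTIONS = (-1, +1)
GOAL = 4

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (100 if nxt == GOAL else -1), nxt == GOAL

class Node:
    def __init__(self, state):
        self.state, self.visits, self.value = state, 0, 0.0
        self.children = {}                      # action -> child Node

def rollout(state, max_steps=50):
    """3. Simulation: random policy until the goal or the step cap."""
    total = 0
    for _ in range(max_steps):
        state, reward, done = step(state, random.choice(ACTIONS))
        total += reward
        if done:
            break
    return total

def mcts_iteration(root, epsilon=0.2):
    # 1. Selection: walk down the tree, mostly greedily on mean return.
    node, path = root, [root]
    while node.children:
        if random.random() < epsilon:
            action = random.choice(list(node.children))
        else:
            action = max(node.children, key=lambda a: node.children[a].value)
        node = node.children[action]
        path.append(node)
    # 2. Expansion: expand a leaf once it has been visited before.
    if node.visits > 0 and node.state != GOAL:
        node.children = {a: Node(step(node.state, a)[0]) for a in ACTIONS}
    total = 0 if node.state == GOAL else rollout(node.state)
    # 4. Back propagation: update the running mean return along the path.
    for n in path:
        n.visits += 1
        n.value += (total - n.value) / n.visits

random.seed(1)
root = Node(0)
for _ in range(200):
    mcts_iteration(root)
```

In the thesis the backed-up values are instead computed with the Q-learning update, and the simulator is the counter-based (partial) transition model learned during the first episodes.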
If we can simulate one or more episodes every step the agent takes, we can obtain a better estimation of the Q values for all the actions in every state. This should increase the performance of the agent.

V. EXPERIMENTAL EVALUATION

A. The environment

Figure 3: A wall environment

We chose a wall environment for the tests with Monte Carlo Tree Search. There are three possible state types in the environment:
- A wall state: if the agent tries to reach a wall, he will be put back in his last known state. He will get a reward of
- An open state: the agent can reach this state but gets a reward of −1.
- The goal: when the agent reaches this state he gets a reward of +100 and wins the game.

B. The agent

We test two implementations of agents. The first agent uses standard Q-learning. The second agent uses MCTS (in combination with Q-learning). MCTS uses its simulation technique to simulate episodes; we let the agent simulate 1000, 100, 10 or 1 episode(s) per step in each real time episode. Both agents use the same parameters:
- a learning rate of 0.5;
- a discount factor of 0.9;
- the Q values are initialized to zero;
- ε-greedy action selection with ε = 0.1.

In one run the agent has 100 episodes to learn a policy; in every episode there are 200 tests to measure the agent's learning rate. There are 50 runs, so there is a total of 10,000 tests per learning episode. The start state of both agents is at (1,1). The agent that uses MCTS has one advantage: he can simulate episodes every step he takes. The agent needs to learn his environment to simulate those episodes; therefore we give the agent 10 episodes to take random actions and learn the (partial) transition function. In every episode simulation the agent starts in his current state and applies the standard policy for simulation.
The standard policy for simulation is taking random moves until the agent reaches his goal or until he reaches the cap of 200 steps in one simulated episode. Each agent needs to reach the goal and tries to minimize the number of steps needed to reach it. The shortest path the agent can take is 20 steps long. This includes 19 steps in open space and one step to the goal. The agent then receives (19 × −1) reward for the 19 steps and (100 × 1) reward for the step to the goal. This gives a maximum total reward of 81 per episode.

C. The Results

We tested an agent using standard Q-learning versus an agent that uses MCTS with 1 simulated episode every step he takes (Graphs 1 and 2). The agent that uses standard Q-learning performs better in the first 50 episodes than the agent using MCTS. This is mainly because the agent can only simulate one episode every step. If the agent reaches his goal during the episode simulation, the random action the agent took in the first step will always become the greedy action for the real time episode. This changes once the agent has tried more actions than only one every step. The actual learning of the MCTS agent starts at episode 10; the first 10 episodes are used to learn the (partial) transition function, during which the agent performs random actions.

Graph 1: Agent with MCTS (1 simulation) vs. agent with Q-learning

If we only focus on the positive cumulative reward, we see that the agent using MCTS performs better after 50 episodes, and he reaches the maximum cumulative reward after 60 episodes. The agent using standard Q-learning only gets a maximum cumulative reward of 67. When using ε-greedy with ε = 0.1 there is a probability of 10% that the agent chooses a random action; that is why the maximum cumulative reward of 81 is never reached in our experiments.

Graph 2: Agent with MCTS (1 simulation) vs.
agent with Q-learning (positive cumulative rewards)

If we compare 2 agents using MCTS, where one agent can simulate 1 episode every step and the other can simulate 10 episodes every step, we see that the agent using 10 simulated episodes per step performs a lot better (Graph 3). Both agents use the first 10 episodes to learn the (partial) transition function. In the very first episode in which it starts to learn, the 10-simulation agent already finds its optimal path.

Graph 3: Agent with MCTS (1 simulation) vs. agent with MCTS (10 simulations)

Can the agent perform better with more simulations every step? We tested 2 agents using MCTS, where one agent simulates 10 episodes every step and the other simulates 100 episodes every step (Graph 4). If we focus on the positive cumulative reward of both agents, we can conclude that for this environment there is no difference between simulating 10 episodes and simulating 100 episodes every step.

Graph 4: Agent with MCTS (10 simulations) vs. agent with MCTS (100 simulations)

Graph 5: Agent with MCTS (1000 simulations) vs. agent with Q-learning

VI. CONCLUSION

The results of our experiments show that Monte Carlo Tree Search is a good improvement over a standard Q-learning implementation. Monte Carlo Tree Search uses the Q-learning algorithm too, but its episode simulation makes MCTS a very powerful learning method in reinforcement learning, as long as the agent simulates enough episodes to learn faster than the agent using only Q-learning. There is one downside to MCTS simulation: it takes more time to run the simulations. This does not matter much when you simulate 1 episode per step, but when you simulate 1000 episodes per step you will notice a serious delay.

REFERENCES
[1] Kim Valgaeren, Monte Carlo zoekbomen bij het leren uit beloningen. Katholieke Hogeschool Kempen: Geel.
[2] Peter Dayan, Christopher J. Watkins, Reinforcement Learning. University of London: London, pp. 4.
[3] Tom Croonenborghs, Model-assisted approaches for relational reinforcement learning. Katholieke Universiteit Leuven: Leuven, pp. 12.
[4] Richard S. Sutton, Andrew G. Barto, Reinforcement Learning: An Introduction. The MIT Press, Cambridge: Massachusetts, 1998, pp. 15.
[5] Guillaume Chaslot, Sander Bakkes, Istvan Szita, Pieter Spronck, Monte-Carlo Tree Search: A New Framework for Game AI. Universiteit Maastricht: Maastricht.
[6] Tomas Kozelek, Methods of MCTS and the game Arimaa. Charles University: Prague.
[7] Mark H.M. Winands, Yngvi Björnsson, Jahn-Takeshi Saito, Monte-Carlo Tree Search Solver. Universiteit Maastricht: Maastricht.

To demonstrate the huge advantage of episode simulation with MCTS compared to an agent using standard Q-learning, we implemented an agent that simulates 1000 episodes every step it takes (Graph 5). The agent using MCTS knows the shortest path by the 11th episode; the agent using Q-learning does not know the optimal path after 100 episodes. We can see that the graph of the agent using MCTS looks a lot like a step function.

String comparison by calculating all possible partial matches

R. Van den Bosch 1, L. Wouters 2, V. Van Roie 1
1 K.H.Kempen, Geel, Belgium
2 tbp electronics n.v., Geel, Belgium

Abstract: Searching for a keyword in a large text can be done by going through the whole text and looking for a match. Comparing strings to determine the best match is much more difficult. For humans it is usually easy to determine which string matches best; programming code that looks for partial matches and determines the best match is not as easy as it sounds. It is possible that there are several matches, so how do we find the best one? First a naïve comparison and an IndexOf algorithm are programmed to find all possible partial matches. Possible optimizations for faster comparison are briefly described. Search functions that are built into most languages are highly optimized and therefore much faster than most custom-made for-loops.
Next we need to quantify the comparison based on the overlapping parts. This value creates a ranking over all comparisons. The desired result is a fast algorithm which also indicates the overlapping parts. This result should match human intuition when it comes to string comparison.

Index Terms: pattern matching, text processing, performance analysis

I. INTRODUCTION AND RELATED WORK

When it comes to verifying a manufacturing part number (MPN), we would expect it to require a 100% full match. This is not always true. The MPN for a requested product does not always contain package details (e.g. the size), while the delivered MPN does contain some package details. The correct product is delivered, but it has no full MPN match with the requested product. An alternative comparing method is the solution: we have to ease the verification process to allow a wider variety of allowed MPNs to pass.

When searching for a keyword in a text file, the search algorithm will look for a full match. When a full match is found, it stops looking. Various algorithms for exact string matching exist [1]. Naïve linear search algorithms are easy to program and reliable, but very inefficient [2]. Some search algorithms have been improved over the years, resulting in faster algorithms. The Knuth-Morris-Pratt (KMP) algorithm learns from its previous mismatches. It is difficult to program and to debug, but much more efficient than the naïve string matching method [3]. The theories behind these faster algorithms are used to speed up our algorithm.

If there are multiple MPNs to be verified, we need some indication of which MPN matches better than the others. Levenshtein distance is a method that can be used to quantify the similarity between strings [4] [5]. Sam Chapman has developed an open source Java library, SimMetrics, which offers various similarity measures between strings [6] [7]. All partial matches or overlaps should be marked.
Neil Fraser has done research and performance tests on overlap detection [8] [3]. This paper deals with two problems: the first is finding all partial matches and the second is quantifying the comparison.

II. APPROACH IN THEORY

A. Expected result

Searching for pattern anat in text bananatree is simple: there is only one match. Let baxxanata be the pattern, where xx can be any character except n. The search for this pattern in text bananatree results in two partial matches: ba and anat. This is what we are looking for. We compare character by character and find that character a has multiple matches, but we don't want the last a in the pattern to be marked: in the text there is no a after the last match anat. Example 1 illustrates the result we want to achieve.

bananatree
baxxanata
Example 1: Overlap result.

Next we need to add a quantifier for each comparison to determine the best match. For example, compare the patterns ban, bananat, bananatree, bananatreexxxx and bananatreexxxxxx with the text bananatree. There is only one full match, but which is the second best match? Example 2 shows the desired results, best match first.

Text: bananatree
1: bananatree
2: bananatreexxxx
3: bananatreexxxxxx
4: bananat
5: ban
Example 2: Quantify result.

B. Searching for overlap

The first step is naïve string comparison using two nested for-loops. Figure 1 shows an illustrated example. All characters found in both string 1 and string 2 are marked. Note: this also includes the last a from pattern baxxanata. Now we need to find the longest continuous match. This is determined by the sum of all cells marked with an X that form a diagonal line. This sum is 4 for the example in figure 1; the longest continuous match is anat. Next we have to eliminate the other matches that overlap with anat: every cell that is on the same row or column as anat can be cleared. This is marked with the colors purple and blue in figure 2.
Also, no characters in the pattern after anat can match a character in the text before anat. These cells can also be cleared and are marked with stripes in figure 2. These two steps have to be repeated until all cells are empty. Continuing, the next longest match is ba. All cells are empty after this step and the process is finished with the results ba and anat.

Figure 1: Naïve string comparison (match table between text bananatree and pattern baxxanata).

Figure 2: Eliminating matches.

C. Quantifying similarity

The most used method to quantify a comparison is Levenshtein distance [4] [5]. Basically it counts the number of steps it takes to change the pattern to match the text. For most applications this is a very effective method. When it comes to comparing MPNs, we need to add some weighting parameters. Figure 3 shows the Levenshtein distance calculated for example 2. Pattern bananat has a better matching distance than pattern bananatreexxxx. This is not what we expect, because the latter has an exact match followed by a suffix xxxx.

Levenshtein distance works with costs. Each operation has a cost; in the basic version each operation (substitute, insert and delete) has the same cost, and a copy is free. Some extended versions of the Levenshtein algorithm allow a user to change the cost of certain actions: substitution can have a higher cost than a delete or insert action. Let's compare banana tree with banana monkey and banana. The number of steps needed to get from string A to string B is the same for both comparisons. But banana monkey should match better than banana, because it has an overlap of length 7 due to the space in the text and pattern.

Based on performance, it is not preferred to add another algorithm for the similarity calculation, as this involves another set of nested loops. Building it into one overlap algorithm is a better approach.
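The two-step overlap search of section II.B (match table, longest diagonal run, then clearing) can be sketched in Python as follows. This is our own sketch of the described procedure, not the paper's implementation; the function name and variables are ours.

```python
def find_overlaps(pattern, text):
    """Return the non-overlapping continuous matches, longest first."""
    # match table: r[i][j] == 1 when pattern[i] == text[j] (figure 1)
    r = [[1 if pc == tc else 0 for tc in text] for pc in pattern]
    parts = []
    while True:
        # step 1: find the longest diagonal run of 1s
        ml = mi = mj = 0
        for i in range(len(pattern)):
            for j in range(len(text)):
                l = 0
                while (i + l < len(pattern) and j + l < len(text)
                       and r[i + l][j + l] == 1):
                    l += 1
                if l > ml:
                    ml, mi, mj = l, i, j
        if ml == 0:
            return parts
        parts.append(pattern[mi:mi + ml])
        # step 2: clear the match and every cell that overlaps it
        for i in range(mi, len(pattern)):      # bottom-left rectangle
            for j in range(mj + ml):
                r[i][j] = 0
        for i in range(mi + ml):               # top-right rectangle
            for j in range(mj, len(text)):
                r[i][j] = 0
```

For pattern baxxanata and text bananatree this returns the parts anat and ba, matching example 1.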
Again, the longest continuous overlap is the most important and will be used to quantify the similarity. Long continuous overlapping parts should score better than a series of short overlapping parts. The score is calculated by sorting the overlapping parts by length. Each length is divided by an increasing power of 2: the first overlapping length is divided by 2^0, the second by 2^1, and so on. Figure 4 illustrates this method. The score for pattern banxxanxxtree is calculated from its overlapping parts tree, ban and an as 4/2^0 + 3/2^1 + 2/2^2 = 6. The number raised to a power can be changed to 3 or more if the continuous length is more important.

Last, we need to create a difference between all full matches. The easiest way to do this is to include the length of the pattern as a ratio with the text. Pattern bananatreexxxx has a length ratio around 71 percent, while bananatreexxxxxx gives a ratio around 63 percent. The final result is calculated by taking the sum of the score (as a fraction of the text length) and the ratio; when there is no full match, the ratio is 0. This result is divided by 2 to normalize it, so a perfect match bananatree results in the maximum normalized score of 1.

Figure 3: Levenshtein distance calculated for example 2.
Pattern / Lev. dist.: bananatree 0; bananat 3; bananatreexxxx 4; bananatreexxxxxx 6; ban 7.

Figure 4: Similarity quantification using overlap, with text bananatree.
Pattern / score / ratio / result: bananatree; bananat (ratio 0): 0.35; bananatreexxxx; bananatreexxxxxx; banxxanxxtree (ratio 0): 0.3.

III. ALGORITHM IN PRACTICE

A. Find all character matches

To search for all individual matches it's required to nest two loops, one over the text and one over the pattern. This is the naïve method. The result is a 2D byte array where all positions are marked with either 0 for a mismatch or 1 for a match (see figure 1).
Let P be the pattern and T the text; the C# code is:

    //result byte array
    byte[][] r = new byte[P.Length][];
    //loop through pattern
    for (int i = 0; i < P.Length; i++) {
        r[i] = new byte[T.Length];
        //loop through text
        for (int j = 0; j < T.Length; j++) {
            r[i][j] = (byte)((P[i] == T[j]) ? 1 : 0);
        }
    }

B. Find longest match

The next step in our algorithm is to iterate through the byte array and return the longest continuous match. Again two loops are nested to find the first match. Inside the inner loop another loop checks the next item, one column and one row further. If this item also has a match, it checks the next item, and so on. The longest continuous match is stored. In this C# code we store the maximum continuous length ml, the first index of this match in the pattern mi and the first index of this match in the text mj:

    int ml = 0, mi = 0, mj = 0;
    for (int i = 0; i < P.Length; i++) {
        for (int j = 0; j < T.Length; j++) {
            //search diagonal for longest continuous match
            for (int l = 1, ii = i, jj = j;
                 ii < P.Length && jj < T.Length && r[ii][jj] == 1;
                 l++, ii++, jj++) {
                if (l > ml) {
                    ml = l;
                    mi = ii - l + 1;
                    mj = jj - l + 1;
                }
            }
        }
    }

Now that we have the longest match, it is time to clear this match along with all the other matches that are no longer valid. This process takes two clearing steps. First we clear every match where i >= mi and j < (mj + ml). Next we clear every match where i < (mi + ml) and j >= mj. In C# code this gives:

    //bottom left rectangle
    for (int j = 0; j < ml + mj; j++) {
        for (int i = mi; i < P.Length; i++) {
            r[i][j] = 0;
        }
    }
    //top right rectangle
    for (int i = 0; i < ml + mi; i++) {
        for (int j = mj; j < T.Length; j++) {
            r[i][j] = 0;
        }
    }

These two steps have to be repeated until no matches are found (ml == 0).

C. Quantify the comparison

We need to find the best pattern in the similarity results between multiple patterns and one text.
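The scoring from section II.C can be sketched in Python before looking at the C# version. This condensed function is our own illustration (names are ours); it reproduces the figure 4 numbers, e.g. 0.35 for bananat and 0.3 for banxxanxxtree.

```python
def similarity(pattern, text, overlaps):
    """Turn the overlap parts of one comparison into a normalized score."""
    lengths = sorted((len(o) for o in overlaps), reverse=True)
    # longer continuous parts weigh more: l0/2**0 + l1/2**1 + ...
    a = sum(l / 2 ** i for i, l in enumerate(lengths)) / len(text)
    # pattern/text length ratio, counted only for a full continuous match
    b = len(text) / len(pattern)
    return (a + b) / 2 if a == 1 else a / 2
```

Ranking a set of patterns by this score gives exactly the order of example 2: the shorter suffix xxxx ranks a full match above a longer one, and interrupted overlaps fall below both.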
Similarity calculation for each pattern can be done in C# like this:

    //overlap calc
    double a = 0;
    for (int i = 0; i < arrMatch.Count; i++) {
        a += (double)arrMatch[i] / Math.Pow(2, i);
    }
    a = a / (double)T.Length;
    //length calc
    double b = (double)T.Length / (double)P.Length;
    //percent result
    double result = (a + b * ((a == 1) ? 1 : 0)) / 2;

Here P is the pattern, T the text and arrMatch is a list containing all partial overlap lengths, longest first. The result is a percentage that quantifies the comparison. The first step includes the overlaps, each divided by 2 raised to an increasing power; this gives a higher ranking to a longer continuous match. If the length of a continuous match is more important, this can be changed to a division by 3 or more raised to an increasing power. The second step relates the length of the pattern to the length of the text: a short pattern compared to a long pattern with the same overlap is more accurate. In the final result we only include this length calculation when the pattern has a full continuous match with the text (a == 1).

IV. OPTIMIZATIONS

The naïve method is easy to understand, but it's not optimized for speed. Some adjustments can be made to increase the performance of the algorithm. The most important optimization is to reduce the number of comparisons; some comparisons are not necessary. A comparison between two characters should only be calculated once; it is a waste of time to calculate the same comparison twice. The KMP algorithm [3] is based on this theory. Other candidates are results that are never used or that are cleared in a second iteration. Look at figure 5, where some unnecessary results are marked in red. All these cells can be skipped during the first iteration.

Figure 5: Optimization (match table with skippable cells marked in red).

The right column is marked because after the first loop through the text we have already found a continuous match of two characters.
It is not possible to have a longer continuous match there, because the text ends. The same conclusion holds for the lower two rows. Based on the KMP algorithm, we can create a table with only the unique characters shared between the text and pattern. This way we can eliminate double comparisons. For example, in figure 5 row index 1 contains character a, and the text has three occurrences of character a; we can mark all three of those cells at once.

A totally different approach to optimization is the use of fuzzy searching. Look at row index 7. Before we find the match at column index 6 we have to go through six other columns. With fuzzy searching there is a chance that searching starts at column index 6. If the next fuzzy search is for column index 4, no other comparison has to be made for this row: the longest continuous match is then already found. It is clear that optimization is difficult to implement and debug, and some optimization techniques are only efficient for specific input strings. Therefore we consider optimization of the naïve method as future work in this paper.

Almost every programming language has its own search functions. These methods are highly optimized and could produce a faster result. Therefore we briefly look at the IndexOf function that is built into C#. The theory behind it is shown in figure 6. Each iteration searches for a match starting with a single character. When a match is found, the search is expanded by one character. In the first step we search the whole text and find that anat is the longest continuous match. In the next iteration the previous result is removed from both the pattern P and the text T. Now we search again for a single character, but not over the entire length: the search is limited to the section before the previous result in the text and pattern.

Figure 6: IndexOf comparison (searches 1-3 on T: bananatree, P: baxxanata).
Another search is limited to the section after the previous result in the text and pattern. This is marked with the colors in figure 6; in other words, we only search for overlaps between two parts that have the same color. Similar to the naïve method, this method also has two steps: a first step to search for a match and a second step to clean up. The search uses the IndexOf and Substring functions. The cleanup splits the text and pattern using the Split function, which returns an array containing the remaining substrings. Each iteration step stores the best match: we store the best length bl, the best index in the pattern bi, the best index in the text bj and the best array index bn. Here T and P are string arrays holding the remaining text and pattern parts. In C# code the search is:

    for (int n = 0; n < T.Length; n++) {
        for (int i = 0; i < P[n].Length; i++) {
            int j = -1, l = 1;
            while (i + l <= P[n].Length) {
                j = T[n].IndexOf(P[n].Substring(i, l));
                //not found
                if (j == -1) break;
                //found
                if (l > bl) {
                    bl = l; bi = i; bj = j; bn = n;
                }
                //look for next
                l++;
            }
        }
    }

And the cleanup is:

    //overlap
    string o = P[bn].Substring(bi, bl);
    //loop
    string[] nt = new string[T.Length + 1];
    string[] np = new string[P.Length + 1];
    int c = 0;
    for (int n = 0; n < T.Length; n++) {
        string[] st = new string[1] { T[n] };
        string[] sp = new string[1] { P[n] };
        if (n == bn) {
            st = T[n].Split(new string[1] { o }, 2, StringSplitOptions.None);
            sp = P[n].Split(new string[1] { o }, 2, StringSplitOptions.None);
        }
        for (int a = 0; a < st.Length; a++) {
            nt[c] = st[a];
            np[c] = sp[a];
            c += 1;
        }
    }

This code is also not optimized; some comparison steps can be discarded. This will not be discussed here and belongs to future work.

V. EXPERIMENTS

The difference between the naïve and IndexOf methods is shown in figure 7. The pattern length is 200 and the text length is increased from 1 to 200. If we change this so that the text length is 200 and the pattern length increases from 1 to 200, we get almost the same results.

Figure 7: Text length increased and pattern length fixed at 200.
In these experiments each comparison is calculated 100 times and the average execution time is plotted on the graph. Each string is randomly generated with a maximum of 26 different characters.

Besides the length, we can also change the difference between the strings. Let's vary the text from 1 to 95 maximum different characters while the pattern has a fixed maximum character difference of 95. The results are shown in figure 8. If we change this so that the pattern varies from 1 to 95 and the text has a fixed maximum difference of 95 characters, we get the results shown in figure 9. These experiments use a string length of 200. Each comparison is calculated 100 times and the average execution time is plotted on the graph. The trendline is automatically generated. In figures 8 and 9 the overall trend looks more like a logarithmic function than a straight line.

Figure 8: Text character difference increased 1-95 and pattern characters fixed at 95.

Figure 9: Text characters fixed at 95 and pattern character difference increased.

When we set either the text or the pattern to a character difference of 1, two spikes show up in the beginning (see figure 10). The spikes mark a full match between text and pattern.

Figure 10: Text character difference increased 1-95 and pattern characters fixed at 1.

VI. CONCLUSION

Comparing two strings to highlight the matching parts can be done in many ways. We compared two methods, a naïve method and an IndexOf method. Both are based on calculating partial matches in a first step and cleaning up in a second step. The naïve method creates a table where all character matches are marked with 1; further calculations are based on this table (a 2D array). The IndexOf method splits the input strings based on the longest continuous match; every comparison is calculated on the corresponding substring parts. The cleanup step removes the longest continuous match found and also removes all matches that overlap with this result. This is repeated until no matches are found.
Finally we loop through all partial overlaps to quantify the comparison. A continuous full match gets a higher ranking than an interrupted full match. This is done with a simple calculation based on the continuous overlap lengths and the string lengths.

The experiments show that the IndexOf method is faster in all cases. In terms of string length, it does not matter whether you compare string A with string B or string B with string A. The execution time of the IndexOf method increases gradually as the string length increases; the execution time of the naïve method increases much faster (see figure 7). Another important factor is the variety of characters. Here the naïve method is significantly slower than the IndexOf method. The comparison between a string with just one type of character and a string with 95 different types of characters goes rather fast, but the execution time increases very quickly as the string variety increases. Here we do see a difference between comparing string A with string B and string B with string A. If the text has a wider character variety relative to the pattern, the execution time of the naïve method increases gradually (see figure 9). The opposite happens when the text has a narrower character variety relative to the pattern: the execution time of the naïve method increases more strongly (see figure 8). The experiments also show that a complete match takes a lot of execution time (see the spikes in figure 10); a complete match demands a lot from the algorithms. Therefore it is recommended to do a simple pre-match before running the algorithms.

REFERENCES
[1] C. Charras and T. Lecroq, Exact String Matching Algorithms, Université de Rouen, January 1997.
[2] R.S. Boyer and J.S. Moore, A fast string searching algorithm, Comm. ACM 20(10), October 1977.
[3] D.E. Knuth, J.H. Morris, and V.R. Pratt, Fast pattern matching in strings, SIAM Journal on Computing, 1977.
[4] M.
Gilleland, Levenshtein Distance, in Three Flavors. Available online, February 2011.
[5] Li Yujian and Liu Bo, A Normalized Levenshtein Distance Metric, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, June 2007.
[6] S. Chapman, SimMetrics, an open source Java library. Available online, February 2011.
[7] W.W. Cohen, P. Ravikumar and S.E. Fienberg, A Comparison of String Distance Metrics for Name-Matching Tasks, Carnegie Mellon University, 2003.
[8] N. Fraser, Overlap Detection. Available online, November 2010.
[9] D. Eppstein, Knuth-Morris-Pratt string matching. Available online, May.

Beamforming and noise reduction for speech recognition applications

B. Van Den Broeck 1, P. Karsmakers 1, 2, K. Demuynck 2
1 IBW, K.H.Kempen [Association K.U.Leuven], B-2440 Geel, Belgium
2 ESAT, K.U.Leuven, B-3001 Heverlee, Belgium

Abstract: In order to have adequate speech recognition results, the speech recognizer needs a decent speech signal which doesn't contain too much noise. One way to obtain such a signal is to use a close-talk microphone. However, this might be impractical for the user in certain situations, e.g. when the user lies in bed. A more practical and comfortable alternative for the user is a contactless recording device acquiring the signal at a relatively large distance from the speaker. In the latter case, intelligent signal processing is required to obtain an appropriate speech signal. In this paper the signal processing part is covered by a beamformer with noise cancelation; a commonly used one is the GSC (Generalised Sidelobe Canceller). Under ideal circumstances the GSC works quite well. However, when microphone mismatch is taken into account, the resulting speech signal can be distorted, which might seriously impact the accuracy of a speech recognizer. Since matched microphones are expensive, new methods were developed to reduce the effect of microphone mismatch by using altered versions of some well-known GSC algorithms.
Two of these are the SDR-MWF and the SDR-GSC (resp. Speech Distortion Regularized Multichannel Wiener Filter and Speech Distortion Regularized Generalized Sidelobe Canceller). In this work the SDR-MWF and SDR-GSC will be compared with their non-SDR equivalents, which are not designed to handle microphone mismatch. The comparison is carried out both theoretically and practically, by simulations and experiments using non-matched microphones, showing the positive effect of the SDR-MWF on the recognition of speech. We will show an improvement of 20% in word error rate for a noisy environment and 2% for a less noisy environment. Furthermore, some practical issues are handled as well, such as a method for easy assessment of the beamformer and a way to generate good overall extra noise references.

Index Terms: Noise reduction, Speech Distortion Regularized Multichannel Wiener Filter / Generalized Sidelobe Canceller, Sum and delay beamformer

I. INTRODUCTION

For speech recognition applications, a close-talk microphone is often used to guarantee a clean speech signal. But this solution has the drawback that the user needs to wear the close-talk microphone. When sound is acquired at greater distances with a single microphone, the speech signal proves to be too corrupted for speech recognition, especially in a noisy environment. One way to overcome this problem is to use a multi-microphone system to form a sum and delay beamformer [1]. This suppresses sound coming from all directions except one desired direction. Normally, sum and delay beamformers are described by transfer functions for all directions [1], where the many variables make them quite difficult to assess. In this paper we will introduce an energy-based directivity pattern; from this pattern we can assess the beamformer simply by a beam width and a suppression factor for undesired directions.

In order to further suppress the noise, an adaptive filter is often used, such as NLMS or RLS (resp.
Normalized Least Mean Square, Recursive Least Square). These techniques are better known as GSC (Generalized Sidelobe Canceller) and are further explained in [2]. In theory these techniques work quite well, but due to imperfections such as microphone mismatch the resulting speech can contain a lot of speech distortion [3]. This proves to be disadvantageous for the recognition. In the literature, methods are described to overcome this by using ultra high quality microphones or by manually matching microphones, but this results in an expensive end product (either through expensive hardware or labor). However, techniques are available which partially overcome the latter problem by using a different algorithm to update the adaptive filter, without resulting in a more expensive end product. Two of these techniques are SDR-MWF and SDR-GSC (resp. Speech Distortion Regularized Multichannel Wiener Filter and Speech Distortion Regularized Generalized Sidelobe Canceller) [3].

In this work SDR-MWF, SDR-GSC, a regular MWF and an NLMS will be compared both on theoretical grounds and in practice. As criteria for comparison, both noise reduction and speech distortion will be used. This will show the positive effect obtained by an SDR algorithm and the up- and downsides of an MWF and a GSC. Additionally, the effects of using an extra noise reference are investigated. Furthermore, we will elaborate on the positioning of these extra noise microphones.

II. BEAMFORMING AND NOISE REDUCTION ALGORITHMS

A. Beamforming

The sum and delay beamformer is depicted in fig. 1. It consists of multiple microphones on a single line. The theory behind this beamformer is relatively easy. First we compensate the different delays of the sound coming from the desired direction, caused by the different distances traveled. Then we just take the mean of each sample over all microphones. Sound coming from the desired direction will constructively interfere and hence pass unmodified.
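The two steps just described, delay compensation followed by averaging, can be sketched in Python for whole-sample delays. This is our own simplified illustration (real arrays generally need fractional delays); the names are ours.

```python
def delay_and_sum(mics, delays):
    """Sum and delay beamformer with whole-sample delays: line up every
    microphone signal on the desired direction, then average over M mics."""
    m = len(mics)
    # usable length after removing each microphone's delay
    n = min(len(x) - d for x, d in zip(mics, delays))
    return [sum(x[d + t] for x, d in zip(mics, delays)) / m
            for t in range(n)]
```

After the delays are removed, a desired-direction signal adds coherently and is returned unchanged, while signals that do not line up are averaged down.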
Sound coming from undesired directions (which is assumed to be noise) will partially interfere destructively and hence be suppressed. This way the SNR (signal-to-noise ratio) improves compared to the output of one microphone.

fig. 1: Sum and delay beamformer.

Mathematically, with M microphones at positions d_m along the array, sound speed c, source angle θ_source and desired angle θ_desired, the system can be described by its transfer function

H(f, θ_source) = (1/M) · Σ_{m=1}^{M} exp(j·2πf·d_m·(cos θ_source − cos θ_desired)/c).   (1)

In equation 1 we can notice that if θ_source = θ_desired then H(·) becomes 1, and all sound will pass. In order to gain further insight into the properties of the sum and delay beamformer, we evaluated an example. A directivity pattern for a 3-microphone beamformer is shown in fig. 2. The microphones are equidistantly spaced, with 5 cm between neighbouring microphones. It can be noticed that for θ_source ≠ θ_desired (= 90°) the gain is smaller than one, but it is difficult to assess the beamformer from this figure. Therefore a different way of depicting the directivity pattern of the beamformer is introduced.

fig. 2: f-curves, 3 mics 5 cm spaced.

If X(f) = 1 for f ∈ [0, f_nyquist] (white noise), then the energy ratio between the output of the sum and delay beamformer and the input of one microphone can be calculated as

E(θ_source) = (1/f_nyquist) · ∫_0^{f_nyquist} |H(f, θ_source)|² df   (2)

= 1/M + (2/M²) · Σ_{i<k} sinc(2·f_nyquist·(d_i − d_k)·(cos θ_source − cos θ_desired)/c),   (3)

with sinc(x) = sin(πx)/(πx). In equations 2 and 3 it is seen that the term 1/M stands for the average suppression of sound coming from undesired directions. So we can state that the average improvement in SNR depends only on the number of microphones used, and not on their spacing. We also see that the gain from adding one microphone diminishes when the number of microphones is already large (also stated in [4]). The summation of sinc functions forms the beam. The narrowest sinc is produced by the largest d_i − d_k (the furthest-spaced microphones), and this primarily determines the beam width. Applying equations 2 and 3 to the setup from fig. 2, we end up with fig. 3. Here we can clearly talk about a formed beam.

fig. 3: Energy-plot, 3 mics 5 cm spaced.

B.
Noise reduction algorithms

It has been shown that the sum and delay beamformer can improve the SNR of the speech, but it still falls short when used for speech recognition applications [2]. Noise from undesired directions is still present, and noise coming from the desired direction is not suppressed at all. In order to further enhance the speech signal, an additional noise reduction stage is needed. Most noise reduction algorithms are based on an adaptive filter. Here SDR-MWF, SDR-GSC and their non speech-distortion-regularized equivalents are compared. First the SDR-MWF will be explained; next, the other methods are explained using the SDR-MWF framework.

The scheme of the SDR-MWF is depicted in fig. 4. To the left we see the microphones and the previously described beamformer (A(z)). We also see a blocking matrix (B(z)), which in fact does the opposite of the beamformer: it takes the differences of consecutive microphone inputs so that the speech is removed, forming noise references. All of these outputs are passed to the SDR-MWF. The delay block makes sure that, relative to the speech reference sample for which we want to estimate the present noise, the filters w can contain as many samples from the future as from the past.

fig. 4: SDR-MWF [5]

To compute the update (equations 6 and 7), information about the pure speech is needed, which obviously is unknown. The solution lies in the use of a VAD (voice activity detection), so that we can collect statistics about the noise-only and the noise+speech parts individually. Combining this information gives us well-estimated statistics of the pure speech. This is handled in detail in [5]. Equation 6 also contains a parameter which stands for the step size taken to update the filters. Using a fixed step size can result in an unstable update. Therefore a normalized step is used, similar to NLMS [6].
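For reference, a generic NLMS update can be sketched in Python. This is a textbook NLMS step, not the SDR-MWF update itself; the names and the parameters rho and eps are our own illustration of a normalized step size.

```python
def nlms_step(w, x, d, rho=0.5, eps=1e-8):
    """One NLMS update: w <- w + rho * e * x / (eps + ||x||^2),
    with a-priori error e = d - w.x; eps keeps the step finite."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    # normalize the step by the input energy so the update stays stable
    g = rho * e / (eps + sum(xi * xi for xi in x))
    return [wi + g * xi for wi, xi in zip(w, x)], e
```

Because the step is divided by the input energy, the same rho works for loud and quiet signal segments, which is exactly why a normalized step is preferred over a fixed one.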
Now the step will be normalized by the energies of the speech and the noise. A small parameter is added so that the normalized step cannot become infinite. The idea in SDR-MWF is to update the filters w so that the combined output represents the noise in the speech reference, preferably without containing too much speech, because that would introduce speech distortion. Notice that this can only happen when the noise references contain speech, which is the case when the microphones are not matched or when the filter w_0 is used. To further explain the algorithm, some auxiliary variables are introduced (equation 4). The goal is to update the filters w in order to minimize the cost function (equation 5). Notice that the superscripts s and n stand for the speech and noise parts of the signals, and that the expectation operator denotes the expected value. The second term stands for the energy of the residual noise in the output; minimizing it will suppress the noise. The first term stands for the energy of the speech distortion; the inclusion of this term makes the algorithm speech distortion regularized. There is also a weighting parameter µ which regulates the importance of the second term against the first. In this way an appropriate balance between noise reduction and speech distortion [3][5] can be obtained. The cost function is minimized by gradient descent, resulting in the following update function [3][5]. Again, information about the pure speech is required; a solution similar to that for w (equations 6 and 7) is applied, which is further described in [5]. SDR-GSC is identical to SDR-MWF except that the filter w_0 is not used. The non-SDR equivalents are derived by excluding the first term in the cost function [3]; this can also be accomplished by making µ very large. An additional remark can be made about the noise references. In fig.
4 all noise references are obtained by the blocking matrix, but this does not have to be so; other noise references can be used as well. These can be formed anywhere. Only two constraints need to be satisfied. First, the noise in these references must be produced by a noise source that also causes noise in the speech reference. Second, the SNR in these references should be low in order to avoid speech distortion. In the experimental section an extra set of microphones with a blocking matrix will be used to obtain such a noise reference. III. SIMULATIONS A. Comparison of noise reduction algorithms In this section a comparison of the discussed noise reduction algorithms is made. The setup was as follows: a three-microphone beamformer was used with 5 cm spacing. A speaker was simulated at 90 degrees to this beamformer. The speech consisted of 2 sentences, the first to initialize the filters and the second to validate the result. This results in an energy pattern as depicted in fig. 5. Then a pink noise source was simulated at various angles from 0 to 90 degrees, to form a noisy signal with an SNR of 5 dB at the output of one microphone. All microphones were simulated as ideal; there was no microphone mismatch. fig. 5: Energy pattern for simulation A. The noise reduction algorithms were run on this data. The parameters used were: L (the filter length of one filter) = 11, = 0.1, = 0.1 and = (a parameter used for estimating the information about the pure speech, see [5]). These parameters proved to work quite well in previous simulations (not included in this paper). This was done for various values of µ. The result was judged by the energy of the residual noise in the output, the energy of the speech distortion in the output, and the gain in SNR from the output of one microphone to the output of the noise reduction algorithm (the energy of speech distortion is also considered to be noise). The results for the SDR-MWF are shown in fig.
6 and the results for the SDR-GSC are given in fig. 7. First the results obtained by the SDR-MWF will be discussed. It can be noticed that the energy of the residual noise decreases with increasing µ, while at the same time the energy of the distortion increases. This is as expected, since µ is the parameter that trades off noise reduction against speech distortion. When looking at the resulting SNR gain, it can be seen that an optimum forms at one value of µ, where the residual noise and the speech distortion are best balanced. Notice that this is why the SDR algorithm was chosen in the first place. As mentioned before, the non-SDR equivalents can be found by setting µ very large, which might cause a lot of speech distortion. When looking at the different noise angles, it can be noticed that at 90 degrees (1.5708 radians) the result is at its worst. This is caused by two things. First, the beamformer will not reduce any noise, because this is the desired direction. Second, at this angle the noise references formed by the blocking matrix will not contain any noise (it is blocked in the same way the speech is blocked). The only useful noise reference is then the speech reference itself (via w_0). So the noise has to be estimated from a signal which contains a lot of speech. The algorithm must therefore focus very hard on limiting the speech distortion, which makes noise reduction hard to do. But looking at the SNR gain, a small improvement can still be achieved. fig. 6: Results for SDR-MWF, at various noise angles. Now, when considering the results obtained by the SDR-GSC, it is seen that the value of µ has hardly any effect. Because of the perfectly matched microphones and the absence of w_0, none of the noise references contain any speech. This way the algorithm only has to focus on the noise reduction, no matter what value of µ is used.
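The balance that µ controls can be illustrated on toy data: a filter w is adapted with an NLMS-style normalized gradient step on a cost that sums speech-distortion energy and µ-weighted residual-noise energy, in the spirit of equation 5. Everything below (signal model, dimensions, step sizes) is an illustrative assumption, not the setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal model (illustrative): x_s = speech leaking into the noise
# references, x_n = noise in them, d_n = noise in the speech reference
# that the filter w should learn to reproduce (so it can be subtracted).
N, L = 2000, 8
x_s = 0.1 * rng.normal(size=(N, L))
x_n = rng.normal(size=(N, L))
w_true = rng.normal(size=L)
d_n = x_n @ w_true + 0.05 * rng.normal(size=N)

mu = 1.0    # trade-off: very large mu ignores distortion (non-SDR behaviour)
rho = 0.5   # base step size
eps = 1e-6  # keeps the normalized step finite
w = np.zeros(L)

for xs_i, xn_i, d_i in zip(x_s, x_n, d_n):
    e_d = xs_i @ w            # speech distortion sample (first cost term)
    e_n = d_i - xn_i @ w      # residual noise sample (second cost term)
    # Gradient of e_d**2 + mu * e_n**2, taken with a normalized step.
    grad = 2 * e_d * xs_i - 2 * mu * e_n * xn_i
    w -= rho / (eps + xs_i @ xs_i + xn_i @ xn_i) * grad

print(np.linalg.norm(w - w_true))  # small relative to ||w_true||
```

Raising µ makes the noise term dominate the update, driving w toward the pure noise predictor regardless of the speech leakage, which mirrors the non-SDR limit discussed above.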
If microphone mismatch were introduced this would no longer be the case, but we can still expect a large interval of appropriate choices for µ. This makes the SDR-GSC easier to set up with appropriate parameters. The downside of the SDR-GSC can be seen at a noise angle of 90 degrees: here there will not be any improvement. The beamformer does not remove any noise, and the noise reduction algorithm has no noise references left (w_0 not being used). fig. 7: Results for SDR-GSC, at various noise angles. B. Extra noise reference (placement) In the previous simulation it was shown that the SDR-GSC can perform better in terms of speech distortion, but has the disadvantage that no noise references are left when the noise comes from the desired direction. The latter can be overcome by using extra noise references. It was already stated that the noise references should contain no (or very little) speech; therefore extra beamformers with a blocking matrix, directed at the speaker, will be used. The question remains where to position them. First, they need to be placed away from the original beamformer, otherwise the previous problem recurs. Second, it must be noticed that most noise has a poor autocorrelation. Therefore the noise in the noise reference cannot be delayed much relative to the noise in the speech reference (or at least the delay has to stay within the bounds set by the filter lengths), and this for all possible desired directions. This means that the extra beamformers cannot be placed too far away from the original beamformer. This leads to the following setup: two extra beamformers (2 microphones, 5 cm spaced) are placed 0.5 m to the left and right of the original beamformer (which is the same as in the previous simulation) to form two extra noise references. A speaker is simulated at 1.5 m from the original beamformer at an angle of 90 degrees. A noise source is simulated at various angles (0-180 degrees) at a distance of 3 m from the original beamformer.
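The placement constraint can be checked numerically: the travel-time difference of the noise between the main and the extra beamformer, expressed in samples, must stay within the span of the adaptive filter. The geometry, sample rate and filter length below are illustrative assumptions in the spirit of this setup.

```python
import math

def delay_in_samples(p_noise, p_main, p_extra, fs=16000, c=343.0):
    """Travel-time difference of the noise between the main and the
    extra beamformer, in samples, for point sources in a 2-D plane."""
    return (math.dist(p_noise, p_extra) - math.dist(p_noise, p_main)) * fs / c

L = 21  # filter length in taps
# Noise source 3 m in front of the main array; extra array 0.5 m to the side.
delay = delay_in_samples((0.0, 3.0), (0.0, 0.0), (0.5, 0.0))
print(abs(delay) <= (L - 1) / 2)  # True: fits within a centred filter of length 21
```

With these distances the extra-path delay is only about two samples, so even the shorter filter (L = 11) can align the references; moving the extra beamformer much further out would exceed the filter span for some noise angles.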
The noise and speech were the same as in the first simulation. The SDR-GSC algorithm was run for all these different setups with parameters: µ = 1, = 0.1, = 0.1, = , and for two different filter lengths, L = 11 and L = 21. The resulting SNR gain is depicted in fig. 8, left for a filter length of 11 and right for a filter length of 21. The SNR gain is plotted as a function of the angle of the noise source. In these figures it is clearly seen that with the use of an extra noise reference an improvement is achieved in the case where the noise comes from the same angle as the speech (90 degrees). It can also be noticed that a larger filter length gives an improvement over a larger interval of angles: with a larger filter length, a greater delay between the noise in the noise references and the noise in the speech reference can be allowed. fig. 8: Results simulation B. IV. EXPERIMENTS All experiments were conducted in the room shown in fig. 9. The beamformer consisted of five randomly selected electret microphones (CUI Inc., part number CMA-4544PF-W), evenly spaced at 3 cm. This forms the energy pattern shown in fig. 10. Here it can be seen that the formed beam is small enough to reject the noise source, but still wide enough to allow for a possible small positioning fault. The noise was played from a laptop. The speech was the Aurora4 dataset [8]. All results shown here are word error rates (WER). This is the number of wrongly recognized words in the set, expressed as a percentage of the total number of words. These WERs were obtained with a noise-robust speech recognizer built using the SPRAAK toolkit [7]. Below, we show results for the processed electret microphones, the unprocessed electret microphones, and a good-quality reference microphone at the same distance as the electret microphones. fig. 9: Room for experiments. fig. 10: Energy pattern for experiment. A. Noisy environment The noise source was set to play random white noise.
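The WER metric just described can be sketched as follows. The usual definition counts substitutions, insertions and deletions via edit distance over word sequences; it reduces to the simpler "wrongly recognized words over total words" description when only substitutions occur. The example sentences are hypothetical.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance between the reference
    and the recognizer output, as a percentage of the number of
    reference words."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic programming over a (len(r)+1) x (len(h)+1) grid.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return 100.0 * d[len(r)][len(h)] / len(r)

print(wer("turn the radio on", "turn the radio off"))  # 25.0
```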
The microphone outputs were processed with the SDR-MWF algorithm, with parameters: L = 11, = 0.1, = , = 0.1, and various values of µ. The results are shown in fig. 11. First, it can be noticed that using the algorithm yields an improvement in terms of WER. Interestingly, the results for various µ again show an optimum, resulting from a good balance between residual noise and speech distortion. As explained before, a non-SDR algorithm is obtained when µ is very large; as a consequence the WERs are much worse. This shows that using an SDR algorithm, compared to its non-SDR alternatives, can have a positive effect when combined with a speech recognizer. When looking at the results for the single microphones, it can be seen that the electret microphones deliver results almost equal to those of the good reference microphone. The reason is that the noise source, not the quality of the microphone used, has the biggest influence on the result. One last comment should be made about the overall high WERs. The noisy environment was rather extreme: even after noise reduction, the speech was flooded with noise in the higher frequency regions, and the speech recognizer had serious problems with this. Still, the influence of balancing residual noise and speech distortion is clearly visible. fig. 11: Results experiment A. B. Less noisy environment The setup was the same as in the previous experiment, with the difference that the noise source was removed. Only a little noise remained, introduced by the microphone preamplifiers used. This way another effect takes the upper hand, namely reverberation. The results are shown in fig. 12. We can see that all findings of experiment A for the electret microphones also apply here. The only difference is that the WER is smaller overall. This is due to the already better microphone signals.
When comparing the results for the individual microphones, it can be seen that the electret microphones now give worse results than the good reference microphone. The reason for this can be found in the noise introduced by the electret preamplifiers. visualize the spatial filtering effect of beamformers, which provides an easy assessment of such beamformers. Next, the SDR-MWF and the SDR-GSC algorithms were reviewed and experimentally compared. SDR-MWF performs better at all noise angles, but SDR-GSC can have the advantage of inducing less speech distortion if an extra noise reference can be made; an appropriate setup for doing so was proposed. Finally, two practical experiments were carried out: one in a noisy environment and another in an almost noise-free environment. Both experiments led to the same conclusion: an optimum forms over the trade-off parameter µ, which controls the amount of residual noise versus the amount of speech distortion. When µ is set too high or too low, the recognition of the speech deteriorates because of too much speech distortion or too much residual noise, respectively. The difference between the two experiments was that the results for the less noisy environment were better overall. For the noisy environment we achieved a gain in WER of 20% absolute; for the less noisy environment this gain was about 2% absolute. VI. ACKNOWLEDGEMENTS Research supported by the Flemish Government: IWT, project ALADIN (Adaptation and Learning for Assistive Domestic Vocal Interfaces). REFERENCES [1] J. Benesty, M. Sondhi, I. Huang. Handbook of Speech Processing. Berlin: Springer. [2] J. Mertens. Opbouw en validatie van een spraak acquisitie- en conditioneringssysteem. KH Kempen. [3] A. Spriet, M. Moonen, J. Wouters. Stochastic gradient implementation of spatially pre-processed multi-channel [4] D. Van Compernolle and S. Van Gerven, Beamforming with microphone arrays, Heverlee: KU Leuven ESAT, 1995. [5] S. Doclo, A. Spriet, M.
Moonen, J. Wouters. Design of a robust multimicrophone noise reduction algorithm for hearing instruments. ESAT, K.U.Leuven, 2004. [6] S. M. Kuo, Real-time digital signal processing: implementations and applications, 2nd ed. [7] K. Demuynck, X. Zhang, D. Van Compernolle and H. Van hamme. Feature versus Model Based Noise Robustness. In Proc. INTERSPEECH, Makuhari, Japan, September. [8] N. Parihar, J. Picone, D. Pearce, H. G. Hirsch. Performance analysis of the Aurora large vocabulary baseline system. Proceedings of the European Signal Processing Conference, Vienna, Austria, 2004. fig. 12: Results experiment B. V. CONCLUSIONS In this paper we assessed the performance of different beamformers. We first introduced a novel energy pattern to Reinforcement Learning: exploration and exploitation in a Multi-Goal Environment Peter Van Hout 1, Tom Croonenborghs 1/2, Peter Karsmakers 1/3 1 Biosciences and Technology Department, K.H.Kempen, Kleinhoefstaat 4, B-2240 Geel, Belgium 2 Dept. of Computer Science, K.U.Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium 3 Dept. of Electrical Eng., K.U.Leuven, Kasteelpark Arenberg 10/2446, B-3001 Heverlee, Belgium Abstract In Reinforcement Learning the agent learns to choose the best actions depending on the rewards. One of the common problems is the trade-off between exploration and exploitation. In this paper we consider several ways to deal with this in a multi-goal environment. In a multi-goal environment it is important to choose the correct value for epsilon.
If the trade-off between exploration and exploitation is not correct, the agent takes too long to learn or does not find an optimal policy. We present several experiments to illustrate the importance of this trade-off and to show the influence of different parameters. Index Terms: Reinforcement Learning, Boltzmann, Epsilon-Greedy, Q-Learning. I. INTRODUCTION AND RELATED WORK Reinforcement Learning is a sub-area of Machine Learning in which an agent has to learn a policy from the rewards it gets from the environment for certain actions [Sutton, Barto, 1998]. The agent can use different methods to learn a policy from the environment, such as SARSA or Q-learning; the latter is the one I use in the experiments discussed in this paper, in combination with two well-known exploration techniques, Epsilon-Greedy and Boltzmann, which take different approaches to exploration and exploitation. This paper is written as an extension of my master thesis, which is likewise a combination of different experiments on different environments using the Q-agent. The basics of RL (Reinforcement Learning), Q-learning and Epsilon-Greedy/Boltzmann are described in the following sections of this paper. The results of the experiments are compared after the short introduction to RL and its components. II. INTRODUCTION TO REINFORCEMENT LEARNING A. Reinforcement Learning In RL there is interaction between an agent and the environment [Kaelbling, Littman, Moore, 1996]. The agent gets an indication of its state and chooses an action that it wants to execute. Due to this action the state changes and the agent receives feedback about the quality of the transition (the reward) and its new state. This is repeated until the experiment is over. While the experiment is running, a policy is learned. This policy approximates the optimal policy over time. In the long run the agent should be choosing the actions that result in a high long-term reward.
The goal of the task is to optimize the utility function V^π(s) for all states. The most common definition is the discounted sum of future rewards, where the discount factor (γ) indicates the importance of future rewards with respect to immediate rewards. Figure II.1: Interaction between agent and environment (in state s_t the agent chooses action a_t; the environment returns reward r_{t+1} and new state s_{t+1}). Consider for instance the following example: Environment: You are in state 1 and have 4 possibilities. Agent: I'll choose action 2. Environment: You are now in state 4 and have 4 possibilities. Agent: I'll choose action 1. B. Q-Learning Q-learning is one of the possible algorithms to learn a policy [Watkins, Dayan, 1992]. The Q-agent uses the Q-function, whose Q-values are defined on state-action pairs. The Q-value gives an indication of the quality of executing a certain action in a certain state. Q-learning is an off-policy learning algorithm; this means that the learned Q-function will approximate the optimal Q-function Q* independently of the exploration that is used. The update rule used in Q-learning is: Q(s_t, a_t) ← Q(s_t, a_t) + α [r + γ max_a' Q(s_{t+1}, a') - Q(s_t, a_t)]. In Q-learning some parameters have an influence on the behavior of the Q-function. 1) Learning rate The learning rate (α) describes how the newly acquired information is used. If α = 0 the agent does not learn anything, because only the old information is used, while with α = 1 only the newly acquired information is used to update the Q-values. 2) Discount factor The discount factor (γ) determines the importance of future rewards. If γ = 0 the agent only considers the current rewards, while with γ = 1 the agent goes for a high long-term reward. A common value for the discount factor is 0.9. C. Exploration For convergence it is necessary to execute each action in each state an infinite number of times. Intuitively, even if you already have a good policy, you want to investigate whether there is no better policy.
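The Q-learning update rule above can be sketched directly. The states, actions and the reward of 1.0 in this continuation of the dialogue example are hypothetical.

```python
from collections import defaultdict

alpha, gamma = 1.0, 0.9   # learning rate and discount factor, as in the text
Q = defaultdict(float)     # Q-values defined on (state, action) pairs

def q_update(s, a, r, s_next, actions):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Hypothetical continuation of the dialogue above: action 1 in state 4
# reaches the goal (reward 1.0), and action 2 in state 1 led to state 4.
actions = [1, 2, 3, 4]
q_update(4, 1, 1.0, 'goal', actions)
q_update(1, 2, 0.0, 4, actions)
print(Q[(1, 2)])  # 0.9: the goal reward, discounted one step back
```

With α = 1 the old Q-value is overwritten entirely, which is why the goal reward propagates back in a single pass through this deterministic example.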
Q-learning only describes that an agent must choose an action, not how it chooses this action. For exploration we can use different techniques [Croonenborghs, 2009], so that the agent can try to collect as much new information as possible. The agent must also be able to use its information to accomplish a task while learning. Sometimes the agent has to give up exploration so that it can use and exploit its knowledge. A good trade-off between exploration and exploitation is therefore a must. 1) Epsilon-Greedy Epsilon-Greedy is an extension of the Greedy algorithm. Greedy only chooses the action with the highest Q-value, and because of this it is not able to explore the environment. When the Greedy algorithm is extended with exploration we call it Epsilon-Greedy. With a small probability (epsilon) the agent chooses a random action; otherwise the greedy action is chosen. Mostly a small part is used for exploration and a big part for exploitation. 2) Boltzmann Boltzmann uses a different strategy for choosing an action, by assigning a non-zero probability of being executed to every possible action in a certain state. The probabilities of all actions in a state should sum to 1 (100%), and the probability per action is calculated from the Q-values and a temperature (see section IV.C). In the experiments the learning rate (α) is set to 1.0 due to the use of a deterministic environment (in a particular state, an action always ends up in the same state), and the discount factor (γ) is 0.9 to optimize for a high long-term reward. The learning rate and discount factor are discussed in sections II.B.1 and II.B.2. B. Environment The created maze is pretty simple and made specifically to illustrate the importance of a good trade-off between exploration and exploitation. This environment is deterministic, i.e. for each action in a particular state there is only one possible new state. For the illustration I used a 2-goal environment with different rewards.
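A deterministic two-goal grid world in the spirit of the maze of Table III.1 can be sketched as follows. The layout and the goal rewards here are hypothetical, since the exact maze dimensions and reward values did not survive extraction.

```python
# Hypothetical layout: walls (W), free cells (0), a fixed start (S), a
# near goal (g) and a further goal (G) with a higher reward.
GRID = [
    "WWWWWWW",
    "W0000GW",   # G: far goal, hypothetical reward 5.0
    "W0W0W0W",
    "Wg0000W",   # g: near goal, hypothetical reward 1.0
    "WWWSWWW",   # S: start state
]
REWARDS = {'G': 5.0, 'g': 1.0, '0': 0.0, 'S': 0.0}
MOVES = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}

def step(pos, action):
    """Deterministic transition: each action in a state leads to exactly
    one new state; moves into a wall (or off the grid) leave the agent
    in place. Returns (new position, reward, episode finished)."""
    r, c = pos
    dr, dc = MOVES[action]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])) or GRID[nr][nc] == 'W':
        nr, nc = r, c                      # blocked
    cell = GRID[nr][nc]
    return (nr, nc), REWARDS[cell], cell in ('G', 'g')

print(step((4, 3), 'up'))  # ((3, 3), 0.0, False): one step off the start square
```

An episode then simply chains `step` calls until a goal is reached or the step limit (200 in the experiments) runs out.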
The following rewards apply: Goal 2: , Goal 1: , obstacle and step: 0.0. W W W W W W W W W W W W W W W W G W W W W W W W W W W W W E2 0 0 W W W W 0 0 E1 0 G W W W W W W W W W W W W W W W W W W W W W W W W S W W W W W W W W W W W W Table III.1 - The maze (environment). Around the maze (Table III.1) there is an obstacle, the wall (W), so that the agent has to move in the free (0) space. The start state (S) is fixed at the bottom, while the two goals (G1 and G2, E1 and E2) are at different distances from the start state. The G- and E-goals belong to two different mazes: when the maze with the G-goals is used, the E-goals are free space (0), and vice versa. The Boltzmann probability per action is calculated with a formula based on the Q-values and the temperature. III. SETTINGS A. Q-agent In the deterministic environment used here, each action in a particular state always leads to the same next state. C. Experiment Each experiment runs for 1000 episodes. At the beginning of an episode the agent starts in the start state; the episode ends when the agent reaches a goal or hits the step limit of 200. Of these 1000 episodes, every 10th is used to check whether the agent has already learned the optimal policy. The reported results are an average over 5 experiments. The experiments were done using the program RL-Glue [Tanner, White, 2009]. IV. COMPARISON A. Random In a multi-goal environment it is always difficult to choose the right trade-off between exploration and exploitation. With an agent that always takes a random action, the goal closest to the start state will be reached much earlier; goals further away take more time to reach. The graphs are represented in color: green marks the states with a high frequency of steps and red those with only a few. Table IV.1 - Environment map-temperature with random actions. As shown in the table (Tbl. IV.1), the goals at the end of the environment are more difficult to reach. B. Epsilon-Greedy In the experiment two epsilon values are used: a very commonly used epsilon of 0.2 and one of 0.8. The epsilon value is the measure of the probability, i.e.
an epsilon of 0.2 is the same as a 20% chance of a random action. In the remaining 80% the agent chooses the greedy action. 1) Epsilon of 0.2 With only a 20% chance of a random action (explained above), this strategy focuses more on exploitation and only a little on exploration. Looking back at the map-temperature of the random actions (Tbl. IV.1), we can say that when the second goal is situated further back, it will be more difficult to reach. We can also conclude that when the goals are situated next to each other, it is easier to reach the second goal. For the environment with the G-goals, the second goal with the higher reward will not be reached within a normal time limit. In the case of the E-goal environment it is a lot easier to reach the second goal with an epsilon of 0.2, but it will take some time. a. b. Table IV.2 - EG0.2 map-temperature for the G-goal (a.) and E-goal (b.) environment. The table above (Tbl. IV.2) clearly shows the different behavior on the two mazes. When the agent has to run in the difficult environment (a.) it only finds the first goal and the best path to it; this is the green corridor. When the easier environment (b.) is used, the agent finds the way to the second goal, but still prefers the first goal. If the number of experiments increases, the agent will learn the way to the other, better goal. The table also gives a good view of the fact that the agent is led to the first goal. 2) Epsilon of 0.8 With 80% random actions this is close to a fully random agent. This value puts the focus on exploration, and less on exploitation. The benefit of this strategy is that the agent learns the goals more quickly, especially when there is more than one goal. Due to the high chance of random actions, the map-temperature will look almost the same as for a random agent (Tbl. IV.1). The difference is that there will be a green corridor to the learned goal.
Unlike the corridor of EG0.2, this corridor will be wider, because the strategy does not focus on exploitation. In the table (Tbl. IV.3) below, the green spot is the wide corridor to the goal. The behavior in both environments is almost the same, and the agent learns both goals pretty quickly, on the easy map (b.) a little faster than on the difficult map (a.). a. b. Table IV.3 - EG0.8 map-temperature for the G-goal (a.) and E-goal (b.) environment. 3) Epsilon 0.2 against 0.8 The higher the epsilon, the more exploration and the faster the other goals are found. But this can be a disadvantage too: in reinforcement learning we want to learn and use the best path in a short amount of time. An epsilon of 0.8 will learn the best path in a short time, due to the random actions taken 80% of the time; the agent explores more than it exploits the learned policy. When the epsilon is set to 0.2, the agent focuses more on exploiting the learned policy and chooses the best actions according to this policy, although these actions are not always the best possible ones. There is a probability that the agent will learn the second goal in the future, but it can take a lot of time, and that is obviously not what we want. The time taken depends on the difficulty of the environment used, but the general proposition holds. The figure below (Fig. IV.1) illustrates the rewards (y-axis) when the agent runs with a frozen policy over a certain time (x-axis). It shows that both exploration strategies learn the optimal policy; the more random the strategy, the faster the optimal policy is learned. settings you have made. Not every kind of exploration is good for all kinds of environments. In a multi-goal environment it is important to explore enough to find each goal quickly, and afterwards to switch quickly from exploration to exploitation, so as to use the best possible path once the optimal policy is learned.
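The two selection rules compared above can be sketched compactly. The Boltzmann formula itself is missing from the extracted text, so the standard softmax form with temperature T, P(a) = exp(Q(s,a)/T) / sum_a' exp(Q(s,a')/T), is assumed; the Q-values are illustrative.

```python
import math
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon pick a random action, otherwise the
    action with the highest Q-value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def boltzmann_probs(q_values, temperature):
    """Assumed standard Boltzmann (softmax) action probabilities:
    a higher Q-value gives a higher probability; a high temperature
    flattens the distribution (exploration), a low one sharpens it
    (exploitation)."""
    weights = [math.exp(q / temperature) for q in q_values]
    total = sum(weights)
    return [w / total for w in weights]

q = [0.1, 0.9, 0.3]
print(boltzmann_probs(q, 0.5))    # peaked on the best action
print(boltzmann_probs(q, 100.0))  # near uniform: mostly exploration
```

This makes the parallel explicit: epsilon and temperature both dial the same exploration/exploitation trade-off, which is why the two sets of experiments behave so similarly.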
Epsilon-Greedy and Boltzmann are not the optimal exploration strategies to use in a multi-goal environment, due to the number of actions that are taken randomly. The epsilon or temperature used has an influence on the behavior, but it is difficult to set this value or to change it over time. If we want good results in a multi-goal environment, we have to think about an alternative strategy, such as one that first explores the whole map and then exploits the best path (e.g. Rmax, E³). ACKNOWLEDGMENT We would like to express our gratitude to Tom Croonenborghs and Peter Karsmakers for their support concerning the machine learning concepts. We would also like to express our gratitude to Tom Croonenborghs for the support concerning reinforcement learning. REFERENCES Figure IV.1 - Received rewards in episodes with different epsilon values on both environments. C. Boltzmann The use of the Boltzmann exploration strategy is similar to the use of Epsilon-Greedy. While with Epsilon-Greedy the trade-off between exploration and exploitation is given by the epsilon, Boltzmann uses the temperature in its formula for this trade-off. The higher the Q-value, the higher the probability of the action being executed. When the temperature is decreased, the behavior resembles exploitation; as the temperature is increased, Boltzmann resembles exploration. The trade-off for both exploration strategies is set through parameters; therefore the differences between the Epsilon-Greedy and Boltzmann experiments are negligible. [1] B. Tanner, A. White, RL-Glue: Language-Independent Software for Reinforcement-Learning Experiments, Journal of Machine Learning Research 10, 2009. [2] R. Sutton, A. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998. [3] L. Kaelbling, M. Littman, A. Moore, Reinforcement Learning: A Survey, Journal of Artificial Intelligence Research, 1996. [4] T. Croonenborghs, Model-Assisted Approaches for Relational Reinforcement Learning, kuleuven.be, 2009. [5] C.
Watkins, P. Dayan, Q-learning, Machine Learning, vol. 8, 1992. V. CONCLUSION Both exploration strategies are important in the story of exploration, but good exploration needs a correctly chosen epsilon or temperature value. The best way to know the best epsilon or temperature settings is to test the Service-Oriented Architecture in an Agile Development Environment Niels Van Kets, Bart De Neuter and Dr. Joan De Boeck Abstract Within the domain of software development there has always been a keen discussion about the combination of Service Oriented Architectures (SOA) with Agile methodologies. Both SOA and Agile development strive towards change, but in a different way: Service Oriented Architecture enables business agility, while Agile enables agility while developing. Despite the common background, there are still some points where Service Oriented Architecture impedes Agile development. This paper starts with an analysis to discover where Service Oriented conditions prevent the use of Agile development methodologies. This is followed by a theoretical study on evading these conditions in such a way that Agile methodologies can be used. Afterwards, a proof of concept shows whether it is possible to create a Service Oriented Architecture based on the solutions derived from the theoretical study and the main Agile methodologies. This proof of concept is founded on Representational State Transfer (REST) services, which enable an enterprise to scale globally with their Service Oriented Architecture. Index Terms: Agile, Enterprise Service Bus, Representational State Transfer, Service Contracts, Service Oriented, Web Services. I. INTRODUCTION MODERN enterprises are subject to change these days. Staying alive in competitive domains demands that enterprises be able to change business rules and processes very quickly.
Because enterprises are exposed to external influences like the economy, no one can predict with certainty when, or to what extent, changes will be necessary. The only thing a business environment can do is make sure that changes are welcome and manageable. For IT this means that it must be aligned with the business, so that it can react the same way the business itself does. In a perfect scenario, both the architecture and the development process are able to cope with changes whenever they occur. That is where SOA and Agile development show up. SOA makes sure that the software architecture adapts well to business changes by stressing low coupling and high cohesion in business processes. Agile development is a software development methodology that stresses agility within the development cycle and quick, iterative releases of working software. In practice the two principles are not easily adopted together in one enterprise. There are some SOA constraints that tend to hinder Agile development. (N. Van Kets is a Master of Science student at Katholieke Hogeschool Kempen, Geel, 2440, Belgium. B. De Neuter is a Senior Software Architect at Cegeka, Leuven, 3001, Belgium. Dr. J. De Boeck is a university lecturer in the Department of Industrial and Biosciences, Katholieke Hogeschool Kempen, Geel, 2440, Belgium.) It feels unnatural that such a conflict is possible: SOA is an architectural approach and Agile describes how teams have to develop software; these are two completely different things, so they must be compatible somehow. The Agile development methodology described in this paper is based on Scrum [1] and Extreme Programming (XP) [2]. Both Scrum and XP are derived from the basic Agile principles described in the Agile Manifesto [3]. They both describe the iterative development of software and embrace change within that process. They apply a prioritised approach towards new functionality to implement, meaning that business processes with the highest value are delivered first.
Both Scrum and XP can be used together within one team and give a good set of rules for delivering fast, efficiently and incrementally. Scrum focuses on avoiding and disentangling obstructions within an Agile development process. Scrum solves these obstructions by using transparent communication between team members and between the team and the customer. Extreme Programming focuses more on the engineering practices Agile teams can use. These practices describe processes or tools that intensify the Agile methods. In this paper we will first discuss what we understand by SOA. Many definitions have manifested all over the web and within the software development world. Because of this ambiguity, we will first present our interpretation of Service-Oriented Architecture. Next we will distinguish the problems we encountered when trying to develop an SOA with Agile methodologies. Based on each of these impediments we will describe how to avoid them without being unfaithful to the SOA and Agile principles. Last, we will show that, when chosen right, an SOA can easily be developed in an Agile manner and even scale globally very fast. We will try to prove this with our proof of concept based on Representational State Transfer services as described by Roy Fielding [4].

Cegeka, January 2011

II. SERVICE ORIENTED ARCHITECTURE
The need for Service-Oriented Architecture is founded on the need to tie the business and IT within a corporate environment together. When used properly, the technology behind an SOA can even empower the business instead of putting constraints on it. If software can be made reusable within its business context, the business can gain profit by not having to reinvent the wheel. SOA gives the business the key to doing this on an organizational scale. This way an organization can cope better with changing business needs. When applied right, a business change will only trigger a recombination of existing services, sometimes with some new services.
A. Services
Within a Service-Oriented Architecture, a service represents one specific part of the business domain, with all its possible methods and functions. This functionality is made available to external services or users through its interface. The following list gives the constraints for services in an SOA. Services are as loosely coupled and autonomous as possible, so they can live with a minimal set of dependencies. A service has high cohesion, so it contains only the responsibility of one small part of the business. Services have to be reusable, at least within the business. Services can be combined to represent real business processes.

This description is deliberately broad. Most other SOA definitions impose many more constraints on an SOA, but these are the general fundamentals of a Service-Oriented Architecture. The other constraints, like service registries, business process management, orchestration, ... should only be implemented when there is a necessity.

III. TOOLS AND FRAMEWORKS
The first impediment we encounter when developing an SOA with Agile methodologies is tools and frameworks. Many companies, when adopting SOA, do so based on specialized tools and frameworks. This is because SOAs are mostly described as bombastic solutions to get IT in line with the business. Many claim that SOAs cannot exist without large service registries, huge messaging frameworks (see section V), ... Agile discourages this kind of approach. The Agile Manifesto [3] even states that individuals and interactions have more value than processes and tools. In practice this means that an Agile team always has to implement the simplest thing possible. This encourages lightweight frameworks that deliver only the necessary functionality. A second problem with large frameworks is that they deliver no or only marginal business value. Within Agile development, the business drives the implementation; things that don't deliver business value may not be implemented at this time. Although it is a key value of Agile development to postpone tool and framework decisions as long as possible, this is often forgotten. Developers need to consider lightweight frameworks when developing an SOA with Agile development methods.

IV. SERVICE CONTRACTS
The second impediment we have discovered is service contracts. In an SOA, each service should have its own service contract. This contract describes how other services or users can interact with it. Because these contracts are important to enable the combination of multiple services into one logical business process, SOA tends towards big up-front design. Since multiple services can depend on one service contract, it seems better to design these contracts in advance so they won't change too often at a later stage. This is exactly what Agile development tries to avoid: it strives for development where functionality is incrementally delivered. This also means that services and service contracts should evolve based on this incremental development. To solve this problem it's important to evaluate the lifecycle of a service developed in an Agile manner. We've discovered that a service can roughly be in three different stages. Figure 1 describes this lifecycle.

Fig. 1. Service Stages

A. Development
The first stage starts with the birth of a service and ends at the maturity stage. During this stage the service starts small and grows incrementally each sprint. During that growth, the service contract grows and changes along with the service itself. If the service is developed separately from other services, this agile method will succeed. However, if multiple services are built at the same time and these services depend on each other's contracts, there is a need for another approach. Suppose service A is being developed at the same time as service B.
Service A internally uses service B, so the development team of service A will build a stub service based on the contract of service B to simulate the interaction with it. Whenever the service contract of service B changes during development, the team of service A should update their stub service to fulfil the new contract. It is very important for team A to see whenever a contract change breaks existing code, so it can react very quickly. In practice it is very important to communicate contract changes. If two services are developed by the same team, the daily team communication will ensure this process. If, however, two services are developed by two separate teams, there will be a need for some written documentation. For example, creating a wiki to keep track of service contract changes is a very good practice when working with two teams. Once a service is finished and released, it enters the maturity stage.

B. Maturity
The second stage is the maturity stage. This is the stage in which a service contract is stable and mature. During this time, consumers of the service have to comply with the service contract available.

C. Evolution
Since a service represents business functionality, it is subject to the same influences a business has to cope with. This means that a service is bound to change in a timely manner. Since one service can have multiple consumers, it is important to make sure that they evolve in pace. This evolution can be a major constraint when multiple consumers depend on the provider. Inside an SOA, service contracts provide the coupling between services. This coupling should be as loose as possible. Ian Robinson [5] describes three types of contracts in his article on service evolution. 1) Provider contracts give the complete set of elements available to consumers. A provider contract has a one-to-one relationship with the service and is authoritative. All consumers must agree upon this contract.
During maturity the contract is stable and immutable. Evolution of a provider contract will be very demanding: the service provider has to maintain older versions of services or update all the consumers. 2) Consumer contracts arise when provider contracts take consumer expectations into account. When a consumer contacts a provider, it sends its expectations for the provider response. Based on these expectations, the provider sends the subset of business functions that the consumer requested. Consumer contracts have a many-to-one relationship with the provider. These contracts are non-authoritative because they don't cover the total set of provider obligations. Like provider contracts, consumer contracts are stable during maturity. 3) Consumer-driven contracts are slightly different from consumer contracts. They give a better view of the business value a provider exploits at a given time. A consumer-driven contract is always complete: it represents all functionality demanded by all consumers at a given point in time. When consumers need more functionality, the provider contract is updated. Consumer-driven contracts are singular, but still non-authoritative because they are not driven by the provider. These contracts are stable and immutable as long as the list of consumers does not change.

D. Service Contract Conclusion
Robinson describes two specific benefits of consumer-driven contracts. First, these contracts only include functionality required by their consumers. As an effect, it is always clear what the extent of use is for the provider. The second advantage is that providers and consumers can stay backwards and forwards compatible more easily. Consumer-driven contracts provide knowledge of which functionality consumers really use. When changes occur, it is much easier to see whether they will affect the consumers, which makes a provider more manageable and agile within the development context. V.
ENTERPRISE INTEGRATION
A third problem arises when we talk about enterprise integration. With enterprise integration, service-oriented architects describe the middleware that connects individual business processes and services. In this part we will show that, when chosen wrong, an enterprise integration solution can be a real burden on Agile development.

A. Enterprise Integration Solutions
To describe the most common problem between SOA and Agile development, we have to situate the different integration solutions available. Figure 2 shows that the two integration solutions both start from common Enterprise Integration Patterns. These patterns are abstracted and described from repeated use in practice.

Fig. 2. Enterprise Integration Solutions

Hohpe and Woolf [6] identify 65 patterns which are organized in 7 categories. Integration styles describe different ways to integrate systems; these can be file transfer, shared database, remote procedure invocation or messaging solutions. The Enterprise Integration Patterns are specific to the latter. Channel patterns describe the fundamental part of messaging systems; these channels are based on the relation between providers and consumers. Message construction patterns describe the type of message, its intent, form and content. Routing patterns describe routes towards different receivers based on conditions. Transformation patterns are responsible for message content changes. Endpoint patterns describe the behavior of clients within the system. System management patterns are solutions for error handling, performance optimization, ... Because Enterprise Integration Patterns, as described above, are bound to messaging, they are very important in Service-Oriented Architectures. The integration between services consists of exchanging messages to trigger events or exchange data.
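Two of the pattern categories above, message transformation and content-based routing, can be sketched in a few lines. The sketch below is illustrative only: plain Python, not tied to any specific integration framework, and all names are our own.

```python
# Minimal sketch of two Enterprise Integration Patterns: a message
# translator (transformation) and a content-based router (routing).
# All names are illustrative, not taken from any real framework.

def translate(message):
    """Message translator: normalize an incoming payload to a uniform format."""
    return {"type": message.get("type", "unknown"),
            "body": str(message.get("payload", ""))}

def route(message, channels):
    """Content-based router: pick an output channel based on message content."""
    channel = channels.get(message["type"], channels["dead-letter"])
    channel.append(message)
    return channel

# Three channels modelled as simple lists.
orders, invoices, dead = [], [], []
channels = {"order": orders, "invoice": invoices, "dead-letter": dead}

for raw in [{"type": "order", "payload": 1},
            {"type": "invoice", "payload": 2},
            {"payload": 3}]:            # no type -> routed to dead-letter
    route(translate(raw), channels)
```

The translator gives every message the same shape before routing, which is exactly the "uniform messaging format" role the virtual-ESB component plays later in the paper.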
Since patterns only describe the behavior of the different messaging components, Enterprise Integration Frameworks and Enterprise Service Buses cope with their practical use. Frameworks provide a solution to use integration patterns from within a programmable context. Enterprise Service Buses, on the other hand, combine all integration rules within one black box and provide some extra functionality like security, QoS, BPM, ...

B. ESB
In this part we describe why Enterprise Service Buses are related to some problems with Agile development, and give a solution to avoid these problems.

ESBs hinder agile engineering practices. Agile development uses a number of engineering practices. Because of an ESB, all engineering practices that involve integration are bound to use the ESB. Especially for test-driven development and continuous integration this is a problem. Test evaluations will be less clear because developers do not know what happens inside the ESB. Automated test and build cycles will be hard because they need access to the ESB. Everything that hinders engineering practices hinders Agile development and should be avoided.

ESBs do not allow easy changes. Vendors sell ESBs to provide complete integration solutions for the whole enterprise (see figure 3). All of the integration needs are bundled within one vendor-specific black box. This means that whenever an enterprise buys itself an ESB, it gets a vendor lock-in with it for free. This vendor lock-in makes development less agile because developers are bound to use the vendor's solution. Changing or replacing an ESB is a difficult task, and therefore developers will try to avoid it. This is completely against Agile development: within Agile development allowing change is a necessity, and anything that works against change has to be avoided.

ESBs are under separate control. Because ESBs span the whole enterprise, they are mostly managed by a separate team. This is alien to the cross-functional teams that Agile describes. These teams are responsible for the complete implementation of business needs, including the integration of services. With an ESB, these Agile teams are slowed down because they have to communicate with other teams before they can integrate services.

ESBs are not incremental. Vendors sell ESBs based on an analysis of an enterprise. They analyse the business integration needs, make a design, and based on that design they try to convince the organization to buy their solution. This type of development is referred to as the waterfall approach. Because they try to implement everything at once, making an ESB work becomes a very difficult and long development.

C. Lightweight Enterprise Integration
We need to find a solution that solves enterprise integration without reducing agility. Figure 3-A shows both the problem and the solution for enterprise integration. It shows multiple point-to-point connections between applications and systems. The big problem with this picture is that the connection between services is hard-coded within the service itself. This means that whenever service compositions change, these changes can occur anywhere in the code, which makes it unmanageable. What we need is a combination of both worlds. We do not want an ESB like figure 3-B, but the single point of entry that a service uses is a good idea. To solve this problem we can use a virtual ESB as in figure 4.

Fig. 3. Enterprise integration: A without ESB, B with ESB

Fig. 4. Virtual ESB

With a virtual ESB each service is connected, through its single point of entry, to its own small integration service. This piece of software connects all the services its service relies on. Typically this connection consists of two parts: provide routing to the other services, and transform service messages to a uniform messaging format between the services. This is the part where Integration Frameworks show up.
The virtual ESB will only need a combination of Enterprise Integration Patterns. Frameworks can handle the practical implementation of these patterns without creating the extra overhead ESBs do. This way we create a Service-Oriented Architecture that can cope with Agile development. Teams that implement a service also build the virtual ESB component that takes care of the enterprise integration. This way, the Agile team members are responsible for the whole service, including the integration. The uniform message format makes sure that each service talks the same business language. It also contributes to the agile engineering practices like test-driven design and continuous integration. In their presentation about enterprise integration, Martin Fowler and Jim Webber [7] describe another approach based on the World Wide Web defined by Tim Berners-Lee. They suggest using the internet as middleware. The HTTP protocol used on the internet can be interpreted as a globally deployed coordination framework. This coordination comes from the use of status codes (404 Not Found, 200 OK, ...). But most important of all, the internet is incremental: it accepts new and small pieces being implemented. This gives a great advantage when using Agile development methodologies.

VI. PROOF OF CONCEPT WITH REST SERVICES
In the previous sections we have seen some constraints when combining an SOA with an Agile development approach. Based on these concerns we have decided to build a proof of concept based on Representational State Transfer (REST) services and Java. REST services, as described in Roy Fielding's dissertation [4], use the WWW as middleware to exchange resource representations over a uniform interface. A resource is something interesting made public throughout the WWW by one or more representations. Each resource representation has at least one Uniform Resource Identifier (URI). This is the address at which a resource representation is available.
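The idea of a URI-addressable resource whose lifecycle is driven by the standard HTTP verbs can be modelled in a few lines. The sketch below is a plain-Python illustration only, not the paper's Java implementation; the in-memory store, the `/employees` path and the status codes merely mimic typical REST behaviour.

```python
# Illustrative sketch: a resource identified by a URI path, with the four
# common HTTP verbs mapped onto CRUD operations on an in-memory store.
# This is NOT the paper's Java PoC, just a minimal model of the idea.

class ResourceStore:
    def __init__(self):
        self._items = {}      # URI -> resource representation
        self._next_id = 1

    def handle(self, verb, uri, body=None):
        if verb == "POST":                      # create: mint a new URI
            uri = f"{uri}/{self._next_id}"
            self._next_id += 1
            self._items[uri] = body
            return 201, uri
        if verb == "GET":                       # read
            if uri in self._items:
                return 200, self._items[uri]
            return 404, None
        if verb == "PUT":                       # update (or create at URI)
            self._items[uri] = body
            return 200, uri
        if verb == "DELETE":                    # delete; idempotent
            self._items.pop(uri, None)
            return 204, None
        return 405, None                        # verb not allowed

store = ResourceStore()
status, uri = store.handle("POST", "/employees", {"name": "Alice"})
```

Because every verb has fixed semantics, any HTTP client or intermediary can coordinate with the service, which is the "internet as middleware" point made above.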
An important property of the REST style is the uniform interface. Each interface communicates over HTTP with the common HTTP verbs (POST, GET, PUT, DELETE, ...). When talking about resources, these HTTP verbs can have a one-to-one match with the create, read, update, delete (CRUD) lifecycle of the resource. In our proof of concept (PoC), we have built three separate services (figure 5). Each service handles the lifecycle of a common resource.

Fig. 5. Proof of concept

A. Agile
One important constraint for our PoC was to use Agile methodologies. This means using Agile project management, applying product backlogs, sprint backlogs and tracking mechanisms, and Agile engineering practices like TDD, continuous integration, pair programming and refactoring.

B. Tools and frameworks
In an Agile development cycle it is very important to be able to change. Heavy tools and frameworks put constraints on this agility, so we had to find lightweight tools and frameworks to enable this change. With this constraint in mind we've chosen to start our project with Maven [8] and Spring [9]. Maven is used to simplify build processes, dependency management, unit testing, and much more. With Maven enabled, there is no need to write build and test scripts to test and build projects; everything is automated. This is important for an Agile team, because build scripts don't add any business value to the project and should be avoided. Spring, on the other hand, is a very large framework, but we chose to start with only the Inversion of Control (IoC) container it provides. The IoC container eliminates the need for singletons and factories within our code by delivering them at runtime. Both frameworks reduce the effort and complexity of starting and maintaining a Java-based project. While developing we will certainly need more tools and frameworks, but we have always tried to pick lightweight frameworks that allow changes as much as possible. For example, Hibernate [10] allows us to change our database very easily by just changing a few lines of configuration code. The switch from a MySQL database to an Oracle database should only take about 10 minutes.

C. Service Contracts
For this project we are working with one team. This means that we can rely on our own tests to see if contract changes break functionality; there is no need for wiki pages to advertise our changes. As we have discovered in our service contract chapter, it is better to use consumer-driven contracts. Because we use Agile methodologies, our requirements are described in user stories. These stories tell us what users (in our case consumers) should be able to do. This means that the Agile methodologies are already driven by consumer expectations. Based on these expectations, we should implement our service contracts and services. In our case we have used the Enunciate [11] package, which automatically generates service contracts based on the Web Application Description Language (WADL) [12]. This way we have only built code that our business really needs. Services only expose functionality consumers really need, being consumer-driven.

D. Enterprise Integration
The enterprise integration part was probably the most important constraint in our implementation. Our PoC consists of three services which are all consumable through a REST interface. The employee and company services are both implemented on site A, while the project service is implemented on site B. Because we use the HTTP protocol to communicate with the services, it becomes very easy to integrate them. The HTTP protocol is globally available and known, so by using the WWW we are actually using a big middleware platform that scales globally. This is the main advantage of REST services. But there was still one flaw in our implementation. On the project side we can put our employees on different projects.
We do this by taking an aggregate of our employee (the staff number) and attaching it to a project. There is one situation where a problem arises. When we delete an employee at site A, this employee should be removed from all the projects he is in, but this should happen asynchronously. When an employee is deleted, our employee service should notify the project service as soon as possible to delete all aggregates. But when the project service is down, we don't want our employee service to fail; instead, it should send the request as soon as the project site is available again. This is what we mean by asynchronous. To do this, we have chosen to use an enterprise integration framework, called Camel [13], at the employee service. This framework saves an XML file on the employee service's hard disk and tries to deliver it as soon as possible to our project site. This delivery also goes through REST, which means that our project service does not need extra configuration or frameworks to handle the request.

VII. CONCLUSION
Service-Oriented Architecture is often misunderstood because of the ambiguity that has emerged throughout the years. People forgot that Service-Oriented Architecture is all about services that are reusable and discoverable. All the other constraints that vendors or businesses link with SOA are a burden most of the time. We have discovered that Agile is all about the less-is-more principle when it comes to choosing proprietary frameworks and tools. Developers should only implement the things that deliver business value, and this as simply as possible. The second impediment when combining SOA with Agile development is service contracts. Since businesses are bound to change, services are bound to change. Every service should have its own service contract, which means that service contracts should be able to change too.
Nobody can foresee to what extent services will have to change in the future, so there is no way of making a service contract up front that lasts a lifetime. We have described a service lifetime that starts with development, goes to maturity, and from maturity can go to evolution and back whenever change is needed. Within development there are two main cases. One team works on all services: the daily communication will make sure that contract evolution is managed. Multiple teams work on dependent services: there is a need for written documentation (a wiki), or extensive communication between those teams, to exchange contract changes. When a service evolves, we have found that consumer-driven contracts are more manageable within an Agile context. They always show how a provider is used by consumers and make sure they stay compatible more easily. The third and last described impediment is enterprise integration. We have shown that vendors try to sell solutions for the whole enterprise which don't scale very well. Their famous ESBs are hard to implement because they use a big-bang approach which hardly ever works the first time. These ESBs make our enterprises less agile because integration changes become very costly and time-consuming. They also stimulate a single team that copes with the enterprise integration, which makes it difficult for other Agile teams to estimate the duration of implementation. As a solution for this problem we have described a lightweight enterprise integration approach with a virtual ESB. In this setup, each team developing a service also integrates that service with other services. They glue a small amount of integration code onto their service to interact with other services. This interaction happens in the form of messaging in a uniform language. As a general conclusion we can say that Service-Oriented Architectures and Agile development are combinable.
They both stress agility in different contexts, but when applied well they reinforce each other.

REFERENCES
[1] K. Schwaber, M. Beedle, Agile Software Development with Scrum, Prentice Hall.
[2] K. Beck, C. Andres, Extreme Programming Explained: Embrace Change, Pearson, 2nd edition.
[3] Manifesto for Agile Software Development.
[4] R. T. Fielding, Architectural Styles and the Design of Network-based Software Architectures, University of California, Irvine.
[5] I. Robinson, Consumer-Driven Contracts: A Service Evolution Pattern, ThoughtWorks Ltd.
[6] G. Hohpe, B. Woolf, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Addison-Wesley Professional.
[7] M. Fowler, J. Webber, Does my Bus look big in this?, ThoughtWorks Ltd.
[8] Apache Software Foundation, Apache Maven Project.
[9] Spring.
[10] JBoss Community, Hibernate, Relational Persistence for Java and .NET.
[11] Codehaus, Enunciate.
[12] W3C, Web Application Description Language.
[13] Apache Software Foundation, Apache Camel.

Detection of body movement using optical flow and clustering

Wim Van Looy 1, Kris Cuppens 1,2,3, Bart Vanrumste
1 IBW, K.H. Kempen (Associatie KULeuven), Kleinhoefstraat 4, B-2440 Geel, Belgium
2 Mobilab, K.H. Kempen, Kleinhoefstraat 4, B-2440 Geel, Belgium
3 KULeuven, ESAT, BioMed, Celestijnenlaan 4, B-3000 Heverlee

Abstract -- In this paper we investigate whether it is possible to detect movement in video images recorded of sleeping patients with epilepsy. This information is used to detect possible epileptic seizures, normal movement, breathing and other kinds of movement. For this we use optical flow and clustering algorithms. As a result, different motion patterns can be extracted from the clustered body parts.

Keywords -- Epilepsy, optical flow, spectral clustering, movement, k-means

I. INTRODUCTION
Epilepsy is still being researched in the medical world.
It is very hard to find a specific cure for the disease, but currently around 70% to 75% of patients can be treated with medication or specific operations. As a way to find new insights into these types of attacks, monitoring is important. Nowadays different methods have been proposed and developed to detect and examine epileptic seizures [10]. Mostly the video-EEG standard is used: neurologists monitor the patient using cameras and EEG electrodes [14]. They can look at the behavior and meanwhile compare the reaction of the brain in an EEG chart. This is an effective but uncomfortable way of monitoring a patient. Electrodes have to be attached, which consumes a lot of time, is uncomfortable for the patient, and makes it quite hard to do other activities while being monitored. In addition, medical staff is required and it is not possible to monitor a patient for a longer period. In this paper we explain a new approach for detecting epileptic seizures with body movement, using video monitoring. Our next goal is to achieve an accurate detection of the seizures using simple and well-priced hardware. With a simple infrared camera and a computer it should be possible to make a detection from the video images, featuring a resolution of 320 x 240 pixels. The use of simple hardware requires intelligent processing, as video data is computationally intensive. We investigate which algorithms are efficient on our video data. Previous research showed us that clustering algorithms can be used to cluster optical flow results for image segmentation [9]. In this paper, algorithms such as optical flow and spectral clustering are tested on video recordings. The first algorithm, optical flow, is a motion detection algorithm that is capable of calculating vector fields from two video frames. For this we use the Horn-Schunck method [15], which is discussed in section II. Optical flow calculates the movement of each pixel using parameters such as brightness and contrast.
Each vector contains the velocity and direction, which allows us to extract the information necessary for seizure detection. This is the main reason why this algorithm was chosen. Other motion detection techniques, such as background subtraction or temporal differencing, do not give us information about the velocity and position of the movement. Next, clustering is used to cluster the different features that are given by the optical flow algorithm. The goal is to separate different body parts and measure their velocity and direction to make an accurate prediction of the movement. The monitoring of respiration is also possible using our method. The algorithms are applied using Matlab. Section II explains how the clustering can be optimized, how threshold calculation is done and which standards we used to come to our conclusions. In the third section we explain the results and how these results are influenced. Finally, in the last section a vision is given on possible future improvements.

II. METHOD
A. Video specifications
The video data we use was recorded in the Pulderbos epilepsy centre for children and youth in Antwerp. It is compressed using Microsoft's WMV compression. The video has a resolution of 720 x 576 with a frame rate of 25 frames per second. The camera is positioned in the upper corner of the room, monitoring a child sleeping in a bed. It is possible that the person is covered by a blanket, so the different body parts are not always visible. This shouldn't be a problem in the final result, as the body movement connects with the blanket. The image is recorded in grayscale using an infrared camera; no RGB information is available. Before we are able to use the data for our optical flow algorithm, the video sequences are reduced in size and frame rate. We downsize the image to 320 x 240 pixels, which contains enough detail to apply the optical flow. The frame rate is also lowered to 12.5 frames per second.
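The size and frame-rate reduction just described can be sketched with array slicing. This is an illustrative NumPy sketch under our own assumptions: the video is modelled as a stack of grayscale frames already decoded into an array (real code would read the WMV stream with a video library), and nearest-neighbour sampling stands in for whatever resampling the authors used.

```python
import numpy as np

# Sketch of the preprocessing step: spatial downsizing and frame-rate
# reduction (25 fps -> 12.5 fps, 720 x 576 -> 320 x 240).

def reduce_video(frames, out_h=240, out_w=320, frame_step=2):
    """Halve the frame rate and resize each frame by nearest-neighbour
    sampling. `frames` is a (n_frames, height, width) grayscale array."""
    reduced = []
    for frame in frames[::frame_step]:          # drop every other frame
        h, w = frame.shape
        rows = np.arange(out_h) * h // out_h    # nearest-neighbour indices
        cols = np.arange(out_w) * w // out_w
        reduced.append(frame[np.ix_(rows, cols)])
    return np.stack(reduced)

video = np.random.rand(50, 576, 720)   # 2 s of 25 fps, 576 x 720 frames
small = reduce_video(video)            # -> 25 frames of 240 x 320
```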
Due to these reductions, processing time has significantly decreased with a minor loss of detail.

B. Optical flow
The next step is to deliver the reduced video data to the optical flow algorithm. The algorithm calculates a motion field from the consecutive frames. It calculates a vector for every pixel which is characterized by the direction and magnitude of the movement in the video. Mathematically, brightness constancy for a pixel with intensity \(I(x, y, t)\) translated over \((\Delta x, \Delta y)\) in time \(\Delta t\) can be written as:

\[ I(x, y, t) = I(x + \Delta x,\; y + \Delta y,\; t + \Delta t) \tag{1} \]

Using differential methods, the velocity in both the x and y directions is calculated. The Horn-Schunck method uses partial derivatives to calculate motion vectors. It has a brightness constancy constraint: the brightness stays constant over a certain time span. This is used to track pixels from one image to another. Horn & Schunck also use a smoothness constraint: in case of an unknown vector value, the algorithm assumes that its value will be quite similar to the surrounding ones. It is important to supply video with rigid objects and a good image intensity; otherwise the algorithm will respond less accurately. The algorithm calculates movement from two consecutive frames by default. It is possible to use frames over a bigger time span to emphasize the movement (e.g. to monitor respiration, which is a slower movement). First we need to specify the smoothness and the number of iterations. The smoothness factor we've chosen is 1. This value is proportional to the average magnitude of the movement; it also depends on the noise factor. In our approach the factor was determined by experiment. Next, the number of iterations has to be specified. When the number is higher, the motion field is more accurate and noise is reduced. A downside is that the higher this parameter is, the more calculations have to be done. Our experience showed us that 1000 iterations was ideal for both accuracy and speed. As a result, the optical flow algorithm produces a motion vector field.
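The iterative Horn-Schunck scheme described above can be sketched compactly in NumPy. This is a sketch under stated assumptions, not the paper's Matlab code: simple finite differences and a 4-neighbour average replace the original kernels, borders wrap, and the synthetic test frames (a Gaussian blob shifted one pixel to the right) are our own.

```python
import numpy as np

# Compact Horn-Schunck sketch. alpha is the smoothness weight (the paper
# uses 1) and n_iter the number of update iterations (the paper uses 1000;
# 100 is enough for this small example).

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    I1 = I1.astype(float); I2 = I2.astype(float)
    # Spatial and temporal derivatives via simple finite differences.
    fx = np.gradient((I1 + I2) / 2.0, axis=1)
    fy = np.gradient((I1 + I2) / 2.0, axis=0)
    ft = I2 - I1
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour average implements the smoothness constraint.
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                 np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                 np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Horn-Schunck update rule from the brightness constancy constraint
        # fx*u + fy*v + ft = 0.
        num = fx * u_avg + fy * v_avg + ft
        den = alpha ** 2 + fx ** 2 + fy ** 2
        u = u_avg - fx * num / den
        v = v_avg - fy * num / den
    return u, v

# Synthetic frames: a bright blob that moves one pixel to the right.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((x - 30) ** 2 + (y - 32) ** 2) / 40.0)
frame2 = np.exp(-((x - 31) ** 2 + (y - 32) ** 2) / 40.0)
u, v = horn_schunck(frame1, frame2)
```

For rightward motion the recovered horizontal component u is positive in the textured region, while v stays near zero, matching the brightness constancy reasoning above.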
The information is stored in a matrix containing a complex value for each pixel. Taking the absolute value gives the magnitude of the motion vector; the angular value shows the direction of the moving pixel. When this information is visualized, some noise in the signal becomes visible. This noise is the result of motion indirectly caused by the camera. A threshold is calculated to eliminate this noise (e.g. Fig. 1). The maximum amplitude for each frame is plotted, and the result is compared to movie sections without movement. The maximum amplitude of these sections is used as a threshold. Fig. 1 Maxima of the magnitude of the optical flow calculations. Video length is 10 seconds. Noise is visible below the threshold level; actual movement is indicated by magnitude values above it. For all the magnitude values beneath this threshold, the matching motion vector is replaced by a zero and is ignored in any further calculations. C. Clustering The next step in this process is to cluster the vector field. A clustering algorithm makes it possible to group objects or values that share certain similarities or characteristics. In this approach we cluster pixels belonging to one moving part in the image, having the same direction, speed and location. This results in different body parts moving separately from each other. Different clustering methods are available to make a classification. We tested several spectral clustering methods as well as the standard Matlab k-means clustering on our dataset. We also used the Parallel Spectral Clustering in Distributed Systems toolbox for Matlab provided by Wen-Yen Chen et al. [3], which provides different clustering methods. Eventually, the k-means clustering algorithm provided by Matlab proved best; both accuracy and speed scored very well in our opinion. More info is given in the next section. Before we apply the clustering, certain features have to be extracted from our optical flow field.
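The noise thresholding described earlier in this section can be sketched as follows; the function and variable names are illustrative.

```python
import numpy as np

def suppress_noise(flow, quiet_flow):
    """Zero out motion vectors whose magnitude falls below a threshold.
    The threshold is the maximum magnitude observed in 'quiet' video
    sections that contain no movement, as described in the text."""
    threshold = np.abs(quiet_flow).max()
    cleaned = np.where(np.abs(flow) > threshold, flow, 0)
    return cleaned, threshold

# Hypothetical data: a quiet section with only camera noise, and a frame
# where a single pixel carries real movement.
quiet_flow = np.full((4, 4), 0.01 + 0.01j)
flow = np.full((4, 4), 0.005 + 0.0j)
flow[0, 0] = 1 + 1j
cleaned, threshold = suppress_noise(flow, quiet_flow)
```

Everything below the threshold is replaced by zero, so those pixels drop out of all further clustering calculations, exactly as the text prescribes.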
D. Clustering features A clustering algorithm needs features to classify objects in different clusters. We tested the algorithms with different features, starting with the following three: magnitude, direction and location. The magnitude can be found by taking the absolute value of the vectors. It represents the strength of the movement. For example, as the patient strongly moves his head from right to left, this will result in vectors with a high magnitude for the pixels that correspond with the location of the head. If the patient moves his hand at the same moment, these pixels will also have the same magnitude but a different location, and they are therefore grouped in two different clusters when two clusters are specified. As a second feature the direction is used. We use the radian angular value of the vectors by default; it can be converted to degrees, which makes no difference for the algorithm. Due to the scale of 0 to 360, phase shifting occurred: two pixels pointing towards 0 or 360 degrees have the same physical direction, but as a feature this is falsely interpreted by the clustering algorithm, resulting in bad classification as shown in Fig. 2-B. Fig. 2 Phase shift results in bad clustering, clusters are covering each other. Direction is plotted in radians. A solution to this problem is to split up the angular feature into two parameters. When the imaginary and real parts are divided by the magnitude of the complex vector, the magnitude is suppressed and the direction is given as a coordinate on the unit circle. Phase jumps are eliminated, but one feature is replaced by two features, which has consequences for the weight of this feature. This is discussed in section E. As a result, this feature will cluster all the movements that point in a single direction, independent of location or magnitude. The third feature consists of the location of the movement vector. We use the coordinate of the pixel in the x-y plane.
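The unit-circle trick for the direction feature can be sketched in a few lines; the function name is illustrative.

```python
import numpy as np

def direction_features(flow):
    """Replace the raw angle by two phase-jump-free features: the real
    and imaginary parts divided by the magnitude, i.e. the direction as
    a coordinate on the unit circle."""
    mag = np.abs(flow)
    safe = np.where(mag == 0, 1.0, mag)   # avoid division by zero
    return flow.real / safe, flow.imag / safe

# Two vectors pointing almost the same way physically: 359 and 1 degrees.
flow = np.array([np.exp(1j * np.deg2rad(359.0)),
                 np.exp(1j * np.deg2rad(1.0))])
cos_f, sin_f = direction_features(flow)
```

As raw angles the two samples differ by almost the full scale, which is the phase-shift problem of Fig. 2; as unit-circle coordinates they are nearly identical. The location feature mentioned next in the text is simply the pixel's x-y coordinate and needs no such transformation.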
This location feature is used to make a distinction between movements that occur in different parts of the image. E. Clustering algorithm First an appropriate clustering algorithm needs to be selected. The algorithms we investigate are [3]: spectral clustering using a sparse similarity matrix, spectral clustering using the Nyström method with orthogonalization, spectral clustering using the Nyström method without orthogonalization, and k-means clustering. Method 1: spectral clustering using a sparse similarity matrix. This matrix gives the Euclidean distances between all data points [2], [18]. This type of clustering gives good results with a higher number of clusters. Nevertheless, it requires too much processing time and it gave bad results with two or three clusters. The calculation of the sparse similarity matrix alone would take half a minute (e.g. Table I), caused by the high amount of data our image features can contain. This much processing time is not feasible on a normal PC system. Methods 2 and 3: using the Nyström method with or without orthogonalization, the processing time decreases significantly (e.g. Table I). This is because Nyström uses a fixed number of data points to calculate the similarity matrix: around 200 samples are compared to the rest of the data points. The cluster quality is more than average and is usable for further processing. No difference in quality was noticeable between both Nyström methods, but without the orthogonalization the clustering is faster. Method 4: k-means first randomly specifies k centre points. Next it calculates the Euclidean distance between these centroids and the data points; data points are grouped with the closest centre point. By repeating this step and calculating new centre points for the current clusters, it minimizes the distance within each cluster and thereby separates the clusters. Using the selected features, k-means provided good results. It requires minimal processing time (e.g.
Table I) and gives an accurate clustering. We will continue our research using this algorithm. TABLE I CLUSTERING SPEED COMPARISON. Time needed for clustering one frame (average of 20 iterations) on a Pentium D 830, 3 GB RAM, Windows 7 system. Clustering type and processing time: Spectral clustering 39 s; Nyström with orthogonalization s; Nyström without orthogonalization s; K-means s. F. K-means and scaling of the features The next step applies the features to the clustering algorithm that will cluster our data. Without scaling, the x-coordinate varies from 0 to 320 while the magnitude of the complex vector varies between 0 and 1. A scale has to be applied to define which feature gets more weight and which one gets less. In order to find good weights, we use visual inspection instead of a mathematical approach. This method is easier to use and provides a clear understanding of the used data and algorithms. Features are plotted on a 3D plot: coordinates on the x- and y-axes and the other feature on the z-axis (e.g. Fig. 3). Fig. 3 Features direction (A) and magnitude (B) plotted versus x- and y-coordinates. The direction feature has a larger weight; the algorithm clusters on this feature, as its data points have a bigger variation and more weight. Plot (B) shows the impact of this clustering on the magnitude equivalent. This visualization makes it easier to see the impact of the scaling. Adapting these scales soon led to an appropriate clustering. The final result should be as follows: pixels are grouped when they differ in location, intensity of movement and direction. Ideally the clusters cover different body parts. G. Cluster analysis The next issue is providing the number of clusters before the clustering. Every movement is different, which gives a varying number of clusters. We use inter and intra cluster distances to check the quality of the clustering [17]. The inter cluster distance is the distance between the clusters.
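Method 4 and the feature weighting can be sketched as follows. The paper uses Matlab's kmeans, so this Python version, with a deliberately simplistic deterministic initialisation, is only illustrative.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal k-means in the spirit of the method described in the
    text: assign each point to the nearest centre (Euclidean distance),
    recompute the centres, repeat. For simplicity the first k points
    seed the centres; real implementations pick them randomly."""
    X = np.asarray(X, dtype=float)
    centres = X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

def scale_features(X, weights):
    """Weight features by multiplying each column, e.g. to keep an
    x-coordinate in [0, 320] from dominating a magnitude in [0, 1].
    The weights themselves were tuned visually in the paper."""
    return np.asarray(X, dtype=float) * np.asarray(weights, dtype=float)

# Two well-separated point groups, interleaved so the seeding works.
X = np.array([[0.0, 0.0], [10.0, 10.0], [0.1, 0.0],
              [10.0, 10.1], [0.0, 0.1], [9.9, 10.0]])
labels, centres = kmeans(X, 2)
```

In the paper, the quality of such a clustering is then judged with inter and intra cluster distances.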
The bigger the inter cluster distance, the better the clustering, as there is a clear distinction between the clusters. The intra cluster distance is the average distance between the centroid and the other points of the cluster; it should be minimized to have compact clusters. In this research a maximum of four clusters is common; one cluster is of course the minimum, which occurs when a person only moves an arm, for example. H. Defining thresholds All frames of the test video are clustered up to four clusters. Several features are extracted out of the distance measures, the most important being the following four: the maximum overall inter cluster distance, the standard deviation of the intra cluster distance for two clusters, the mean of the intra cluster distance for two clusters, and the maximum of all intra cluster distances for one frame. The frames are labeled with the right number of clusters and compared to the selected features. Our goal is to find similarity between the labeled frames and the features. The comparison showed us that the maximum inter cluster distance and the standard deviation for two clusters would give the best results. Next, the labeled frames are classified using the selected features (e.g. Fig. 4). The classification is based on Euclidean distance. The quality of the thresholds is tested on sensitivity, specificity, positive predicted value (PPV) and negative predicted value (NPV). These measures are stated below (for the distinction between class 1 and class 2): sensitivity = TP / (TP + FN) (2) specificity = TN / (TN + FP) (3) PPV = TP / (TP + FP) (4) NPV = TN / (TN + FN) (5) TP stands for true positives, FN for false negatives, TN for true negatives and FP for false positives. The results of the application of thresholds can be found in section III. III. RESULTS A. Cluster analysis results To find the right number of clusters, thresholds are defined. First the video is labeled (i.e. the right number of clusters is added manually to each frame). This is done for three videos.
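The four quality measures can be computed directly from the confusion counts; a small sketch with an illustrative set of counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, positive and negative predicted value
    from the counts of true/false positives and negatives, matching the
    standard confusion-matrix definitions used in the text."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts, not taken from the paper's test set.
m = binary_metrics(tp=9, fp=4, tn=6, fn=1)
```

With these counts the sensitivity is 0.9 and the specificity 0.6, illustrating how a classifier can catch most positives while still mislabeling many negatives, which is the pattern visible in the paper's own results.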
Using the maximal inter cluster distance, the standard deviation of the intra cluster distance for two clusters and the right number of clusters, the data is trained. Our training set consists of 80 frames; the rest of the data is used as test set (i.e. 56 frames). Using classification in Matlab, thresholds are calculated. The thresholds are tested using the test set; the results are shown in Table II. TABLE II CLUSTER ANALYSIS RESULTS FOR THE DISTINCTION BETWEEN ONE OR TWO CLUSTERS AND THE DISTINCTION BETWEEN TWO OR THREE CLUSTERS. Separation between class 1 and 2: sensitivity 90.90%, specificity 62.50%, PPV class A 83.30%, NPV class B 76.90%. Separation between class 2 and 3: sensitivity 93.30%, specificity 38.90%, PPV 71.80%, NPV 77.80%. Fig. 4 This graph shows the classification between one or two clusters. Next, thresholds can be defined using a training and test set to specify how many clusters should be used. The quality of the thresholds is tested on sensitivity, specificity, PPV and NPV; all of these values should be as high as possible. Results are discussed in section IV. B. Movement analysis In this section the results of our study are presented. In the following movie sequence a young patient is monitored; she randomly moves her head from the left side to the right side of the bed (e.g. Fig. 5-A,C). Fig. 5 Sample of two frames: in frame 41 the head is moving towards the left side, indicated by the red colour and the arrows; in frame 45 the head is moving towards the right side. Screenshots (B) and (D) show the clustering. Both clusters that cover the head contain different movement information. We cut 60 frames, or five seconds, out of the video. Two clusters are selected: one cluster represents the head, the other features the lower body part. The clusters are segmented using the standard k-means clustering method; the Nyström method would give similar results. For both body parts, direction and intensity of the movement are plotted.
It can be seen that the direction plot of the head crosses the horizontal axis multiple times (e.g. Fig. 6-A). In the example the head moves towards the left side and next towards the right side; this can be seen as -120 degrees (left) and 15 degrees (right) (e.g. Fig. 6-A). Fig. 6 The plot above shows the direction and intensity of the head movement. This information is provided by the cluster that covers the head. Figure 6-B plots the intensity of the movement. Figure 7 gives information about the direction and intensity of movement for the lower body part. Fig. 7 The plot above shows the direction and intensity of the lower body part movement. This information is provided by the cluster that covers the lower body part. This information can be used to conclude whether a patient is simply moving or having a seizure. Strong movement and fast changes in direction are signals that can point to seizures; this needs to be studied in the future to find measures that confirm it. The next example shows a sleeping patient. The aim of this test was to measure the breathing of the patient. Originally the breathing was monitored using sensors attached to the upper body; now we can monitor this using video detection. For this test the algorithm uses one cluster. The plot shows 20 seconds, or 250 frames, of the original video. The breathing is clearly visible in the signal, as the angle changes 180 degrees each sequence (e.g. Fig. 8). In the intensity plot it can be seen that inhaling produces slightly more movement than exhaling (e.g. Fig. 8-B). Fig. 8 Breathing monitored and visualised with angular movement and intensity. The breathing pattern is clearly visible. In future work the monitoring of respiration should be combined with nocturnal movement. This will not be easy, as the magnitude of the respiration is much smaller compared to the magnitude of movement. IV. DISCUSSION We discuss several possible improvements for our method (e.g.
automated cluster scaling and improved cluster analysis). It would be interesting to test a system that scales the features depending on the situation. As a limited number of frames provides poor clustering quality, automatic scaling might solve this problem in these cases. Sometimes certain features should have more weight than others; e.g. when the complete body is moving, the location feature might be emphasized a bit to obtain a better distinction between the clusters. Cluster analysis provides a good automated distinction between one, two or three clusters. The difference between two or three is less accurate, but good in certain circumstances where the inter cluster distance has a higher value. The specificity of class three is a bit too low: the chances are 61.10% that a frame labeled in class 3 should actually be clustered using two clusters (e.g. Table II). It is plausible that frames belonging to class 2 are falsely classified in class 3. Note that it is possible that they are classified correctly, as the labeling sometimes allows different correct numbers of clusters. As a conclusion we can say that the algorithm is able to make a distinction between one, two or three clusters, but the right number of clusters needs to be supplied manually when four or more clusters are required. Other cluster features have to be sought in the future. Next, in our approach the ideal number of clusters is defined before information is extracted out of the clusters. On a system with enough power, different cluster properties (intensity, direction, ...) can be compared over time to see which number of clusters is ideal. E.g. if a body part is moving in a certain direction with velocity x, it is expected that the same body part will still be moving in the same direction in the next frame, with a slightly increased or decreased velocity. This way a cluster can be expected at the same location as in the previous frame, featuring slightly different properties.
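The frame-to-frame matching step of this idea can be sketched as a greedy nearest-centroid link; the function name is illustrative and this is a deliberately simple stand-in for real tracking.

```python
import numpy as np

def match_clusters(prev_centres, new_centres):
    """Link each cluster centre in the new frame to the closest cluster
    centre of the previous frame (greedy nearest-centroid matching)."""
    prev_centres = np.asarray(prev_centres, dtype=float)
    links = []
    for i, c in enumerate(np.asarray(new_centres, dtype=float)):
        d = np.linalg.norm(prev_centres - c, axis=1)
        links.append((i, int(d.argmin())))
    return links

# Hypothetical centroids: the two clusters swap list order between
# frames but each moves only slightly, so the links recover the swap.
links = match_clusters([[0.0, 0.0], [10.0, 10.0]],
                       [[9.0, 10.0], [1.0, 0.0]])
```

Once clusters are linked across frames, their properties (direction, intensity) can be compared over time, as the text suggests.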
Increasing the frame rate of the video will make clusters change more gradually, but more system power is needed. This method can be described as cluster tracking. We also experimented with the pixel intensity of the original video pixels as a feature, but this resulted in bad clusters. This is because the pixel intensity is not directly related to the movement of the patient; the light source stays at the same position over time. Therefore it was not studied any further. Finally, optical flow sometimes has problems with slow movement, as the magnitude of these movements becomes comparable to the magnitude of noise. This requires an adjustment of the optical flow settings. Optical flow needs to be calculated on two frames: the current frame and a frame shifted in time. This could be improved by measuring the average movement over a fixed period in order to calculate which frame in the past should be used to calculate the optical flow. This way it is possible to have an accurate motion field with less noise. V. CONCLUSION This research showed us that it is possible to cluster movement in video images. Different body parts can be separated using the location, direction and intensity of the movement. Out of these clusters further information can be extracted on whether or not the patient is having epileptic seizures, is breathing, etc. Of course this method still has room for improvement. VI. ACKNOWLEDGMENTS Special thanks to the Mobilab team and KHKempen for making this research possible. REFERENCES [1] Casson, A., Yates, D., Smith, S., Duncan, J., & Rodriguez-Villegas, E. (2010). Wearable Electroencephalography. Engineering in Medicine and Biology Magazine, IEEE (Volume 29, Issue 3), 44. [2] Chen, W.-Y., Song, Y., Bai, H., Lin, C.-J., & Chang, E. Y. (2008). Parallel Spectral Clustering in Distributed Systems. Lecture Notes in Artificial Intelligence (Vol. 5212). [3] Chen, W.-Y., Song, Y., Bai, H., Lin, C.-J., & Chang, E. Y. (2010).
Parallel Spectral Clustering in Distributed Systems toolbox. Retrieved from [4] Cuppens, K., Lagae, L., Ceulemans, B., Van Huffel, S., & Vanrumste, B. (2009). Automatic video detection of body movement during sleep based on optical flow in pediatric patients with epilepsy. Medical and Biological Engineering and Computing (Volume 48, Issue 9). [5] Dai, Q., & Leng, B. (2007). Video object segmentation based on accumulative frame difference. Tsinghua University, Broadband Network & Digital Media Lab of Dept. Automation, Beijing. [6] De Tollenaere, J. (2008). Spectrale clustering. In Zelflerende Spraakherkenning (pp. 5-18). Katholieke Universiteit Leuven, Leuven, Belgium. [7] Fleet, D. J., & Weiss, Y. (2005). Optical Flow Estimation. In N. Paragios, Y. Chen, & O. Faugeras, Mathematical Models for Computer Vision: The Handbook. Springer. [8] Fuh, C.-S., & Maragos, P. (1989). Region-based optical flow estimation. Harvard University, Division of Applied Sciences, Cambridge. [9] Galic, S., & Loncaric, S. (2000). Spatio-temporal image segmentation using optical flow and clustering algorithm. In Proceedings of the First International Workshop on Image and Signal Processing and Analysis. Zagreb, Croatia: IWISPA. [10] International League Against Epilepsy. (n.d.). Epilepsy resources. Retrieved from [11] Lee, Y., & Choi, S. (2004). Minimum entropy, k-means, spectral clustering. ETRI, Biometrics Technol. Res. Team, Daejeon. [12] Nijsen, N. M., Cluitmans, P. J., Arends, J. B., & Griep, P. A. (2007). Detection of Subtle Nocturnal Motor Activity From 3-D Accelerometry Recordings in Epilepsy Patients. IEEE Transactions on Biomedical Engineering. [13] Raskutti, B., & Leckie, C. (1999). An Evaluation of Criteria for Measuring the Quality of Clusters. Telstra Research Laboratories. [14] Schachter, S. C. (2006). Electroencephalography. Retrieved from [15] Schunck, B. G., & Horn, B. K. (1980). Determining Optical Flow.
Massachusetts Institute of Technology, Artificial Intelligence Laboratory, Cambridge. [16] Top, H. (2007). Optical flow en bewegingillusies. University of Groningen, Faculty of Mathematics & Natural Sciences. [17] Turi, R. H., & Ray, S. (1999). Determination of Number of Clusters in K-Means Clustering and Application in Colour Image Segmentation. Monash University, School of Computer Science and Software Engineering, Victoria, Australia. [18] Von Luxburg, U. (2007). A Tutorial on Spectral Clustering. Statistics and Computing (Volume 17, Issue 4). [19] Xu, L., Jia, J., & Matsushita, Y. (2010). Motion Detail Preserving Optical Flow Estimation. The Chinese University of Hong Kong, Microsoft Research Asia, Hong Kong. [20] Zagar, M., Denis, S., & Fuduric, D. (2007). Human Movement Detection Based on Acceleration Measurements and k-NN Classification. Univ. of Zagreb, Zagreb. [21] Zelnik-Manor, L. (2004, October). The Optical Flow Field. Retrieved from A Comparison of Voltage-Controlled Ring Oscillators for Subsampling Receivers with ps Resolution S. V. Roy A comparison between different stages in ring oscillators is presented, both in terms of jitter and phase noise, with the goal of finding the architecture with the least jitter and phase noise. A multi-path and a single-path ring oscillator are also compared. From the simulation results we can conclude that a multi-path differential architecture produces less phase noise and jitter than a single-path differential architecture, owing to the low RMS values of the impulse sensitivity function that are obtained. Index Terms Voltage Controlled Ring Oscillator, design methodology, jitter, phase noise, multi-path. I. INTRODUCTION These days, ring oscillators are used in many applications, such as clock recovery circuits for serial data communications [1]-[4], disk-drive read channels [5], [6], on-chip clock distribution [7]-[10], and integrated frequency synthesizers [10], [11].
In these applications, the ring oscillator is typically a voltage-controlled ring oscillator (VCRO). The use of Ultra-Wideband (UWB) radar has been proven useful in the biomedical industry. It can be used for monitoring a patient's breathing and heartbeat [12], hematoma detection [13] and 3-D mammography [14]. In [15], [16] a resolution of 11 bits or more is specified for the ADC, together with a sampling frequency whose jitter is less than or equal to 1 ps. This sampling frequency will be provided by a VCRO in a Phase-Locked Loop (PLL). Fig. 1 shows that for each new period a certain delay τ is added, so for each period a sample is taken a time τ later. Fig. 1 Principle of UWB pulse subsampling Because the time τ is very small, the jitter must also be small: the jitter cannot be greater than one tenth of the delay. This paper describes how a ring oscillator is used to add a certain delay to each period, for example to perform subsampling. Fig. 2 shows a subsampling circuit using a PLL. A frequency division by 5 occurs, because the frequency produced by the VCRO is 5 times greater than the one provided to the phase detector. The logic selects the correct branch of the VCRO where necessary to obtain the desired delay. Each period, the next line is taken in order to obtain a resolution equal to the delay of a cell. Fig. 2 Principle of subsampling in a PLL using a VCRO A comparison is made between a multi-path VCRO and a single-path VCRO built from cross-coupled load stages. It is concluded that the multi-path architecture is better than the single-path one. This is visible in the delay of a single stage, but jitter and phase noise are also better. Jitter is a variation in the timing of a periodic signal, often considered relative to a clock source. An example of what can happen in the presence of jitter: if an analog-to-digital or digital-to-analog conversion occurs, it has to be ensured that the sampling time remains constant.
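Why jitter grows over time when each stage contributes an independent timing error can be illustrated numerically. The 1 ps per-stage figure below is an arbitrary assumption for the sketch, not a value from the paper.

```python
import numpy as np

# Each stage adds an independent timing error, so the variance of the
# accumulated jitter grows linearly with the number of stage delays
# crossed, and the RMS jitter grows as sqrt(n).
rng = np.random.default_rng(1)
per_stage_sigma = 1e-12                        # 1 ps RMS per stage (assumed)
errors = rng.normal(0.0, per_stage_sigma, size=(20000, 100))
accumulated = errors.cumsum(axis=1)            # total jitter after each stage
rms = accumulated.std(axis=0)                  # RMS jitter vs. stage count
```

After 100 stages the RMS jitter is roughly 10 times the per-stage value (sqrt(100)), which is exactly why a gated ring oscillator, which stops accumulating when switched off, is attractive.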
If this is not the case, i.e. there is jitter present in the clock signal of the analog-to-digital converter (ADC) or digital-to-analog converter (DAC), then the phase and amplitude of the signal are affected, depending on the magnitude of the jitter. Fig. 3 shows how jitter evolves over time. Fig. 3 Jitter expanding over time The jitter added by each separate stage is totally independent of the jitter added by the other stages. Therefore, the total variance of the jitter is given by the sum of the variances added by each stage separately. When a little jitter occurs, it expands over time, because several stages are passed which each add jitter, so the sum grows. Therefore a gated ring oscillator can be used [17]: it is turned on and, after a certain time, turned back off, so the jitter does not increase while the oscillator does not need to work. When talking about phase noise, this is the same phenomenon as jitter: phase noise is observed in the frequency domain and jitter in the time domain. In the next section the architecture of the VCRO is presented. The single-path differential architecture and the multi-path differential architecture are discussed, as well as the Cross-Coupled Load (CCL) architecture, which is the architecture of a single element. Thereafter, a section discusses the calculation of the impulse sensitivity function (ISF). This ISF is necessary to calculate the RMS value, which in turn is required to calculate phase noise and jitter. Finally, some simulation results are discussed. II. VCRO ARCHITECTURE This paper shows the difference in phase noise and jitter between a single-path differential ring oscillator and a multi-path ring oscillator. Both architectures have a different number of cross-coupled load (CCL) stages. The CCL was chosen because it is not too difficult in terms of architecture, yet still gives an acceptable jitter and phase noise [18]. Fig. 4 shows the cross-coupled load. It is clear why these stages are called cross-coupled loads: the transistors M1 and M2 are loads that are cross-wise coupled to the opposite branch. Fig. 4 Cross-coupled load To use these stages to form a ring oscillator, they must be connected in sequence as shown in Fig. 5. The negative (resp. positive) output of each stage is connected to the positive (resp. negative) input of the next stage. In this case, the negative (resp. positive) output of a stage can be found at the drain of M1 (resp. M2), and the positive (resp. negative) input of a stage at the gate of M1 (resp. M2). It must be ensured that there is an odd number of stages, otherwise the structure will not oscillate. An even number of stages can also be used, but then there has to be a crossing in the connection. This is shown in Fig. 6. Fig. 5 Differential ring oscillator (odd number of stages) Fig. 6 Differential ring oscillator (even number of stages) In a multi-path ring oscillator we have the same architecture as in the previous differential ring oscillator, except that the output of a stage is not only connected to the input of the next stage, but also to the input of one or more of the following stages. An example of a multi-path ring oscillator with 9 stages is shown in Fig. 7. If more inputs are used per stage, the complexity increases. In order to link the stages together, extra input transistors must be placed in parallel with the input transistor that was already present (Fig. 8). The size of these extra transistors will differ from those already there, because they must have less influence on the stage. This premature charging or discharging of the node provides a gain in speed. In (4), C is the effective capacitance on the node at the time of injection.
Fig. 7 Example of a multi-path ring oscillator with 9 stages Fig. 8 Cross-coupled load with multiple inputs III. CALCULATING THE ISF FOR RING OSCILLATORS This section summarizes the main results from [19], which allow us to calculate the ISF. When a current is injected at a node, phase noise and jitter will occur. Suppose that the current consists of an impulse with a charge q (in coulombs) and that it occurs at t = τ. This will cause a change in voltage on the node, given by: ΔV = q / C (4) with C the effective capacitance on the node at the time of injection. For small ΔV, the resulting change in phase φ(t) is proportional to the injected charge: Δφ = Γ(ω0τ) · ΔV / V_swing (5) where V_swing is the voltage swing over the capacitor and Γ is the time-varying proportionality constant, which has a period of 2π. Γ(x) represents the sensitivity of each point of the waveform to a perturbation and is therefore called the impulse sensitivity function (ISF). During simulations a current is injected in order to measure ΔV and Δφ; with these values one can calculate the ISF with (5). The phase shift Δφ is read out several periods after injecting the current. By injecting the current at different times, a graph as shown in Fig. 9 can be drawn. Fig. 9 Approximate waveform and ISF for ring oscillators The phase noise spectrum originating from a white noise current source is given by [20]: L(Δω) = 10 · log[ (Γ_rms² / q_max²) · (ī_n²/Δf) / (4Δω²) ] (6) where Γ_rms is the RMS value of the ISF, ī_n²/Δf the single-sideband power spectral density of the noise current, q_max the maximum charge swing and Δω the angular frequency offset from the carrier. The related timing jitter after a delay Δt is then given by: σ_Δt = (Γ_rms / (q_max · ω0)) · sqrt( (ī_n²/Δf) · Δt / 2 ) (7) For the calculation of phase noise and jitter using (6) and (7), one needs to know the RMS value of the ISF. As can be seen in these formulas, it is preferable to keep this RMS value as low as possible. For the oscillator waveform in Fig. 9, the ISF has a maximum of 1/f'_max, where f'_max is the maximum slope of the normalized waveform f in (8).
What is desired, therefore, is that the slope is as steep as possible. Equation (8) in [19] models the normalized output waveform f of a practical oscillator. If it is assumed that the rise and fall times are the same, Γ_rms can be estimated as: Γ_rms² ≈ (1/2π) · (4/3) · (1/f'_max)³ (9) On the other hand, the stage delay is proportional to the rise time: t̂_D = η / f'_max (10) where t̂_D is the normalized stage delay and η is the proportionality constant, which is typically close to one. The period is 2N times longer than a single stage delay, so in normalized terms: 2π = 2N · t̂_D (11) where N is the number of stages. If formulas (9), (10) and (11) are combined, the following approximate formula is obtained: Γ_rms ≈ sqrt(2π²/3) / (η · N)^1.5 (12) Note that the 1/N^1.5 dependence of Γ_rms is independent of the value of η. IV. EXPERIMENTAL RESULTS By simulating both the multi-path and the single-path differential architecture, a minimum delay per stage is obtained for each of these architectures. In the multi-path differential architecture this delay is 20 ps per element. For example, if a window of 2 ns is desired, 100 elements are needed to sample the entire window. In the single-path differential architecture this delay is 50 ps per element; then 40 elements are needed to sample the entire window of 2 ns. As discussed in the previous section, the phase noise and jitter are calculated using (6) and (7), but before these formulas can be used, the RMS value of the ISF must be known. The ISF of every architecture must be simulated. This is achieved by injecting a current at a node and measuring the phase shift several transitions later; with (5), the ISF can be calculated. The ISF of the single-path differential architecture is shown in Fig. 10; Fig. 11 shows the ISF at the rising edge. The ISF of the multi-path architecture can be found in Fig. 12; Fig. 13 shows again the ISF at the rising edge. It can already be noted that the ISF becomes wider when the number of elements decreases.
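A numerical check of the 1/N^1.5 behaviour. The code assumes an idealised ISF with two triangular lobes per period, each of height and half-width π/N (i.e. η = 1); this lobe shape is a sketch of the approximation, not taken from the simulations.

```python
import numpy as np

def gamma_rms(N, samples=200_000):
    """Numerical RMS of an idealised ring-oscillator ISF over one 2*pi
    period: triangular lobes at the rising (x = 0, wrapped) and falling
    (x = pi) transitions, height and half-width pi/N (eta = 1)."""
    x = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    h = np.pi / N
    dist_to_0 = np.minimum(x, 2 * np.pi - x)            # wrapped distance
    lobe_rising = np.clip(h - dist_to_0, 0.0, None)
    lobe_falling = np.clip(h - np.abs(x - np.pi), 0.0, None)
    isf = lobe_rising + lobe_falling
    return float(np.sqrt(np.mean(isf ** 2)))

r5, r20 = gamma_rms(5), gamma_rms(20)   # quadrupling N should divide
                                        # the RMS by 4**1.5 = 8
```

The numerical RMS agrees with the closed form sqrt(2*pi**2/3) / N**1.5 under these assumptions, and the ratio between N = 5 and N = 20 shows the N**-1.5 scaling directly.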
When calculating the RMS value from this ISF, it can be noted that the RMS value decreases with an increasing number of elements. This can be seen in Fig. 14. Looking at (6) and (7), it can be observed that the RMS value is preferably kept as low as possible. Fig. 10 ISF of a single-path differential architecture Fig. 11 Magnification of the ISF at the rising edge Fig. 12 ISF of the multi-path differential architecture
Lee, "A 0.4-μm CMOS 10 Gb/s 4-PAM pre-emphasis serial link transmitter," in Symp. VLSI Circuits Dig. Tech. Papers, June.
[5] W. D. Llewellyn, M. M. H. Wong, G. W. Tietz, and P. A. Tucci, "A 33 Mb/s data synchronizing phase-locked loop circuit," in ISSCC Dig. Tech. Papers, Feb.
[6] M. Negahban, R. Behrasi, G. Tsang, H. Abouhossein, and G. Bouchaya, "A two-chip CMOS read channel for hard-disk drives," in ISSCC Dig. Tech. Papers, Feb.
[7] M. G. Johnson and E. L. Hudson, "A variable delay line PLL for CPU-coprocessor synchronization," IEEE J. Solid-State Circuits, vol. 23, Oct.
[8] I. A. Young, J. K. Greason, and K. L. Wong, "A PLL clock generator with MHz of lock range for microprocessors," IEEE J. Solid-State Circuits, vol. 27, Nov.
[9] J. Alvarez, H. Sanchez, G. Gerosa, and R. Countryman, "A wide-bandwidth low-voltage PLL for PowerPC(TM) microprocessors," IEEE J. Solid-State Circuits, vol. 30, Apr.
[10] I. A. Young, J. K. Greason, J. E. Smith, and K. L. Wong, "A PLL clock generator with MHz lock range for microprocessors," in ISSCC Dig. Tech. Papers, Feb.
[11] M. Horowitz, A. Chen, J. Cobrunson, J. Gasbarro, T. Lee, W. Leung, W. Richardson, T. Thrush, and Y. Fujii, PLL design for a 500 Mb/s
[12] I. Immoreev and Teh-Ho Tao, "UWB Radar for Patient Monitoring," IEEE Aerospace and Electronic Systems Magazine, vol. 23, no. 11.
[13] C. N. Paulson et al., "Ultra-wideband Radar Methods and Techniques of Medical Sensing and Imaging," Proceedings of the SPIE, vol. 6007.
[14] S. K. Davis et al., "Breast Tumor Characterization Based on Ultrawideband Microwave Backscatter," IEEE Transactions on Biomedical Engineering, vol. 55, no. 1.
[15] M. Strackx et al., "Measuring Material/Tissue Permittivity by UWB Time-domain Reflectometry Techniques," Applied Sciences in Biomedical and Communication Technologies (ISABEL), 3rd International Symposium.
[16] M.
Strackx et al., "Analysis of a digital UWB receiver for biomedical applications," European Radar Conference (EuRAD), 2011, submitted for publication.
[17] M. Z. Straayer and M. H. Perrott, "A Multi-Path Gated Ring Oscillator TDC With First-Order Noise Shaping," IEEE J. Solid-State Circuits, vol. 44, no. 4, Apr.
[18] R. J. Betancourt Zamora and T. Lee, "Low Phase Noise CMOS Ring Oscillator VCOs for Frequency Synthesis."
[19] A. Hajimiri, S. Limotyrakis, and T. H. Lee, "Jitter and Phase Noise in Ring Oscillators," IEEE J. Solid-State Circuits, vol. 34, no. 6, June.
[20] A. Hajimiri and T. H. Lee, "A general theory of phase noise in electrical oscillators," IEEE J. Solid-State Circuits, vol. 33, Feb.

Testing and integrating a MES

Gert Vandyck
FBFC International, Europalaan 12, B-2480 Dessel, Belgium
Supervisor(s): Marc Van Baelen

Abstract. Production volume and quality are very important for any company. When you are producing nuclear fuel, quality concerns increase further, as failures put at risk the lives of both employees and the community to which the fuels are shipped. Technology and automation address these concerns. In a factory environment, the system that manages production processes and automation is called a Manufacturing Execution System (MES). This paper highlights the project of testing a new MES and integrating it into an existing environment with other MESs already in place.

Keywords: Manufacturing Execution System, software testing

I. INTRODUCTION

FBFC International produces fuel assemblies for nuclear Pressurized Water Reactors based on uranium dioxide (UO2) and mixed oxide (MOX). The production of these assemblies is divided into three steps: fabrication of the UO2 pellets, fabrication of fuel rods, and the final assembly of 264 rods. The last two production steps each already had an MES for their portion of the production. FBFC International recognized the importance of a reliable MES both to maintain the highest quality and to optimize the production volume of the pellet manufacturing.
Unlike the latter two production steps, the UO2 pellet line is in constant production, 24 hours a day, 7 days a week. When the two previous individual MESs were integrated, the process did not go smoothly: it required a lot of time and support from IT and production workers to minimize downtime. FBFC could not afford such downtime in pellet manufacturing, which is why this project was so critical.

II. MANUFACTURING EXECUTION SYSTEM

A Manufacturing Execution System (MES) is an information processing and transmission system in a production environment. MESA International (Manufacturing Enterprise Solutions Association) is a global community focused on improving operations management capabilities through the effective application of technology solutions and best practices. In one of its white papers, MESA defined 11 manufacturing execution activities, which later gained recognition primarily thanks to the MESA honeycomb model, illustrated in figure 1 [1].

Fig. 1. Manufacturing execution activities in the honeycomb model. © MESA International.

Simply put, the original concept of a Manufacturing Execution System concerns information systems that support the tasks a production department must do in order to:

- Prepare and manage work instructions
- Schedule production activities
- Monitor the execution of the production process
- Gather and analyze information about the process and the product
- Solve problems and optimize procedures

At FBFC International, most of these things were done manually, without the use of any automation.

III. VALIDATION OF THE TECHNICAL ANALYSIS

With the intention of purchasing core software for the new MES, a detailed technical analysis of the system requirements needed to be both created and validated. The document that describes the functionality of the system is called the system requirements, or spec(s) for short. Every aspect of the production process had to be studied extensively.
Only then could one judge whether the technical analysis covered all conditions. The specs would be the reference for the software supplier, who would customize the software, and they would also be the primary reference used by the in-house testers.

There is another reason why it's important to properly validate the system requirements before the software is developed: any major addition or change to the system requirements will be charged separately by the software supplier. Furthermore, changes added later can, and often do, cause unforeseen bugs in parts of the software that had already been validated. So we can conclude that it's crucial that the specs are validated with the greatest precision.

The MES for FBFC International wasn't built from scratch; it's a modified version of the MES of a sister company. The production process at the sister company is similar to ours, yet there are significant differences. Consequently, their original system requirements could act as a basis for our MES, but they needed revisions to reflect how the two plants differ. That's why, during the requirements validation, it was vitally important to check that all differences had been taken into account. How we did this is covered in this paper.

IV. HARDWARE AND SOFTWARE PLATFORM

To guarantee a smooth installation and to set up a proper test environment, it's necessary to have a good understanding of how the software works.

A. Wonderware InTrack

The MES module used by the software supplier is InTrack, developed by Wonderware. InTrack is the core of the MES. All data is stored in a Microsoft SQL Server database. This database is generated by the InTrack setup, and custom tables have been added by the software supplier. The software supplier used Visual Basic programs to create process-specific functionality within the InTrack module.

B.
OPC Server

To communicate with the Programmable Logic Controllers (PLCs) of all the machines, an OLE (Object Linking and Embedding) for Process Control (OPC) server is used. The OPC server connects to the various PLCs and translates their data into a standards-based OPC format. This information can then be accessed by the Visual Basic programs through an OPC client.

V. TESTING

"Software testing is the process of executing a program or system with the intent of finding errors." [2] Testing an MES is an important step before it can be implemented. The testing done by the client is called acceptance testing; these tests are performed prior to the actual transfer of ownership.

A. Acceptance Testing

During these tests, the end users validate that the software does what it is expected to do. This basically means that they need to verify that the software conforms to the technical specifications documented during the requirements phase. Besides some test scenarios, we mostly used the system specifications during these tests. For every action there are conditions that have to be met before the action is executed, and all conditions detailed in the spec needed to be verified by the end users.

This wasn't easy at the start. The main problem was not technical issues but the time priorities of the end users: as important as any technical issue is the politics of getting the end users to prioritize time for testing. After some inter-departmental effort, all users were able to perform their tests. Besides validating the functionality, the end users gained a far better understanding of the workings of the software than with any previous integration or implementation.

The verification of the correct execution of many functions was not so easily completed. Many actions resulted only in changes to the data stored in the database. There were some reports available in which the user could check some data, but a lot of information was not visible.
Thus, the IT department had to do much of the acceptance testing by analyzing the log files. These log files contain a lot of information:

- Date and time
- Functions being used
- Input and output parameters
- All queries being executed on the database

B. Simulation of Machines

In order to simulate the production environment, the working of the different machines had to be simulated. To accomplish this, the software developer supplied us with a simulator, illustrated in figure 2.

Fig. 2. Simulation of the spheroidization process.

Basically, this simulator changes the OPC items just like the PLCs would.

C. Simulations

Besides the verification of the technical specification, we also did some simulations with the senior production operators. These operators know every aspect of the operation by heart, so they can easily spot shortcomings in the software. We also did a complete simulation with all the departments, mainly to explain everybody's function in the process.

D. Bug Tracking

The software suppliers were on-site only during the first few days of the acceptance testing. During further testing, we reported bugs and deviations from the specifications on a daily basis to the software supplier. Diligence was required to uniquely identify each reported bug.
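As an aside, the four log fields enumerated above lend themselves to simple tooling. Below is a hypothetical sketch of a parser for such lines; the pipe-separated layout, the field order, and the example values are assumptions for illustration, not FBFC's actual log format:

```python
# Hypothetical log-line parser for acceptance-test analysis.
# Assumed layout (NOT the actual FBFC format):
#   timestamp|function|params|query

def parse_log_line(line):
    """Split one pipe-separated log line into the four fields the
    paper lists: date/time, function used, input/output parameters,
    and the SQL query executed on the database."""
    timestamp, function, params, query = line.rstrip("\n").split("|", 3)
    return {
        "timestamp": timestamp,
        "function": function,
        "params": params,
        "query": query,
    }

# Example line; names and values are made up.
line = "2011-03-01 14:02:11|MoveLot|lot=4711;dest=OVEN2|UPDATE lots SET ..."
entry = parse_log_line(line)
print(entry["function"])  # MoveLot
```

Even this much structure makes it possible to filter a day's log down to the functions exercised by a particular test scenario.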
Cloud-native storage for Kubernetes with Rook

Simpler Start

The basic requirement before you can try Rook is a running Kubernetes cluster. Rook does not place particularly high demands on the cluster: the configuration only needs to support creating local volumes on the individual cluster nodes with the existing Kubernetes volume manager. If this is not the case on all machines, Rook's pod definitions let you specify explicitly which machines may contribute storage and which may not.

To make getting started as easy as possible, the Rook developers have come up with some conveniences. Rook itself comes in the form of Kubernetes pods, and you can find example files on GitHub [6] that start these pods. The operator namespace contains all the components required for Rook to control Ceph; the cluster namespace starts the pods that run the Ceph components themselves.

Remember that for a Ceph cluster to work, it needs at least the monitoring servers (MONs) and its data silos, the object storage daemons (OSDs). In Ceph, the monitoring servers take care of both enforcing a quorum and ensuring that clients know how to reach the cluster by maintaining two central lists: the MON map lists all existing monitoring servers, and the OSD map lists the available storage devices. However, the MONs do not act as proxy servers. Clients always need to talk to a MON when they first connect to a Ceph cluster, but as soon as they have a local copy of the MON map and the OSD map, they talk directly to the OSDs and also to other MON servers.

Ceph, as controlled by Rook, makes no exceptions to these rules. Accordingly, the cluster namespace from the Rook example also starts corresponding pods that act as MONs and OSDs. If you run the kubectl get pods -n rook command after starting the namespaces, you can see this immediately: at least three pods will be running as MON servers, along with various pods for the OSDs.
Additionally, the rook-api pod, which is of fundamental importance for Rook itself, handles communication with the other Kubernetes APIs. At the end of the day, a new volume type is available in Kubernetes after the Rook rollout. The volume points to the different Ceph front ends and can be used by users in their pod definitions like any other volume type. Complicated Technology Rook does far more work in the background than you might think. A good example of this is integration into the Kubernetes Volumes system. Because Ceph running in Kubernetes is great, but also useless if the other pods can't use the volumes created there, the Rook developers tackled the problem and wrote their own volume driver for use on the target systems. The driver complies with the Kubernetes FlexVolume guidelines. Additionally, a Rook agent runs on every kubelet node and handles communication with the Ceph cluster. If a RADOS Block Device (RBD) originating from Ceph needs to be connected to a pod on a target system, the agent ensures that the volume is also available to the target container by calling the appropriate commands on that system. The Full Monty Ceph currently supports three types of access. The most common variant is to expose Ceph block devices, which can then be integrated into the local system by the rbd kernel module. Also, the Ceph Object Gateway or RADOS Gateway (Figure 2) enables an interface to Ceph on the basis of RESTful Swift and S3 protocols. For some months now, CephFS has finally been approved for production; that is, a front end that offers a distributed, POSIX-compatible filesystem with Ceph as its back-end storage. From an admin point of view, it would probably already have been very useful if Rook were only able to use one of the three front ends adequately: the one for block devices. However, the Rook developers did not want to skimp; instead, they have gone whole hog and integrated support into their project for all three front ends. 
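To give an idea of what consuming such a volume looks like, here is a sketch of a pod manifest, written as a plain Python dict for illustration. The FlexVolume driver name `ceph.rook.io/rook` and the option keys (`pool`, `image`, `clusterNamespace`) are assumptions drawn from Rook's FlexVolume era, and the pool and image names are placeholders; check the documentation of the Rook release you actually run:

```python
# Sketch of a pod manifest using a Rook-provisioned FlexVolume.
# Driver name and option keys are assumptions (FlexVolume-era Rook);
# verify them against your Rook version before use.

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "rbd-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "busybox",
            "volumeMounts": [{"name": "data", "mountPath": "/data"}],
        }],
        "volumes": [{
            "name": "data",
            "flexVolume": {
                "driver": "ceph.rook.io/rook",  # assumed driver name
                "fsType": "ext4",
                "options": {                    # assumed option keys
                    "pool": "replicapool",      # placeholder pool
                    "image": "demo-image",      # placeholder RBD image
                    "clusterNamespace": "rook",
                },
            },
        }],
    },
}

print(pod["spec"]["volumes"][0]["flexVolume"]["driver"])
```

Dumped as YAML, a manifest of this shape is what the Rook agent on the target kubelet would act on when attaching the RBD to the container.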
If a container wants to use persistent storage from Ceph, you can either create a real Docker volume using a volume directive, obtain access credentials for the RADOS Gateway for RESTful access, or mount CephFS locally. The functional range of Rook is quite impressive.
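To summarize the three front ends from a client's perspective, here is a small table (as a dict) pairing each with a typical stock-Ceph client command. The commands are standard Ceph tooling rather than anything Rook-specific, and the pool, image, bucket, host, and monitor names are placeholders:

```python
# The three Ceph front ends Rook exposes, each with a typical
# client-side command. Commands are stock Ceph/S3 tooling, shown for
# illustration only; all names are placeholders.

front_ends = {
    # Block device: mapped into the kernel via the rbd module
    "rbd": "rbd map replicapool/demo-image",
    # Object storage: S3/Swift over the RADOS Gateway
    "rgw": "s3cmd ls s3://bucket --host=rgw.example",
    # POSIX filesystem: CephFS mounted against a monitor address
    "cephfs": "mount -t ceph mon1:6789:/ /mnt/cephfs",
}

for name, cmd in front_ends.items():
    print(f"{name}: {cmd}")
```

In a Rook deployment, the agent issues the block-device equivalent of the first command on the kubelet node for you, which is exactly the plumbing described above.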
Still working away on the authentication system. I'm basically at the point where I can use RSA keys to sign in to my demo webapp. Manually. As long as the keys are in PEM format. And crypto:verify is in a good mood. This isn't about that though.

I've been slowly moving towards more and more command-line oriented interfaces. It's not a recent trend; in fact, it started pretty much when I first discovered Emacs. Ever since doing away with my desktop environment a little while ago, it's been more of a necessity than idle speculation. The good news is that there's almost nothing I wanted to do in X windows that I can't do via the command line.

Command Line MVPs

Let me draw your attention to some command line programs that I honestly wouldn't want to go without anymore. Not counting obvious necessities like ssh/rsync/find/grep/tail.

I've already written a bit about wicd-curses, the very good, simple command line network manager. After you set up a wireless device with Shift+p and set up your connection keys, it'll make sure you're as plugged in as you can possibly be, with no need for a network widget. You don't even need to run it unless you're connecting to a new network; the daemon starts up with the rest of your machine.

htop isn't anything new, if you've been paying attention. It's an improvement over the regular top in that it gives you more information and prettier colors. That's reason enough for me to use it.

acpi does quite a few things relating to cooling, power, and battery. Really, I just use it as the replacement for the gnome/xfce battery widget.

screen is something I've been using forever. My first time firing it up was to deploy a Hunchentoot application. Since then, I've used it as a way of managing multiple terminals, and kicked its tires as a full-on window manager.

mplayer is another piece that I've been using for a long time. Even while bumping around GNOME, I preferred this to VLC (YMMV).
It's worth a read through the documentation if you're having a slow day; the program does various crazy things in addition to music/video playback, including bitmap frame outputs, format conversion and some timeline-based edits.

pacpl is an audio chopping tool. As of the latest version in the Debian repos, it can directly extract music from videos. As you can see by the website there, it can convert to and from pretty much any audio format you care to name, though I mostly use it to convert things to oggs.

imagemagick is a command-line image chopping program with so many options that you'd really better just read the docs. It's actually composed of a bunch of different utilities, of which I mostly use convert, mogrify and identify.

get_flash_videos is about the only way I get to see most videos, given a) how crappy flash support is when you're even half-way dedicated to the idea of Free software and b) how few sites other than YouTube provide an HTML5-based video player.

transmission-cli is the command line interface to my favorite torrent client. Granted, I don't torrent much since I got out of the habit of downloading the massive install CDs, but still.

gtypist is a curses-based typing tutor that has effectively replaced klavaro for me. It's mildly more entertaining to run typing drills on surrealist, minimal poetry than it is to type out old newspaper articles. The only thing about it that rustles my jimmies is that it enforces hitting space twice after a period. Which is a thing I guess? Honestly, it sounds like an anachronistic behavior that used to make sense back when actual humans used actual typewriters. Luckily, the lessons are contained in a set of conf files, so I'll be able to do something about this.

EDIT: Aaaaand bam. Enjoy. Wed, 20 Jun, 2012
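Since convert is the imagemagick utility I lean on most, here's a tiny sketch that just assembles a typical convert command line. It only builds the argv (nothing needs to be installed to try it), and the resize-and-requality combo is merely one common case:

```python
# Assemble a convert(1) command line for the common
# resize-and-convert case. This only constructs the argv; run it
# with subprocess.run() if desired.

def convert_cmd(src, dest, resize=None, quality=None):
    cmd = ["convert", src]
    if resize:
        cmd += ["-resize", resize]         # e.g. "50%" or "800x600"
    if quality is not None:
        cmd += ["-quality", str(quality)]  # JPEG quality, 1-100
    cmd.append(dest)                       # output format inferred from extension
    return cmd

print(convert_cmd("scan.png", "scan.jpg", resize="50%", quality=85))
```

Building the list first and handing it to subprocess keeps filenames with spaces safe, which is half the reason to wrap these tools at all.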
I complained about liferea earlier for its complexity, and having taken a look at a number of feed readers (both GUI and CLI), that doesn't seem to be an uncommon feature. canto, by contrast, is ridiculously simple; set up your conf file, and it'll track those feeds, pulling when you tell it to (every 5 minutes by default). The example config up at the project site is pretty extensive, but I've gotten on fine with a much more minimal setup:

from canto.extra import *
import os

link_handler("lynx \"%u\"", text=True)
image_handler("feh \"%u\"", fetch=True)

keys['y'] = yank ## requires xclip

filters=[show_unread, None]

add("")
add("")
add("")
add("")
add("")

The one quirk that I have to highlight is that by default, its update doesn't fetch; it just updates from the local pool. In order to fetch, you actually need to run canto-fetch somehow. You can throw it in your crontab, but given how I use an RSS reader, it made more sense for me to just bind that to a StumpWM key.

feh is an extremely lightweight command-line image viewer with options to browse folders, delete files, do slideshows and other assorted goodness. I didn't find this looking for an image viewer; I found it looking for a way to get a background picture directly in Stump. It turns out that this does it:

(defun set-background (bg-image)
  (run-shell-command
   (format nil "feh --bg-scale ~a" bg-image)))

lynx is something I don't use on a regular basis anymore, but it is quite useful when I need to check a discussion or two without booting up X. It happens every once in a while.

Command Line Gaps

There aren't as many as you'd think. In fact, for my purposes, there is exactly one, and it's sort of minor: the lack of a good animated gif viewer. There is a concerted effort at putting one together, but it didn't exactly blow me away. mplayer does a half-decent job, but chops when looping and doesn't loop by default (which is sort of helpful when describing haters).
feh is awesome for stills, but doesn't display gifs in an animated fashion, and neither does Emacs. At the moment, my workaround is to just use chromium and call it a day.

Shell UI

Ok, so maybe I lied a little in the previous section. The thing I really don't like about some command line programs is their sometimes inscrutable option settings and lack of sensible defaults. That second one bugged me enough that I whipped up a pair of Ruby scripts to help me out with archiving a little while ago. Yesterday, I ported them to Python; what they do, basically, is provide a sane set of default options for creating and decompressing various archives. Instead of tar -xyzomgwtfbbq foo.tgz, I can just call unpack foo.tgz. pack -t tar.gz foo/ similarly replaces tar -cwhyareyoueventryingtoreadthis foo.tar.gz foo/. I guess I could have done what most of my friends do (memorize the one or two most common combinations and call it a day), but having the machine adapt to humanware seems like the better idea to me.

That's also what caused me to sit down earlier and whip up a first draft of my first curses-based program. I was trying to pull out sections of some movies into animated gifs, and using mplayer/feh/convert manually proved to be laborious and repetitive. So, I did this. I call it with a movie file and the filename I want the result saved to. The program uses a curses interface that

- lets me pick a part of the movie to pull, using the mplayer options -ss and -endpos
- has mplayer output the chosen section as a series of JPGs in a temp folder
- opens the folder with feh, giving me the opportunity to delete some frames as desired
- once I quit out of feh, stitches the remaining frames together into an animated gif

Honestly, I'm not sure how often I'll want to do that again, but hey. I've got the process formalized now, so it should be pie next time. And, now I know how to curse in Python.
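For the curious, the core of an unpack-style script boils down to a table from archive suffix to command line. Here's a minimal sketch of the idea; the table entries and function names are mine, not the actual script's, and a real version would want more formats and error handling:

```python
import subprocess

# Suffix -> command prefix. Longest suffix wins, so ".tar.gz" beats ".gz".
COMMANDS = {
    ".tar.gz": ["tar", "-xzf"],
    ".tgz": ["tar", "-xzf"],
    ".tar.bz2": ["tar", "-xjf"],
    ".zip": ["unzip"],
}

def unpack_command(filename):
    """Return the argv list that would unpack `filename`."""
    for suffix in sorted(COMMANDS, key=len, reverse=True):
        if filename.endswith(suffix):
            return COMMANDS[suffix] + [filename]
    raise ValueError("no unpacker known for " + filename)

def unpack(filename):
    """Shell out to the appropriate extraction command."""
    subprocess.check_call(unpack_command(filename))
```

So `unpack("foo.tgz")` ends up running `tar -xzf foo.tgz` with no flag-soup memorization required.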
http://langnostic.blogspot.com/2012/06/commanding-lines.html
Peter Bismuti wrote:
> What is best for a singleton, making it a module or a class within a module?
> I guess the reason why having a class within a module is undesirable is that
> it adds to the namespace of the object: mymodule.mclass.property instead of
> mymodule.property, and so forth. I know you can import it in a way to
> reduce the path length of the object, but still, it seems like a cleaner
> coding style to just use the module itself. Opinions? Thanks.

You can simply put your object into the __builtins__ namespace:

#---------------------------------------
class Spam:
    x = 0
    def f(self):
        print self.x

__builtins__.__dict__['eggs'] = Spam()

eggs.x = 1
eggs.f()
#---------------------------------------

Using modules is fine for singletons, but classes allow you things like inheritance.

regards,

Hung Jung
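For completeness, the module-as-singleton approach works because Python caches modules in sys.modules: every import of the same name hands back the same module object. A small self-contained demonstration — the settings module here is fabricated in memory for the example rather than read from disk, but a module imported from a real settings.py behaves identically:

```python
import sys
import types

# Fabricate a module object to stand in for a real settings.py on disk.
settings = types.ModuleType("settings")
settings.x = 0
sys.modules["settings"] = settings  # register it under its import name

# Every subsequent import of "settings" returns the one cached object.
import settings as a
import settings as b

a.x = 42      # state set through one reference...
print(b.x)    # ...is visible through the other: they are the same object
```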
https://mail.python.org/pipermail/python-list/2001-November/078046.html
Created on 2006-12-13 18:10 by gj0aqzda, last changed 2017-06-14 18:49 by Christian H.

I should further add that I have implemented the following API calls as methods of the new CapabilityState object, in addition to the standard functions:

* cap_clear
* cap_copy_ext
* cap_dup
* cap_get_flag
* cap_set_flag
* cap_set_proc
* cap_size
* cap_to_text

Can you please provide documentation changes as well? (If you don't want to write LaTeX, it's enough to write the docs in plaintext; there are a few volunteers who will convert it appropriately.)

I've attached a documentation patch, which should be applied in addition to the base patch.

File Added: patch-svn-doc.diff

No news on these patches in a while. To summarise, the patches are ready to go in. The issues surrounding cap_copy_int(), cap_get_*() and cap_set_*() are pretty minor. The vast majority of uses will be of the cap_get_proc(), cap_set_flag(), cap_set_proc() variety.

I am not trying to hassle you; I know you don't have enough time to get through everything. However, I'll hang fire on future development of stuff that I, personally, am not going to use, until I know when/if these patches are going to go in.

The patch cannot go in in its current form (I started applying it, but then found that I just can't do it). It contains conditional, commented-out code. Either the code is correct, in which case it should be added, or it is incorrect, in which case it should be removed entirely. There shouldn't be any work-in-progress code in the Python repository whatsoever. This refers to both the "if 0" blocks (which I thought I could safely delete) and the commented-out entries in CapabilityStateMethods (for which I didn't know what to do).

So while you are revising it, I have a few remarks:

- you can safely omit the generated configure changes from the patch - I will regenerate them, anyway.
- please follow the alphabet in the header files in configure.in (bsdtty.h < capabilities.h)
- please don't expose methods on objects on which they aren't methods. E.g. cap_clear is available both as a method and as a module-level function; that can't both be right (there should be one way to do it). Following the socket API, I think offering these as methods is reasonable.
- try avoiding the extra copy in copy_ext (copying directly into the string). If you keep malloc calls, don't return NULL without setting a Python exception.
- use the "s" format for copy_int and from_text
- consider using booleans for [gs]et_flags

ISTM that this would be better as a separate module or an optional submodule of posix. The posix module is already 8720 lines. I really don't want it to get bigger, especially when you realize how much #ifdef'ery is in there.

Some other things I noticed: You should use PyMem_Malloc instead of a raw malloc (same deal with free). Methods that take no arguments should use METH_NOARGS, and then there's no need to call PyArg_ParseTuple (e.g., posix_cap_get_proc). There definitely shouldn't be any abort()s in there, even if #ifdef'ed out.

Is this 64-bit safe? My manpage (gentoo) says this:

int cap_set_flag(cap_t cap_p, cap_flag_t flag, int ncap, cap_value_t *caps, cap_flag_value_t value);

I see that you are using ints. I don't know if that's correct on a 64-bit platform. If not, you will need to modify the places where ints are used to take longs.

I don't mind the POSIX module getting bigger. In C, these functions are all in a flat namespace, also. I like the view "if it's specified by POSIX, you find it in the POSIX module" (although this specific API was rejected for inclusion into POSIX). The functions are all very small, as the real functionality is in the C library, or even the OS kernel. As for the ifdefery: most of it is straight-forward: functionality is either present or it isn't. It gets messy when it tries to use alternative underlying APIs, e.g.
for Win32 or OS/2. If the code is to be refactored, this should be the way to go (i.e. move all the Win32 and OS/2 implementations out of the module).

As for PyMem_Malloc: I see no need to use that API; it doesn't improve the code to do so, compared to malloc/free. All that matters is that it is symmetric.

Updated patch with numerous changes, which (hopefully) address the issues you raised.

Updated patch with further documentation fixes.

Unfortunately, these changes missed the beta for 2.6, so this must be delayed until 2.7.

Ping. Anything I can do?

Matt Kern has put a lot of work into the attached patches, from what I can see. Common courtesy suggests that someone make an effort to review his work, which now can only go into 3.2. I would take it on myself, but I know nothing about POSIX and still find the Python C API intimidating.

Adding this to the posix module would enforce linking with lcap and lattr always. The development headers for these are not installed by default on some distributions. I think it would be better if they were added to a separate module (especially since all the functions are prefixed with cap_, it is like they are in their own namespace), which means that the module would be optional for people that don't have/want to build the functionality. What are your thoughts?

The posix module has many optional functions, which are available only on some systems.

> The development headers for these are not installed by default on
> some distributions.

This is not an issue at all - that's what autoconf is for.

> Adding this to the posix module would enforce linking with lcap and
> lattr always.

That's a more serious problem, IMO; I think some people won't like the additional dependency.

> I think it would be better if they are added to a separate module

Can you propose a name for the module?

> > I think it would be better if they are added to a separate module
> Can you propose a name for the module?

I would say either posixcap or capabitilies.
> I would say either posixcap or capabitilies.

The problem with capabilities is that it's easy to misspell, as I did :-)

Another possibility is to make it a private module _posixcapabilities, which would be used in the os module:

try:
    from _posixcapabilities import *
except ImportError:
    pass

"posixcap" sounds ok to me.

> "posixcap" sounds ok to me.

Bike-sheddingly, it bothers me that these functions are actually *not* defined by POSIX, but were withdrawn before becoming standard. So I'd rather call it linuxcap.

Using _linuxcap, and exposing them from os, sounds fine to me.
https://bugs.python.org/issue1615158
Types

Types are the fundamental building blocks of the Oxygene language. There are three broad categories of types:

Predefined Simple Types are small and atomic types that are built into the language to represent the simplest of data: numbers, booleans, strings, and the like.

Custom Types are types not defined by the language, but by yourself or provided by base libraries or frameworks. Where standard types are universal in application, custom types usually serve a specific purpose.

- Classes
- Records
- Interfaces
- Enums
- Blocks (a.k.a. Delegates)

Modified Types are defined by the language itself, and extend or modify the behavior of a regular type, or form complex combinations, such as arrays, sequences, tuples or pointers of a given other type.

Oxygene also has support for declaring types in special ways:

Generic Types are classes (or records and interfaces) where one or more of the other types that the class interacts with (for example, to receive or return as a parameter from methods) is not well-defined, but kept generic. This allows the class to be implemented in a fashion that lets it work with or contain different type parameters. Only when a generic type is used is a concrete type specified.

Partial Types are regular types that are declared and implemented across multiple source files – commonly to keep a large class easier to maintain, or because one part of the class is generated by a tool.

Mapped Types allow you to provide the definition of a type that will not exist at runtime, but merely maps to a different, already existing type.

Type Extensions can expand an existing type with new methods or properties (but not new data), even if that type was originally declared in an external library.

Type Aliases can give a new name to an existing type.

Type Declarations

Custom Types and Aliases to existing types can be declared in the interface or implementation section of any source file, after a type keyword.
Each type declaration starts with the type's name, followed by an equal sign (=) and optional Type Modifiers, which can also include a Visibility Level, followed by the details of the type declaration as specified in the individual custom type topics referenced above.

type
  MyClass = sealed class
  end;

  MyInteger = public type Integer;

In the above example, MyClass and MyInteger are the type names. They are followed by the equal sign and the sealed and public type modifiers, respectively. Finally, class ... end and Integer are the type declarations themselves (a Class declaration and an Alias, in this case).

Type References

While Type Declarations, covered above, introduce a new type name, the most common interaction with types is to reference existing ones. Most types, including Simple Types and Custom Types, are referenced using simply their name – either their short name, or their fully qualified name including the Namespace.

var x: SomeClass; // Variable x is declared referencing the SomeClass type by name.

By contrast, Modified Types are referenced using a particular syntax specific to the type, such as the array of, sequence of or nullable keywords, often combined with a type name.

var x: array of SomeRecord; // Variable x is declared as an *array* of the type referred to by name.
var y: nullable Integer;    // Variable y is declared as a *nullable* version of the named simple type.

On the Cocoa platform, type references can be prefixed with Storage Modifiers to determine how they interact with ARC. On the Island platform, type references can be prefixed with a Life-Time Strategy Modifier to determine how their lifetime is managed (although doing so explicitly is rarely needed). Storage Modifiers, as discussed above, are also supported when working with Cocoa or Swift objects in an Island/Darwin project.

More on Type Names

Every named type (Simple Types and Custom Types) in Oxygene can be referred to either by its short name (e.g.
MyClass), or by what is called a fully qualified name that includes a full namespace (e.g. MyCompany.MyProject.MyClass). When declaring types with a simple name, the type will automatically be placed within the namespace that is declared at the top of the file alongside the namespace keyword. Alternatively, a fully qualified name can be provided in the declaration to override the namespace.

namespace MyCompany.MyProject

interface

type
  MyClass = class
    // full name will be MyCompany.MyProject.MyClass
  end;

  MyCompany.OtherProject.OtherClass = class
    // full name will be MyCompany.OtherProject.OtherClass
  end;

You can read more about this in the Namespaces topic.
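The Generic Types mentioned earlier follow the same declaration syntax, with type parameters in angle brackets. A minimal sketch — the Pair name and its members are invented for illustration, not taken from the Oxygene documentation:

type
  Pair<T> = public class
  public
    property First: T;
    property Second: T;
  end;

The concrete type is only fixed at the point of use, for example: var p: Pair<Integer>;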
https://docs.elementscompiler.com/Oxygene/Types/
NAME

environ, execl, execv, execle, execve, execlp, execvp - Executes a file

LIBRARY

Standard C Library (libc.a, libc.so)

SYNOPSIS

#include <unistd.h>

extern char **environ;

int execl ( const char *path, const char *arg, ... );
int execv ( const char *path, char * const argv[ ] );
int execle ( const char *path, const char *arg, ... char * const envp[ ] );
int execve ( const char *path, char * const argv[ ], char * const envp[ ] );
int execlp ( const char *file, const char *arg, ... );
int execvp ( const char *file, char * const argv[ ] );

STANDARDS

Interfaces documented on this reference page conform to industry standards as follows:

execl(), execv(), execle(), execve(), execlp(), execvp(): POSIX.1, XPG4, XPG4-UNIX

Refer to the standards(5) reference page for more information about industry standards and associated tags.

PARAMETERS

path
    Points to a pathname identifying the new process image file.

arg
    Specifies a pointer to a null-terminated string, which is one argument available to the new process image. The first of these parameters points to the filename that is associated with the process being started by execl(), execle(), or execlp(). The last element in the list of arg parameters must be a null pointer.

argv
    Specifies an array of character pointers to null-terminated strings, which are the arguments available to the new process image. The value in the argv[0] parameter points to the filename of the process being started by execv(), execve(), or execvp(). The last member of this array must be a null pointer.

envp
    Specifies an array of character pointers to null-terminated strings, constituting the environment for the new process. This array must be terminated by a null pointer.

file
    Identifies the new process image file. If this parameter points to a string containing a slash character, its contents are used as the absolute or relative pathname to the process image file.
Otherwise, the system searches the directories specified in the PATH environment variable definition associated with the new process image to obtain a path prefix for the file.

DESCRIPTION

The exec functions replace the current process image with a new process image. The system constructs the new image from an executable file, called a new process image file. Successful calls to the exec functions do not return because the system overlays the calling process with the new process.

To run an executable file using one of the exec functions, applications include a function call such as the following:

int main ( int argc, char *argv[ ] );

Here, the argc parameter contains the number of arguments being passed to the new main function. The argv[ ] parameter is a null-terminated array of character pointers that point to the arguments themselves. (The null pointer is not included in the count specified in the argc parameter.) The value in argv[0] should point to the filename that is associated with the process being started by one of the exec functions. The system passes the arguments to the new process image in the corresponding arguments to main().

For forms of the exec functions that do not include the envp parameter, applications also define the environ variable to be a pointer to an array of character strings. The character strings define the environment in which the new process image runs. For example, the following shows how an application defines the environment variable:

extern char **environ;

The environ array is terminated by a null pointer.

The format of the new process image file must be one that is recognized by the exec function being used. The exec functions recognize executable text files and binary files. An executable text file is one that contains a header line with the following syntax:

#! interpreter_name [optional_string]

The #! identifies the file as an executable text file. The new process image is constructed from the process image file named by the interpreter_name string.
When executing an executable text file, the system modifies the arguments passed to the exec function being used as follows:

- argv[0] is set to the name of the interpreter. For example, the ksh shell might be the interpreter.
- If the optional_string is present, argv[1] is set to the optional_string.
- The next element of argv[] is set to the original value of path.
- The remaining elements of argv[] are set to the original elements of argv[], starting at argv[1]. The original argv[0] is discarded.

A binary file can be loaded either directly by an exec function or indirectly by the program loader. The exec functions choose to use direct or indirect loading based on the contents of the new process image file. For example, the functions use indirect loading if the new process image file contains unresolved symbols, requiring use of a shared library. When an exec function loads a binary file indirectly, it constructs the new process image from the default program loader, /sbin/loader, in the same manner as the exec_with_loader() function (see exec_with_loader(2)). The default program loader is then responsible for completing the new program image by loading the new process image file and any shared libraries on which it depends.

If the process image file is not a valid executable object, the execlp() and execvp() functions use the contents of that file as standard input to a command interpreter conforming to the system() function. In this case, the command interpreter becomes the new process image.

The number of bytes available for the combined argument and environment lists of the new process image is ARG_MAX. ARG_MAX includes the null terminators on the strings; it does not include the pointers.

File descriptors open in the calling process image remain open in the new process image, except for those whose close-on-exec flag, FD_CLOEXEC, is set (see fcntl(2) for more information).
For those file descriptors that remain open, all attributes of the open file description, including file locks, remain unchanged. Directory streams open in the calling process image are closed in the new process image. The state of directory streams and message catalog descriptors in the new process image is undefined.

For the new process, the equivalent of the following command is executed at startup:

setlocale(LC_ALL, "C")

Each mapped file and shared memory region created with the mmap() function is unmapped by a successful call to any of the exec functions, except those regions mapped with the MAP_INHERIT option. Regions mapped with the MAP_INHERIT option remain mapped in the new process image.

[XPG4-UNIX] After a successful call to any of the exec functions, alternate signal stacks are not preserved and the SA_ONSTACK flag is cleared for all signals.

After a successful call to any of the exec functions, any functions previously registered by atexit() are no longer registered.

[XPG4-UNIX] If the ST_NOSUID bit is set for the file system containing the new process image file, the effective user ID, effective group ID, saved set user ID, and saved set group ID are unchanged in the new process. Otherwise, if the set user ID mode bit of the new process image file is set (see chmod(2) for more information), the effective user ID of the new process is set to the user ID of the owner of the new process image file, and the saved set user ID is likewise set to this value (see the setuid() function).

Any shared memory segments attached to the calling process image are not attached to the new process image.

[XPG4-UNIX] Any mappings established through mmap() are not preserved across an exec.
The following attributes of the calling process image are unchanged after successful completion of any of the exec functions:

- Process ID
- Parent process ID
- Process group ID
- Session membership
- Real user ID
- Real group ID
- Supplementary group IDs
- Time left until an alarm clock signal (see alarm(3) for more information)
- Current working directory
- Root directory
- File mode creation mask (see umask(2) for more information)
- Process signal mask (see sigprocmask(2) for more information)
- Pending signals (see sigpending(2) for more information)
- The tms_utime, tms_stime, tms_cutime, and tms_cstime fields of the tms structure
- File size limit (see the ulimit() function)
- Nice value (see nice(3) for more information)
- Adjust-on-exit values (see semop(2) for more information)
- [XPG4-UNIX] Resource limits
- [XPG4-UNIX] Controlling terminal
- [XPG4-UNIX] Interval timers

Upon successful completion, the exec functions mark for update the st_atime field of the file.

If a multithreaded process calls one of the exec functions, all threads except the calling thread are terminated and the calling thread begins execution within the new process image.

RETURN VALUES

If one of the exec functions returns to the calling process image, an error has occurred; the return value is -1, and the function sets errno to indicate the error.

ERRORS

The exec functions set errno to the specified values for the following conditions:

- The number of bytes used by the new process image's argument list and environment list is greater than ARG_MAX bytes.
- Search permission is denied for a directory listed in the new process image file's path prefix, or the new process image file denies execution permission, or the new process image file is not an executable file type.
- [Digital] The security attributes of the program file do not allow execute permission.
- [Digital] The calling process is using a kernel subsystem that prevents executing the new image.
The call to one of the exec functions will not succeed until the process has detached itself from the subsystem.

- [Digital] The path argument is an invalid address.
- [XPG4-UNIX] Too many symbolic links were encountered in pathname resolution.
- One of the following conditions occurred: the length of the path argument, the file argument, or an element of the environment variable PATH prefixed to a file exceeds PATH_MAX; or a pathname component is longer than NAME_MAX, and _POSIX_NO_TRUNC is in effect for that file.
- One or more components of the new process image file's pathname do not exist, or the path or file argument points to an empty string.
- Insufficient memory is available.
- A component of the new process image file's path prefix is not a directory.

The execl(), execv(), execle(), and execve() functions also set errno as follows:

- The new process image file has the appropriate access permission but is not in the proper format.

The execlp() and execvp() functions also set errno as follows:

- [Digital] Indicates that another thread in the process is already performing an execlp() or execvp() operation.

SEE ALSO

Functions: exit(2), fcntl(2), fork(2), sigaction(2), umask(2), mmap(2), exec_with_loader(2)

Routines: alarm(3), getenv(3), nice(3), putenv(3), system(3), times(3), ulimit(3)

Standards: standards(5)
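Although this page carries no EXAMPLES section, the typical calling pattern is fork() followed by one of the exec functions in the child, then waitpid() in the parent. A minimal sketch — the spawn_and_wait helper is not part of any standard API, just an illustration:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork, run argv[0] in the child (searched via PATH, as execvp does),
 * and return the child's exit status, or -1 on fork/wait failure. */
int spawn_and_wait(char *const argv[])
{
    pid_t pid = fork();
    if (pid == (pid_t)-1)
        return -1;                 /* fork failed */
    if (pid == 0) {
        execvp(argv[0], argv);     /* only returns on error */
        perror("execvp");
        _exit(127);                /* conventional "cannot exec" status */
    }
    int status;
    if (waitpid(pid, &status, 0) == (pid_t)-1)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

A caller might invoke it as `char *const argv[] = { "echo", "hello", NULL }; spawn_and_wait(argv);` — note the argument list must end with a null pointer, as described under PARAMETERS above.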
http://backdrift.org/man/tru64/man2/environ.2.html
Java UDP - search results

UDP topics:
- Java UDP: TCP and UDP are transport protocols used for communication between computers.
- UDP Client in Java: how a UDP client sends requests or messages to a UDP server.
- UDP Server in Java: a brief description and program of a UDP server that receives your messages.
- UDP (User Datagram Protocol): a transport protocol; in spite of its demerits, UDP is very useful in some cases.
- Multicast under UDP (client/server application): UDP is used to support multicast, even though it is connectionless and non-reliable.
- Image transfer using UDP: a program built with JFC, JDBC and UDP; open question on converting received ASCII (.txt) data back into the original image.
- J2ME - stream video from a UDP server: developing a J2ME mobile application that streams video from a URL such as udp://222.222.222.121:2211.

NetBeans exercises (Java):
- Validate credit card numbers using the Luhn check algorithm.
- Read the values of an NxN matrix and print its inverse.
- Find the nearest two points to each other in 2D space.
- Parse an array and print the greatest value and the number of occurrences of that value.
- Auto-grade exams: read letter answers (A, B, C, D) for a class of N students, 5 questions each.
- Read two numbers x1 and x2 and determine whether they are relatively prime (their only common factor is 1).
- Modify a Dog class to include a new instance variable weight (double).
- PROJECT ON JAVA NET BEANS AND MYSQL: request for a connectivity-based project.
- Wicket on NetBeans IDE: in Wicket, each application consists of simply a Java file and an HTML file; components are created in Java and later rendered into HTML ("Hello World" example).
- NET BEAN - IDE questions on Struts and window applications.

Java vs .Net:
- java vs .net: which language is more powerful now?
- Disadvantages of Java and .Net.
- .Net dll to Java: calling a .Net dll from Java; sample code wanted.
- Dot Net Architect: position vacant; candidates will be handling Dot Net projects.
- dot net: open a new window with detailed contents by clicking a marquee text (like a flash news item).

Hibernate/JPA relationships:
- JPA Many-to-Many Relationship: each record in Table-A may relate to many records in Table-B, and vice versa.
- JPA Retrieve Data By Using Many-to-Many Relation.
- Hibernate Many-to-one Relationships: example using XML metadata, e.g. multiple stories mapped to one group (Group.java, Story.java, Group.hbm.xml).
- Hibernate One-to-many Relationships: one-to-many example code using the XML mapping file.
- Hibernate Many-to-many Relationships: the many-to-many tag is used to define the relationship in XML.
- Hibernate Many To Many Annotation Mapping: Hibernate requires metadata such as annotations.

Other Java topics:
- java file with many methods (Ajax): sending a response to a Java file and calling one of its many methods by passing a parameter.
- read an image in java.
- Java FTP Example / Java FTP Client Example: FTP examples and tutorials, e.g. a simple FTP client that downloads an image from a server.
- Many Public Classes in One File (The Java Specialists' Newsletter, Issue 080, by Dr. Heinz M. Kabutz).
- Socket Wheel to handle many clients (The Java Specialists' Newsletter, Issue 023).
- Ask the user how many numbers to input and determine the sum and highest number using an array.
- Array example: use a loop to get names and ask how many dependents.
- Java AWT Package Example: how to create a BorderLayout, with many running examples of the AWT package.
- Java File Programming: how to create, read and write files from a Java program.
- Java example for reading a file into a byte array, for further processing.
- Java code to set up an echo server and echo client.
- Example of HashSet class in Java: stored values are unique; you cannot store duplicate values.
- Example - String sort: string sorting based on selection sort.
- Java Args example.
- Java Example Codes and Tutorials: introductions to applets, Swing examples, and more.
In this java hashset exmple, you will see how to create HashSet in java application and how to store value in Hash pattern java example pattern java example how to print this 1 2 6 3 7 10 4 8 11 13 5 9 12 14 15 Example Code - Java Beginners Example Code I want simple Scanner Class Example in Java and WrapperClass Example. What is the Purpose of Wrapper Class and Scanner Class . when i compile the Scanner Class Example the error occur : Can not Resolve symbol Java HashMap example. Java HashMap example. The HashMap is a class in java. It stores values in name..., you will see how to create an object of HashMap class. How to display vlaue of map. Code: HashMapExample .java package net.roseindia.java java string comparison example java string comparison example how to use equals method in String... strings are not same. Description:-Here is an example of comparing two strings using equals() method. In the above example, we have declared two string.   Synchronized with example - Java Beginners Synchronized with example Hi Friends, I am beginner in java. what i know about synchonized keyword is,If more that one 1 thread tries to access... that how the lock is released and how next thread access that.Please explain Java Map Example Java Map Example How we can use Map in java collection? The Map interface maps unique keys to value means it associate value to unique... Description:- The above example demonstrates you the Map interface. Since Map Java collection Stack example Java collection Stack example How to use Stack class in java... :- -1 Description:- The above example demonstrates you the Stack class in java.... Here is an example of Stack class. import java.util.Stack; public class How to implement FTP using java client and FTP server. Could anyone help me for How to implement FTP using java? Thanks Hi, There are many FTP libraries in Java, but you should... 
is the best tutorials and example of Apache FTP Library: FTP Programming in Java Inheritance java Example Inheritance java Example How can we use inheritance in java program... for bread Description:- The above example demonstrates you the concept... properties of the superclass. In the given example, the class Animal is a super Java FTP file upload example ; Hi, We have many examples of Java FTP file upload. We are using Apache... Programming in Java tutorials with example code. Thanks...Java FTP file upload example Where I can find Java FTP file upload Java ArrayList Example Java ArrayList Example How can we use array list in java program ? import java.util.ArrayList; public class ArrayListExample { public static void main(String [] args){ ArrayList<String> array = new How to declare String array in Java? Following example will show you how to declare string array in java. There are many ways to declare an array and initialize them. We can use 'new'... the use of 'new' keyword. Following example declare, initialize and access Asp with C#.Net Asp with C#.Net How to generate barcodes in aspx page java Java count occurrence of a word there is file called "story.txt",in a program we want to count occurrence of a word (example bangalore) in this file and print how many time word is present in the file. The given What is a vector in Java? Explain with example. What is a vector in Java? Explain with example. What is a vector in Java? Explain with example. Hi, The Vector is a collect of Object... many legacy methods that are not part of the collections framework. For more Struts Links - Links to Many Struts Resources you how to develop Struts applications using ant and deploy on the JBoss Application Server. Ant script is provided with the example code. Many advance topics... Struts Links - Links to Many Struts Resources Jakarta Java Get Example is method and how to use the get method in Java, this example is going... will learn how to use the method getGraphics(). 
Java example program... Java Get Example   Java bigdecimal movePointLeft example Java bigdecimal movePointLeft example Example below demonstrates bigdecimal class.... To how many digits this shifting will done, will depend on the integer number passed Java BigDecimal movePointRight example Java BigDecimal movePointRight example Example below demonstrates bigdecimal class.... To how many digits this shifting will done, will depend on the integer number Java Word Occurrence Example Java Word Occurrence Example In this example we will discuss about the how many times words are repeated in a file. This example explains you that how you... will demonstrate you about how to count occurrences of each word in a file. In this example Java - Continue statement in Java Java - Continue statement in Java Continue: The continue statement is used in many programming languages such as C, C++, java etc. Sometimes we do not need to execute some Clone method example in Java Clone method example in Java Clone method example in Java programming language Given example of java clone() method illustrates, how to use clone() method. The Clone Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/77485
Cisco DNA Center - Assurance

DNAC Series

This is part of a DNAC series:

- Part 1: Getting started
- Part 2: Cisco DNA Center - Devices
- Part 3 (this post): Cisco DNA Center - Assurance

One should never store the username and password in the clear, and certainly not in the source code itself. The examples in this post are merely conceptual and for informational purposes.

Introduction

In an earlier post we introduced DNAC from a theoretical point of view. We continued with some simple Python scripts to retrieve device-related information from DNAC. In this one, we will look into the Client APIs. We will write a Python script that shows us the health statistics per device category (wired or wireless).

Get Client Health

Let's have a look at the Client API section in the API documentation. As a reminder, we are looking to print an overview of the various health categories (fair, poor, ...) per device category. Let's have a look at the POSTMAN example, which will make the implementation a lot easier as the response is quite complex to parse. Before we dive into the Python code, let's see what the response looks like.

```python
{'response': [{'scoreDetail': [
    {'clientCount': 66,
     'clientUniqueCount': 66,
     'endtime': 1587739500000,
     'scoreCategory': {'scoreCategory': 'CLIENT_TYPE', 'value': 'ALL'},
     'scoreValue': 36,
     'starttime': 1587739200000},
    {'clientCount': 2,
     'clientUniqueCount': 2,
     'endtime': 1587739500000,
     'scoreCategory': {'scoreCategory': 'CLIENT_TYPE', 'value': 'WIRED'},
     'scoreList': [{'clientCount': 0,
                    'clientUniqueCount': 0,
                    'endtime': 1587739500000,
                    'scoreCategory': {'scoreCategory': 'SCORE_TYPE', 'value': 'POOR'},
```

You will notice that it is quite a complex JSON response, so let's analyse it a bit first. You can see we have 66 client devices in total. Of these 66 devices, 2 are wired and 64 are wireless (although we don't see that in the above snippet; I truncated it to save some space). Per category we then see the client count for the different health qualities such as POOR, FAIR, etc.
This is the base structure of the JSON response. Hence, we'll need to do some parsing to extract the data we are interested in. Here's what we do:

- We store the scoreDetail entries in a list called scores.
- We define two dictionaries, one for the wired devices and one for the wireless devices. We will populate those dictionaries later on.
- We iterate over the scores list and look at the value of the scoreCategory key. This essentially tells us whether the score relates to a wired or a wireless device.
- Next, per category (wired or wireless), we store the scoreList values (which form an array) in a list. As you can see, each scoreList entry has a scoreCategory of its own; these score categories are POOR, FAIR, and so on.
- We then loop over those values and, for each one (GOOD, FAIR, ...), add a key to the wired or wireless dictionary with the quality (POOR, FAIR, ...) as the key and the clientCount as the corresponding value.

The above might be a bit confusing, I admit, but essentially all of it is done to arrive at the following dictionary:

```python
{
    'wired': {
        'POOR': 0, 'FAIR': 0, 'GOOD': 2, 'IDLE': 0, 'NODATA': 0, 'NEW': 0
    },
    'wireless': {
        'POOR': 0, 'FAIR': 42, 'GOOD': 22, 'IDLE': 0, 'NODATA': 0, 'NEW': 0
    }
}
```

Note: this is in fact a dictionary inside another dictionary (cf. nested dictionaries). With this dictionary, we have an elegant structure to show the health (as a percentage) per category. This is taken care of in the calculatePercentageHealth() function. Here we do the following:

- We first calculate the total number of client devices. We need this later on to be able to calculate the percentages.
- As we are dealing with a nested dictionary structure, we'll need two for loops. The first loop gets us into either the wired or the wireless dictionary. The second loop iterates over the inner dictionary and, per quality (POOR, FAIR, ...), calculates the client health as a percentage.
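Before the full script, the aggregation described above can be sketched in isolation. The response fragment and the helper name health_per_category below are made up for illustration; only the field names follow the /client-health payload shown earlier:

```python
# Hand-made sample data shaped like the scoreDetail array of the
# /client-health response (the real numbers would come from DNAC).
sample_scores = [
    {"scoreCategory": {"scoreCategory": "CLIENT_TYPE", "value": "ALL"},
     "clientCount": 66},
    {"scoreCategory": {"scoreCategory": "CLIENT_TYPE", "value": "WIRED"},
     "clientCount": 2,
     "scoreList": [
         {"scoreCategory": {"scoreCategory": "SCORE_TYPE", "value": "GOOD"},
          "clientCount": 2},
         {"scoreCategory": {"scoreCategory": "SCORE_TYPE", "value": "POOR"},
          "clientCount": 0},
     ]},
    {"scoreCategory": {"scoreCategory": "CLIENT_TYPE", "value": "WIRELESS"},
     "clientCount": 64,
     "scoreList": [
         {"scoreCategory": {"scoreCategory": "SCORE_TYPE", "value": "FAIR"},
          "clientCount": 42},
         {"scoreCategory": {"scoreCategory": "SCORE_TYPE", "value": "GOOD"},
          "clientCount": 22},
     ]},
]

def health_per_category(scores):
    """Build {'wired': {...}, 'wireless': {...}} and convert counts to %."""
    d = {}
    for score in scores:
        category = score["scoreCategory"]["value"].lower()
        if category in ("wired", "wireless"):
            # Map each quality (POOR, FAIR, ...) to its client count.
            d[category] = {s["scoreCategory"]["value"]: s["clientCount"]
                           for s in score["scoreList"]}
    # Turn the raw client counts into rounded percentages per category.
    return {cat: {k: round(v / sum(counts.values()) * 100)
                  for k, v in counts.items()}
            for cat, counts in d.items()}

print(health_per_category(sample_scores))
```

The same two-level loop structure is what the full script below implements against the live API.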
```python
import requests
from authenticate import get_token
from pprint import pprint


def main():
    dnac = "sandboxdnac2.cisco.com"
    token = get_token(dnac)

    url = f"https://{dnac}/dna/intent/api/v1/client-health"
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "X-auth-Token": token
    }
    querystring = {"timestamp": ""}
    response = requests.get(url, headers=headers, params=querystring,
                            verify=False).json()

    scores = response['response'][0]['scoreDetail']
    d = {'wired': {}, 'wireless': {}}
    nested_dict_wired = {}
    nested_dict_wireless = {}

    print("Overview")
    print("--------")
    for score in scores:
        if score['scoreCategory']['value'] == 'ALL':
            # print(f"Total devices - all: {score['clientCount']}")
            print('')
        if score['scoreCategory']['value'] == 'WIRED':
            # print(f"  Total devices - wired: {score['clientCount']}")
            values = score['scoreList']
            for value in values:
                # print(f"  {value['scoreCategory']['value']}: {value['clientCount']}")
                nested_dict_wired[value['scoreCategory']['value']] = value['clientCount']
            d['wired'] = nested_dict_wired
        if score['scoreCategory']['value'] == 'WIRELESS':
            # print(f"  Total devices - wireless: {score['clientCount']}")
            values = score['scoreList']
            for value in values:
                # print(f"  {value['scoreCategory']['value']}: {value['clientCount']}")
                nested_dict_wireless[value['scoreCategory']['value']] = value['clientCount']
            d['wireless'] = nested_dict_wireless

    calculatePercentageHealth(d)


def calculatePercentageHealth(d):
    print("Percentage Health")
    print("-----------------")

    # Calculate totals
    sum_wired = 0
    for key, value in d['wired'].items():
        sum_wired += value
    sum_wireless = 0
    for key, value in d['wireless'].items():
        sum_wireless += value

    # Calculate percentages
    for key, value in d.items():
        if key == 'wired':
            print(f"Sum_wired: {sum_wired}")
            for k, v in value.items():
                percentage = round((v / sum_wired) * 100)
                print(f"  For {k} => {percentage}%")
        if key == 'wireless':
            print(f"Sum_wireless: {sum_wireless}")
            for k, v in value.items():
                percentage = round((v / sum_wireless) * 100)
                print(f"  For {k} => {percentage}%")


if __name__ == "__main__":
    main()
```

Executing this script results in the following output:

```
Overview
--------

Percentage Health
-----------------
Sum_wired: 2
  For POOR => 0%
  For FAIR => 0%
  For GOOD => 100%
  For IDLE => 0%
  For NODATA => 0%
  For NEW => 0%
Sum_wireless: 64
  For POOR => 0%
  For FAIR => 66%
  For GOOD => 34%
  For IDLE => 0%
  For NODATA => 0%
  For NEW => 0%
```

Get Network Health

In a similar fashion, we can write a script to parse the network health. We won't cover that now, but if you understood the previous example, parsing the network health response will be very easy. The API to call is /dna/intent/api/v1/network-health?timestamp={{$timestamp}}000. Note: the appended 000 is there because the timestamp is supposed to be in Epoch milliseconds, while we typically supply the Epoch time in seconds. Just so you know.

Get Site Health

In a similar fashion, we can write a script to parse the site health. We won't cover that now, but if you understood how to parse the client health, parsing the site health response will be very easy. The API to call is /dna/intent/api/v1/site-health?timestamp={{$timestamp}}000.

As usual, the code can be found on my Github repo.
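The "append 000" note above is just a seconds-to-milliseconds conversion. A quick sketch (variable names are arbitrary):

```python
import time

# Epoch time in seconds, e.g. from time.time() or a $timestamp template variable.
seconds = int(time.time())

# Appending "000" to the string form is the same as multiplying by 1000,
# which turns Epoch seconds into the Epoch milliseconds the API expects.
ms_by_append = int(str(seconds) + "000")
ms_by_multiply = seconds * 1000

print(ms_by_append == ms_by_multiply)  # → True
```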
https://blog.wimwauters.com/networkprogrammability/2020-04-25_dnac_part3_pythonrequests/
gnash::builtin_function

This is a special type of function implementing AS-code in C++.

#include <builtin_function.h>

Detailed description: Many functions (including classes) are implemented in ActionScript in the reference player. Gnash implements them in C++, but they must be treated like swf-defined functions. They are distinct from NativeFunctions, which are part of the player and do not go through the ActionScript interpreter.

Member documentation:

- Construct a builtin function/class with a default interface. The default interface will have a constructor member set as 'this'.
- Invoke this function or this Class constructor. Implements gnash::as_function.
- Return true if this is a built-in class. Reimplemented from gnash::as_function.
- Return the number of registers required for function execution. Gnash's C++ implementations of AS functions don't need any registers! Implements gnash::UserFunction.
http://gnashdev.org/doc/html/classgnash_1_1builtin__function.html
This is an update from my original post: Before proceeding, please read the original article. It will help in understanding this one.

A QUICK RECAP FROM THE ORIGINAL POST

THE PROBLEM AND THE FIX

As you can see, I had to modify the source code of Assembly B. In a lot of real-world situations it's not possible to modify Assembly A or B. Here are some reasons why:

- You may not have the source code for the assembly
- You have the source code, but modification is useless because the original assembly was signed by the author
- You have the source code, but you do not have the legal rights to modify or distribute the source.

IMPLEMENTING THE SOLUTION

And the source code looks like this:

VERIFYING THAT IT WORKS

Add a reference to AssemblyC and then import it. Then the extension methods will be visible to IronPython.

PARTING THOUGHTS

- There's still the pain of generating Assembly C – a future post will simplify this task
- Notice that Assembly C contains a tiny class "ClassLibC.ClassC". In order for this all to work, it seems that Assembly C needs to contain at least one public class. (If you leave this placeholder class out, then you will be able to add a reference to Assembly C, but the import will fail.)
- Many thanks to Harry Pierson for letting me know that the ExtensionType attribute can be in any assembly.
- The source code is attached if you want to try this for yourself.
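For reference, the consuming side amounts to just a few lines of IronPython. This is a sketch: the clr module only exists under IronPython (it is absent in CPython, so the snippet degrades gracefully there), and AssemblyC/ClassLibC are the example names from this post:

```python
def load_extensions():
    """Sketch: load the extension-method assembly when running under IronPython."""
    try:
        import clr  # IronPython's bridge to .NET; not available in CPython
    except ImportError:
        return "clr not available; run this under IronPython"
    clr.AddReference("AssemblyC")  # reference the helper assembly built above
    import ClassLibC               # importing it makes the extension methods visible
    return "extension methods loaded"

print(load_extensions())
```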
https://blogs.msdn.microsoft.com/saveenr/2009/02/05/ironpython-consuming-extension-methods-in-ironpython-part-ii/
See also: IRC log <Norm> Hi Stuart <Stuart> Lo... dinner calls... be back soon... <DanC> Scribe: Ed <scribe> Scribe: Ed Rice <scribe> ScribeNick: Ed Ed: Next meeting Feb 14th, xml schema team will join <DanC> on Extending and Versioning draft findings Dan Connolly (Monday, 24 January) <scribe> ACTION: Stuart to send note to xml schema team to confirm meeting [recorded in] <ht_sof> HST sends regrets for the joint meeting on 14 Feb <scribe> ACTION: noah and Norm agree to scribe Feb 14th [recorded in] RESOLUTION: Resolved to accept the minutes of Jan 31st 2005 <DanC> 4 answers so far <scribe> ACTION: All Tag members to respond to on-line survey as to best meeting time. [recorded in] <ht> I.e. Monday and Thursday <scribe> ACTION: Deadline to respond to on-line Survey is Feb 10th 2005 [recorded in] ...") <scribe> ACTION: Stuart contact Rich to see if there is room on the panel for session three [recorded in] Vincent has agreed to chair/drive face to face . <noahm> Also, regarding change of schedule, I note that we seem committed to doing 2/14 with the XML Schema WG, so suggest that any changes be not before then. <Stuart> <scribe> ACTION: Vincent to send out tentative agenda by next weeks meeting for review and email discussion. [recorded in] <DanC> ACTION: Vincent to send out tentative agenda by next weeks meeting for review and email discussion. [recorded in] <Norm> FYI: The Core WG now expects it will be valuable to meet to discuss XML 2.0 voice browswer WG is a lunch time agenda <Roy> I can attend all of the liaison meetings. . chairs of the qa working group <ht> Noah, I can make Monday Noah, I am open on Monday.. open to trying new types of food (or old) Noah will coordinate and schedule. <Stuart> <Stuart> <DanC> <Zakim> DanC, you wanted to suggest keeping IRIEverywhere open until we have timbl's questions () answered and to ask if RF would like to continue "ACTION <scribe> ACTION: DanC to enter actions relative to issues list. [recorded in] review all;. 
<scribe> ACTION: norm to pick up Paul Cotton's work on namespaceDocument-8 [recorded in] <Roy> Please continue my action on IRIEverywhere, may be good for discussion at F2F. ) Dan states, the strong signal is that people want this solved in the particulars. If the TAG does not agree, we should respond that we're not going to. Henry will work with Norm to take up namespaceDocument-8 topic. <scribe> ACTION: Henry to produce a RDDL1 to RDF Style sheet or explain why not [recorded in] if you use a URI and you use an http without a hash, it needs to point to a network resource (information resource) other side if you use an http: with no hash mark, it may as well be an html algorithm there is no consensus should http be uses as a substrate protocal) Proposal: Noah suggests to take a cluster of these and set them aside for a week to see if we should adress these substrate 16 is currently deferred. deferred = until there is new information about this topic we will not discuss further. background - where should we be using xlink. Should be returned to once the xml working group resolves a couple of minor bugs. The xml core working group has committed to doing this quickly. <Norm> XLink extensions note from Core WG: .) there was a draft finding from Chris. But it has not been discussed. <ht> DanC, I'll keep it in mind, not to worry <Roy> <DanC> draft finding Ed asked if we should all review offline and discuss in two weeks during weekly meeting <DanC> ah... no, having us "all" review it is the anybody/somebody/nobody pattern. a pattern that works is to have 2 reviewers. Ed offers to be a reviewer on this topic, will contact Chris. <scribe> ACTION: Ed to meet with chris and review/update the document. [recorded in] already discussed. Note: Chris has an action to write up a resolution, but current status is not know. <scribe> ACTION: Henry to monitor and bring back up when time is appropriate. 
[recorded in] a workshop and working group is studying the issue, should be complete in March. Stuart has agreed to continue working on this issue if the TAG would prefer. Stuart: asks to partner with noah on 31 on an informal basis. Norm: this one is in the process of wrapping up. The CR draft is expected to be published tomorrow. ... High confidence in closure soon. This one and xmlFunctions-34 and RDFinXHTML-35 are grouped. <Zakim> DanC, you wanted to note we decided the mixedNamespaceMeaning-13 issue had a dependency on issue 42 There is a new compound documents working group which may be working on 33. <scribe> ACTION: Chair to negotiate joint meeting to review during Boston trip. [recorded in] <Stuart> <Zakim> noahm, you wanted to say that SOAP headers violate functional view of XML, I think
http://www.w3.org/2005/02/07-tagmem-minutes.html
Log4j logger performance in JBoss AS 7
Lars Gråmark — Apr 11, 2012 4:08 AM

I recently migrated an application from Tomcat 6 to JBoss 7. The migration went fine, but when we ran comparative performance tests between Tomcat and JBoss we found that the response times were a lot worse than in Tomcat. When I looked into the problem I found that the overhead was caused by using log4j over the JBoss logging subsystem. The fact that the logging subsystem could have a slight overhead is understandable, but I find the proportion a bit worrying. I simplified the test in order to recreate it for this forum and it now contains 10 calls to the log4j logger. When running this with 10 simultaneous users, each with 100 requests, I get approx 23 ms per call. Now if I compare this with a setup that does NOT use the logging subsystem but uses log4j directly, I get approx 2 ms. Quite a difference. I haven't run this through a profiler yet, but I wanted to make a sanity check with you before proceeding with this. Did I miss anything in the configuration?

This is what I did to get the figures for the logging subsystem:

- I installed a fresh copy of JBoss 7.1.1-final.
- I downloaded the latest version of the helloworld application from quickstarts.
- I added the following dependency to the pom.xml.
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.16</version>
    </dependency>

- Altered the HelloService to this:

    public class HelloService {
        Logger logger = Logger.getLogger(HelloService.class);

        String createHelloMessage(String name) {
            for (int i = 0; i < 10; i++) {
                logger.warn("Hello world");
            }
            return "Hello " + name + "!";
        }
    }

- Added the handler and the logger below to standalone.xml:

    <periodic-rotating-file-handler
        <formatter>
            <pattern-formatter
        </formatter>
        <file relative-
        <suffix value=".yyyy-MM-dd"/>
        <append value="true"/>
    </periodic-rotating-file-handler>

    <logger category="org.jboss.as.quickstarts" use-
        <level name="INFO"/>
        <handlers>
            <handler name="DEMO"/>
        </handlers>
    </logger>

- Added the file jboss-deployment-structure.xml to WEB-INF:

    <jboss-deployment-structure>
        <deployment>
            <dependencies>
                <module name="org.apache.log4j" />
            </dependencies>
        </deployment>
    </jboss-deployment-structure>

- I then used the Apache benchmarking tool to get the numbers: ab -n100 -c10

This is what I did to get the numbers for log4j without going through the logging subsystem:

- Replaced the file jboss-deployment-structure.xml in WEB-INF with the following contents:

    <jboss-deployment-structure>
        <deployment>
            <exclusions>
                <module name="org.apache.log4j" />
            </exclusions>
        </deployment>
    </jboss-deployment-structure>

- Added the line below to bin/standalone.conf:

    JAVA_OPTS="$JAVA_OPTS -Dlog4j.configuration=file://<path to JBOSS_HOME>/standalone/configuration/log4j.xml"

- Added the file log4j.xml in standalone/configuration.
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:
        <appender name="DEMO" class="org.apache.log4j.DailyRollingFileAppender">
            <param name="File" value="${jboss.server.log.dir}/demo_direct.log" />
            <param name="Append" value="true" />
            <param name="DatePattern" value="'.'yyyy-MM-dd" />
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d{ABSOLUTE} [%t] %-5p|%5X{id}|%-50.50c| %m%n" />
            </layout>
        </appender>
        <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
            <param name="Target" value="System.out" />
            <param name="Threshold" value="DEBUG" />
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}] %m%n" />
            </layout>
        </appender>
        <logger name="org.jboss.as.quickstarts" additivity="false">
            <level value="INFO" />
            <appender-ref
        </logger>
        <root>
            <priority value="INFO" />
            <appender-ref
        </root>
    </log4j:configuration>

- Again I used the Apache benchmarking tool to get the numbers: ab -n100 -c10

Thanks, Lars Gråmark

1. Re: Log4j logger performance in JBoss AS 7 — James Perkins, Apr 11, 2012 11:19 AM (in response to Lars Gråmark)

There is some overhead in booting up any subsystem including logging. My guess is the extra time comes from booting up the subsystem. Also note that when using log4j with the logging subsystem you are going through an abstraction layer between the JBoss Log Manager and log4j. While the time to do this is probably insignificant, it could add a slight performance decrease. If you were to use JBoss Logging rather than log4j, even without the subsystem, I would imagine the results are the same if not better than log4j. In the end we're talking about a 21ms difference. Likely just a one-time 19-20ms difference too, because of the subsystem booting up.
When you use the subsystem you get the ability to change your logging configuration in real-time rather than having to redeploy your application for every logging change. In my opinion that's definitely worth losing 20ms, and really it's probably only a one-time 20ms.

2. Re: Log4j logger performance in JBoss AS 7 — Lars Gråmark, Apr 11, 2012 2:41 PM (in response to James Perkins)

Thanks for your response. The boot-up time has been taken into account when I ran the tests. The system has been running for a while and I've run the test several times to give JIT a chance to warm up. Unfortunately, a difference of 20 ms is actually a bit too much for us to keep using the subsystem. The reason is that this is an application that will have a huge amount of requests and the delay will be noticed by end users. The real-time configuration is really a great feature, but I need to solve the performance "problem" before I can use this.

3. Re: Log4j logger performance in JBoss AS 7 — James Perkins, Apr 11, 2012 2:50 PM (in response to Lars Gråmark)

Taking a second look at your configuration with the subsystem, it appears the messages might be getting logged twice if you're using log4j, or possibly not at all really. You're excluding the log4j library in the jboss-deployment-structure.xml, but using the configuration in standalone.xml. That doesn't seem right.

4. Re: Log4j logger performance in JBoss AS 7 — Lars Gråmark, Apr 11, 2012 3:09 PM (in response to James Perkins)

I have to admit that the example perhaps wasn't that great. But it actually works as expected. When I exclude the log4j module from jboss-deployment-structure.xml, the log message does not seem to reach the subsystem and it writes to the file I specified in log4j.xml, but not to the file I provided in standalone.xml. The result is exactly the same if I clear out the entries in standalone.xml for the second test.

5.
Re: Log4j logger performance in JBoss AS 7 — David Lloyd, Apr 11, 2012 3:30 PM (in response to Lars Gråmark)

20ms per message is definitely indicating some major problem. Our log4j integration should actually outperform plain log4j by a pretty decent margin, based on previous performance testing. If log messages are taking this long, something must have gotten screwed up in the integration.

6. Re: Log4j logger performance in JBoss AS 7 — David Lloyd, Apr 11, 2012 3:54 PM (in response to David Lloyd)

Looks like this might be the issue solved in, so it should be fixed upstream soon.

7. Re: Log4j logger performance in JBoss AS 7 — James Perkins, Apr 11, 2012 3:59 PM (in response to David Lloyd)

Just to add, one way to speed it up would be to remove the date format from the pattern. Not sure that's desirable, but it would work.

8. Re: Log4j logger performance in JBoss AS 7 — Lars Gråmark, Apr 11, 2012 5:02 PM (in response to David Lloyd)

Synchronization sounds like a plausible cause. Looking forward to trying it out as soon as it's available. Thank you both for your time and help.

9. Re: Log4j logger performance in JBoss AS 7 — Haroon Foad, Apr 14, 2012 3:17 AM (in response to Lars Gråmark)

Under which tag in standalone.xml did you add JAVA_OPTS="$JAVA_OPTS -Dlog4j.configuration=" ?

10. Re: Log4j logger performance in JBoss AS 7 — Lars Gråmark, Apr 14, 2012 9:42 AM (in response to Haroon Foad)

That's bin/standalone.conf. Not standalone.xml.
https://developer.jboss.org/message/729463
The JPA Overview's Chapter 12, Mapping Metadata and the JDO Overview's Section 15.7, "Joins" explain join mapping in each specification. All of the examples in those documents, however, use "standard" joins, in that there is one foreign key column for each primary key column in the target table. Kodo supports non-standard joins as well; as long as your mapping follows the rules below, Kodo will function properly.

There is no special syntax for expressing a partial primary key join - just do not include column definitions for missing foreign key columns.

In a non-primary key join, at least one of the target columns is not a primary key. Once again, Kodo supports this join type with the same syntax as a primary key join. There is one restriction, however: each non-primary key column you are joining to must be controlled by a field mapping that implements the kodo.jdbc.meta.Joinable interface. All built-in basic mappings implement this interface, including basic fields of embedded objects. Kodo will also respect any custom mappings that implement this interface. See Section 7.10, "Custom Mappings" for an examination of custom mappings.

Not all joins consist of only links between columns. In some cases you might have a schema in which one of the join criteria is that a column in the source or target table must have some constant value. Kodo supports such constant joins as well. To form a constant join in JDO mapping, first set the column element's name attribute to the name of the column. If the column with the constant value is the target of the join, give its fully qualified name in the form <table name>.<column name>. Next, set the target attribute to the constant value.

JPA:

    @Entity
    @Table(name="T1")
    public class ... {
        @ManyToOne
        @JoinColumns({
            @JoinColumn(name="FK" referencedColumnName="PK1"),
            @JoinColumn(name="T2.PK2" referencedColumnName="'a'")
        });
        private ...;
    }

JDO:

    <class name="..." table="T1">
        <...>
            <column name="FK" target="PK1"/>
            <column name="T2.PK2" target="'a'"/>
        </...>
    </class>

JPA:

    @Entity
    @Table(name="T1")
    public class ...
{ @ManyToOne @JoinColumns({ @JoinColumn(name="FK" referencedColumnName="PK2"), @JoinColumn(name="T2.PK1" referencedColumnName="2") }); private ...; } JDO: <class name="..." table="T1"> <...> <column name="FK" target="PK2"/> <column name="T2.PK1" target="2"/> </...> </class> Finally, from the inverse direction, these joins would look like this: JPA: ...; } JDO: <class name="..." table="T2"> <...> <column name="T1.FK" target="PK1"/> <column name="PK2" target="'a'"/> </...> <...> <column name="T1.FK" target="PK2"/> <column name="PK1" target="2"/> </...> </class>
http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13946/ref_guide_mapping_notes_nonstdjoins.html
Delve into fields and methods with this second article in the Java 101 "Object-oriented language basics" series. Deepen your understanding of fields, parameters, and local variables and learn to declare and access fields and methods.

Ever hear the expression, "I can't see the forest for the trees?" That expression refers to specific details that cloud your understanding of the big picture. And what could be cloudier than a detailed examination of fields and methods during an introduction to Java's classes and objects? That is the reason I chose to minimize fields and methods while maximizing the big picture -- classes and objects -- in Part 1 of this object-oriented language series. However, because you eventually have to step back from the forest and focus on the trees, this month's article examines fields and methods in great detail.

Java's variables can be divided into three categories: fields, parameters, and local variables. I'll present fields here and leave an exploration of parameters and local variables for a later section that discusses methods.

Declaring fields

Access specifiers

A field declared without an access specifier receives the default (package) access level:

class MyClass { int fred; }

Only code contained in MyClass and other classes declared in the same package as MyClass can access fred.

If you declare a field private, only code contained in its class can access the field. That field becomes inaccessible to every other class in every other package. Examine the code below:

class Employee { private double salary; }

Only code contained in Employee can access salary.

If you declare a field public, code contained in its class and all other packages' classes can access the field. Refer to the code below:

public class Employee { public String name; }

Code contained in Employee and all other packages' classes can access name. (Employee must also be declared public before code in other packages can access name.) Declaring every field in a given class public defeats the concept of information hiding.
Suppose you create a Body class to model the human body and Eye, Heart, and Lung classes to model an eye, the heart, and a lung. The Body class declares Eye, Heart, and Lung reference fields, as demonstrated by the following example:

public class Body
{
   public Eye leftEye, rightEye;
   private Heart heart;
   private Lung leftLung, rightLung;
}

The leftEye and rightEye field declarations are public because a body's eyes are visible to an observer. However, the heart, leftLung, and rightLung declarations are private because the organs they represent are hidden inside the body. Suppose heart, leftLung, and rightLung were declared public. Wouldn't that be the equivalent of having a body with its heart and lungs exposed?

Finally, a field declared protected resembles a field with the default access level. The only difference between the two access specifiers is that subclasses in any package can access the protected field. The following example demonstrates that:

public class Employee { protected String name; }

Only code contained in Employee, other classes declared in the same package as Employee, and all Employee's subclasses (declared in any package) can access name.

Modifiers

You can optionally declare a field with a modifier keyword: final or volatile and/or static and/or transient. If you declare a field final, the compiler ensures that the field is initialized and subsequently treats the field as a constant -- a read-only variable. The compiler can now perform internal optimizations on a program's byte codes because it knows the constant will not change. Consider the example below:

class Employee
{
   final int ACCOUNTANT = 1;
   final int PAYROLL_CLERK = 2;
   final int MANAGER = 3;

   int jobID = ACCOUNTANT;
}

The example above declares three final int fields: ACCOUNTANT, PAYROLL_CLERK, and MANAGER. Note: It is customary to use all capital letters and separate multiple words with underscores when declaring constants.
That helps distinguish constants from read/write variables when analyzing source code.

If you declare a field volatile, multiple threads can access the field, and certain compiler optimizations are prevented so that the field is accessed appropriately. (You'll learn about volatile fields when I discuss threads in a future article.)

If you declare a field static, all objects share one copy of the field. When you assign a new value to that field, all objects can see the new value. If static is not specified, the field is known as an instance field, and each object receives its own copy.

Finally, the value of a field declared transient will not be saved during object serialization. (I explore the topics of transient fields and object serialization in a future article.)

Instance fields

An instance field is a field declared without the static keyword modifier. Instance fields are associated with objects -- not classes. When modified by an object's code, only the associated class instance -- the object -- sees the change. An instance field is created when an object is created and destroyed when its object is destroyed. The following example demonstrates an instance field:

class SomeClass1
{
   int i = 5;

   void print ()
   {
      System.out.println (i);
   }

   public static void main (String [] args)
   {
      SomeClass1 sc1 = new SomeClass1 ();
      System.out.println (sc1.i);
   }
}

SomeClass1 declares an instance field named i and demonstrates two common ways to access that instance field -- from an instance method or from a class method. Both methods are in the same class as the instance field. To access an instance field from an instance method in the same class, you only specify the field's name. To access an instance field from another class's instance method, you must have an object reference variable that contains the address of an object created from the class that declares the instance field you want to access.
Prefix the object reference variable -- along with the dot operator -- to the instance field's name. (You'll explore instance methods later in this article.) To access an instance field from a class method in the same class, create an object from the class, assign its reference to an object reference variable, and prefix that variable to the instance field's name. To access an instance field from another class's class method, complete the same steps as when you accessed that field from a class method in the same class. (I'll present class methods later in this article.)

When the JVM creates an object, it allocates memory for each instance field and subsequently zeroes the field's memory, which establishes an instance field's default value. The way you interpret a default value depends on the field's data type. Interpret a reference field's default value as null, a numeric field's default value as 0 or 0.0, a Boolean field's default value as false, and a character field's default value as \u0000.

Class fields

A class field is a field declared with the static keyword modifier. Class fields are associated with classes -- not objects. When modified by a class's code, the class (as well as any created objects) sees the change. A class field is created when a class is loaded and destroyed if and when a class is unloaded. (I believe some JVMs unload classes whereas other JVMs do not.) The example below illustrates a class field:

class SomeClass2
{
   static int i = 5;

   void print ()
   {
      System.out.println (i);
   }

   public static void main (String [] args)
   {
      System.out.println (i);
   }
}

SomeClass2 declares a class field i and demonstrates two common ways to access i -- from an instance or class method (both methods are in the same class as the class field). To access a class field from an instance method in the same class, you only specify the field's name.
To access a class field from another class's instance method, prefix the class field with the name of the class in which the class field is declared. For example, specify SomeClass2.i to access i from an instance method in another class -- which is in the same package as SomeClass2, because SomeClass2 isn't declared public. To access a class field from a class method in the same class, you only specify the field's name. To access a class field from another class's class method, follow the same procedure as you did when accessing the class field from an instance method in another class.

Once a class loads, the JVM allocates memory for each class field and establishes a class field's default value. Interpret a class field's default value just as you would interpret an instance field's default value.

Class fields are the closest things to global variables in Java. Check out the example below:

class Global
{
   static String name;
}

class UseGlobal
{
   public static void main (String [] args)
   {
      Global.name = "UseGlobal";
      System.out.println (Global.name);
   }
}

The above example declares a pair of classes in a single source file: Global and UseGlobal. If you compile and then run the application, the JVM loads UseGlobal and then begins executing the main() method's byte codes. After seeing Global.name, the JVM searches for, loads, and then verifies the Global class. Once Global is verified, the JVM allocates memory for name and initializes that memory to null.

Behind the scenes, the JVM creates a String object and initializes that object to all characters between the pair of double quote characters -- UseGlobal. The reference to that object is assigned to name. Then, the program accesses Global.name and retrieves the String object's reference, which subsequently passes to System.out.println(). Finally, the contents of the String object appear on the standard output device.

Because neither Global nor UseGlobal is explicitly marked public, you can choose any name for the source file.
Compiling that source file results in a pair of class files: Global.class and UseGlobal.class. Because UseGlobal contains the main() method, use that class to run the program. Type java UseGlobal to run the program at the command line. If you type java Global instead, you would receive the following error message:

Exception in thread "main" java.lang.NoSuchMethodError: main

The word main at the end of the error message indicates that java couldn't find a main() method in class Global.

Constants

A constant is a read-only variable; once the JVM initializes that variable, the variable's value cannot change. Declare constants with the final keyword. Just as there are two kinds of fields, instance and class, constants come in two flavors -- instance and class. For efficiency, create class constants, or final static fields. Consider this example:

class Constants
{
   final int FIRST = 1;
   final static int SECOND = 2;

   public static void main (String [] args)
   {
      int iteration = SECOND;

      if (iteration == FIRST) // Compiler error.
          System.out.println ("first iteration");
      else
      if (iteration == SECOND)
          System.out.println ("second iteration");
   }
}

The above example's Constants class declares a pair of constants -- FIRST and SECOND. FIRST is an instance constant because the JVM creates a separate copy of FIRST for each Constants object. In contrast, because the JVM creates a single copy of SECOND after loading Constants, SECOND is a class constant.
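The instance-versus-class field behavior described throughout this article can be demonstrated with a short sketch. (The Counter class and its field names are illustrative, not from the article.)

```java
// Each object gets its own copy of an instance field, while a single
// static (class) field is shared by every instance of the class.
class Counter {
    int instanceCount = 0;      // per-object copy
    static int classCount = 0;  // one shared copy
}

public class Main {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();

        a.instanceCount++;       // only a sees this change
        Counter.classCount++;    // a, b, and the class itself all see this

        System.out.println(a.instanceCount);     // 1
        System.out.println(b.instanceCount);     // 0 (b's own copy is untouched)
        System.out.println(Counter.classCount);  // 1 (shared)
    }
}
```

Note that each Counter object starts with instanceCount at its default value of 0, exactly as the default-value rules above predict.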
http://www.javaworld.com/article/2075239/java-platform/java-101--object-oriented-language-basics--part-2--fields-and-methods.html
(Admittedly, these highlights skew towards technologies I’m currently using most frequently – I’ve grouped some of these into related categories. Also I’m sure I’ve left out some highpoints, so I’ll plan to update this post as needed.)

AIS at BUILD 2019

However, before describing announcements or specific technology updates I noted, my number one highpoint of the week was the session that Vishwas Lele (AIS CTO and MS Azure MVP) gave on Tuesday: “Architecting Cloud-Native Apps with AKS and Cosmos DB.” This year was the first year that Microsoft allowed a few select partners to lead sessions at BUILD, so I consider his inclusion recognition of the great work he is doing to advance cloud-native technologies on Azure. His session was packed, and attendees got their money’s worth of content related to AKS, Cosmos DB, and strategies for using cloud-native conventions for the consumption of PaaS services to build resilient, globally scalable applications.

Kubernetes and AKS

Most of the discussion about compute on Azure included at least one point related to AKS (Azure Kubernetes Service). AKS was everywhere, and one consistent theme seems to be AKS as a significant portion of the Azure “compute” offering in the future. So, there were many exciting K8s-related announcements and demonstrations which I had not previously heard, a few that stood out to me:

- Windows server containers in AKS is in private preview: With the update in Kubernetes 1.14 to officially support Windows containers, this was a natural follow-on. It seems that the AKS engineering team is ready to go in a more widely available preview, so expect updates soon.
- General Availability of AKS Virtual Nodes: Rapid provisioning of Kubernetes pods using “serverless” container infrastructure (Azure Container Instances). These pods are included in your Kubernetes cluster and can be billed very granularly (per second of execution).
- Kubernetes-Based Event-Driven Autoscaling (KEDA) was announced: Allows you to auto-scale deployments in Kubernetes clusters in response to events like an event stream (EventHub, Kafka, etc.), Azure Monitor Events, or other event providers.
- Azure Policy integration with AKS is in public preview: Allows you to set policies inside AKS clusters on pods, namespaces, and ingress.
- AKS as an option for the underlying compute layer for Azure Machine Learning Service: This announcement isn’t technically new; it was just cool to see AKS as a “production” option for deployment.

Azure AI

The company’s vision related to Artificial Intelligence (AI) and Machine Learning offerings is stronger than it’s ever been. This story’s been developing for the past few years, and the vision hasn’t always been crystal clear. Over the past two years, I’ve often asked the question “If I were going to start a new custom machine learning project in Azure, what services would I start with?” Usually, that answer has been “Azure Databricks” by default, but I’m now coming around to the idea that there is now a viable alternative – or at least additional tools to consider.

The BUILD 2019 conference included great sessions and content focused on Azure AI, segmented into three high-level areas:

- Knowledge Mining: This is concerned with using Azure services to help discover hidden insights from your content – including docs, images, and other media. Sessions and announcements in this area focused on enhancements to two key services: Azure Search and a new “Form Recognizer” service.
- Azure (Cognitive) Search is now generally available: This service uses built-in AI capabilities to discover patterns and relationships, understand sentiment, extract key phrases, etc. without the need for specific data science expertise. Additionally, Azure allows consumers to customize results by applying custom-tuned ranking models.
- Form Recognizer: A new service announced in public preview.
This service exposes a REST API that accepts document content (PDF, images, etc.) and extracts text, key/value pairs, and tables. The idea is that “usable data” can be gleaned from content that has been hard to unlock in the past. Machine Learning: A set of services that enable building and deploying custom machine learning models. This area represents many capabilities on the Azure platform; I found that at this year’s conference some great new additions and enhancements were highlighted that help to answer that first “where do I start?” question. Some highlights: - AutoML is in public preview: This service allows a consumer to choose the “best” machine learning algorithm for a provided data set and the desired outcome. It does this by accepting the data set from the user (in preview it accepts files stored in blob storage exclusively), automatically training several different models based on this data, comparing performance, and reporting the performance to the end user. - Visual Interface for Azure Machine Learning Service is in public preview: This service enables consumers to build ML models using a drag and drop interface, with the ability to drop down into Python code when needed for specific activities. In many ways, this is a reincarnation of the “Azure ML Studio” service of the past, without some of the limitations that held this service back (data size restrictions, etc.). - Choose your underlying compute: Choose where your models are trained and run, including the Machine Learning Services managed compute environment, AKS, Virtual Machines, Azure Databricks, HDInsight clusters, or in Azure Data Lake Analytics. AI apps and agents: This area includes Azure Cognitive Services and Azure Bot Service. Azure Cognitive Services is a set of APIs that allow developers to call pre-built AI models to enhance their applications in the areas of computer vision, speech-to-text, and language. 
A few data points that stuck out to me:

- A new Cognitive Services category – “Decision”: This category will initially include three services: 1) Content Moderator, 2) Anomaly Detector (currently in preview), and 3) Personalizer (also currently in preview). Personalizer is a service to help promote relevant content and experiences for users.
- “Conversation Transcription”: An advanced speech-to-text capability.
- Container Support Expansion: The portfolio of Cognitive Services that can be run locally in a Docker container now includes Anomaly Detector, Speech-to-Text, and Text-to-Speech in addition to the existing text analytics and vision containers.

.NET Platform

It’s amazing for me to consider that .NET is now 17 years old – the official release of .NET 1.0 was in February 2002! And, although .NET is now on the “mature” end of the spectrum compared to many other active programming frameworks, it’s also true that there are many new .NET developers still adding C#, VB.NET, F#, or other CLR-based languages to their repertoire. In fact, at BUILD 2019 the company quoted the fact that “a million new active .NET developers” were added last year alone. One of the reasons for this is that the .NET team continues to innovate with offerings like .NET Core – first announced in 2014. .NET Core is the cross-platform development stack which runs across operating systems and has been the “future” of .NET for some time.

One of the major announcements that will affect .NET developers in the future is that the next “release” of .NET Core will be “.NET 5”. Yes, this means there will be one unified platform that includes legacy .NET Framework components, .NET Core, and Mono. After the .NET 5 release in 2020, there will be one annual release of .NET.

A few other .NET related data points that stuck out to me as items to investigate in more detail:

- “Blazor” got a lot of session time and seems to be a real project now.
For some people, the idea of running C# in the browser can devolve into a philosophical debate. However, it’s clear that Microsoft sees enough upside that it has moved the technology beyond an “experimental” phase into a fully-supported preview.

- .NET for Spark was released (open source), aimed to provide access to Apache Spark for .NET developers.
- Frequent mentions of gRPC support in .NET Core. gRPC is the language-agnostic remote procedure call framework published by Google.
- ML.NET 1.0: A cross-platform (.NET Core) framework for creating custom ML models using C# or F# – without having to leave the .NET ecosystem.

Cosmos DB

BUILD 2019 also had a few great sessions and announcements related to Cosmos DB, Microsoft’s fully managed, global, multi-modal database service. My highlights:

- Best practices for Azure Cosmos DB: Data modeling, Partitioning, and RUs: A great session given by Deborah Chen and Thomas Weiss (program managers on the Cosmos DB team). Practical, actionable examples related to how to partition, how to minimize request units (RUs) for common database calls, etc.
- Etcd API: In Kubernetes, etcd is used to store the state and the configuration of clusters. Ensuring availability, reliability, and performance of etcd is crucial to the overall health, scalability, elasticity, availability, and performance of a Kubernetes cluster. The etcd API in Azure Cosmos DB allows you to use Azure Cosmos DB as the backend store for Azure Kubernetes.
- Spark API: New (preview) native support for Spark through the Cosmos DB Spark API. This one is interesting to me because it has the potential to enable a “serverless experience for Apache Spark” – where the “cluster” is Cosmos DB. I would pay close attention to the consumed RUs though!
- Cosmos DB will support multi-model access in the future: Cosmos DB is a multi-model database, meaning you can access the data using many different APIs.
However, until now this has been a choice that is made up front on the creation of the database. In his “Inside Datacenter Architecture” session, Mark Russinovich announced that in the future, Cosmos DB would support multi-model access to the same data.

- Jupyter notebooks running inside Azure Cosmos DB: Announced in preview. A native notebook experience that supports all the Cosmos DB APIs and is accessed directly in the Azure Portal.

Other Announcements

Below are some other BUILD 2019 announcements, highlights, and data points I’m investigating in the coming weeks:

- Windows Subsystem for Linux (WSL) 2
- VNET integration now for both Linux and Windows App Service
- Windows Container support in App Service
- Serverless pricing tier for Azure SQL database
- Azure Database Hyperscale tiers
- SQL Server running on Edge devices
- Key influencers visualization in Power BI
- Power BI Dataflows
- New Virtual Machine Image Builder Service
- Virtual Machine Shared Image Gallery (cross subscription, AAD tenant)
- Azure App Configuration service
- Machine Learning Models in Power BI
- Security Center Logs in Log Analytics
- Azure Open Datasets
- Blob index and quick query
- Azure Kinect DK
- Microsoft Fluid Framework

If you have any questions, feel free to reach out to me on Twitter at @Bwodicka or contact the AIS team online.
https://www.ais.com/author/brent-wodicka/
What is lens?

lens is a package which provides the type synonym Lens, one of a few implementations of the concept of lenses, or functional references. lens also provides a number of generalizations of lenses including Prisms, Traversals, Isos, and Folds.

Why do I care?

Eventually, you can use lenses to bring things like record syntax, lookups in lists or maps, pattern matching, Data.Traversable, scrap your boilerplate, the newtype library, and all kinds of type isomorphisms all together under the same mental model. The end result is a way to efficiently construct and manipulate "methods of poking around inside things" as simple first-class values.

Okay, what is a Lens?

A lens is a first-class reference to a subpart of some data type. For instance, we have _1 which is the lens that "focuses on" the first element of a pair. Given a lens there are essentially three things you might want to do:

- View the subpart
- Modify the whole by changing the subpart
- Combine this lens with another lens to look even deeper

The first and the second give rise to the idea that lenses are getters and setters like you might have on an object. This intuition is often morally correct and it helps to explain the lens laws.

The lens laws?

Yep. Like the monad laws, these are expectations you should have about lenses. Lenses that violate them are weird. Here they are:

- Get-Put: If you modify something by changing its subpart to exactly what it was before... then nothing happens
- Put-Get: If you modify something by inserting a particular subpart and then viewing the result... you'll get back exactly that subpart
- Put-Put: If you modify something by inserting a particular subpart a, and then modify it again inserting a different subpart b... it's exactly as if you only did the second step.

Lenses that follow these laws are called "very well-behaved".

What does it look like to use a lens?

Well, let's look at _1 again, the lens focusing on the first part of a tuple.
We view the first part of the tuple using view:

>>> view _1 ("goal", "chaff")
"goal"

>>> forall $ \tuple -> view _1 tuple == fst tuple
True

We modify _1's focused subpart by using over (mnemonic: we're mapping our modification "over" the focal point of _1).

>>> over _1 (++ "!!!") ("goal", "the crowd goes wild")
("goal!!!", "the crowd goes wild")

>>> forall $ \tuple -> over _1 f tuple == (\(fst, snd) -> (f fst, snd)) tuple
True

As a special case of modification, we can set the subpart to be a new subpart. This is called set.

>>> set _1 "set" ("game", "match")
("set", "match")

>>> forall $ \tuple -> set _1 x tuple == over _1 (const x) tuple
True

(Sidenote: What is that forall thing?)

It's actually a lie, sorry about that. It's meant to be read as a sentence, a statement of fact, like "forall tuples tuple, view _1 tuple == fst tuple". You can simulate this behavior by using QuickCheck's quickCheck function but forall is much stronger.

Any more examples?

Yeah, here are the lens laws again written using actual code. Below, l is any "very well-behaved" lens.

-- Get-Put
>>> forall $ \whole -> set l (view l whole) whole == whole
True

-- Put-Get
>>> forall $ \whole part -> view l (set l part whole) == part
True

-- Put-Put
>>> forall $ \whole part1 part2 -> set l part2 (set l part1 whole) == set l part2 whole
True

What are some large scale, practical examples of lens?

Don't expect to be able to read the code yet, but here's an example from lens-aeson which queries and modifies JSON data.

-- Returns all of the major versions of an
-- array of JSON objects.
someString ^.. _JSON        -- a parser/printer prism
             . _Array       -- another prism
             . traverse     -- a traversal (using Data.Traversable on Aeson's Vector)
             . _Object      -- another another prism
             . ix "version" -- a traversal across a "map-like thing"
             . _1           -- a lens into a tuple (major, minor, patch)

-- Increments all of the versions above
someString & _JSON . _Array . traverse . _Object . ix "version" . _1 %~ succ
  -- apply a function to our deeply focused lens

-- We can factor out the lens
allVersions :: Traversal' ByteString Int
allVersions = _JSON . _Array . traverse . _Object . ix "version" . _1

-- and then rewrite the two examples quickly
someString ^.. allVersions
someString & allVersions %~ succ

-- Because lenses, prisms, traversals, are all first class in Haskell!

Wait a second, GHCi is telling me the types of these things are absurd!

Yeah, sorry about that. "It'll make sense eventually", but the types start out tricky. Try to look at the type synonyms only. We can use :info to make sure that GHCi tells us the type synonym instead of the really funky fully expanded types. I'd show you _1 but it's not a great example of an understandable type. It uses some weird extra machinery. Instead, how about an example.

{-# LANGUAGE TemplateHaskell #-}

type Degrees = Double
type Latitude = Degrees
type Longitude = Degrees

data Meetup = Meetup { _name :: String, _location :: (Latitude, Longitude) }
makeLenses ''Meetup

Let's assume we have lenses name and location which focus on the slots _name and _location respectively. The underscores are a convention only, but you see them a lot because the Template Haskell magic going on in makeLenses will automatically make name and location if you use underscores like that. The type of these lenses is

>>> :info name
name :: Lens Meetup Meetup String String

>>> :info location
location :: Lens Meetup Meetup (Latitude, Longitude) (Latitude, Longitude)

>>> :type location
location :: Functor f => ((Latitude, Longitude) -> f (Latitude, Longitude)) -> (Meetup -> f Meetup)
-- whoops! Ignore that for now please

Four type parameters? Isn't that a bit much?

You're right, we'll use the simplified forms for now. This is highly recommended until you get the hang of things. The simplified types are appended with apostrophes.
Now the types of name and location are

name :: Lens' Meetup String
location :: Lens' Meetup (Latitude, Longitude)

These types tell us that, for instance, the name lens focuses from a Meetup and to a String. Generally we write Lens' s a, and throughout the documentation s and t tend to be where a lens is focusing from and a or b tend to be where the lens is focusing to.

Okay, that makes sense. Didn't you say we can compose lenses?

Yup, if we've got a Lens' s x and another lens Lens' x a we can stick them together to get Lens' s a. Strangely, we just use regular old (.) to do it.

la :: Lens' s x
lb :: Lens' x a
la . lb :: Lens' s a

Or, more concretely:

meetupLat = location . _1 :: Lens' Meetup Latitude
meetupLon = location . _2 :: Lens' Meetup Longitude

Lenses compose backwards. Can't we make (.) behave like functions?

You're right, we could. We don't for various reasons, but the intuition is right. Lenses should combine just like functions. One thing that's important about that is id can either pre- or post-compose with any lens without affecting it.

forall $ \lens -> lens . id == lens
forall $ \lens -> id . lens == lens

(N.B. If you're categorically inclined, lenses-with-apostrophes would form a Category if we did somehow reverse the composition order--- flip (.) would do it. The instance still cannot be made without a newtype, which lens trades off for uses mentioned below.)

That's still pretty annoying

It's true. On the bright side, lenses often feel a whole lot like OO-style slot access like Person.name. The reversed composition thing can be thought of as punning on that.

>>> Meetup { ... } ^. location . _1
80.3 :: Latitude

Furthermore, we can do composition using a convenient syntax without using import Prelude hiding ....

What is that (^.)?

Oh, it's just view written infix. Here's the Put-Get law again:

>>> forall $ \lens whole part -> (set lens part whole) ^. lens == part
True

Actually there are a whole lot of operators in lens

Yup, some people find them convenient.

Actually there are a WHOLE LOT of operators in lens---over 100

Very convenient! But that is a lot. To make it bearable, there are some tricks for remembering them.

- Operators that begin with ^ are kinds of views. The only example we've seen so far is (^.) which is... well, it's just view exactly.
- Operators that end with ~ are like over or set. In fact, (.~) == set and (%~) is over.
- Operators that have . in them are usually somehow "basic"
- Operators that have % in them usually take functions
- Operators that have = in them are just like their cousins where = is replaced by ~, but instead of taking the whole object as an argument, they apply their modifications in a State monad.

Is that really worth it?

Maybe. Who knows! If you don't like them, then all of the operators have regular named functions as well.

... Some examples would be nice

Ok.

(.~) :: Lens' s a -> a -> (s -> s)
(.=) :: Lens' s a -> a -> State s ()
(%~) :: Lens' s a -> (a -> a) -> (s -> s)
(%=) :: Lens' s a -> (a -> a) -> State s ()

Sometimes we get new operators by augmenting tried and true operators like (&&) with ~ and =

(&&~) :: Lens' s Bool -> Bool -> (s -> s)
(&&=) :: Lens' s Bool -> Bool -> State s ()

lens &&~ bool = lens %~ (bool &&)
lens &&= bool = lens %= (bool &&)

(<>~) :: Monoid m => Lens' s m -> m -> (s -> s)
lens <>~ m = lens %~ (m <>)

What about combinators with (^)? Do they show up anywhere else?

Yes, but we have to talk about Prisms and Traversals first.

Okay. What are Prisms?

They're like lenses for sum types.

What does that mean?

Well, what happens if we try to make lenses for a sum type?

-- This doesn't work... or exist
_left :: Lens' (Either a b) a

>>> view _left (Left ())
()  -- okay, I buy that

>>> view _left (Right ())
error!  -- oh, there's no subpart there

Prisms are kind of like Lenses that can fail or miss.

So we use Maybe instead, right?

We could try that.
_left :: Lens' (Either a b) (Maybe a) >>> view _left (Left ()) Just () >>> view _left (Right ()) Nothing But it doesn't compose well. _left . name :: Lens (Either Meetup Meetup) ??? -- String? Maybe String? We'd need a Lens that looks into Maybe: _just :: Lens' (Maybe a) (Maybe a) _just = id Oh. That doesn't get us anywhere. Let's start over. Okay. What are Prisms? (Take Two) Prisms are the duals of lenses. While lenses pick apart some subpart of a product type like a tuple, prisms go down one branch of a sum type like Either... Or else they fail. Right, a Lens splits out one subpart of a whole. A Prism takes one subbranch. Exactly! Think of a product type as being made of both a subpart and a "whole with a hole in it" where the subpart used to go. product = (a, b) subpart = a whole-with-a-hole = \x -> (x, b) Whenever we can do that, we can make a lens that focuses just on that subpart. A sum type can be broken into some particular subbranch and all-the-other-ones That's how prisms are dual to lenses. They select just one branch to go down. preview :: Prism' s a -> s -> Maybe a Which also lets us "go back up" that one branch. review :: Prism' s a -> a -> s For instance _Left :: Prism' (Either a b) a >>> preview _Left (Left "hi") Just "hi" >>> preview _Left (Right "hi") Nothing >>> review _Left "hi" Left "hi" _Just :: Prism' (Maybe a) a >>> preview _Just (Just "hi") Just "hi" >>> review _Just "hi" Just "hi" >>> Left "hi" ^? _Left Just "hi" Oh, there's another (^)-like operator! Yep, (^?) is like (^.) for Prism's. Are there any other interesting Prisms? [Printer/Parsers] Here's a simple one. We can deconstruct []s using Prism's. _Cons :: Prism' [a] (a, [a]) >>> preview _Cons [] Nothing >>> preview _Cons [1,2,3] Just (1, [2,3]) _Nil :: Prism' [a] () >>> preview _Nil [] Just () >>> preview _Nil [1,2,3] Nothing Here's a strange one. String is a very strange sum type. No it isn't. It's just a List again. But Char can be thought of as the sum of all characters. 
And if we "flatten" Char into [] we can think of String as being the "sum of all possible strings" data String = "a" | "b" | "c" | "europe" | "curry" | "dosa" | "applejacks" ... Okay, sure. Is that important? What if we make Prism's that focus on particular branches of String? _ParseTrue :: Prism' String Bool _ParseFalse :: Prism' String Bool >>> preview _ParseTrue "True" Just True >>> preview _ParseFalse "True" Nothing >>> review _ParseTrue True "True" This is how printer-parsers can sometimes be valid Prisms. I think I understand Prisms now. What are Traversals? Traversals are Lenses which focus on multiple targets simultaneously. We actually don't know how many targets they might be focusing on: it could be exactly 1 (like a Lens) or maybe 0 (like a Prism) or 300. A very simple Traversal' traverses all of the items in a list. items :: Traversal' [a] a items = traverse -- from Data.Traversable, actually -- this is our first implementation of anything in `lens` -- but don't worry about it right now In fact, Traversal's are intimately related to lists since if we have a list we also may have either 1, or 0, or many elements. That's one way to view a Traversal. toListOf items :: [a] -> [a] >>> toListOf items [1,2,3] [1,2,3] That's pretty boring. It is. We can define items for any Traversable though using the exact same definition. items :: Traversable t => Traversal' (t a) a items = traverse flatten :: Tree a -> [a] flatten = toListOf items That's no big deal, though. It's just another way to use Data.Traversable For now, but Traversal will eventually show a close relationship with Lens and Prism. Do we have another (^)-like operator for Traversals at least? Yep! It's just toListOf. >>> [1,2,3] ^.. items [1,2,3] What else can we do with Traversals? [How do they relate to Prisms/Lenses?] We can get just the first target. 
firstOf :: Traversal' s a -> Maybe a firstOf items :: Traversable t => t a -> Maybe a Or the last target lastOf :: Traversal' s a -> Maybe a >>> lastOf items [1,2,3] Just 3 Why does it use Maybe? The Traversal could have no targets. Is this like preview? Yeah, firstOf is preview. But I thought preview was preview---and that it was specialized to Prisms? Nope, preview just handles access that focuses on either 0 or 1 targets. Can I use view on Traversals too? Actually, yes, but you might be in for a surprise. >>> view items "hello world" <interactive>:56:6: No instance for (Data.Monoid.Monoid Char) arising from a use of `items' Possible fix: add an instance declaration for (Data.Monoid.Monoid Char) In the first argument of `view', namely `items' In the expression: view items "hello world" In an equation for `it': it = view items "hello world" What is Monoid doing here? It represents the notion of failure. It's just like that Maybe we tried to use earlier when investigating Prism. Moreover its a way to handle the "0, 1, or many" nature of the targets of a Traversal. If you think of toListOf as the canonical way to view a Traversal, then Monoid is a canonical way to compress lists into single values. So we can use view if our targets are Monoids? Yep. What do you think ["hello ", "world"] ^. items gets us? The Monoid product of the items---"hello world" Exactly! What about [] ^. items? An ambiguous type error! Okay, yes. What about [] ^. items :: String? The empty string, "" And [] ^. items :: Monoid m => m? Whatever mempty is for m Bingo. So, Traversals generalize Prisms and Lenses somehow? Yeah. It works because Prism, Lens, and Traversal are all just type synonyms---Those scary types that GHCi sometimes prints out actually do line up. That's a big reason why they're left in. Is this subtyping? Kind of! That's what the scary chart on the lens Hackage page shows. But Haskell doesn't have subtyping It's a big clever hack, really. 
Without it there'd be completely separate combinators for Lens, Prism, Traversal, and all the other things in lens that we haven't talked about yet... even though they all end up with identical code at their core. That's not even the best part yet. What's the best part about the subtyping hack? We can compose Lenses and Prisms and Traversals all together and the types will "just work out". Admittedly at this point all that means is that compositions of Lenses are Lenses, compositions of Prisms are Prisms, and everything else is a Traversal. But there are other things in this subtyping hierarchy, like Isos. What are Isos? Isos are isomorphisms, connections between types that are equivalent in every way. More concretely, an Iso is a forward mapping and a backward mapping such that the forward mapping inverts the backward mapping and the backward mapping inverts the forward mapping. fw . bw = id bw . fw = id (N.B. This is again exactly a Category isomorphism.) Isos are in the lens subtyping hierarchy? Yep! They are more primitive than either Lens or Prism. If we think of a Lens' s a as a type which splits the product type s into its subpart a and the context of that subpart, a -> s, an Iso is a trivial lens where the context is just id. If we think of a Prism' s a as a type which splits the sum type s into a privileged branch and "all the other" branches then an Iso is a trivial Prism where there's just one branch and we're privileging exactly it. How do Isos compose with Lenses and Prisms? They can compose on either end of a lens or a prism and the result is a lens or a prism respectively. Isos are "iso" (unchanging) because no matter where you put them in a lens pipeline they leave things as they were. What are some example Isos? How about between Maybe and Either ()? someIso :: Iso' (Maybe a) (Either () a) >>> Just "hi" ^. someIso Right "hi" That's a little boring. How about between strict and lazy ByteStrings? 
strict :: Iso' Lazy.ByteString Strict.ByteString >>> "foobar" ^. strict "foobar" >>> "foobar" ^. strict . from strict "foobar" Also kind of boring. How about between an XML tree treating element names as strings and an XML tree treating element names as Qualified Names? qualified :: Iso' (Node String String) (Node (QName String) String) >>> Element "foaf:Person" [("foaf:name", "Ernest")] [] ^. qualified Element (QName { qnPrefix = Just "foaf" , qnLocalPart = "Person" }) [ (QName { qnPrefix = Just "foaf" , qnLocalPart = "name" } ,"Ernest") ] [] >>> Element "foaf:Person" [("foaf:name", "Ernest")] [] ^? qualified . attributes . _head . _1 . localPart Just "name" What is "from" doing there? [Iso Laws] from "turns an iso around". Since Isos are equivalences they're symmetric and we can treat them as maps in both directions between types. In fact, the Iso laws stated above are just that for any iso, iso view (iso . from iso) == id view (from iso . iso) == id That's pretty cool Isos are pretty cool. They're one of my favorite parts of lens. What if I have any more questions? This list may grow over time. Please ask them and leave any comments. But for now, go get yourself a peanut butter and marmalade sandwich. With help from Reddit users penguinland, kqr, markus1189, MitchellSalad, DR6, NruJaC, and rwbarton.
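The Lens' behavior the tutorial describes, view, set, composition, and the Put-Get law, can be modeled in a few lines of Python. This is a hedged illustrative sketch, not how the lens library is implemented (real lenses are higher-order functions, not pairs); the compose and key helpers are invented for the example.

```python
# A lens modeled as a (view, set) pair of functions.

def compose(outer, inner):
    """Compose two lenses, outer first, mirroring `location . _1`."""
    o_view, o_set = outer
    i_view, i_set = inner
    return (
        lambda s: i_view(o_view(s)),                  # view through both
        lambda s, a: o_set(s, i_set(o_view(s), a)),   # set through both
    )

def key(k):
    """A lens focusing on one key of a dict, with immutable-style update."""
    return (lambda s: s[k],
            lambda s, a: {**s, k: a})

meetup_lat = compose(key("location"), key("lat"))   # like `location . _1`

meetup = {"name": "Haskell meetup", "location": {"lat": 80.3, "lon": 12.0}}
view, set_ = meetup_lat
print(view(meetup))               # 80.3
# Put-Get law: viewing right after setting returns what was set.
print(view(set_(meetup, 81.0)))   # 81.0
```

Note that set_ builds new dicts rather than mutating, so the original whole is left untouched, which is the same persistent-update behavior set has in Haskell.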
https://www.schoolofhaskell.com/school/to-infinity-and-beyond/pick-of-the-week/a-little-lens-starter-tutorial
The Intel SSE intrinsic technology boosts the performance of floating point calculations. Both GCC and Microsoft Visual Studio support SSE intrinsics. SSE uses the xmm registers for floating point calculations: xmm0-xmm15 (16 xmm registers) on a 64-bit operating system, or xmm0-xmm7 (8 xmm registers) on a 32-bit operating system.

Operations in SSE for single precision floating point and double precision floating point differ slightly. My objective is to point out the differences between the calculations for these two data types using a simple summation operation over a floating point array.

All SSE instructions and data types are defined in #include <xmmintrin.h>. __m128 is used for single precision floating point numbers and __m128d is used for double precision numbers. _mm_load_pd is used for loading double precision floating point numbers and _mm_load_ps for loading single precision floating point numbers. Similarly, _mm_add_ps and _mm_hadd_ps are used for adding single precision floating point numbers, while _mm_add_pd and _mm_hadd_pd are used for adding double precision floating point numbers. The floating point array has to be aligned to 16 bytes, which can be done using _mm_malloc.

_mm_add_ps adds the four single precision floating-point values:

r0 := a0 + b0
r1 := a1 + b1
r2 := a2 + b2
r3 := a3 + b3

_mm_add_pd adds the two double precision floating-point values:

r0 := a0 + b0
r1 := a1 + b1

This is the plain C code which we wish to convert to code using SSE.
float sum = 0; // for double precision: double sum = 0;
for (int i = 0; i < n; i++)
{
    sum += scores[i];
}

Single precision floating point number addition

Sample code:

float sum = 0.0;
__m128 rsum = _mm_set1_ps(0.0);
for (int i = 0; i < n; i += 4)
{
    __m128 mr = _mm_load_ps(&a[i]);
    rsum = _mm_add_ps(rsum, mr);
}
rsum = _mm_hadd_ps(rsum, rsum);
rsum = _mm_hadd_ps(rsum, rsum);
_mm_store_ss(&sum, rsum);

Double precision floating point number addition

Sample code:

double sum = 0.0;
__m128d rsum = _mm_set1_pd(0.0);
__m128d rsum1 = _mm_set1_pd(0.0);
for (int i = 0; i < n; i += 4)
{
    __m128d mr = _mm_load_pd(&a[i]);
    __m128d mr1 = _mm_load_pd(&a[i+2]);
    rsum = _mm_add_pd(rsum, mr);
    rsum1 = _mm_add_pd(rsum1, mr1);
}
rsum = _mm_hadd_pd(rsum, rsum1);
rsum = _mm_hadd_pd(rsum, rsum);
_mm_store_sd(&sum, rsum);

You can see the difference between single precision float and double precision float is that you can add 4 values in one operation for single precision floating point numbers:

rsum = _mm_add_ps(rsum, mr);

You can add only 2 values in one operation for double precision, and therefore you need two operations for 4 values:

rsum = _mm_add_pd(rsum, mr);
rsum1 = _mm_add_pd(rsum1, mr1);

Adding a timer, you can see the SSE code is very much faster than the normal code. On my PC I observed that the SSE code is almost 4 times faster than the plain code. Hence, using SSE instructions one can develop faster complex applications where time optimization is required. This is my first post on CodeProject. There may be mistakes in this article. Please let me know and give me feedback.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

rsum = _mm_hadd_pd(rsum, rsum1);
rsum[63-0] = rsum[63-0] + rsum[127-64]; // addition in the high and low quadwords of the destination operand, saving the result in the low quadword of the destination operand.
rsum[127-64] = rsum1[63-0] + rsum1[127-64]; // addition in the high and low quadwords of the source operand, saving the result in the high quadword of the destination operand.
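To see why the double precision version needs two horizontal adds, here is a small pure-Python model of _mm_hadd_pd acting on 2-lane vectors. This is an illustration only, not SIMD code; the lane values are made up for the example.

```python
def hadd_pd(dest, src):
    # Models _mm_hadd_pd: the low lane of the result gets dest[0] + dest[1],
    # the high lane gets src[0] + src[1].
    return (dest[0] + dest[1], src[0] + src[1])

# Suppose the loop left partial sums in the two accumulators:
rsum = (1.0, 2.0)    # lanes of the first __m128d accumulator
rsum1 = (3.0, 4.0)   # lanes of the second accumulator

rsum = hadd_pd(rsum, rsum1)   # -> (3.0, 7.0): each accumulator collapsed
rsum = hadd_pd(rsum, rsum)    # -> (10.0, 10.0): the two halves combined
print(rsum[0])  # 10.0, the total that _mm_store_sd(&sum, rsum) would store
```

The first hadd collapses each accumulator to a single lane; the second combines those two partial sums, which is why the article's reduction takes exactly two hadd instructions.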
http://www.codeproject.com/Tips/499536/Single-precision-floating-point-and-double-precesi.aspx
Opening Word documents in Pythonista

What's the easiest way to get a Pythonista script to display a Word document that I have stored in a Pythonista folder? I can't seem to use webbrowser.open('pythonista://<filename>') because it assumes I am trying to launch a script. I know that Pythonista can open a Word file (presumably using Quick Look) because it does so when I simply launch the file from the Pythonista GUI. Is my only option to use a URL scheme to launch another app like Pages?

Thanks, Ken

I have a script to convert markdown to a docx file. With the help of some great people at these forums I was able to get a preview in Pythonista of the final product, from which I can then use the Open In menu. Here's what that part of my code looks like:

outfile_path = os.path.abspath(outfile)
console.quicklook(outfile_path)

outfile being the final docx product. If you know Python better than I, you might be able to adapt that to your needs, but I was thinking if you pass the file as input one way or another, that ought to work. I believe Pages is available from the Open In menu there. I don't think Pages has a URL scheme. If I'm wrong though, I'd like to know.

Probably the webbrowser module, or console.quicklook(), can open Word documents.

Thanks. I wasn't aware of console.quicklook() for some reason and it seems to be exactly what I am after. However, I am getting inconsistent results. I've tried three documents, one PDF and 2 .docx files. They all show correctly if I launch them from the Pythonista GUI. But if I use console.quicklook() the results are mixed: the first PDF displays an empty screen with just "Portable Document Format" and the file size. Open In appears, but it fails to send the PDF to any app except for Mail. The 2 .docx files display OK, but Open In cannot seem to send to any app other than Mail. In the process, I discovered that console.quicklook() won't raise an error for a non-existent file; it just displays a "Loading" screen.
In any event, the display of .docx will allow me to progress my app. Thanks

I had that problem at first, the Open In wouldn't work. I think it was the os.path part that fixed it, if I recall.

Thank you!! Yes, os.path.abspath fixed both the display of the PDF and the ability to Open In other applications. I should have paid closer attention to your first post.

You're welcome. And that's alright. I'm still quite new to Python, personally, and I only ever figured out that problem through the forum as well. Glad it's here. Also glad you got it working. Cheers!

So, onto the next related challenge. My application is using the webbrowser module and a local iPad web server to provide basic GUI elements. However, I also need it to display Word documents periodically. I'm happy to do this by dropping to the console, but it doesn't seem to let me. The very code above that successfully displays a Word doc when in a standalone script won't work from within my code. I just get a spinning dial. I'd be happy even to exit the webbrowser/web server but I don't know how... Any ideas?

It is easier to debug Python code than English prose. Would it be possible to post some code to GitHub for us to look at? :)

I better set one of those up. For now, here's a code snippet demonstrating the issue:

from bottle import get, post, request, run
import webbrowser
import console
import os

@get('/start')
def start():
    f = 'test.docx'
    outfile_path = os.path.abspath(f)
    console.quicklook(outfile_path)
    return 'Success!'

webbrowser.open('')
run(host='localhost', port=8080)

I realise that I'm using a web browser and web server and then calling a console routine, but I also need to be able to display Word documents even if it means completely dropping out of the webbrowser and terminating the web server. Alternatively, could I force it to spawn a new thread for the console quicklook display, or is it doing that anyway? Many thanks for any ideas

I've got a solution that works.
I've replaced console.quicklook(outfile_path) with webbrowser.open('file://' + quote(outfile_path)). Should have thought of that before!
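The thread's final fix, taking the absolute path and then percent-encoding it into a file:// URL, can be sketched as a small helper. quote and os.path.abspath are standard library; the file_url name is made up for this example.

```python
import os
from urllib.parse import quote

def file_url(path):
    # Make the path absolute first (the fix that made Open In work),
    # then percent-encode it so spaces etc. survive in the URL.
    return "file://" + quote(os.path.abspath(path))

# webbrowser.open(file_url("test.docx")) would then hand the document
# to the viewer instead of treating the path as a script name.
print(file_url("/tmp/my report.docx"))  # file:///tmp/my%20report.docx
```

quote leaves "/" unescaped by default, so only characters like spaces get encoded, which is exactly what a file:// URL needs.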
https://forum.omz-software.com/topic/578/opening-word-documents-in-pythonista
Hi Gurus,

I am trying to create a few roles with names starting with characters like /DCR/N000S01, but I get the message 'No suitable namespace available'. Although I am still able to create the role after this, can you please let me know why we are getting this message?

Also, I am using the BAPI 'PRGN_RFC_CREATE_ACTIVITY_GROUP' to create roles. When I input the value of the role name as /DCR/N000S01 it throws an exception, NAMESPACE_PROBLEM.

So kindly let me know why we get the message 'No suitable namespace available' when we try to create roles with names such as /DCR/N000S01, and what is the resolution?

Thanks in advance,
Amit
https://answers.sap.com/questions/8872489/getting-the-message-%27no-suitable-namespace-availab.html
At Mon, 4 Jul 2011 20:36:33 +1000, John Ky wrote:
>
> Hi Haskell Cafe,
>
> enum |$ inumLines .| inumReverse .| inumUnlines .| iter
> ...
>
> iterLines :: (Monad m) => Iter L.ByteString m [L.ByteString]
> iterLines = do
>   line <- lineI
>   return [line]
>
> iterUnlines :: (Monad m) => Iter [L.ByteString] m L.ByteString
> iterUnlines = (L.concat . (++ [C.pack "\n"])) `liftM` dataI
>
> iterReverse :: (Monad m) => Iter [L.ByteString] m [L.ByteString]
> iterReverse = do
>   lines <- dataI
>   return (map L.reverse lines)
>
> inumLines = mkInum iterLines
> inumUnlines = mkInum iterUnlines
> inumReverse = mkInum iterReverse
>
> It all works fine.
>
> My question is: Is it possible to rewrite inumReverse to be this:
>
> iterReverse :: (Monad m) => Iter L.ByteString m L.ByteString
> iterReverse = do
>   line <- dataI
>   return (L.reverse line)
>
> inumReverse = mkInum iterReverse
>
> And still be able to use it in the line:
>
> enum |$ inumLines .| {-- inumReverse goes in here somehow --} .| inumUnlines .| iter
>
> The reason I ask is that the Haskell function reverse has the type [a] -> [a],
> not [[a]] -> [[a]].
>
> I thought perhaps the alternative inumReverse is cleaner than the original as
> it behaves more similarly to Haskell's own reverse function.

I'm not sure what you are trying to achieve. If you want an iter that works on L.ByteStrings, then you can say:

iterReverse :: (Monad m) => Iter L.ByteString m L.ByteString
iterReverse = do
  line <- lineI
  return (L.reverse line)

In that case you don't need inumLines and inumUnlines. If, however, you want the type to be [L.ByteString], and you would rather do this one line at a time, instead of calling map, then you could do something like the following:

iterReverse :: (Monad m) => Iter [L.ByteString] m [L.ByteString]
iterReverse = do
  line <- headI
  return [L.reverse line]

But the code you have above should also work, so it all depends on what you are trying to achieve.

David
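David's suggestion, consuming one item at a time with headI instead of mapping over a whole chunk, has a rough analogy in Python generator pipelines. This is an illustrative sketch of the per-line composition, not iteratee semantics; the function names are invented for the example.

```python
def reverse_lines(lines):
    # Per-line worker: take one line, yield one transformed line,
    # like the headI-based iterReverse that handles a single element.
    for line in lines:
        yield line[::-1]

def unlines(lines):
    # Rejoin with a trailing newline on each line, like iterUnlines.
    return "".join(line + "\n" for line in lines)

text = ["hello", "world"]
print(unlines(reverse_lines(text)), end="")
# olleh
# dlrow
```

Each stage pulls items from the previous one on demand, so the pipeline streams line by line rather than materialising the whole list, which is the property the iteratee version is after.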
http://www.haskell.org/pipermail/haskell-cafe/2011-July/093736.html
Contents

Check the Library Reference to see if there's a relevant standard library module. (Eventually you'll learn what's in the standard library and will be able to skip this step.)

To make testing easier, you should use good modular design in your program; a test suite that automates a sequence of tests can be associated with each module.

You can find a collection of useful links on the Web Programming wiki page.

Use the standard library module smtplib. Here's a very simple interactive mail sender that uses it. This method will work on any host that supports an SMTP listener.

import sys, smtplib

fromaddr = input("From: ")
toaddrs = ...

Interfaces to disk-based hashes such as DBM and GDBM are also included with standard Python. There is also the sqlite3 module, which provides a lightweight disk-based relational database.
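Since the excerpt mentions the sqlite3 module, here is a minimal sketch of using it. The table name and values are invented for the example; only the sqlite3 API itself is standard library.

```python
import sqlite3

# ":memory:" keeps the database in RAM; pass a filename for a disk file.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
con.execute("INSERT INTO settings VALUES (?, ?)", ("lang", "python"))
con.commit()

# Parameterised queries (the "?" placeholders) avoid SQL injection.
row = con.execute(
    "SELECT value FROM settings WHERE key = ?", ("lang",)
).fetchone()
print(row[0])  # python
con.close()
```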
http://docs.python.org/3.3/faq/library.html
Summary: Guest blogger, Rohn Edwards, talks about using Windows PowerShell to access the last-modified time stamp in the registry. Microsoft Scripting Guy, Ed Wilson, is here. Welcome back guest blogger, Rohn Edwards. Rohn is one of the cofounders of the Mississippi PowerShell User Group. I’m not sure how many of you know this, but the registry stores the last-modified time for every registry key. Unfortunately, it’s not that accessible. Regedit won’t let you get it interactively, although you can get it if you export the key as a text file and then open the file. There are also third-party tools that will let you see this information. Today, I want to show you how to do it with Windows PowerShell. As far as I can tell, WMI and .NET don’t offer a way to get the last-modified time. The only way that I know to get this information from Windows PowerShell is to use platform invocation services (P/Invoke). For some great information about P/Invoke, see the following Hey, Scripting Guy! Blog posts: - Use PowerShell and Pinvoke to Remove Stubborn Files - Use PowerShell to Duplicate Process Tokens via P/Invoke After a little Internet searching, I came across two Win32 functions that will let you get this information: RegQueryInfoKey and RegEnumKeyEx. In this post, I’m going to show you how to use RegQueryInfoKey. Hopefully, after reading this, you can create a signature for RegEnumKeyEx on your own, if you would like to use that instead. If you follow the link to the MSDN page on RegQueryInfoKey, you can find the C++ signature: Almost any time you hear anything about P/Invoke, you’ll see a reference to pinvoke.net. I’ll agree that this is a wonderful resource for creating C# signatures in Windows PowerShell, but I usually use it only as a starting point, and I make sure that I agree with the types that were chosen for each entry. In this case, the C++ signature is simple enough to create a C# signature. 
If you look at the parameter types, you’ll see that there are only four unique types: HKEY, LPTSTR, LPDWORD, and PFILETIME. By using this Type Conversion table (Table 1), you can match the C++ to the following C# types: HKEY According to the RegOpenKeyEx function documentation, HKEY is a handle. hKey [in] A handle to an open registry key. This handle is returned by the RegCreateKeyEx or RegOpenKeyEx function, or it can be one of the following predefined keys: HKEY_CLASSES_ROOT HKEY_CURRENT_CONFIG HKEY_CURRENT_USER HKEY_LOCAL_MACHINE HKEY_USERS The conversion table says that handles are represented by the IntPtr, UIntPtr, or HandleRef types in managed code. At the time of this writing, the pinvoke.net signature uses a UIntPtr. This would work just fine, but we’re going to save ourselves some trouble, and use a different type (more on this in a little bit). LPTSTR This is handled by String or StringBuilder in .NET. LPDWORD Use Int32 or UInt32. PFILETIME This is a FILETIME structure: typedef struct _FILETIME { DWORD dwLowDateTime; DWORD dwHighDateTime; } FILETIME, *PFILETIME; It turns out that .NET already has a FILETIME structure that we can use: System.Runtime.InteropServices.ComTypes.FILETIME. We’ll use that combined with a little bit of Windows PowerShell 3.0 script to convert it to a DateTime object. There’s only one more thing: Remember how I said we’d come back to the hKey handle? Well, when using the Win32 functions to work with the registry, you have to open a handle to a key before you can do anything with it. This requires a call to RegOpenKeyEx, which would require its own C# signature. After you open a handle and use it, you have to close it with a call to RegCloseKey, which requires yet another signature. The hKey parameter wants a handle to an open key. An IntPtr handles this, but you still need two more functions to get that solution working. 
Windows PowerShell 4.0 or Windows PowerShell 3.0 provide an open handle to a registry key when you get the key by using Get-ChildItem or Get-Item (you can actually thank the .NET Framework). Take a look at the Handle property on a RegistryKey object:

As far as I can tell, you can use the SafeRegistryHandle in place of an IntPtr in the signature. Taking all of that into account, our call to Add-Type looks something like the following:

$Namespace = "HeyScriptingGuy"
        System.Runtime.InteropServices.ComTypes.FILETIME lpftLastWriteTime
    );
}
$($Namespace | ForEach-Object { "}" })
"@

The $Namespace variable exists only so that you can easily change the namespace in one place, and it will be reflected throughout the script. You can assign an array of strings to have nested namespaces, too. The LP and P prefixes on the parameter names mean that you're actually passing pointers, so that's why we're using the Out keyword on almost all of the parameters (and when we use them, we'll pass them by reference). Here's how to use the function:

# Store the type in a variable:
$RegTools = ("{0}.advapi32" -f ($Namespace -join ".")) -as [type]

# Get a RegistryKey object (we need the handle)
$RegKey = Get-Item HKLM:\SOFTWARE

# Create any properties that we want returned:
$LastWrite = New-Object System.Runtime.InteropServices.ComTypes.FILETIME

# Call function:
$RegTools::RegQueryInfoKey($RegKey.Handle, $null, [ref] $null, $null, [ref] $null, [ref] $null, [ref] $null, [ref] $null,

# Create datetime object
[datetime]::FromFileTime($FileTimeInt64)

The call to RegQueryInfoKey should return 0 if the call was successful. Passing $null as a parameter means that we aren't interested in it (notice in the C++ signature that they're optional). The other parameters aren't really that useful because they're almost all available already.
But here’s how you would get them (assuming you already ran the previous lines that define $RegTools and $RegKey): # Create any properties that we want returned: $SubKeyCount = $ValueCount = $null $LastWrite = New-Object System.Runtime.InteropServices.ComTypes.FILETIME $StringBuffer = 255 $ClassName = New-Object System.Text.StringBuilder $StringBuffer # Call function: $RegTools::RegQueryInfoKey($RegKey.Handle, $ClassName, [ref] $StringBuffer, $null, [ref] $SubKeyCount, [ref] $null, [ref] $null, [ref] $ValueCount, # Return results: [PSCustomObject] @{ Key = $RegKey.Name ClassName = $ClassName.ToString() SubKeyCount = $SubKeyCount ValueCount = $ValueCount LastWriteTime = [datetime]::FromFileTime($FileTimeInt64) } So now we can manually run this Win32 function when we want to get the last-modified time. The only issue is that using it is a lot of work—look at how many parameters there are! Thanks to Colin Robertson and Lee Hart, Windows programming writers, who Thanks to Colin Robertson and Lee Hart, Windows programming writers, who explained the risks of converting the FILETIME structure to UInt64. Please join me tomorrow to see how we can take what we’ve learned today and wrap it up into a much friendlier and easier-to-use reusable function. The complete script for this blog post is available in the Script Center Repository: Get Last Write Time and Class Name of Registry Keys. ~Rohn Thanks, Rohn, for sharing your expertise. Article!!! Very Helpful The method shown requires at least PSv3, but there is a PSv2 method if you follow the link to the script repository.
https://blogs.technet.microsoft.com/heyscriptingguy/2013/12/30/use-powershell-to-access-registry-last-modified-time-stamp/
1. Why use swap()?

swap() isn't a const function, I suppose, because the name implies you want to replace the list of this object with the list of rValue. So it fails. But why swap at all?

2. rValue.consensusSequenceList

That is because an iterator on a list isn't const either.

3. What to do? Try this:

CProtease::CProtease(const CProtease &rValue)
{
    consensusSequenceList = rValue.consensusSequenceList;
    csl::iterator i;
    for(i = consensusSequenceList.begin(); i != consensusSequenceList.end(); i++)
        *i = new CCutScheme(*(*i));
    name = rValue.name;
}

>> *i = new CCutScheme(*(*i));

That looks weird, but it makes a 'copy' of the object whose pointer had been copied flat when assigning the list _AND_ assigns the pointer of the copy to the list entry. That all would be much easier if you hadn't used pointers as template arguments but objects. Then

consensusSequenceList = rValue.consensusSequenceList;

had done all the job.

Regards, Alex

I feel like

we::we(const we &rValue)
{
    listPos = rValue.listPos;
    myList = rValue.myList;
}

is closer... listPos = rValue.listPos; is not needed, and you should re-create the iterator the way you do in the for statement.

Also, are you sure that you need a pointer list? Could you do this with a value-based list and use the default copy constructor/constructor?

I was actually using the same approach as you before, but when I do this:

we *whatever = new we(some arguments);

whatever does not contain the list which was created in we(some arguments). Seems to me that only the pointers are copied and they are of course deleted when we(some arguments) falls out of scope ;-)

~we()
{
    for(...)
        delete(*listPos);
    myList.clear();
}

If I forget about deleting in the destructor, which is of course not the right thing to do I should say, whatever contains the elements formerly contained in we(some arguments).

@corey I probably wouldn't need a pointer list for this combination of classes, but that's not what I work with ;-) If I try this I get a compiler error:

we *whatever(some arguments);

so for me it seems I cannot avoid the new operation here, but that also means that I need a working operator= function and a copy constructor. Plus I cannot delete what I haven't newed, so the destructor wouldn't work.

Regards, Jens

listPos = rValue.listPos;

the same error comes up in the next line:

for(listPos = rValue.myList.begin(); listPos != rValue.myList.end(); listPos++)

This is quite weird. It would probably work if I do not add const to my operator= and copy constructor argument lists. Wouldn't be the best workaround I guess.

>> I mean really copying values not merely pointers.

Try this:

we::we(const we &rValue)
{
    listPos = rValue.listPos;
    for(listPos = rValue.myList.begin(); listPos != rValue.myList.end(); listPos++)
        myList.push_back(new ac(*(*listPos)));
}

That will work if class ac has a copy constructor. BTW, 'we' and 'ac' are poor class names. Could you explain what you're doing there?

There is one right parenthesis missing, should be

myList.push_back(new ac(*(*listPos)));

>> listPos = rValue.listPos;
>> is not needed

That means you don't need an iterator as a member. Do this:

we::we(const we &rValue)
{
    ls::iterator iter;
    for(iter = rValue.myList.begin(); iter != rValue.myList.end(); iter++)
        myList.push_back(new ac(*(*iter)));
}

and get rid of that member listPos.

Regards, Alex

That is not a problem, as with listPos = rValue.myList.begin() it gets a valid entry again. But as the iterator gets invalid at end of any loop it makes no sense to store it as a member.
Furthermore, as an STL iterator isn't associated with a specific instance of the container class, you could use your own member listPos and wouldn't need to use rValue.listPos as the iterator.

Regards, Alex

I get a compiler error, however:

error C2679: binary '=' : no operator found which takes a right-hand operand of type 'std::list<_Ty>::const_ite
with [ _Ty=CCutScheme * ]

which points me to this line:

CProtease CProtease::operator=(const
{
    csl::iterator i; //produces error see above in the for loop
    for(i = rValue.consensusSequenceLi
        consensusSequenceList.push
    name = rValue.name;
    return(*this);
}

Same error for the copy constructor.

@itsmeandnobodyelse do you like these class names?

@corey You are right about the iterator, but it is not the iterator I want, because I learned how to get, pass and create one here: Nicely done by khkremer, I have to add. I want all the data members of the linked list copied to the class, not just referenced.

Cheers Jens

What is csl?? Can you post the typedef to that?? There should be a typedef in the class declaration of CProtease, similar to that:

class CProtease
{
    ...
    typedef std::list<_Ty>::const_iter
};

And you shouldn't use a 'csl' in another class. And CProtease::consensusSequen Maybe you have to use std::list<_Ty>::iterator and not std::list<_Ty>::const_iter

Regards, Alex

that's why I tried it with the easy example I made up at the beginning ;-) Anyway, here is the further information. csl is unique to CProtein and is typedef'ed below:

class CProtein
{
public:
    //nobody else is ever gonna use these classes, make 'em public for now
    typedef list<CCutScheme*> csl;
    csl consensusSequenceList;
    csl::iterator listPos;
    //some other stuff
};

Greetz Jens

CCutScheme has both a functioning operator= function and a working copy constructor. It doesn't contain lists or any other pointer structures, so that was easy.
Even I got that straight ;-)

You should post all your class declarations to give us a chance to not only guess what's going wrong but to know for sure.

Regards, Alex

myList.swap(rValue.myList)

The problem seems to be somewhere else. As far as I can see, my copy constructors or operator= functions are never called. So there may be something wrong with that. See the headers for the two classes below. If you need anything besides the headers, please post. I will increase points because this may take longer than I thought it would. Seems to be a bigger problem.

@rendaduiyan looks good but fails to copy the const list rValue. Seems like a const list cannot be iterated through ... strange, since no attempt to change the data is made.

OK, I just now did some serious debugging // at least I tried

So I came up with this problem: class CSequenceSuggestions contains a class variable:

COptions options; //this is the only definition in the header of CSequenceSuggestions

If I set a break point in the constructor:

COptions::COptions(void)
{
    PRECISION = 700;
    PRECISION /= 1000000;
    FRUNMINID = 5;
    SRUNMINID = 3;
    MININTRONLENGTH = 1;
    MAXINTRONLENGTH = 700;
    PATHTOCLEAVINGENZYMES = "c:\\gpfc++\\enzymes.cdf";
    CCuttingEnzyme cutter;
    if(cutter.ReadEnzymesFromF
        PROTEASE = cutter.GetEnzyme("TRYPSIN"
    else
    {
        PROTEASE->name = "TRYPSIN";
        PROTEASE->consensusSequenc
        PROTEASE->consensusSequenc
    }
}

CProtease is well defined at this point ;-) Next time I call options, every variable therein is fine. Except for Protease, which then is undefined.

Header for COptions:

CCuttingEnzyme contains a list of CProtease, which contains a list of CCutScheme. As I stated before, this used to work for me until I learned how to actually delete list members. When I implemented this in all the destructors above, this happened. So I thought it was somehow related. Please tell me if you need to know anything else.
Thank you very much so far
Jens

#pragma once
#include "protease.h"

class COptions
{
public:
    COptions(void);
    ~COptions(void);
    int FRUNMINID, SRUNMINID, SRUNABSMIN, MININTRONLENGTH, MAXINTRONLENGTH;
    double PRECISION;
    CProtease *PROTEASE;
    CString PATHTOCLEAVINGENZYMES;
};

PROTEASE is a pointer in COptions. However, you never assign storage to it in the constructor. So,

PROTEASE->name = "TRYPSIN";

will definitely fail if the else branch wins. You have to do this somewhere in the constructor:

PROTEASE = new CProtease(...);

>> Next time I call options every variable therein is fine

A constructor runs only once for any instance. A 'next' call constructs a new COptions instance that knows nothing from a previous call.

>> Except for Protease which then is undefined.

I think, first time cutter.ReadEnzymesFromFile

Regards, Alex

class COptions
{
public:
    COptions(void);
    ~COptions(void);
    int FRUNMINID, SRUNMINID, SRUNABSMIN, MININTRONLENGTH, MAXINTRONLENGTH;
    double PRECISION;
    CProtease PROTEASE;
    CString PATHTOCLEAVINGENZYMES;
};

Then, you have to change in the constructor

PROTEASE = cutter.GetEnzyme("TRYPSIN"

to

PROTEASE = *cutter.GetEnzyme("TRYPSIN

Now the assign operator gets called. Further changes are:

PROTEASE->name = "TRYPSIN";
PROTEASE->consensusSequenc
PROTEASE->consensusSequenc

to

PROTEASE.name = "TRYPSIN";
PROTEASE.consensusSequence
PROTEASE.consensusSequence

That applies for all occurrences of PROTEASE in any of your member functions.

Regards, Alex

PROTEASE = cutter.GetEnzyme("TRYPSIN"

Fix->

PROTEASE = new CProtease(*cutter.GetEnzym

I just have a pointer here, so as soon as cutter falls out of scope and deletes its list, the pointer is invalid. So when I implemented the actual deletion of the list members I got this problem here. It all breaks down to copying the list now. That means we are back at the original question: how can I have the list actually copied?
What bugs me is that with a standard declaration of the cc and op= functions it will not compile:

CProtease::CProtease(const

this declaration will not let me access the list in rValue:

    consensusSequenceList.swap
    csl::iterator i;
    for(i = rValue.consensusSequenceLi
    //and this doesn't work either.
        consensusSequenceList.push
    name = rValue.name;
}

With this declaration it works fine:

CProtease::CProtease(CProt
}

But I would like it to be standard (const).

Thanks, Jens

how do you come up with that stuff? I am sure the pointers I push onto the list at this point may as well be objects, but I would like to have a deeper understanding of the matter. Besides, I will need to handle bigger objects soon, so it will be of use to know how to deal with pointer lists. I like this: "That looks weird, but it makes a 'copy' of the object whose pointer had been copied flat when assigning the list _AND_ assigns the pointer of the copy to the list entry.". I will assign you all the points if there aren't any objections to that. I know all of you did a great job and invested a lot of time. I really appreciate that. BTW, is anyone from Philly, so I could hire you to sit together with me and look over all my code for a weekend or so? I guess I really need some real-time help.
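For reference, the pattern this thread converges on (deep-copying a std::list of pointers in both the copy constructor and operator=, and deleting the copies in the destructor) can be sketched with neutral names. Item and Holder below are illustrative stand-ins, not the classes from the question:

```cpp
#include <cassert>
#include <list>
#include <string>

// Illustrative element type; stands in for CCutScheme/ac in the thread.
struct Item {
    std::string value;
    explicit Item(const std::string& v) : value(v) {}
};

// Holder owns the Items its list points to, so it must deep-copy them.
class Holder {
public:
    Holder() {}

    // Copy constructor: clone each pointed-to Item, not just the pointer.
    Holder(const Holder& rhs) { copyFrom(rhs); }

    // Assignment: release our own Items first, then clone from rhs.
    Holder& operator=(const Holder& rhs) {
        if (this != &rhs) {
            clear();
            copyFrom(rhs);
        }
        return *this;
    }

    ~Holder() { clear(); }

    void add(const std::string& v) { items.push_back(new Item(v)); }

    std::list<Item*> items;

private:
    // A const source must be walked with a const_iterator; using a plain
    // iterator here is exactly the C2679 error discussed in the thread.
    void copyFrom(const Holder& rhs) {
        for (std::list<Item*>::const_iterator it = rhs.items.begin();
             it != rhs.items.end(); ++it)
            items.push_back(new Item(**it));
    }

    void clear() {
        for (std::list<Item*>::iterator it = items.begin();
             it != items.end(); ++it)
            delete *it;
        items.clear();
    }
};
```

The key points match the advice above: use a const_iterator when the right-hand side is const, do not keep an iterator as a data member, and make sure every new performed by the copy has a matching delete in the destructor.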
https://www.experts-exchange.com/questions/20963591/How-can-I-deep-copy-an-stl-list-of-pointers-In-regards-to-copy-constructor-and-operator-for-a-class-containing-an-stl-list.html
Red Hat Bugzilla – Bug 1315010
gcc: -ftrack-macro-expansion=0 triggers indentation warnings with -Wall
Last modified: 2017-07-27 04:04:32 EDT

Created attachment 1133367 [details] C reproducer

Description of problem:
Macros htons and ntohs emit the gcc warning Wmisleading-indentation, which is enabled with the gcc argument -Wall. It breaks my effort to compile without warnings and with -Werror.

Version-Release number of selected component (if applicable):
glibc-headers-2.23.1-5.fc24.x86_64
gcc-6.0.0-0.14.fc24.x86_64

How reproducible: Deterministic

Steps to Reproduce:
1. Compile attached file
sh$ gcc -O2 -Wall -ftrack-macro-expansion=0 -c pok.c

Actual results:
pok.c: In function ‘main’:
pok.c:13:5: warning: statement is indented as if it were guarded by... [-Wmisleading-indentation]
     SOME_MACRO(htons(16), uint16_t);
     ^~~~~~~~~~
pok.c:13:5: note: ...this ‘else’ clause, but it is not
pok.c:14:5: warning: statement is indented as if it were guarded by... [-Wmisleading-indentation]
     SOME_MACRO(ntohs(16), uint16_t);
     ^~~~~~~~~~
pok.c:14:5: note: ...this ‘else’ clause, but it is not

Expected results:
No gcc warnings.

Additional info:
The attached diff for /usr/include/bits/byteswap-16.h fixed the issue for me. I can see a similar pattern also in /usr/include/bits/byteswap.h, but the functions ntohl and htonl used a different part of the header with optimisation (__bswap_32). But I can prepare a patch if you want.

Created attachment 1133368 [details] Proposed fix

I believe this was a GCC bug. I no longer get a -Wall warning for the following test program with gcc-6.0.0-0.20.fc24.x86_64.

#include <stdlib.h>
#include <stdio.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    int x = atoi (argv[1]);
    printf ("%d\n", ntohs (x));
}

I'm sorry but I can still reproduce this bug.
> Please try to reproduce with attached C file and try to follow steps in
> description of this ticket.

Ah, the culprit is -ftrack-macro-expansion=0. Why do you use it?

due to another bug in gcc BZ986923

Fair enough, reassigning to gcc. We cannot fix this on the glibc side; the macro definition is fine as-is. I'm not sure what the right solution for this problem is. Maybe indentation warnings in macros need to be disabled with -ftrack-macro-expansion=0.

The macro definition is fine, but after expansion it is on a single line, and therefore there will be something like

else __asm__ (....); __v;

and it's misleading indentation :-) I thought that the proposed change for glibc [ attachment 1133368 [details] ] was a reasonable compromise.

(In reply to Lukas Slebodnik from comment #7)
> I thought that proposed change for glibc [ attachment 1133368 [details] ] is
> a reasonable compromise.

We would have to rewrite many more macros in a way which is not conforming to the GNU coding style. I don't think this is an option. With -ftrack-macro-expansion=0 you really don't know if you are in a macro expansion or not; that is the whole point of the option.

Filed as an upstream bug.
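For readers unfamiliar with the warning, -Wmisleading-indentation fires when a statement is laid out as though it were controlled by a preceding if/else but is not. A minimal stand-alone illustration (my own sketch, not the attached reproducer) is:

```c
static int count = 0;

/* gcc -Wall flags the second indented statement with a warning like:
 *   statement is indented as if it were guarded by... [-Wmisleading-indentation]
 * because despite the layout it is NOT controlled by the `if`. */
int misleading(int flag)
{
    if (flag)
        flag = 2;
        count++;   /* runs unconditionally */
    return flag;
}
```

The glibc htons/ntohs macros expand to an if/else inside a statement expression; with -ftrack-macro-expansion=0 gcc no longer knows the code came from a macro, where the whole expansion effectively sits on one line (the "else __asm__ (....); __v;" case quoted above), so the same heuristic misfires on ordinary-looking calls.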
https://bugzilla.redhat.com/show_bug.cgi?id=1315010
The initial UI will allow us to change the name of a person and save those changes to the database. This requires a couple of things:

1. The database should be able to commit changes to the database at any time
2. The view-model needs to be writable from the view
3. The view has to allow input and bind the input to the view-model

To be able to commit changes to the database at any time, a long-running transaction is needed. This transaction should be attached to the PersonJson view-model. In code, that is done by wrapping everything in our handler inside a Db.Scope:

Handle.GET("/HelloWorld", () =>
{
    return Db.Scope(() =>
    {
        Session.Ensure();
        var person = Db.SQL<Person>("SELECT p FROM Person p").FirstOrDefault();
        return new PersonJson { Data = person };
    });
});

To make properties in the view-model writable from the view, a dollar sign is added to the end of the property name. With this, "FirstName" becomes "FirstName$" and "LastName" becomes "LastName$". The view-model should then look like this:

{
    "Html": "/HelloWorld/PersonJson.html",
    "FirstName$": "",
    "LastName$": ""
}

As mentioned earlier, we also want the possibility to save at will. In order to do this, there needs to be some kind of communication between the view and the code-behind. This can be accomplished using a trigger property, which is basically an integer that can be changed from the client and handled in the code-behind. This is how it should look:

{
    "Html": "/HelloWorld/PersonJson.html",
    "FirstName$": "",
    "LastName$": "",
    "SaveTrigger$": 0
}

To act on the change in the view-model that is triggered from the view, an event handler can be registered in the code-behind. In this case, where the goal is to save, the following code can be used:

using Starcounter;

namespace HelloWorld
{
    partial class PersonJson : Json
    {
        void Handle(Input.SaveTrigger action)
        {
            Transaction.Commit();
        }
    }
}

Input.SaveTrigger action makes the method run when a change is detected in the SaveTrigger value. Note that we do not need to use a $ here like in the view-model.
The rule is that we use $ for the view and view-model, but not in the application code.

Transaction.Commit() commits the current state of the view-model to the database so that the data is accessible from other transactions.

With server-side view-models like this, you don't have to write a single line of "glue code" to update the view in HTML. Any change in the view-model made in C# will instantly be synced to the client using Palindrom, which in turn automatically renders because of Polymer's data bindings. This saves you from creating single-purpose REST APIs, the need for double validation of user input, and more. This also means that all logic that can be written on the server side should be written on the server side to enjoy these benefits.

Now, with a view-model that is writable and a database which allows commits at any point in time, the view can include elements that change the properties in the view-model. We'll change our previous text elements to input elements and add a button:

<template>
    <template is="dom-bind">
        <fieldset>
            <label>First name:</label>
            <input value="{{model.FirstName$::input}}">
        </fieldset>
        <fieldset>
            <label>Last name:</label>
            <input value="{{model.LastName$::input}}">
        </fieldset>
        <button value="{{model.SaveTrigger$::click}}" onmousedown="++this.value">Save</button>
    </template>
</template>

The ::input declaration on the input value sets up an event listener that updates the property it's bound to on every keystroke. This means that every time a change is made in the input field, the view-model will reflect that change. To increment the SaveTrigger$ value in the view-model, we bind it to the value on the button and attach a ::click event listener. We then increment this value when the button is pressed.

We now have a program where we can change the view-model in real time and then commit our changes to the database at will. To see how it looks, start the application with F5 and go to in the browser.
You should see two input boxes with their respective label and a button below. If you are an especially curious person, you can try to change the name and then take a look at the database again with SQL. Here's how it should work: Neat! Right? The next step is to display the name change in real time and let the code-behind calculate the full name. If you get any errors, you can check your code against the source code.
https://docs.starcounter.io/hello-world-tutorial/first-interactive-ui
How to write simple mathematical programs in C# ....1

This article aims to provide a solution to a simple mathematical problem: how to arrive at "amicable numbers" using C# as the tool. It uses the basics of C# — no, rather, the basic building blocks of C# loops, variables and ArrayLists, all very elementary — to solve an interesting anomaly of the mathematical world.

I want to share with you how C# can be used to solve fun problems in the mathematical world. Here I use some simple C# constructs like loops, ref variables and ArrayLists to achieve a practical solution. The solution is divided into 2 methods, both of which constitute steps in the solution. Comments have been added listing the significance of each step in easy-to-understand language. This article is aimed at an audience that has grasped the essential elements of C# and is looking for some avenues to "get dirty with code". I wish to emphasize that this is my original solution and is not plagiarized from some forum or newsgroup. The solution is not "rocket science". It is realistic and works well. Try out the code and work backwards from the solution. By reverse engineering the code you will arrive at my thought process. Not very hard to fathom.

As an example consider "amicable numbers". Amicable numbers are defined as a pair of numbers, each of which is the sum of the factors of the other (e.g. 220 and 284). How can we find such numbers in C#? Here is my original solution.

First the routine itself. This function lists the factors of a number, excluding the number itself, and returns the sum through the ref parameter.

private void GetFactorDetails(double num, ref double numsum)
{
    double n = num; //here we identify the input
    double j = 2; // start number
    double k = 0; //initialize variable
    double fcount = 1; //initializer
    ArrayList ar = new ArrayList(); //an ArrayList; remember to refer to the namespace System.Collections
    while (j < n) // loop from 2 to n but don't include n itself (please note)
    {
        k = (n % j); // is the number perfectly divisible, i.e. is j a factor
        if (n != j) // all factors not including the input
        {
            if ((int)k == 0)
            {
                fcount += j;
                j++;
                numsum = fcount;
                continue; //take the sum of the factors (as required in the problem statement)
            }
            else
            {
                j++;
                continue; // skip else
            }
        }
        n++;
        fcount = 0;
        numsum = 0;
        if ((int)n == 1000)
            break; // an arbitrary limit has been set for convenience
    } //loop
}

Then the calling routine. Here we call the above routine twice (to implement the above logic). If the number in the first call equals the sum of the factors in the second call, we have our numbers.

//Amicable Numbers
ArrayList ar = new ArrayList();
double snum = 0;
double tnum = 0;
for (int jj = 1; jj < 2000; jj++)
{
    GetFactorDetails(jj, ref snum); //.....(1) jj is the input and snum is the sum of the factors
    GetFactorDetails(snum, ref tnum); //....(2) snum is the input and tnum is the sum of the factors
    if ((jj == tnum) && (jj != snum)) // here comes the test at last
    {
        if (!(ar.Contains(jj.ToString()))) // don't add the same number twice
        {
            ar.Add(jj.ToString());
        }
        if (!(ar.Contains(snum.ToString()))) // similar to above
        {
            ar.Add(snum.ToString());
        }
    }
}
for (int jk = 0; jk < ar.Count; jk++)
{
    MessageBox.Show(ar[jk].ToString()); // show output
}

It's as simple as that. Funny, if you think some complex algorithm is involved. Easy as 1, 2, 3.
https://www.dotnetspider.com/resources/45770-How-to-write-simple-mathematical-programs-in-C-1.aspx
brk, sbrk - change space allocation (LEGACY)

#include <unistd.h>

int brk(void *addr);
void *sbrk(intptr_t incr);

The brk() and sbrk() functions are used to change the amount of space allocated for the calling process. The change is made by resetting the process' break value and allocating the appropriate amount of space. The amount of allocated space increases as the break value increases. The newly-allocated space is set to 0. However, if the application first decrements and then increments the break value, the contents of the reallocated space are unspecified.

The brk() function sets the break value to addr and changes the allocated space accordingly.

The sbrk() function adds incr bytes to the break value and changes the allocated space accordingly. If incr is negative, the amount of allocated space is decreased by incr bytes. The current value of the program break is returned by sbrk(0).

The behaviour of brk() and sbrk() is unspecified if an application also uses any other memory functions (such as malloc(), mmap(), free()). Other functions may use these other memory functions silently.

It is unspecified whether the pointer returned by sbrk() is aligned suitably for any purpose.

These interfaces need not be reentrant.

The brk() and sbrk() functions will fail if:

- [ENOMEM] - The requested change would allocate more space than allowed.

The brk() and sbrk() functions may fail if:

- [EAGAIN] - The total amount of system memory available for allocation to this process is temporarily insufficient. This may occur even though the space requested was less than the maximum data segment size.
- [ENOMEM] - The requested change would be impossible as there is insufficient swap space available, or would cause a memory allocation conflict.

None.

The brk() and sbrk() functions have been used in specialised cases where no other memory allocation function provided the same capability.
The use of malloc() is now preferred because it can be used portably with all other memory allocation functions and with any function that uses other allocation functions. None. exec, malloc(), mmap(), <unistd.h>.
http://pubs.opengroup.org/onlinepubs/007908799/xsh/brk.html
The golden ratio is the larger root of the equation φ² – φ – 1 = 0. By analogy, golden ratio primes are prime numbers of the form p = φ² – φ – 1 where φ is an integer. To put it another way, instead of solving the equation φ² – φ – 1 = 0 over the real numbers, we're looking for prime numbers p where the equation can be solved in the integers mod p. [1]

Application

When φ is a large power of 2, these prime numbers are useful in cryptography because their special form makes modular multiplication more efficient. (See the previous post on Ed448.) We could look for such primes with the following Python code.

from sympy import isprime

for n in range(1000):
    phi = 2**n
    q = phi**2 - phi - 1
    if isprime(q):
        print(n)

This prints 19 results, including n = 224, corresponding to the golden ratio prime in the previous post. This is the only output where n is a multiple of 32, which was useful in the design of Ed448.

Golden ratio primes in general

Of course you could look for golden ratio primes where φ is not a power of 2. It's just that powers of 2 are the application where I first encountered them.

A prime number p is a golden ratio prime if there exists an integer φ such that p = φ² – φ – 1 which, by the quadratic formula, is equivalent to requiring that m = 4p + 5 is a square. In that case φ = (1 + √m)/2. Here's some code for seeing which primes less than 1000 are golden ratio primes.

from sympy import primerange

def issquare(m):
    return int(m**0.5)**2 == m

for p in primerange(2, 1000):
    m = 4*p + 5
    if issquare(m):
        phi = (int(m**0.5) + 1) // 2
        assert(p == phi**2 - phi - 1)
        print(p)

By the way, there are faster ways to determine whether an integer is a square. See this post for algorithms. (Update: Aaron Meurer pointed out in the comments that SymPy has an efficient function sympy.ntheory.primetest.is_square for testing whether a number is a square.)
Instead of looping over primes and testing whether it's possible to solve for φ, we could loop over φ and test whether φ leads to a prime number.

for phi in range(1000):
    p = phi**2 - phi - 1
    if isprime(p):
        print(phi, p)

Examples

The smallest golden ratio prime is p = 5, with φ = 3.

Here's a cute one: the pi prime 314159 is a golden ratio prime, with φ = 561.

The golden ratio prime that started this rabbit trail was the one with φ = 2²²⁴, which Mike Hamburg calls the Goldilocks prime in his design of Ed448.

Related posts

[1] If p = φ² – φ – 1 for some integer φ, then φ² – φ – 1 = 0 (mod p). But the congruence can have a solution when p is not a golden ratio prime. The following code shows that the smallest example is p = 31 and φ = 13.

from sympy import primerange
from sympy.ntheory.primetest import is_square

for p in primerange(2, 100):
    m = 4*p + 5
    if not is_square(m):
        for x in range(p):
            if (x**2 - x - 1) % p == 0:
                print(p, x)
                exit()

3 thoughts on "Golden ratio primes"

SymPy has sympy.ntheory.primetest.is_square that performs those faster algorithms.

John, in your opening paragraph you seem to be tacitly saying that every prime that *divides* a number of the form n^2-n-1 can actually be *written* in the form k^2-k-1 (since the former condition is equivalent to the mod-p solvability of the equation). Is there an easy way to see that this fact is true? Or are you not claiming this?

Cf
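The examples above can be checked without SymPy. Here is a self-contained verification of the smallest golden ratio prime and the pi prime, using simple trial division, which is plenty at this size:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def golden_ratio_phi(p):
    """Return phi if p == phi**2 - phi - 1 for some integer phi, else None."""
    m = 4 * p + 5
    r = int(m ** 0.5)
    if r * r != m:          # m must be a perfect square
        return None
    return (r + 1) // 2     # phi = (1 + sqrt(m)) / 2

# Smallest golden ratio prime: p = 5 with phi = 3.
assert is_prime(5) and golden_ratio_phi(5) == 3

# The pi prime 314159 is a golden ratio prime with phi = 561.
assert is_prime(314159) and golden_ratio_phi(314159) == 561
assert 561**2 - 561 - 1 == 314159
```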
https://www.johndcook.com/blog/2019/05/12/golden-ratio-primes/
Created on 2021-03-23 22:56 by levineds, last changed 2021-04-23 18:21 by steve.dower. This issue is now closed. Windows. RFC8089 doesn't specify "a mechanism for translating namespaced paths ["\\?\" and "\\.\"] to or from file URIs", and the Windows shell doesn't support them. So what's the practical benefit of supporting them in nturl2path? > Windows file paths are limited to 256 characters, Classically, normal filepaths are limited to MAX_PATH - 1 (259) characters, in most cases, except for a few cases in which the limit is even smaller. For a normal filepath, the API replaces slashes with backlashes; resolves relative paths; resolves "." and ".." components; strips trailing dots and spaces from the final path component; and, for relative paths and DOS drive-letter paths, reserves DOS device names in the final path component (e.g. CON, NUL). The kernel supports filepaths with up to 32,767 characters, but classically this was only accessible by using a verbatim \\?\ filepath, or by using workarounds based on substitute drives or filesystem mountpoints and symlinks. With Python 3.6+ in Windows 10, if long paths are enabled in the system, normal filepaths support up to the full 32,767 characters in most cases. The need for the \\?\ prefix is thus limited to the rare case when a verbatim path is required, or when a filepath has to be passed to a legacy application that doesn't support long paths. I really meant 255 characters not 256 because I was leaving three for "<drive name>:/". I suppose the most reasonable behavior is to strip out the "\\?\" before attempting the conversion as the path is sensible and parsable without that, as opposed to the current behavior which is to crash. The practical benefit is to permit the function to work on a wider range of inputs than currently is possible for essentially no cost. > I. I think that would make the most sense, yes. 
New changeset 3513d55a617012002c3f82dbf3cec7ec1abd7090 by Steve Dower in branch 'master': bpo-43607: Fix urllib handling of Windows paths with \\?\ prefix (GH-25539) New changeset 04bcfe001cdf6290cb78fa4884002e5301e14c93 by Miss Islington (bot) in branch '3.9': bpo-43607: Fix urllib handling of Windows paths with \\?\ prefix (GH-25539) New changeset e92d1106291e5a7d4970372478f2882056b7eb3a by Miss Islington (bot) in branch '3.8': bpo-43607: Fix urllib handling of Windows paths with \\?\ prefix (GH-25539)
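The behaviour agreed on in the discussion, treating the path as if the verbatim prefix were not there, can be sketched outside the standard library like this. This is an illustration of the idea only, not the actual patch from GH-25539, and strip_windows_prefix is a hypothetical helper name:

```python
def strip_windows_prefix(path):
    """Strip the Windows verbatim-path prefix (and its UNC variant)
    so the remainder can be handled like an ordinary DOS/UNC path.
    Simplified illustration; the real fix lives in nturl2path."""
    prefix = '\\\\?\\'              # the four characters  \ \ ? \
    if path.startswith(prefix):
        rest = path[len(prefix):]
        if rest.upper().startswith('UNC\\'):
            # \\?\UNC\server\share\...  ->  \\server\share\...
            return '\\\\' + rest[4:]
        return rest
    return path
```

With the prefix removed, the remaining drive-letter or UNC path is "sensible and parsable", as noted above, and conversion to a file URI can proceed as usual.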
https://bugs.python.org/issue43607
Reactjs Application setup from scratch

How to set up without using any wrapper library

Hi everyone. In this article we are going to understand how to set up a React application from scratch, without using any wrapper library such as create-react-app. Let's begin with an empty folder named REACT-DEMO.

Initial Setup

Let's open a command prompt inside our folder and run the command below for the initial setup.

npm init -y

This will now create a package.json file inside our application folder.

Installing Bundlers

We are using webpack as the bundler library, which is recommended by the React community, and the Babel compiler for transpiling the JavaScript code in our application. Let's now install these libraries in our application as development dependencies.

npm i react react-dom webpack webpack-cli webpack-dev-server @babel/core @babel/preset-env @babel/preset-react --save-dev

webpack :- It is a static module bundler for modern JavaScript applications.
webpack-cli :- Webpack can be configured with webpack.config.js. Any parameters sent to the CLI will map to a corresponding parameter in the configuration file.
webpack-dev-server :- Development server that provides live reloading.
@babel/core :- Babel is a toolchain that is mainly used to convert ECMAScript 2015+ code into a backwards compatible version of JavaScript in current and older browsers or environments.
@babel/preset-env or preset-react :- It is a smart preset that allows you to use the latest JavaScript without needing to micromanage which syntax transforms (and optionally, browser polyfills) are needed by your target environment(s).

Structuring Application

Let's now create webpack.config.js inside our application folder. Inside webpack.config.js we are going to give instructions to webpack, and we mainly concentrate on this file throughout the article. The application will be structured as below.

Configuring webpack.config.js

Entry :- An entry point indicates which module webpack should use to begin building out its internal dependency graph.
Output :- The output property tells webpack where to emit the bundles it creates and how to name these files. contenthash will generate a unique id prefix for the bundled file; in our case it will generate as shown below, and for every build a new contenthash will be generated.

Before starting the actual bundling of React code, let's understand webpack loaders.

Loaders

Loaders are transformations that are applied to the source code of a module. They allow you to pre-process files as you import or "load" them. Loaders can transform files from a different language (like TypeScript) to JavaScript or load inline images as data URLs. In our application we are going to use style-loader, css-loader and babel-loader. Let's install them in our application.

npm install style-loader css-loader babel-loader --save-dev

css-loader :- The css-loader interprets @import and url() like import/require() and will resolve them.
style-loader :- It is used to inject CSS into the DOM.
babel-loader :- This package allows transpiling JavaScript files using Babel and webpack.

Let's now add some React code and try to bundle it. For that we are going to create two files, index.js and App.js, inside the src folder, as shown below. Now we need to alter webpack.config.js to bundle: we need to tell the Babel compiler that our code includes React, so whenever it comes across React code it should use babel-loader to compile it. The configuration looks as below. In the above configuration we are telling webpack that whenever it comes across a file ending with .js it should use babel-loader to compile the file, and then babel-loader internally uses presets to transform vendor library syntax, e.g. React.

Plugins

Plugins can be leveraged to perform a wider range of tasks like bundle optimization, asset management and injection of environment variables. In our setup we are going to use html-webpack-plugin.

html-webpack-plugin :- This simplifies creation of HTML files to serve your webpack bundles. This is especially useful for webpack bundles that include a hash in the filename which changes every compilation.

Install the plugin to our application by running the command below.

npm install --save-dev html-webpack-plugin

After installing the plugin we need to tell webpack how it should behave. For that, first we need to create an index.html file inside the public folder. The file looks as below. Let's now configure the webpack.config.js file. This file will look as below after configuring the plugin.

Running Application

To turn our application live we need to tell webpack-dev-server, which we previously installed; let's configure the webpack.config.js file. Add the tag below inside the script tag of the package.json file:

"start": "webpack serve"

The output will be shown as below. Let's now add some CSS to make the app beautiful. Create App.css inside the src folder and add some styles; the file is shown below. The final output will look as below.

GitHub - abhishekn123/React-setup-medium-

Above is the GitHub repository used for the entire article.
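Since the configuration file itself appears in the original article only as screenshots, here is one possible webpack.config.js consistent with the steps described above (entry point, contenthash output, babel-loader with the two presets, the CSS loaders, HtmlWebpackPlugin, and the dev server). The exact paths, port and options are illustrative, not taken from the article:

```javascript
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  // Entry: where webpack starts building the dependency graph.
  entry: './src/index.js',

  // Output: emit bundles with a contenthash so every build gets a
  // unique file name, e.g. main.<hash>.js.
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js',
  },

  module: {
    rules: [
      {
        // Run .js files through babel-loader (preset-env + preset-react).
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: ['@babel/preset-env', '@babel/preset-react'],
          },
        },
      },
      {
        // css-loader resolves @import/url(); style-loader injects into the DOM.
        test: /\.css$/,
        use: ['style-loader', 'css-loader'],
      },
    ],
  },

  plugins: [
    // Generates dist/index.html and wires in the hashed bundle.
    new HtmlWebpackPlugin({ template: './public/index.html' }),
  ],

  // "npm start" -> "webpack serve" uses this block for live reloading.
  devServer: {
    port: 3000,
    open: true,
  },
};
```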
https://abhisheknabhi.medium.com/reactjs-application-setup-from-scratch-7789947d06da
Class::Declare - Declare classes with public, private and protected attributes and methods.

    package My::Class;

    use strict;
    use warnings;
    use base qw( Class::Declare );

    __PACKAGE__->declare( public     => { public_attr     => 42        } ,
                          private    => { private_attr    => 'Foo'     } ,
                          protected  => { protected_attr  => 'Bar'     } ,
                          class      => { class_attr      => [ 3.141 ] } ,
                          static     => { static_attr     => { a => 1 } } ,
                          restricted => { restricted_attr => \'string' } ,
                          friends    => 'main::trustedsub' ,
                          init       => sub { # object initialisation
                                          ...
                                          1;
                                        } ,
                          strict     => 0 );

    sub publicmethod {
        my $self = __PACKAGE__->public( shift );
        ...
    }

    sub privatemethod {
        my $self = __PACKAGE__->private( shift );
        ...
    }

    sub protectedmethod {
        my $self = __PACKAGE__->protected( shift );
        ...
    }

    sub classmethod {
        my $self = __PACKAGE__->class( shift );
        ...
    }

    sub staticmethod {
        my $self = __PACKAGE__->static( shift );
        ...
    }

    sub restrictedmethod {
        my $self = __PACKAGE__->restricted( shift );
        ...
    }

    1;

    ...

    my $obj = My::Class->new( public_attr => 'fish' );

One of Perl's greatest strengths is its flexible object model. You can turn anything (so long as it's a reference, or you can get a reference to it) into an object. This allows coders to choose the most appropriate implementation for each specific need, and still maintain a consistent object oriented approach.

A common paradigm for implementing objects in Perl is to use a blessed hash reference, where the keys of the hash represent attributes of the class. This approach is simple, relatively quick, and trivial to extend, but it's not very secure. Since we return a reference to the hash directly to the user, they can alter hash values without using the class's accessor methods. This allows for coding "short-cuts" which at best reduce the maintainability of the code, and at worst may introduce bugs and inconsistencies not anticipated by the original module author.

On some systems, this may not be too much of a problem.
If the developer base is small, then we can trust the users of our modules to Do The Right Thing. However, as a module's user base increases, or the complexity of the systems our modules are embedded in grows, it may become desirable to control what users can and can't access in our module to guarantee our code's behaviour. A traditional method of indicating that an object's data and methods are for internal use only is to prefix attribute and method names with underscores. However, this still relies on the end user Doing The Right Thing.

Class::Declare provides mechanisms for module developers to explicitly state where and how their class attributes and methods may be accessed, as well as hiding the underlying data store of the objects to prevent unwanted tampering with the data of the objects and classes. This provides a robust framework for developing Perl modules consistent with more strongly-typed object oriented languages, such as Java and C++, where classes provide public, private, and protected interfaces to object and class data and methods.

Class::Declare allows class authors to specify public, private and protected attributes and methods for their classes, giving them control over how their modules may be accessed. The standard object oriented programming concepts of public, private and protected have been implemented for both class and instance (or object) attributes and methods. Attributes and methods belong to either the class or an instance depending on whether they may be invoked via class instances (class and instance methods/attributes), or via classes (class methods/attributes only).

Class::Declare uses the following definitions for public, private and protected:

Public attributes and methods may be accessed by anyone from anywhere. The term public is used by Class::Declare to refer to instance attributes and methods, while the equivalent class attributes and methods are given the term class attributes and methods.
Private attributes and methods may be accessed only by the class defining them and instances of that class. The term private is used to refer to instance methods and attributes, while the term static refers to class attributes and methods that exhibit the same properties.

Protected attributes and methods may only be accessed by the defining class and its instances, and classes and objects derived from the defining class. Protected attributes and methods are used to define the interface for extending a given class (through normal inheritance/derivation). The term protected is used to refer to protected instance methods and attributes, while protected class methods and attributes are referred to as restricted.

Note: since version 0.02, protected class methods and attributes are referred to as restricted, rather than shared. This change was brought about by the introduction of Class::Declare::Attributes and the clash with the existing Perl threading attribute :shared. The term restricted has been chosen to reflect that the use of these methods and attributes is restricted to the family of classes derived from the base class.

The separation of terms for class and instance methods and attributes has been adopted to simplify class declarations. See declare() below. Class attributes are regarded as constant by Class::Declare: once declared they may not be modified. Instance attributes, on the other hand, are specific to each object, and may be modified at run-time.

Internally, Class::Declare uses hashes to represent the attributes of each of its objects, with the hashes remaining local to Class::Declare. To the user, the objects are represented as references to scalars which Class::Declare maps to object hashes in the object accessors. This prevents users from accessing object and class data without using the class's accessors.
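A short sketch of what the scalar-reference representation means in practice, using the class declared in the SYNOPSIS above (assumed to be saved as My/Class.pm). The blessed referent is a scalar, so the hash-poking short-cut that works on plain blessed-hash classes fails here:

    use strict;
    use warnings;

    use My::Class;   # the SYNOPSIS class above, assumed saved as My/Class.pm

    my $obj = My::Class->new( public_attr => 'fish' );

    print $obj->public_attr, "\n";    # 'fish' -- access via the accessor works

    # The blessed referent is a scalar, not a hash, so poking at the object
    # as if it were a blessed hash simply dies:
    eval { my $leak = $obj->{public_attr} };
    print "blocked: $@" if $@;        # e.g. 'Not a HASH reference ...'

This is what "prevents users from accessing object and class data without using the class's accessors" amounts to: there is nothing behind the reference to tamper with.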
The granting of access to attributes and methods is determined by examining the target of the invocation (the first parameter passed to the method, usually represented by $self), as well as the context of the invocation (where was the call made and who made it, determined by examining the caller() stack). This adds an unfortunate but necessary processing overhead for Class::Declare objects for each method and attribute access. While this overhead has been kept as low as possible, it may be desirable to turn it off in a production environment. Class::Declare permits disabling of the access control checks on a per-module basis, which may greatly improve the performance of an application. Refer to the strict parameter of declare() below for more information.

Class::Declare inherits from Exporter, so modules derived from Class::Declare can use the standard symbol export mechanisms. See Exporter for more information.

To define a Class::Declare-derived class, a package must first use Class::Declare and inherit from it (either by adding it to the @ISA array, or through use base). Then Class::Declare::declare() must be called with the new class's name as its first parameter, followed by a list of arguments that actually defines the class. For example:

    package My::Class;

    use strict;
    use warnings;
    use base qw( Class::Declare );

    __PACKAGE__->declare( ... );

    1;

Class::Declare::declare() is a class method of Class::Declare and has the following call syntax and behaviour: declare()'s primary task is to define the attributes of the class and its instances. In addition, it supports options for defining object initialisation code, friend methods and classes, and the application of strict access checking. param may have one of the following values:

public expects either a hash reference of attribute names and default values, an array reference containing attribute names whose default values will be undef, or a single attribute name whose value will default to undef.
These represent the public attributes of this class. Class::Declare constructs accessor methods within the class, with the same name as the attributes. These methods are lvalue methods by default (see also Attribute Modifiers below), which means that the attributes may be assigned to, as well as being set by passing the new value as an accessor's argument. For example:

    package My::Class;

    use strict;
    use warnings;
    use base qw( Class::Declare );

    __PACKAGE__->declare( public => { name => 'John' } );

    1;

    my $obj = My::Class->new;
    print $obj->name . "\n";   # prints 'John'
    $obj->name = 'Fred';       # the 'name' attribute is now 'Fred'
    $obj->name( 'Mary' );      # the 'name' attribute is now 'Mary'

The default value of each attribute is assigned during the object initialisation phase (see init and new() below). Public attributes may be set during the object creation call:

    my $obj = My::Class->new( name => 'Jane' );
    print $obj->name . "\n";   # prints 'Jane'

public attributes are instance attributes and therefore may only be accessed through class instances, and not through the class itself. Note that the declare() call for My::Class from above could have been written as

    __PACKAGE__->declare( public => [ qw( name ) ] );

or

    __PACKAGE__->declare( public => 'name' );

In these cases, the attribute name would have had a default value of undef.

private: As with public above, but the attributes are private (i.e. only accessible from within this class). If access is attempted from outside the defining class, then an error will be reported through die(). Private attributes may not be set in the call to the constructor, and as with public attributes, are instance attributes. See also strict and friends below.

protected: As with private above, but the attributes are protected (i.e. only accessible from within this class, and all classes that inherit from this class). Protected attributes are instance attributes, and they may not be set in the call to the constructor. See also strict and friends below.
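The private access rule above can be seen in a small sketch (My::Secret and its token attribute are hypothetical names, made up for illustration):

    use strict;
    use warnings;

    package My::Secret;
    use base qw( Class::Declare );

    # one private instance attribute (cannot be set via new())
    __PACKAGE__->declare( private => { token => 'xyzzy' } );

    sub reveal {
        my $self = __PACKAGE__->private( shift );
        return $self->token;          # fine: we are inside the defining class
    }

    1;

    package main;

    my $obj = My::Secret->new;
    print $obj->reveal, "\n";         # 'xyzzy' -- accessed via a class method

    eval { $obj->token };             # accessor called from outside the class...
    print "denied: $@" if $@;         # ...so Class::Declare die()s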
class: This declares class attributes in the same manner as public above. class attributes are not restricted to object instances, and may be accessed via the class directly. The accessor methods created by Class::Declare, however, are not lvalue methods, and cannot, therefore, be assigned to. Nor can the values be set through the accessor methods. They behave in the same manner as values declared by use constant (except they must be called as class or instance methods). Class attributes may not be set in the call to the constructor.

static: As with class attributes, except access to static attributes is limited to the defining class and its objects. static attributes are the class-equivalent of private instance attributes. See also friends.

restricted: As with class attributes, except access to restricted attributes is limited to the defining class and all classes that inherit from the defining class, and their respective objects. restricted attributes are the class-equivalent of protected instance attributes. See also friends.

friends: Here you may specify classes and methods that may be granted access to the defining class's private, protected, static and restricted attributes and methods. friends expects either a single value, or a reference to a list of values. These values may either be class names, or fully-qualified method names (i.e. class and method name). When a call is made to a private or protected method or attribute accessor, and a friend has been declared, a check is performed to see if the caller is within a friend package or is a friend method. If so, access is granted. Otherwise, access is denied through a call to die(). Note that friend status may not be inherited. This is to avoid scenarios such as the following:

    package My::Class;

    use strict;
    use warnings;
    use base qw( Class::Declare );

    __PACKAGE__->declare( ...
                          friends => 'My::Trusted::Class' );

    1;

    package My::Trusted::Class;

    ...
    1;

    package Spy::Class;

    use strict;
    use warnings;
    use base qw( My::Trusted::Class );

    sub infiltrate {
        .. do things here to My::Class objects that we shouldn't
    }

    1;

init: This defines the object initialisation code, which is executed as the last phase of object creation by new(). init expects a CODEREF which is called with the first argument being the new object being created by the call to new(). The initialisation routine is expected to return a true value to indicate success. A false value will cause new() to die() with an error. The initialisation routines are invoked during object creation by new(), after default and constructor attribute values have been assigned. If the inheritance tree of a class contains multiple init methods, then these will be executed in reverse @ISA order to ensure the primary base-class of the new class has the final say on object initialisation (i.e. the class left-most in the @ISA array will have its init routine executed last). If a class appears multiple times in an @ISA array, either through repetition or inheritance, then it will only be executed once, and as early in the init execution chain as possible. Class::Declare uses a CODEREF rather than specifying a default initialisation subroutine (e.g. sub INIT { ... }) to avoid unnecessary pollution of class namespaces. There is generally no need for initialisation routines to be accessible outside of new().

strict: If strict is set to true, then Class::Declare will define class(), static(), restricted(), public(), private(), and protected() methods (see "Class Methods" and "Object Methods" below) within the current package that enforce the class/static/restricted/public/private/protected relationships in method calls. If strict is set to false and defined (e.g. 0, not undef), then Class::Declare will convert the above method calls to no-ops, and no invocation checking will be performed. Note that this conversion is performed for this class only.
By setting strict to undef (or omitting it from the call to declare() altogether), Class::Declare will not create these methods in the current package, but will rather let them be inherited from the parent class. In this instance, if the parent's methods are no-ops, then the child class will inherit no-ops. Note that the public(), private(), etc methods from Class::Declare enforce the public/private/etc relationships. One possible use of this feature is as follows:

    package My::Class;

    use strict;
    use warnings;
    use base qw( Class::Declare );

    __PACKAGE__->declare( public    => ... ,
                          private   => ... ,
                          protected => ... ,
                          strict    => $ENV{ USE_STRICT } );

    ...

    1;

Here, during development and testing the environment variable USE_STRICT may be left undefined, or set to true to help ensure correctness of the code, but then set to false (e.g. 0) in production to avoid the additional computational overhead.

Setting strict to false does not interfere with the friends() method (see below). Turning strict access checking off simply stops the checks from being performed and does not change the logic of whether a class or method has been declared as a friend of a given class.

Note: If any of the above rules are violated, then declare() will raise an error with die().

Once a Class::Declare-derived class has been declared, instances of that class may be created through the new() method supplied by Class::Declare. new() may be called either as a class or an instance method. If called as a class method, a new instance will be created, using the class's default attribute values as the default values for this instance. If new() is called as an instance method, the default attribute values for the new instance will be taken from the invoking instance. This may be used to clone Class::Declare-derived objects. Class::Declare::new() has the following call syntax and behaviour: new() creates instances of Class::Declare objects.
If a problem occurs during the creation of an object, such as the failure of an object initialisation routine, then new() will raise an error through die(). When called as a class method, new() will create new instances of the specified class, using the class's default attribute values. If it's called as an instance method, then new() will clone the invoking object. new() accepts named parameters as arguments, where param corresponds to a public attribute of the class of the object being created. If an unknown attribute name, or a non-public attribute name is specified, then new() will die() with an error. Public attribute values specified in the call to new() are assigned after the creation of the object, to permit over-riding of default values (either class-default attributes or attributes cloned from the invoking object). If the calling class, or any of its base classes, has an object initialisation routine defined (specified by the init parameter of declare()), then these routines will be invoked in reverse @ISA order, once the object's attribute values have been set. An initialisation routine may only be called once per class per object, so if a class appears multiple times in the @ISA array of the new object's class, then the base class's initialisation routine will be called as early in the initialisation chain as possible, and only once (i.e. as a result of the right-most occurrence of the base class in the @ISA array). The initialisation routines should return a true value to indicate success. If any of the routines fail (i.e. return a false value), then new() will die() with an error. When a new instance is created, instance attributes (i.e. public, private and protected attributes) are cloned, so that the new instance has a copy of the default values. For values that are not references, this amounts to simply copying the value through assignment. 
For values that are references, Storable::dclone() is used to ensure each instance has its own copy of the referenced data structure (the structures are local to each instance). However, if an instance attribute value is a CODEREF, then new() simply copies the reference to the new object, since CODEREFs cannot be cloned. Class attributes are not cloned as they are assumed to be constant across all object instances.

Class::Declare provides the following class methods for implementing class, static and restricted access control in class methods. These methods may be called either through a Class::Declare-derived class, or an instance of such a class. Note that a class method is a public class method, a static method is a private class method, and a restricted method is a protected class method.

class(): Ensure a method is called as a class method of this package via the target.

    sub myclasssub {
        my $self = __PACKAGE__->class( shift );
        ...
    }

A class method may be called from anywhere, and target must inherit from this class (either an object or instance). If class() is not invoked in this manner, then class() will die() with an error. See also the strict parameter for declare() above.

static(): Ensure a method is called as a static method of this package via target.

    sub mystaticsub {
        my $self = __PACKAGE__->static( shift );
        ...
    }

A static method may only be called from within the defining class, and target must inherit from this class (either an object or instance). If static() is not invoked in this manner, then static() will die() with an error. See also the strict and friends parameters for declare() above.

restricted(): Ensure a method is called as a restricted method of this package via target.

    sub myrestrictedsub {
        my $self = __PACKAGE__->restricted( shift );
        ...
    }

A restricted method may only be called from within the defining class or a class that inherits from the defining class, and target must inherit from this class (either an object or instance).
If restricted() is not invoked in this manner, then restricted() will die() with an error. See also the strict and friends parameters for declare() above. Note: restricted() was called shared() in the first release of Class::Declare. However, with the advent of Class::Declare::Attributes, there was a clash between the use of :shared as an attribute by Class::Declare::Attributes, and the Perl use of :shared attributes for threading.

Class::Declare provides the following instance methods for implementing public, private and protected access control in instance methods. These methods may only be called through a Class::Declare-derived instance.

public(): Ensure a method is called as a public method of this class via target.

    sub mypublicsub {
        my $self = __PACKAGE__->public( shift );
        ...
    }

A public method may be called from anywhere, and target must be an object that inherits from this class. If public() is not invoked in this manner, then public() will die() with an error. See also the strict parameter for declare() above.

private(): Ensure a method is called as a private method of this class via target.

    sub myprivatesub {
        my $self = __PACKAGE__->private( shift );
        ...
    }

A private method may only be called from within the defining class, and target must be an instance that inherits from this class. If private() is not invoked in this manner, then private() will die() with an error. See also the strict and friends parameters for declare() above.

protected(): Ensure a method is called as a protected method of this class via target.

    sub myprotectedsub {
        my $self = __PACKAGE__->protected( shift );
        ...
    }

A protected method may only be called from within the defining class or a class that inherits from the defining class, and target must be an instance that inherits from this class. If protected() is not invoked in this manner, then protected() will die() with an error. See also the strict and friends parameters for declare() above.

Object destruction is handled via the normal Perl DESTROY() method.
Class::Declare implements a DESTROY() method that performs clean-up and house keeping, so it is important that any class derived from Class::Declare that requires a DESTROY() method ensures that it invokes its parent's DESTROY() method, using a paradigm similar to the following:

    sub DESTROY {
        my $self = __PACKAGE__->public( shift );

        ... do local clean-up here ...

        # call the parent clean-up
        $self->SUPER::DESTROY( @_ );
    } # DESTROY()

By default Class::Declare class attributes (class, static, and restricted) are read-only, while instance attributes (public, private, and protected) are read-write. Class::Declare provides two attribute modifiers, rw and ro, for changing this behaviour, allowing class attributes to be read-write, and instance attributes to be read-only. The modifiers may be imported separately,

    use Class::Declare qw( :read-only );

or

    use Class::Declare qw( ro );

or

    use Class::Declare qw( :read-write );

or

    use Class::Declare qw( rw );

or collectively, using the :modifiers tag.

    use Class::Declare qw( :modifiers );

To use the modifiers, they must be incorporated into the attribute definition for the class. For example:

    package My::Class;

    use strict;
    use Class::Declare qw( :modifiers );
    use vars qw( @ISA );
    @ISA = qw( Class::Declare );

    __PACKAGE__->declare( class  => { my_class  => rw undef } ,
                          public => { my_public => ro 1234  } );

Here, the attribute my_class has been declared read-write by rw, permitting its value to be changed at run time. The public attribute my_public has been declared read-only by ro, preventing it from being changed once set. Please note that although they may be marked as read-only, public attributes may still be set during object creation (i.e. in the call to new()). However, once set, the value may not be changed.

rw: Declare a class attribute to be read-write, instead of defaulting to read-only. Note that this has no effect on instance attributes as they are read-write by default.
ro: Declare an instance attribute to be read-only, instead of defaulting to read-write. Note that this has no effect on class attributes as they are read-only by default.

Class::Declare objects may be serialised (and therefore cloned) by using Storable. Class::Declare uses Storable::dclone() itself during object creation to copy instance attribute values. However, Storable is unable to serialise CODEREFs, and attempts to do so will fail. This causes the failure of serialisation of Class::Declare objects that have CODEREFs as attribute values. However, for cloning, Class::Declare avoids this problem by simply copying CODEREFs from the original object to the clone.

The following methods are class methods of Class::Declare provided to simplify the creation of classes. They are provided as convenience methods, and may be called as either class or instance methods.

friend(): Returns true if the calling class or method is a friend of the given class or object. That is, for a given object or class, friend() will return true if it is called within the context of a class or method that has been granted friend status by the object or class (see friends in declare() above). A friend may access private, protected, static and restricted methods and attributes of a class and its instances, but not of derived classes. friend() will return true for a given class or object if called within that class. That is, a class is always its own friend. In all other circumstances, friend() will return false.

    package Class::A;

    my $object = Class::B->new;

    sub somesub {
        ...
        $object->private_method if ( $object->friend );
        ...
    }

dump(): Generate a textual representation of an object or class. Since Class::Declare objects are represented as references to scalars, Data::Dumper is unable to generate a meaningful dump of Class::Declare-derived objects. dump() pretty-prints objects, showing their attributes and their values.
dump() obeys the access control imposed by Class::Declare on its objects and classes, limiting its output to attributes a caller has been granted access to see or use. dump() will always observe the access control mechanisms as specified by Class::Declare::class(), Class::Declare::private(), etc, and its behaviour is not altered by the setting of strict in declare() to be false (see declare() above). This is because strict is designed as a mechanism to accelerate the execution of Class::Declare-derived modules, not circumvent the intended access restrictions of those modules. dump() accepts the following optional named parameters:

all: If all is true (the default value), and none of the attribute/method type parameters (e.g. public, static, etc) have been set, then dump() will display all attributes the caller has access to. If any of the attribute type parameters have been set to true, then all will be ignored, and only those attribute types specified in the call to dump() will be displayed.

class: If class is true, then dump() will display only class attributes of the invocant and their values, and all other types of attributes explicitly requested in the call to dump() (the all parameter is ignored). If the caller doesn't have access to class methods, then dump() will die() with an error. If no class attributes exist, and no other attributes have been requested, then undef is returned.

static: As with class, but displaying static attributes and their values.

restricted: As with class, but displaying restricted attributes and their values.

public: As with class, but displaying public attributes and their values. Note that public attributes can only be displayed for class instances. Requesting the dump() of public attributes of a class will result in dump() die()ing with an error.

private: As with public, but displaying private attributes and their values.

protected: As with public, but displaying protected attributes and their values.
friends: If friends is true, then dump() will display the list of friends of the invoking class or object.

depth: By default, dump() operates recursively, creating a dump of all requested attribute values, and their attribute values (if they themselves are objects). If depth is set, then dump() will limit its output to the given recursive depth. A depth of 0 will display the target's attributes, but will not expand those attribute values.

indent: indent specifies the indentation used in the output of dump(), and defaults to 4 spaces.

If an attribute type parameter, such as static or private, is set in the call to dump() then this only has effect on the target object of the dump() call, and not any subsequent recursive calls to dump() used to display nested objects. The code to implement dump() is quite long, and so has been split into a separate module, Class::Declare::Dump. The first time dump() is called on a Class::Declare-derived object or class, Class::Declare::Dump is loaded, and the dump generated. If the loading of Class::Declare::Dump fails, a warning is given, and dump() returns the stringification of the given class or instance.

arguments(): A class helper method for handling named argument lists. In Perl, named argument lists are supported by coercing a list into a hash by assuming a key/value pairing. For example, named arguments may be implemented as

    sub mysub {
        my %args = @_;
        ...
    }

and called as

    mysub( name => 'John' , age => 34 );

%args is now the hash with keys name and age and corresponding values 'John' and 34 respectively. So if named arguments are so easy to implement, why go to the trouble of calling arguments()? To make your code more robust. The above example failed to test whether there was an even number of elements in the argument list (needed to flatten the list into a hash), and it made no checks to ensure the supplied arguments were expected. Does mysub() really want a name and age, or does it want some other piece of information?
arguments() ensures the argument list can be safely flattened into a hash, and raises an error indicating the point at which the original method was called if it can't. Also, it ensures the arguments passed in are those expected by the method. Note that this does not check the argument values themselves, but merely ensures unknown named arguments are flagged as errors. arguments() also enables you to define default values for your arguments. These values will be assigned when a named argument is not supplied in the list of arguments. The calling convention of arguments() is as follows (note, we assume here that the method is in a Class::Declare-derived class):

    sub mysub {
        ...
        my %args = $self->arguments( \@_ => { name => 'Guest user' ,
                                              age  => undef } );
        ...
    }

Here, mysub() will accept two arguments, name and age, where the default value for name is 'Guest user', while age defaults to undef. Alternatively, arguments() may be called in either of the following ways:

    my %args = $self->arguments( \@_ => [ qw( name age ) ] );

or

    my %args = $self->arguments( \@_ => 'name' );

Here, the default argument values are undef, and in the second example, only the single argument name will be recognized. If default is not given (or is undef), then arguments() will simply flatten the argument list into a hash and assume that all named arguments are valid. If default is the empty hash (i.e. {}), then no named arguments will be accepted. If called in a list context, arguments() returns the argument hash, while if called in a scalar context, arguments() will return a reference to the hash. arguments() may be called as either a class or instance method.

REVISION(): Extract the revision number from CVS revision strings. REVISION() looks for the package variable $REVISION for a valid CVS revision string, and if found, will return the revision number from the string. If $REVISION is not defined, or does not contain a CVS revision string, then REVISION() returns undef.
    package My::Class;

    use strict;
    use base qw( Class::Declare );
    use vars qw( $REVISION );
            $REVISION = '$Revision: 1.49 $';

    ...

    1;

    print My::Class->REVISION;   # prints the revision number

VERSION(): Replacement for UNIVERSAL::VERSION() that falls back to REVISION() to report the CVS revision number as the version number if the package variable $VERSION is not defined.

If this class directly implements the given method(), then return a reference to this method. Otherwise, return false. This is similar to UNIVERSAL::can(), which will return a reference if this class either directly implements method(), or inherits it.

Class::Declare has been designed to be thread-safe, and as such is suitable for such environments as mod_perl. However, it has not been proven to be thread-safe. If you are coding in a threaded environment, and experience problems with Class::Declare's behaviour, please let me know.

The name. I don't really like Class::Declare as a name, but I can't think of anything more appropriate. I guess it really doesn't matter too much. Suggestions welcome. Apart from the name, Class::Declare has no known bugs. That is not to say the bugs don't exist, rather that they haven't been found. The testing for this module has been quite extensive (there are over 2500 test cases in the module's test suite), but patches are always welcome if you discover any problems.

SEE ALSO: Class::Declare::Dump, Class::Declare::Attributes, Exporter, Storable, perlboot, perltoot.

AUTHOR: Ian Brayshaw, <ian@onemore.org>

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~ibb/Class-Declare-0.05/Declare.pm
crawl-002
refinedweb
5,297
61.97
Neo4j:

    LOAD CSV FROM "" AS row FIELDTERMINATOR "\t"
    return COUNT(*)

    At java.io.InputStreamReader@4d307fda:6484 there's a field starting with a quote and whereas it ends that quote there seems to be character in that field after that ending quote. That isn't supported. This is what I read: 'weird al"'

This blows up because (as the message says) we've got a field which uses double quotes but then has other characters either side of the quotes. A quick search through the file reveals one of the troublesome lines:

    $ grep "\"weird" lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv | head -n 1

I ran a file containing only that line through CSV Lint to see what it thought and indeed it is invalid:

![CSV Lint validation result](/uploads/2015/05/2015-05-04_10-50-43.png)

Let's clean up our file to use single quotes instead of double quotes and try the query again:

    $ tr "\"" "'" < lastfm-dataset-360K/usersha1-artmbid-artname-plays.tsv > lastfm-dataset-360K/clean.tsv

    LOAD CSV FROM "" as row FIELDTERMINATOR "\t"
    return COUNT(*)

    17559530

And we're back in business! Interestingly, Python's CSV reader chooses to strip out the double quotes rather than throw an exception:

    import csv
    with open("smallWeird.tsv", "r") as file:
        reader = csv.reader(file, delimiter="\t")
        for row in reader:
            print row

    $ python explore.py
    [

I prefer LOAD CSV's approach but it's an interesting trade off I hadn't considered before.
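The trade-off above is easy to reproduce with Python's `csv` module. By default (`strict=False`) the reader silently absorbs a stray character after a closing quote; passing `strict=True` makes it behave like LOAD CSV and raise an error instead. (The field values below are made up for illustration, not taken from the last.fm dataset.)

```python
import csv
import io

# A tab-separated line whose third field has a stray character after the
# closing double quote -- the shape of record that LOAD CSV rejects.
line = 'user1\tartist-id\t"weird al"x\t42\n'

# Default (non-strict) reader: the quotes are consumed and the stray 'x'
# is appended to the field, with no complaint.
row = next(csv.reader(io.StringIO(line), delimiter="\t"))
print(row)  # ['user1', 'artist-id', 'weird alx', '42']

# strict=True mirrors LOAD CSV's behaviour and raises csv.Error instead.
try:
    next(csv.reader(io.StringIO(line), delimiter="\t", strict=True))
except csv.Error as exc:
    print("rejected:", exc)
```

So Python can be made to take either side of the trade-off; the difference with LOAD CSV is simply which behaviour is the default.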
https://markhneedham.com/blog/2015/05/04/neo4j-load-csv-java-io-inputstreamreader-theres-a-field-starting-with-a-quote-and-whereas-it-ends-that-quote-there-seems-to-be-character-in-that-field-after-that-ending-quote-that-isnt-suppor/
CC-MAIN-2020-24
refinedweb
253
72.56
CPP Project for class 12 – Address book contains all the major suggestions recommended by the CBSE for the partial fulfillment of a Computer Science project in a CBSE school's Class 12 final examination. The purpose behind a Computer project in Class 12 is to consolidate the concepts and practices imparted during the course and to serve as a record of competence. As per the CBSE guideline for Computer projects, every project must cover the following areas - Flow of control - Data Structures - Object Oriented Programming concepts in C++ - Data File handling through C++ and the project listed here contains all these parts, so rest assured this C++ project for class 12 is a perfect project as per the CBSE guideline. AND this C++ project for class 12 is not listed anywhere else on the internet. How to run CPP Project for Class 12 - Address Book Since the project was developed using Dev C++, you are requested to download Dev C++ from its official website and install it on your local system. - Open the downloaded project in MS Word and copy the whole project. - Open a new file in Dev C++ - Paste the source code - Save it as AddressBook.cpp - Compile it and enjoy The software is not compatible with Turbo C++. If you want to run this software on Turbo C++, you are required to make these changes to the source: - Add .h after each header file, like <iostream.h>, as Turbo C++ does not support <iostream> - Replace all occurrences of system("cls") with clrscr( ). Do not forget to add the conio.h header file - Remove "using namespace std" from the header of this file. - Now save this file, compile it and enjoy. In case you need the documentation for this CPP project, you can refer to our other project for the same that contains all the pages required for the submission of a Computer Science project for class 12. In case you are not able to perform the above mentioned tasks on this file, feel free to contact us.
http://cbsetoday.com/cbse/cpp-project-class-12-address-book/
CC-MAIN-2017-51
refinedweb
336
67.38
Templating Basics Templates are the home for what the user sees, like forms, buttons, links, and headings. In this section of the Guides, you will learn about where to write HTML markup, plus how to add interaction, dynamically changing content, styling, and more. If you want to learn in a step-by-step way, you should begin your journey in the Tutorial instead. Writing plain HTML Templates in Ember have some superpowers, but let's start with regular HTML. For any file in an Ember app that has an extension ending in .hbs, you can write HTML markup in it as if it was an .html file. HTML is the language that browsers understand for laying out content on a web page. .hbs stands for Handlebars, the name of a tool that lets you write more than just HTML. For example, every Ember app has a file called application.hbs. You can write regular HTML markup there or in any other hbs file: <h1>Starting simple</h1> <p> This is regular html markup inside an hbs file </p> When you start an app with ember serve, your templates are compiled down to something that Ember's rendering engine can process more easily. The compiler helps you catch some errors, such as forgetting to close a tag or missing a quotation mark. Reading the error message on the page or in your browser's developer console will get you back on track. Types of templates There are two main types of templates: Route templates and Component templates. A Route template determines what is shown when someone visits a particular URL, like. A Component template has bits of content that can be reused in multiple places throughout the app, like buttons or forms. If you look at an existing app, you will see templates in many different places in the app folder structure! This is to help the app stay organized as it grows from one template to one hundred templates. The best way to tell if a template is part of a Route or Component is to look at the file path. 
Making new templates New templates should be made using Ember CLI commands. The CLI helps ensure that the new files go in the right place in the app folder structure, and that they follow the essential file naming conventions. For example, either of these commands will generate .hbs template files (and other things!) in your app: ember generate component my-component-name ember generate route my-route-name Template restrictions A typical, modern web app is made of dozens of files that have to all be combined together into something the browser can understand. Ember does this work for you with zero configuration, but as a result, there are some rules to follow when it comes to adding assets into your HTML. You cannot use script tags directly within a template, and should use actions or Component Lifecycle Hooks to make your app responsive to user interactions and new data. If you are working with a non-Ember JavaScript library and need to use a js file from it, see the Guide section Addons and Dependencies. You should not add links to your own local CSS files within the hbs file. Style rules should go in the app/styles directory instead. app/styles/app.css is included in your app's build by default. For CSS files within the styles directory, you can create multiple stylesheets and use regular CSS APIs like import to link them together. If you want to incorporate CSS from an npm package or similar, see Addons and Dependencies for instructions. To load styles through a CDN, read the next section below. What is index.html for? If HTML markup goes in hbs templates, what is index.html for? The index.html file is the entry point for an app. It is not a template, but rather it is where all the templates, stylesheets, and JavaScript come together into something the browser can understand. When you are first getting started in Ember, you will not need to make any changes to index.html. 
There's no need to add any links to other Ember app pages, stylesheets, or scripts in here by hand, since Ember's built-in tools do the work for you. A common customization developers make to index.html is adding a link to a CDN that loads assets like fonts and stylesheets. Here's an example: <link integrity="" rel="stylesheet" href=""> Understanding a Template's context A template only has access to the data it has been given. This is referred to as the template's "context." For example, to display a property inside a Component's template, it should be defined in the Component's JavaScript file: import Component from '@ember/component'; export default Component.extend({ firstName: 'Trek', lastName: 'Glowacki', favoriteFramework: 'Ember' }); Properties like firstName can be used in the template by putting them inside of curly braces, plus the word this: Hello, <strong>{{this.firstName}} {{this.lastName}}</strong>! Together, these render with the following HTML: Hello, <strong>Trek Glowacki</strong>! Things you might see in a template A lot more than just HTML markup can go in templates. In the other pages of this guide, we will cover the features one at a time. In general, special Ember functionality will appear inside curly braces, like this: {{example}}. Here are a few examples of Ember Handlebars in action: Route example: <!-- outlet determines where a child route's content should render. Don't delete it until you know more about it! --> <div> {{outlet}} </div> <!-- One way to use a component within a template --> <MyComponent /> {{! Example of a comment that will be invisible, even if it contains things in {{curlyBraces}} }} Component example: <!-- A property that is defined in a component's JavaScript file --> {{this.numberOfSquirrels}} <!-- Some data passed down from a parent component or controller --> {{@weatherStatus}} <!-- This button uses Ember Actions to make it interactive. A method named `plantATree` is called when the button is clicked. 
`plantATree` comes from the JavaScript file associated with the template, like a Component or Controller --> <button onclick={{action 'plantATree'}}> More trees! </button> <!-- Here's an example of template logic in action. If the `this.skyIsBlue` property is `true`, the text inside will be shown --> {{#if this.skyIsBlue}} If the skyIsBlue property is true, show this message {{/if}} <!-- You can pass a whole block of markup and handlebars content from one component to another. yield is where the block shows up when the page is rendered --> {{yield}} Lastly, it's important to know that arguments can be passed from one Component to another through templates: <MyComponent @favoriteFramework={{this.favoriteFramework}} /> To pass in arguments associated with a Route, define the property from within a Controller. Learn more about passing data between templates here. Helper functions Ember Helpers are a way to use JavaScript logic in your templates. For example, you could write a Helper function that capitalizes a word, does some math, converts a currency, or more. A Helper takes in two types of arguments, positional (an array of the positional values passed in the template) or named (an object of the named values passed in the template), which are passed into the function, and should return a value. Ember gives you the ability to write your own helpers, and comes with some helpers built-in. For example, let's say you would like the ability to add two numbers together. Define a function in app/helpers/sum.js to create a sum helper: import { helper as buildHelper } from '@ember/component/helper'; export function sum(params) { return params[0] + params[1] }; export const helper = buildHelper(sum); Now you can use the sum() function as {{sum}} in your templates: <p>Total: {{sum 1 2}}</p> The user will see a value of 3 rendered in the template! Ember ships with several built-in helpers, which you will learn more about in the following guides. 
Nested Helpers Sometimes, you might see helpers invoked by placing them inside parentheses, (). This means that a Helper is being used inside of another Helper or Component. This is referred to as a "nested" Helper invocation. Parentheses must be used because curly braces {{}} cannot be nested. For example: {{sum (multiply 2 4) 2}}. Many of Ember's built-in helpers (as well as your custom helpers) can be used in nested form.
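To make the evaluation order concrete, here is a plain-JavaScript sketch of the two helpers behind {{sum (multiply 2 4) 2}}; the multiply helper is hypothetical, written the same way as the sum helper shown earlier:

```javascript
// Pure functions behind the nested invocation {{sum (multiply 2 4) 2}}.
// Ember passes a helper's positional arguments as an array; `multiply`
// is a made-up helper for illustration, not one that ships with Ember.
function sum(params) {
  return params.reduce((a, b) => a + b, 0);
}

function multiply(params) {
  return params.reduce((a, b) => a * b, 1);
}

// The inner (parenthesised) helper runs first, and its result is
// passed to the outer helper as a positional argument:
const result = sum([multiply([2, 4]), 2]);
console.log(result); // 10
```

The nesting simply expresses ordinary function composition: the parenthesised expression is evaluated, then substituted into the outer helper's argument list.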
https://guides.emberjs.com/release/templates/
CC-MAIN-2019-39
refinedweb
1,395
63.29
Hi,

Any plans for the 1.0 release?

Regards,
Mike

Schoenborn, Oliver wrote:
...
> When the user changes something in the view, say data in a text control box, the
> view commits the data to a leaf in the hierarchy of the document and
> generates a message for '(doc, changed, fromView)' topic, in case 'someone'
> is interested in the fact that document has been changed. Similarly, the
> import reader generates a message for topic '(doc, changed, fromImport)'
> when it imports some external data into some leaves of an existing document.

Okay, so the approach I outlined should work with PyDispatcher. I'll put it on the (seemingly infinitely long) list of things to play with some day. Basically it would look something like this:

def sendHier( signal=Any, sender=Anonymous, *arguments, **named ):
    if isinstance( signal, HierMessage ):
        responses = []
        while signal:
            responses.extend( sendExact( signal, sender, *arguments, **named ) )
            signal = signal[:-1]
        responses.extend( sendExact( Any, sender, *arguments, **named ) )
        return responses
    else:
        return send( signal, sender, *arguments, **named )

I'd have to take some time to write up test cases and check that the last sendExact would do the proper thing, but I think that basically gives all the features you've described. Have fun, Mike

_______________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://sourceforge.net/p/pydispatcher/mailman/pydispatcher-devel/?viewmonth=200404
CC-MAIN-2015-14
refinedweb
242
61.87
Python: How to check if keys exist and retrieve a value from a dictionary in descending priority

I have a dictionary and I would like to get some values from it based on some keys. For example, I have a dictionary for users with their first name, last name, username, address, age and so on. Let's say I only want to get one value (name) - either last name or first name or username - but in descending priority as shown below:

(1) last name: if key exists, get value and stop checking. If not, move to next key.
(2) first name: if key exists, get value and stop checking. If not, move to next key.
(3) username: if key exists, get value or return null/empty

#my dict looks something like this
myDict = {'age': ['value'], 'address': ['value1, value2'], 'firstName': ['value'], 'lastName': ['']}

#List of keys I want to check in descending priority: lastName > firstName > userName
keySet = ['lastName', 'firstName', 'userName']

What I tried doing is to get all the possible values and put them into a list so I can retrieve the first element in the list. Obviously it didn't work out:

tempList = []
for key in keys:
    get_value = myDict.get(key)
    tempList.append(get_value)

Is there a better way to do this without using an if-else block?

Answers

One option if the number of keys is small is to use chained gets:

value = myDict.get('lastName', myDict.get('firstName', myDict.get('userName')))

But if you have keySet defined, this might be clearer:

value = None
for key in keySet:
    if key in myDict:
        value = myDict[key]
        break

The chained gets do not short-circuit, so all keys will be checked but only one used. If you have enough possible keys that that matters, use the for loop.

Use .get(), which returns None if the key is not found:

for i in keySet:
    temp = myDict.get(i)
    if temp is not None:
        print temp
        break

You can use myDict.has_key(keyname) as well to validate if the key exists.

Edit based on the comments - This would work only on versions lower than 3.1. 
has_key has been removed from Python 3.1. You should use the in operator if you are using Python 3.1.

If we encapsulate that in a function we could use recursion and state the purpose clearly by naming the function properly (not sure if getAny is actually a good name):

def getAny(dic, keys, default=None):
    return (keys or default) and dic.get(keys[0], getAny(dic, keys[1:], default=default))

or even better, without recursion and more clearly:

def getAny(dic, keys, default=None):
    for k in keys:
        if k in dic:
            return dic[k]
    return default

Then that could be used in a way similar to the dict.get method, like:

getAny(myDict, keySet)

and even have a default result in case no keys are found at all:

getAny(myDict, keySet, "not found")
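Another compact option, not from the original answers but a sketch along the same lines, uses a generator expression with next(), which stops at the first key that is present:

```python
# First matching key wins; the second argument to next() is the
# default returned when none of the keys are present.
my_dict = {'age': ['value'], 'address': ['value1, value2'],
           'firstName': ['value'], 'lastName': ['']}
key_set = ['lastName', 'firstName', 'userName']

value = next((my_dict[k] for k in key_set if k in my_dict), None)
print(value)  # [''] -- 'lastName' is present, so its value is returned
```

Unlike the chained .get() calls, this short-circuits: later keys are never looked up once a match is found.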
http://www.brokencontrollers.com/faq/24723060.shtml
CC-MAIN-2019-51
refinedweb
553
68.3
I am trying to set some obscure namespaces in folders in a big project, but for some reason PhpStorm keeps returning to the default settings on restart. What might be causing this and how do I put a stop to it? The project has a main folder, which is marked as "Content root", then it has the dot marked as being a Source Folder. It also has some vendors which are excluded. The directories I want to mark as being sources with a specific namespace prefix live inside some subfolders in a "custom" folder. When I go to "Directories" and mark some of them as Source and then give them a specific prefix, it all seems to work. I can generate new classes and they get the right namespace. But then when I exit PhpStorm and restart it, the settings are removed again and I'm left only with the original settings again. Things I have checked/tried: - They seem to get saved to the settings properly; I can see the results in the .iml file in the project folder. They remain there until I reload PhpStorm, then the file gets updated and my changes are gone. - I am not using a Settings Repository or Startup tasks I am running PhpStorm 2017.3 on Ubuntu 17.10 Hi there, That looks like IDE is synchronising these settings with your composer.json. This happens on project opening or when that file change is detected. "Settings/Preferences | Languages & Frameworks | PHP | Composer" -- uncheck appropriate option there ("Synchronize IDE Settings with composer.json"). Alternatively -- just bump your PHP version in composer.json if you will be using that PHP version as minimal required version for sure. P.S. 2017.3 should have been showing you the popup box offering to sync such settings (did for me). Later versions (2017.3.2 or so) may do this automatically and offer to revert the changes (not sure if implemented already or still in the works -- have not created any new projects recently). P.P.S. Actual ticket to watch after: Thank you Andrly, that fixed it. Thanks Andrly, works for me! 
Thanks, works for me! Oh, thanks… I was going to throw PhpStorm through the window because of this 🤣 Thank you for answering this Andriy Bazanov. I was ready to throw PhpStorm through the window as well.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/115000794610-When-marking-sources-PhpStorm-automatically-returns-to-previous-settings-on-restart?page=1
CC-MAIN-2019-09
refinedweb
391
73.68
A lot has happened in 2017, and it can be a bit overwhelming to think about. We all like to joke about how quickly things change in frontend engineering, and for the last few years that has probably been true. At the risk of sounding cliché, I'm here to tell you that this time it's different. Frontend trends are starting to stabilize — popular libraries have largely gotten more popular instead of being disrupted by competitors — and web development is starting to look pretty awesome. In this post, I'll summarize some of the important things that happened this year in the frontend ecosystem with an eye toward big-picture trends. Crunching the numbers It's hard to tell when something is the next big thing, especially when you're still on the last big thing. Getting accurate usage data for open source tools is tricky. Typically we look in a few places: - GitHub star counts are loosely correlated with a library's popularity, but people often star libraries that look interesting and then never return. - Google Trends is helpful for seeing trends at a rough level, but doesn't offer data with enough granularity to accurately compare a particular set of tools. - Stack Overflow question volume is more of an indication of how confused people are using a technology rather than how popular it is. - NPM download stats are the most accurate measure of how many people are actually using a particular library. Even these might not be totally accurate since this number also includes automatic downloads like those done in continuous integration tools. - Surveys like the State of JavaScript 2017 are helpful for seeing trends among a large sample size (20,000 developers). Frameworks React React 16 was released in September, bringing with it a complete rewrite of the internal core architecture without any major API changes. 
The new version offers improved error handling with the introduction of error boundaries as well as support for rendering a subsection of the render tree onto another DOM node. The React team chose to rewrite the core architecture in order to support asynchronous rendering in a future release, something that was impossible with the previous architecture. With async rendering, React would avoid blocking the main thread when rendering heavy applications. The plan is to offer it as an opt-in feature in a future minor release of React 16, so you can expect it some time in 2018. React also switched to an MIT license after a period of controversy over the previous BSD license. It was widely believed that the patent clause was too restrictive, causing many teams to consider switching to an alternative JavaScript view framework. However, it has been argued that the controversy was unfounded, and that the new license actually leaves React users less protected than before. Angular After eight beta releases and six release candidates, Angular 4 was released in March. The key feature in this release is ahead of time compilation — views are now compiled at build-time instead of render time. This means that Angular apps no longer need to ship with a compiler for application views, reducing the bundle size significantly. This release also improves support for server-side rendering and adds many small "quality of life" improvements to the Angular template language. Over the course of 2017, Angular has continued to lose ground compared to React. Although Angular 4 has been a popular release, it is now even farther away from the top spot than it was at the beginning of the year. Source: npmtrends.com Vue.js 2017 has been a great year for Vue, allowing it to take its place as a premier frontend view framework alongside React and Angular. It has become popular because of its simple API and comprehensive suite of companion frameworks.
Since it has a template language similar to Angular and the component philosophy of React, Vue is often seen as occupying a sort of “middle ground” between the two options. There has been an explosion of growth in Vue in the last year. This has generated a considerable amount of press and dozens of popular UI libraries and boilerplate projects. Large companies have started to adopt Vue — Expedia, Nintendo, GitLab among many others. At the beginning of the year, Vue had 37k stars on Github and 52k downloads on NPM per week. By the middle of December, it had 76k stars on Github and 266k downloads per week, twice as many stars and five times as many downloads. This still pales in comparison to React, which had 1.6 million downloads per week in the middle of December according to NPM Stats. One can expect Vue to continue its rapid growth and perhaps become one of the top two frameworks in 2018. TL;DR: React has won for now but Angular is still kicking. Meanwhile, Vue is surging in popularity. ECMAScript The 2017 edition of the ECMAScript specification underlying Javascript was released in June after an exhaustive proposal process concluded with several groundbreaking features such as asynchronous functions and shared memory and atomic operations. Async functions allow for writing clear and concise asynchronous Javascript code. They are now supported in all major browsers. NodeJS added support in v7.6.0 after they upgraded to V8 5.5, which was released in late 2016 and also brought significant performance and memory improvements. Shared Memory and atomic operations are a hugely significant feature that hasn’t gotten a lot of notice. Shared Memory is implemented using the SharedArrayBuffer construct, which allows web workers to access the same bytes of a typed array in memory. Workers (and the main thread) use atomic operations provided by the new Atomics global to safely access this memory across different execution contexts. 
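A single-threaded sketch of those shared-memory primitives (in real code the SharedArrayBuffer would be sent to a web worker via postMessage; the worker-side code is omitted here):

```javascript
// SharedArrayBuffer + Atomics sketch, single-threaded for brevity.
// Normally this buffer would be shared with one or more web workers.
const sab = new SharedArrayBuffer(4);   // room for one 32-bit integer
const counter = new Int32Array(sab);    // typed-array view over the bytes

Atomics.store(counter, 0, 0);           // atomic write
Atomics.add(counter, 0, 1);             // atomic read-modify-write
Atomics.add(counter, 0, 1);
console.log(Atomics.load(counter, 0));  // 2
```

With multiple workers incrementing the same index, the Atomics calls are what prevent lost updates; plain reads and writes on the view would race.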
SharedArrayBuffer offers a much faster method of communication between workers compared to message sending or transferable objects. Adoption of shared memory will be hugely significant in the years to come. Near-native performance for JavaScript applications and games means that the web becomes a more competitive platform. Applications can become more complex and do more expensive operations in the browser without sacrificing performance or offloading tasks to the server. A truly parallel architecture with shared memory is a great asset for anyone trying to create games with WebGL and web workers. As of December 2017 they are supported by all major browsers, and Edge starting with v16. Node does not currently support web workers, so they have no plans to support Shared Memory. However, they are currently rethinking their worker support, so it's possible that it might find its way into Node in the future. TL;DR: Shared memory will make high-performance parallel computing in JavaScript much easier to work with and far more efficient. WebAssembly WebAssembly (or WASM) provides a way to compile code written in other languages to a form that can be executed in the browser. This code is a low-level assembly-like language that is designed to run at near-native performance. JavaScript can load and execute WebAssembly modules using a new API. The API also provides a memory constructor that lets JavaScript directly read and manipulate the memory accessed by a WebAssembly module instance, allowing for a higher degree of integration with JavaScript applications. All major browsers now support WebAssembly, with Chrome support arriving in May, Firefox in March, and Edge in October. Safari supports it in their 11th release, which ships with MacOS High Sierra and an update is available for the Sierra and El Capitan releases. Chrome for Android and Safari Mobile also support WebAssembly. 
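The JavaScript loading API mentioned above can be sketched with the smallest possible module: the eight bytes below are just the WebAssembly magic number ("\0asm") plus the version, so the module validates and instantiates but exports nothing:

```javascript
// Minimal valid WebAssembly module: magic number "\0asm" + version 1.
// It has no functions or exports; this only demonstrates the API shape.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

console.log(WebAssembly.validate(bytes)); // true

WebAssembly.instantiate(bytes).then(({ module, instance }) => {
  console.log(Object.keys(instance.exports)); // [] -- nothing exported
});
```

A real module compiled from C or Rust would be loaded the same way, with its exported functions appearing on `instance.exports`.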
You can compile C/C++ code to WebAssembly using the emscripten compiler and configuring it to target WebAssembly. You can also compile Rust to WebAssembly, as well as OCaml. There are multiple ways to compile JavaScript (or something close to it) to WebAssembly. Some of these, like Speedy.js and AssemblyScript, leverage TypeScript for type checking but add lower-level types and basic memory management. None of these projects are production-ready and their APIs are changing frequently. Given the desire for compiling JS to WebAssembly one can expect these projects to gain momentum as WebAssembly becomes more popular. There are already a lot of really interesting WebAssembly projects. There is a virtual DOM implementation written in C++, allowing you to create an entire frontend application in C++. If your project uses Webpack, there is a wasm-loader that eliminates the need to manually fetch and parse .wasm files directly. WABT offers a suite of tools that allow you to transform between the binary and text WebAssembly formats, print information about WASM binaries, and merge .wasm files. Expect WebAssembly to become more popular in the coming year as more tools are developed and the JavaScript community wakes up to its possibilities. It's currently in the "experimental" phase, and browsers have only recently begun to support it. It will become a niche tool for speeding up CPU-intensive tasks like image processing and 3D rendering. Eventually as it matures I suspect that it will find use cases in more everyday applications. TL;DR: WebAssembly will change everything eventually, but it's still pretty new. Package Managers 2017 was a great year for JavaScript package management. Bower continued its decline and replacement by NPM. Its last release was November 2016 and its maintainers now officially recommend that users use NPM for frontend projects. Yarn was introduced in October of 2016 and brought innovation to JavaScript package management. 
Although it uses the same public package repository as NPM, Yarn offered faster dependency download and installation times and a more user-friendly API. Yarn introduced lock files which allowed for reproducible builds across different machines and an offline mode that allowed users to reinstall packages without an internet connection. As a result its popularity exploded and thousands of projects started using it. NPM responded with a massive v5 release which significantly improved performance and overhauled the API. Yarn responded by introducing Yarn Workspaces, allowing first-class support for monorepo package management similar to the popular Lerna tool. There are now more options for NPM clients than just Yarn and NPM. PNPM is another popular option, billing itself as “fast, disk space efficient package manager”. Unlike Yarn and NPM it keeps a global cache of every package version ever installed which it symlinks into the node_modules folder of your package. TL;DR: NPM has adapted quickly to Yarn's surge in popularity; both are now popular. Stylesheets Recent Innovations Within the last few years CSS preprocessors like SASS, Less, and Stylus have become popular. PostCSS — which was introduced way back in 2014 — really took off in 2017, becoming by far the most popular CSS preprocessor. Unlike the other preprocessors, PostCSS adopts a modular plugin approach similar to what Babel does for JavaScript. In addition to transforming stylesheets it also provides linters and other tools. Source: NPM Stats accessed on December 15, 2017 There has always been a desire to solve some of the lower-level problems with CSS that make it difficult to use it in concert with component-based development. Particularly, the global namespace makes it difficult to create styles that are isolated within a single component. 
Keeping CSS in a different file than the component code means that smaller components take up a larger footprint and require two files to be open in order to develop. CSS Modules augments normal CSS files by adding namespaces that can be used to isolate component styles. This works by generating a unique class name for each “local” class. This has become a viable solution with the widespread adoption of frontend build systems like Webpack, which has support for CSS Modules with its css-loader. PostCSS has a plugin to provide the same functionality. However, with this solution CSS remains in a separate file from component code. Other Solutions “CSS in JS” was an idea introduced in a famous talk in late 2014 by Christopher “Vjeux” Chedeau, a Facebook engineer on the React development team. This has spawned several influential libraries making it easier to create componentized styles. By far the most popular solution has been styled-components, which uses ES6 tagged template literals to create React components from CSS strings. Another popular solution is Aphrodite, which uses JavaScript object literals to create framework-agnostic inline styles. In the State of JavaScript 2017 survey, 34% of developers said that they have used CSS-in-JS. TL;DR: PostCSS is the preferred CSS preprocessor, but many are switching to CSS-in-JS solutions. Module Bundlers Webpack In 2017 Webpack has solidified its lead over the previous generation of JavaScript bundling tools by a wide margin: Source: npmtrends.com Webpack 2 was released in February of this year. It brought important features like ES6 modules (no longer requiring Babel to transpile import statements) and tree shaking (which eliminates unused code from your bundles). V3 was released shortly after that, bringing a feature called “scope hoisting” which places all of your webpack modules into a single JavaScript bundle, significantly reducing its size. 
In July the Webpack team received a grant from the Mozilla Open Source Support program in order to develop first-class support for WebAssembly. The plan is to eventually offer deep integration with WebAssembly and the JavaScript module system. There has been innovation in the module bundler space not related to Webpack. While it remains popular, developers have complained about the difficulty in configuring it correctly and the wide array of plugins required to get acceptable performance on large projects. Parcel Parcel is an interesting project that gained notice in early December (10,000 stars on Github in only 10 days!). It bills itself as a “blazing fast, zero configuration web application bundler”. It largely achieves this by utilizing multiple CPU cores and an efficient filesystem cache. It also operates on abstract syntax trees instead of strings like Webpack. Like Webpack, Parcel also handles non-JavaScript assets like images and stylesheets. The module bundler space displays a common pattern in the JavaScript community: the constant back-and-forth between the “batteries included” (aka centralized) and “configure everything” (aka decentralized) approaches. We see this in the transition from Angular to React / Redux and from SASS to PostCSS. Webpack and the bundlers and task-runners before it were all decentralized solutions with many plugins. In fact, Webpack and React share similar complaints in 2017 for nearly the same reasons. It makes total sense that people would desire a “batteries included” solution for bundling as well. Rollup Rollup generated a considerable amount of attention before the release of Webpack 2 in 2016 by introducing a popular feature known as tree shaking, which is just a fancy way of saying dead-code elimination. Webpack responded with support for Rollup’s signature feature in its second release. 
Rollup bundles modules differently than Webpack, making the overall bundle size smaller, but at the cost of important features like code splitting, which Rollup does not support. In April the React team switched to Rollup from Gulp, prompting many to ask why they chose Rollup over Webpack. The Webpack team responded to this confusion by recommending Rollup for library development and Webpack for app development.

TL;DR: Webpack is still by far the most popular module bundler, but it might not be forever.

TypeScript

In 2017 Flow lost considerable ground to TypeScript:

Although this trend has existed for the last few years, it picked up pace in 2017. TypeScript is now the third-most loved language according to the 2017 Stack Overflow Developer Survey (Flow didn't even garner a mention). Reasons often cited for why TypeScript won include: superior tooling (especially with editors like Visual Studio Code), linting support (tslint has become quite popular), a larger community, a larger database of third-party library type definitions, better documentation, and easier configuration.

Early on TypeScript got automatic popularity by virtue of being the language of choice for the Angular project, but in 2017 it solidified its usage across the entire community. According to Google Trends, TypeScript grew twice as popular over the course of the year.

TypeScript has adopted a rapid release schedule that has allowed it to keep pace with JavaScript language development while also fine-tuning the type system. It now supports ECMAScript features like iterators, generators, async generators, and dynamic imports. You can now typecheck JavaScript using TypeScript, which is achieved through type inference and JSDoc comments. If you use Visual Studio Code, TypeScript now supports powerful transformation tools in-editor which allow you to rename variables and automatically import modules.

TL;DR: TypeScript is winning against Flow.
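The type inference mentioned above is a large part of why TypeScript feels lightweight in practice: most annotations are optional because types flow from values. A minimal sketch (the `User` shape and `greet` function are invented for illustration):

```typescript
// Structural typing plus inference: only the function boundary is annotated.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// No annotation needed here: the object literal's type is inferred,
// and it is accepted because it structurally matches User.
const u = { id: 1, name: "Ada" };
const msg = greet(u);
```

The same inference engine is what powers TypeScript's new ability to typecheck plain JavaScript, with JSDoc comments filling in where annotations would otherwise go.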
State Management

Redux continues to be the preferred state management solution for React projects, enjoying 5x growth in NPM downloads throughout 2017:

Source: NPM Trends

MobX is an interesting competitor to Redux for client-side state management. Unlike Redux, MobX uses observable state objects and an API inspired by functional reactive programming concepts. Redux, in contrast, was heavily influenced by classic functional programming and favors pure functions. Redux can be considered a "manual" state manager in that actions and reducers explicitly describe state changes; MobX in contrast is an "automatic" state manager, because the observable pattern does all of that for you behind the scenes.

MobX makes very few assumptions about how you structure your data, what type of data you're storing, or whether or not it's JSON-serializable. These qualities make it very easy for beginners to start using MobX.

Unlike Redux, MobX is not transactional and deterministic, meaning you don't automatically get all of the benefits Redux enjoys with debugging and logging. You can't easily take a snapshot of the entire state of a MobX application, meaning that debugging tools like LogRocket have to watch each of your observables manually.

MobX is used by several high-profile companies like Bank of America, IBM, and Lyft. There is also a growing community of plugins, boilerplates, and tutorials. It's also growing really fast: from 50k NPM downloads at the beginning of the year to a peak of 250k NPM downloads in October.

Because of the aforementioned limitations, the MobX team has been hard at work combining the best of both worlds of Redux and MobX in a project called mobx-state-tree (or MST). It's essentially a state container that uses MobX behind the scenes to provide a way to work with immutable data as easily as working with mutable data. Basically, your state is still mutable, but you work with an immutable copy of this state referred to as a snapshot.
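The "manual" Redux style described above comes down to pure reducer functions: every state change is an explicit, serializable action applied by a pure function, which is what makes snapshotting and time-travel debugging cheap. A minimal framework-free sketch (the state and action shapes are invented for illustration):

```typescript
// A pure reducer in the Redux style: (state, action) -> new state, no mutation.
type State = { count: number };
type Action =
  | { type: "increment" }
  | { type: "add"; amount: number };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };   // returns a fresh object
    case "add":
      return { count: state.count + action.amount };
    default:
      return state;                        // unknown actions are ignored
  }
}

const s1 = reducer({ count: 0 }, { type: "increment" });
const s2 = reducer(s1, { type: "add", amount: 4 });
```

In MobX the equivalent would be a plain class whose fields are observable; mutations happen directly and the library tracks them for you, which is exactly why a global snapshot is harder to obtain.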
There is already a plethora of developer tools that help debug and inspect your state trees — Wiretap and mobx-devtools are great options. Because they operate in a similar way, you can even use Redux devtools with mobx-state-tree.

TL;DR: Redux is still king, but look out for MobX and mobx-state-tree.

GraphQL

GraphQL is a query language and runtime for APIs that offers a more descriptive and easy-to-use syntax for reasoning about your data sources. Instead of building REST endpoints, GraphQL provides a typed query syntax that allows JavaScript clients to request only the data that they need. It is perhaps the most important innovation in API development of the last few years.

Although the GraphQL language spec hasn't changed since October 2016, interest in it has continued to climb. Over the last year, Google Trends has seen a 4x increase in searches for GraphQL, and NPM has seen a 13x increase in downloads of the reference JavaScript GraphQL client. There are now many client and server implementations to choose from. Apollo is a popular client and server choice, adding comprehensive cache controls and integrations with many popular view libraries like React and Vue. MEAN is a popular full-stack framework that uses GraphQL as an API layer.

Within the last year the community behind GraphQL has also grown immensely. It has created server implementations in over 20 languages and thousands of tutorials and starter projects. There is a very popular "awesome list". React-starter-kit — the most popular React boilerplate project — also uses GraphQL.

TL;DR: GraphQL is gaining momentum.

Also worth mentioning…

NapaJS

Microsoft's new multi-threaded JavaScript runtime is built on top of V8. NapaJS provides a way to use multithreading in a Node environment, allowing expensive CPU-bound tasks to be performed that otherwise would have been slow using the existing Node architecture.
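The "request only the data you need" point is easiest to see in a query itself. Assuming a hypothetical schema with a `user` field, a client asks for exactly the fields it will render and nothing more, where a REST endpoint would typically return the whole resource:

```graphql
# Hypothetical schema: the server decides what `user` and `friends` resolve to,
# but the client controls precisely which fields come back.
query {
  user(id: "42") {
    name
    email
    friends(first: 5) {
      name
    }
  }
}
```

The response mirrors the query's shape one-to-one, which is also what makes client-side caching (as in Apollo) tractable.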
It offers a threading alternative to Node's multiprocessing model, implemented as a module and available on NPM like any other library. While it has been possible to use threads in Node using the node-webworker-threads package and by interfacing with lower-level languages, Napa makes this seamless with the rest of the Node ecosystem by adding the ability to use the Node module system from inside worker threads. It also includes a comprehensive API for sharing data across workers, similar to the newly released shared memory standard.

The project is Microsoft's effort to bring high-performance architecture to the Node ecosystem. It's currently used by the Bing search engine as part of its backend stack. Given that it has the support of a major company like Microsoft, you can expect long-term stability. It will be interesting to see how far the Node community goes with multithreading.

Prettier

The trend in recent years has overwhelmingly been an increase in the importance and complexity of build tools. With the debut of Prettier, code formatting is now a popular addition to a frontend build pipeline. It bills itself as an "opinionated" code formatter designed to enforce a consistent coding style by parsing your code and reprinting it. While linting tools like ESLint have long been able to automatically enforce linting rules, Prettier is the most feature-rich formatting solution. Unlike ESLint, Prettier also supports JSON, CSS, SASS, and even GraphQL and Markdown. It also offers deep integration with ESLint and many popular editors.

Now if we could just agree on semicolons, we'd be alright…
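"Opinionated" in practice means Prettier exposes only a handful of options; everything else is fixed by the tool. A typical .prettierrc might look like the following (the specific values chosen here are illustrative, and the semicolon setting is exactly the kind of thing teams argue about):

```json
{
  "semi": false,
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 80
}
```

Dropped into the repository root, this makes the formatter's output identical for every contributor and editor, which is the whole point.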
https://blog.logrocket.com/frontend-in-2017-the-important-parts-4548d085977f/
- Type: Bug
- Status: Closed
- Priority: P1: Critical
- Resolution: Done
- Affects Version/s: production
- Fix Version/s: None
- Component/s: Coin (obsolete)
- Labels: None

Currently Coin goes through all Kernel images available and selects the first one in the list:

    @async_lru_cache(maxsize=1)
    async def macKernelImageID() -> str:
        image_pool = (await one.imagepool.info(-2, -1, -1))["IMAGE_POOL"]
        for image in xmlrpc_childElements(image_pool, "IMAGE"):
            if image["TYPE"] == one_image_type.kernel:
                return image["ID"]
        assert False, "Could not find Mac Kernel image id"

This prevents us from supporting different Kernel images. One that works with macOS 10.13 (and 10.12) does not work with older ones, and one that works with older ones doesn't work with 10.13. Somehow Coin needs to know which Kernel image is used with which macOS. Hard-code it in Coin?

- is required for QTQAINFRA-1270 Add macOS 10.13 VM to the CI - Closed
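One hedged way to encode the "which Kernel image goes with which macOS" knowledge is a simple lookup table keyed on the macOS version, instead of taking the first kernel image in the pool. This is purely a sketch: the image names, version keys, and fallback policy below are invented for illustration, not Coin's actual data.

```python
# Sketch only: hypothetical mapping from macOS version to kernel image name.
# Real Coin code would resolve these names to image IDs via the image pool.
KERNEL_IMAGE_BY_MACOS = {
    "10.12": "mac-kernel-legacy",
    "10.13": "mac-kernel-10.13",
}

def mac_kernel_image_name(macos_version: str) -> str:
    # Fall back to the legacy image for versions not explicitly listed.
    return KERNEL_IMAGE_BY_MACOS.get(macos_version, "mac-kernel-legacy")
```

Hard-coding the table in Coin, as the report suggests, keeps the selection deterministic at the cost of a code change whenever a new macOS/kernel pairing is added.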
https://bugreports.qt.io/browse/QTQAINFRA-2031
how to print socket data structures

Results 1 to 5 of 5

Join Date: Dec 2011 / Posts: 30

As far as I know there is a task_struct structure which holds all the data related to a process. In the case of a socket there is no socket pointer in task_struct, so how can the socket data structures of a process be reached through task_struct? Is there another method to do so?

I want to know the differences between the data structures of two sockets (AF_UNIX), and for this I want to print those data structures. I can't print them without going through task_struct. I also tried adding a socket pointer to task_struct, but there are many pointers I would need to include, since the socket pointer alone does not contain all the socket info; there are other structures like sock and sock_common. Including these pointers and their related .h files gives lots of errors during compilation.

printk can print all the data structures, but there are many files related to sockets, so it's easy to miss things, and it would take lots of time and be inefficient. So my idea was to create a kernel module taking a pid as parameter which prints all the socket data through task_struct. If that's not possible through task_struct, then how can it be done?

Hence, I want to know how the kernel relates the socket data structures to the corresponding process. Please help me out!

Join Date: Feb 2012 / Location: Hyderabad, India / Posts: 11

Can you try this module.
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/fs.h>
    #include <linux/pid.h>
    #include <linux/net.h>
    #include <linux/sched.h>
    #include <linux/fdtable.h>

    unsigned int pidnr = 1;
    module_param(pidnr, uint, 0644);

    int mod_init(void)
    {
        struct file *file;
        struct socket *sock;
        int fd = 0;
        struct inode *inode;
        struct task_struct *task;
        struct pid *mypid;
        struct files_struct *files;

        mypid = find_vpid(pidnr);
        if (!mypid) {
            return -ESRCH;
        }
        task = pid_task(mypid, PIDTYPE_PID);
        if (!task) {
            printk("\nNo Such Process\n");
            return -ESRCH;
        }
        printk("\nProcess %s\n", task->comm);
        files = task->files;
        rcu_read_lock();
        file = fcheck_files(files, fd);
        rcu_read_unlock();
        while (file) {
            inode = file->f_path.dentry->d_inode;
            if (S_ISSOCK(inode->i_mode)) {
                sock = file->private_data;
                printk("\ntype = %d\n", sock->type);
            }
            fd++;
            rcu_read_lock();
            file = fcheck_files(files, fd);
            rcu_read_unlock();
        }
        return 0;
    }

    void mod_exit(void)
    {
    }

    module_init(mod_init);
    module_exit(mod_exit);
    MODULE_LICENSE("GPL");

Join Date: Dec 2011 / Posts: 30

Join Date: Feb 2012 / Location: Hyderabad, India / Posts: 11

Join Date: Dec 2011 / Posts: 30

Sir, how do I access the msghdr pointer for a socket? We could not find any structure which contains a pointer to msghdr (e.g. the socket structure, the sock structure, etc.). sock is inside socket, so we can access it, but then how do we access msghdr? The same problem applies to sk_buff.

Actually, I want to print the differences in the variables of two client programs (using AF_UNIX sockets) which are connected to the same server. For example, two open files can be distinguished by their inodes, etc. Similarly, which variables related to the sockets differ between the two clients? Is it only the socket descriptor and the run-time buffer that change between the two clients, or are there other variables that differ as well? I have tried printing the socket structure and sock structure variables, but there is no major change between the variables for the two clients.
I am very confused. Please help... :'(
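For the "is there another method" part of the question: on Linux, the kernel already exposes the file-to-process relationship through /proc, so the socket inodes a process holds can be listed entirely from user space, without a kernel module. This is a hedged Linux-only sketch; it finds the inode numbers (which can then be matched against /proc/net/unix or /proc/net/tcp), not the kernel structures themselves.

```python
import os
import re

def socket_inodes(pid):
    """Return the inode numbers of sockets held open by a process.

    Reads /proc/<pid>/fd from user space (Linux only) -- an alternative to
    walking task_struct->files inside a kernel module. Each socket fd is a
    symlink of the form 'socket:[<inode>]'.
    """
    inodes = []
    fd_dir = "/proc/{}/fd".format(pid)
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # the fd may have been closed while scanning
        match = re.match(r"socket:\[(\d+)\]", target)
        if match:
            inodes.append(int(match.group(1)))
    return inodes
```

This does not replace the kernel-module approach when you need the struct sock internals, but it answers the "which sockets belong to which process" half of the question safely.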
http://www.linuxforums.org/forum/kernel/186606-how-print-socket-data-structures.html
    #include <CSIS3600.h>

    class CSIS3600 {
        CSIS3600(unsigned long base, int crate = 0) throws std::string;
        void Reset();
        const void LightLed();
        const void ClearLed();
        const bool isLedLit();
        const void SetLatchMode();
        const void SetCoincidenceMode();
        const bool isLatchMode();
        const void EnableExternalClear();
        const void DisableExternalClear();
        const bool ExternalClearEnabled();
        const void EnableExternalNext();
        const void DisableExternalNext();
        const bool ExternalNextEnabled();
        void SetFastClearWindow(int ns);
        const int GetFastClearWindow();
        const void Enable();
        const void Disable();
        const void Clock();
        const void StartLatch();
        const void EndLatch();
        const bool DataReady();
        const void ClearData();
        const unsigned long Read() throws std::string;
        const unsigned int Read(void* pBuffer, int nLongs);
    };

This class provides low-level support for the SIS 3600 multievent latch. The module can act either as a pure latch or as a coincidence register. A latch stores the inputs presented to it on the falling edge of a gate, while a coincidence register stores 1's for all inputs that have had true values for the duration of the input gate.

CSIS3600(unsigned long base, int crate = 0)
Creates an instance of a CSIS3600 class. This object will allow you to manipulate the SIS 3600 module that has the base address base in VME crate number crate. If the crate is not supplied, it defaults to 0, which is suitable for single-crate systems.

Reset()
Performs a module soft reset.

LightLed()
Turns on the U LED on the module. This LED is available for application-specific signalling.

ClearLed()
Turns off the U LED on the module.

isLedLit()
Returns true if the U LED is lit.

SetLatchMode()
Requests that the module operate in latch mode. In latch mode, inputs that are asserted will be transferred to the event memory when the next-event signal transitions to asserted. See also SetCoincidenceMode.

SetCoincidenceMode()
Places the module in coincidence mode. In coincidence mode, signals that are asserted during the time the gate is present are transmitted to the event memory as one, otherwise as zero.
isLatchMode()
Returns true if the module has been set in latch mode. If the module is in coincidence mode, this will return false.

EnableExternalClear()
Enables the external fast clear input. This is used in conjunction with the fast clear window to allow events to be rejected prior to transferring them to the FIFO. A fast clear signal that arrives within the fast clear window of the end(?) of the gate discards the event prior to transfer.

DisableExternalClear()
Disables the module's external fast clear input. Transitions to asserted on this line will have no impact on the operation of the module, regardless of their timing relative to the fast clear window.

ExternalClearEnabled()
Returns true if the external fast clear is enabled.

EnableExternalNext()
Enables the external next input to act either as a latch or as a coincidence window, depending on the mode of the module.

DisableExternalNext()
Disables the external next input. When external next is disabled, data can only be latched by a programmatic latch.

ExternalNextEnabled()
Returns true if the external next signal is enabled.

SetFastClearWindow(int ns)
Sets the fast clear window in nanoseconds. Note that there is a granularity to the fast clear window width. See section 7.4 of the SIS 3600 manual.

GetFastClearWindow()
Returns the value of the fast clear window in ns. Due to the granularity of the fast clear window, it's not unusual to set a fast clear window and then ask the card what the fast clear window actually turned out to be.

Enable()
Enables the module's ability to clock in new events.

Disable()
Disables the module's ability to clock in new events.

Clock()
Clocks in the next event. If the module is in latch mode, it is easy to see that the data presented to the module at the time the function is called are loaded into the FIFO as the next event. It is not clear from the manual documentation what it means to call this when the module is in coincidence mode. My guess is that there is a finite width to the internal clock generated by this function, and the width of that signal determines the coincidence window.

StartLatch()
Begins a latch operation.
In coincidence mode this allows a (long) software-timed coincidence window to be programmed. Calling this starts the window.

EndLatch()
Ends a software-timed latch (load next event) window.

DataReady()
Returns true if the module has data in its FIFO.

ClearData()
Clears any data that may be hanging around in the FIFO.

Read()
Reads an event (one longword) from the FIFO of the module and returns it as the function value. If there are no events in the FIFO, the function throws an exception of type std::string.

Read(void* pBuffer, int nLongs)
Reads at most nLongs longwords from the event FIFO into the buffer pointed to by pBuffer. Returns the number of longwords that were actually read. This can be less than nLongs, or even 0 if there are fewer than nLongs longwords in the module's FIFO.

For various errors, std::string exceptions are thrown. The exception contents are error messages, so it's reasonable to say:

    try {
        … // Calls to CSIS 3600 stuff.
    }
    catch (std::string msg) {
        cerr << "String error caught " << msg << endl;
        exit(-1);
    }
http://docs.nscl.msu.edu/daq/newsite/nscldaq-11.2/r31685.html
CodeGuru Forums > .NET Programming > C# Programming > problems with XP

mathiasneyer — May 30th, 2008, 08:46 AM

Hi there. I've got a problem: my C# projects run fine on my development machine on Win2k. Now I got a second PC with XP on it. I installed VS2005 and thought everything was fine, but now a single project is making trouble. If I start the application with the debugger, the attached error message appears. Dependency Walker says:

Error: The Side-by-Side configuration information in "s:\st3d\optimisation2d\OPTI.EXE.manifest" contains errors. This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem (14001).

I have no idea where this is coming from... it would be great if you could give me any tips or suggestions. Thanks in advance, Mathias

marioana — May 30th, 2008, 08:57 AM

Did you check if the user you logged in as has full control? It seems to me that it can't start an application, which might be because of insufficient rights...

JonnyPoet — May 30th, 2008, 02:18 PM

I had a similar problem once when I saved one of my projects (created on a Vista OS PC) on a USB stick and brought it to another machine and tried to open it there. After recompiling the whole project, first recompiling the DLLs and then the main project, the problem was fixed.

mathiasneyer — June 2nd, 2008, 02:00 AM

Thanks for your suggestions... I haven't solved the problem yet. All my other projects work fine, but I can't even debug this application! I'm really (re)tired now... I don't know where it's coming from; I checked all dependencies, namespaces, etc.

So long, Mathias
http://forums.codeguru.com/archive/index.php/t-454169.html
What's new in Celery 3.0 (Chiastic Slide)

If you use Celery in combination with Django you must also read the django-celery changelog and upgrade to django-celery 3.0.

This version is officially supported on CPython 2.5, 2.6, 2.7, 3.2 and 3.3, as well as PyPy and Jython.

Highlights

Overview

- A new and improved API, that is both simpler and more powerful. Everyone must read the new First Steps with Celery tutorial, and the new Next Steps tutorial. Oh, and why not reread the user guide while you're at it :) There are no current plans to deprecate the old API, so you don't have to be in a hurry to port your applications.
- The worker is now thread-less, giving great performance improvements.
- The new "Canvas" makes it easy to define complex workflows.
- All of Celery's command-line programs are now available from a single celery umbrella command.
- This is the last version to support Python 2.5. Starting with Celery 3.1, Python 2.6 or later is required.
- Support for the new librabbitmq C client. Celery will automatically use the librabbitmq module if installed, which is a very fast and memory-optimized replacement for the py-amqp module.
- The workers' remote control command exchanges have been renamed (a new pidbox name). This is because the auto_delete flag on the exchanges has been removed, which makes them incompatible with earlier versions. You can manually delete the old exchanges if you want, using the celery amqp command (previously called camqadm):

    $ celery amqp exchange.delete celeryd.pidbox
    $ celery amqp exchange.delete reply.celeryd.pidbox

- Eventloop: the CELERYD_FORCE_EXECV setting is enabled by default if the eventloop is not used.

New celery umbrella command

All of Celery's command-line programs are now available from a single celery umbrella command. You can see a list of subcommands and options by running:

    $ celery help

Commands include:

- celery worker (previously celeryd).
- celery beat (previously celerybeat).
- celery amqp (previously camqadm).
The old programs are still available (celeryd, celerybeat, etc.), but you are discouraged from using them.

Now depends on billiard

Billiard is a fork of the multiprocessing module containing the no-execv patch by sbt, and also contains the pool improvements previously located in Celery. This fork was necessary as changes to the C extension code were required for the no-execv patch to work.

- Issue #625
- Issue #627
- Issue #640
- django-celery #122
- django-celery #124

celery.app.task no longer a package

The celery.app.task module is now a module instead of a package. The setup.py install script will try to remove the old package, but if that doesn't work for some reason you have to remove it manually. This command helps:

    $ rm -r $(dirname $(python -c 'import celery;print(celery.__file__)'))/app/task/

If you experience an error like ImportError: cannot import name _unpickle_task, you just have to remove the old package and everything is fine.

Last version to support Python 2.5

The 3.0 series will be the last version to support Python 2.5; starting from 3.1, Python 2.6 or later will be required. With several other distributions taking the step to discontinue Python 2.5 support, we feel that it is time too. Python 2.6 should be widely available at this point, and we urge you to upgrade.

UTC is now used by default instead of local time; note that messages using UTC are not compatible with Celery versions prior to 2.5. You can disable UTC and revert back to old local time by setting the CELERY_ENABLE_UTC setting.

Redis: Ack emulation improvements

News

Chaining Tasks

Tasks can now have callbacks and errbacks, and dependencies are recorded.

The task message format has been updated with two new extension keys. Both keys can be empty/undefined or a list of subtasks.

- callbacks: applied if the task exits successfully, with the result of the task as an argument.
- errbacks: applied if an error occurred while executing the task, with the uuid of the task as an argument.
Since it may not be possible to serialize the exception instance, it passes the uuid of the task instead. The uuid can then be used to retrieve the exception and traceback of the task from the result backend.

- link and link_error keyword arguments have been added to apply_async. These add callbacks and errbacks to the task, and you can read more about them at Linking (callbacks/errbacks).
- We now track what subtasks a task sends, and some result backends support retrieving this information.
- task.request.children: contains the result instances of the subtasks the currently executing task has applied.
- AsyncResult.children: returns the task's dependencies, as a list of AsyncResult / ResultSet instances.
- AsyncResult.iterdeps: recursively iterates over the task's dependencies, yielding (parent, node) tuples. Raises IncompleteStream if any of the dependencies has not returned yet.
- AsyncResult.graph: a DependencyGraph of the task's dependencies. This can also be used to convert to dot format:

    with open('graph.dot') as fh:
        result.graph.to_dot(fh)

which can then be used to produce an image:

    $ dot -Tpng graph.dot -o graph.png

A new special subtask called chain is also included:

    >>> from celery import chain

    # (2 + 2) * 8 / 2
    >>> res = chain(add.subtask((2, 2)),
    ...             mul.subtask((8, )),
    ...             div.subtask((2,))).apply_async()
    >>> res.get() == 16

    >>> res.parent.get() == 32

    >>> res.parent.parent.get() == 4

- Adds AsyncResult.get_leaf(): waits for and returns the result of the leaf subtask. That is the last node found when traversing the graph, but this means that the graph can be 1-dimensional only (in effect a list).
- Adds subtask.link(subtask) + subtask.link_error(subtask): shortcut to s.options.setdefault('link', []).append(subtask).
- Adds subtask.flatten_links(): returns a flattened list of all dependencies (recursively).

Redis: Priority support

    >>> BROKER_TRANSPORT_OPTIONS = {
    ...     'priority_steps': [0, 2, 4, 6, 8, 9],
    ... }

Priorities implemented in this way are not as reliable as priorities on the server side, which is why the feature is nicknamed "quasi-priorities".

group/chord/chain are now subtasks

The group is no longer an alias to TaskSet, but new altogether, since it was very difficult to migrate the TaskSet class to become a subtask.

- A new shortcut has been added to tasks:

    >>> task.s(arg1, arg2, kw=1)

as a shortcut to:

    >>> task.subtask((arg1, arg2), {'kw': 1})

- Tasks can be chained by using the | operator:

    >>> (add.s(2, 2) | pow.s(2)).apply_async()

- Subtasks can be "evaluated" using the ~ operator:

    >>> ~add.s(2, 2)
    4

    >>> ~(add.s(2, 2) | pow.s(2))

is the same as:

    >>> chain(add.s(2, 2), pow.s(2)).apply_async().get()

- A new subtask_type key has been added to the subtask dicts. This can be the string "chord", "group", "chain", "chunks", "xmap", or "xstarmap".
- maybe_subtask now uses subtask_type to reconstruct the object, to be used when using non-pickle serializers.
- The logic for these operations has been moved to dedicated tasks celery.chord, celery.chain and celery.group.
- subtask no longer inherits from AttributeDict. It's now a pure dict subclass with properties for attribute access to the relevant keys.
- The repr now outputs how the sequence would look imperatively:

    >>> from celery import chord
    >>> (chord([add.s(i, i) for i in xrange(10)], xsum.s()) | pow.s(2))
    tasks.xsum([tasks.add(0, 0),
                tasks.add(1, 1),
                tasks.add(2, 2),
                tasks.add(3, 3),
                tasks.add(4, 4),
                tasks.add(5, 5),
                tasks.add(6, 6),
                tasks.add(7, 7),
                tasks.add(8, 8),
                tasks.add(9, 9)]) | tasks.pow(2)

New remote control commands

These commands were previously experimental, but they have proven stable and are now documented as part of the official API.

- add_consumer / cancel_consumer: tells workers to consume from a new queue, or to cancel consuming from a queue. This command has also been changed so that the worker remembers the queues added, so that the change will persist even if the connection is re-connected.
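The | chaining operator described above is ordinary Python operator overloading on the signature objects. As a toy model only (this is not Celery's implementation, and everything runs synchronously here), a signature class can implement __or__ to accumulate tasks and then feed each result into the next partial call, the way celery.chain does:

```python
# Toy model of the Celery 3.0 chaining API -- illustration only, not Celery code.
class Signature:
    def __init__(self, fn, *args):
        self.tasks = [(fn, args)]

    def __or__(self, other):
        # a | b builds a new chain containing both signatures' tasks, in order
        combined = Signature.__new__(Signature)
        combined.tasks = self.tasks + other.tasks
        return combined

    def apply(self):
        # The first task runs with its own args; each later task receives the
        # previous result as its first argument, like a celery.chain callback.
        fn, args = self.tasks[0]
        result = fn(*args)
        for fn, args in self.tasks[1:]:
            result = fn(result, *args)
        return result

def add(x, y):
    return x + y

def power(base, exp):
    return base ** exp

res = (Signature(add, 2, 2) | Signature(power, 2)).apply()  # (2 + 2) ** 2
```

In real Celery the chain is serialized into the task message and each link executes on a worker, but the argument-threading rule is the same.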
These commands are available programmatically as app.control.add_consumer() / app.control.cancel_consumer():

    >>> celery.control.add_consumer(queue_name,
    ...     destination=['w1.example.com'])
    >>> celery.control.cancel_consumer(queue_name,
    ...     destination=['w1.example.com'])

or using the celery control command:

    $ celery control -d w1.example.com add_consumer queue
    $ celery control -d w1.example.com cancel_consumer queue

Note: remember that a control command without destination will be sent to all workers.

- autoscale: tells workers with --autoscale enabled to change autoscale max/min concurrency settings. This command is available programmatically as app.control.autoscale():

    >>> celery.control.autoscale(max=10, min=5,
    ...     destination=['w1.example.com'])

or using the celery control command:

    $ celery control -d w1.example.com autoscale 10 5

- pool_grow / pool_shrink: tells workers to add or remove pool processes. These commands are available programmatically as app.control.pool_grow() / app.control.pool_shrink():

    >>> celery.control.pool_grow(2, destination=['w1.example.com'])
    >>> celery.control.pool_shrink(2, destination=['w1.example.com'])

Immutable subtasks

Subtasks can now be immutable, which means that the arguments will not be modified when calling callbacks:

    >>> chain(add.s(2, 2), clear_static_electricity.si())

means it will not receive the argument of the parent task, and .si() is a shortcut to:

    >>> clear_static_electricity.subtask(immutable=True)

Logging Improvements

Logging support now conforms better with best practices.

- Classes used by the worker no longer use app.get_default_logger, but use celery.utils.log.get_logger, which simply gets the logger without setting the level, and adds a NullHandler.
- Loggers are no longer passed around; instead every module using logging defines a module-global logger that is used throughout.
- All loggers inherit from a common logger called "celery".
- Before, task.get_logger would set up a new logger for every task, and even set the loglevel. This is no longer the case.
- Instead, all task loggers now inherit from a common "celery.task" logger that is set up when programs call setup_logging_subsystem.
- Instead of using LoggerAdapter to augment the formatter with the task_id and task_name fields, the task base logger now uses a special formatter that adds these values at runtime from the currently executing task.

In fact, task.get_logger is no longer recommended; it is better to add a module-level logger to your tasks module. For example, like this:

    from celery.utils.log import get_task_logger

    logger = get_task_logger(__name__)

    @celery.task
    def add(x, y):
        logger.debug('Adding %r + %r' % (x, y))
        return x + y

The resulting logger will then inherit from the "celery.task" logger, so that the current task name and id are included in logging output.

- Redirected output from stdout/stderr is now logged to a "celery.redirected" logger.
- In addition, a few warnings.warn calls have been replaced with logger.warn.
- Now avoids the 'no handlers for logger multiprocessing' warning.

Task registry no longer global

Every Celery instance now has its own task registry. You can make apps share registries by specifying it:

    >>> app1 = Celery()
    >>> app2 = Celery(tasks=app1.tasks)

Note that tasks are shared between registries by default, so that tasks will be added to every subsequently created task registry. As an alternative, tasks can be private to specific task registries by setting the shared argument to the @task decorator:

    @celery.task(shared=False)
    def add(x, y):
        return x + y

Abstract tasks are now lazily bound

The Task class is no longer bound to an app by default; it will first be bound (and configured) when a concrete subclass is created.
This means that you can safely import and make task base classes without also initializing the app environment:

    from celery.task import Task

    class DebugTask(Task):
        abstract = True

        def __call__(self, *args, **kwargs):
            print('CALLING %r' % (self, ))
            return self.run(*args, **kwargs)

    >>> DebugTask
    <unbound DebugTask>

    >>> @celery1.task(base=DebugTask)
    ... def add(x, y):
    ...     return x + y
    >>> add.__class__
    <class add of <Celery default:0x101510d10>>

Lazy task decorators

The @task decorator is now lazy when used with custom apps. That is, if accept_magic_kwargs is enabled (hereby called "compat mode"), the task decorator executes inline like before; however, for custom apps the @task decorator now returns a special PromiseProxy object that …

… a module named 'celery', and get the celery attribute from that module. E.g. if you have a project named 'proj' where the celery app is located in …

- … will not be signalled (Issue #595). Contributed by Brendon Crawford.
- Redis event monitor queues are now automatically deleted (Issue #436).
- App instance factory methods have been converted to be cached descriptors that create a new subclass on access. This means that e.g. …
- Result backends can now be set using a URL. Currently only supported by redis. Example use:

    CELERY_RESULT_BACKEND = 'redis://localhost/1'

- Heartbeat frequency is now every 5s, and the frequency is sent with the event. The heartbeat frequency is now available in the worker event messages, so that clients can decide when to consider workers offline based on this value.
- Module celery.actors has been removed, and will be part of cl instead.
- Introduces the new celery command, which is an entrypoint for all other commands. The main for this command can be run by calling celery.start().
- Annotations now support decorators if the key starts with '@'.
E.g.:

from functools import wraps

def debug_args(fun):
    @wraps(fun)
    def _inner(*args, **kwargs):
        print('ARGS: %r' % (args, ))
        return fun(*args, **kwargs)
    return _inner

CELERY_ANNOTATIONS = {
    'tasks.add': {'@__call__': debug_args},
}

Also tasks are now always bound by class so that annotated methods end up being bound.

- Bugreport [...] to .id
- xmap(task, sequence) and xstarmap(task, sequence): returns a list of the results of applying the task function to every item in the sequence. Example:

>>> from celery import xstarmap
>>> xstarmap(add, zip(range(10), range(10))).apply_async()
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

- chunks(task, sequence, chunksize)
- group.skew(start=, stop=, step=): skew will skew the countdown for the individual tasks in a group, e.g. with a group:

>>> g = group(add.s(i, i) for i in xrange(10))

Skewing the tasks from 0 seconds to 10 seconds:

>>> g.skew(stop=10)

will have the first task execute in 0 seconds, the second in 1 second, the third in 2 seconds and so on.

- 99% test coverage.
- CELERY_QUEUES [...]
- CELERYD_FORCE_EXECV is now enabled by default. If the old behavior is wanted, the setting can be set to False, or the new --no-execv option passed to celery worker.
- Deprecated module celery.conf has been removed.
- CELERY_TIMEZONE now always requires the pytz library to be installed (except if the timezone is set to UTC).
- The Tokyo Tyrant backend has been removed and is no longer supported.
- Now uses maybe_declare() to cache queue declarations.
- There is no longer a global default for the CELERYBEAT_MAX_LOOP_INTERVAL setting; it is instead set by individual schedulers.
- Worker: now truncates very long message bodies in error reports.
- No longer deepcopies [...]
- Celerybeat no longer logs the startup banner. Previously it would be logged with severity warning, now it's only written to stdout.
- The contrib/ directory in the distribution has been renamed to extra/.
- New signal: task_revoked.
- celery.contrib.migrate: many improvements including filtering, queue migration, and support for acking messages on the broker migrating from. Contributed by John Watson.
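The chunks(task, sequence, chunksize) primitive mentioned above splits a sequence of task arguments into evenly sized parts, each dispatched as one task. As a plain-Python illustration of the splitting arithmetic only — this is not the Celery API, and split_into_chunks is a made-up name for the example:

```python
def split_into_chunks(sequence, chunksize):
    """Split a sequence into consecutive pieces of at most chunksize items."""
    items = list(sequence)
    return [items[i:i + chunksize] for i in range(0, len(items), chunksize)]

# Ten (i, i) argument pairs split into chunks of 3 -- the grouping that
# chunks() performs before dispatching each part as a single task.
pairs = [(i, i) for i in range(10)]
print(split_into_chunks(pairs, 3))
```

With ten items and a chunk size of 3 this yields four parts, the last one holding the single leftover pair.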
- Worker: prefetch count increments are now optimized and grouped together.
- Worker: no longer calls consume on the remote control command queue twice. Probably didn't cause any problems, but was unnecessary.

Internals

- app.broker_connection is now app.connection. Both names still work.
- Compat modules are now generated dynamically upon use. These modules are celery.messaging, celery.log, celery.decorators and celery.registry.
- celery.utils refactored into multiple modules.
- Now using kombu.utils.encoding instead of celery.utils.encoding.
- Renamed module celery.routes -> celery.app.routes.
- Renamed package celery.db -> celery.backends.database.
- Renamed module celery.abstract -> celery.worker.bootsteps.
- Command line docs are now parsed from the module docstrings.
- Test suite directory has been reorganized.
- setup.py now reads docs from the requirements/ directory.
- Celery commands no longer wrap output (Issue #700). Contributed by Thomas Johansson.

Experimental

celery.contrib.methods: task decorator for methods

This is an experimental module containing a task decorator, and a task decorator filter, that can be used to create tasks out of methods:

from celery.contrib.methods import task_method

class Counter(object):

    def __init__(self):
        self.value = 1

    @celery.task(name='Counter.increment', filter=task_method)
    def increment(self, n=1):
        self.value += 1
        return self.value

See celery.contrib.methods for more information.

Unscheduled Removals

Usually we don't make backward incompatible removals, but these removals should have no major effect.

The following settings have been renamed:

- CELERYD_ETA_SCHEDULER -> CELERYD_TIMER
- CELERYD_ETA_SCHEDULER_PRECISION -> CELERYD_TIMER_PRECISION

Deprecations

See the Celery Deprecation Timeline.

[...] do not modify anything, while idempotent control commands that make changes are on the control objects.
Fixes

- Retry sqlalchemy backend operations on DatabaseError/OperationalError (Issue #634).
- Tasks that called retry were not acknowledged if acks late was enabled. Fix contributed by David Markey.
- The message priority argument was not properly propagated to Kombu (Issue #708). Fix contributed by Eran Rundstein.
http://docs.celeryproject.org/en/latest/whatsnew-3.0.html
4. ANOVA analysis (introduction)¶

Although we provide a standalone application called gdsctools_anova, the most up-to-date and flexible way of using GDSCTools is to use the library from an IPython shell. This method (using the IPython shell) is also the only way to produce Data Packages. In this section we will exclusively use Python commands, which should also be of interest if you want to look at the Notebooks section later.

We assume now that (i) you have GDSCTools installed together with IPython (if not, please go back to the Installation section) and (ii) you are familiar with the INPUT data sets that will be used hereafter.

Note

Beginners usually work in a plain Python shell (started by typing python). Instead, we strongly recommend using IPython, which is a more flexible and interactive shell. To start IPython, just type this command in a terminal:

ipython

You should now see something like:

Python 2.7.5 (default, Nov 3 2014, 14:33:39)
Type "copyright", "credits" or "license" for more information.

IPython 4.0.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]:

Note

All snippets in this documentation are typed within an IPython shell. You may see >>> signs. They indicate a Python statement typed in a shell. Lines without those signs indicate the output of the previous statement. For instance:

>>> a = 3
>>> 2 + a
5

means the code 2 + a should print the value 5.

4.1. The IC50 input data¶

Before starting, we first need to get an IC50 data set example. Let us use this IC50 example test file.

See also

More details about the data format can be found in the Data Format and Readers section as well as links to retrieve IC50 data sets.
In Python, one can easily import all functionalities available in GDSCTools as follows:

from gdsctools import *

Although this syntax is correct, in the following we will try to be more explicit. So, we would rather use:

from gdsctools import IC50

This is better coding practice and also has the advantage of telling beginners which functions are going to be used. Here above, we imported the IC50 class that is required to read the example file aforementioned. Note that this IC50 example is installed with GDSCTools and its location can be obtained using:

from gdsctools import ic50_test
print(ic50_test.filename)

The IC50 class is flexible enough that you can provide the filename location or just the name ic50_test as in the example below, and of course the filename of a local file would work as well:

>>> from gdsctools import IC50, ic50_test
>>> ic = IC50(ic50_test)
>>> print(ic)
Number of drugs: 11
Number of cell lines: 988
Percentage of NA 0.206569746043

As you can see, you can get some information about the IC50 content (e.g., number of drugs, percentage of NaNs) using the print function. See gdsctools.readers.IC50 and Data Format and Readers for more details.

4.2. Getting help¶

At any time, you can get help about a GDSCTools functionality or a Python function by adding a question mark after a function's name:

IC50?

With GDSCTools, we also provide a convenience function called gdsctools_help():

gdsctools_help(IC50)

that should open a new tab in a browser redirecting you to the HTML help version (on the ReadTheDocs website) of a function or class (here the IC50 class).

4.3. The ANOVA class¶

One of the main applications of GDSCTools is based on an ANOVA analysis that is used to identify significant associations between drugs and genomic features. As mentioned above, a first file that contains the IC50s is required. That file contains experimentally measured IC50s for a set of drugs across cell lines.
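The summary printed by print(ic) above (number of drugs, number of cell lines, percentage of NA) boils down to simple counting over the IC50 matrix. A plain-Python sketch of the NA percentage, assuming a list-of-lists matrix with None marking missing values (the real class operates on a pandas DataFrame, and percentage_of_na is a made-up name for this example):

```python
def percentage_of_na(matrix):
    """Fraction of missing entries in a rectangular matrix (None marks an NA)."""
    cells = [value for row in matrix for value in row]
    return sum(1 for value in cells if value is None) / len(cells)

# A toy 2-drug x 3-cell-line IC50 matrix with one missing measurement.
ic50_matrix = [[0.5, None, 2.1],
               [1.2, 0.9, 3.3]]
print(percentage_of_na(ic50_matrix))  # 1 missing cell out of 6
```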
The second data file is a binary file that contains various features across the same cell lines. Those features are usually of genomic type (e.g., mutation, CNA, methylation). A default set of about 50 genomic features is provided and automatically fetched in the following examples. You may also provide your own data set as an input. The default genomic feature file is downloadable and its location can be found using:

from gdsctools import datasets
gf = datasets.genomic_features

See also

More details about the genomic features data format can be found in the Data Format and Readers section.

The class of interest, ANOVA, takes as input a compulsory IC50 filename (or data) and possibly a genomic features filename (or data). Using the previous IC50 test example, we can create an ANOVA instance as follows:

from gdsctools import ANOVA, ic50_test
an = ANOVA(ic50_test)

Again, note that the genomic features file is not provided, so the default file aforementioned will be used except if you provide a specific genomic features file as the second argument:

an = ANOVA(ic50_test, "your_genomic_features.csv")

There are now several possible analyses, but the core of the analysis consists in taking One Drug and One Feature (ODOF hereafter) and computing the association using a regression analysis (see Regression analysis for details). The IC50s across the cell lines are the dependent variable, and the explanatory variables are made of tissues, MSI and one genomic feature. Following the regression analysis, we compute the ANOVA summary leading to a p-value for the significance of the association between the drug's IC50s and the genomic feature considered. This calculation is performed with the anova_one_drug_one_feature() method. We will see a concrete example in a minute.

Once an ODOF is computed, one can actually repeat the ODOF analysis for a given drug across all features using the anova_one_drug() method. This is also named the One Drug All Features case (ODAF).
Finally we can even extend the analysis to All Drugs All Features (ADAF) using anova_all(). The following image illustrates how those 3 methods interweave together like Russian dolls. The computational time therefore increases with the number of drugs and features. Let us now perform the analysis for the 3 different cases.

4.3.1. One Drug One Feature (ODOF)¶

Let us start with the first case (ODOF). The user needs to provide a drug and a feature name and to call the anova_one_drug_one_feature() method. Here is an example:

from gdsctools import ANOVA, ic50_test
gdsc = ANOVA(ic50_test)
gdsc.anova_one_drug_one_feature(1047, 'TP53_mut', show=True, fontsize=16)

Setting the show parameter to True, we created a set of 3 boxplots, that is one for each explanatory feature considered: tissue, MSI and genomic feature. If there is only one tissue, this factor is not used as an explanatory variable (and the corresponding boxplot is not produced). Similarly, the MSI factor may be ignored if irrelevant.

In the first boxplot, the feature factor is considered; we see the IC50s being divided into two populations (negative and positive features) where all tissues are mixed. In the second boxplot, the tissue variable is explored; this is a decomposition of the first boxplot across tissues. Finally, the third boxplot shows the impact of the MSI factor. Here again, all tissues are mixed. In the MSI column, zeros and ones correspond to MSI unstable and stable, respectively. The pos and neg labels correspond to the feature being true or not, respectively.

The output of an ODOF analysis is a time series that contains statistical information about the association found between the drug and the feature. See gdsctools.anova_results.ANOVAResults for more details.

If you want to repeat this analysis for all features for the drug 1047, you will need to know the feature names.
This is stored in the following attribute:

gdsc.feature_names

The best is to do it in one go though, since it will also fill the FDR correction column based on all associations computed. See also gdsctools.anova and Data Packages.

4.3.2. One Drug All Features (ODAF)¶

Now that we have analysed one drug for one feature, we could repeat the analysis for all features. However, we provide a method that does exactly that for us (anova_one_drug()):

from gdsctools import ANOVA, ic50_test
gdsc = ANOVA(ic50_test)
results = gdsc.anova_one_drug(999)
results.volcano()

In a Python shell, you can click on a dot to get more information. Here, we have a different plot called a volcano plot, provided in the gdsctools.volcano module. Before explaining it, let us understand the x and y-axis labels. Each row in the dataframe produced by anova_one_drug() is made of a set of statistical metrics (look at the header results.df.columns). It includes a p-value (coming from the ANOVA analysis) and a signed effect size, which is computed as follows. In the ANOVA analysis, the population of IC50s is split into positive and negative sets (based on the genomic feature). The two sets are denoted IC50_pos and IC50_neg. Then, the signed effect size is computed as

    effect = sign(mean(IC50_pos) - mean(IC50_neg)) * ES(IC50_pos, IC50_neg)

where ES is the effect size function based on the Cohens metric (see gdsctools.stats.cohens()).

In the volcano plot, each drug vs genomic feature association has a p-value. Due to the increasing number of possible tests, we have more chance to pick a significant hit by pure chance. Therefore, p-values are corrected using a multiple testing correction method (e.g., the BH method). The corresponding column is labelled FDR. Significance of associations should therefore be based on the FDR rather than on p-values. In the volcano plot, horizontal dashed lines (red) show several FDR values, and those values are shown on the right y-axis. Note, however, that in this example there are no horizontal lines.
Indeed, the default value of 25% is well above the limits of the figure, telling us that there are no significant hits. Note that the right y-axis (FDR) is inverted, so small FDRs are at the bottom and the maximum value of 100% appears at the top.

Note

P-values reported by the ODOF method need to be corrected using multiple testing correction. This is done in the ODAF and ADAF cases. For more information, please see the gdsctools.stats.MultipleTesting() description.

4.3.3. All Drugs All Features (ADAF)¶

Here we compute the associations across all drugs and all features. In essence, it is the same analysis as the ODAF case but with more tests. In order to reduce the computational time, in the following example we restrict the analysis to the breast tissue using the set_cancer_type() method. This would therefore be a cancer-specific analysis. If all cell lines are kept, this is a PANCAN analysis. The information about tissue is stored in the genomic feature matrix in the column named TISSUE_FACTOR.

from gdsctools import ANOVA, ic50_test
gdsc = ANOVA(ic50_test)
gdsc.set_cancer_type('breast')
results = gdsc.anova_all()
results.volcano()

Warning

anova_all() may take a long time to run (e.g., 10 minutes, 30 minutes) depending on the number of drugs and features. We have buffering in place. If you stop the analysis in the middle, you can call the anova_all() method again and the previous ODAF analyses will be retrieved, starting the analysis where you previously stopped. If this is not what you want, you need to call the reset_buffer() method.

The volcano plot here is the same as in the previous section but with more data points. The output is the same as in the previous section with more associations.

4.4. Learn more¶

If you want to learn more, please follow one of those links:

- About the settings, which also covers how to set some parameters yourself.
- Creating HTML reports from the analysis: HTML report.
- Learn more about the input Data Format and Readers.
- How to reproduce the analyses presented above using the Standalone application.
- Get more examples from IPython Notebooks.
- How to produce Data Packages and learn about their contents.
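For reference, the two statistics behind the volcano plots of section 4.3 — the signed effect size (Cohens metric) and the FDR correction (BH method) — can be sketched in plain Python. These are the textbook definitions, not GDSCTools' own code (which lives in gdsctools.stats):

```python
import math

def cohens_d(pos, neg):
    """Cohen's d with pooled standard deviation (textbook definition).

    Its sign already encodes which population has the larger mean, which is
    the 'signed effect size' idea used in the volcano plots.
    """
    n1, n2 = len(pos), len(neg)
    m1, m2 = sum(pos) / n1, sum(neg) / n2
    v1 = sum((x - m1) ** 2 for x in pos) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in neg) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def bh_fdr(pvalues):
    """Benjamini-Hochberg adjusted p-values (the FDR column's correction)."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for k, i in enumerate(reversed(order)):  # walk from the largest p-value down
        rank = n - k
        running_min = min(running_min, pvalues[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

print(cohens_d([3, 4, 5], [1, 2, 3]))
print(bh_fdr([0.01, 0.04, 0.03, 0.005]))
```

As the BH sketch shows, an association's adjusted p-value (FDR) is never smaller than its raw p-value, which is why significance calls based on the FDR column are more conservative.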
https://gdsctools.readthedocs.io/en/master/anova_partone.html
Sometimes when you are coding, you want to know how long it takes for a particular function to run. This topic is known as profiling or performance tuning. Python has a couple of profilers built into its Standard Library, but for small pieces of code, it's easier to just use Python's timeit module. Thus, timeit will be the focus of this tutorial.

The timeit module uses a platform-specific method to get the most accurate run time. Basically, the timeit module will do a setup once, run the code n number of times and return the time it took to run. Usually it will output a "best of 3" score. Oddly enough, the default number of times it runs the code is 1,000,000 loops. timeit does its timing with time.time() on Linux / Mac and time.clock() on Windows to get the most accurate readings, which is something most people don't think about.

You can run the timeit module from the command line or by importing it. We will look at both use cases.

timeit in the console

Using the timeit module on the command line is quite easy. When you call Python with the "-m" option, you are telling it to look up a module and use it as the main program. The "-s" flag tells the timeit module to run setup once. Then it runs the code for n number of loops 3 times and returns the best average of the 3 runs. For these silly examples, you won't see much difference.

Let's take a quick look at timeit's help so we can learn more about how it works:

C:\Users\mdriscoll> [...]

This tells us all the wonderful flags we can pass to timeit as well as what they do. It also tells us a little about how timeit works under the covers. Let's write a silly function and see if we can time it from the command line:

# simple_func.py
def my_function():
    try:
        1 / 0
    except ZeroDivisionError:
        pass

All this function does is cause an error that is ignored. Yes, it's a dumb example. [...] We import the function into timeit's namespace and then we call timeit.timeit. You will note that we pass a call to the function in quotes, then the setup string.
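Putting the pieces together, here is a self-contained sketch of timing my_function from inside a script. It uses the globals argument (available since Python 3.5) as an alternative to the setup-string approach described above:

```python
import timeit

def my_function():
    try:
        1 / 0
    except ZeroDivisionError:
        pass

# Time 100,000 calls. Passing globals=globals() makes my_function visible
# to timeit without a "from __main__ import my_function" setup string.
elapsed = timeit.timeit("my_function()", globals=globals(), number=100_000)
print("100k calls: %.4f seconds" % elapsed)

# The "best of 3" score mentioned above corresponds to timeit.repeat:
# three independent runs, of which you typically report the minimum.
times = timeit.repeat("my_function()", globals=globals(), repeat=3, number=10_000)
print("best of 3: %.4f seconds" % min(times))
```

Reporting the minimum of the repeats is the usual convention, since higher timings are mostly noise from other processes on the machine.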
And that’s really all there is to it! Wrapping Up Now you know how to use the timeit module. It is really good at timing simple pieces of code. You would normally use it for code that you suspect is taking an inordinate amount of time to run. If you wanted more granular details about what’s going on in your code, then you would want to switch to a profiler. Have fun and happy coding! - Python documentation on timeit - PyMOTW – timeit – Time the execution of small bits of Python code. - Dive Into Python section on timeit - Dream-in-code forum tutorial on timeit
http://www.blog.pythonlibrary.org/2014/01/30/how-to-time-small-pieces-of-python-code-with-timeit/
This part of the Bookshelf app tutorial shows how the sample app stores its persistent data in a PostgreSQL database.

This sample uses a PostgreSQL server, running on a Compute Engine virtual machine instance, to store its persistent data. If you prefer to use a different PostgreSQL server, you can deploy this sample app to Google Cloud Platform (GCP) and configure it to use any PostgreSQL server of your choice.

This page is part of a multi-page tutorial. To start from the beginning and read the setup instructions, go to Ruby Bookshelf app.

Running PostgreSQL on Compute Engine

- Use the PostgreSQL image provided by Bitnami in the Google Cloud Platform Marketplace. Click Launch on Compute Engine. It takes a few minutes to create the instance and deploy PostgreSQL.
- When the deployment is done, make a note of the Admin user and Admin password for your PostgreSQL database.
- Click the instance link to go to the VM instance details. Make a note of the External IP address of your instance.
- To edit the instance, click Edit. In the Network tags field, enter postgres-bitnami, and then click Save.
- Create a firewall rule to allow traffic to PostgreSQL on instances with the postgres-bitnami network tag:

gcloud compute firewall-rules create default-allow-postgresql \
    --allow tcp:5432 \
    --source-ranges 0.0.0.0/0 \
    --target-tags postgres-bitnami \
    --description "Allow access to PostgreSQL from all IPs"

Configuring settings

Go to the getting-started-ruby/2-postgresql directory, and copy the sample database.yml file:

cp config/database.example.yml config/database.yml

To configure your database, edit the config/database.yml file. Set the value of database to bookshelf. Replace the [YOUR_POSTGRES_*] placeholders with the specific values for your PostgreSQL instance and database.

For example, suppose your IPv4 address is 173.194.230.44, your username is postgres, and your password is secret123.
Then the postgresql_settings section of your database.yml file would look like this:

postgresql_settings: &postgresql_settings
  adapter: postgresql
  encoding: unicode
  pool: 5
  username: postgres
  password: secret123
  host: 173.194.230.44
  database: bookshelf

Installing dependencies

In the 2-postgresql directory, enter the following command:

bundle install

Creating a database and tables

Create the database and the required tables:

bundle exec rake db:create
bundle exec rake db:migrate

The rest of this page walks through the sample app's code and explains how it works.

List books

When you visit the app's home page, you are routed to the index action of the BooksController class. This is configured in the config/routes.rb file.

Rails.application.routes.draw do
  # Route root of application to BooksController#index action
  root "books#index"

  # Restful routes for BooksController
  resources :books
end

The BooksController#index action retrieves a list of books from the PostgreSQL database. The app lists at most 10 books on each web page, so the list depends on which page the user is viewing. For example, suppose there are 26 books in the database, and the user is on the third page (/?page=3). In that case, params[:page] is equal to 3, which is assigned to the page_number variable. Then a list of 6 books, starting at offset 20, is retrieved and assigned to @books.

class BooksController < ApplicationController
  PER_PAGE = 10

  def index
    page_number = params[:page] ? params[:page].to_i : 1
    book_offset = PER_PAGE * (page_number - 1)
    @books = Book.limit(PER_PAGE).offset(book_offset)
    @next_page = page_number + 1 if @books.count == PER_PAGE
  end

The Book class is a simple ActiveRecord model that represents an individual book in the books table.

class Book < ActiveRecord::Base
  validates :title, presence: true
end

In the routes.rb file, the resources :books call configures RESTful routes for creating, reading, updating, and deleting books that are routed to the corresponding actions in the BooksController class.
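The index action's paging arithmetic is language-agnostic. A small Python sketch of the same calculation (the Rails code above is authoritative; page_window is a made-up helper name for illustration):

```python
PER_PAGE = 10

def page_window(total_books, page_number, per_page=PER_PAGE):
    """Return (offset, count): which slice of books a 1-based page shows."""
    offset = per_page * (page_number - 1)
    count = max(0, min(per_page, total_books - offset))
    return offset, count

# The tutorial's example: 26 books, third page -> 6 books from offset 20.
print(page_window(26, 3))
```

Running this confirms the example in the text: page 3 of 26 books starts at offset 20 and shows the remaining 6.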
After BooksController#index retrieves a list of books, the embedded Ruby code in the books/index.html.erb file renders the list.

<% @books.each do |book| %>
  <div class="media">
    <%= link_to book_path(book) do %>
      <div class="media-body">
        <h4><%= book.title %></h4>
        <p><%= book.author %></p>
      </div>
    <% end %>
  </div>
<% end %>

<% if @next_page %>
  <nav>
    <ul class="pager">
      <li><%= link_to "More", books_path(page: @next_page) %></li>
    </ul>
  </nav>
<% end %>

Display book details

When you click an individual book on the web page, the BooksController#show action retrieves the book, specified by its ID, from the books table.

def show
  @book = Book.find params[:id]
end

Then the embedded Ruby code in the show.html.erb file displays the book's details.

<div class="media">
  <div class="media-body">
    <h4><%= @book.title %> | <small><%= @book.published_on %></small></h4>
    <h5>By <%= @book.author || "unknown" %></h5>
    <p><%= @book.description %></p>
  </div>
</div>

Create books

When you click Add book on the web page, the BooksController#new action creates a new book. The embedded Ruby code in the new.html.erb file points to _form.html.erb, which displays the form for adding a new book.

<%= form_for @book do |f| %>
  <div class="form-group">
    <%= f.label :title %>
    <%= f.text_field :title %>
  </div>
  <div class="form-group">
    <%= f.label :author %>
    <%= f.text_field :author %>
  </div>
  <div class="form-group">
    <%= f.label :published_on, "Date Published" %>
    <%= f.date_field :published_on %>
  </div>
  <div class="form-group">
    <%= f.label :description %>
    <%= f.text_area :description %>
  </div>
  <button class="btn btn-success" type="submit">Save</button>
<% end %>

When you submit the form, the BooksController#create action saves the book in the database. If the new book is saved successfully, the book's page is displayed. Otherwise, the form is displayed again along with error messages. The book_params method uses strong parameters to specify which form fields are allowed.
In this case, only the book title, author, publication date, and description are allowed.

def create
  @book = Book.new book_params
  if @book.save
    flash[:success] = "Added Book"
    redirect_to book_path(@book)
  else
    render :new
  end
end

private

def book_params
  params.require(:book).permit(:title, :author, :published_on, :description)
end

Edit books

When you click Edit book on the web page, the BooksController#edit action retrieves the book from the database. The embedded Ruby code in the edit.html.erb file points to _form.html.erb, which displays the form for editing the book.

def update
  @book = Book.find params[:id]
  if @book.update book_params
    flash[:success] = "Updated Book"
    redirect_to book_path(@book)
  else
    render :edit
  end
end

When you submit the form, the BooksController#update action saves the book in the database. If the book is saved successfully, the book's page is displayed. Otherwise, the form is displayed again along with error messages.

Delete books

When you click Delete Book on the web page, the BooksController#destroy action deletes the book from the database and then displays the list of books.

def destroy
  @book = Book.find params[:id]
  @book.destroy
  redirect_to books_path
end
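The strong-parameters whitelist used by book_params above boils down to filtering a hash by allowed keys. A plain-Python analogue of the idea (not Rails code; permit here is a made-up helper, and Rails' real implementation also handles nesting and raises on a missing :book key):

```python
def permit(params, allowed):
    """Keep only whitelisted keys -- the idea behind Rails' strong parameters."""
    return {key: value for key, value in params.items() if key in allowed}

# A submitted form with an extra field an attacker slipped in.
submitted = {"title": "Moby Dick", "author": "Melville", "admin": True}
print(permit(submitted, {"title", "author", "published_on", "description"}))
```

The unexpected "admin" field is silently dropped, which is exactly the mass-assignment protection strong parameters provide.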
https://cloud.google.com/ruby/getting-started/deploy-postgres
1. Overview¶

Welcome to the Integrating the Firebase App Distribution SDK in your iOS app codelab. In this codelab, you'll add the App Distribution SDK to your app so you can alert live testers in-app when new builds are available.

What you'll learn

- How to integrate the App Distribution iOS SDK into your app
- How to alert a tester when there is a new pre-release build ready to install
- How to customize the SDK to fit your unique testing needs

What you'll need

- Xcode 12 (or higher)
- CocoaPods 1.9.1 (or higher)
- An Apple Developer account for Ad Hoc distribution
- A physical iOS device for testing. (The iOS simulator app will work for most of the codelab, but simulators cannot download releases.)

How will you use this tutorial?

How would you rate your experience with building iOS apps?

2. Create Firebase console project

Add new Firebase project

- In the Firebase console, click Add Project, and then name your project "Firebase Codelab." You don't need to enable Google Analytics for this project.
- Click Create project.

Add App to Firebase

Follow the documentation to register your app with Firebase. Use "com.google.firebase.codelab.AppDistribution.<your_name>" as the iOS Bundle ID. When prompted, download your project's GoogleService-Info.plist file. You will need this later.

3. Get the Sample Project

Download the Code

Begin by cloning the sample project.

git clone git@github.com:googlecodelabs/firebase-appdistribution-ios.git

If you don't have git installed, you can also download the sample project from its GitHub page or by clicking on this link.

Download dependencies and open the project in Xcode

- Go to the start directory and open the Podfile:

cd firebase-appdistribution-ios/start
open Podfile

- Add the following line to your Podfile:

Podfile

pod 'Firebase/AppDistribution'

- Run pod install in the project directory and open the project in Xcode:

pod install --repo-update
xed .

Update Bundle Identifier to match your Firebase app

In the left menu, double click on AppDistributionExample.
Then, locate the General tab, and change the bundle identifier to match the bundle identifier of your Firebase app, which can be found in project settings. This should be "com.google.firebase.codelab.AppDistribution.<your_name>".

Add Firebase to your app

Locate the GoogleService-Info.plist file you downloaded earlier in your file system, and drag it to the root of the Xcode project. You can also download this file any time from your project's settings page.

In your AppDistributionExample/AppDelegate.swift file, import Firebase at the top of the file:

AppDistributionExample/AppDelegate.swift

import Firebase

And in the didFinishLaunchingWithOptions method, add a call to configure Firebase:

AppDistributionExample/AppDelegate.swift

FirebaseApp.configure()

4. Set up in-app new build alerts with the App Distribution SDK

In this step, you will add the Firebase App Distribution SDK to your app. [...] You will need to log in with the same account and select the correct project from the drop-down menu at the top. We will start with the basic alert configuration.

You can use checkForUpdate to display a pre-built enable-alerts dialogue to testers who haven't yet enabled alerts, and then check if a new build is available. Testers enable alerts by signing into an account that has access to the app in App Distribution. When called, the method enacts the following sequence:

- Checks if a tester has enabled alerts. If not, displays a pre-built dialogue that prompts them to sign into App Distribution with their Google account.

Enabling alerts is a one-time process on the test device and persists across updates of your app. Alerts remain enabled on the test device until either the app is uninstalled, or until the signOutTester method is called. See the method's reference documentation (Swift or Objective-C) for more information.

You can include checkForUpdate at any point in your app. For example, you can prompt your testers to install newly available builds at startup by including checkForUpdate in the viewDidAppear of the UIViewController.
For example, you can prompt your testers to install newly available builds at startup by including checkForUpdate in the viewDidAppear of the UIViewController. In your AppDistributionViewController.swift file import Firebase at the top of the file AppDistributionViewController.swift import Firebase Open AppDistributionExample/AppDistributionViewController.swift, and copy lines into the viewDidAppear method like this: AppDistributionViewController.swift override func viewDidAppear(_ animated: Bool) { checkForUpdate() } Now let's implement the checkForUpdate() method. AppDistributionViewController.swift private func checkForUpdate() { AppDistribution.appDistribution().checkForUpdate(completion: { [self] release, error in var uiAlert: UIAlertController if error != nil { uiAlert = UIAlertController(title: "Error", message: "Error Checking for update! \(error?.localizedDescription ?? "")", preferredStyle: .alert) } else if release == nil { uiAlert = UIAlertController(title: "Check for Update", message: "No releases found!!", preferredStyle: .alert) uiAlert.addAction(UIAlertAction(title: "Ok", style: UIAlertAction.Style.default)) } else { guard let release = release else { return } let title = "New Version Available" let message = "Version \(release.displayVersion)(\(release.buildVersion)) is available.".present(uiAlert, animated: true, completion: nil) }) } 5. Build and invite testers to download your app In this step, you will build your app and test your implementation by distributing the build to testers using the Firebase console. Build your app When you're ready to distribute a pre-release version of your app to testers, select "Any iOS Device (arm64)" as build destination, and Product->Archive. Once the archive is created, build a signed distribution with Development distribution profile. When the build completes, it saves an IPA file and some log files in the folder you specify. You distribute the IPA file to your testers in the following steps. 
If you run into issues building your app, see Apple's codesigning docs for troubleshooting steps.

Distribute your app to testers

To distribute your app to testers, upload the IPA file using the Firebase console:

- Open the App Distribution page of the Firebase console. Select your Firebase project when prompted.
- Press Get Started.
- On the Releases page, select the app you want to distribute from the drop-down menu.
- Drag your app's IPA file to the console to upload it.
- When the upload completes, specify the tester groups and individual testers you want to receive the build. (Add your email to receive the invite.) Then, add release notes for the build. See Manage testers for more on creating tester groups.
- Click Distribute to make the build available to testers.

Add yourself as a tester

You'll need to first register your test device to download and test an Ad Hoc release.

- On your iOS test device, open the email sent from Firebase App Distribution and tap the Get Started link. Make sure to open the link in Safari.
- In the Firebase App Distribution tester web app that appears, sign in with your Google account and tap Accept invitation. Now, you'll see the release you've been invited to.
- Tap Register device to share your UDID with Firebase so you can update your app's provisioning profile later.
- Follow the instructions, and go to settings to download the profile and share your UDID.

Now, when you go back into App Distribution, the release is marked as "Device registered". The tester's UDID has now been shared with the developer. It's now up to the developer to build the tester a new version of the app.

View tester information in the console

Back in the developer's view in the Firebase console, the tester will show up as "Accepted" under the release. You'll then also get an email as the developer if the device they are using isn't already included in the provisioning profile. This will notify you of the new UDID you need to add.
You also have the option of exporting all the UDIDs as a text file.

- To export all UDIDs, open the Testers & Groups tab.
- Click "Export Apple UDIDs". The file should contain the UDID of your test device:

Device ID     Device Name                                             Device Platform
1234567890    tester.app.distribtuion@gmail.com - iPhone SE 2nd Gen   ios

Download the release from the test device

Now the release has the test device's UDID, so the test device can download and install the app. App Distribution sends an email to testers when their UDID is added to a new release.

- On your test device, open the new-release email from App Distribution and install the release.
- When the app starts, it'll ask you to enable new build alerts. Select "Turn on".
- Then it'll ask you to sign in. Click "Continue".
- You'll be taken back to the app. You won't have to log in or accept alerts next time you run the app.

Distribute an update to your testers

- Update your build number to "2".
- Select "Any iOS Device (arm64)" as the build destination, and choose Product -> Archive. Once the archive is generated, build a signed distribution with the Development distribution profile.
- When the build completes, it saves an IPA file and some log files in the folder you specify. Upload this new IPA in your Firebase console, add your email as a tester again, and click Distribute.

Test build alerts

- Make sure you closed the app if it was open. Restart the app.
- When the app restarts, you should receive a "New Version Available" alert.
- Click "Update" to receive the latest version.
- Click "Install" on the next screen.
- Congratulations! You were able to update your app with the built-in alerts.

6. Customize the tester experience

Start by disabling the automatic update check in viewDidAppear by commenting out the checkForUpdate() call.

AppDistributionViewController.swift

override func viewDidAppear(_ animated: Bool) {
  // checkForUpdate()
}

Instead, let's call checkForUpdate() in checkForUpdateButtonClicked().
@objc func checkForUpdateButtonClicked() {
  checkForUpdate()
}

Now, let's implement our signInOutButtonClicked() method, which will sign in the user if they are signed out, or sign out the user if they are already signed in.

AppDistributionViewController.swift

@objc func signInOutButtonClicked() {
  if isTesterSignedIn() {
    AppDistribution.appDistribution().signOutTester()
    self.configureCheckForUpdateButton()
    self.configureSignInSignOutButton()
    self.configureSignInStatus()
  } else {
    AppDistribution.appDistribution().signInTester(completion: { error in
      if error == nil {
        self.configureCheckForUpdateButton()
        self.configureSignInSignOutButton()
        self.configureSignInStatus()
      } else {
        let uiAlert = UIAlertController(title: "Custom:Error", message: "Error during tester sign in! \(error?.localizedDescription ?? "")", preferredStyle: .alert)
        uiAlert.addAction(UIAlertAction(title: "Ok", style: UIAlertAction.Style.default) { _ in })
        self.present(uiAlert, animated: true, completion: nil)
      }
    })
  }
}

Finally, let's implement the isTesterSignedIn method.

AppDistributionViewController.swift

private func isTesterSignedIn() -> Bool {
  return AppDistribution.appDistribution().isTesterSignedIn
}

Build and test your implementation.

7. Congratulations!

You have built the "in-app alerts display" feature into an app using the Firebase App Distribution iOS SDK.

What we've covered

- Firebase App Distribution
- Firebase App Distribution New Alerts iOS SDK
https://firebase.google.com/codelabs/appdistribution-ios?authuser=0&hl=bg
Adding the cast just doesn't seem right. We implicitly cast void* to other things all the time. Why does *this* one break things?

I just don't buy the need to do the cast. Really... void* should be automatically castable to anything. I'm guessing something more subtle is occurring, and the cast is simply hiding that subtlety.

Cheers,
-g

On Sat, May 26, 2001 at 06:12:38PM -0400, Christian Gross wrote:
> On Sat, 26 May 2001 14:19:25 -0500, you wrote:
>
> >I believe this would break type safety. These are very carefully constructed to
> >ensure that the proper hook fn is registered for the appropriate hook.
> >
> >I'll take a look at your patch later today and see what (if) it breaks anything,
> >or if we were simply missing the "C" namespace wrappers.
>
> The problem is that apr_array_push returns a void pointer. And since
> pHook is a predefined data type there is a type problem. The extern
> "C" wrappers only make the function behave using C linkage. The code
> within is still treated as C++. I tried it and the same type cast
> error still occurred.
>
> I think the only way to solve this is to use a type cast. But correct
> me if I am wrong.
>
> Christian
>
> >----- Original Message -----
> >From: "Christian Gross" <ChristianGross@yahoo.de>
> >Sent: Saturday, May 26, 2001 1:33 PM
> >
> >
> >I was playing around with hooks and noticed that there is a problem
> >with using hooks on C++. The problem was that C++ does not allow a
> >type conversion without a type cast.
Following is the fix
> >
> >---------------------------------------------------------------
> >--- c:/httpd2_16/srclib/apr-util/include/apr_hooks.h  Wed Apr 4 01:35:46 2001
> >+++ c:/httpd/srclib/apr-util/include/apr_hooks.h  Sat May 26 14:21:58 2001
> >@@ -98,7 +98,7 @@
> >
> >  _hooks.link_##name=apr_array_make(apr_global_hook_pool,1,sizeof(ns##_LINK_##name##_t)); \
> >  apr_hook_sort_register(#name,&_hooks.link_##name); \
> >  } \
> >- pHook=apr_array_push(_hooks.link_##name); \
> >+ pHook=(ns##_LINK_##name##_t *)apr_array_push(_hooks.link_##name); \
> >  pHook->pFunc=pf; \
> >  pHook->aszPredecessors=aszPre; \
> >  pHook->aszSuccessors=aszSucc; \

--
Greg Stein,
http://mail-archives.apache.org/mod_mbox/apr-dev/200105.mbox/%3C20010527001233.N5402@lyra.org%3E
Most user settings are kept in either the project or the workspace file. Unfortunately, this is not true for building a function tree (Options / Generate Function Tree): after restarting CVI the previous entries are lost. Actually this is an old issue, but maybe not too old to be improved... So I suggest saving the information about Instrument name, Function prefix, Default qualifier, and Output function tree in the appropriate configuration file (prj or cws). Thanks

When building a project consisting of many files it would be much more convenient if the Build Output would jump to the line with the first file showing an error (or, if no error occurred, to the first warning). Right now, if there is a build error I have to scroll through the (possibly long) list of files to see which file produced the error; only then can I click on the error message to have the IDE highlight the problematic line. Thanks

NI is not a C/C++ editor-debugger company, and it will never be able to invest the manpower needed to get there. NI's strengths are its instrument UIs, its libraries, and its visual application UI pieces. The LabWindows/CVI tool looks and feels like tools from the mid 90's (i.e. like an old Borland C editor, but even less featured). It lacks the toolset found in Visual Studio, NetBeans, and Eclipse. And it will always be behind. The Verigy 93k tester was like this several years ago. They wrote their own C/C++ editor, and it was at a mid 80's level. When a team was asked to rewrite the UI and bring it up to date, they made a novel choice (they recognized that they were not a UI platform / editor company), and they moved their product under Eclipse. Teradyne Flex did something similar a year or two later, moving under Excel and Visual Studio. The thing is this: both companies realized that they could make more money focusing on their real strength. They added libraries and APIs to work in the platform's framework, and changed/adapted the platform framework to work for them. I.e.
Teradyne's Flex test tool does not say "Excel/Visual Studio"; it says it is a Teradyne product based on MS Excel and VS. And they have adapted the platform to their needs, adding on the extra windows/UIs/... to meet their needs. Same with the Verigy 93K. In Teradyne's case, they went back to the drawing board. So, we will ignore this (even with their success). In Verigy's case, all their existing APIs worked in the new platform, and the user didn't need to change anything when they upgraded. But suddenly the editor and debugger were up to date, with the latest and greatest features. It was a huge change overnight. LabWindows really should make a shift to Eclipse. Keep your old legacy stuff at first, but working under Eclipse. Add in "Views" and "Tools" to supplement what Eclipse doesn't give you for free. And remove unwanted or confusing plugins from the Eclipse base. (This is what Advantest did.) Leave in features that make Eclipse great, like the error view, and the ability to have several "perspectives". And really focus the manpower into making a product that will blow the others out of the water. NI has what it takes to make great instrument editing/debugging windows in Eclipse. But NI doesn't have the thousands of people and millions of man hours required to make an editor/debugger that will compare to the Eclipses and Visual Studios of the world. As a business they should focus on what will make them a differentiator, and reuse what is accepted and common. Anyway, my 2 cents on how you could really improve LabWindows in a few short months. (Note: Verigy spent all of 9 months and 9 engineers on their C/C++ integration into Eclipse... I know... I was there at the time.) If you took the LabWindows team, and a year or two... Imagine how much better of a job you could do.

This X just closes out of the tab that is on top. Pretty much every other program with tabs has the X on the tab you're closing out of.
The current placement makes me hesitate every time, because it feels like you're X-ing out of the entire code-viewing pane, not just the single file you want to close. It's also not consistent with the rest of the environment. For example, in the pane on the bottom in the screenshot above, "Threads" and "Watch" look like two tabs, but clicking the X in that pane causes the entire pane to disappear rather than just closing the tab that is on top.

Hello, building upon my earlier (but difficult to implement) suggestion and the forum discussion on event data here, I can provide a hopefully improved suggestion. The idea: provide eventData1 and eventData2 information. For example, eventData1 could tell about the numeric value, and eventData2 could tell about the increment/decrement arrow. Since so far no event data are provided, this suggestion should not break backward compatibility.

Hi, the CVI runtime engine calls the Windows API function SetProcessDPIAware(), which tells Windows that the application is DPI aware in Windows Vista and later. This seems to be forced upon all applications built with CVI, whether they are actually DPI aware or not. Most applications built in CVI using the default tools are not going to be DPI aware out of the box, and setting Windows to another DPI setting than what the programmer used to create the UI will cause many graphical glitches and possibly make the application unusable. The purpose of this request is to suggest to NI that the CVI Runtime Engine not call SetProcessDPIAware(), so that the programmer can handle (or not handle) DPI scaling as they see fit. If the programmer does nothing, the application will then, by default, be scaled using Windows DPI virtualization. This is not optimal, but it would leave the application usable and looking like how the programmer intended. This is per this discussion here: Thanks.
As discussed here, distributing the code of

#include <ansi_c.h>

int main ( int argc, char *argv [] )
{
    printf ( "%s", "Hello world" );
    return 0;
}

generated in CVI2013 results in a distribution kit of 74 MB minimum... Using the NI default settings results in 219 MB... Yes, I do have TB drives, but I dislike bloated software.

Allow you to select multiple lines and then go to Edit >> Block Comment/Uncomment to be able to comment or uncomment multiple sections of code at once.

At present CVI is missing a serious report printing facility that permits creating flexible, professional and good-looking reports. A quick search in the CVI forum shows that periodically somebody posts questions about reporting, but the available instruments at the moment are not satisfactory in my experience. As far as I can tell, a good reporting instrument:

I want to be able to do the following: If you haven't written any code in the callback already, you can just change the default events and re-generate (replace) the control callback. However, if you have already written code for one event case, the only way I can find to add an event case is to do it manually. I go to Code >> Preferences >> Default Events or use the Operate tool to look for the constant name of the event that I am interested in, then I go back to my code and manually type out "case EVENT_CONSTANT_NAME: break;" with the name of the event and hope I remember it correctly and spell it right. CVI is all about minimizing user errors and reducing development time by, you know, not making you type things out yourself, so I think this functionality would be a useful addition.

(Coming from this forum discussion) If you double-click on a UI browser element you'll go to the corresponding panel or control in the editor. That's good! There doesn't seem to be a way to go the opposite direction (i.e.
from a control in the user interface editor to the corresponding item in the user interface browser tree): this could be useful in the case of complex UIR files with several panels and controls, especially if you have more than one control array in it. So I suggest adding two options to the control context menu, to locate the control in the UI browser and to locate it in a control array if it is included in any of them. I'm thinking of something like this: It would be handy if these new items could be assigned a shortcut key too: Ctrl+U and Ctrl+R could be a good choice (presently Ctrl+U is not used in the UIR editor and Ctrl+R is not active at design time).

Right now it is possible to set the color, and since CVI 2013 also the line style, of all grid lines collectively. I would like to distinguish between major grid lines and minor grid lines, e.g., draw major grid lines with dashed lines / light grey, and minor grid lines with dotted lines / dark grey. Thanks!
In CVI 2013 the array display has changed (for the worse, in my opinion). There are two minor inconveniences and one acute shortage I would like to see improved (hopefully prior to CVI2020 ) First, the right click context menu: If I want to see values of a numerical array, it offers a 'Graphical Array View' but no 'Array View', so one first has to chose 'View Variable Value' and then 'Array Display' - maybe one could save one step and already provide the 'Array Display' in the first case...? Second, the new Array View table still is very slow, not extremely slow as prior to SP1 but still very slow... Most importantly, at present it is impossible to debug large arrays, large meaning arrays with more than 10000 elements. The current implementation requires to select a slice of data - but this makes it impossible to check or compare say array indices 5, 10005, and 20005... Of course I agree that there is no need to simultaneously see more than 10000 data values - but why not have a table with say 100 rows that can be turned over, e.g. displaying either elements 1-100, 101-200, ... this way one could access and inspect all array values... Hello, because I had installed CVI2010 on a brand new Windows 7 machine, I was curios to find out about all the service processes running on the system. It seems that there are quite a few NI services that start after log-on. Some of them seem superfluous, such as the Lookout Citadel service (no LabVIEW, no Lookout installed), but due to the lack of any information I did not bother trying to stop them Suggestions: 1) NI should critically review the services and only start the services that are absolutely needed. 2) Services that are optional might be selected by a checkbox during installation or from the Options / Environment setting 3) NI should provide some documentation / explanation of each service and why it is needed. Thanks! This issue is that old that we all forgot about it... 
But this thread brought it back to my attention and I'd like to suggest two improvements: Setting the width or the height of a control does not always succeed because there are limitations concerning the minimum and maximum size. Suggestion 1: If a function fails it should return a warning. However, calling e.g. status = SetCtrlAttribute ( panel_handle, PANEL_RING, ATTR_WIDTH, 5 ) returns success (0) even though the width of the ring control will be much larger than 5 pixels. For checkboxes, the situation is even worse because checkboxes are drawn right aligned to a transparent rectangular frame. So calling status = SetCtrlAttribute ( panel_handle, PANEL_CHECKBOX, ATTR_WIDTH, 500 ) will result in a transparent drawing rectangle of width of 500 but with the checkbox size remaining at the default size. Since the checkbox is drawn right aligned to this transparent frame the checkbox eventually may disappear from the panel (setting the width to say 10000 will not draw anything). Suggestion 2: Complement the documentation, the idea is given below: Constant: ATTR_WIDTH Data Type: int Description: The width of the control body in pixels. Valid Range: 0 to 32767 Control Type Restrictions: Not valid for controls of type CTRL_VERTICAL_SPLITTER and CTRL_VERTICAL_SPLITTER_LS For checkboxes, the minimum size is ... pixels, and the maximum size is ... pixels. For ring controls, the minimum size is ... pixels. ... LabWindows/CVI Compatibility: LabWindows/CVI 3.0 and later Control Types: All If the thread that FileSelectPopup (and similar) is accessed is multithreaded, wacky things happen. The programmer can fix this by creating a new thread that is itself not multithreaded and pass information back to the current threrad. It would be helpful if the current functions were designed to default to create such a thread, return the value(s), and garbage collect removing the programmer from the loop. 
In the case of MultiFileSelectPopup, it is not clear to me what would be the best practice given the unknown number of results. I guess one could assume a limit for the number of results that may change as the Windows API does. Other possible solutions can include an added parameter (variable switch) or with a whole new function. I could see the default case as an effective solution for legacy code that is partially refactored for multithreaded performance. Hi, there has been the valuable suggestion of a "Picture and Text" button allowing more modern buttons. For all those focusing on programming instead of UI design it would be also nice if CVI could provide more default buttons ready to use as some examples shown in the image below (taken from the NI community). As they seem to be already available in LabVIEW it shouldn't be much effort for NI to adapt them to CVI... - hopefully.
http://forums.ni.com/t5/LabWindows-CVI-Idea-Exchange/idb-p/cviideas
strftime, wcsftime - Converts a date and time to a string or wide-character string

LIBRARY

Standard C Library (libc.so, libc.a)

SYNOPSIS

#include <time.h>

size_t strftime(
        char *s,
        size_t maxsize,
        const char *format,
        const struct tm *timeptr);

#include <wchar.h>

size_t wcsftime(
        wchar_t *wcs,
        size_t maxsize,
        const wchar_t *format,
        const struct tm *timeptr);

For the wcsftime() function, the XPG4 standard specifies the format parameter as type const char * rather than const wchar_t *, as specified by the latest version of the ISO C standard. Both type declarations are supported.

STANDARDS

Interfaces documented on this reference page conform to industry standards as follows: strftime(), wcsftime(): ISO C, XPG4, XPG4-UNIX. Refer to the standards(5) reference page for more information about industry standards and associated tags.

PARAMETERS

s
    Points to the array containing the output date and time string.
maxsize
    Specifies the maximum number of bytes or wide characters to be written to the array pointed to by the s or wcs parameter.
format
    Points to a sequence of format codes that specify the format of the date and time to be written to the output string or wide-character string. See the DESCRIPTION section for more information.
timeptr
    Points to a type tm structure that contains broken-down time information.
wcs
    Points to the wide-character array containing the output date and time string.

DESCRIPTION

The strftime() function places characters into the array pointed to by the s parameter as controlled by the string pointed to by the format parameter. The string pointed to by the format parameter is a sequence of characters. Depending on the locale setting, the characters may be single-byte or multibyte characters. Local time zone information is used as though the strftime() function called the tzset() function. Time information used in this subroutine is fetched from space containing type tm structure data, which is defined in the time.h include file.
The type tm structure must contain the time information used by this subroutine to construct the time and date string. The format string consists of characters that represent zero or more conversion specifications and ordinary characters that represent the date and time values and null string terminator. A conversion specification consists of a % (percent sign) character followed by a character that determines how the conversion specification constructs the formatted string. All ordinary characters (including the terminating null character) are copied unchanged into the s array. When copying between objects that overlap, behavior of this function is undefined. No more than the number of bytes specified by the maxsize parameter are written to the array (including the terminating null byte). Each conversion specification is replaced by the appropriate characters as described in the following list. The appropriate characters are determined by the LC_TIME category of the current locale and by values specified by the type tm structure pointed to by the timeptr parameter. The wcsftime() function formats the data in the timeptr parameter according to the specification contained in the format parameter and places the resulting wide-character string into the wcs parameter. No more than the number of wide characters specified by the maxsize parameter are written to the array (including the terminating null wide character). The wcsftime() function behaves as if the character string generated by the strftime() function is passed to the mbstowcs() function as the character-string parameter and the mbstowcs function places the result in the wcs parameter of the wcsftime() function, up to the limit of wide-character codes specified by the maxsize parameter. Only the wchar.h include file needs to be specified for the wcsftime() function. These functions use the local timezone information. 
The format parameter consists of a series of zero or more conversion specifiers and ordinary characters. Each conversion specification starts with a % (percent sign) and ends with a conversion-code character that specifies the conversion format. The strftime() function and the version of the wcsftime() function that conforms to XPG4 replace the conversion specification with the appropriately formatted date or time value. Ordinary characters are written to the output buffer unchanged. [ISO C] For wcsftime(), each conversion specification starts with a % (percent sign) and ends with a conversion-code wide character that specifies the conversion format. The function replaces the conversion specification with the appropriately formatted date or time value. Ordinary wide characters in the format are written to the output buffer unchanged.

The format parameter has the following syntax:

[ordinary-text] [%[[-|0]width] [.precision] format-code [ordinary-text]]...

ordinary-text
    Text that is copied to the output parameter with no changes.
width
    [Digital] A decimal digit string that specifies the minimum field width. If the width of the item equals or exceeds the minimum field width, the minimum is ignored. If the width of the item is less than the minimum field width, the function justifies and pads the item. The optional - (minus sign) or 0 (zero digit) control the justification and padding as follows:
    (no flag)
        Item is right justified and spaces are added to the beginning of the item to fill the minimum width.
    -
        Item is left justified and spaces are added to the end of the item to fill the minimum width.
    0
        Item is right justified and zeros are added to the beginning of the item to fill the minimum width.
precision
    [Digital] A decimal string that specifies the minimum number of digits to appear for the d, H, I, j, m, M, o, S, U, w, W, y, and Y conversion formats and the maximum number of characters to be used from the a, A, b, B, c, D, E, h, n, N, p, r, t, T, x, X, Z, and % conversion formats.
When a conversion-code character or conversion-code wide character is not from the preceding list, the behavior of these functions is undefined.

EXAMPLES

The following example uses strftime() to display the current date:

#include <time.h>
#include <locale.h>
#include <stdio.h>

#define SLENGTH 80

int main(void)
{
    char nowstr[SLENGTH];
    time_t nowbin;
    const struct tm *nowstruct;

    (void)setlocale(LC_ALL, "");

    if (time(&nowbin) == (time_t)-1)
        printf("Could not get time of day from time()\n");

    nowstruct = localtime(&nowbin);

    if (strftime(nowstr, SLENGTH, "%A %x", nowstruct) == (size_t)0)
        printf("Could not get string from strftime()\n");

    printf("Today's date is %s\n", nowstr);
    return 0;
}

NOTES

The %S seconds field can contain a value up to 61 seconds rather than up to 59 seconds to allow leap seconds that are sometimes added to years to keep clocks in correspondence with the solar year.

RETURN VALUES

The strftime() function returns the number of bytes written into the array pointed to by the s parameter when the total number of resulting bytes, including the terminating null byte, is not more than the value of the maxsize parameter. The returned value does not count the terminating null byte in the number of bytes written into the array. Otherwise, a value of 0 cast to size_t is returned and the contents of the array are undefined.

The wcsftime() function returns the number of wide characters written into the array pointed to by the wcs parameter when the total number of resulting wide characters, including the terminating null wide character, is not more than the value of the maxsize parameter. The returned value does not count the terminating null wide character in the number of wide characters written into the array. Otherwise, a value of 0 cast to size_t is returned and the contents of the array are undefined.

RELATED INFORMATION

Functions: ctime(3), mbstowcs(3), setlocale(3), strptime(3)
Standards: standards(5)
http://backdrift.org/man/tru64/man3/wcsftime.3.html
I have two .py files (not in the library) in the same directory where one imports the other. I'd like to run doctests, but the local directory does not seem to be in the search path. Is there any way of doing this apart from fiddling around with sys.path in the .py files themselves?

Minimal example:

b.py
====
"""
EXAMPLES::

    sage: bar()
    Bar
"""
def bar():
    print "Bar"
====

a.py
====
"""
EXAMPLES::

    sage: foo()
    Foo
    sage: bar()
    Bar
"""
from b import bar

def foo():
    print "Foo"
====

$ sage -t .
too many failed tests, not using stored timings
Running doctests with ID 2016-09-20-13-38-33-1ca61f3e.
Using --optional=ccache,mpir,python2,sage
Doctesting 2 files.
sage -t a.py
ImportError in doctesting framework
**********************************************************************
Traceback (most recent call last):
  File "/local/sage/sage-7.3/local/lib/python2.7/site-packages/sage/doctest/forker.py", line 2128, in __call__
    doctests, extras = self.source.create_doctests(sage_namespace)
  File "/local/sage/sage-7.3/local/lib/python2.7/site-packages/sage/doctest/sources.py", line 670, in create_doctests
    load(filename, namespace) # errors raised here will be caught in DocTestTask
  File "/local/sage/sage-7.3/local/lib/python2.7/site-packages/sage/repl/load.py", line 271, in load
    exec(code, globals)
  File "./a.py", line 13, in <module>
    from b import bar
ImportError: No module named b
sage -t b.py
    [1 test, 0.00 s]
----------------------------------------------------------------------
sage -t a.py  # ImportError in doctesting framework
----------------------------------------------------------------------
Total time for all tests: 0.0 seconds
    cpu time: 0.0 seconds
    cumulative wall time: 0.0 seconds
====

When I add

import sys
sys.path.append(".")

to a.py, it works; but I'd prefer not to have to modify a.py.

Thank you,
Clemens
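The mechanics behind the failure can be demonstrated outside Sage with plain Python: resolution of `import b` is governed by sys.path, and the doctest runner does not put the files' own directory on it. This sketch (directory and module names are illustrative only, and it uses Python 3 syntax rather than the Python 2 of the thread) shows that appending the directory is what makes the sibling import resolve, which is exactly what the sys.path.append(".") workaround does:

```python
import os
import sys
import tempfile

# Build a throwaway directory containing a sibling module 'b',
# mimicking the b.py from the thread.
d = tempfile.mkdtemp()
with open(os.path.join(d, "b.py"), "w") as f:
    f.write("def bar():\n    return 'Bar'\n")

# The directory is not on sys.path, so the sibling import fails...
try:
    import b
    found_before = True
except ImportError:
    found_before = False

# ...and succeeds once the directory is appended.
sys.path.append(d)
sys.modules.pop("b", None)  # make sure the next import resolves afresh
import b

print(found_before, b.bar())
```

A less invasive alternative to editing a.py itself is to put the directory on the path from outside the file, e.g. via the PYTHONPATH environment variable, since that feeds the same sys.path mechanism.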
https://www.mail-archive.com/sage-devel@googlegroups.com/msg86694.html
SYNOPSIS

#include <muroar.h>

muroar_t muroar_stream (muroar_t fh, int dir, int * stream, int codec, int rate, int channels, int bits);

DESCRIPTION

This function connects a stream to a sound server supporting the RoarAudio protocol. It takes a connected control connection created with roar_connect(3) and converts it into a connected stream. The socket can no longer be used as a control connection.

PARAMETERS

fh
    The connected control connection.
dir
    The stream direction for the new stream. For playback of a waveform stream (PCM data) this is MUROAR_PLAY_WAVE. For all possible values see the official muRoar manual.
stream
    This is a pointer to an integer muRoar stores the stream ID in. If your application does not need to know the stream ID of the new stream, this can be set to NULL.
codec
    This is the codec to be used for the new stream. For signed PCM in host byte order use MUROAR_CODEC_PCM or MUROAR_CODEC_PCM_S. For unsigned PCM use MUROAR_CODEC_PCM_U. There are a lot of other codecs defined. However, using a codec not supported by the server will result in failure of this call. For all possible values see the official muRoar manual.
rate
    This is the sample/frame rate the new stream will use. For streams for which this setting does not make any sense, set this to zero.
channels
    This is the number of channels for the new stream. For streams for which this setting does not make any sense, set this to zero.
bits
    This is the number of bits per sample to be used by the data. For streams for which this setting does not make any sense, set this to zero.

RETURN VALUE

On success this call returns the new stream I/O handle. This may be the same handle as the control connection, or a new one, in which case the control connection is closed. On error, MUROAR_HANDLE_INVALID is returned.

BUGS

On failure there is no way to tell what went wrong.

HISTORY

This function first appeared in muRoar version 0.1beta0.
http://manpages.org/muroar_stream/3
Cotswold Homes
Cotswold-Homes.com
The Property & Lifestyle Magazine for the North Cotswolds
WINTER Edition 2014 - Complimentary Copy

AGATHA RAISIN ON SCREEN: M.C. BEATON'S DETECTIVE GETS TELEVISED
KIRSTIE ALLSOPP: Her Crusade Against Cancer
THE STORY OF SHEEP: THE COTSWOLDS' GOLDEN FLEECE
YOUR COTSWOLD OFFERS: Shop Local and Save with Our Privilege Card
MICHAEL CAINES: A Chef's Life Story
AUTHOR SOFKA ZINOVIEFF: LIFE WITH 'THE MAD BOY'
COTSWOLD CALENDAR: Christmas & New Year Events
HOT PROPERTY: Beautiful Cotswold Homes

Cotswold Homes magazine

Contents

8-11 Editor's Welcome

Welcome to the 2014 Winter Edition of Cotswold Homes magazine. There's much to see in this issue, all commencing with what we believe is our most generous competition giveaway to date. Tickets to not one, but two sensational family theatre shows, plus tickets to see the fantastic The Shoemaker's Holiday at the RSC with dining included, passes for Festival Trials Day at Cheltenham Racecourse, London's Olympia equestrian event and Adam Henson's Cotswold Farm Park – AND signed copies of Sofka Zinovieff's latest book. Phew! Entry couldn't be easier, so do chance your arm – you might end up with a few extra festive treats.

We have some exciting news: Cotswold-based author M.C. Beaton (the most borrowed author in UK libraries, don't you know) will be able to witness her most beloved character, Cotswold sleuth Agatha Raisin, totter onto our screens this December (played by the fabulous Ashley Jensen) – and so will Agatha's legion of adoring fans. Actor Matt McCooey gives us the inside scoop.

Agatha Raisin On Screen
It's all about Agatha – get the skinny on the Cotswolds' brassiest detective and her new Christmas show

Mother Goose at The Theatre Chipping Norton
Chippy's latest has us in a proper flap: panto writer Ben Crocker gives up the goose

28-29 Sam Twiston-Davies
Racing's rising star, proudly sponsored by Harrison James & Hardie

We've also had the inestimable pleasure of meeting television presenter and home guru Kirstie Allsopp – who had much to say on a serious matter close to her heart – and chef Michael Caines, whose life story is as interesting as his dedication to food. Elsewhere, you can find out how to throw the perfect Cotswold cocktail party, discover how the humble sheep has transformed the Cotswolds, learn about the crazy life of the riotous 'Mad Boy' of Faringdon House and explore the RSC's production of The Christmas Truce. And, as usual, we have the very best of Cotswold property, with an array of expert advice. Our exclusive Privilege Card Offers will help you save money and support local trade this season (and do make sure you catch our pick of winter's events).

Hope you enjoy the issue and we'll see you in spring.
agaTHa raisin onsCreen It’s all about Agatha – get the skinny on the Cotswolds’ brassiest detective and her new Christmas show moTHer goose aT THe THeaTre CHipping norTon Chippy’s latest has us in a proper flap: panto writer Ben Crocker gives up the goose 28-29 sam TWisTon-davies Racing’s rising star, proudly sponsored by Harrison James & Hardie We’ve also had the inestimable pleasure of meeting television presenter and home guru Kirstie Allsopp - who had much to say on a serious matter close to her heart – and chef Michael Caines, whose life story is as interesting as his dedication to food. Elsewhere, you can find out how to throw the perfect Cotswold cocktail party, discover how the humble sheep has transformed the Cotswolds, learn about the crazy life of the riotous ‘Mad Boy’ of Faringdon House and explore the RSC’s production of The Christmas Truce. And, as usual, we have the very best of Cotswold property, with an array of expert advice. Our exclusive Privilege Card Offers will help you save money and support local trade this season (and do make sure you catch our pick of winter’s events). Hope you enjoy the issue and we’ll see you in spring. 
39 HoW To: THroW a CoTsWoLd CoCkTaiL parTY Julia Sibun spills the secrets to an excellent cocktail-fuelled shindig 48-49 Design team: Alias 0845 257 7475 sayhello@wearealias.com Star Chamber Offices, Hollis House, The Square, Stow-on-the-Wold, Cheltenham, Gloucestershire GL54 1AF 4 Cotswold Homes Magazine cotswold-homes.com the ProPertY & lifestYle maGaZine for the north cotswolDs Cotswold Homes magazine THe CHrisTmas TruCe skeTCHes from THe fronT: bruCe bairnsfaTHer Meet writer Phil Porter and be inspired by the true story of The Christmas Truce – on stage at Stratford this Christmas 14 -15 oXford’s neW maggie’s The story of the ‘trench cartoonist’ whose wildly popular drawings from the front are on display in Stratford THe mad boY & me Author Sofka Zinovieff tells the story of her unruly grandfather – and her surreal inheritance We visit the stunning new Maggie’s centre in Oxford, where Kirstie Allsopp tells us why she supports the charity 22-24 26-27 34-37 THe sTorY of THe sHeep Counting Sheep author Philip Walling tells us how the countryside – and indeed, British life - has been transformed by a humble ruminant 16-17 30-31 miCHaeL Caines Culinary sensation Michael Caines reflects on past, present and future 44-46 HoT properTY Two hundred years of racing at Cheltenham – and a festive ski twist this Christmastime evenTs & priviLege Card offers The best of the North Cotswold housing market, plus advice from our experts on all things property 50-98 Cotswold Homes Magazine Our next edition, Spring 2015, will bring you more upcoming events, offers and articles showcasing the local area – helping you to get more out of life in this beautiful part of the world. We will be distributing the next magazine from late February. 
CHeLTenHam raCeCourse: a TaLe of TWo CenTuries Our pick of the very best winter events in the Cotswolds – plus Privilege Card offers to help you save whilst supporting our excellent Cotswold businesses 118-125 cotswold-homes.com the ProPertY & lifestYle maGaZine for the north cotswolDs 5 Cotswold Homes Competition Winter Competition GiveaWay We’Ve GoT eVeN more fab PrIZes ThaN eVer To WIN ThIs WINTer – aNd eNTerING CouLdN’T be easIer. Win a panTo nigHT aT CHipping norTon THeaTre for aLL THe famiLY* *For a family of up to five. Catch the sensational Chippy Panto, Mother Goose, at 7.30pm on Wednesday 10th December and find out why The Daily Telegraph and Jeremy Clarkson rave about a visit to Chipping Norton Theatre. Included in this evening will be a goodie bag with a souvenir programme, badges and vouchers for drinks and ice creams. To enter, all you have to do is email admin@ cotswold-homes.com with PANTO in the subject field, remembering to include your address and contact details so we can contact you in case you win. Alternatively, you can enter by liking our Facebook page and sending us a private message at. Entries will be drawn on the 5th of December. Good luck! WIN a ‘HorribLe CHrisTmas’ TreaT aT bIrmINGham’s oLd reP TheaTre We’re in for a horribly fun time this Christmas when HORRIBLE HISTORIES: HORRIBLE CHRISTMAS comes to The Old Rep Theatre, Birmingham from 13 Nov to 17 Jan. (Box office: 0121 245 4455. co.uk.) Christmas is under threat from a jolly man dressed in red, one young boy must try & save the day! From Victorian villains to Medieval monks, Puritan parties to Tudor treats, this latest Horrible Histories show will take you on a hairraising adventure through the amazing history of Christmas. Full of silly jokes, funny songs and lots of hysterical historical facts, it’s great entertainment for all the family from 5 to 105! 
To win a free family ticket to see Horrible Christmas on Saturday Jan 3rd, email admin@cotswold-homes.com with HORRIBLE in the subject field, remembering to include your address and contact details in case you win. Entries will be drawn on the 18th of December. Good luck!

Win a Trip to the RSC in Stratford-upon-Avon to See The Shoemaker's Holiday, with Afternoon Tea and a £50 Shopping Voucher

Love shoes? Win a trip to Stratford-upon-Avon to see The Shoemaker's Holiday, with afternoon tea in the Rooftop Restaurant and a £50 voucher to shop at Stratford-upon-Avon's boutique Nuha. Filled with fun, frivolity, and the fashion of 1599, The Shoemaker's Holiday is a glorious comedy of class, conflict and cobblers in love, with gorgeous period costumes. Spend the day in the charming town of Stratford-upon-Avon, browse the beautiful shoes at Nuha, and indulge in a delicious, decadent festive afternoon tea in the stunning surroundings of the Rooftop Restaurant.

(The date of the performance is Saturday 3 January, 7.30pm. A table is reserved in the Rooftop Restaurant for afternoon tea at 4pm. Cannot be used in conjunction with any other offer. No cash alternative will be offered. Nuha voucher may only be used on shoes, boots and handbags, excluding sale items.)

For your chance to win, all you have to do is email admin@cotswold-homes.com with STRATFORD in the subject field, remembering to include your address and contact details in case you win. Alternatively, you can enter by liking our Facebook page and sending us a private message at cotswoldhomespage. Entries will be drawn on the 18th of December. Good luck!

Win an Annual Family Ticket (2 x adults + 2 x children / 1 x adult + 3 x children) to Adam Henson's Cotswold Farm Park

Visit Adam Henson's fabulous farm park all year round with this great pass. Feed the goats and pigs while teaching the little ones all about livestock, food production and rare breeds at this award-winning attraction. For your chance to win, all you have to do is email admin@cotswold-homes.com with FARM PARK! Like us on Facebook for more chances to win!
Win 4 x Tickets to Olympia Horse Show (Thursday 8th December Afternoon Performance) in London

Olympia is one of the very best equestrian events going – and you and three others could be a part of it for free! Find out more about this fantastic event by visiting or turning to page 40 and reading our Olympia feature by our equestrian correspondent Collette Fairweather. The afternoon performance includes the Horsezone.com Santa Stakes, the Osborne Refrigerators Shetland Pony Grand National, the Kennel Club Medium Dog Jumping Grand Prix, the Ukrainian Cossacks – and much, much more.

To enter, all you have to do is email admin@cotswold-homes.com with OLYMPIA in the subject field, remembering to include your address and contact details in case you win. Entries will be drawn on the 5th of December. Good luck!

Win 4 x Tickets for Festival Trials Day at Cheltenham Racecourse (24th January)

Arguably the best one-day Jump fixture anywhere in the UK, with top class action unfolding during every race and notable pointers of horses to follow ahead of The Festival in March. Many of the horses that run on Festival Trials Day are having their final preparation before The Festival, and it is rare that this meeting doesn't feature at least one subsequent Festival winner. In 2014 we saw Lac Fontana win on Festival Trials Day, before winning the Vincent O'Brien Hurdle in March. In addition, The Giant Bolster, one of the most popular chasers of recent years, won the Argento Chase. He went on to be placed in the Betfred Cheltenham Gold Cup for the third consecutive year, behind Lord Windermere.

Gates open at 10.30am with the first of seven races at 12.40pm. The last race is at 4.10pm. After racing there is another Brightwells Bloodstock sale. See the Brightwells website for more information.

For your chance to win, all you have to do is email admin@cotswold-homes.com with RACECOURSE in the subject field, remembering to include your address and contact details in case you win.
Alternatively, you can enter by liking our Facebook page and sending us a private message at facebook.com/cotswoldhomespage. Entries will be drawn on the 18th of December. Good luck!

Win 2 x Signed Copies of Sofka Zinovieff's book The Mad Boy, Lord Berners, My Grandmother and Me…

Read the sensational history of Faringdon House and its eccentric occupants in Sofka Zinovieff's brilliant new book. Read our interview with Sofka and see how the story continues with two free signed copies (all images copyright Sofka's personal collection). For your chance to win, all you have to do is email admin@cotswold-homes.com with MAD BOY!

IT'S ALL ABOUT AGATHA

Prepare for a Cotswold Christmas sensation as local author M.C. Beaton's beloved creation finally makes it to the screen.

This Christmas, readers of M.C. Beaton's quirktastic Agatha Raisin crime series can celebrate as Aggie at last makes it to the screen. Starring the sublimely funny Ashley Jensen as the eponymous Agatha, Agatha Raisin and the Quiche of Death will come to Sky 1 HD this December.

Bossy and determined PR supremo Agatha Raisin decides to chuck in her big city lifestyle and decamp to the Cotswolds, anticipating quaint cottages, rolling landscapes and restful summer days. But when her ways are greeted with raised eyebrows, she decides to ingratiate herself with the locals by sneakily entering a shop-bought quiche into a cookery competition – only to end up poisoning the judge. Can Agatha rally a few friendly souls to her cause and clear her name… and enjoy a little Cotswold romance with her dashing new neighbour while she's at it?

Ashley Jensen – perhaps best known for her hilarious turn as Ricky Gervais' hapless best mate in BBC comedy Extras – might not precisely match some readers' imaginings of the middle-aged Agatha, but is in our view (and, crucially, the view of M.C. Beaton) an excellent choice. And M.C.
Beaton herself seems equally enthused, tweeting: 'Ashley Jensen who plays Agatha is a delight. A great comedy actress. Got a great reception. After the Hamish Macbeth TV lot, it was lovely.'

Supporting is a fine cast including Gavin and Stacey's Mathew Horne and not one but two Cold Feet alumni, Hermione Norris and Robert Bathurst. This is the second sleuthy drama to film in the Cotswold area recently, as the BBC's adaptation of G.K. Chesterton's Father Brown books has shot two series in recent years.

We caught up with actor Matt McCooey, who plays Agatha's lovelorn associate, Inspector Bill Wong, to get the scoop on the new series.

Hi Matt. Tell us about your character, Bill Wong…

Bill is the local police inspector in Evesham. He's a very nice guy. He's been living in Carsley for a while, thinking that the professional life of a police inspector might not be as exciting as he first thought. All of a sudden, in comes this glamorous, attractive lady from London, which pricks up his ears a bit – and then, of course, there's a murder to be solved. He's jolted back to life a bit… personally and professionally.

So is Agatha a romantic interest?

I think he fancies her. He finds her quite intriguing. He's been surrounded by only a few village women this whole time… he's lovelorn. There's a dating website he's checking the whole time. Let's say the pickings are fairly slim [laughs].

Agatha's adventures have created legions of devoted fans. What was it like to work with Ashley? How does she play Agatha?

She was brilliant. I think she does a great job with Agatha. She's very brash and assuming. She really shakes up the village and isn't afraid to get stuck in.

Was this your first time in the Cotswolds?

Apart from when I came here on a stag do, yeah. It's truly stunning. I grew up next to a little village much like Agatha's, with all the fetes and cricket and such.
But I have to admit that the Kent countryside has got nothing on what I saw in the Cotswolds. Reading a book or a script, you have images of how it might be, but in real life? …Wow.

Have you had a chance to meet author M.C. Beaton?

I've met her a couple of times, actually – first at the read-through in London and then when she came down to set one day. She's very lovely… I can't believe that she's still knocking out these books regularly. An Agatha and a Hamish Macbeth every year!

Of course, she's been burned before by the televised adaptation of Hamish Macbeth… Will this time round be different?

To be honest, I can't remember watching that one, but I think the cast didn't match the story, and the experience she had with the people making it was not a good one. Ashley doesn't quite resemble the Agatha of the books – she's blonder, she's not middle-aged, she's Scottish – but I know that Marion made a blog post when she visited the set. She wrote that while Ashley doesn't look much like Agatha she's playing it absolutely true to the character. A frustrating, determined, sometimes difficult lady.

Agatha's got a massive following. What is it about these stories that causes such devotion?

I'm not sure what the term is – it's kind of easy reading. It's not dark and gritty; they're not hard work like some Scandinavian crime novel. The work has lightness and humour to it. Plus it's set in a picture-perfect place with a real sense of Britishness to it. I always imagine people smiling as they read them. Yes, it's about death and murder, but it's done very lightly – and I think there's a real market for that.

Are there any other adaptations on the horizon? Would you be up for another stint as Bill?

Well, yeah! I think the plan is to see how The Quiche of Death is received, how it looks after the edit, but I'd certainly love to spend a few months in the Cotswolds every year.

A BEGINNER'S GUIDE TO AGATHA RAISIN

Agatha Raisin?
That's an unusual name…

Well, Agatha's certainly not your ordinary detective. Born in a 'tower block slum in Birmingham', Agatha Styles had the shyness knocked out of her by bullying colleagues during an early stint in a biscuit factory. She soon fled to London, where she met and married the wealthy, shifty Jimmy Raisin (whom the ambitious Agatha ditched while he lay in a drunken stupor). Having made a reasonable wedge in the Public Relations game, Agatha decides to throw in the towel and, remembering a much-loved childhood holiday, takes an early retirement in the Cotswold village of Carsely (who can blame her?). But it wasn't quite the peaceful retreat of her imagination, and she soon faced a struggle to clear her name after her shop-bought quiche happens to poison the judge of a cookery competition. This little escapade, Agatha Raisin and the Quiche of Death, is the first of Agatha's adventures, and the subject of Sky's new adaptation (starring the supremely funny Ashley Jensen).

Goodness gracious! Was it curtains for Agatha?

You'll have to read the book (and watch the series) to see what happens – but judging by how many books there are in the Agatha Raisin series… But here's a fun fact: Agatha sets up her detective agency in the fifteenth book, having only been something of a hobbyist murder-solver before.

What's Agatha actually like?

'Bossy, vain and irresistible.' Hardly the most winsome attributes, but Agatha has an incredibly devoted following – many of her fans claim to see something of themselves in this unlikely heroine. Despite her all-too-real faults (or perhaps because of them?) she's actually rather endearing, and easily more relatable than a hard-boiled, trench-coat-wearing gunslinger. And she doesn't exist in a timeless void: she's already in her fifties in Quiche of Death, and ages accordingly as the series progresses.
Her friends include local constable Bill Wong, old pal (and occasional lover) Sir Charles Fraith and on-again-off-again boyfriend James Lacey, a rather charming neighbour. When Ashley Jensen was cast as Agatha, she had this to say: 'I am absolutely delighted to be on board! It's not often a part like this comes along for a woman. Agatha Raisin is a strong, forthright, independent, driven, successful woman, who is both funny and flawed, a real woman of our time.'

Has M.C. Beaton written anything besides the Agatha Raisin series?

Has she ever! Aside from the twenty-five-odd Agatha adventures, the extraordinarily prolific Marion Chesney has written a variety of other works under several pseudonyms (M.C. Beaton is reserved for crime and mystery). There are twenty-nine books (soon thirty) in her Hamish Macbeth series. She's also written over 100 (!) historical romances, published under such names as Helen Crampton, Ann Fairfax, Jennie Tremaine, and Charlotte Ward. It's little wonder she's the most borrowed adult author in UK libraries.

So is this the first television adaptation of her work?

Erm, not exactly. But Marion isn't particularly fond of the Hamish Macbeth series, so the less said about that the better. It can at least be credited with launching the career of actor Robert Carlyle.

Are the locations found in Agatha's Cotswolds real or imaginary?

Mostly real. Agatha often visits villages and towns such as Moreton-in-Marsh, Stow-on-the-Wold, Chipping Campden and Evesham, but Carsely and Mircester (where Raisin Investigations is based) are fictional, so don't go looking for them – though they are inspired by real places.
THE CHRISTMAS TRUCE

How do you capture an iconic and incredibly unlikely moment of peace in a bitter war? We interview the writer of The Christmas Truce, Phil Porter, about putting the famous story on stage.

Tell us about the process of writing this play and the influence of the story-sharing day hosted by the RSC.

I'd been working on the script for about two or three months by the time we had the story-sharing day, so I knew what the overall story was going to be. The day came at a great time for me – people came in with pictures and stories about their relatives, some quite quirky artefacts, postcards and medals and shell cases… One woman brought her grandfather's 'Do It Yourself Phrenology' booklet, which I managed to get into the play – it's all about head measuring and what you can supposedly tell from that, which was apparently quite a popular pastime in 1914!

The day was also very useful for me in that there were a lot of photographs. There are a lot of young male characters in the play and it can be quite hard to flesh them out in your mind, so I used them to help picture one character or another. I also saw various letters home – some of the details contained in those letters might have made their way into the play…

What do we know about the Royal Warwickshires and the part they played in the truce?
The truce is intriguing partly because it happened in so many places along the Western Front with various different regiments involved, but the part involving the Warwickshires is one of the best documented – partly because Bruce Bairnsfather was there, and he was a very popular writer and artist at the time. The Warwickshires were involved in an area they called Plugstreet, which was their own British take on 'Ploegsteert'. There had been some quite big battles about five days before Christmas, so both sides had lost a lot of men. They were pretty grim times. But at that point the two trenches of the opposing forces lay quite close together – the Warwickshires could hear the Germans singing, could shout out across No Man's Land. Eventually there was a small meeting on Christmas Eve based on a bit of banter that was going back and forth, and the next morning the officers went out and negotiated a more formal truce. There are differing ideas on how long that truce might have been, what the terms were. In our play, it's 48 hours and all men are able to come out, exchange gifts, play football and celebrate Christmas together.

You just mentioned the trench cartoonist Bruce Bairnsfather – what sort of a role does he play in The Christmas Truce?

He's pretty much our central man, really. He was a Second Lieutenant, one of the lowest ranking officers. The way I've portrayed him is that he's very good with the Tommies and very good with the officers. He's obviously very well brought up in a military kind of way, and fairly posh – able to communicate well with the colonel. But he's also a captain with a small 'c' for the men.
As well as being a cartoonist and writer, he was also very interested in theatre and used to put on shows and pantomimes. In the play he organises a concert party after they lose one of their friends, to rally the men and cheer them up a bit, lifting the morale of the team. But he's also changed through the experience of the truce, and learns quite a lot through that interaction.

How do the soldiers relate to one another in the trenches? Are there conflicts between them or is there always a sense of congeniality or comradeship?

I tried to show something of everything of what would happen if you put a large group of men in a very difficult situation – they will probably bond very quickly and form very close friendships through that hardship, but also, inevitably, there's a certain grumpiness and a certain fearing for your life. We have characters appear that Bairnsfather created in his cartoons, Old Bill and Bert, who are old sweats who fought in South Africa – they've seen it all and done it all before. But most of the lads are younger in years and less experienced, or else they're reservists. There's a clashing between those old guys and the young, who don't really like being told what to do. But in the end the overriding feeling is one of friendship – and it's humour that they use to get them through this difficult time, more than anything. It's very similar to how Bairnsfather's cartoons show that there is humour and light even in the darkest of places.

29 November – 31 January | ROYAL SHAKESPEARE THEATRE | STRATFORD-UPON-AVON

What was it like writing that moment when a tentative, spontaneous peace is struck between two forces?
The source material is so dramatic in itself that you're already off to a good start. The presence of music is another tool that I can use to create tension, but also the beauty of the moment, I suppose. There's singing and there's calling between the trenches, then one guy goes out and then they're all playing football – so dramatically it's sort of a gift, because it unfolds in a very interesting and satisfying way. Probably my favourite thing to write was the scene where Old Bill goes out to meet the Germans in the dark on Christmas Eve. There's so much tension there – the idea of climbing up over the parapet, making your position known, when up to that point the only thing to go between those two places has been bullets and artillery. That was really enjoyable to write.

On a personal level, did you feel a sense of empathy for these men – did you ever reflect on the fact that if you were born earlier it might very well have been you in that position?

Yeah, absolutely. Writing about people wandering around in the trenches in the dark and the cold makes you realise how ill-equipped you would be to deal with it, living now in a relatively peaceful time…

What do you hope audiences will take away from The Christmas Truce?

I think it's a very optimistic story, and a story about the spirit of Christmas more than anything. I also think our younger audience may well learn things about the war, and that's great. But I think what makes it such a great story is that it's about the enduring nature of Christmas and the Christmas spirit, and how it thrives even in the unlikeliest place. It says a lot about friendship and humanity, and I hope that's what people take away from it.

THE CHRISTMAS TRUCE: SYNOPSIS

The Christmas Truce tells the story of a group of young Warwickshire men 100 years ago.

Warwickshire, August 1914

A glorious summer is interrupted by the outbreak of war and the British Empire's bureaucratic machine swings into action.
Reserve soldiers and nurses are called up and readied for action, while others with military experience are encouraged to re-join. Among the returning soldiers is Second Lieutenant Bruce Bairnsfather, a charismatic young man from a military family, with a flair for drawing and painting.

Three months later…

After considerable casualties at the First Battle of Ypres, the Reserves are called upon to provide reinforcement. Bruce travels to Belgium with a group of such soldiers, joined also by Bill and Bert, a couple of old sweats. Arriving to pelting rain and a heavy bombardment, they are taken straight to the Front Line, where they learn to adapt to the terrible conditions of the trenches. Meanwhile, at a Clearing Hospital a few miles away, reserve nurse Phoebe Bishop arrives for work and immediately finds herself on the wrong side of Matron.

Bonded by shared experience, the soldiers become close friends, but their morale is damaged when they lose their first man to a sniper. Bruce organises a concert party, repairing their spirits, but their happiness is short-lived as they receive orders to attack. The attack is unsuccessful, though they are saved from total obliteration by a seemingly miraculous stroke of luck. Devastated by the loss of several comrades, the surviving men bed in for Christmas.

INTERVAL

Christmas Eve

The Section is disappointed to be in the trenches for Christmas. But the evening takes a magical turn when they see a line of Christmas trees along the German Front Line. The German soldiers sing carols across No Man's Land and the Tommies sing back. A growing atmosphere of festivity leads to an offer from the German side of a meeting in No Man's Land. Old Bill meets the enemy face-to-face and they exchange gifts. Meanwhile, in the Clearing Hospital, the injured soldiers are woken by the nurses putting up decorations. Matron demands that the decorations are taken down.
Christmas Day

Bruce and his friend Captain Riley meet a German officer and arrange a truce, though Bairnsfather will not shake the German officer's hand. Soldiers from both sides step out from the trenches and meet. They bury and pay tribute to their lost friends, and then a game of football begins. The spirit of the truce reaches the hospital too, as Phoebe and Matron, inspired by the news from the Front Line, agree to put aside their differences for the common good. And later, in No Man's Land, the German officer convinces Bruce that their similarities outweigh their differences. Their own personal truce is sealed with a handshake. For a fleeting moment all is peace and goodwill, until the British High Command, fearing mutiny, demand an immediate end to the truce. The war resumes, but they will remember this very special Christmas forever.

29 November – 31 January | ROYAL SHAKESPEARE THEATRE | STRATFORD-UPON-AVON

BRUCE BAIRNSFATHER

When artist Bruce Bairnsfather went to war, he documented the ordeal in a hilarious and quintessentially British series of drawings. Discovering a huge appetite for his pictures, both among his brothers-at-arms and those at home, he assumed the newly-created post of Officer Cartoonist – and found himself present at the Christmas Truce.

Mark Warby, leading Bruce Bairnsfather collector and writer, and Jo Whitford, Head of Exhibitions at the Royal Shakespeare Company, reveal more about Bairnsfather's life and the RSC's new exhibition commemorating his work.

What was Bairnsfather's life like before the war? Had he found much in the way of commercial success?

Bairnsfather had originally been destined for an Army career and spent a year with the 5th Battalion, Royal Warwickshire Regiment (Militia) in 1907-08.
But he resigned his commission to take up art, which was his real passion, and after spending some time at the John Hassall School of Art in London, had moderate success over the following years with commercial designs for firms such as Players Tobacco, Lipton's Tea and Beecham's Pills. However, he was unable to earn enough to sustain a full-time career as an artist, so he worked for a local firm of electrical engineers based in Stratford-upon-Avon, with his occasional commercial art sales supplementing his income. One of his jobs during this time was to work as part of the team installing electric lights in the Shakespeare Memorial Theatre.

ON UNTIL 15 MARCH 2015 | PACCAR ROOM, ROYAL SHAKESPEARE THEATRE | STRATFORD-UPON-AVON

Why did he pick up his pencil again in the trenches?

When he went out to France in November 1914, art was the last thing on Bairnsfather's mind. But in December 1914, at St Yvon, on the edge of Plugstreet Wood, to relieve the monotony of trench life – there were often long periods during the day when the soldiers were not able to move about freely, to avoid being seen by the enemy – he began to draw. To amuse himself, and those around him, Bairnsfather began making comic sketches on any scrap of paper he could find, and these were soon in great demand, often pinned up in dug-outs up and down the front line. His Colonel even asked him to decorate the walls of the Battalion HQ with some of his cartoons.

How did his war drawings first come to publication?

It wasn't long before the popularity of Bairnsfather's sketches with the men he was serving with led to him becoming known as the artist of his regiment. One day, a fellow officer suggested "why don't you send something off for publication" – and early in March 1915 this is exactly what he did. While resting in billets two or three miles behind the front lines, he made a finished drawing inspired by an incident in the trenches at St Yvon, and sent it off to The Bystander, a popular weekly magazine published in London. He later said he chose The Bystander after seeing a copy lying around which someone had been sent from home, and he felt the magazine's style and format suited his drawing. The drawing arrived with Vivian Carter, Editor of The Bystander, who immediately liked it and published the cartoon in the magazine on 31st March 1915.

What do you think it was about the Old Bill character that proved so enormously popular?

Old Bill represented a 'type' of soldier who everyone could relate to and feel a familiarity with. He was a veteran of war, a stoic survivor who had been through it all before, and was a philosopher of the trenches. He was a bit of a grouser, but loyalty and comradeship were paramount to him. Almost everyone probably knew an Old Bill 'type' and was able to engage with the character.

What responsibilities did Bairnsfather's War Office appointment as 'Officer Cartoonist' entail?

From August 1916, Bairnsfather was attached to the War Office Department of Military Intelligence Sub Section MI7b, which dealt with press propaganda. In this capacity he made visits (at the request of each of the Allied armies) to the French, Italian and finally American fronts, and subsequently drew a series of cartoons based on each of these visits. He also illustrated propaganda articles written by officers attached to MI7b, which were distributed throughout the dominions, and he was used for other illustrative propaganda work. There is also evidence to suggest that propaganda messages were inserted by MI7b into Bairnsfather's replies to letters from his fans.

What effect did Bairnsfather's work seem to have on morale?

Bairnsfather's cartoons had an immeasurable effect on morale. He struck a chord with the
He struck a chord with the soldiers fighting in France and their families back home. They knew he had seen active service at the front and was drawing from his first-hand experience. He knew what the soldiers and their families were going through. They knew he wasn’t mocking the soldiers but picturing them in situations which many men had experienced. Soldiers would write home telling their families that life at the front was just like Bairnsfather drew it. The public demand for reproductions of his cartoons in all forms was incredible, and everyone wanted to own something by him. His cartoons had a “carrying on” spirit that was able to lift the public mood even when the outlook seemed grim. Published volumes of his cartoons sold over 1 million copies, and hundreds of thousands of colour prints, postcards and other merchandise were sold. People flocked to exhibitions of his original drawings, and eagerly awaited the next published volume of his cartoons. His play about Old Bill became one of the greatest theatrical successes of the war. He made millions laugh, even in the face of adversity, but could also portray the serious side of war, as shown in a number of his drawings.

How well known did Bairnsfather’s work become across the world?

From 1916, Bruce Bairnsfather’s work was known throughout the world. His cartoons were reproduced in newspapers overseas, and the compilation volumes of Fragments from France were published in America, Canada and Australia as well as the UK. Merchandise – including postcards, playing cards, jigsaws, Christmas cards, colour prints and more – produced by The Bystander was also sold worldwide, as was the hugely popular range of Bairnsfather Ware china made by Grimwades of Stoke on Trent.
Bairnsfather’s play, The Better ‘Ole, which ran in London for 15 months from August 1917 and was taken around the UK by several touring companies, was also produced successfully in America, Canada, Australia, New Zealand, South Africa and even India, through to 1920. By the time the war ended in 1918, Bairnsfather was truly an international celebrity, and there was great demand for his work, worldwide.

He was present at the Christmas Truce of 1914. What effect did being present at the ceasefire have on him?

In his book Bullets and Billets, Bairnsfather described this moment as something he wouldn’t have missed for anything. He had started Christmas Eve day feeling down on his luck, accepting it wouldn’t really involve any of the usual seasonal festivities. But on this perfectly still, cold and frosty December day he began to feel there was something invisible and intangible in the air; a kind of feeling that the two sides had ‘something in common’. He describes the truce as an ‘invigorating tonic’, an event that put back something human and a moment where there was a friendly understanding between them that Christmas would be left to finish in peace.

Cotswold Homes Magazine

What did life hold for Bairnsfather after the war had ended?

At the end of the war, Bairnsfather was in great demand. Early in 1919 he undertook a lengthy lecture tour of the UK, and from 1919-20 edited his own weekly paper, Fragments. He continued to contribute cartoons to The Bystander until 1923. He undertook a lecture tour of America in 1920, and in 1922 wrote and appeared in a new play, Old Bill MP, which toured the provinces and had a successful London run.
He later appeared in variety theatres in the UK and in vaudeville in America, and lived in New York for several years in the late 1920s, contributing to many popular US magazines such as Life, Judge and the New Yorker. In 1927-28 he wrote and directed a film, Carry On Sergeant, in Canada. In the 1930s he returned to England and contributed to popular magazines such as The Passing Show, Illustrated and the British Legion Journal. He toured in variety from 1935-38 and was the first cartoonist to appear on BBC television in 1936. There were further lecture tours in America in the 1930s. From 1938-42 he again contributed to The Bystander, and in 1939 he created a new strip cartoon, Young Bill, for Illustrated magazine. In 1940 the film Old Bill & Son was made, with John Mills as Young Bill. From 1942-44 Bairnsfather was attached to the US Forces in Europe as an accredited correspondent-cartoonist, and contributed more than 200 cartoons to the US Forces newspaper Stars & Stripes. He spent his later years living quietly in England, much of his time occupied with landscape painting. Bruce Bairnsfather died in Worcester on 29th September 1959.

What works of his can we expect to see at the exhibition?

The exhibition features a selection of Bairnsfather’s cartoons from original magazines, books and on a variety of merchandise from the time such as teapots, matchbox holders and jigsaws. It includes famous cartoons such as A Better ‘Ole, Coiffure in the Trenches and Where did that one go? The exhibition also includes archive photographs of Bairnsfather, from his early family life in Stratford-upon-Avon, as a soldier on the frontline and later, as a successful artist and household name. You can see original poster and leaflet designs for brands such as Fry’s chocolate and Lipton’s tea and early examples of his work before he went to war, along with memorabilia related to his work in theatre and film.
What did it feel like to be able to honour his life and work right here in Stratford-upon-Avon?

Bruce Bairnsfather once wrote “Warwickshire is my county, and I love everything about it” – so it is particularly fitting that this exhibition should be held in Stratford. And more so that the RSC should honour the cartoonist who began his working life as a young electrical engineer, installing electric light at the original Memorial Theatre, and working the lighting switchboard at one of the annual Shakespeare Festivals. He lived at Bishopton, just outside the town, for over 17 years, and many of his famous war cartoons began life in his studio there.

THE WAR TO END ALL WARS

It’s a true tale of peace and reconciliation in the face of a grisly war that captured the hearts of the civilian population back home. ‘The Christmas Truce’ describes a series of unofficial cessations of hostilities that occurred along the Western Front in France during the cold and wintry late December days of 1914. By this point, World War I had been raging for several months, but stories started to circulate during that Christmas of incidences of German and Allied soldiers laying down their arms, stepping out of their trenches and meeting in no-man’s-land. Some chatted and swapped souvenirs, others were reported to have played in a football match.

As we commemorate the outbreak of World War I in its centenary year, it’s difficult for us to imagine how different the world was in 1914. European society was a much more rigidly structured affair, with many monarchies still in situ (including those of Russia, Germany, Italy and Austro-Hungary). It was famously the assassination of the heir to the Austrian throne, Archduke Franz Ferdinand, and his wife in Sarajevo on 28th June 1914 that sparked a rapid sequence of events that led to the outbreak of war. The spell of that golden summer of innocence had well and truly been broken.

Britain declared war on Germany on 4th August, reacting to an ignored ultimatum to remove German troops from Belgian soil by the end of the previous day. The beginning of August saw the German army sweep past Luxembourg and Belgium on their way into France, initially making rapid progress. With both the German and Allied armies trying to outflank each other, a battle line was eventually drawn across France – the Western Front – stretching from Lorraine in the south to the English Channel in the north. This was the start of trench warfare, with soldiers digging out miles of trenches and erecting barbed wire to hold their positions. With some trenches just yards apart, progress of the armies was slow and these underground tunnels became increasingly fortified. Lines were strengthened on both sides with more men and it was soon realised that this was going to be a war of attrition – a ‘winner’ could only be declared if one side ran out of men or ammunition.

Being so close to the enemy in some parts of the Western Front allowed the soldiers to shout out to their opponents or stick signs on wooden boards. After particularly heavy artillery fire, for instance, the soldiers might shout out “Missed” or “Left a bit”. It was this black humour that provided the backdrop to the conversations that started between the opposing trenches, culminating in the Christmas Truce.

Much of that December of 1914 in France had been wet. However, on Christmas Eve the temperature dropped and the landscape was enveloped in a sharp white frost. The edgy banter and shouting that had developed between the trench lines subtly changed when the German troops started singing carols and placing Christmas trees with lanterns above the trenches. Interviewed by the British press afterwards, a subaltern who had been part of the Christmas truce said: “Their trenches were a blaze of Christmas trees, and our sentries were regaled for hours with the traditional Christmas songs of the Fatherland. Their officers even expressed annoyance the next day that some of these trees had been fired on, insisting that they were part almost of the sacred rite.” A truce, which days earlier had seemed inconceivable, was now almost an inevitability.

The singing of hymns and carols between the trenches is perhaps one of the most atmospheric motifs of the Christmas Truce. Silent Night (or Stille Nacht in German) is the one most associated with the event, but Allied soldiers rarely mention this hymn in their letters. Others listed include O Come All Ye Faithful, It’s A Long Way to Tipperary, Auld Lang Syne, While Shepherds Watched Their Flocks and O Tannenbaum.

Although there is some dispute as to whether a football match did take place, most letters home from the trenches at the time mention that soldiers chatted, swapped jokes, exchanged mementoes and souvenirs and sang songs and hymns.

Despite the magic of the Truce being quickly dispelled after Christmas Day with a taking up of arms again – indeed, artillery fire had continued throughout Christmas in some parts of the Front – the enduring legacy of the truce has been positive. Today, it’s often looked upon as a wonderful example of humanity during a dark hour in our history. It has been the inspiration for many works of art – paintings and books as well as songs. But its greatest legacy must surely be the message of hope. As a Highland Regiment officer said in The Times in 1915: .

IF YOU WOULD LIKE TO READ MORE ABOUT THE FIRST WORLD WAR, THESE NOVELS AND MEMOIRS CAPTURE THE SPIRIT OF THE AGE BRILLIANTLY:

All Quiet on the Western Front, Erich Maria Remarque
Birdsong, Sebastian Faulks
A Farewell to Arms, Ernest Hemingway
The Regeneration trilogy, Pat Barker
A Long Long Way, Sebastian Barry
Goodbye to All That, Robert Graves
Testament of Youth, Vera Brittain
Memoirs of an Infantry Officer, Siegfried Sassoon

For younger readers:
Private Peaceful, Michael Morpurgo
War Horse, Michael Morpurgo
War Game: Village Green to No-Man’s-Land, Michael Foreman
The Amazing Tale of Ali Pasha, Michael Foreman
Charlotte Sometimes, Penelope Farmer
The Flambards series, KM Peyton

RSC EDITION

DOWNLOAD NOW & WIN FREE TICKETS! OUR SPECIAL ONLINE-ONLY RSC EDITION OF COTSWOLD HOMES MAGAZINE!

Are you ready for a trip to the theatre? In celebration of the RSC’s sensational winter season, we’ve produced a special, online-only edition of the magazine – free for UK residents – dedicated to all the fantastic productions playing at Stratford-upon-Avon. Containing interviews with the writer of The Christmas Truce, Phil Porter, director of The Shoemaker’s Holiday Phillip Breen and director of Love’s Labour’s Lost Christopher Luscombe, our magazine offers you a sneaky look behind the scenes and an insight into the creative minds responsible for the winter line-up. Have a look at the work of trench cartoonist Bruce Bairnsfather (exhibiting at Stratford 10th October 2014 – 15th March 2015) and find out how community stories helped shape The Christmas Truce.

AND THERE’S EVEN AN EXCLUSIVE COMPETITION FOR OUR READERS TO ENTER! WIN 2 Pairs of Tickets for The Shoemaker’s Holiday on Wednesday 17th December in the Swan Theatre – including a pre-theatre dinner for two at 5.30pm in the Rooftop Restaurant for each pair.
Readers of this special edition can enter our competition to win a brilliant trip to the theatre to see the RSC’s new production of Thomas Dekker’s The Shoemaker’s Holiday – and entering our draw couldn’t be easier. With dinner in the Rooftop Restaurant included, this is a treat not to be missed. Dekker’s story of love, war, cunning and cobblery is arguably his best work, springing from a time that, although now distant, bears curious similarities to our own (read our interview with director Phillip Breen for more information on this wonderful tale).

THE SPECIAL RSC EDITION can be downloaded now on Pocketmags (free for UK residents) – a free app must be downloaded first. (Link to RSC issue: . From this page, button links to other platforms – Apple, Android etc – can be found). The magazine is currently available on Android Google Play, Amazon Kindle Fire and Windows 8 via the Pocketmags App. It’s free for UK residents, but £2.49 for the rest of the world ($3.99 / €3.59 / AUD$4.99).

SOULS BROTHERS

The Great Rissington Sorrow: The Death of Five Brothers

The BBC revisits the unique tragedy of the five Souls brothers who lost their lives in the Great War, shattering a family.

In the picturesque Cotswold village of Great Rissington, there exist few reminders of a wartime loss almost peerless in its tragedy. But for the curiosity of a writer, the story might have been forgotten entirely. In the centenary year, the tale of the Souls brothers is remembered again. BBC Points West recently visited the village to hear its schoolchildren tell of the local family ruined by the Great War.

Of the six sons born to Julia ‘Annie’ Souls and her husband William, five were killed in WWI. Twins Alfred and Arthur came into the world together; they died just five days apart. Albert, Frederick and Walter Souls all died in the summer of 1916, with Frederick perishing in the Battle of the Somme (where there were over 1.5 million casualties) and Alfred and Walter losing their lives at Bully-Grenay and Rouen. Walter was not killed outright, but was transferred to hospital in a cheery mood after suffering a leg wound, where he soon contracted a blood clot and died.

Annie was given one shilling per week for each death – scant compensation for the loss of a son, and with five boys dead her suffering was doubtlessly immeasurable. Yet there were some in the village who considered that the bereaved Mrs Souls had received a generous allowance from the government, and by some accounts Annie was viewed with suspicion by certain members of the small community. After the death of the third brother, Prime Minister Herbert Asquith sent her a letter conveying ‘the sympathy of the King and Queen for Mrs Souls in her great sorrow.’ She kept a candle burning in a small window overlooking the road in memory of Fred, whose body was never found after he disappeared over the top at the Somme.

The scale of Annie Souls’ loss recalls the plot of war blockbuster Saving Private Ryan, where a special team led by Tom Hanks is dispatched to recover the missing Private Ryan after his three brothers are killed in action. However, the story of the Souls received little recognition outside the village until writer Michael Walsh took an interest after a visit to Great Rissington church, poring through battalion and grave commission records to unearth details of the Souls brothers’ fates. Interviewing a 101-year-old resident by the name of Maud Pill, Walsh discovered that the brothers were ‘nice-looking’ but ‘not very tall’ (Fred, Alf and Arthur joined a ‘Bantam’ force for short-statured soldiers) and all were unmarried. Arthur had won the Military Medal, but the details of his act of valour had been lost.

In the summer of 2014, the Year 6 children of Great Rissington School were involved in a research project with Cotswold Homes magazine, exploring the story of the Souls Brothers and the plane crash that occurred in the garden of The Lamb Inn in 1943. ‘Now in the 21st century we remember and appreciate the Souls,’ wrote the children. ‘Their bravery has been recognised, and with the centenary of WWI and our local connection with these men, we are sure that everyone admires these people and their heroism.’

Photo: Lynne Milner

The edition of BBC Points West concerning the lives of the Souls Brothers aired 6.30pm on Friday 7th November.

MAGGIE’S

RICHARD CURTIS AND KIRSTIE ALLSOPP UNITE TO OPEN A CANCER SUPPORT CENTRE WITH A DIFFERENCE

MAGGIE’S, THE CHARITY THAT PROVIDES FREE PRACTICAL, EMOTIONAL AND SOCIAL SUPPORT FOR PEOPLE WITH CANCER AND THEIR FAMILY AND FRIENDS, RECENTLY CELEBRATED THE OPENING OF ITS NEW PURPOSE BUILT CENTRE IN THE GROUNDS OF THE CHURCHILL HOSPITAL, OXFORD.

Kirstie Allsopp and Richard Curtis, centre, open the new Maggie’s centre in Oxford

The event was attended by Maggie’s Chief Executive Laura Lee and officially opened by television presenter and author Kirstie Allsopp and film writer and director Richard Curtis. Every year 5,000 people in the region are diagnosed with cancer. As the number of people living with cancer increases, support becomes even more important. This year Maggie’s is celebrating 18 years of supporting people with cancer and the new Maggie’s Oxford Centre at the Patricia Thompson Building will be the 18th Centre to be opened in 18 years. The first Maggie’s Centre opened in Edinburgh in 1996 and Maggie’s Oxford at the Patricia Thompson Building joins 15 other Centres across the UK, an Online Centre and a Centre in Hong Kong. Maggie’s Oxford at the Patricia Thompson Building has more space and increased facilities for visitors to access a range of support that Maggie’s offers, including psychological support, benefits advice, nutrition workshops, relaxation and stress management, art therapy, tai chi and yoga.
BECCI’S STORY

Becci Berry was on site to tell us about her experience in dealing with Maggie’s ‘I live on a farm at the edge of the Cotswolds, between Faringdon and Lechlade. In 2010 my husband Rich was diagnosed with bowel cancer. We originally had the diagnosis in Swindon – it gave us three months at the outside, and the news wasn’t broken particularly well.’

After ‘transferring to Churchill to see the lovely oncologist’ she and Rich found comfort – and a cup of tea – at the onsite Maggie’s centre, then just a portakabin. ‘I can’t really say what exactly they did at that moment…it was just the fact they were there. They brought the humanity back, took us out of that clinical environment into a warm, supportive space. They’re a reminder that you are still you, and there are things that you can do.’

‘Rich went on to do meditation and mindfulness courses and received nutritional information that transformed his way of thinking. For the nine months during the illness, he was fitter and healthier than he had been for a long time, ironically. It totally changed the quality of his life and the time we spent together as a family. He had his time where he could talk to other sufferers – particularly about how he was coping as a man, which can be very different to how you might [handle] it as a woman. The professional side and practicalities were well taken care of – things we aren’t even aware of, like the benefits that we could receive.’

‘As Rich was running a farm, it was hard. Being in bed with cancer didn’t mean things stopped – he’d still be on the phone running things. Even sitting attached to a chemo drip he’d be writing lists of things for me to chase up. The advice we received was so helpful, even after Rich passed away and it was just me and the girls.’ Now she is successfully running the farm herself, with a little help from her family.
‘Rich was passionate about dairy and a typical Cotswold farmer. He was a very big part of the local farming community. He was only 34 when he passed away, so he was absolutely in his prime. We live on an idyllic farm owned by the National Trust – Rich was born in that house and died in exactly the same place that he was born, in the same room. The farm is him, and I didn’t want that to disappear for my daughters’ sake.’

‘So I’ve taken on the farm and today have a dairy herd of about 170 cows. Rich had embarked on a cross breeding programme, and I’ve continued with it.’

‘Coming to the new centre has reminded me how important Maggie’s has been to our family.’

WE SPOKE TO PRESENTER KIRSTIE ALLSOPP ABOUT HER REASONS FOR SUPPORTING MAGGIE’S.

Kirstie, what’s your story? How have you been touched by the issues surrounding cancer?

I’m a big fan of Maggie’s. I think that the work that the doctors do is remarkable, but the very nature of what they do means that they cannot spend as much time as they want to spend treating the effect cancer can have on a patient’s family, their work colleagues, a patient’s individuality. It can be a tad shocking if you’ve never been in a medical environment before – life can be very, very different. So what Maggie’s do is they put the individual back in a patient, and puts them in a space that doesn’t feel municipal. This could actually be a really cool house of a really cool friend, someone who went to art school and built their own house! It even has its own fire. It has sofas and rugs and cushions – I’m a great cushion lover. So for me the difference between homely and municipal is a cushion. It’s somewhere you can find answers to questions and meet people who understand what you are going through.

So now the centre is open and not just a concept – what are your thoughts?

It’s really remarkable. It’s extraordinary because obviously the NHS doesn’t have masses of land to give out, so the land that Maggie’s is given is often quite compromised. You can’t build conventional foundations because of the tree roots and actually it works fantastically well because it’s a sloped area. And I love that this is an area where the oncologists used to come and have their cigarettes – so we’re not just treating cancer patients, but also preventing it!

How difficult is it to fit all these philanthropic engagements into your schedule?

I’m always, always, always [busy]. I’m filming Location, Location, Location tomorrow and it basically never stops. And that’s what’s great about giving time to something like this. There’s always the odd day here and there and my life is wonderfully flexible in that respect, but… I’m very lucky in my job and there are downsides to being a well-known person, but one of the most important is this: being able to give your time and energy to something you passionately believe in.

So how did you decide to become involved?

Interestingly I’ve always known about the work of Maggie’s, as my mother had cancer for twenty-five years. My partner did a charity bike ride for Maggie’s four years ago and he came back and said ‘Right, that’s it, we’ve got to support Maggie’s in every way we can’ and I said ‘I can’t possibly fit another charity in!’ He responded ‘Well, I’ll do it, so if we need you can come.’ Basically, he’s a huge supporter of Maggie’s and I pitch up when required. I feel passionately about it and feel it is remarkable in every way. There are already 55 Maggie’s centres, and I think there can be one put alongside every cancer ward in the UK. It can be achieved.

How do you think things might be different if the sort of emotional support that Maggie’s offers becomes more widely available?
My mother was remarkably well spoken, but what I find disturbing is that people ring me up who have no family experience of cancer, no knowledge, who simply are completely in the dark when they receive that diagnosis. Those are the people who Maggie’s can help so well, and that’s what happens at these centres. My grandmother had cancer too. I came from a cancer family. Loads of people don’t. It’s a big, terrifying word and they know nothing about how to cope – until they walk into this warm, calm, beautiful space, and they think ‘this is all to do with cancer? Perhaps this is something I can deal with.’

IF YOU WOULD LIKE TO FIND OUT MORE ABOUT HOW MAGGIE’S CAN OFFER SUPPORT TO THOSE AFFECTED BY CANCER, FIND OUT MORE AT

SOFKA ZINOVIEFF

For nearly twenty years the eccentric composer Lord Gerald Berners and his gay lover, the romping, fiendishly handsome ‘Mad Boy’, turned Faringdon House into a bohemian paradise. Their home became an exotic playground for the great and the good, the famous and infamous: Dali, HG Wells, Stravinsky, Evelyn Waugh, Nancy Mitford… all frolicked under a skyful of implausible pastel-coloured doves. But their lavish life was put on ice by another world war – and the arrival of the pregnant beauty Jennifer, who became the Mad Boy’s wife. Now the Mad Boy’s (sometimes contested) granddaughter, the writer Sofka Zinovieff, has written a book about the extravagant characters responsible for her unexpected inheritance…

Radio presenter David Freeman interviewed Sofka about her new book, The Mad Boy, Lord Berners, My Grandmother and Me.

So, Sofka. Where do you live?

Well, since last month I’ve been living at Faringdon House, which is a very big change because before this it was thirteen years in Greece and five years in Italy.

Now, you’ve collected some very interesting names, haven’t you?

[Laughs] Sofka Zinovieff is my maiden name.
I don’t use my married name but my children do, so it’s around the place quite a lot, and it’s Papadimitriou.

All of us who live nearby have seen the folly tower (once complete with notice warning: ‘Members of the public committing suicide here do so at their own risk’). Is that yours?

No, it’s not. The folly was built in 1935 by Lord Berners, supposedly as a present for my grandfather the Mad Boy, Robert Heber-Percy. But in the 1980s it had become very dilapidated and was quite a problem and – I hope for only noble reasons – the Mad Boy decided to donate it to the people of Faringdon. So I’m a trustee but it doesn’t belong to me.

So in the book that you’ve written, The Mad Boy, Lord Berners, My Grandmother and Me, the Mad Boy is your grandfather, correct?

Yes. Robert Heber-Percy was his full name, but many knew him as the Mad Boy.

I don’t want to ask an impolite and intrusive question, but…are you sure?

[Laughs] Well, as they say it’s the wise man that knows his father. It’s a complicated question and as you read the book you will get to that point. My grandmother was a beautiful, dashing and fairly wild young woman during the war and there came a point where my mother’s paternity needed to be investigated, some hints were dropped, so no, I’m not 100% sure. It’s quite interesting how some people are convinced he is, some are convinced he’s not. And the question is going to remain, I’m afraid.

Does it bother you?

No, it really doesn’t bother me, because I think those sort of relationships – especially once you get to the grandparental generation – are so much more to do with the heart and so much less to do with a spare sperm.

That house that is now yours, it isn’t very modest, is it?

The funny thing about it is, well, of course it isn’t modest, and anyone would agree, but…although it looks grand, and has quite grand rooms, it is quite cosy. The rooms are all on a human scale.

And how did Lord Berners acquire it?
Basically his mother married for the second time after she was widowed and rented Faringdon in the twenties, and when she and her husband died (both in 1931) Berners decided he would make it his main home. He had a home in Rome, a home in London – he was a great traveller, formerly a diplomat, had lived in Constantinople. He sort of knocked around in a Rolls Royce with his chauffeur (in fact the chauffeur’s daughter still lives in Faringdon). It was in the year he made Faringdon his base that he ran into the Mad Boy.

The house that is now yours was originally Lord Berners’. Would it be fair to say that his heyday was the 1930s?

Yes. Absolutely.

What was he famous for?

He was famous for firstly being a composer – music was his first great love – but he also wrote books, which many would say were quite light, although there are a couple of lovely memoirs, and he also painted landscapes, some portraits – not great art, but accomplished and charming.

Was he well known in his time?

I think he knew that being a bit of an eccentric and moving in the right circles would bring a bit of fame to you. He was very good at composing, very serious about it, and knew a lot of people in that world. Stravinsky thought he was OK. At the British premiere of Stravinsky’s The Rite of Spring a piece by Berners was played as well, which puts it in perspective.

Most of us spend our lives worrying about money, but he didn’t do that…

No. When he inherited the title in 1918 he also inherited quite a bit of money, so he didn’t have to worry about that. But he did, sometimes, in the way that rather rich people do. He was a depressive and a melancholic.

But then he went to a house party and met the Mad Boy… Is that the sort of thing one did in those days, go to something like an upmarket sleepover?

Exactly [Laughs] Yeah. They were all going around visiting each other. [A man named] Michael Duff has this lovely great house and there were lots of people staying.
Berners fell head over heels with the Mad Boy, who was a wild twenty-year-old, a great daredevil and very handsome: the sort who'd dash off naked on a galloping horse, leaping over the hedges. He was obviously intrigued enough to go back to Faringdon with Berners. They became this most unlikely partnership. You can imagine Berners' friends – some were more eccentric than others, but they were high society people, Lady This and Duchess Of That – they were just astounded at this handsome young Mad Boy.

But then the story gets more astounding, because a pregnant young girl arrives… But you have to remember that this is ten years later, so they've had a good decade of living the high life, travelling around Europe, making Faringdon into this jewel of a place with coloured doves…

They dunked the doves in paint, did they? No, not paint, lovely vegetable dye. As we still do! [Laughs]

OK, so they had ten years of this… …And then the war came, which changed everything for everybody. Berners moved into little lodgings in Oxford and gradually moved back to Faringdon: the army moved in to Faringdon, camping in the grounds, soldiers sleeping in the attic… it was a very different environment to what it had been. The gardeners had all gone off to war, so the grass was wild, and there were sheep grazing on the lawn. Probably they managed to have a few vegetables in the garden and some cream from the farms, so it was perhaps better than what people had in the cities. It was at that point that the Mad Boy said: 'I'm getting married to beautiful Jennifer Fry.'

The question that then emerges is why? Why, on all fronts? I think there were many reasons. I think that labels about gay, straight, bisexual – these sorts of [orientations] – were perhaps not so clearly defined, and maybe that gave people a freedom that we don't have today, when people may have more rights in that they won't be sent to prison for being gay but they are obliged to define themselves. Perhaps things were more fluid then, in that you didn't have to say 'Yes, I'm gay.' The Mad Boy was never very camp; he was manly and tough and good-looking. He had girlfriends too, so I suppose he was what we call bisexual now. My suspicion is that he and Jennifer had a fling (he was pretty keen on flings) but they were good friends as well. Why they decided to get married at that point is really tricky to establish. She was just pregnant.

When I was researching the book I'd always assumed, like many people did, 'Oh, maybe he helped her out of a difficult situation – how kind of him.' Not actually a characteristic that was typical of him. He was many things, but not a particularly kind or thoughtful person. When I looked at the dates I realised that actually she would only have known she was pregnant if you allow a two-week window for getting the test done… She could have only really known for one week before they got married. Now that's pretty quick to actually say 'Hey Mad Boy, old friend, would you do the decent thing' and arrange a marriage in London and get the parents to come and all this sort of thing… My guess is that it's all much more unclear than it was in their minds. The Mad Boy's good friends told me that he believed that my mother was his daughter and I was his granddaughter. But it isn't clear-cut.

Did you know him? I did know him, but not very well. I met him and went to stay with him for the first time when I was seventeen. My mother hadn't got on all that well with him. I'd met him a couple of times as a child and at age fourteen, but when I was seventeen my mother took me to stay at Faringdon. It was an extraordinary experience, rolling up at the front of the house and the coloured doves flying up. And there was the Mad Boy with a drink and a cigarette, rather dapper in his suit and with a sort of naughty look on his face. He welcomed us into the house – it wasn't exactly 'Hi Granddad!' but there was champagne. He said to me at a certain point: 'Do you see that handbag there?' and he pointed to this amazing old chair, and there was a white wicker fish-shaped handbag lying on the chair. 'That belonged to your grandmother Jennifer, who left it there when she left in 1944.' By this time it is 1979. Pretty astounding. That began my friendship with him.

'Astounding.' Is that how you felt when you found out he left you this house? It was flabbergasting. Absolutely shocking. I was a seventeen year old who had been brought up in hippie London and here I was, entering this world of the 1930s. It was like 'Oh, this is Nancy Mitford's bed' and I would be brought breakfast in bed by the housekeeper with a breakfast set that was probably the same as the set Nancy Mitford was given, with a great big magnolia flower. So eight years later, having visited fairly regularly but not often, Robert called me on my own one day. I was a research student at the time, doing a PhD in Social Anthropology and living out in Greece. He summoned me there and said he wanted to leave me Faringdon. It was a gigantic shock. I can still feel the aftershock today.

All images © Sofka Zinovieff

The Mad Boy, Lord Berners, My Grandmother and Me by Sofka Zinovieff is available from all good booksellers.

The Theatre Chipping Norton Presents MOTHER GOOSE

It's time for another of Chippy's unmissable Christmas pantos – and this year's Scandinavian snowbound smash, Mother Goose, has got us in a real flap. We chat to award-winning panto writer Ben Crocker about the show – and how to write the perfect panto…

Tell us about the story of Mother Goose – one perhaps a little lesser known than Cinderella or Aladdin… It's one of the younger pantomime stories – I think it was first devised for the Drury Lane pantos in around about 1900. It isn't centuries and centuries old like some of them.
Fundamentally, Mother Goose is a lovely lady who looks after geese, but unfortunately a demon – or in our case, a wicked troll named Smorg – tries to ruin her by proving that she is weak like all humans… in her case, her sin is vanity. He aims to cause her downfall by exploiting her vanity. (That's Smorg as in smorgasbord, by the way!)

So, Mother Goose is full of snow and woolly jumpers – does it maybe have a bit of a Frozen vibe to it? We've decided to set it in Norway. We wanted the Northern Lights and a very rich atmosphere and place to set the story. I actually did this independently of Frozen and only afterwards did I see the resonances. But we didn't do that deliberately!

So how did you become a writer of pantomimes? I used to put on the panto in the theatre I ran down in Exeter and I started writing them. My father wrote pantomimes before me, though I never looked at that and thought 'that's what I want to do'. It's just something that happened really, and I'm delighted to have been doing them for the last six years at Chipping Norton.

It's been six years that you've been associated with the panto at Chippy. Which has been your favourite to date? Oh, the current one. The current one is always your favourite! That's the way of things: the one that you're working on is the one you like the best.

Cotswold Homes Magazine

Chippy must have a special place in your heart… It has. It's a lovely theatre and I used to tour to it in the 90s when we ran a theatre company. I think I've acted in it a couple of times, done about four shows there… it is just the most intimate, lovely little theatre.

Are there any panto stars returning to the stage for Mother Goose? Yes! JJ Henry is coming back, who was the dame last year. He's back as the dame again this year – a lovely, warm dame.
Actually, he was one of the reasons I wanted to do Mother Goose, because he was just such a cracking dame. Traditionally Mother Goose always provides the dame with a very substantial role. We loved his Dame Trot in last year's Jack and the Giant, so we're really looking forward to his elevated role.

What ingredients does the perfect panto have? What do you try and bring to these classic tales as a writer? Well, I think that one should always pay a lot of attention to the telling of the story. That's what anchors all the silliness and everything. The script should aim to tell the story clearly and truthfully – that should be the skeleton of the script. Within that, you've got to have loads of opportunities for fun and song… and most importantly, you have to build a connection with the audience. That's the one thing that really makes panto unique.

Finally, what are audiences going to enjoy seeing in Mother Goose? I think audiences are going to enjoy a really funny, intimate show in an absolutely unique venue. The pantomimes at Chipping Norton are always absolutely unique.

Photography by Ric Mellis

IT'S COMPETITION TIME! We are delighted to announce we have free tickets to Mother Goose to give away to a lucky winner! For further details and terms and conditions, please turn to page 6.

Also: 25% off tickets on the following dates! Wednesday 10th December at 4pm and 7.30pm / Thursday 11th December at 4pm and 7.30pm / Friday 12th December at 4pm. TO BOOK YOUR TICKETS, CALL THE BOX OFFICE ON 01608 642350 QUOTING CNT-25

This year brings an 'eggs'-tra week of panto with an extended run – catch Mother Goose 18th November 2014 – 11th January 2015! To book tickets and view times visit

P.S. Don't forget to wear your most ridiculous Christmas jumper – there are prizes to be won! Tweet or Facebook your Christmas jumper pics to @chippytheatre. All pictures will then be displayed publicly in The Theatre bar.
Consider the Sheep

Spare a thought for the humble sheep, the innocuous grazer of the roadside fields. The fleece on its back not only built the wealth of the Cotswolds, but the animal itself transformed English society and even the landscape we inhabit. People and sheep go back far further than most imagine. Radio presenter David Freeman interviews Philip Walling, former barrister, former sheep farmer and now author of the recently released Counting Sheep: A Celebration of the Pastoral Heritage of Britain.

I live in a place called Shipton; there's Shipston-on-Stour, Shipton-under-Wychwood – were these originally sheep towns? They were.

That's a rather expansive answer, for a barrister! I was hoping for something rather more – [Laughs] I'm sorry. Yes, they were. They are all over the place, as the foundation of the wealth of the medieval time. England – and it was mostly England – was sheep country. The entire country was populated by sheep. We have a unique climate, a unique topography for keeping sheep and very little snow cover in any winter – it is very rare for there not to be any grazing for sheep. And, of course, there's enough rainfall to keep things green.

The Cotswolds are so beautiful. Rock stars want to live here, royalty has second homes – how much of the beauty that we enjoy in this neck of the woods is down to sheep? The landscape was shaped and formed by sheep over centuries, millennia – the Celts kept enormous flocks of sheep. It's a misunderstanding to suggest that the monasteries were the first to keep sheep, because they weren't by any stretch of the imagination. What largely happened with the monasteries was that they took over existing sheep-keeping economies, refined them, improved them and enlarged them. I've come across documents in the borders in the Southern Uplands of Scotland showing that the kings of Scotland gave land to monks from northern France.
There were lawsuits in which the locals brought claims against these new monks, accusing them of appropriating common land and preventing them from grazing. They were terribly aggrieved by this. They thought that the monks took over their grazing rights. In a way it's true, but they expanded the grazing as well – they used it as a base from which they expanded all over the Southern Uplands.

There are arguments raging at the moment about re-wilding in Derbyshire, fencing around the land and leaving it, ignoring it. The notion is that somehow the land is better kept in a wild state, and that humans and their sheep are somehow an illegitimate interference with the state of the landscape. But it's a gross misunderstanding, because there were huge Celtic flocks of sheep before the Romans ever came to England. Most of Derbyshire, the Pennines, the Lake District and the Southern Uplands haven't had trees on them for around two thousand years.

Coming back down to the southern area, to Gloucestershire, Oxfordshire and this area – how much of the undulating patterns of land and the views that we see have been formed by the grazing of sheep? Quite a lot of it. The landscape and the shape of fields and the way that land has been used is a result of accommodating land to graze the sheep. Sheep were the essential domestic animal of every country person until, I don't know, 150 years ago. Sheep provided soil fertility, which can be underestimated because poor soils would grow nothing without sheep grazing – the 'golden hoof.' Many areas would be nothing without sheep.

And of course there's the wool, which made people rich, and don't forget that before electricity there was also tallow, which they produced… Which in your book, you say was more valuable than the meat… It was at one time, yes, more than twice the value of the meat. For a period it was worth more than the wool. In your book you list ten things that you get from sheep.
If you were a sheep merchant you might have had quite a spare bit of cash, as all this gorgeous architecture might suggest. Did all that come from trading with sheep? Most of the churches in England were built on the profits of wool. Cotswold churches in particular, but also in Norfolk and Lincolnshire there are fine churches. Wool was like the North Sea oil of its day. Everybody wanted to keep sheep, and you'd find them grazing at the roadsides and at the edges of commons, anywhere you could avoid paying rent – village greens, churchyards (where the grazing was popular as it kept the grass down on the graves). There is a story about the popularity of farming towards the end of the 18th century – a parish rector decided to plough up a graveyard to plant turnips, the new crop, for his sheep. And the bishop came round for visitation and was annoyed with the Rector and said: 'Now look, this must stop, this must change.' And the Rector said: 'Oh yes, it'll be corn next year.'

How much of our local affluence would have come from wool? In the Cotswolds? Farmhouses, villages, churches: almost all of it built on the profits of wool. The more you look into it, the more you realise the central role wool played in the English economy. Consider that almost two thirds of the revenues to the crown of Henry II came from a tax on wool or the profits of wool manufacture.

Was there international trading? We're now learning that Stonehenge might have been a trading centre for bronze-age artefacts. Would a wool-rich area like this have been international, or were all efforts for home consumption? None of it was for home consumption. Most of it was sent to Flanders – and still is; there's an enormous weaving industry in modern Belgium. The crown had the wool staple – all wool had to be sold through that so the crown could levy a tax on it, and it became unlawful to sell wool other than to staple merchants. Originally the staple was in Calais (which we then owned), and the wool would go through Calais and into Flanders and Florence – and those two places took most of the wool from England. The crown wanted to repatriate it and took the staple back to seven or eight towns in England, which didn't work, so it went back to Calais. There came a point where the crown made it taxable to send wool abroad, so the people in Flanders found themselves making a loss. So the Flemings came to England – anybody with the name Fleming or any derivative of that will have come here to set up in Norfolk and various other places.

Witney blankets were made from British wool for centuries, but not any more. So what happened? Competition from other fabrics, cheaper manufacturing methods abroad, lower wages abroad, and there's no great support in Britain for home-grown industry. It's happened over thirty or forty years: there isn't the energy to do it. There is still limited wool manufacturing in the Scottish borders, based on tweed, but even that is suffering.

Was it more important that sheep produce wool than milk, say, or meat? Milk was minimal apart from in the wilder places. There was milk production in Wales. They'd take the lambs off after ten to twelve weeks and milk them for another ten to twelve weeks until their milk dried up – to make cheese for winter, which in the borders was known as 'white meat.' It was hard, almost inedible, but it sustained them through the winter. Milking sheep was not universal, and they didn't eat them. Pastoral people don't eat their animals, contrary to popular conception, but keep them for other reasons: their wool, their manure, their horns, their bone – which could be used to make drinking vessels and utensils.
They'd tend to eat older sheep, but the idea of killing lambs was regarded as horrifying by a lot of primitive pastoral people because they considered it to be wasteful and pointless.

So was there a demarcation between sheep kept for wool and sheep kept for meat? The great 18th century breeder Robert Bakewell had a great aphorism: if you breed them for the wool, the meat will suffer; if you breed them for the meat, the wool will suffer. There is a balance to be struck, but if you strike a balance you get poor meat and poor wool. You have to decide which you want. There was a huge turning point in the 18th century, when we began to be industrialised and people wanted meat, and the advent of cotton meant that there was less demand for wool. By 1750 or 1760 the demand for meat in the growing towns and cities became such that it was recognised that the way forward was to kill young and provide small joints – rather than, as in the old days, to provide huge, barely edible carcasses. The old wool sheep were almost inedible.

One last question, then: what is the message of your book? Isn't this wonderful? Look at this and realise that we are the pre-eminent sheep breeding country in the world and, if we're not careful, we're in danger of underestimating it and losing it.

Counting Sheep by Philip Walling is available at all good booksellers.

Cheltenham Racecourse

A Tale of Two Centuries: Looking back on 200 years of racing at Cheltenham

With a £45,000,000 development at Cheltenham Racecourse expected to be completed by March 2016, racing at Cheltenham has come a long way since its turbulent birth so many years ago. What better time to look back on all that has happened?

Fire and Brimstone

The earliest racing at Cheltenham was savaged by a popular priest – finishing in an inferno before rising phoenix-like from the ashes.

It all began nearly two centuries ago. The first races in Cheltenham were held first on Nottingham Hill in 1815 and later at Cleeve Hill in 1818 as a three-day event supported by Colonel William Fitzhardinge Berkeley (a flamboyant character who was said to have horsewhipped the editor of the Cheltenham Chronicle when he was criticised in its pages). Unlike the racing we know Cheltenham for today, these races were held on the flat; but in 1819, the first race for a 'Gold Cup' was held…

This was boom-time for the town of Cheltenham. Its mineral spas attracted the leisured classes seeking relaxation and remedy. Soon the races were drawing excitable crowds of up to 30,000 townsfolk and travellers. At Cleeve Hill, the racing itself was only the heart of the entertainment: around its fringes, one could see minstrels and Punch and Judy shows, purchase drink, sweets and food and visit a variety of stalls. And, of course, one could put a little bet on.

But some regarded the races with suspicion and disgust, fearing the meetings as a downright evil influence. Cheltenham's new priest, the Reverend Francis Close, was dismayed by the 'guilty revelry' on display and considered it deeply sinful that such sums of money were gambled while inequity existed within the town. Close was a man with an aggressive approach to sin, believing that it should be attacked outright, rather than merely avoided. To his mind, the worldly pleasure they offered was a source of corruption: he regarded the races as 'a torrent of vice.' For him, going to the races was in essence the same as purchasing a one-way ticket to Hell: a precarious first step towards damnation:

'It is notorious that on this occasion numbers of the most worthless members of society flock in from every part of the country to partake in the unholy revelry, and to increase the amount of crime and guilt which is chargeable upon us. And it is scarcely possible to turn our steps in any direction without hearing the voice of the blasphemer, or meeting the reeling drunkard, or witnessing scenes of the lowest profligacy… It was in one or other of these that youthful modesty was first polluted, it was in scenes of this nature that the early bloom of virtue was rudely violated, and every subsequent step which they have taken in the downward road to perdition must be traced to this first aberration from the path of rectitude. Such vices, and such like, more terrible in degree, and numerous as they are terrible, are the consequences of the race week. Many a diligent and affectionate wife weeps in secret over this season of guilty revelry, and curses the day when it was first established. I could tell of many young people, servants and apprentices of both sexes, ruined in body and soul by this destructive amusement. And I verily believe, that in the Day of Judgment, thousands of that vast multitude who have served the world, the flesh, and the devil, will trace up all the guilt and misery which has fallen upon them either TO THE RACECOURSE OR TO THE THEATRE!'

Pictured: Cheltenham Racecourse as it was

The strident sermons of the new priest found an eager audience in his parishioners. Close played to a crowded church and became a prolific pamphleteer, distributing around 3,500 copies of a work entitled 'The Evil Consequences of Attending the Racecourse Exposed.' Though support for Close strengthened, many were repelled by this extremist attitude to a bit of fun.
They recognised the races not as a horrible hub of excess, but as an economic asset. The 142nd edition of The Gentleman's Magazine featured an angry, lengthy rebuttal of Reverend Close's 'ultra-piety':

'…Now, as we do not like killing hens which lay golden eggs, and have friends at Cheltenham who have property that would be deeply injured by the success of his hypercalvinism… As long as there are passions there will be vices, yet if pleasure and passion were not attached to existence, the latter would be a horrible curse… If the suppression of the Races at Cheltenham would put an end to licentiousness and gambling, by all means let the Races be abolished, but as we do not think that demolition of the Strand or Covent Garden would put an end to prostitution… In short, TEMPORAL ENJOYMENTS ARE NOT PROHIBITED, IF THEY ARE ACCOMPANIED WITH INNOCENCE AND CHARITY.'

But the ill feeling turned violent at the annual race meeting of 1829, when Close's followers gathered to hurl insults (and various missiles, including rocks and stones) at the riders and their horses. Bad as this was, the next year was a catastrophe. The night before the meeting, the racecourse caught fire and was incinerated. Whether or not arson had been committed in a bid to stop the races, the course was destroyed: the racing, it seemed, was over.

Reborn at Prestbury

Racing revived in a new location – one that would witness the rise of legends…

But, as it turned out, the destruction of the Cleeve Hill course would only serve as the foundation for the major racecourse located at Prestbury today. With the flat course lost to the flames, Colonel Berkeley turned his attentions to something he considered rather more thrilling: steeplechasing.
The Victorian age saw an emerging interest in jump racing, and such races were held locally in nearby Andoversford from 1834 onwards. In 1865 a course was created in Prestbury Park, but it wasn't until Mr Baring Bingham purchased the course in 1898 that a Grandstand and railings were added. The first steeplechase race took place here in this same year. It was a big investment, but it paid off: a two-day 'Cheltenham Festival' in 1902 drew a huge crowd, and in 1904 a four-mile National Hunt steeplechase saw Cheltenham established as a pre-eminent racecourse. The Roaring Twenties saw the arrival of both the Cheltenham Gold Cup as a three-mile-plus steeplechase in 1924 and the Champion Hurdle in 1927.

After WWII, the Festival was extended to three days owing to racing's burgeoning popularity. 1959 saw the first running of the Queen Mother Champion Chase (then called the National Hunt Two Mile Champion Chase), so named in 1980 in recognition of one of racing's most devoted patrons. A great lover of jump racing and long-term attendee of The Festival, she had 449 winners in a career spanning 50 years. Just as The Beatles formed in 1960 at the beginning of a culturally transformative decade, the Tattersalls Grandstand was opened to accommodate growing crowds. In 1964, Racecourse Holdings Trust was created to secure the development and prosperity of Cheltenham. Now known as Jockey Club Racecourses and wholly owned by The Jockey Club, the trust currently owns 13 other racecourses.

Pictured: Arkle in 1965

Arkle – The Legend of 'Himself'

Record-breaking and rivalry immortalised one of the greatest horses ever to compete at Cheltenham.

Many champions have been made at Cheltenham. But there is one horse whose name is inextricably linked with the course that made his fortune, and that is the legendary Arkle.
On the 4th of August 1960, an unnamed three-year-old gelding for sale at Ballsbridge caught the attention of chaser expert Tom Dreaper, who secured him for Anne, the Duchess of Westminster – widow of the richest man in Britain – for 1,150 guineas. Appearing 'gangly' and unimpressive to those who stabled him at Greenogue, the horse soon demonstrated an aptitude for jumping. His first start over the fences was at Cheltenham in 1962, when he took the Honeybourne Chase by twenty lengths – and in the next year, he repeated the trick at the three-mile Broadway Chase at The Festival.

A great result for Arkle, but the media was buzzing about the formidable Mill House, who had taken the 1963 Gold Cup by twelve lengths: future wins for this strapping character seemed assured. The two first clashed during a dismal November day at Newbury for the Hennessy Gold Cup, which Mill House won by eight lengths, leaving Arkle in third. But it transpired that Arkle and rider Pat Taaffe had a lot more to give.

The stage was set for the 1964 Gold Cup, which was this time level-weighted. The race was even moved from its Thursday slot to a Saturday to give enthusiasts the best chance of catching the action, with Mill House at 8/13 and Arkle at 7/4. Arkle snatched an explosive five-length win and made record time. 'This is the champion,' announced the BBC commentator. 'This is the best we've seen for a long time.' 23 days later, success followed at the Irish Grand National, and the legendary Arkle had arrived.

Mill House and Arkle battled again in the 1965 Gold Cup. This time, Arkle took the race with an astonishing twenty lengths over his rival. And in the very next year, 1966, Arkle scored a hat trick when he finished thirty lengths ahead of Dormant – despite blundering at the 11th fence. With three consecutive Gold Cup wins, at still only nine years of age, Arkle seemed to have much more ahead.
But just 13 days after a Handicap Chase win at Ascot, Arkle fractured an off-fore pedal bone attempting to take a second King George VI Chase. Despite all hopes to the contrary, his career had been dealt a fatal blow. His retirement was announced in 1968 and he was put down two years later, having suffered from pronounced stiffness and lesions. It was a tragic end for one of Cheltenham's brightest. However, with a total of £75,107 in prize money and three consecutive Gold Cup wins amongst his many triumphs, his place in racing history was secured – his standing in the public consciousness only matched by greats such as Red Rum. A bar, a statue and a championship race honour him at Cheltenham.

Pictured: Construction underway on the Grandstand in the early 1980s

The New Age

The dawn of a new millennium brings unprecedented development at the Home of Jump Racing.

Throughout the late 20th century many additions and alterations were made to the fabric of the iconic racecourse, but recent years have seen more invested in Cheltenham Racecourse than ever before, and the Prestbury site has since become much more than just a course. Between 2003 and 2004 around £17,000,000 went to create events and conferencing centre The Centaur, which today hosts concerts and gigs from the best stand-up comedians and musicians. Conferencing facilities continue to host national and local commercial clients attracted to the venue's astonishing capacity and prestige, establishing 'the Home of Jump Racing' as a centre of business as well. (The recent addition of a 2K, 3D-enhanced digital cinema has also made The Centaur a popular destination for moviegoers, with new special screenings of hit films such as Frozen and The Rocky Horror Picture Show.)
In April 2014 construction began on a £45,000,000 development including the construction of a new five-and-a-half-storey Grandstand (to be completed for The 2016 Festival) and the refurbishment of the See You Then bar, Weighing Room and Horse Walk. (Despite the scale of the improvements, no construction work will affect the racing.) Nowadays The Festival itself is worth an estimated £50,000,000 to the local economy, becoming a much greater asset to the region than the naysayers of two hundred years ago could ever have imagined. 5,000 people are employed over the course of The Festival, which boasts the largest tented village of any sporting event. Around 214,000 pints of Guinness and 18,000 bottles of champagne are typically consumed, indicating that the races are still a time of great revelry… And with around £6,000,000 in prize money to be won by champions old and new at Cheltenham every year (with £3,670,000 of that awarded at The Festival alone) there's never been a more exciting time to spend a day at the races. Find out more about Cheltenham Racecourse at

This December, Cheltenham Racecourse Hits the Slopes

Ski party finishes off a year of racing thrills

During The International on Friday 12th and Saturday 13th December, Cheltenham Racecourse and Rock the Cotswolds will be throwing an 'Après-Ski' party which will be open to all racegoers. Created in partnership with Rock the Cotswolds – a campaign that aims to increase recognition of the many entrepreneurs, creatives and innovators native to the area – the ski party is a fresh, exciting addition to an already thrilling day at the races. Featuring a ski simulator, Jacuzzi, photo booth and plenty of dancing, the party will let racegoers see out the day in seasonal style. For those who have maybe never considered a day out at the races, the party will demonstrate that the Sport of Kings is really all about fun.
Ian Renton, Regional Director of Cheltenham Racecourse, said: "We're rounding off an incredibly successful year by holding a party like no other. We love that Rock the Cotswolds is showcasing the cool, creative and gregarious side of the Cotswolds, so we've invited them to help us celebrate the sociable side of racing: racing is one of THE most fun sporting days out with friends. We have a fair few younger racegoers who often come racing here, but the Après-Ski party will be enjoyed by all – they really will feel like they've stepped into an Alpine resort."

Pictured: Ian Renton, Cheltenham Racecourse Regional Director and Oli Christie, Founder, Rock The Cotswolds

For more information and to book tickets to the Cheltenham Racecourse Après-Ski parties, contact Cheltenham Racecourse on 0844 579 3003 or visit. Tickets will go on sale on Friday 14 November and will cost just £10 in addition to the normal Club/Tattersalls tickets, which cost £22 on Friday 12 and £25 on Saturday 13 when booked in advance. The Rock The Cotswolds Après-Ski Party will be open from 2.30pm until 7.30pm, with the last race expected to take place just before 4pm on both days.

Sam Twiston-Davies: A Super Star in the Making

Columnist for the Racing Post, Stable Jockey for Paul Nicholls – It's All Happening for Sam!

The Showcase on Friday 17th October was the very first day of the new season's racing at Cheltenham and proved to be a great one for Paul Nicholls, Sam Twiston-Davies and his father Nigel, and equally Sam's second-season sponsors, local estate agency Harrison James & Hardie. Having recently become Paul Nicholls' stable jockey, Sam got off to a flying start by bringing back the trophy on Vicente in the first.
Then came an equally splendid win on Sybarite for Nigel, followed by a very honourable second place in the fifth – much to the delight of the directors of Harrison James & Hardie, who had also sponsored that race. "Sam could not possibly make us more proud than we already are, especially with all the attention and coverage he is getting, but more so because he's a fabulous chap and he deserves all his success," said Principal Director James von Speyr. Sam has since gone on to similar success with spectacular wins at The Open.

In the Jockey Club's official preview magazine The Open, Paul Nicholls and Sam talk candidly about their working relationship. Paul Nicholls is looking forward to the new season as he prepares to rebuild a stable following the retirement of legendary greats Big Buck's, Tidal Bay and Celestial Halo.

*Photo and interview reproduced by kind permission of Sophia Brudenell, Executive Editor of The Open – a Racing Post production for Cheltenham Racecourse

"I've won four Gold Cups, eight King Georges, a National and I've no intention of stopping there," says Nicholls to Steve Dennis. "This summer we've put in a new gallop, it's all about the future and I'm as enthusiastic as I've ever been to get on with forging a new team and having another successful season."

Nicholls goes on to reveal: "I've been very impressed with Sam from the time he started as an amateur, impressed with the whole package, the way he talks, everything. There were strong rumours that [owners] Simon Munir and Dai Walters were about to offer Sam a retainer and I had to act, to move quickly to bring him on board..."

He concludes: "There are a lot of young horses here, a young jockey, so much potential for the future.
I can’t wait.” Dennis likewise reports back to The Open readers: “Sam is as keen to embrace the coming season as his new boss… Enthusiasm, youth and talent make an irresistible force undaunted by any notion of an immovable object.”

Sam confirms this is his dream job, that he is in a hugely privileged position and intends to “grab [his] chance with both hands”. He will still be able to ride for father Nigel when he can, including his beloved The New One for the Champion Hurdle, but he is very clearly inspired and thrilled by the quality and the future prospects of the horses at Paul Nicholls’ stables. “I’m still getting to know them all, but schooling the novice chasers has been wonderful fun. It’s very hard not to get carried away with them…” He also acknowledges the stresses of expectation on his young shoulders but embraces the challenges that his new position brings: “Yes, there’s more pressure now. As you step up, people put pressure on you to succeed in the same way as you put pressure on yourself… It’s the pressure every jockey wants - it improves you, it makes you a better person – and it goes with the territory, where I am now, where I want to be.”

Equestrian Lady: Olympia – An Olympic Event

Other than the Olympics at Greenwich, London is not generally thought of as a natural home for equestrian competition. Every Christmas time, though, the Olympia Exhibition Hall in Hammersmith becomes just that, putting its lofty space to use hosting one of the biggest dates in the equestrian calendar. Our Cotswold equestrian correspondent Collette Fairweather looks for racing fun a little further afield…

Olympia, The London International Horse Show, is held from the 16th to the 22nd December and accommodates the finest display of equine disciplines in an enormous, purpose-built central ring. Towering seating encompasses this central stage, ensuring a fine view for even the shortest of visitors in the most cost-effective of seats.
Never has an uninterrupted view been more important when you are watching the equestrian equivalent of a variety performance. And although the Queen is not on the guest list, a very jolly fellow with a wispy white beard and a dashing crimson suit takes great pride in making an appearance every night!

40 Cotswold Homes Magazine

The matinee and evening performances are compiled of a range of classes. The Metropolitan Mounted Police are one of the many segments, and rather than a display in crowd control, horses will be showcasing their bravery, jumping through fire rings and threading through one another at full-out gallop, whilst riders simultaneously remove their saddles and hoist them triumphantly above their heads.

Another marvellous sight is the Shetland Pony Grand National, where miniature ponies scurry over fences at an astonishing rate, each sporting breed-specific bouffant manes and long-limbed riders in bright jockey colours. These steely-eyed mounts ride as if it were derby day itself, cutting the inside track, little legs going like the clappers, each urging their mount’s flaring nostrils over the line first.

Although the Olympia International Horse Show offers a diverse array of entertainment within its programme, I - like most visitors - make the pilgrimage to watch the very best competition horses in the world. For those keen to see other four-legged creatures strut their stuff, the Kennel Club put on a high-fuelled dog agility display, testing canine speed and wit, and the fitness of their handlers!
Photos: Kit Houghton

World Cup horse and pony driving is a highly anticipated class, with carriages that seem to defy gravity as they weave through their obstacle courses in heated competition for the fastest times. Olympic-standard dressage displays are a tonic for audiences drained of adrenaline. Here horse and rider work in perfectly synchronised partnership, producing fluid combinations of the most technical and intricate moves. Calm and controlled, it never fails to impress when you consider the only communication is between hand and leg.

I am always delighted by the wonderful variety in the classes at Olympia (half of which I don’t have the room to mention) but I go for the show jumping. I sit in awed silence as the greatest show jumping horses in the world tackle the most formidable of tracks: a full course of fences a metre and a half high, and the same distance wide, leaves precious little room for error for horse and rider (but nail-biting delight for the audience). The puissance class is the real crowd-pleaser. Horse and rider are pushed to their jumping limits - with each round, the red brick jumping wall grows. Challenging the world record, horse and rider fearlessly tackle heights of over two metres!
If you can drag yourself away from the packed programme, a supporting shopping village will provide the opportunity to stretch your legs and browse new products, grab an autograph from some of the featured riders, or peruse the stalls and grab a few last-minute Christmas presents for both two- and four-legged friends. Olympia is a show to rally your inner equine enthusiasm at a time when the majority of owners question their sanity. In my opinion, it’s a family day that’s undeniably worth the effort of a commute - a marvellous show the weather can’t threaten, an international competition adorned with tinselled festive cheer.

For further information and tickets call 0871 230 5580. To be in with a chance of winning one of the two pairs of free tickets we are giving away, visit our Cotswold Homes competition page.

Michael Caines: Past, Present and Future

Culinary genius Michael Caines is a superstar of his trade and certainly not to be confused with his illustrious namesake – although, as Collette Fairweather finds out, a biopic of his life would make for a rather good movie…

Adopted into the Caines family at just six weeks old, Michael was the youngest of six siblings. Growing, eating and cooking were a family affair, and it was around the kitchen table that the seeds of his interest were sown. With an unruly enthusiasm Michael enrolled in catering college, and despite a turbulent first year graduated with the ‘Student of the Year’ award in 1987. After cutting his teeth for eighteen months in London at the Grosvenor House Hotel, Michael seized the opportunity of a position at Le Manoir aux Quat’Saisons in Oxfordshire, after a chance meeting with its founder Raymond Blanc. Under the guidance of Monsieur Blanc, Michael honed his natural instincts and knowledge, encouraged to make his own mark rather than to hide behind the safety of established dishes.
After three years, and with Raymond’s recommendation, Michael headed to France to work with some of the greats of gastronomy. After a year, home beckoned in more ways than one. Gidleigh Park in Exeter was recruiting for a new head chef, and they wanted Michael. However - left exhausted after only two months in his new position - he fell asleep at the wheel whilst driving to a family christening. The accident saw him lose his right arm from the elbow down. Had it not been for the former military doctor who happened upon him, Michael would have lost his life. But fortune favours the brave, and after a mere four weeks he was back in the kitchen. His fortitude was honoured four years later when he was rewarded with a second Michelin Star, and his consistency has kept that second star for the last sixteen years, and seen Gidleigh Park named the number one restaurant in the Sunday Times Food List last year.

Today we find ourselves sliding about on the silky sofas of Lower Slaughter Manor, the venue for tonight’s evening with Michael Caines MBE (with rations of tea and delicious home baked biscuits). Michael is forthcoming, his voice soft yet self-assured. ‘The idea is to extend and promote what the Gidleigh collection is here at Lower Slaughter Manor and Buckland Manor too. I cook my signature dishes from Gidleigh Park with the wines that have been specifically chosen to accompany those dishes; I go out with each course, talking through each dish. So it’s almost like a relay. It’s very much about the synergy of food, wine and service.’

I wonder, how au fait is Michael with our beautiful Wolds? ‘I have lots of friends in the Cotswolds and have hosted a selection of other demonstrations in the area. When I was at Le Manoir, I often came over this way; it’s a beautiful area, particularly the villages. I even used to come up here and take a B&B for the weekend.’ ‘It’s a great region for food.
The Fosseway has played a huge part with these old coaching houses, so it has developed quite naturally. A great strength of the region is great produce combined with great restaurants. It’s a real draw, and we struggle to match that quintessential feel in other counties.’

I must agree: I certainly feel lucky as an unabashed foodie to live in this neck of the woods. However, I wonder if Michael is aware of the challenges the independent producers of the area face. ‘I hate the lack of opportunity that independents get,’ he says, ‘and to get these producers into cities. In the old days market towns were just that. You go to France and they still have market days; in the UK, where the markets used to be, it’s now tarmacked over for parking. And then we complain and moan that the local economy isn’t doing well and that nobody actually buys any of these local products.’

‘The access to regional food has grown, and so provenance is key, sustainability is key, and the best way to sustain your local economy is to buy locally, to keep farmers farming. Agri-tourism in Italy or France is such that you can create food memories that are specific to that area. It’s what going to new places is all about - it’s something that we should really be pushing in this country.’

‘I grew up in a household where we ate around the table, and it was something that we really looked forward to. We took the time to cook the food and people used to have the time and take the opportunity to cook as a family. [Now] the household is such a different place, and the pressure of time means that people are time-poor and they opt for convenient options.’

Drawing himself up in his seat he adds: ‘People believe that we are poorer than other nations, and part of that problem is that we don’t have the time. In France, Italy, Spain food is such a large part of their culture. So therefore, they take the time to make the meal time a part of their day. We don’t give ourselves the quality of life I think we deserve and I think that in general we all need to slow down and take time and enjoy the moments that life affords us.’

‘And people must understand that the true cost of living is worth paying for.’

‘I thought that everyone sat down and ate as a family, as I did in childhood, and I realised all too soon that it wasn’t so. We shouldn’t abandon these civil things; if you’re sat at a table you are engaging in conversation - and each other.’

I feel a swell of pride, in that this chef who has built his reputation in fine dining, exclusive by its very nature, remains grounded by the inestimable value of simply sharing food. ‘I love food and culture and travel, but I am equally in love with lifestyle and food is a lifestyle. It is accessible to everyone on all different levels - if only we gave it priority.’

He adds with a sly smirk: ‘I like to think a lot of the world’s problems could be sorted out over a good meal.’

We move on to his new book, Michael Caines at Home. Having salivated over the recipes and delectable photography, I really felt a strong, seasonal influence, and wonder if perhaps this harkens back to his own training whilst under the wings of Raymond Blanc.

‘There are two real ‘keys’ that I took away from Le Manoir, one being the use of the seasons and the other the importance of your palate - something I emulated when it came to the book.
I also tried to instil the things that interest me when I’m cooking - my focus has always been the produce, through the regions and through the seasons.’ ‘The history of produce is equally fascinating. I wanted to do a book that led with the ingredients, and when you do that the seasons obviously come into it.’

I wonder how one makes a start. ‘Well for me, I started with alcohol, because it’s always the best way to start!’

Gathering his thoughts, he adds with a sober tone to his voice: ‘What writing does is it takes fine dining - an exclusive place to get to - and makes it accessible. You can go to my restaurant and experience a few dishes, but a book will give you the ability to taste a wealth of different recipes and taste combinations, and if they are well written they will give you the confidence of execution. There is definitely more than one book in me, but the problem is time, and also that I felt a little constrained by the format of books.’

‘I am also very interested in travel and the origins of food, and if that involves me having to travel the world to research it, then so be it!’ And again, that wry smile into his teacup: ‘I think my biography would be an interesting story to write one day.’

I get the distinct feeling that we have met on the cusp of a new chapter in Michael’s life. This is confirmed somewhat with his next reflective statement: ‘There’s a great expression that I heard from an entrepreneur once – “the best time to plant a tree was twenty years ago, but if you didn’t plant it then, well it’s today”.
And it makes me think that I need to plant my trees in my own garden, not someone else’s.’

So tell us about your saplings… ‘Earlier in the year I stepped back from being the director of food and beverage. Although I still work within Lower Slaughter Manor and Buckland, I need to get my direction back as an individual rather than as part of someone else’s concept.’

‘I do have a meeting after this one about a project in this area.’ What a tease!

‘What is important is that I’m looking around for opportunity. I’m ready for a new adventure. I don’t want to be everywhere and nowhere, but I am keen to get involved in new ventures.’

‘You have to be careful that with reputation comes expectation. It is important that you start with understanding what you want to create without compromising in terms of delivery, but at the moment I’m looking for conversation and seeing what materialises from that.’

How much of an adventure? Surely he will stay with Michelin-starred fine dining? ‘Cooking in a two star Michelin can be quite restrictive, whereas cooking in a different market leaves you freer in the style of cooking, which sometimes is quite refreshing and appealing.’

One venture he’s a hundred percent committed to, however, is Williams Formula One and his passion for motor sport. Michael looks like the cat that got the cream as he describes this glorious combination. ‘I’ve been working with them for four seasons - I was introduced by a friend, who thought it would be interesting to introduce an element of focus on the hospitality at a time when the team weren’t performing so well on the track. I was staggered to see the determination to create quality, but I could see the restrictions within their space and their repertoire.
I helped to put a focus on creating what is considered to be the finest hospitality within the paddock of F1, with what I can only describe as a pop-up restaurant: it’s like a travelling circus really!’ With pride he adds: ‘It feels very special to say I’m part of the Formula One team.’

Special feels like the right word to describe Michael. Adversity seems to have fuelled him in his pursuit of excellence and, whatever the next project is, I’m sure of one thing: it won’t be ordinary.

New Winter Menu – to book call 01451 870210. Choose your favourite meat or fish dish when booking your table. Your table is yours for the whole evening. Tables of five and larger parties can be catered for.

Specials
Scottish Langoustines…£8.00
1/2 Dozen Oysters…£8.50
Soft shell crab, sweet chilli & soy sauce…£7.50
Fish/Shellfish chowder…£9.00 start, £15.00 main
Tempura prawns with dipping sauce…£7.50
Lobster with sweet white crab meat, cucumber, apple and mayo dressing…£15.00
Moules Marinière, crusty bread…£7.50/£13.00
Aberdeen Angus Prime Fillet Steak…£20.00
Whole 18oz Dover Sole…£19.00
Wild Sea Bass…£17.50
Rack of Cotswold Lamb…£18.00
Classic Lobster Thermidor…start £15.00 or main £25.00
Supreme fillets of John Dory, infused in a creamy light sauce with scallops…£20.00
Roast Local Partridge, wrapped in bacon and sage, Chef’s red wine jus…£19.00
Grilled Swordfish…£15.00

How to Throw Your Own Cotswold Cocktail Party

This festive season, why not entertain at home with your own cocktail party using Julia Sibun’s definitive guide. What could be more fun than inviting your family and friends to a festive soirée at home during the holiday season? It is a wonderful opportunity to get together with friends and also to say thank you for all those summer barbeques and dinner parties during the year!
First send your invitation in good time, perhaps four to six weeks ahead in the festive period, as everyone usually has the same idea to entertain during the holidays – and be sure to include an RSVP date, which will give you time to plan your evening according to your guest numbers. Don’t hesitate to call people for responses, as knowing your numbers will help to ease the planning. Make sure you are aware of how many people you can take in your home at one time!

You will need a space cleared in the house, whether that is the drawing room, conservatory or any other easily accessible downstairs room – but as we know the best parties always take place in the kitchen! If you are worried about available space, fear not, because small frame marquees can easily be attached to your drawing room or dining room – and with a little bit of additional lighting and decoration they are an extremely quick and easy way to extend the house for the weekend.

There are usually many decorations around the house at the holiday time of year, but quick ways to create a party atmosphere include ensuring that you’ve plenty of spice-scented candles in tea-lights, and hurricane lamps in clusters on tables and window-sills. Also why not bring some of the woodland indoors with natural branches and then hang from them sparkling icicles – this creates a gorgeous frosty look! And don’t forget fir cones and spruce will give you a festive perfume around the house, and another added touch is to have a bowl of hyacinths or paper white flowers in one of the entertaining rooms to give your guests a glimmer of the season ahead. If you are lucky enough to have open fireplaces in the house – make sure they are lit and crackling for guests on arrival – one quick way to warm your guests as soon as they walk through the door.

Give thought to the food and drink that you would like to serve to your guests.
If it is a particularly cold evening why not serve mulled cider or mulled wine on arrival – then you can always follow up with serving champagne or Prosecco, because it is festive and offers so many mixing opportunities. How about an Elderflower Bellini, a totally delicious drink using Prosecco and a drop of St Germain Elderflower Liqueur, or a Peach Bellini with peach juice. It is also fun to have a couple of delicious signature cocktails such as a Negroni or a Spiced Cosmopolitan – please see the special festive Spiced Cosmopolitan recipe below from Matthew Jones, General Manager at Wesley House.

Don’t forget the drivers on the evening and have a couple of jugs of non-alcoholic cocktails available such as Virgin Moscow Mule, Ginger Pineapple Sparkling Punch or a delicious simple elderflower and pomegranate presse decorated with pomegranate seeds. Nearly all wine merchants sell their wines, drinks and champagne on a sale-or-return basis so there will not be a problem of over ordering – no-one wants to run out of booze on the night of their party!

Food needs to be small, delicious and easy to eat whilst holding a drink and chatting to friends – a few ideas for easy-to-make at home canapés are spiced parmesan biscuits, fresh white crab mayonnaise en croute, salmon on blinis with sour cream and dill, asparagus wrapped in cured salmon with parmesan shavings, mini glasses of hot butternut squash soup, sticky Asian spiced pork belly, prawn and chorizo skewers and bloody mary shots with horseradish. Head Chef at Wesley House, Rob Owen, gives his favourite festive canapé recipes below.

Plates and trays for the service of the food can look appealing by being decorated with colourful flower heads, herbs, painted fir cones, banana leaves and other small Christmas decorations. If you are passing canapés with a dipping sauce place a thin citrus slice under the sauce cup to prevent it from sliding around on the tray. Don’t forget to stock up on plenty of small colourful cocktail napkins as well! Where possible make as much as you can ahead of the date and freeze – this will save time leading up to the date of the party and will keep the food preparation as stress free as possible.

Pop on some background music to help your guests relax and create the perfect atmosphere, and be prepared to have a few good dance tunes on the playlist as guests normally love to end up having a “boogie” in the kitchen at the end of the evening! If dancing is definitely on the cards why not rent a jukebox?! Live music can also make a wonderful difference to the occasion – perhaps a small choir singing on the landing or musicians playing upbeat and festive music in the hall. Student musicians are an excellent source for quality and affordable entertainment.

If it is a dry evening you may like your guests to spill out on to the terrace – so have at the ready a roaring brazier or two, or even one of the authentic fire bowls keeping the home fires burning! Towards the end of the evening have coffee available for any guests who may need a little assistance at the end of the party – also your local taxi company number should be handy and offered to guests who do not wish to drive home. Outside garden flares lining your driveway and festoons of outdoor lights in the trees will add a final festive party touch to finish a wonderful evening as you say farewell to your guests.

Julia Sibun, Director, Wesley House Events

Spiced Cosmopolitan
Ingredients: 25ml white rum; 25ml vanilla vodka; 30ml cranberry juice; 10ml lime juice; 10ml sugar syrup; 1 clove, 2 cardamom pods, ½ teaspoon cinnamon (either muddle in or use as syrup instead of the sugar).
Method: Shake well and strain into a martini glass. Garnish with green or red apple slices.

Mushroom Arancini stuffed with Mozzarella
Ingredients: 1 chopped onion; 2 cloves garlic, finely chopped; 225g chestnut mushrooms, finely chopped; 350g arborio rice; 150ml dry white wine; 1.2 litres hot vegetable stock; 100g plain flour; 2 large eggs, beaten; 25g butter; 2tbsp chopped parsley; 75g mini mozzarella balls; 130g breadcrumbs; vegetable oil.
Method: Soak the porcini mushrooms in hot water for 10 minutes, and then drain well. Heat the oil in a large, heavy based saucepan and add the onion and garlic – fry over a gentle heat for 2-3 minutes until softened. Add the chestnut mushrooms and fry for a further 2-3 minutes, until browned. Stir in the rice and coat with oil. Pour in the wine and simmer, stirring, until the liquid has been absorbed. Add a ladleful of the stock and simmer, stirring again, until the liquid has been absorbed. Continue adding the stock until all the liquid has been absorbed and the rice is plump and tender. Leave the risotto to cool. Drain the mozzarella balls – press 1 ball into 1 tbsp risotto and shape to encase the cheese – repeat. Put the flour, eggs and breadcrumbs onto 3 separate plates – roll the risotto ball in the flour, then the egg and finally the breadcrumbs. Put onto a plate and chill for 30 minutes. Fill a medium pan with 5cm of oil then heat until a cube of bread turns golden brown in 30 seconds. Fry the balls for 3-4 minutes until golden brown. Serve hot.

Bourbon Glazed Pork Belly
Ingredients: 200ml bourbon whiskey; 1 star anise; 2tbsp soy sauce; 2tbsp clear honey; 800g pork belly; 4tbsp tomato ketchup.
Method: Preheat the oven to 160C/140C fan/Gas 3. Pour 100ml of the bourbon into a small shallow roasting tin and add the star anise. Season the pork belly, put in the tin and cover tightly with foil. Bake for 3 hours. Remove from the oven and leave to cool for at least 1 hour. Heat the oven to 200C/180C fan/Gas 6. Remove the pork from the tin. Using a small, sharp knife, pare away the rind from the meat and leave a small layer of fat. Slice into 2cm chunks and return to the tin.
Roast for 20 minutes until crisp and sizzling, turning regularly. Meanwhile, place the remainder of the bourbon, soy sauce, honey and ketchup into a small pan and bring to the boil until thick and syrupy. Pour over the pork and coat. Roast for a further 10 minutes until sticky.

Hillside, Bourton on the Hill – Guide Price £700,000
A lovingly restored Grade II Listed village property with an abundance of character features including the large, original bread oven and a wealth of recovered historical artefacts, dating back to as early as 1815. The property also boasts beautifully landscaped gardens, flexible accommodation space and off road parking.
Entrance Hall | Sitting Room | Dining Room | Family Room | Kitchen/Breakfast Room | Utility | WC | Five Bedrooms | Two Bathrooms | Garden | Off Road Parking | EPC Exempt
Fine and Country Harrison James & Hardie, Moreton in Marsh 01608 653 893

1 Red Lodge, Little Compton – £550,000
A stylishly presented Victorian property situated on the edge of this sought after Cotswold village and benefitting from a self-contained annexe.
Entrance Hall | Kitchen/Breakfast Room | Sitting Room | Family Room | Utility Room | W.C | Master Bedroom with En-suite | Two Further Double Bedrooms | Shower Room | Office/Music Room | Workshop | Open Fronted Double Garage | Driveway and Garden | Annexe comprising Bedroom | Shower Room | Sitting Room with Kitchenette | EPC Rating: D
Fine and Country Harrison James & Hardie, Moreton in Marsh 01608 653 893

Moreton in Marsh | Bourton on the Water | Stow on the Wold | Mayfair | Lettings

1 Pinchester Cottages, Little Compton – £475,000
A well-proportioned Cotswold stone period home, located in a quiet backwater of this traditional Cotswold village. Offered with no onward chain.
Entrance Hall | Sitting Room | Dining Room | Study | Kitchen/Breakfast Room | Utility Room | WC | Two First Floor Double Bedrooms | Bathroom | Second Floor Double Bedroom (with additional room ideal for dressing room or study) | Parking | Garden | EPC Rating: E
Fine and Country Harrison James & Hardie, Moreton in Marsh 01608 653 893

3 & 4 Manor Farm Cottages, Donnington – £420,000
A traditional Grade II listed Cotswold stone cottage located in a quiet backwater of this highly sought after and picturesque Cotswold village. The property offers scope to improve and extend (subject to the necessary consents) and benefits from a generously proportioned and secluded rear garden.
Kitchen/Breakfast Room | Sitting Room | Family Room | Two Double Bedrooms | Mezzanine Bedroom | Bathroom | Garden | EPC Exempt
Fine and Country Harrison James & Hardie, Moreton in Marsh 01608 653 893

Country Homes from harrison james & hardie

Wyck Hill Lodge, Nr Stow on the Wold – Guide Price £795,000
A delightful and traditionally styled Cotswold stone Grade II Listed lodge, benefiting from a mature and well-tended garden extending to approximately 0.441 acre with exceptional views over the adjoining countryside, stable block and paddock extending to approximately 1.065 acre.
Entrance Hall | Drawing Room | Study | Dining Room | Kitchen | Utility | Bedroom | Ensuite | First Floor Master Bedroom | Ensuite | Second Floor Guest Bedroom | Ensuite | Parking | Gardens | Stable Block | Paddock | EPC: Exempt
Fine and Country, Harrison James & Hardie, Bourton on the Water 01451 824 977

Oxleigh & Honeysuckle Cottage, Bourton on the Water – £720,000
A pair of semi-detached cottages with ancillary accommodation, used for many years as holiday cottages.
Oxleigh - Entrance Porch | Sitting Room | Dining Room | Kitchen/Breakfast Room | Cloakroom | Utility | WC | Three Bedrooms | Bathroom | Shower Room | Garden | Parking | EPC Rating: E
Honeysuckle Cottage - Entrance Porch | Sitting Room | Dining Room | Kitchen/Breakfast Room | Cloakroom | Utility | WC | Three Bedrooms | Bathroom | Garden | Parking | EPC Rating: D
Annexe - Sitting Room/Bedroom 2 | Reception/Laundry | Kitchen/Breakfast Room | Bedroom with Ensuite | Bathroom | Garden | Parking | EPC Rating: D
Fine and Country, Harrison James & Hardie, Bourton on the Water 01451 824 977

3 Greystones, Cold Aston – £395,000
A well-presented Cotswold stone period two double bedroom cottage situated in the desirable village of Cold Aston. The cottage benefits from a sunny patio, garden and off road parking, perfect as a second home/investment or to be enjoyed as a main home within this reputable village. No onward chain.
Entrance Hall | Cloakroom | Dining Room/Potential Third Bedroom | Kitchen | Sitting Room | First Floor Main Bedroom with Ensuite Bathroom | Second Bedroom | Family Bath and Shower Room | Off Road Parking | Patio Terrace | Garden | EPC Rating: G
Fine and Country, Harrison James & Hardie, Bourton on the Water 01451 824 977

Honeysuckle Cottage, Condicote – £325,000
A charming two bedroom stone cottage set in a rural position offering views over farmland, near to the unspoilt village of Condicote and market town of Stow on the Wold.
Entrance Hall | Open Plan Sitting Room/Kitchen/Breakfast Room | Two Double Bedrooms | Bathroom | Parking Area | Small Garden Area | Raised Terrace | EPC Rating: E
Fine and Country, Harrison James & Hardie, Bourton on the Water 01451 824 977

Ask the Experts: Ethical Estate Agency? – Karen Harrison
Q: We have an old house with an acre garden but need to move to something much more manageable. We asked to look at an ideal property with a local agency but were very taken aback to be told that they would not consider a viewing unless our own house was already on the market. Apparently they have a cash offer, albeit not at an acceptable level. The agent suggested she might persuade the vendor to wait for us if we instructed her company to sell our home. Is this even remotely ethical behaviour?

A: All estate agents hope to gain new instructions on the back of a particularly desirable local property and I would venture there is nothing wrong in chain-building, per se, when it ensures the best possible outcome for the existing vendor. Indeed, it seems sensible in a rising market to encourage the vendor to wait whilst you try to sell, given your house is particularly desirable and likely to go under offer very quickly. When there is competition, this is likely to raise the bar on the eventual sale price, after all. Being in a comparatively weak position, though, you might feel the pressure to offer close to the asking price in order to persuade the vendor to wait. Equally, this might force the cash buyer to make a higher offer in response - all good strategic play as far as the agent is concerned, right the way to Best and Finals!

In this situation, you are effectively the cat’s paw, and do bear in mind that the agent is obliged at all times to work in the vendor’s best interests. That’s the important caveat – not your best interests, nor the agent’s best interests, either. The ethics of representation can get complicated when instructing a company that already has the house you want to buy unless they are scrupulously careful, and it’s almost better not to use them if you have a whiff already of unethical behaviour. An agency most certainly should not oblige you to instruct their company nor make it a condition of viewing the property in the first place.
This is effectively blackmail, by implying they will put you in a stronger position only if you give them your business, and the irony is they probably can't and won't. Don't give in to such tactics. They would like your property on their books, of course, but you might very well lose out on this particular property and then find yourself tied in to their agency for another three to four months. At the very least, negotiate only a short tie-in period on one-off viewings rather than a sole agency, but also seek out another couple of reputable local agents before you make any decisions about who should sell your own home. It really is important to seek recommendations and do your research. Does the agency regularly sell properties of a similar value to your home, therefore having plenty of registered applicants in that specific price range, or will they have to rely on website listings to generate interest? Will they have the sophistication of approach at your price level when it comes to negotiating the sale on your behalf? Discover the relative merits between apparently similar agencies; look beyond the shop front and online presence to the service standards they actually deliver, by calling in and registering with each agency. Who is most experienced, knowledgeable, friendly, helpful and reliable? Who is most straightforward, honest and, yes, ethical in their approach? Which agency, ultimately, do you feel has the best marketing strategy, know-how and service standards to sell your house at the best possible price? No matter how much pressure you are under to avoid disappointment with this property, never succumb to a lesser agency just because they happen to have the property you want to buy.
The better the agency's position within the local marketplace, not at a regional or national level, the more skills and contacts they will have established to find you a buyer at the best price in the best possible timescale. Even if you do miss out on this particular property, don't despair. It might not feel like it, but I promise there will always be another house, equally suitable, possibly even better. The more able the estate agency, the more they will know about other potential properties for you and the more likely they will be to secure your perfect home in the end, even if it is not the first one you happen to see. HOT PROPERTY - ASK THE EXPERTS Ask the Experts: The North Cotswold Property Market "The better the agency's position within the local marketplace, the more skills and contacts they will have to find you a buyer at the best price in the best possible timescale." Karen Harrison, Ask The Experts, Cotswold Homes Winter Edition 2014 By the end of October, independent estate agency Harrison James & Hardie had already agreed over two hundred house sales and let another hundred homes throughout the North Cotswolds, from small apartments to grand country pads. Renowned amongst its competitors in the North Cotswold marketplace, the company has maintained pole position for well over a decade since it was launched as an independent at the turn of the New Millennium. This year, the number of agreed sales (excluding new homes sites) has actually outstripped the instructions the company has listed, underlining why the company is the local agency of choice right across the price range. Karen Harrison explains why this year has been "the best ever for our company". 'We always hope to provide an exemplary standard for the local marketplace.
We launched with guiding principles that remain just as important today - a commitment to the most up-to-date technology and innovative marketing methods, backed up by superlative service standards provided by a team of hardworking, experienced, friendly people offering in-depth knowledge of the marketplace. Everyone who joins our company is expected to work to the same high standards, gaining professional qualifications and membership of NAEA or ARLA. Apprenticeships and on-going training are key factors in our consistency – staff turnover is exceptionally low. We have an impressive 150 years' experience of the North Cotswold marketplace between us, despite an average age of only 32!' 'In addition to a great sales and lettings team, we ensure the widest possible promotion online for all our properties, featuring on major property portals such as Rightmove, Prime Location and Zoopla plus associated sites such as the Sunday Times, London Evening Standard, Telegraph and Daily Mail, and another seventy portals via the Guild of Professional Agents and Fine & Country. Combined with Cotswold Homes as another unique sales tool, we utilise every available avenue on behalf of our clients to market all our properties in the best possible way from local to international level.' 'For example, the London market has become increasingly important to our success – we have an exclusive licence in the North Cotswolds with Fine & Country and regularly hold Cotswold property exhibitions in the Mayfair office. By collaborating with 240 independent agencies in the UK and abroad, Fine & Country provides us with an exceptional upper quartile marketing strategy, garnering huge acclaim and winning many important industry awards over the last few years.' Given their sales and lettings results in 2014, how does the local market now compare with the height of the property boom in 2007 and what does the New Year hold in store?
James von Speyr says: "Despite much tighter lending criteria, a lack of stock is really helping to drive prices slowly upwards across the board, so buyers will continue to fight for the best new instructions and will start to consider offers on properties that have been lingering, too. But much depends on the political landscape after the general election. "If mansion tax becomes a reality, this will be an immediate game changer in London and the effects will ripple out to the Cotswolds' upper quartile sector. Similarly, if Help To Buy incentives are withdrawn or if interest rates start to rise, these changes will slow things down again in the residential sector. My advice is to stop waiting for the right moment and just get on with it. The going is good now on both sides of the fence, whether buying or selling, and one thing is for sure - such excellent interest rates won't last forever." Caroline Gee is confident, meanwhile, about the Lettings market in 2015: 'Young couples are really struggling to save up enough money for a deposit to buy, snapping up two and three bedroom properties to rent in the meantime,' says Caroline.
'We have a managed portfolio of over a hundred homes, and it takes an average of only two viewings to let any property that we advertise, so I predict that investment will be strong in Lettings during 2015.' Should you be thinking of selling or letting your property, for further information and to book an appointment to discuss marketing with Harrison James & Hardie, please telephone: Fine & Country North Cotswolds and Residential Sales: James von Speyr: 01451 822977 Karen Harrison: 01608 651000 Residential Lettings: Caroline Gee: 01451 833170 MEET THE TEAM AT HARRISON JAMES & HARDIE, FINE & COUNTRY NORTH COTSWOLDS KAREN HARRISON BA (Hons), FOUNDER AND PRINCIPAL DIRECTOR / OWNER JAMES VON SPEYR, FOUNDER AND PRINCIPAL DIRECTOR / OWNER CAROLINE GEE ARLA, FOUNDER DIRECTOR / OWNER Set up company in June 2000; responsible for Residential Sales and Lettings; 20 years' experience in corporate and independent residential agency, North Cotswolds. Joined 2001; responsible for Fine & Country North Cotswolds; 24 years' experience in corporate and independent agency, Cheltenham & North Cotswolds. Assisted company set-up in June 2000; based at Stow on the Wold branch; Diploma in Lettings; 20 years' experience in corporate and independent agency, North Cotswolds. KATY FREEMAN MNAEA, BRANCH MANAGER, BOURTON ON THE WATER JO SHIPMAN MNAEA, PA TO JAMES VON SPEYR & ADMINISTRATION MANAGER BOURTON ON THE WATER LUCY DRIVER ARLA, SENIOR SALES MANAGER, RESIDENTIAL SALES BOURTON ON THE WATER Joined in 2006 as apprentice with 3 'A' levels; 8 years' experience; Diploma in Residential Sales. Joined in 2000; 17 years' experience; Diploma in Residential Lettings; working towards MNAEA. TOM BURDETT MNAEA, BRANCH MANAGER, MORETON IN MARSH LUCY DICKS MNAEA, PA TO KAREN HARRISON & ADMINISTRATION MANAGER MORETON IN MARSH Joined in 2003 with 13 years' experience in local agency; Diploma in Residential Sales. SOPHIE KEOGH, SALES NEGOTIATOR, FINE & COUNTRY BOURTON ON THE WATER Joined in 2014; 3 years' previous experience in London marketplace; working towards MNAEA. Joined in 2004 as apprentice with 3 'A' levels; 10 years' experience; Diploma in Residential Sales. Joined in 2008 as apprentice with 3 'A' levels; 6 years' experience; Diploma in Residential Sales. STEVEN BUCHANAN MNAEA, EXECUTIVE SALES CONSULTANT, RESIDENTIAL SALES MORETON IN MARSH Joined in 2005; 11 years' experience in Oxon / North Cotswolds agency; Diploma in Residential Sales. EWAN PEASTON ARLA, NEGOTIATOR, RESIDENTIAL SALES MORETON IN MARSH Joined in 2011 as apprentice with 3 'A' levels; Diploma in Lettings; working towards MNAEA. JAKE LOMBERG-WILLIAMS BSc (Hons), SALES NEGOTIATOR, FINE & COUNTRY MORETON IN MARSH Joined in 2014; Degree in Property Agency and Marketing, Cirencester Agricultural College. AMY COLDICOTT ARLA, RESIDENTIAL LETTINGS MANAGER, STOW ON THE WOLD KATY HACKLING, ADMINISTRATOR, RESIDENTIAL LETTINGS STOW ON THE WOLD KIM CATE, SENIOR NEGOTIATOR, RESIDENTIAL LETTINGS STOW ON THE WOLD Joined in 2013; 5 years' previous experience in local agency; Diploma in Residential Lettings. Joined in 2014 as apprentice with 3 'A' levels; working towards NVQs / Diploma in Lettings and ARLA. Joined in November 2014; 6 years' previous experience in local agency; working towards ARLA. WHAT CUSTOMERS SAY ABOUT HARRISON JAMES & HARDIE… We would both like to thank you all so much for all the hard work and effort involved in selling [our previous home] and [our new home] to us. We highly recommend you as estate agents. ~ Mr & Mrs M Thank you Lucy, you have been brilliant! ~ Mr O I was very pleased with your service and very impressed with Lucy Driver. ~ Mr F Level of personal service was second to none. ~ Mr W Special thanks to Lucy Driver – she was fantastic. Couldn't have done it without her. ~ Mr & Mrs G Steve Buchanan was excellent. ~ Ms L Thank you again – great service goes a long way. ~ Mr M Thank you very much for your hard work and professionalism.
~ Mrs D M Thanks for helping us find a home - it has been a pleasure dealing with Harrison James & Hardie. ~ Mr C After my previous bad experience with a local agent I have now got peace of mind that my three properties are being managed properly to the highest standard, thank you. ~ Mr P Amy Coldicott handled the whole process from start to finish with great service and professionalism; she is a huge asset to your company! ~ Ms V All of the staff at HJH I found very approachable and willing to help - the whole team gave excellent customer service. ~ Mr C HOT PROPERTY - ASK THE EXPERTS Ask the Experts with Sue Ellis: Funding a Self-Build Project Q: My husband and I enjoy watching programmes such as Grand Designs and this has got us thinking more and more about the possibility of building our own house. How does it work, given we will need finance to fund such a project and don't really want to sell our own home first? A: A surprising number of people opt to build their own homes every year and many others will undertake a renovation project or conversion of a property – up to 20,000, according to one source. Of course, the first stage is to find a suitable plot of land – quite possibly also the hardest task, given that land is at such a premium in the North Cotswolds! You should also source a good architect or qualified designer to draw up the plans with you, and remember you will probably need other professional help - for example, consulting a structural engineer or quantity surveyor. It is vital from the outset to get the right people – failure to do so might otherwise prove disastrous in the longer term. In my experience, recommendation from trusted friends, including your local estate agency, is probably the best place to start.
Before you start your project, do consider whether your ideas are going to fit the plot and location; it might be a good idea to invite your local estate agent to visit the site, to give you an opinion on the plans and likely value of the finished project, because eventual value must govern your spend as much as how to fund the project. You need to do some serious sums when considering how the project is going to be financed – inevitably things will run over budget, so it is important to be realistic about the true costs and to allow yourself some contingency, usually around 10%. Not all lenders will be prepared to consider self-build projects – those that do tend to be limited to a small number of specialist lenders, both banks and building societies. These mortgages work differently, too, in that the traditional 'self-build mortgage' will release funds in arrears as the build progresses, lending in clearly defined stages, starting with the purchase of the land (normally between 50% and 85% of the purchase price/value of the land) and finishing with the completion of the build. Funds will only be released once each stage has been completed and signed off by the lender's appointed valuing surveyor, so your work must be to an acceptable standard, in accordance with building regulations and planning restrictions, especially in an Area of Outstanding Natural Beauty such as the North Cotswolds, where you must be extremely careful to keep to the rules if you don't want your house to be demolished afterwards – it has happened! The self-build mortgage option is normally better suited to people who have substantial savings. Remember, you not only have to buy the plot but also get it to the first stage of the build before any funds will be released. Alternatively, you can take a mortgage that releases funds in advance rather than in arrears, lending up to 90% at each stage.
This will enable you to buy materials, pay builders and manage your exposure to risk if there are delays on the build for any reason. This option is also ideal if you don't want to sell your existing house to release equity, allowing you to keep a roof over your head until the new one is built, or if you want to hold some cash in the bank as a contingency fund! Interest rates tend to be higher than normal residential mortgages - at the time of writing, a two-year, fixed rate, 'arrears' style self-build mortgage is around the 5.25% mark, when lending up to 80% of the value. Self-build can be seen as a daunting challenge but, with thorough research and the right professional team on your side, it is also an exciting and worthwhile project. Typically, one can expect to end up with a property that is larger than one of equivalent value on the open market – and, of course, you have the ultimate satisfaction of your home being exactly to your taste in the first place! Claire Barker: Planning Your Retirement Q: My husband and I are nearing retirement and are concerned that we do not have enough pension provision to live as we would like to do. We have heard people talking about equity release as an option – can you tell me more about it? A: "When I was young I thought that money was the most important thing in life; now I am old, I know that it is," said Oscar Wilde. It is very likely that a vast number of today's ageing population would agree with the sentiment. According to the Equity Release Council's Spring Report 2014, there has been a 36% increase in uptake of equity release products in the past two years alone. However, anecdotal evidence suggests that the majority of people still don't really understand equity release and how it can be used. Equity release is not suitable for everybody and alternatives should always be explored - downsizing, going back to work, investigating eligibility for benefits or home improvement grants, help from family, etc.
- but it can provide a solution to lack of liquid cash in retirement, alleviating concerns about how to make ends meet once salary dries up and a meagre pension kicks in. More and more people are seeking to supplement their retirement income using equity release and yet, when groups are questioned, knowledge about the products (and how they work) appears to be low. If I had a pound for every time I have been told by a client that a solicitor has suggested they should go with Norwich Union for their equity release, I would be a rich person by now. Norwich Union rebranded to Aviva in 2009 and yet many people do not appear to have caught up. Nor have they cottoned on to the fact that many other products exist nowadays to serve a huge raft of needs. More importantly, solicitors should not be advising on suitability of products. This is firmly the remit of a financial adviser, who will be qualified and regulated to give best advice. Fortunately, equity release is now one of the most heavily regulated products in the financial services sector, allaying fears about safety. Quite simply, an equity release product cannot be taken up without specialist financial and independent legal advice, without full understanding on how the product works and all the obligations attached to it. Without proper professional advice (see for a specialist) this vast array of products is far too difficult to navigate. For example, it is now possible to pay the interest on an equity release product rather than letting it compound and roll up. This can be a very useful way of releasing a lump sum at a relatively low cost, providing it is possible to service the interest payments with income. Alternatively, the drawdown product has proved extremely popular.
A lump sum to suit a client's immediate needs is drawn down and then a cash facility is ring-fenced; no interest is charged on the cash facility until it is drawn down, whereupon interest is set and then fixed on that portion of the loan forever. Others are choosing to take a large lump sum in order to repay interest-only mortgages at the end of the term, especially useful when a lender starts clamouring for repayment. In general, since the 1990s interest rates have dropped for equity release products and are now typically around six per cent. Even so, homeowners are confused as to why the rates are apparently high, when a typical bank mortgage hovers between three to four per cent. The reason is simple - in most transactions there is no requirement to make any payments during the life of the loan. The loan, plus interest, is only paid back to the lender when the property is eventually vacated, either due to death or long-term care. This could be twenty to thirty years, which is a long-range bet for lenders in terms of funding, when no repayments are coming in! Providing equity release is carried out in a fully informed manner, assisted by experts, it should be a painless and rewarding process. Figures produced for the Equity Release Council's Spring Report 2014 show that homeowners released almost £1.1bn during 2013. Certainly, for the majority it will have been a life-changing experience, enabling them to continue to live life to the full. Claire Barker is the Managing Partner of Equilaw Ltd, a unique law firm specialising in giving legal advice to homeowners who are considering equity release. She is also a Partner at Thomas Legal Group and an Advisory Member of the Standards Board for the Equity Release Council, which is the industry body promoting excellence in this sector. Please contact Claire at claire.barker@equilaw.uk.com.
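The roll-up versus interest-servicing arithmetic discussed above can be made concrete with a quick sketch. The figures below are purely illustrative and not financial advice: a hypothetical £50,000 lump sum at the roughly six per cent rate quoted in the column, over a twenty-year term (the lower end of the horizon mentioned).

```python
# Illustrative sketch only: hypothetical figures, not financial advice.
# Compares the two equity release structures described in the column:
# letting interest roll up (compound) versus servicing it each year.

def rolled_up_balance(lump_sum: float, annual_rate: float, years: int) -> float:
    """Debt owed at the end if interest compounds annually and nothing is repaid."""
    return lump_sum * (1 + annual_rate) ** years

def serviced_total(lump_sum: float, annual_rate: float, years: int) -> float:
    """Capital plus total interest paid if the interest is serviced every year."""
    return lump_sum + lump_sum * annual_rate * years

loan = 50_000   # hypothetical lump sum released
rate = 0.06     # the ~6% typical rate quoted in the column
term = 20       # lower end of the 20-30 year horizon mentioned

print(f"Rolled up after {term} years: £{rolled_up_balance(loan, rate, term):,.0f}")
print(f"Serviced total over {term} years: £{serviced_total(loan, rate, term):,.0f}")
```

On those illustrative numbers the rolled-up debt roughly triples the original loan, while servicing the interest caps the total cost far lower, which is one reason the column stresses specialist advice before choosing between product types.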
HOT PROPERTY - ASK THE EXPERTS Ask the Experts with Andy Soye and Mat Faraday: Holiday Letting Out of Season – The Secret To Success Q: I would like to purchase a holiday let property as an investment but I'm really worried about the winter months – will there be enough bookings to cover my mortgage and other associated costs? A: Holiday flexible to guests' requirements. For example, at Character Cottages, wherever possible we will happily modify standard "Due to its location in the heart of England, The Cotswolds is also easily accessible from both the north and the south, and is less than two hours by train from London's Paddington station." flow of bookings all year round. The key to maximising the number of bookings out of season is to … Andy Soye and Mat Faraday are the co-founders and owners of Character Cottages, an independent company specialising in the holiday letting of luxury properties in the Cotswolds. To find out more about their services visit or telephone 020 8935 5375. HOT PROPERTY - ASK THE EXPERTS Ask the Experts with Robert Hamilton: Feeling the Cold? Sustainable Solutions Q: I have recently bought a lovely big old house but with the cold weather beginning to bite, it's already quite difficult to heat. Any suggestions before the very worst of winter sets in?! A: Period and traditionally built properties can be quite challenging to heat. The glib answer is not to let your house get cold in the first place. Thick masonry walls will act as massive storage radiators but have to be 'got up to heat'. It is also far easier, more efficient and ecologically desirable to maintain a good ambient temperature than to go for 'boom or bust' when heating up a cold house. Whichever fuel source you choose, it's not wise to depend on a single source, either – electric power is clean and immediate but the supply does dip at times of heavy demand, just when you need to cook your Yorkshire puddings at Sunday lunch, and can fail completely on occasion.
As a back-up heat source, LPG and oil are both excellent fuels, much more ecologically acceptable these days, especially good in smaller houses and for an AGA / Rayburn in a large country kitchen, to provide consistent warmth where it matters most. Sources of ambient heat are becoming more popular and hence increasingly affordable. My favourite is underfloor heating - an insulating layer under simple electric wiring or fluid-filled pipework, topped by your chosen floor. Stone floors are ideal with electric systems but better to use 'wet' (fluid-filled) systems with wooden floors. Wet systems can gather heat from geothermal sources, although these are still rare. I have also started to see large country houses heated by wood chip furnaces, fuelled by specifically grown biomass products. These are very efficient but need a large storage capacity to ensure sufficient fuel is on hand, and are therefore less suitable for smaller properties. Ground source heat exchangers work on the same principle as a fridge but in reverse - these are also increasingly popular. You will find some surprising brand names on offer, including Panasonic, Vaillant and Mitsubishi! Solar panels attract subsidies but generally are not permitted on Listed buildings and have to be discreetly sited in Conservation areas. However, if you are lucky enough to have a large plot of land facing South West, you could have ground-sited panels - English Heritage have useful information on their website and the Planning Office is equally very helpful. Traditional roaring fires are still enormously popular but they do require supervision and stoking, and the cost of air-dried logs has increased greatly in the last few years. They also create dust and waste a huge amount of heat straight up the chimney, so better to invest in a closed stove and/or use your fire to heat your water, too.
Like the old-fashioned back-boilers, there are plenty of modern, stylish and super-efficient systems today – remember, though, that fires will only burn efficiently if chimneys and flues are swept regularly. 'Lookalike' wood and coal stoves fuelled by gas give almost instant, controllable heat with a similarly comforting appearance but are much cleaner, relatively economical and can even be operated by a phone app or remote controller! All forms of heating will be rendered more efficient by effective insulation. Ensure roofs and attics have a good layer of mineral wool, a minimum of 270mm. Other loft insulation can be used: sheep's wool, or hemp, which absorbs CO2 during growth, for example. Double-glazing can be difficult in Listed properties but fit removable secondary glazing or wooden shutters and use thick curtains to insulate against cold, draughts and noise. With all this warming and insulation it is also imperative to have great ventilation or you will run the risk of condensation and damp issues, of course. It is wise to have a survey on your property before you buy, to check whether flues are intact and chimneys are functioning properly. Ask if liners are still under warranty and get safety certificates, especially for gas boilers, then make sure you have regular services on all heating systems. Last point - do remember to bleed your radiators to ensure that heat is circulating properly! BLOCKLEY Steppes Cottage Steppes Cottage and The Wool Shop, High Street, Blockley Blockley is a very special village and for those who settle there, the place soon becomes a love affair. It is the village as much as the house that drives a desire for change, informs tastes and channels their secret yearnings.
Registering with local agents, these prospective buyers will describe their dream home - something older, with potential, more space and a larger garden for their expanding family or, conversely, something smaller with a less vertiginous staircase perhaps, somewhere they can be self-sufficient, closer to the village centre. But they are always adamant - this house must be in Blockley. No matter how great the need, they are always prepared to hold out until the perfect house comes up. Of course, being Blockley, would-be residents have plenty of competition from second-homers. This is also the village of choice for London investors, retirees and escapees, actors and writers, city high flyers and quiet celebrities whose love affair is just as passionate, who speed down the A40 at the close of business every Friday craving the cheerful pub and village shop, the comfort of a traditional church service, the rigour of a brisk countryside walk, the warmth of a log fire and the happy chaos of a Sunday roast with friends. Blockley is, for those who don't know, a village with a unique history. In times gone by this was not a peaceable, rural cluster of cottage dwellers, obediently serving the Big House and tipping their cap to the establishment like so many of its neighbours, but shaped entirely differently, rapidly, fired by the vigour of Georgian industry. Blockley was new money, not from the wealth of sheep or corn but gained by the rush of water. Quickly and dramatically populated, rows of tiny workers' terraces and the grand houses of business folk were squeezed into the valley, a thirst fuelled by seventeen silk mills harnessing the powerful flow of natural streams and springs in the folds of its hills.
At its centre was the ancient cart-wide high street, suddenly thronging and busy with a whole service of shop fronts – a baker, greengrocer and butcher, post office, bank, and at the end of the High Street a magnificent church telling proudly of its riches. This affluence, the delight in good fortune of its residents, still remains today, although the high street is quiet again. The new village shop and the Crown Hotel are all that tell of the original commercial hub. Nonetheless, the village still reveals its story, like a visit to a stately home - its history is writ large and the high street is its drawing room. If you are in the know, as true Blockley people are in the know, there is a 'right side' to the High Street – just fifteen houses, hugely desirable, discreetly facing away from the hustle and bustle, whose windows look out on marvellous views, over the gardens of the valley below and upwards to the high horizon of hills beyond, a pastoral landscape of handkerchief fields dotted in the far distance with cows, horses and sheep, rising to a tree-lined brow and a burst of sun. One of these quiet gems is almost invisible, a set of stone steps under an archway, something hidden behind what was once a little old wool shop, but nothing to see. If you didn't know, your eye might be drawn to the wooden garage door on the street, peppered with community notices of fund-raisers and social gatherings (what a luxury for someone, a place to park the motor), but you wouldn't even begin to suspect the house itself. And what a lovely house it is. Steppes Cottage When the present vendors moved to Blockley they started off their love affair on the other side of the high street but, like so many residents, settled on the next home of their dreams when visiting Steppes Cottage, and immediately saw its potential. They were lucky to snap it up when it came to the market some time later, waiting patiently for their opportunity.
They commissioned a clever interior designer with a passion for open spaces who set about harnessing the flood of natural light, knocking out walls, moving doors, putting in windows and creating clean, sleek lines, armed with a pale palette of Farrow and Ball colours. She transformed the house entirely. Now a large, square kitchen / breakfast room has pride of place, fitted with a bountiful range of high gloss units, enough to satisfy the guiltiest maximalist hoarder. This is somewhere deeply welcoming, somewhere to congregate around the table and to watch the cook at work, with a wide wall of glass to one end, patio doors opening out onto a sunlit patio, drinking in the stunning view, warmed by under-floor heating. A formal sitting room beyond is bright on two sides with huge windows and the same beautiful outlook – a cosy, comfortable retreat for Sunday papers and slippers, with a bleached stone fireplace and a sunburst mirror above. The dining room between is lined with stylish floor-to-ceiling bookshelves and display cabinets, even a backlit drinks cupboard ingeniously recessed under the stairs – the perfect place to host family and friends at Christmas. Upstairs are four decent bedrooms, again fitted with plenty of cupboards and storage spaces, rooms that enjoy the views at their best and are served by luxurious bathrooms - a party house, an eminently practical house, a proper family home. The garden is vast, running away down the hillside in a sweep of lawn, terraced, ending in an ordered, generous kitchen garden complete with fig trees, raspberry canes, rows of beans and peas and potatoes. Halfway up is a rustic double seat swing for the grandchildren, stretching their feet up to touch the top of those hills. An informal and picturesque retreat, the garden is full of mature shrubs and cottage flowers, places to hide, to build dens and to run around, safe and sound.
Beneath the Wool Shop (now a bijou cottage for two, and available by separate negotiation) is a cellar, accessed at garden level and used by the main house, and another couple of store rooms - space for all the paraphernalia of country living, for the detritus of skateboards and bicycles, the family kit of surfboards and wet suits, toboggans and skis. The garage has to be the ultimate luxury in this narrow high street, where the previous owner famously used to bounce in his car at speed, a strategic mattress placed at one end – the current owners are rather more circumspect, safe to say. Much of the potential of this house has been realised, realised with great style, but the vendors have found a brand new house on the Cornwall coast that they have chosen as their ultimate retirement dream, and so Steppes Cottage and the Wool Shop will come onto the market in the New Year, launching with an Open Day in mid-January. The owners did once drill a small hole through the wall of one of the bedrooms, just to see if it would be possible to access the cottage in front, and it is. One could definitely do with both, should funds allow. This tiny self-contained property is a little gold mine – a handy sleepover for extra guests, a holiday let, a granny annexe, a writer's snug, a nanny flat, a teenage den… Too good to lose, given the opportunity to secure it, and fortunately the owners have committed not to sell the Wool Shop before the buyer for the main house has had first refusal. There are bound to be plenty of investors for whom just the tiniest slice of Blockley High Street would be a pleasure, who have popped a hopeful note through the door before – those in the know, who have fallen in love like so many visitors and residents before.
Steppes Cottage and the Wool Shop will be offered for sale by the Moreton in Marsh offices of Harrison James & Hardie Fine & Country North Cotswolds. To make an appointment to view on the Open Day in January please telephone 01608 651000 and speak to Branch Manager Tom Burdett, who will be happy to provide more information. If you are reading this magazine on our Cotswold Homes App, you can hover over hotspots for more photos and floor plans, otherwise simply visit and click on the Harrison James & Hardie badge on the landing page.

A Brief Buyer's Guide To Blockley

Blockley is situated on the western edge of the North Cotswolds, about three miles from Moreton in Marsh and a similar distance to Chipping Campden. The beautiful centre boasts a wonderful, new community-run village store and coffee shop, plus two great pubs and a hugely popular primary school. Renowned for its active social calendar and positive community spirit, this is a happy and integrated village where many families have lived for generations. There has also been an influx of London buyers over recent years, partly because of the relative number of smaller period properties that are available. As a result of the Georgian silk mill industry, terraced cottages are relatively abundant and these, of course, provide ideal investment opportunities for second homers. Should you wish to purchase a little slice of Blockley, a terraced property averages around £275,000 to £300,000 and a grander, detached property around £525,000. That's rather more expensive than Moreton in Marsh but not nearly as eye-watering as Chipping Campden where, according to Rightmove, the average house price had already topped £500,000 by the end of 2013.
Some of the disparity between Blockley and Chipping Campden prices can be explained by the fact that more terraced homes were sold in Blockley than any other type in 2013, whilst in Chipping Campden the sale of detached properties came out top of the list at an average price of over £750,000. In fact, in 2013, Chipping Campden's average price was 16% up on the previous year, a very impressive localised surge compared with Blockley (and the average Cotswold price hike) of around 6% in the same period.

So, what is likely to happen in the next twelve months to the market place in Blockley? Just as in central London, would-be buyers may very well start to radiate their search outwards from Chipping Campden, seeking slightly more affordable alternatives and, without a doubt, detached family properties are likely to be greatly in demand, too. Given the golden postcode kudos of Blockley, a significant rise in average value could well be on the cards and of course, the benefit of this depends greatly on whether you happen to be a vendor or a buyer! If you are thinking about buying in Blockley, evidently now is a good time to get ahead of the crowd – so what can you get for your money either to buy, to rent or simply for a weekend away?

Tom Burdett, Branch Manager of Harrison James & Hardie, Fine & Country Moreton in Marsh says: "There is no doubt that Blockley is a fabulous place to settle upon, either as a family seeking all the comforts and benefits of traditional village life or as an investor seeking a good return. Park Road in Blockley is the perfect destination of choice for second-homers – a long row of terraced, period village homes with stunning panoramic countryside views and cottage gardens.
Number 55 has recently come onto our books at £289,950 with a kitchen/breakfast room, sitting room, two double bedrooms and bathroom on the upper ground floor plus two double bedrooms on the first floor, and a lovely little garden overlooking the views.

"Family-sized, period detached properties are harder to find. Cherry Orchard sold through us recently - an eminently desirable home that had recently undergone significant refurbishment, offering two reception rooms, kitchen and conservatory, an en-suite master bedroom, dressing room/study, two further bedrooms, bathroom, garage, gated parking for several vehicles, plus a great garden with some beautiful countryside views. At £525,000, it was unsurprisingly snapped up!"

Caroline Gee, Director of Lettings at Harrison James & Hardie, says: "Just a few yards away from Steppes Cottage, 3 Milton Court has an awesome outlook over the surrounding countryside, yet is centrally located on the High Street. At £695 per calendar month, this spacious period apartment has great character, including high ceilings and a stone fireplace with wood burning stove. There's a kitchen/breakfast room and sitting room, master bedroom and en-suite bathroom plus a second guest bedroom and bathroom – it's a super home but also would make an ideal property as a little trial weekend pad, if you can afford to visit often enough to make the rental worthwhile!"

And how about a weekend away in Blockley? Holiday let company Character Cottages has a number of properties in Blockley on the books, including Brook Cottage. Andy Soye says: "Brook Cottage is an idyllic cottage, full of traditional character in an absolutely picture-perfect location, offering the complete antithesis to life in the city.
Sleeping four, staying Friday to Sunday, will cost less than £400 for a quick, wonderful pick-me-up in one of the most desirable villages in the Cotswolds."

3 Milton Court | Brook Cottage | 55 Park Road

Star Cottage, Bourton on the Hill Offers In Excess Of £500,000
A beautifully presented Grade II listed cottage with an abundance of character and charm. The property benefits from a well-proportioned rear garden, large stone built garage and off road parking. Entrance | Sitting Room | Dining Room | Inner Hall | Kitchen | Family Room | WC | Three Bedrooms | Two Bathrooms | Garden | Garage | Parking | EPC Rating: E
Fine and Country Harrison James & Hardie, Moreton in Marsh 01608 653 893

Wayside, Little Rissington £915 PCM
A newly refurbished character cottage situated in the heart of Little Rissington. Entrance Lobby | Sitting Room | Kitchen/Diner | Two Bedrooms | Bath/Shower room | Rear Outside WC | Garden Store | Separate Garden Shed | Rear Garden | EPC Rating: F
Fine and Country Harrison James & Hardie, Stow on the Wold 01451 833 170

High Jinks, Great Rissington £545,000
A Cotswold stone detached upside-down design property, situated in a desirable village. Entrance Hall | Master Bedroom with Ensuite | Two Additional Bedrooms | Utility | Shower Room | First Floor Open Plan Sitting/Dining Room with Balcony | Kitchen/Breakfast Room | Further Bedroom | Bathroom | Parking | Garage | Rear Garden | EPC Rating: C

Alderley, Bourton on the Water £499,950
A beautifully presented four bedroom detached property, benefiting from private gardens and off road parking, situated within a level walk to the village centre. The property is currently run as a successful holiday let but would make an excellent family home. Available with no onward chain.
Entrance Hall | Cloakroom | Kitchen/Breakfast Room | Sitting/Dining Room | Utility | Bedroom | Ensuite | Three Double Bedrooms | Three Ensuite Facilities | Car Port | Off Road Parking | Side and Rear Gardens | EPC Rating: D
Harrison James & Hardie, Bourton on the Water 01451 822 977
Harrison James & Hardie, Bourton on the Water 01451 822 977

21 The Gorse, Bourton on the Water £425,000
A well-presented detached bungalow situated in a quiet cul-de-sac location, available with no onward chain. Entrance Hall | Sitting Room | Kitchen/Breakfast Room | Three Bedrooms | Bathroom | Front and Rear Gardens | Driveway | Garage | EPC Rating: E

Badgers End, Stow on the Wold £375,000
A beautifully presented property situated down a private drive, within walking distance of the town centre of Stow on the Wold. The property is ideal as a main home, second home or holiday let investment. Entrance Hall | Kitchen | Sitting Room | Dining Room | Cloakroom | Master Bedroom | Ensuite | Second Bedroom | Bathroom | Bedroom | Cloakroom |

Primrose Cottage, Little Rissington £375,000
An immaculately presented Cotswold stone end terrace modern cottage situated on a quiet lane in the village of Little Rissington. Entrance Hall | Kitchen/Breakfast Room | Sitting/Dining Room | Conservatory | Utility | Cloakroom | Three Bedrooms (Master with Ensuite Shower) | Family Bathroom | Off Road Parking | Garage | Rear Garden | EPC Rating: C

Lyon Cottage, Northleach £345,000
A charming Grade II Listed town house offering a wealth of character. Lyon Cottage is an ideal investment or family home, set within walking distance of the centre of this historic market town. No Onward Chain.
Sitting Room | Study | Kitchen | Dining Room | Cloakroom | Two First Floor Bedrooms | Bathroom | Potential for Two Further Second Floor Bedrooms | Rear Garden | Period Outhouse | EPC Rating: Exempt
Harrison James & Hardie, Bourton on the Water 01451 822 977
Harrison James & Hardie, Bourton on the Water 01451 822 977

Dolls Cottage, Bourton on the Water £325,000
An opportunity to purchase a top performing holiday cottage situated within walking distance of the village centre. Dolls Cottage is available for sale as an investment holiday let with firm bookings already established for next year. Viewings can be arranged between changeover times - please contact Harrison James & Hardie for information. Sitting/Dining Room | Kitchen | Cloakroom | Conservatory | First Floor Double Bedroom | Bathroom | Second Floor Double Bedroom | Off Road Parking | South Facing Garden | EPC Rating: D
Harrison James & Hardie, Bourton on the Water 01451 822 977

Sunnydale, Bourton on the Water £310,000
A double fronted Cotswold stone detached cottage, within walking distance to the centre of the village. Entrance Hall | Sitting Room | Kitchen/Breakfast Room | Utility/Cloakroom | Two Double Bedrooms | Bathroom | Sunny Courtyard Garden | Driveway | EPC Rating: D
Harrison James & Hardie, Bourton on the Water 01451 822 977

view all our properties at harrisonjameshardie.co.uk

Keystones, Broadway £550,000
A beautifully presented 4 bedroom detached home located a short walk from the centre of this picturesque North Cotswold village. Reception Hall | Sitting Room | Kitchen/Breakfast Room | Study | Utility Room | WC | Master Bedroom with En-Suite | Guest Bedroom with En-Suite | Two Further Bedrooms | Bathroom | Garage | Garden | Parking | EPC Rating: C

29 Summers Way, Moreton in Marsh £400,000
A recently constructed substantial detached residence located on the popular Moreton Park development.
Entrance Hall | Sitting Room | Dining Room | Kitchen/Breakfast Room | WC | Master Bedroom with En-Suite | Three Further Bedrooms | Study/Bedroom Five | Bathroom | Garage | Garden | Parking | EPC Rating: B
Harrison James & Hardie, Moreton in Marsh 01608 651 000
Harrison James & Hardie, Moreton in Marsh 01608 651 000

Rosebank, Ebrington £365,000
A traditional period cottage occupying an elevated position within this popular North Cotswold village. The property is currently operating as a successful holiday let and benefits from a large detached garage and off road parking. Entrance Hall | Sitting Room | Dining Room | Kitchen | Conservatory | Utility Room | Three Bedrooms | Bathroom | Detached Stone Built Garage | Parking and Garden | EPC Rating: E

3 Lemynton View, Moreton in Marsh £375,000
An immaculately presented family home, offering spacious and contemporary accommodation. Entrance Hall | Sitting Room | Dining Room | Kitchen/Breakfast Room | Utility Room | WC | Master Bedroom with En-Suite | Three Further Bedrooms | Family Bathroom | Garden | Garage and Parking | EPC Rating:

Fairview, Longborough £325,000
A traditional Cotswold cottage, with scope for improvement and modernisation, set in the desirable village of Longborough. Sitting Room | Dining Room | Kitchen | Bathroom | Two First Floor Double Bedrooms | Further Attic Bedroom on the Second Floor | Garden | EPC Rating: D

36 Blenheim Way, Moreton in Marsh £315,000
An immaculately presented 4 bedroom property situated on a much sought after development on the northern edge of the town.
Entrance Hall | Sitting/Dining Room | Kitchen | WC | Master Bedroom with En-Suite | Three Further Bedrooms | Bathroom | Garden | Garage and Covered Parking | EPC Rating: C
Harrison James & Hardie, Moreton in Marsh 01608 651 000
Harrison James & Hardie, Moreton in Marsh 01608 651 000

Corner Cottage, Moreton in Marsh £295,000
A two bedroom period cottage requiring some modernisation located in a central position within the town. Entrance | Sitting Room | Dining Room | Kitchen | Shower Room | Master Bedroom with En-Suite | Guest Bedroom | Garage | Garden | Parking | EPC Rating: E

34 Redesdale Place, Moreton in Marsh £295,000
Situated in a quiet residential location at the end of its street, and in the Cotswold AONB, this extended and recently refurbished family home offers flexible and well-proportioned accommodation including three substantial reception rooms and benefitting from a large rear garden. Entrance Hall | Sitting Room | Kitchen/Breakfast Room | Dining Room | Utility Room | Family Room | Three Bedrooms | Bathroom | Garden | Off Road Parking | EPC Rating: C
Harrison James & Hardie, Moreton in Marsh 01608 651 000
Harrison James & Hardie, Moreton in Marsh 01608 651 000

view all our properties at harrisonjameshardie.co.uk

Wyck Hill Lodge, Stow on the Wold & Woodlands, Kitebrook

The North Cotswolds is undoubtedly horsey country. Whether the particular obsession is pony club, point-to-point, hunting, eventing or jump racing, everyone from knee-high upwards knows someone whose thoughts revolve completely around horses. Of course, this is the landscape of true professionals. A dozen renowned trainers can be found in these hills, their victories the stuff of legend: Synchronised, Don't Push It and Imperial Commander to name but a few.
No doubt, there are whole hosts of fresh contenders in Jonjo O'Neill's stables plus the beloved New One at the Twiston-Davies yard in Naunton, of course, but it's not just about the Sport of Kings. Many people keep horses for nothing more complicated than sheer love, to ride out, for hacking across the open Cotswold countryside or, re-igniting their childhood passions, finally capitulating to their own children's pleas for a pony. If you are not that horsey then you can't understand it, but when horsey people look for a house it's often only the land that matters – space for a stable, a manège and a couple of flat paddocks, not a single mention of bedroom sizes or access to amenities. Despite being in the middle of the countryside it is surprisingly difficult to find something to suit a horsey spec, or even just a house with an acre or two, especially if the budget is somewhat less than a million pounds. Land is at a premium and has been for many years, since larger period village properties have long since parcelled off their land for development, infilling and capitalising, whilst terraced cottages that were formerly tied to estates have only pocket handkerchiefs and new builds are always maximised for profit, squeezed into smaller and smaller plots. Even if it's not the horse that counts, where does one find a home with an acre but without breaking the bank? The answer is on the edge of a village, a little more out of the way, on a road perhaps - a lodge to a former estate or a detached modern house, something not necessarily of the traditional Cotswold vernacular but built when land was more plentiful.
Such homes have a particular, luxurious joy afforded by the openness of their setting, blessed with lovely aspects over surrounding countryside, large gardens and, most significantly, great plots. And so, Wyck Hill Lodge is a Victorian-Gothic delight. The property would once have belonged to Wyck Hill House (now a grand country hotel) but has long since been separated from its original purpose. Set halfway up Wyck Hill on the road from Stow to Burford, hidden and protected by a slice of mature woodland, this house enjoys a truly wonderful aspect with wide-open views that soar across the broad Windrush valley. For the non-horsey, a description of the property is enough to lure one in: Grade II listed, constructed of natural Cotswold stone under a slate roof with a pretty front gable façade, arched stone mullioned windows and a covered verandah set upon pillars, extended recently to provide generous, versatile accommodation including a substantial entrance hall that leads to a drawing room, study and dining room, interlinked to the kitchen by a separate utility room, with three en-suite bedrooms arranged on three floors, of which the first floor master bedroom has a particularly fine outlook, having great ceiling height and a large picture window designed to take full advantage of the position and views. Now, for horsey folk, the very exciting bit. Beyond a delightfully mature and well-tended garden there is a timber-built stable block comprising two stables, a foaling box and a tack room, with a further paddock extending to just over one acre.

Woodlands, Kitebrook

If it's more the land and not the horse that counts, then further northwards on the Oxfordshire boundary of the Cotswolds is another home, set at the edge of Kitebrook on the way towards Chipping Norton. This recently improved and substantial stone-built residence was constructed during the early twentieth century using reclaimed materials sourced from a nearby ancient chapel.
Immaculately presented with an abundance of character - including fabulous, arched stained glass windows, an impressive elm staircase and balconied landing, open fireplaces, beamed ceilings and exposed stone walls - Woodlands is approached via a gated driveway that affords plenty of privacy, looking out over manicured grounds and a glade of mature, deciduous trees. The rooms are particularly well proportioned with a sitting room, conservatory, dining room and kitchen/breakfast room and upstairs a master bedroom with dressing area and en-suite shower room, a second double bedroom plus family bathroom, then two further bedrooms on the second floor above. Outside, there's space for a positive flotilla of motors including a detached double garage with a room above, a second double garage and an open-fronted carport – but best of all is the plot, approximately two acres in total.

Both properties are marketed by Harrison James & Hardie Fine & Country North Cotswolds. Wyck Hill Lodge is on offer at the Bourton on the Water branch and Woodlands at the Moreton in Marsh branch. For more details, floor plans and photographs, hit the hotspots if you are reading our App, or otherwise visit and click on the Harrison James & Hardie icon on the landing page. To arrange a viewing, strictly by appointment, contact Katy Hill, Branch Manager at Bourton on the Water, 01451 822977 or Tom Burdett, Branch Manager at Moreton in Marsh, 01608 651000.

Don't Shoot The Planning Officer!

The iconic Arlington Row in Bibury defines our image of the Cotswolds. In any building rush, thoughtful planning is what's needed.
Builder Craig Siller bewails the amateurish over-development of our beautiful local countryside

With its quaint cottages, isolated farms and sprawling mansion houses, it seems the North Cotswolds is alive like fleas on a dog with planning applications right now. Everyone owning more than a garden shed-sized plot is trying to capitalise on the new planning rules, proffering odd little parcels of land for planning permission, speculating on anything from one to a hundred homes, putting forward their land as if they are doing the local council a favour whilst setting down caveats with peculiar lists of preconditions (that the development must not be too close to their own property, that the houses must have windows pointing upwards or downwards, that the inhabitants must not have satellite dishes and cannot store caravans or fire up barbecues when the wind blows in an easterly direction, etc.)

It would not surprise me if planning departments soon have to bring in Men-In-Black-style bouncers to protect their staff against the flood of green-wellie-wearing enthusiasts who are descending like a veritable army upon hapless council officials, frightening the life out of poor junior planning assistants, heavily armed with weird historic maps, quotes from ministers and important hand-drawn plans (sketched out on kitchen tables in front of their Agas, no doubt). Give the guy on the front desk some security glazing and a panic button lest he is peremptorily abducted to a field in the middle of nowhere, exhorted to admire the view then to contemplate how much better it would look with a few six-bedroom houses.
Landowners are offering everything for consideration from Great Aunt Maude's old apple orchard to ancient ridge-and-furrow fields, and even negotiating on parcels of land it turns out they don't actually own. Meanwhile, the countryside is undergoing a revolution, a siege from within, being slowly attacked by those who might pretend to be addressing the housing shortage but who are quietly rubbing their hands with ill-disguised glee. The truth is that these bits and pieces of land are often too expensive for anyone to make a sensible return, and that the homes themselves are often poorly "planned", while no-one gives a fig about infrastructure. (You know, the unimportant stuff like electricity, water or drainage… Does anyone actually realise the exorbitant amount of money required to live "off grid" in one of these pseudo-eco homes?)

Really, what we need is more land that has been professionally and efficiently planned, homes built upon brownfield sites in suburban settings where amenities will flourish and services already exist, sensibly designed and thoughtfully considered developments that take into account the actual needs of the local area and are empathetic towards the lives of those people who will actually have to settle there. We should fight for sustainable, quality-built homes that utilise the knowledge, skills and wisdom of good local professional architects, planning consultants, engineers and builders. Above all, we must petition to bring back tougher planning laws and put a stop to the wilful destruction of our countryside before it's too late.
(Anyway, I have just received a call from a landowner who reckons there's potential for twenty homes on his field of holiday yurts and needs my advice… Got to go and make an honest buck, dear reader…)

Visit to view a portfolio of our building projects and request a free quote for your building works.

A Nest-egg in the North Cotswolds

Andy Soye and Mat Faraday of Character Cottages run a successful holiday let company. The majority of properties on their books are situated in the North Cotswolds so they have a keen understanding and detailed knowledge of the local market place. Here, they propose two pairs of properties suitable for a holiday-let portfolio that would produce a reliable, year-round income for someone on the verge of retirement.

I am in my late 50s and plan to retire in three years, moving from Yorkshire to be closer to my daughter and the grandchildren. I have £850,000 from the sale of my late mother's property and would like to buy two holiday-let properties in the North Cotswolds, using one property occasionally but otherwise letting it out until I am able to move in permanently, and the other providing a consistent holiday let income, especially necessary when I give up work in due course. What do you suggest?

Andy Soye says: The North Cotswolds is a perfect part of the world to ensure a consistent flow of holiday let income even in the winter months. Of all the places you might choose, Bourton on the Water comes up very high on the list as a popular all-year-round destination for holidaymakers. Not only an iconic Cotswold village, it is also a central point for visiting the whole of the North Cotswolds, large enough to be provided with a wealth of attractions and amenities yet surrounded on all sides by beautiful countryside - a wonderful place to live and the ideal base for you, until you are able to retire.
As luck would have it there is a perfect pair of cottages currently on the market, ticking every single box on your wish list. Oxleigh Cottage and Honeysuckle Cottage are situated on Moore Road, just off the High Street – slightly away from the hustle and bustle where one can walk to the centre within a couple of minutes, pick up the paper and enjoy a quiet early morning stroll along the riverside before the tourists arrive, or in the early evening when they all go home! There's plenty to do in the village itself, including Birdland, the Motor Museum and the Dragonfly Maze, and a host of great places to eat out – a thriving and well-provided community for residents and visitors alike.

With loads of "kerb appeal", built of Cotswold stone with mullioned windows, full of traditional charm, these two cottages have the benefit of a large, shared garden to the rear. Certain to drive great revenues, they can run independently but are ideal for ease of maintenance when it comes to a gardener, cleaner, etc. There is even an annexe at the back you can use as a private bolthole on an occasional basis, thereby ensuring that both cottages can be let all year round without diminishing your potential revenues. Whilst each property might be able to sleep six at a push they are really better suited to accommodate five, in which case each cottage is capable of delivering around £30,000 gross income per annum - a total of £60,000 gross annual income, plus use of the annexe whenever you wish.

Mat Faraday says: Until recently, Rosebank in Ebrington was on our books, so we know it provides a reliable holiday let income.
I envisage this delightful property as your eventual home, somewhere wonderful to retire to - a chocolate-box cottage with really good ground-floor space and full of character with a private garden, parking and double garage. It is situated in a picture-perfect village; rows of honey-coloured stone cottages surrounding a wide central green and served by a great pub, renowned for its friendly community feel. Being within easy reach of Moreton in Marsh, Chipping Campden and Stratford upon Avon, the location provides a fantastic base for holidaymakers visiting the northern edge of the Cotswolds and as such, Rosebank generates £30,000 gross income per annum, come rain or shine.

I would happily pair Rosebank with The Bull Pen in Ditchford, a property situated on the Fosseway about five minutes north of Moreton in Marsh. One of a recent conversion of farm buildings forming a beautiful courtyard of individual, characterful properties, The Bull Pen is presented in immaculate order. I would utilise the ground floor study as a fourth bedroom, thereby sleeping eight. Having a wonderful, fresh modern finish including a lovely open plan kitchen/dining room as its glamorous centrepiece, the gorgeous interiors of The Bull Pen will certainly attract plenty of interest. Within a stone's throw of Moreton in Marsh, this location will remain a strong contender for holidaymakers throughout the winter months and should generate around £45,000 gross income per annum.

Even if you were to use Rosebank quite regularly until your retirement, Rosebank gives you the luxury of accommodating your own guests as often as you wish without worrying about the overall impact on your revenues, taking into account the larger income that The Bull Pen will provide, compared with the potential revenues you can generate from the two cottages in Bourton on the Water by using the annexe.
I admit the purchase will stretch your budget a bit, being about £50,000 more than you want to spend, but together they should produce £75,000 gross income per year, compared with £60,000 in Bourton on the Water – within three years you will have evened out the figures! As a footnote, have a look at Steppes Cottage and The Wool Shop in Blockley (p.64 – 67), another two properties that provide a winning combination, generating similar revenues for around the same initial budget – choices, choices, choices!

Having seen BBC 2's The Great Interior Design Challenge I have been inspired to find a house where I can create my own "wow factor". I am looking for something a bit tired where I can open up the space and do something innovative, rather than a complete renovation and/or extension. With two children, I need a minimum of three bedrooms and two bathrooms and would prefer a detached, period property with a garden and views in a quintessential Cotswold village. I have a budget of £600,000 max to spend on the finished project.

Karen Harrison says: "Period properties are inherently gorgeous and they usually have small reception rooms that might look good if knocked through but alterations inevitably require permission (if you can do it at all) and will cost heavily in materials and labour. Whilst liberating your creative side, you must also keep an eye on future saleability even if you intend this to be your forever home. Many buyers of period homes will dislike a very modern scheme because they expect a traditional Cotswold vernacular – inglenooks, stone floors, exposed beams and pale colours. Bold transformative design in a modern property is more likely to confound expectations in a good way and your money will also go a lot further. Far better to seek out something that requires imagination and updating, that won't require listed buildings consent, where you will have more space to play with and can be really creative with a broad palette of colours, materials, fixtures and fittings.

Modern family homes can get a bit passé and tired, especially when there isn't cash to splash around beyond the exigencies of everyday life, so there are occasional gems to snap up. For example, seventies chalet-style houses have great footprints and large gardens, and spending around £50,000 upon innovative design, opening up spaces and upgrading kitchens and bathrooms can be really transformative. A substantial makeover is only worthwhile, however, if it enables your family to live there for at least five years or so. Spent wisely, you should see every penny back but this is about indulging your passion for good design and not about making a quick profit, which is the reason why developers ignore such properties (unless something can be significantly extended or there happens to be a potential in-fill plot in the grounds).

High Jinks in Great Rissington, priced to sell at £545,000, cannot be extended and is not in need of anything more substantial than a lick of paint and some energetic gardening, but you could easily do something quite inspirational to create a really stunning home. It's a four bedroom "upside down" house built of Cotswold stone, situated in an ideal village for a family.
With an outstanding primary school and a traditional pub on your doorstep, plus a convenience store just along the road in Upper Rissington, and the A40 and A417 accessible in a few minutes, Great Rissington is as picturesque and traditional as you could hope to find anywhere in the North Cotswolds, making it a highly desirable place to live. With plenty of potential to indulge your creative urges, I would begin your transformation on the first floor where the living accommodation takes best advantage of views over the surrounding hills and garden. Currently, there is one living / dining room plus a comparatively small kitchen / breakfast room, bathroom and bedroom. I would totally mess with this space to create a huge kitchen / dining / living room, New York loft-style. Open plan living is very restful and sociable, and given the dimensions and the shapes in the main room, with deep eaves looking out to the views, the effect could be quite stunning. I would also knock out the existing kitchen, and the bathroom and bedroom, to create a wide-open 'L' shaped living space, given you only need three bedrooms. You could make another formal sitting room with dual aspect if you prefer, but remember to save some of that space for a generous cloakroom and loo - storage for garden coats, boots and so on, beyond the provision on the ground floor. Downstairs, there are already three great bedrooms and two bathrooms plus the utility room - keeping the noise of washing machine and tumble dryer away from open plan living makes perfect sense, so it's a very practical family home and the general arrangement works really well.
However, because the house is built into the side of the hill and the upward sloping garden is a bit overgrown, the bedrooms don't get quite enough of the available light, compounded by a substantial wooden balcony that serves the first floor. The balcony is a fantastic benefit and a lovely feature, catching the sun in the heat of the day and large enough to dine out with family and friends, but could do with an overhaul to allow light to flood through below – perhaps interspersing some of the floorboards with toughened glass might do the trick. Meanwhile, the lower part of the garden could be dug out a little further into the bank in order to give a broader, sunnier patio, when it would make perfect sense to install pairs of glazed doors opening out from the bedrooms at the back – how lovely to eat breakfast outdoors in your dressing gown and read the Sunday papers in complete privacy! The rest of the garden is great for children as it is, with a good flat lawn half way up, but by creating a series of interesting, well-maintained terraces climbing up towards a summerhouse at the top, say, you would have somewhere really beautiful and more grown-up to relax with a glass of wine and to revel in one's good fortune at having snapped up a designer's gem!"

Lettings

Melville Cottage, Great Rissington £975 PCM
A charming two bed period cottage, set within the beautiful village of Great Rissington. Large Kitchen/Diner | Sitting Room with Wood Burning Stove | Downstairs Shower Room | Two Double Bedrooms | Bathroom | Off Road Parking | Garden | Awaiting EPC
Fine and Country, Harrison James & Hardie, Stow-on-the-Wold 01451 833 170

4 Hazel Grove, Bledington £1,450 PCM
A well presented detached family home set within the heart of the pretty village of Bledington.
Entrance Hall | Sitting Room | Kitchen/Diner | Utility Room | WC | Master Bedroom with En Suite Shower Room | Two Further Bedrooms | Family Bathroom | Workshop | Rear Garden | Off Road Parking | EPC Rating: D
Fine and Country, Harrison James & Hardie, Stow-on-the-Wold 01451 833 170

3 Milton Court, Blockley £695 pcm
A spacious period apartment located in the North Cotswold village of Blockley. Entrance Hall | Kitchen/Breakfast Room | Sitting Room | Master Bedroom | En Suite Shower Room | Second Bedroom | Bathroom | EPC Rating: D
Harrison James & Hardie, Stow-on-the-Wold 01451 833 170

50 Croft Holm, Moreton in Marsh £1,075 pcm LET AGREED
A substantial detached family home situated within walking distance of the popular High Street of Moreton in Marsh. Reception Hall | Dining Room | Sitting Room | Kitchen | Utility | Cloakroom | Two Double Bedrooms with En Suites | Further Double Bedroom | Single Bedroom | Family Bathroom | Garage | Off Road Parking | Garden | EPC Rating: D
Harrison James & Hardie, Stow-on-the-Wold 01451 833 170

4 Church Street, Willersey £895 pcm LET AGREED
A charming period cottage set within the heart of the picturesque village of Willersey. Entrance Lobby | Kitchen Diner | Utility Room | Sitting Room | Ground Floor Shower Room | Three Bedrooms | First Floor Shower Room | Rear Garden | Car Port | Off Road Parking | EPC Rating: E

May Cottage, Kitebrook £845 pcm LET AGREED
A delightful refurbished semi-detached Stone Cottage with extensive rural views and some original character features.
Entrance Hall | Sitting Room | Kitchen/Breakfast Room | Two Double Bedrooms | Family Bath and Shower Room | Off Road Parking For Two Cars | Outside Storage/Utility Room | Front Garden | Large Rear Garden | Sunny Patio Area | EPC Rating: E
Harrison James & Hardie, Stow-on-the-Wold 01451 833 170
Harrison James & Hardie, Stow-on-the-Wold 01451 833 170

Country Homes from Harrison James & Hardie

Can't Buy, Won't Buy, Then Rent!

If you yearn for a traditional Cotswold home and won't compromise on your style but your budget doesn't quite stretch to the kind of properties gracing the pages of this magazine, then don't despair because we have the solution. Why not invest your funds in stocks and shares and rent something wonderful instead? Everyone tends to associate the lettings market with bland, modern properties but there are some exquisite period village homes to be found, if only you know where to look. The Manse at Naunton, offered through the Lettings department of Harrison James & Hardie, is a stunning example of just such an opportunity - a luxurious home in an outstanding setting that would absolutely justify your decision to rent not buy. Situated in the glorious village of Naunton on the western edge of the North Cotswolds (less than half an hour's drive to Cheltenham, with the centres of Bourton on the Water and Stow on the Wold both providing a wide range of day-to-day amenities and services), the Manse is a period Cotswold stone Grade II listed detached home presented in gorgeous decorative order, having lovely gardens and breath-taking views over adjacent countryside and the River Windrush.
Recently the subject of extensive renovation, you will have the benefit of two reception rooms, a kitchen with an Aga, four bedrooms and four bathrooms. Make this exquisite property into your very own country pad and turn all your friends and family green with jealousy for £2,600 per calendar month* - but hurry, it takes Caroline Gee and her team an average of two viewings to let a property so it won't be available for long! For further information on this and other period homes to let through Harrison James & Hardie, contact Caroline Gee, Lettings Director on 01451 833170. For more photos and a floor plan visit our website or, if you are reading this on our App, simply hover over the hotspots! *(plus upfront deposit monies and tenancy administration costs of £350.00 + VAT)

THE HENEVER: INVESTING IN UPPER RISSINGTON

Since the RAF base at Upper Rissington was sold by the MOD for development sixteen years ago, those who have brought up families here over the last few years will tell you what a wonderful place Upper Rissington is to live. Set high up on a Cotswold plain and surrounded by open countryside within a triangle formed by Stow on the Wold, Burford and Bourton on the Water, this little rural community has much to offer on its doorstep. Now, a new development is well underway, including a brand new school being built in the middle of the village, serving as a split site with the Outstanding primary school at Great Rissington. Also nearing completion is a fabulous community centre with a badminton court, a stage, changing rooms and showers, a large meeting hall and a fitted kitchen, designed to bring the whole village together for social occasions and a variety of sports / evening classes, and to provide a new home for the resident playgroup.
Allotments and cycle paths are planned in the outer edges of the village, through woods and across fields offering wide-open views over the Oxfordshire edge of the Cotswolds. The village market square will boast an improved range of amenities including a supermarket, shops and a restaurant. The location gives easy access to major arterial routes, not least a mainline station to London Paddington within a quarter of an hour's drive and now a bus route from the village, too. Whilst some new build environments take time to settle and mature, here the site is blessed with a huge variety of beautiful, established British trees, providing new residents with an immediate and delightful sense of life in the countryside. The village is already overrun with wildlife – hedgehogs, squirrels, muntjac deer, pheasants and green woodpeckers are all occasional visitors to gardens whilst a clamour of crows and red kites soar above in the broad skies, engaged in aerial combat. Light aircraft and gliders make the most of good weather at weekends and occasional troops of parachutists can be seen floating down beyond the wood - the RAF still makes occasional use of the airfield, a great enticement for any anorak plane spotters!

The Henever

A select development of new homes is rising up fast on the southern edge of the village, with many new families taking up residence over the last six months.
Mirroring the vernacular of the old part of the village, starter homes and family properties will broaden out into little cul-de-sacs of luxury homes in generous plots with double garages, enjoying splendid views. The Henever is a Bovis Show Home, exemplifying the style and quality of properties currently being built on the development, with one of this design available on the current phase being offered to the market at £410,995. This detached property has a free-flowing living space including a study, open-plan kitchen/dining room and a sitting room with patio doors opening out onto a private, enclosed garden. On the first floor there are four generous bedrooms and two bathrooms, all arranged around a wide, light-filled balcony landing. Outside, there is a detached garage and ample parking, making it a perfect family home. Amy Coldicott from Harrison James & Hardie Lettings reveals that investment buyers are also snapping up properties: "I have another four homes being offered out to rent as soon as they complete at the end of November. Rental values are really good – averaging £925 per calendar month for a three bedroom property - and there is a ready supply of eager young couples and families who are unable to get onto the housing ladder, who cannot afford the deposit or who don't qualify for the shared ownership scheme, falling over themselves to secure a property to let up here, instead." To find out more about buying a property at Upper Rissington, simply arrange an appointment via the sole agents Harrison James & Hardie on 01451 822977, or for lettings details telephone Amy Coldicott on 01451 833170. For floor plans, photographs, house types and prices, go to the property section of our website or, if you are looking at this on the App version, simply hover over the hot spots!

Interested in promoting your business in Cotswold Homes? Turn to page 123 to find out how!
DAYTRIPPER: ADLESTROP – A BEAUTIFULLY ENGLISH ILLUSION

WAS THE POET EDWARD THOMAS SENT TO HIS DEATH BY HIS BEST FRIEND ROBERT FROST'S POEM, THE ROAD NOT TAKEN? AND WAS THE GOLDEN SUMMER OF 1914 REALLY ALL THAT LUSTROUS? WE INVESTIGATE THE MYTHS SURROUNDING THOMAS' MUCH-LOVED POEM.

It is a work of endearing simplicity, a noting of an uneventful, unplanned stop at a Cotswold railway station. A few lines of inspiration snatched by a poet from a moment of nothingness: No one left, and no one came. Yet when read in the context of the four years of world-consuming bloodshed that was WWI – resulting in the death of the poem's author, Edward Thomas – these lines are heavy with tragedy, their imagery almost bitterly poignant. The 'Guns of August' loom somewhere in the background of this richly evoked late June day. Peace, like a summer day, suddenly seems a fleeting, ephemeral thing. Thanks to Thomas, we too still remember Adlestrop - though the Cotswold station enshrined in one of the nation's favourite poems was in fact closed in 1966. The village has continued to attract visitors seeking to experience Thomas' own Adlestrop, even though he never really experienced it (and they are inevitably disappointed to find that famous station has long since gone). In the July of the following year, 1915, Thomas enlisted to fight – even though, at the age of 37, he was not required to. His decision was to prove fatal. Why did he willingly throw himself into that danger? With a wife and three children to support, perhaps his motives were, in part, financial. Since graduating from Oxford, Thomas had chosen a difficult path, eking out a life as a critic and a reviewer of books, and their life had often been frugal.
But given that the often-melancholy Thomas was of a thoughtful disposition, it is likely there were more complicated reasons involved in his choice to fight. Since Thomas was generally disdainful of jingoism and immune to propaganda, many speculate – with good reason - that it was his friendship with the great American poet Robert Frost that pushed him to go to war. Though the two poets are now well known, at the time of their first meeting in 1913 in London, neither had made a significant mark. Frost's poetry had impressed the discerning Thomas, who through his newspaper work had become an influential critic of poetry. Theirs became a friendship of mutual admiration and mutual assistance: Thomas raised Frost's profile, and Frost saw in Thomas' prose the possibility for beautiful poetry (for despite publishing numerous books and thousands of articles, Thomas had not yet allowed himself to write a poem). When word of war arrived in 1914, both were sitting on a stile near Frost's Gloucestershire cottage, joking about the possibility of hearing guns from where they sat, then far away from danger. The pair took many reflective walks together in Gloucestershire (which they called 'talks-walking') during which Frost would tease his companion about his indecisive nature. On one such outing they encountered a gamekeeper who ordered them out of the woods, called them 'cottagers' and threatened them with a shotgun.
Frost boisterously challenged the man, even tracking him back down to his cottage, but the meeker Thomas' instinct was to withdraw, perhaps sensibly, from a man pointing a gun. But this self-perceived act of cowardice was to hound his thoughts for months to come. When Frost sent Thomas an advance copy of his poem The Road Not Taken (found many, many years later by the Favourite Poem Project to be America's best loved poem), the introspective Thomas might have read in those striking lines a subtle if affectionate rebuke – a reference to his indecisive nature - and felt compelled to make a gesture of bravery and certainty. Thomas' mind itself had long been a battlefield, and he questioned himself ceaselessly. Since university he had wrestled with black moods, and was plagued with doubts of his worth. Later, his widow Helen wrote: 'Our time together was never, as it were, on the level – it was either great heights or great depths.' Thomas would estrange himself from his family for weeks at a time, resenting himself for venting his misery upon them. But he also believed that England – or the very idea of England – was itself threatened by the war. 'It seems foolish to have loved England up to now without knowing it could perhaps be ravaged and I could and perhaps would do nothing to prevent it,' he wrote in a notebook on one of his walks with Frost. There can be little doubt that the English countryside was the primary inspiration to Thomas, who even though now commonly regarded as a war poet, wrote more about it than anything else. Even so, when Frost left England, Thomas acknowledged in a letter to his friend that 'Frankly, I do not want to go, but hardly a day goes by without thinking I should.' It was the arrival of Frost's The Road Not Taken that appeared to settle matters in Thomas' mind. The pair disputed the intended meaning of the poem, with Thomas left affronted and perhaps comparing himself unflatteringly to his more freewheeling American friend.
He abandoned a notion that he had long entertained – of joining Frost in America – in favour of fighting in the war, though their friendship remained fast. His wife, Helen, wrote a letter to Frost after her husband had reached Arras: 'What a soldier. Oh he's just fine, full of satisfaction in his work, & his letters free from care & responsibility but keen to have a share in the great stage when it begins where he is… And he said "Outside of you and the children and my mother, Robert Frost comes next." And I know he loves you.' However, the censor returned Helen's letter. Before she had the opportunity to send it again, she received news of Thomas' death. At the bottom of the letter, she added: 'This letter was returned by the Censor ages after I posted it. I have had to take out the photographs.' 'But lately I have just received the news of Edward's death. He was killed on Easter Monday by a shell. You will perhaps feel from what I have said that all is well with me. For a moment indeed one loses spirit [and] feeling. With his love all was well & is. We love him, & someday I hope we may meet & talk of him for he is very great & splendid.' Thomas was killed on the very first day of the battle of Arras, struck by the concussive blast of one of the day's last shells while he stood up to light his pipe. He had been in France for two months. Helen fell into a deep despair, and her next letters to the Frosts present a picture of a wife and mother in the grip of a near intolerable grief: 'I try to think the children will fill my life and he filled it, but I know they can't. And how can I fill theirs like he did.' Unfortunately, Helen's relationship with the Frosts took an awkward turn when she published her memoirs, As It Was, in 1926.
What was for her an exercise in confronting her grief seemed, to Robert Frost, to be an unnecessarily revealing and intimate account of her husband, perhaps contrary to Thomas as Frost wished to remember him. There fell a spell of silence between them that would last many years. Frost became a titan of poetry and Helen Thomas continued to write and publish. She died in 1967 – the year after Adlestrop Station was closed. One of the reasons why we love the tiny, untroubled world of Adlestrop so much is because it evokes a 'golden age' that existed prior to the war. But perhaps that time was anything but. The suffragettes were clamouring to win rights for women; the 'great unrest' of 1911-1914 saw the working class fight burgeoning inequality with massive industrial action. It was in many ways a time of turbulence, unfairly overshadowed by the magnitude of the events to follow. In short: don't go to Adlestrop. The station is long gone – the 'England' we imagined before the war was never really there at all. And poor Edward Thomas was only ever passing through. Pictures supplied by the Edward Thomas Fellowship

DENTAL HEALTH MATTERS

Acid Attacks
Dr Trevor Bigg, Milton Dental Practice BDS, MGDS RCS(Eng), FDS RCS(Ed), FFGDP(UK)

Dental decay is caused by bacteria or germs in our mouth, feeding on the sugars we eat and producing an acid - dentists call this an 'acid attack'. It only takes a few minutes for this acid to be produced, but it remains in the mouth for at least 20 minutes. In time, the acid will eat through the outer layer of enamel, through the softer dentine beneath and even through to the pulp or nerve of the tooth. This can cause an abscess, which could lead to the tooth's extraction. Dental decay has four components:

1. Carbohydrates or sugar in its many forms. Many experiments have shown that without sugar we get no tooth decay.
2. Plaque. This is the name dentists use for the colony of bacteria that form daily on our teeth.
3.
A susceptible tooth surface. Decay is more likely to start in the fissures on the biting surfaces of teeth, or at the contact points between two teeth or, as we age, along the gum-line.
4. Time. The longer the sugar is in the mouth, the more decay we will get.

Reduce the number of 'acid attacks'! Dental decay is not a one-way process. The two most important healing factors are saliva and fluoride. The body can heal small areas that have had an 'acid attack' by means of calcium, fluoride and other salts carried in the saliva. There are two types of saliva made by the body. When we eat we are stimulated to produce a lot of thin saliva. This contains salts that help remove the acids produced by plaque. When we are not eating we produce thicker saliva to lubricate the mouth. This saliva contains fewer salts to neutralise plaque acids. This is why a pudding or sweet after a meal will do us less harm than sweet foods eaten between meals. So to prevent dental decay:

• Clean your teeth after meals with a toothpaste that contains fluoride and use floss or inter-dental brushes to reach the hidden areas where decay starts.
• Try not to eat snacks that contain carbohydrates between meals. That includes sweets, chocolates, biscuits (particularly so-called energy bars), cakes and soft drinks.
• If you can't clean your teeth, try to finish a meal with a little cheese or use sugar-free gum.

If you want more information about the contents of the article, go to the British Dental Health Foundation website, under Tell Me About/Dental Decay.

TIM SPITTLE: NEW INDOOR & OUTDOOR GYM

PERSONAL TRAINER TIM SPITTLE TALKS TO MATT DICKS ABOUT THE INSPIRATION BEHIND HIS NEW INDOOR & OUTDOOR FITNESS CENTRE, FREESTYLE 360.
TIM & AMY SPITTLE'S NEW PROJECT FREESTYLE 360 WILL MAKE THE MOST OF HIS FAMILY'S FARMLAND TO CREATE AN ENTIRELY NEW CONCEPT IN PERSONAL FITNESS. TIM'S PARENTS HAVE FARMED IN BLOCKLEY FOR MANY YEARS BUT TIM HAS ALWAYS HAD A STRONG ENTREPRENEURIAL STREAK. WHEN LAUNCHING RAPID FX, TIM WAS ABLE TO HARNESS HIS NATURAL FOCUS AND DETERMINATION BY PROVIDING A BESPOKE FITNESS PROGRAMME THAT HELPED CLIENTS, ON A ONE-TO-ONE BASIS, TO GET FIT AND STAY FIT. HIS FIRST STUDIO PROVED SO SUCCESSFUL THAT HE WAS ABLE TO OPEN A SECOND LARGER GYM, THEN ANOTHER IN CHIPPING CAMPDEN AND MEANWHILE, HE BEGAN TO DEVELOP THE IDEA THAT STARTED OUT AS BOOT CAMP INTO A WHOLLY NEW AND EXCITING CONCEPT FOR TRAINING AND FITNESS. TIM & AMY'S NEW INDOOR AND OUTDOOR GYM FREESTYLE 360 OPENS ON MONDAY 8TH DECEMBER 2014 – HE DESCRIBES IT AS "A DREAM COME TRUE FOR FITNESS LOVERS".

Tim, there is a real buzz around your unique venture – can you describe what Freestyle 360 offers and how did the idea evolve?

I wanted to offer a better solution to overall health and fitness and provide a better quality of training that would interest and excite a wider range of people. It made sense to integrate the land with the established training business to embrace a brand new concept in fitness training, offering a unique indoor and outdoor facility in beautiful Cotswold countryside, encompassing all aspects of health and fitness. We are lucky enough to have twenty-eight acres of great terrain where we will be able to host new and exciting classes, 360 personal training and mud running, as well as offering effective advice on lifestyle and nutrition.

How does Freestyle 360 stand apart from other local gyms and what challenges have you faced?

It has a wider range of facilities under one roof and the sky! With a huge amount of outdoor space we have been able to build a fifty-station obstacle course, as well as up-to-the-minute equipment and the latest types and styles of fitness training, for example Barrefit, Cross training, Tabata and mud runs. The biggest challenge has been no sleep - I had a vision and a timescale to ensure it opened on time, which meant maintaining the current gym to our high standards and working all hours behind the scenes. It's been very physically exhausting because of the diversity and size of the project. My wife & I couldn't have done it without the immense help of our family, trainers and close friends.

The fifty-station obstacle course sounds rather challenging. Who is this aimed at?

This course is aimed at anybody who likes the idea of challenging themselves and taking their fitness to another level. That isn't to say you have to be an elite athlete - it's based around fun outdoor fitness for all ages and abilities. It's a new dimension offering a total body workout over a challenging terrain and obstacles. We are so excited about this and our idea for a Mud Run Obstacle event next year, too - mud runs are really addictive, so watch this space!

I have never walked into a gym before so what would entice me into yours?

We pride ourselves on client care, supporting and motivating each person one hundred per cent of the time. You will get a big warm welcome from someone who will listen and understand your needs and goals, who will take you by the hand without judging, to help you achieve the results you want. We now use Myzone, for example, as a way of recording your personal progress, truly to understand how you are performing whilst you are training, which is a great motivator.

How does the monitoring system Myzone work, exactly?
Quite simply, it’s similar to a heart rate monitor that is also linked to a hub within the gym, so it downloads everything you do and calculates all that information into a comprehensive list of statistics. It also helps with nutrition and allows your trainer to tailor your fitness programme to your optimum level, as you can both look back and analyse your performance. It can be used wherever you are in the world to record your results, with lots of great potential applications - for example, during a class it can display your heart rate on a screen, which is an interactive experience that certainly adds a new dimension to spinning classes! You have always had a strong entrepreneurial flair and are very driven in business. What motivates you, what aspirations do you have for Freestyle 360 and how are you preparing for your success with this project? Not wanting to fail drives me to succeed but really, it’s about being creative and coming up with an idea, then putting it into practice and seeing it grow - that’s the bigger motivation. I want Freestyle360 to be known as the best place to train in the area, somewhere that people can come and enjoy fitness and be among likeminded people. As far as longer-term aspirations go, I’d like national recognition for being a training venue for mud running. We already have a great team of trainers, but we are also taking on new staff and employing an apprentice to learn how to train the Freestyle 360 way, in order to allow the business to grow quickly but organically. As we move into 2015, we are already planning a whole calendar of events such as mud runs, our own take on a strongman / woman competition, corporate days and team building events. Due to the location and ample parking the venue is perfect for private parties. We have an amazing music and lighting system - and even a disco ball! Blockley holds a great attraction as a holiday destination, too – how about something for weekenders? 
As you say, there are many second-homers and holidaymakers in the Cotswolds who work and live away during the week, so we will also offer a weekend-only membership that will enable people to train more cost effectively without the commitment of a full membership.

Will there be any special offers for the launch on 8th December and what do you get as a new member?

Yes, we plan to offer a number of opening offers, such as waiving our joining fee, and on-going offers that will either feature on our website or be sent out via e-mail. New members receive a health appraisal and a bespoke training plan with their membership card plus a Freestyle360 drawstring gym bag, a water bottle and car sticker. Guests are welcome at any time - just give us a call because we'd love to show you round. I promise you won't want to leave!

Diary of a Farmer's Wife: Winter

Winter, my second favourite season after spring. I love the dark days, log fires and the exciting run up to Christmas. My husband is not so keen on early starts in the dark and early finishes in the dark equals much grumbling about 'not getting anything done'. The net result of this anxiety about shorter daylight hours is, basically, that if I want a log fire then I need to chop the wood. I don't have a problem with this in principle, as I am more than capable and since my parents gave me a little axe for Christmas last year, kindling is now something of a speciality. However, the fact that our wheelbarrow has had a puncture since January makes getting the wood to the back door something of a challenge. On the farm, the focus has switched from arable to beef. The winter crops are sown and the cattle are in their housing and need feeding twice a day – quite a commitment. Up until this year feeding the cattle involved manually forking hay up into hay ricks that ran the length of the barn along both sides of a passageway.
This was an achievable task for a young man who needed to work off the excesses of college life but fast forward over a decade to one suffering from (what I call) parenting fatigue and we have an urgent need for improvement. Enter William Brittain-Jones, a childhood friend of Jimmy's who works wonders with agricultural buildings. At the beginning of autumn he pitched up and set to on the farm, working wonders on the barn. 1260ft of timber, 432 man hours, 250 welding rods and quite a few cups of tea later, the cattle now eat their hay and silage through ground-level feed barriers (upcycled and therefore 'on trend'). The new wider passage also means that Jimmy can drive his tractor into the shed and use a silage feeder to distribute the feed along the passage. The time it takes for him to feed the cattle has now reduced from three to two hours a day - that's one extra hour a day that has suddenly become available. Time that I happen to think would be very well spent mending wheelbarrows and chopping wood. Happy days! Anna MacCurrach

FARM PARK: Working on through Winter

As the Cotswold Farm Park closes its doors to visitors for eight weeks, things must keep turning over behind the scenes. Countryfile presenter Adam Henson anticipates the arrival of spring… As you might expect, each year follows a very similar pattern for us here on the farm, both on the arable side of the business and for the livestock. Things don't just run like clockwork by themselves though – a lot of hard work goes on behind the scenes throughout the year. One of the main highlights, particularly for visitors to the Farm Park, is our lambing season. We reopen our gates to the public on February 14th and we'll be hoping to have our first lambs born at around the same time.
In order for that to happen, though, my Livestock Manager, Mike, carefully plans the dates using the average gestation period of 5 months. After some fairly rigorous health checks to ensure they’re all ready for the task ahead, the ewes first meet their suitors at the end of September, marking the start of ‘tupping’. Each ram is fitted with a special harness called a ‘raddle’; it has a wax block attached to the chest and each week, the colour of the block will change. This helps the Livestock team and me to estimate the ewe’s due date, as when she becomes pregnant, the ram will lose interest. We can tell by the colour of the marking on the ewe when the last mating would have taken place and bring them into the lambing shed with plenty of time before their due date. Before we greet another wonderful (albeit hectic) lambing season though, we have the winter months to get through. The Farm Park closes to the public for 8 weeks and we can all put our feet up for a while … if only! In reality, the farm itself continues as normal, but sometimes with the added complication of adverse weather conditions. Being 960ft above sea level, we are usually the first to see snow falling and then we’ll still be trudging around in it days after the slush has disappeared from the lower ground! There are still crops to fertilise and keep an eye on, as well as all of the livestock to feed, look after and, in the case of some of the more vulnerable animals, keep sheltered from any severe conditions. The Farm Park continues to be a hive of activity as well. We start compiling lists of ‘winter improvements’ all the way back in September, so if you’re a regular, see if you can play spot the difference!
We prepare for the new season with lots of site maintenance work, as well as staff induction and training days to make sure we’re all at the top of our game, ready and waiting for our first visitors of the year. Hoping to see you soon, Adam

Photo: Lynne Milner

A Message of Hope And Peace for the New Year, in Memory of the Fallen

Like most communities, on the 11th November we remember the fallen. In our little corner of the Cotswolds this year, on the hundredth anniversary of the outbreak of WW1, we were reminded once again of the tragedy of the Souls family of Great Rissington. Annie Souls had five sons who went to war. None of the five returned, all killed in action. A parent’s worst fear realised, emphasising the sacrifice made by so many families. As this year draws to a close and another dawns, I find myself looking back over the last twelve months. 2014 has caused us to look so much further back than usual, over the last hundred years – remembering, and giving thanks for all who offered, and still offer, their lives in service of Queen and Country. The end of a year also prompts me to look forward to what the New Year might hold. Some of us may do so with fear, worrying what war might bring, but we should also look forward with hope. Our remembrance of the past, its lessons and pain, serves to emphasise the need to learn from those experiences and to pledge to strive together for a different future, where our best hopes might be realised rather than our worst fears. There are no guarantees but the Christian message is one of hope.
In the last week of every year, during one of the busiest times of year, we should pause to remember a baby, the Christ child Immanuel whose name means “God With Us” and in whom, as the beautiful Christmas Carol tells us, our ‘hopes and fears of all the years are met in thee tonight’. I hope that 2015 is a year of peace, joy, love and hope, but even if there is sadness and despair, that the message of Christmas continues beyond the end of the Christmas leftovers and after the decorations are down, to remind everyone that God is with us, now and for always.

Rev Rachel Rosborough
Rector, Bourton on the Water with Clapton & the Rissingtons

WHAT THE GAMEKEEPER SAW

Guiting-based Gamekeeper Adam Tatlow always keeps his camera handy so he can capture his encounters with the wildlife of the Cotswolds. Having received no formal education in photography, Adam has nonetheless attracted much attention for his work, exhibiting around the Cotswolds and having his photographs published in local and national media. His next exhibition will be held at WWT Slimbridge from December 7th to February 8th. Visit Adam’s website to browse his portfolio and buy images and cards online.

TANNER & OAK Business Showcase

Beautiful Bags and Artisan Satchels

Kira Watkin shares the story of her business, from a London College to the Cotswolds via the French Riviera - and tells of her unrivalled passion for British manufacturing.

Hi Kira. What’s special about your designs? My bags and accessories are constructed to last, and the leather we use ages beautifully with use, so in turn I needed to create designs that have longevity. There's no point buying a bag to last years if it is so stylised you would grow bored of the design in 6 months.
Hence, the pieces are classic with a quirky or modern twist.

What made you decide to become a designer and where did you train? I trained at the London College of Fashion but didn’t think I would end up running my own design business as I have always been on the product side rather than the business side of fashion. After a move to the French Riviera working with some amazing French and Italian designers at Faconnable, along with a big boost in self-confidence (moving to a foreign country on your own without speaking the language will do that to you!) I felt like I could take on the challenge. I always imagined that if I were going to be my own boss, it would be much later in life.

What’s important to you about British manufacturing in particular? I have always had a love for the old-school British factories. Over the years I have visited some amazing manufacturers around the world, and don’t get me wrong, there are some slick operations in beautiful countries like Italy - and I am also not opposed to 'Made in China'. People often think that it means cheap, but it doesn’t have to. More often than not, it does, but that is because it is what the consumer wants. Asian production has no limits and almost anything is possible. But for me, there is a real romance in British manufacturing and visiting the family-run factories with all their old-school techniques and machinery. Our country has such a strong manufacturing history and it is lovely to still be part of that.

Who is your target market? Who buys your products? I have actually found that the designs are appealing to both young and old. I feel like I've succeeded in my design goal with that though - classic enough to appeal to the older generation whilst quirky enough for the younger generation. There are young trendy city girls carrying bags in London and yummy mummies travelling the Cotswolds with them.

Which is your favourite piece? Why? It’s hard to choose. I love the attention the Leighstone bag gets when I'm out and about, but I just adore carrying the satchel as well. If I could have the Leighstone in every colour I would!

What are your plans for the future? I would love to open a shop. Currently we do online sales, which are obviously a crucial part of trading these days, and country shows and fairs including CLA, Moreton-in-Marsh and the Daylesford Christmas Fair. I love doing the shows, I find it exciting and love speaking to customers and hearing all the lovely comments from people. If you ever see the Tanner & Oak tent at a show then pop in and say how pretty my bags are, even if you don’t intend on buying one - it really does make my day! Unfortunately, with a baby on the way (due in the Spring) the shows have been put on hold next summer. A little shop is my ideal next step. I'd love a high street presence, but at the moment it is hard to get in the shops. I made a conscious decision when I set up Tanner & Oak to not scare customers away with the prices. Obviously it is much more expensive to manufacture in the UK than overseas, so as a result I have very low margins. Unfortunately, this makes wholesale to shops tricky as most operate a margin anywhere from 3 to 7 times cost price - which I just can’t compete with. I'd also love to hire someone to do my social marketing - not my strong point! Oh, and a weekend bag next year…

What motivates you to succeed in business and what has been your proudest moment so far? I did a collaboration project with Bristol Motors earlier this year that I'm very proud of. Bristol have the most stunning cars, I used to love going to the office and sneaking a peek at all the beautiful vintage cars. We did a folio cover for them. It's great being associated with such an iconic British company.
What’s your background & what do you like about the Cotswolds? I grew up all over the place; most of my childhood was in Berkshire, so not far from the Cotswolds, and then Devon. I moved to the Cotswolds when I came back from France two years ago, and just couldn’t be happier! Every time I take the dog, Chester, out on a fabulous weekend walk or drive to friends and look out of the window at the rolling countryside and gorgeous little towns I feel so lucky. All of our London friends adore coming up to see us - the Cotswolds is just so untouched and beautiful, we love it. I can’t imagine leaving and we are so thrilled we get to raise a little family here now as well. Find out more and browse products at

MOVIE INSIDER

ADAM RUBINS, CEO OF AWARD-WINNING MEDIA AGENCY WAY TO BLUE, IS AN UNAPOLOGETIC MOVIE JUNKIE. USING HIS POSITION TO PREVIEW THE COLDEST SEASON’S HOTTEST RELEASES, HE’S HERE TO GIVE OUR COTSWOLD READERS A SNEAKY GLANCE AT WHAT’S IN STORE FOR UK CINEMA AUDIENCES, FROM ARTISAN FESTIVAL FAVOURITES TO ANIMATED BLOCKBUSTERS. BUT HE’S BEEN LEFT FLUMMOXED BY AN EMERGING PHENOMENON…AN INCREASING TREND FOR OVERLY-LENGTHY FILM TITLES.

It’s that time of year again and your Movie Insider is feeling rather Christmassy. This issue, we take a look at the films that will warm your icy bottoms over December, January and February – and, boy, is there plenty to choose from. I’m feeling besieged by films with these ever-lengthening movie titles that seem at war with my shrinking word count – check out the abundance of colons in this little lot. In December we have animated spin-off Penguins of Madagascar (5th), The Hobbit: The Battle Of The Five Armies (12th), Night At The Museum: Secret Of The Tomb (19th) and Ridley Scott’s Moses epic Exodus: Gods And Kings (26th). In January, the colon gets a further two outings with horror sequel The Woman In Black: Angel Of Death (1st) and Matthew Vaughn’s action spy thriller Kingsman: The Secret Service (29th).
A little relief arrives in February with The Second Best Exotic Marigold Hotel (27th). Moses, mummies, penguins, spies, hobbits and ghosts…Well, it’s a real pick n’ mix in December’s bag, that’s for sure. So from long titles to potential Academy challengers, this time of the year usually sees critics’ favourites lead the late charge for award season. Look out for Angelina Jolie’s war drama Unbroken on December 26th and Stephen Hawking biopic The Theory Of Everything on January 1st. Additional darlings from the festival circuit include the return of Michael Keaton in Birdman (Jan 2nd), Steve Carell drama Foxcatcher (Jan 9th), Miles Teller starrer Whiplash (Jan 16th) and Reese Witherspoon drama Wild (Jan 16th). And if that’s not enough, we round out with a wide variety of fun-filled blockbusters: the return of Harry and Lloyd in Dumb and Dumber To (Dec 19th), the Annie musical re-make with Jamie Foxx and Cameron Diaz (Dec 26th), Liam Neeson sequel Taken 3 (Jan 8th), Disney musical Into The Woods (Jan 9th) and Seth Rogen Korean comedy The Interview (Feb 6th) – and don’t forget to lock up your partners on Valentine’s because it is Fifty Shades of Grey time, as the kinky smash-hit franchise finally arrives on screen (Feb 13th). If you’re looking for family fun, January brings you the next Disney Feature Animation adventure in Big Hero 6 (30th) and in February Shaun The Sheep comes to the big screen (6th).

FOLLOW ME ON TWITTER: @ADAMRUBINS

Now, at this point I usually like to leave you with a few personal tips. Outside of the contenders that we have already discussed (Kingsman, Birdman, Foxcatcher, Whiplash and Wild), look out for Alex Garland’s sci-fi drama Ex Machina (Jan 23rd) and Paul Thomas Anderson crime comedy Inherent Vice (Jan 30th). My suggestion? Start saving now, because with these flicks you’ll be spending the vast majority of Winter in a cosy theatre near you.
WINTER EVENTS: COTSWOLD CALENDAR DECEMBER 2014

THERE’S PLENTY GOING ON THIS WINTER IN THE COTSWOLDS, ESPECIALLY IN THE RUN UP TO CHRISTMAS. HERE WE’VE ROUNDED UP OUR PICK OF FAYRES AND PANTOS AND NOTED A FEW SPECIAL APPEARANCES FROM FATHER CHRISTMAS HIMSELF.

OUR FAMILY PICK: MOTHER GOOSE AT THE THEATRE CHIPPING NORTON
TUESDAY 18TH NOVEMBER – SUNDAY 11TH JANUARY
If you only see one panto every year, make it the Chippy panto: beautifully designed, boisterous, laugh-a-minute…fun is virtually guaranteed. This year’s frolic is suitably infused with ‘snowmen, northern lights and woolly jumpers…and even the occasional moose.’ Mother Goose asks its audience: What would you give to be rich and beautiful? Featuring fabulous songs from a TONY award-winning composer.

SANTA SERVICE AT THE GWR STEAM RAILWAY
SATURDAY 29TH NOVEMBER – SATURDAY 24TH DECEMBER
Give your children an adventure they won’t forget by getting some tickets for this special Christmas service, featuring special appearances from Father Christmas. You can either depart from Cheltenham Racecourse and visit the grotto in Winchcombe or leave from Toddington and meet Santa on board – but don’t delay, tickets go fast. Visit the website for more details.

ENCHANTED CHRISTMAS AT WESTONBIRT ARBORETUM
FRIDAY 28TH NOVEMBER - SUNDAY 21ST DECEMBER 2014
A one-mile illuminated trail lights the nature lover’s way into Christmas this year, featuring Father Christmas in green, a chance to see reindeer and many fantastic new varieties of plant life to discover.

SLEEPING BEAUTY PANTOMIME AT THE EVERYMAN THEATRE
FRIDAY 28TH NOVEMBER 2014 – SUNDAY 11TH JANUARY 2015
Giffords Circus fave Tweedy the Clown returns to the Everyman with his own inimitable brand of hilarious capering for a new panto season. Get ready to whoop, cheer and boo as the time-honoured tale of Sleeping Beauty is brought to life.
BEAUTY AND THE BEAST PANTOMIME AT THE ROSES THEATRE, TEWKESBURY
SATURDAY 29TH NOVEMBER 2014 - SATURDAY 3RD JANUARY 2015
See The Roses’ take on this timeless tale of beastliness and hidden beauty with an extended run. Will Belle see through the beast’s foreboding exterior to the gentle heart within? Take the children along and find out.

VICTORIAN LATE NIGHT CHRISTMAS SHOPPING
FRIDAY 5TH DECEMBER 2014
Bourton-on-the-Water once again welcomes all-comers to its traditional late night shopping. Make sure you’re there at 6.00pm for the lighting of the Christmas Tree (proudly situated in the middle of the River Windrush) and stick around for Father Christmas, the jazz band and the hog roast.

CHRISTMAS ORATORIO CONCERT, GLOUCESTER CATHEDRAL
SATURDAY 6TH DECEMBER 2014
Come to the inspiring Gloucester Cathedral to hear the Gloucester Choral Society perform Bach’s ‘Christmas Oratorio’, accompanied by the Corelli Orchestra. A festive treat for music lovers. From 7pm, tickets £10 - £30.

VICTORIAN CHRISTMAS AT THE HOLST BIRTHPLACE MUSEUM, CHELTENHAM
SATURDAY 6TH DECEMBER 2014
Why not partake in a Victorian Christmas at the Holst Birthplace Museum? Make some festive cards, stir the Christmas pud, get crafty and have a sing-along around the piano while enjoying a visit to the birthplace of this famous composer. Tickets £5, concessions £4.50.

SANTA (& REINDEER) LIVE IN CHELTENHAM
SATURDAY 6TH DECEMBER & SATURDAY 13TH DECEMBER
Be careful not to miss these two appearances at Cheltenham’s Brewery – Santa’s on a tight schedule! From 12pm-4pm on Saturday 6th and 13th December 2014 you can come and cuddle a reindeer. Make sure you’ve got your lists ready!

MEET FATHER CHRISTMAS AT ADAM HENSON’S COTSWOLD FARM PARK
SATURDAY 29TH NOVEMBER - SUNDAY 21ST DECEMBER 2014
Have you been naughty or nice? Find out by taking a trip to the much-loved Cotswold Farm Park to see Father Christmas himself (Saturdays and Sundays only). Admission to the grotto is included in the ticket price, so there’s no excuse not to bring your lists along!

WINCHCOMBE CHRISTMAS FESTIVAL
TUESDAY 2ND DECEMBER 2014
All the delights of a Christmas Fayre – shopping, street entertainment, Santa’s grotto – all hosted in one of the Cotswolds’ oldest and most appealing villages. Join in the festive fun times from 5-8pm.

BROADWAY LATE NIGHT CHRISTMAS SHOPPING
FRIDAY 5TH DECEMBER 2014
Beautiful Broadway gets into the Christmas spirit with its traditional pageantry. Carol singing, horse and carriage rides, hog roasts, chestnuts – get the full festive package from 5.30-8.30pm.

GLOUCESTER QUAYS FESTIVE FAYRE
FRIDAY 12TH - SUNDAY 14TH DECEMBER 2014
Hear the brass band, meet a real-life Rudolph and get some last-minute Christmas shopping done at Gloucester Quays this Christmas season. Shrewd shoppers can even pick up a Christmas tree.

THE INTERNATIONAL AT CHELTENHAM RACECOURSE
FRIDAY 12TH DECEMBER – SATURDAY 13TH DECEMBER 2014
Featuring the Stan James International Hurdle, one of the most important hurdles of the season (won last year by the Nicholls-trained Zarkandar) and Brightwells Bloodstock Sale. What better way to see off the year than with two days of thrilling race action? For up-to-date times and online ticket purchasing, head to the website.

COTSWOLD CALENDAR: NEW YEAR EVENTS 2015, JANUARY-FEBRUARY

START OFF 2015 IN STYLE WITH OUR PICK OF TALKS, CONCERTS, PLAYS, SHOWS AND SPORTING EVENTS.

DAVID DURSTON: TWO GENRES, CORINIUM MUSEUM CIRENCESTER
SATURDAY 10TH JANUARY - SUNDAY 8TH FEBRUARY 2015
Award-winning local artist and beloved educator David Durston exhibits his ‘Two Genres’ exhibition in the Corinium Museum. The title ‘Two Genres’ makes reference to the dual strands, developed in parallel, in more recent works – abstract and figurative.
Painting techniques from both styles have served to inform each other, resulting in the production of vibrant images. People familiar with Durston’s work may be both surprised and interested by his development as an artist.

JANUARY

NEW YEAR’S DAY RACING AT CHELTENHAM RACECOURSE
THURSDAY 1ST JANUARY 2015
What better way to see in a new year and brush off the Christmas lethargy than by catching some blood-pumping racing action at Cheltenham Racecourse? Seven races, tours and children’s entertainment including face painting and a special appearance from Peppa Pig make a visit to the racecourse the winning choice for new year family fun. Gates open 10.30am – ticket prices and booking online.

GLOUCESTER RUGBY VS SARACENS, KINGSHOLM
SATURDAY 10TH JANUARY 2015
Rugby titans clash when AVIVA Premiership finalists Saracens return to Kingsholm for another go at the Cherry and Whites. Will Gloucester be able to avenge their last defeat? Find out from 3pm. Tickets online.

COTSWOLD CIRCULAR WALK, FROM THE CHEQUERS CHIPPING NORTON
THURSDAY 1ST JANUARY 2015
A circular walk of 5.0 miles. Bring good walking shoes or boots (or, if wet, wellingtons) plus weatherproof clothing. Dogs on leads are permitted. Children with parents come along free of charge. Walks usually start and finish at a country inn for easy parking and for those who wish to partake of a light pub lunch upon return (at your own expense). Meet at 10.00am, cost £5.50.

PHILHARMONIA ORCHESTRA, CHELTENHAM TOWN HALL
SUNDAY 11TH JANUARY 2015
And now for something a little more refined…Don’t miss this concert from the highly acclaimed Philharmonia Orchestra, returning to Cheltenham to the delight of Cotswold music-lovers. Led by Domingo Hindoyan of the Berlin State Opera, the orchestra will treat audiences to Beethoven’s ‘No. 5’ in E flat major, followed by ‘Symphony No. 7’ in A major, plus Grieg’s ‘Holberg Suite, Op. 40’.
Tickets from £11.50.

STUART MACONIE, CHELTENHAM TOWN HALL
SUNDAY 18TH JANUARY 2015
Beloved radio personality and journalist Stuart Maconie swings by Cheltenham Town Hall to discuss his book The People’s Songs, an informal social history as told through fifty influential tracks. From 7.30pm, tickets £15.

A GIRL WITH A BOOK, THE THEATRE CHIPPING NORTON
THURSDAY 22ND JANUARY 2015
It’s October 2012. Gunmen stop a bus in Pakistan and shoot three girls for wanting to go to school. How can a writer respond? Can a Guardian reader really be prejudiced? Knowing nothing about the situation, and able to offer little more than outrage, this writer was forced to come out from behind his desk and go into the community searching for answers to help him tell the story of a brave young woman’s fight for girls’ education. When his research uncovers attitudes at odds with his liberal convictions, he also has to deal with what he learns about himself. From 7.45pm. Tickets £13.00 / Concessions £11.00 / Schools £8.50.

CHELTENHAM FESTIVAL TRIALS DAY, CHELTENHAM RACECOURSE
SATURDAY 24TH JANUARY 2015
Pick up some tips for The Festival at Trials Day, with seven action-filled races and bloodstock sales. Come along and see if you can identify a future Festival winner. Racing starts from 12.40pm. Ticket prices £12 and £25 per day in advance, or £15 and £30 on the day (packages and group discounts available).

FEBRUARY

FOLK THREE FESTIVAL, CHELTENHAM TOWN HALL
FRIDAY 13TH - SUNDAY 15TH FEBRUARY 2015
Six acts, three nights and one stage. Take in three barnstorming headliners (Shooglenifty, Eliza and Martin Carthy and Seth Lakeman) and three fabulous supports as Cheltenham gets folked up good and proper. Pick and choose which evening to attend at £20 a night, or visit all three for only £50. A must for music lovers.

STEPHEN K. AMOS, THE THEATRE CHIPPING NORTON
FRIDAY 13TH FEBRUARY 2015
The maestro of feel-good comedy is back on tour with his new show, fresh from sell-out tours of Australia and New Zealand, as heard on BBC Radio 4’s Life: An Idiot’s Guide and What Does the K Stand For? All tickets £18 / Starts 7.45pm.

MICHAEL PORTILLO ‘LIFE: A GAME OF TWO HALVES’ AT GLOUCESTER GUILDHALL
FRIDAY 20TH FEBRUARY 2015
Since quitting politics, Portillo has made a name for himself in presenting and reviewing. There will be much for him to discuss when he visits Gloucester, from his days advising Margaret Thatcher to his programmes on single motherhood, teenage suicide and railway travel. Doors 7.30pm, tickets £16.

JIMMY CARR’S ‘FUNNY BUSINESS’, CHELTENHAM TOWN HALL
TUESDAY 24TH FEBRUARY 2015
Comedian and presenter Jimmy Carr takes his latest sell-out tour on the road, and if you’re a fan of Jimmy’s close-to-the-bone, cringe n’ giggle comedy style, you better book fast – tickets to his shows go like hot cakes.

AND THEN THERE WERE NONE, EVERYMAN THEATRE CHELTENHAM
MONDAY 26TH TO SATURDAY 31ST JANUARY 2015 (MATINEES THURSDAY 29TH & SATURDAY 31ST JANUARY 2015)
It’s the tenth anniversary of the Agatha Christie Theatre Company, and to celebrate they’ve adapted the author’s masterpiece And Then There Were None, which has sold more than 100 million print copies. When ten strangers are suspiciously drawn to a distant island under wildly different pretexts, it’s anyone’s guess as to which of them – if any – will survive. From 7.45pm (matinees on Thursday and Saturday from 2pm). Tickets begin at £16.

PROMOTE YOUR INDEPENDENT BUSINESS OR LOCAL EVENT WITH COTSWOLD HOMES!
WITH AN ESTIMATED READERSHIP IN EXCESS OF 50,000 RESIDENTS AND VISITORS EACH QUARTER, THIS BEAUTIFUL MAGAZINE WILL BE FREELY DISTRIBUTED TO THE LOVELIEST HOMES, HOTELS AND TOURIST ATTRACTIONS IN OVER 60 VILLAGES AND TOWNS THROUGHOUT THE NORTH COTSWOLDS, AND CAN BE FOUND ON MAGAZINE STANDS AT LOCAL TRAIN STATIONS ALONG THE FIRST GREAT WESTERN ROUTE INTO LONDON PADDINGTON AND OUTSIDE THE OFFICES OF HARRISON JAMES & HARDIE IN STOW ON THE WOLD, BOURTON ON THE WATER AND MORETON IN MARSH. SPREAD THE WORD ABOUT YOUR BUSINESS OR EVENT TO THE NORTH COTSWOLD MARKETPLACE, FROM STARTUP VENTURE TO CELEBRITY SUCCESS STORY!

STEP 1. BECOME A MEMBER OF COTSWOLD HOMES DIRECTORY OF INDEPENDENT BUSINESSES: from only £10 plus VAT per month for annual membership. Get a two-page listing on Cotswold-Homes.com, receive free entry to our networking events and up to 30% discount on all magazine, social media and online advertising.

STEP 2. EVENTS CALENDAR OR PC OFFER: only £112.50 plus VAT per quarter (with annual membership). Our standard campaign includes: monthly e-marketing to thousands of residents and visitors via our Cotswold Homes database, Facebook and Twitter; a Feature Advert on Cotswold-homes.com Privilege Card Section / Events Calendar; a space in the Privilege Card section or Events Calendar in one edition of our magazine.

STEP 3. ADVERTISING: from only £112.50 for a quarter page (with annual membership).

CHRISTMAS TREE FESTIVAL, ST EDWARD’S CHURCH, STOW ON THE WOLD
THE FESTIVAL RUNS BETWEEN 10.00AM AND 5.00PM FROM THURSDAY 4TH TO SUNDAY 7TH DECEMBER.
There will be 40 real Christmas trees, provided by Fosseway Garden Centre and decorated by children’s groups, schools, organisations and businesses. The choice of favourite tree in each category is voted for by the public. Plus a celebrity choice will be made this year by Tony Archer of Radio 4’s The Archers.

STEP 4.
BROADCAST YOUR SUCCESS STORIES WITH A RANGE OF BESPOKE PACKAGES
We offer many marketing USPs for established businesses and events including promotional editorial, bespoke e-marketing campaigns and mini-mags on the Cotswold Homes App.

R&D WALKER T/A P CHECKETTS: 10% OFF SAUSAGES. 5LBS OF RINDLESS BACK BACON FOR £9.99. VALID UNTIL 28/02/15. 24 High Street, Moreton-in-Marsh, Gloucestershire GL56 0AF. 01608 651002

Winter offers from a host of local businesses – make sure you pick up your card as soon as possible!

Free property appraisals, free photographs and up to £500 cash back for new joiners until the end of Feb 2015. T: 0208 935 5375 E: owners@character-cottages.com

OFFERING ADVICE IN ALL AREAS OF FINANCIAL PLANNING. FREE INITIAL CONSULTATION. VALID UNTIL THE END OF FEBRUARY 2015. JEM Financial Planning, Unit 3, Grafton House, High Street, Chipping Campden, Gloucestershire, GL55 6AT. Tel: 01386 840777. JEM Financial Planning is a trading style of John Magee which is authorised & regulated by the Financial Conduct Authority.

MOVING HOUSE? THEN CONTACT THE CONVEYANCING EXPERTS AND GET 15% OFF OUR STANDARD LEGAL FEES! CALL 01452 657950 FOR FURTHER DETAILS. Thomas Legal Group is a dedicated provider of conveyancing services in and around the Cotswolds. VALID UNTIL END OF FEBRUARY 2015. Thomas Legal Group, Brunswick House, Brockworth, Gloucestershire, GL3 4AA. E-mail: info@tlg.uk.com

NEW PATIENT EXAMINATIONS FOR ONLY £59.00 (NORMALLY £89.00). WITH A FREE DENPLAN EXAMINATION. ASK PENNY FOR DETAILS.

10% OFF ALL LOCAL CHEESE. 5 High Street, Moreton in Marsh, GL56 0AH. Tel: 01608 652862. sales@cotswoldcheese.com

Trevor Bigg, Breakspeare House, Shipton Road, Milton-Under-Wychwood, Oxford, OX7 6JW. 01993 831396

20% Off our premium made-to-measure hardwood window shutters. Call for a free no obligation survey & quote. Valid until 28/02/15. Tel: 01242 649592. 37 Eldon Road, Cheltenham.
GL52 6TX

10% OFF EVERYTHING IN STORE, PERFECT GIFTS FOR FRIENDS AND FAMILY. UNTIL THE END OF FEBRUARY 2015. Tel: 01451 822800. Box of Delights, High Street, Bourton-on-the-Water, Cheltenham, Gloucestershire, GL54 2AQ

£20 OFF A 360 MUD COURSE PERSONAL TRAINING SESSION. Valid until 28/02/15. Freestyle 360, Sheaf House Farm, Draycott Road, Blockley, Gloucestershire GL56 9DY. Info@freestyle360.co.uk M: 07818 033629 | T: 01386 700039

6.30-9pm Friday, Saturday, Sunday: Select your fish at time of booking & it will be delivered fresh on the day.

24 HOUR TAXI SERVICE: 50% OFF RETURNS FOLLOWING AN OUTBOUND JOURNEY WITHIN A 20 MILE RADIUS OF BOURTON-ON-THE-WATER. VALID UNTIL THE END OF FEBRUARY 2015. Tel: 01451 820778. Mob: 07585 308838 / 07748 983 311

LUNCH at Wesley House Restaurant - Wine Bar & Grill. Valid for 2 persons per Privilege Card, 26th November 2014 - 28th February 2015. Excludes: Christmas Eve, Christmas Day, New Year’s Day, Valentine’s Day. Wesley House, High Street, Winchcombe, Gloucestershire GL54 5LJ. Telephone: 01242 602366

15% OFF REFLEXOLOGY SESSIONS. VALID UNTIL THE END OF FEBRUARY 2015. Beautylicious, George Moore Community Centre, Moore Road, Bourton on the Water, GL54 2AZ. 01451 870 210. BeautyliciousBourton@gmail.com

Digbeth St, Stow on the Wold, Gloucestershire, GL54 1BN. T: 01451 828 129 M: 07900 058 919

20% Off Liberty Art Fabrics. Valid until 28/02/15. 01993 822385. Mob: 07976 353 996

20% OFF SURVEYS. PRIVILEGE CARD DISCOUNT MUST BE REQUESTED BEFORE QUOTE IS PROVIDED, CANNOT BE USED IN CONJUNCTION WITH ANY OTHER OFFERS, DISCOUNTS OR PROMOTIONS. VALID UNTIL THE END OF FEBRUARY 2015. Tel: 01285 640840. Central Surveying, 17 Black Jack Street, Cirencester, Gloucestershire, GL7 2AA

10% Off any bag (valid until the end of February 2015). Tel: 01993 812466. info@tanerandoak.com

50% OFF A 30 MINUTE RIDING LESSON FOR NEW CUSTOMERS OR BUY 2 LESSONS AND GET THE 3RD FREE. UNTIL THE END OF FEBRUARY 2015.
Durham’s Farm Riding School, Chastleton, Moreton-in-Marsh, Gloucestershire GL56 0SZ. 01608 674867

10% DISCOUNT OFF ALL NEW FURNITURE AND FABRIC. Valid until the end of February 2015. Tel: 01608 659091. 5 Threshers Yard, West Street, Kingham, Oxfordshire, OX7 6YF

COTSWOLD CARRIERS: FREE PACKING MATERIAL FOR YOUR MOVE IF YOU QUOTE THIS NUMBER WHEN YOU CONTACT US: 730500. UNTIL THE END OF FEBRUARY 2015. 01608 730500. Warehouse No 2, The Walk, Hook Norton Road, Chipping Norton, Oxon OX7 5TG

Welcome to Cotswold Homes Magazine, your indispensable guide to lifestyle and property in the UK's most beautiful region. In this seasonal i...

Published on Nov 24, 2014
I just don't know how to professionally describe what I'm trying to accomplish or what it's called in the programming world, but I hope that you'll get it. In Unity we have those functions like OnEnable, OnTriggerEnter etc. that fire when a specific event occurs in the game, and my question is: how do I make such a custom function? How are they made? I want some class to have this kind of function that I don't need to call myself - it's called by itself when something occurs (I guess it has something to do with inheritance) - and you can define it or not, but it still exists behind the scenes. Maybe an example will make it clearer:

public class RandomClass : MonoBehaviour
{
    private void OnEnable() // It's called when the game object is enabled
    {
        // You can define it if you need it, or delete the function, but it's still available all the time
    }

    private void OnSomethingHappened() // It's called when something defined by me happens
    {
        // I want this to be definable if I need it (or deletable if I don't), but still available all the time
    }
}

So let's say I have a special button in the scene. Whenever I press it, I want to call the function OnSpecialButtonPress() on every class that implements it. It would look like so:

public class UseButtonClass : MonoBehaviour
{
    // I want to know when the button was pressed
    void OnSpecialButtonPress()
    {
        print("Yay I pressed it!!");
    }
}

public class DontUseButtonClass : MonoBehaviour
{
    // I don't care about the button so I don't implement the OnSpecialButtonPress() function,
    // but I could if I wanted
}

I hope you got what I'm trying to accomplish.

Answer by Kciwsolb · Apr 25, 2018 at 06:13 PM

You get the option to implement those methods because your scripts inherit from MonoBehaviour. If you wanted to make something similar, you would have to make all of your classes inherit from another class that contains those methods. I doubt this is what you want.
More than likely you are interested in either Unity Events or C# delegates and events.
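To make the answer concrete, here is a minimal sketch (not from the original thread) of the C# events approach it points to. The SpecialButton class and handler names are invented for illustration, and the code assumes the UnityEngine API, so it only runs inside a Unity project:

```csharp
using System;
using UnityEngine;

// Hypothetical button class that owns the event.
public class SpecialButton : MonoBehaviour
{
    // Any number of listeners can subscribe to this.
    public static event Action OnSpecialButtonPress;

    // Call this from your input/UI code when the button is pressed.
    public void Press()
    {
        // The ?. guard means it is fine if nobody subscribed.
        OnSpecialButtonPress?.Invoke();
    }
}

// A class that cares about the button subscribes in OnEnable
// and unsubscribes in OnDisable.
public class UseButtonClass : MonoBehaviour
{
    private void OnEnable()  { SpecialButton.OnSpecialButtonPress += HandlePress; }
    private void OnDisable() { SpecialButton.OnSpecialButtonPress -= HandlePress; }

    private void HandlePress()
    {
        print("Yay I pressed it!!");
    }
}

// A class that does not care simply never subscribes.
public class DontUseButtonClass : MonoBehaviour { }
```

Unlike Unity's built-in messages (OnEnable and friends, which the engine invokes for you), a plain C# event requires an explicit subscription, which is why the subscribe/unsubscribe pair lives in OnEnable/OnDisable.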
https://answers.unity.com/questions/1498674/how-to-make-a-function-like-onenable-that-fire-by.html
CC-MAIN-2021-49
refinedweb
413
63.12
In Chrome 72, we've added support for:

- Public class fields, which make JavaScript class definitions much cleaner.
- The new User Activation API, which lets you see whether a page has been activated.
- The Intl.ListFormat API, which makes localizing lists much easier.

And there's plenty more! I'm Pete LePage. Let's dive in and see what's new for developers in Chrome 72!

Change log

This covers only some of the key highlights; check the links below for additional changes in Chrome 72.

- Chromium source repository change list
- ChromeStatus.com updates for Chrome 72
- Chrome 72 deprecations & removals

Public class fields

My first language was Java, and learning JavaScript threw me for a bit of a loop. How did I create a class? Or inheritance? What about public and private properties and methods? Many of the recent updates to JavaScript make object-oriented programming much easier. I can now create classes that work like I expect them to, complete with constructors, getters and setters, static methods, and public properties. Thanks to V8 7.2, which ships with Chrome 72, you can now declare public class fields directly in the class definition, eliminating the need to do it in the constructor.

class Counter {
  _value = 0;
  get value() {
    return this._value;
  }
  increment() {
    this._value++;
  }
}

const counter = new Counter();
console.log(counter.value); // → 0
counter.increment();
console.log(counter.value); // → 1

Support for private class fields is in the works! See Mathias's article on class fields for more details.

User Activation API

Remember when sites could automatically play sound as soon as the page loaded? You scramble to hit the mute key, or figure out which tab it was, and close it. That's why some APIs require activation via a user gesture before they'll work. Unfortunately, browsers handle activation in different ways. Chrome 72 introduces User Activation v2, which simplifies user activation for all gated APIs.
It's based on a new specification that aims to standardize how activation works across all browsers. There's a new userActivation property on both navigator and MessageEvent, which has two properties: hasBeenActive and isActive:

- hasBeenActive indicates whether the associated window has ever seen a user activation in its lifecycle.
- isActive indicates whether the associated window currently has a user activation in its lifecycle.

More details are in Making user activation consistent across APIs.

Localizing lists of things with Intl.ListFormat

I love the Intl APIs; they're super helpful for localizing content into other languages! In Chrome 72, there's a new Intl.ListFormat API whose format() method makes rendering lists easier. Like other Intl APIs, it shifts the burden to the JavaScript engine, without sacrificing performance. Initialize it with the locale you want, then call format, and it'll use the correct words and syntax. It can do conjunctions, adding the localized equivalent of "and" (and look at those beautiful Oxford commas). It can do disjunctions, adding the local equivalent of "or". And by providing some additional options, you can do even more.

const opts = {type: 'disjunction'};
const lf = new Intl.ListFormat('fr', opts);
lf.format(['chien', 'chat', 'oiseau']);
// → 'chien, chat ou oiseau'
lf.format(['chien', 'chat', 'oiseau', 'lapin']);
// → 'chien, chat, oiseau ou lapin'

And more!

These are just a few of the changes in Chrome 72 for developers; of course, there's plenty more.

- Chrome 72 changes the behavior of Cache.addAll() to better match the spec. Previously, if there were duplicate entries in the same call, later requests would simply overwrite the first. To match the spec, if there are duplicate entries, it will reject with an InvalidStateError.
- Requests for favicons are now handled by the service worker, as long as the request URL is on the same origin as the service worker.

When Chrome 73 is released, I'll be right here to tell you what's new in Chrome!
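To complement the French disjunction example above, here is what a conjunction looks like in English; it runs in any engine with ICU list-format data (Chrome 72+, or a recent Node.js), and the vehicle names are just sample data:

```javascript
// Long-style conjunction: inserts the localized "and" (with Oxford comma in en).
const lf = new Intl.ListFormat('en', { style: 'long', type: 'conjunction' });

const formatted = lf.format(['Motorcycle', 'Bus', 'Car']);
console.log(formatted); // → 'Motorcycle, Bus, and Car'
```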
https://developers.google.com/web/updates/2019/01/nic72?hl=zh-cn
CC-MAIN-2019-35
refinedweb
660
58.89
On Aug 24, 2007, at 9:34 AM, Jonathan Gallimore wrote:

> Thanks for the feedback. I'm still using EJB 2, and haven't used
> EJB 3 before. My new EJB 3 book is on its way from Amazon.

Great. I like the O'Reilly one. Though I haven't really read the book since Richard retired from book authoring.

> It sounds like what I've done for generating a deployment
> descriptor for EJB 2 is unlikely to be useful to people using EJB
> 3, and perhaps any kind of deployment descriptor generator is
> unnecessary to those people?

Pretty much. A bit more detail below.

> I'm sure I can convince people at work that we need to move to EJB
> 3, if that's what's recommended.

The great thing about ejb3 is that for quite a lot of it it's subtractive, so you're not really moving but re-moving. Any app becomes an EJB 3.0 app the second you delete the namespace in the ejb-jar.xml that declares it is 2.1 (or 2.0, etc). It will be assumed to be an EJB 3.0 app and from there you can keep deleting. The more annotations you add, the more descriptor you can delete. It's a pretty direct conversion for Session beans and MessageDriven Beans. Let me give you the crib notes:

@Stateless or @Stateful on a class replaces the descriptor tags <session>, <ejb-name>, <session-type>Stateless</session-type>, and <session-type>Stateful</session-type>.

@RemoteHome or @LocalHome is used on a class to replace <home> or <local-home>. We inspect the actual home or local-home interface to determine the related <remote> or <local> interface.

@TransactionManagement on a class replaces <transaction-type>.

@TransactionAttribute on a class and/or method replaces <container-transaction>. You'll need one @TransactionAttribute for each <container-transaction>/<method> xml element. A <method-name>*</method-name> would turn into a @TransactionAttribute on a class; everything else would go on methods.

@EJB used on a class replaces <ejb-ref> and <ejb-local-ref>.
@EJB used on a field or setter method (private or public) requests injection of that EJB as well. When @EJB is used on a class, the "beanInterface" attribute is required. When used on a field or setter method, the "beanInterface" attribute defaults to the type of the field or setter method.

@Resource used on a class replaces <env-entry>, <resource-ref>, and <resource-env-ref>. @Resource used on a field or method (private or public) requests injection of that resource. When @Resource is used on a class, the "type" attribute is required. When used on a field or method, the "type" attribute defaults to the type of the field or setter method.

There are more annotations obviously, but these are fairly simple and get rid of a ton of xml.

> I'm still planning on doing more work on the XDoclet plugin, just
> hoping for some spare time when the day job isn't getting in the
> way ;-)

If you're interested in working on a truly cool tool, help us write the anti-XDoclet :)

-David
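Putting a few of those crib notes together, a 2.1 session bean plus its descriptor might collapse into something like the sketch below. This is illustrative only: the class, interface, and resource names are invented, the referenced OrderService and PricingService interfaces are not shown, and it assumes the javax.ejb / javax.annotation APIs from the Java EE 5 jars are on the classpath:

```java
import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Replaces <session>, <ejb-name> and <session-type>Stateless</session-type>.
@Stateless
public class OrderBean implements OrderService {

    // Replaces an <ejb-ref>/<ejb-local-ref> and requests injection;
    // beanInterface defaults to the field's type (PricingService).
    @EJB
    private PricingService pricing;

    // Replaces an <env-entry>; "type" defaults to the field type here.
    @Resource(name = "maxItems")
    private int maxItems;

    // Replaces a <container-transaction>/<method> element for this method.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void placeOrder(String item) {
        // business logic would go here
    }
}
```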
http://mail-archives.apache.org/mod_mbox/geronimo-user/200708.mbox/%3CCB27B3CE-C6CE-441E-A2FC-D15A51C2AF8F@visi.com%3E
CC-MAIN-2019-39
refinedweb
526
65.32
A wrapper around a blob in a net. BlobReference gives us a way to refer to the network that the blob is generated from. Note that blobs are, essentially, just strings in the current workspace.

Definition at line 179 of file core.py.

Initializes a blob reference. Note that this does not prepend the namescope. If needed, use ScopedBlobReference() to prepend the existing namespace.

Definition at line 187 of file core.py.

A wrapper allowing one to initiate operators from a blob reference. Example: for a blob reference b that comes from network n, doing b.Relu(...) is equivalent to doing net.Relu([b], ...)

Definition at line 256 of file core.py.
https://caffe2.ai/doxygen-python/html/classcaffe2_1_1python_1_1core_1_1_blob_reference.html
CC-MAIN-2019-35
refinedweb
113
70.9
You will learn a little about Java from this example, but keep in mind that its main objective is to take you through the steps of creating and running a program so that you know everything is working and what to do.

If you click the Design tab you can use the Palette (top right) to place user interface objects. If you want to see what your user interface looks like when the program is run, before you have added any code, you can click the Preview icon.

To make this our first program in Java we really do have to enter some Java instructions. Double click on the button and you will automatically generate a click event handler - i.e. a block of Java code that is obeyed in response to clicking the button. The code is generated in among a lot of other generated code - try not to be too worried about the rest of the generated code and try not to accidentally change it.

To make something happen when the button is clicked we can make use of the JOptionPane object, which has a range of methods that can be used to make message boxes pop up. There are lots of objects supplied by the Java framework and they all have names like JOptionPane that usually only make sense when you know what the object is for. Before we can use JOptionPane we have to tell the compiler that we are intending to use it, and you need to add:

import javax.swing.JOptionPane;

to the start of the file. To do this click the Source tab - if you can't already see the code - and scroll to the very top of the file. Type or copy and paste the line in before the line starting public class NewJFrame.
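Pulling the pieces together, a stripped-down, hand-written version of what the IDE generates might look like the following; the real generated NewJFrame code is much longer, and this sketch exists only to connect the import, the button, and the click handler. It opens a window, so it will not run in a headless environment:

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.SwingUtilities;

public class NewJFrame extends JFrame {

    public NewJFrame() {
        super("My first Java program");
        JButton button = new JButton("Click me");
        // The click event handler: runs in response to clicking the button.
        button.addActionListener(e ->
                JOptionPane.showMessageDialog(this, "Hello from Java!"));
        add(button);
        pack();
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }

    public static void main(String[] args) {
        // Swing components should be created on the event dispatch thread.
        SwingUtilities.invokeLater(() -> new NewJFrame().setVisible(true));
    }
}
```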
http://i-programmer.info/ebooks/modern-java/1376-getting-started-with-java.html?start=1
CC-MAIN-2017-04
refinedweb
310
74.73
Package vfs

Package vfs defines types for abstract file system access and provides an implementation accessing the file system of the underlying OS.

Package files: emptyvfs.go, namespace.go, os.go

func ReadFile ¶

func ReadFile(fs Opener, path string) ([]byte, error)

ReadFile reads the file named by path from fs and returns the contents.

type BindMode ¶

type BindMode int

const (
    BindReplace BindMode = iota
    BindBefore
    BindAfter
)

type FileSystem ¶

The FileSystem interface specifies the methods godoc is using to access the file system for which it serves documentation.

type FileSystem interface {
    Opener
    Lstat(path string) (os.FileInfo, error)
    Stat(path string) (os.FileInfo, error)
    ReadDir(path string) ([]os.FileInfo, error)
    String() string
}

func OS ¶

func OS(root string) FileSystem

OS returns an implementation of FileSystem reading from the tree rooted at root. Recording a root is convenient everywhere but necessary on Windows, because the slash-separated path passed to Open has no way to specify a drive letter. Using a root lets code refer to OS(`c:\`), OS(`d:\`) and so on.

type NameSpace ¶

A NameSpace is a file system made up of other file systems mounted at specific locations in the name space. The representation is a map from mount point locations to the list of file systems mounted at that location. A traditional Unix mount table would use a single file system per mount point, but we want to be able to mount multiple file systems on a single mount point and have the system behave as if the union of those file systems were present at the mount point.
For example, if the OS file system has a Go installation in c:\Go and additional Go path trees in d:\Work1 and d:\Work2, then this name space creates the view we want for the godoc server:

NameSpace{
    "/": {
        {old: "/", fs: OS(`c:\Go`), new: "/"},
    },
    "/src/pkg": {
        {old: "/src/pkg", fs: OS(`c:\Go`), new: "/src/pkg"},
        {old: "/src/pkg", fs: OS(`d:\Work1`), new: "/src"},
        {old: "/src/pkg", fs: OS(`d:\Work2`), new: "/src"},
    },
}

This is created by executing:

ns := NameSpace{}
ns.Bind("/", OS(`c:\Go`), "/", BindReplace)
ns.Bind("/src/pkg", OS(`d:\Work1`), "/src", BindAfter)
ns.Bind("/src/pkg", OS(`d:\Work2`), "/src", BindAfter)

A particular mount point entry is a triple (old, fs, new), meaning that to operate on a path beginning with old, replace that prefix (old) with new and then pass that path to the FileSystem implementation fs.

If you do not explicitly mount a FileSystem at the root mountpoint "/" of the NameSpace like above, Stat("/") will return a "not found" error which could break typical directory traversal routines. In such cases, use NewNameSpace() to get a NameSpace pre-initialized with an emulated empty directory at root.

Given this name space, a ReadDir of /src/pkg/code will check each prefix of the path for a mount point (first /src/pkg/code, then /src/pkg, then /src, then /), stopping when it finds one. For the above example, /src/pkg/code will find the mount point at /src/pkg:

{old: "/src/pkg", fs: OS(`c:\Go`), new: "/src/pkg"},
{old: "/src/pkg", fs: OS(`d:\Work1`), new: "/src"},
{old: "/src/pkg", fs: OS(`d:\Work2`), new: "/src"},

ReadDir will then execute these three calls and merge the results:

OS(`c:\Go`).ReadDir("/src/pkg/code")
OS(`d:\Work1`).ReadDir("/src/code")
OS(`d:\Work2`).ReadDir("/src/code")

Note that the "/src/pkg" in "/src/pkg/code" has been replaced by just "/src" in the final two calls. OS is itself an implementation of a file system: it implements OS(`c:\Go`).ReadDir("/src/pkg/code") as ioutil.ReadDir(`c:\Go\src\pkg\code`).
Because the new path is evaluated by fs (here OS(root)), another way to read the mount table is to mentally combine fs+new, so that this table: {old: "/src/pkg", fs: OS(`c:\Go`), new: "/src/pkg"}, {old: "/src/pkg", fs: OS(`d:\Work1`), new: "/src"}, {old: "/src/pkg", fs: OS(`d:\Work2`), new: "/src"}, reads as: "/src/pkg" -> c:\Go\src\pkg "/src/pkg" -> d:\Work1\src "/src/pkg" -> d:\Work2\src An invariant (a redundancy) of the name space representation is that ns[mtpt][i].old is always equal to mtpt (in the example, ns["/src/pkg"]'s mount table entries always have old == "/src/pkg"). The 'old' field is useful to callers, because they receive just a []mountedFS and not any other indication of which mount point was found. type NameSpace map[string][]mountedFS func NewNameSpace ¶ func NewNameSpace() NameSpace NewNameSpace returns a NameSpace pre-initialized with an empty emulated directory mounted on the root mount point "/". This allows directory traversal routines to work properly even if a folder is not explicitly mounted at root by the user. func (NameSpace) Bind ¶ func (ns NameSpace) Bind(old string, newfs FileSystem, new string, mode BindMode) Bind causes references to old to redirect to the path new in newfs. If mode is BindReplace, old redirections are discarded. If mode is BindBefore, this redirection takes priority over existing ones, but earlier ones are still consulted for paths that do not exist in newfs. If mode is BindAfter, this redirection happens only after existing ones have been tried and failed. func (NameSpace) Fprint ¶ func (ns NameSpace) Fprint(w io.Writer) Fprint writes a text representation of the name space to w. func (NameSpace) Lstat ¶ func (ns NameSpace) Lstat(path string) (os.FileInfo, error) func (NameSpace) Open ¶ func (ns NameSpace) Open(path string) (ReadSeekCloser, error) Open implements the FileSystem Open method. 
func (NameSpace) ReadDir ¶ func (ns NameSpace) ReadDir(path string) ([]os.FileInfo, error) ReadDir implements the FileSystem ReadDir method. It's where most of the magic is. (The rest is in resolve.) Logically, ReadDir must return the union of all the directories that are named by path. In order to avoid misinterpreting Go packages, of all the directories that contain Go source code, we only include the files from the first, but we include subdirectories from all. ReadDir must also return directory entries needed to reach mount points. If the name space looks like the example in the type NameSpace comment, but c:\Go does not have a src/pkg subdirectory, we still want to be able to find that subdirectory, because we've mounted d:\Work1 and d:\Work2 there. So if we don't see "src" in the directory listing for c:\Go, we add an entry for it before returning. func (NameSpace) Stat ¶ func (ns NameSpace) Stat(path string) (os.FileInfo, error) func (NameSpace) String ¶ func (NameSpace) String() string type Opener ¶ Opener is a minimal virtual filesystem that can only open regular files. type Opener interface { Open(name string) (ReadSeekCloser, error) } type ReadSeekCloser ¶ A ReadSeekCloser can Read, Seek, and Close. type ReadSeekCloser interface { io.Reader io.Seeker io.Closer }
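The mount-point lookup described above (check each prefix of the path, longest first, then rewrite the old prefix to new for every filesystem mounted there) is easy to model outside Go. The sketch below is in Python purely for illustration, with invented helper names; it mirrors the resolution rule in the docs, not the real package's internals:

```python
# Mount table modeled after the NameSpace example in the docs:
# mount point -> list of (old, fs_root, new) triples.
NAMESPACE = {
    "/": [("/", r"c:\Go", "/")],
    "/src/pkg": [
        ("/src/pkg", r"c:\Go", "/src/pkg"),
        ("/src/pkg", r"d:\Work1", "/src"),
        ("/src/pkg", r"d:\Work2", "/src"),
    ],
}

def resolve(path):
    """Return the per-filesystem paths to consult for `path`.

    Walks prefixes of `path` from longest to shortest until one is a
    mount point, then replaces the `old` prefix with `new` for each
    mounted filesystem, mirroring the ReadDir description above.
    """
    prefix = path
    while True:
        if prefix in NAMESPACE:
            return [(fs_root, new + path[len(old):])
                    for old, fs_root, new in NAMESPACE[prefix]]
        if prefix == "/":
            raise FileNotFoundError(path)
        prefix = prefix.rsplit("/", 1)[0] or "/"

print(resolve("/src/pkg/code"))
```

For "/src/pkg/code" this yields one path per mounted filesystem, with "/src/pkg" rewritten to "/src" for the two Work trees, exactly as in the three ReadDir calls shown above.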
http://docs.activestate.com/activego/1.8/pkg/golang.org/x/tools/godoc/vfs/index.html
CC-MAIN-2019-09
refinedweb
1,130
51.48
Hi, I use ReSharper 8.2.3 Full Edition with Visual Studio Ultimate 2013 Update 4 and .NET 4.5.

The background is that I have a C# WPF solution with a large number of projects. To simplify the description here, I have the following projects:

Project 1: Library with UI resources (styles, fonts, etc.), including a resource dictionary named ResourceLibrary.xaml.

Project 2: Main exe with some WPF user controls/views and an app.xaml. The WPF user controls do NOT refer directly to ResourceLibrary.xaml in Project 1. IntelliSense works fine and ReSharper does NOT generate any warnings (blue squiggly lines).

Project 3: Library with some WPF user controls/views. All these user controls refer to ResourceLibrary.xaml:

<UserControl.Resources>
    <ResourceDictionary Source="/<my_resource_namespace>;component/ResourceLibrary.xaml" />
</UserControl.Resources>

IntelliSense works fine and ReSharper does NOT generate any warnings (blue squiggly lines).

The solution builds fine, IntelliSense works (finds the static resources in ResourceLibrary.xaml), and ReSharper does not generate any warnings. However, if the above lines are removed, IntelliSense does not work and ReSharper generates the warning "Resource XXX is not found". Note that this is NOT the case with the XAML files in Project 2 that also use resources in the resource library. The solution builds fine.

I want to avoid including the resource library in every XAML file for performance reasons, but I do want ReSharper to give me relevant warnings. Is there a workaround for this problem? I know I can turn the "Resource XXX is not found" warnings off in ReSharper, but that would prevent me from detecting real resource problems.
https://resharper-support.jetbrains.com/hc/en-us/community/posts/206652995-WPF-Resource-is-not-found-by-ReSharper-in-design-time-but-solution-builds
CC-MAIN-2020-16
refinedweb
273
58.89
Hi guys! I thought I'd try and learn a bit of Python to keep myself occupied, and I was wondering if you could give me some constructive criticism on a word guessing game I have created. I'm an amateur programmer and I'm always looking at improving whenever and wherever I can. You can see the code below.

import random

#global
LIVES = 5
CORRECT = 0
GUESSED = []

#main is first function to be called, which intiates everything.
def main():
    startGame()
    difficulty = input()
    playGame(difficulty)

#choice for options
def startGame():
    print ("So you would like to play a word game? \n")
    print ("Please choose a difficulty")
    print ("0: Easy")
    print ("1: Medium")
    print ("2: Hard")

#which actually initiates the game fully, bringing in the right difficulty, then running the game.
def playGame(difficulty):
    global LIVES
    global CORRECT
    global GUESSED
    if(difficulty == 0):
        selectedWord = ["dog", "cat", "cow", "eat", "mut", "die", "cry", "buy", "why"]
    elif(difficulty == 1):
        selectedWord = ["joker", "poker", "pride", "cried", "lied"]
    elif(difficulty == 2):
        selectedWord = ["medium", "elephant", "cheeses", "breeder", "sugar", "python"]
    print("Easy? Really? You Wimp!\n")
    length = random.choice(selectedWord)
    print("The length of your word is"), len(length)
    print length #in for testing purposes.
    while(LIVES != 0 and CORRECT != 1):
        word_update(length, GUESSED)
        if(CORRECT == 1):
            break
        print("Press 1 to enter a letter, or 2 to guess the full word")
        enternw = input()
        guess(enternw, length)
    replayGame(length)

#replay the game option.
def replayGame(word):
    global LIVES
    if(LIVES == 0):
        print ("GAME OVER, You ran out of LIVES \n The word was"), word
        print ("Would you like to replay?, Y for yes, N for no")
        replay = raw_input()
        if(replay == "y" or replay == "Y"):
            LIVES = 5
            main()
        elif(replay == "n" or replay == "N"):
            print ("Thank you for playing")
        else: ("Unknown Option. Goodbye")

#determines whether to enter letter or word, then runs and controls lives etc.
def guess(enternw, length):
    global LIVES
    global CORRECT
    global GUESSED
    if(enternw == 1):
        print("You chose to select a letter, please selecter a letter")
        letter = raw_input()
        if (letter in GUESSED):
            print("You already guessed"), letter
        elif(letter in length):
            print "Found", letter
            print length.find(letter)
            GUESSED.append(letter)
        else:
            print "Not Found"
            LIVES = LIVES - 1
            print LIVES, ("Lives Remain")
            GUESSED.append(letter)
    if(enternw == 2):
        print("You chose to guess a word, please guess the word.")
        guessedWord = raw_input()
        if(guessedWord == length):
            print("Correct, the word was"), length
            CORRECT = 1
        else:
            print("Sorry, wrong word, keep trying!")
            LIVES = LIVES - 1
            print LIVES, ("Lives Remain")

#courtesy of
def word_update(word, letters_guessed):
    global CORRECT
    masked_word = ""
    for letter in word:
        if letter in letters_guessed:
            masked_word += letter
        else:
            masked_word += "-"
    print "The word:", masked_word
    if(word == masked_word):
        print("You guessed correct, well done!")
        CORRECT = 1

main()
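One concrete piece of criticism that applies here: the global LIVES/CORRECT/GUESSED flags make the flow hard to follow, and replaying by calling main() recursively grows the call stack. A minimal sketch (my own suggestion, not from the thread) of the same state wrapped in a class:

```python
import random

class WordGame:
    """Holds the state the original keeps in module-level globals."""

    WORDS = {0: ["dog", "cat", "cow"],
             1: ["joker", "poker", "pride"],
             2: ["elephant", "breeder", "python"]}

    def __init__(self, difficulty=0, word=None):
        # Allowing a fixed word makes the class testable without mocking random.
        self.word = word if word is not None else random.choice(self.WORDS[difficulty])
        self.lives = 5
        self.guessed = set()

    def masked(self):
        # The dashed display the original builds in word_update().
        return "".join(c if c in self.guessed else "-" for c in self.word)

    def guess_letter(self, letter):
        if letter in self.guessed:
            return "repeat"          # no life lost, matching the original
        self.guessed.add(letter)
        if letter in self.word:
            return "hit"
        self.lives -= 1
        return "miss"

    def won(self):
        return self.masked() == self.word
```

The UI loop then becomes a thin layer of prints and input() calls around this object, and replaying is just constructing a new WordGame instead of calling main() from inside replayGame().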
https://www.daniweb.com/software-development/python/threads/445008/new-to-python-criticism-required
CC-MAIN-2015-22
refinedweb
449
61.06
Note, it would be helpful if you could copy me on the response to this. I am following the dev list, but am not subscribed to it at the moment, so it is difficult to respond on the thread unless I get the e-mail directly.

I can't speak for Anders, but it does sound like he is interested in addressing the same issue I am trying to face currently. Hopefully he will express more of his thoughts later, but here is my shot at answering your questions:

> Ceki says:
> I still don't see the *practical* use of having the renderer depend on
> the layout or appender. Would it be possible for you to describe a
> practical case where such functionality would be useful.

I wouldn't necessarily say the 'ObjectRenderer(s)' should depend on 'Layout(s)' or 'Appender(s)', but rather that there can exist multiple 'ObjectRenderer' sets (referred to as 'ObjectRendererBundle' from here on out), and that the log4j configuration should include the ability to specify, per 'Layout', an 'ObjectRendererBundle' that the 'Layout' will use. The 'ObjectRendererBundle' is independent of the 'Layout' and can be associated with multiple 'Layout(s)' in the same sense that a 'Layout' is independent of an 'Appender' and can be associated with multiple 'Appender(s)'.

In the architecture, it appears that 'Layout(s)' are responsible for formatting log data that is sent to the 'Appender', but 'ObjectRenderer(s)' are also responsible for part of this formatting process. I am running a few cases where I have multiple 'Layout(s)' and each of these 'Layout(s)' needs to coordinate with the 'ObjectRenderer(s)' to format text correctly. In other words, the result each of these 'Layout(s)' needs from the 'ObjectRenderer' is different for the same object, and there does not seem to be a way to address this in log4j at present.

I will try to give a practical example here. You will have to let me know if this helps, or if there are things I need to clarify. Let's say we have an FTP server written in Java.
We include the capability to log FTP commands that the FTP server receives. The FTP command info is stored in an instance of the following class:

public class FTPCommandInfo {
    public String m_fromIP;   // FTP client IP
    public String m_toIP;     // FTP server IP
    public int m_port;        // FTP port
    public String m_username; // User for connection
    public String m_password; // Password for connection
    public String m_command;  // The FTP command that was issued
    public String m_filePath; // The file
    public String m_result;   // The FTP server result
    ...
}

In some FTP server class we have some code like the following:

Logger log = ...;
...
FTPCommandInfo ftpInfo = ...; // The FTP command info gets populated in ftpInfo
...
log.info(ftpInfo);
...

The logging occurs for each FTP command that gets called. In configuring log4j, we decide we want a few different 'Appender(s)' to listen to this 'Logger', with the ability to run them all at the same time.

We would like the 'Layout' associated with a first 'Appender' to return its result as a comma separated list of all the info stored in the 'FTPCommandInfo' object:

<FTP client IP>,<FTP Server IP>,<FTP port> ... <FTP Password> ... <FTP result>\r\n

The 'Layout' for the second 'Appender' should return results in an XML format such as the following:

<FTPCommand client_ip="<FTP client IP>" server_ip="<FTP server IP>" ...

The 'Layout' for the third 'Appender' should return a comma separated list similar to the first 'Layout'. But in this case we do not want to include the 'm_password' value in the log file, because the log is made public and we do not want to expose passwords. So the format might be exactly like the first case, but without the passwords included.

It is the job of the 'Layout' to format the text. The 'Layout(s)' supposedly are fairly generic and delegate rendering of object data to the 'ObjectRenderer(s)'. This is very handy since the 'Layout' does not have to understand each object that comes in.
Making an 'ObjectRenderer' to hand back a correct 'String' to the 'Layout' in any one of these cases is fairly trivial. It is fairly trivial to make an 'ObjectRenderer' that returns ftp info in a comma separated list, or one that returns the ftp info as a set of xml attributes, or even one that excludes certain info the loggable object contains that should not be displayed. However, there does not seem to be a way for these 'ObjectRenderer(s)' to coexist in log4j at the same time, each associated with its respective 'Layout(s)'.

A workaround is to ignore 'ObjectRenderer(s)' and add the object-rendering capabilities to the 'Layout' directly, but this is not ideal since it circumvents a part of the architecture and does not allow for an extensible way to render new objects. It would really be much better to be able to use the 'ObjectRenderer' architecture, but there is only one unchangeable 'ObjectRenderer' set, so I have to deal with a workaround.

I think it is justifiable that you may want multiple log files for different purposes (we run into that fairly often), and it is quite possible that each of the log files may need to render the object data in different formats. The example above is just one case. The best way I can think of to address this is to allow 'ObjectRendererBundle(s)' that let you render objects differently in different 'Appender(s)' (or more correctly, in different 'Layout(s)'). In the case that you only use a default 'ObjectRendererBundle', the architecture reduces to the current one. I expect that this change would have minor impact on the API overall, but would allow more flexibility as necessary.

Is this a practical enough example? It is similar to a problem I am currently trying to address, and I hope my explanation did it justice. I am feeling the limitations of the current 'ObjectRenderer' configuration options. I need to have a way to render the same object in multiple ways within the same program.
I believe that the changes I have suggested would help make log4j fundamentally more extensible. It is not a huge leap from where things are at right now and seems to be a logical extension.

Log4j really is an excellent product. I hope that you will consider the changes I have suggested because I think it would help make log4j more extensible to meet a broader set of needs in a clean and clear fashion. Please let me know if there is anything I can clarify, or that does not seem entirely sound in the discussion above. Thanks again for an excellent product.

-Lance Larsen

> This might be another case where Anders is a few steps ahead of me. It
> has happened in the past... so I am all ears.
> TIA, Ceki
> ps: Would it be sufficient if Appender and layouts accepted configuration
> directives for elements unknown at compile time? For example, appenders
> are known to contain a layout element and there is code to handle layout
> directives within an appender. The idea is to support unknown element
> types... such as object renderers.

----- Original Message -----
From: Lance Larsen
To: log4j-dev@jakarta.apache.org
Sent: Thursday, February 07, 2002 11:54 AM
Subject: Making object rendering more extensible

I have looked through the dev list to see if this has come up, but I haven't seen anything so far. I have been using log4j on a few projects, and like the configurability and extensibility it provides - very nice - thank you for the excellent tool. However, there seems to be one part of the architecture where I have run into limitations, and would like to submit a feature request.

Log4j provides a mechanism to register 'ObjectRenderer(s)' in the configuration. This is very handy since it allows you to log various types of objects that log4j would not natively understand by simply creating a new 'ObjectRenderer' class and including this in the log4j configuration (no other changes). This part is very nice.
The problem I have run into is cases where I would like the set of 'ObjectRenderer(s)' to be different for the various 'Layout(s)' used at the same time. There does not appear to be a way to do this. The current assumption seems to be that there is one application-global, fundamental 'String' mapping for each loggable object type. There seem to be many cases where you may want an object to be rendered differently in different contexts. Log4j gives you this flexibility in the relationship of 'Appender(s)' to 'Layout(s)', but for some reason did not extend this to the relationship between 'Layout(s)' and 'ObjectRenderer(s)'.

One case where having different object renderers is useful is where there are several things you can pull out of an object. One 'Appender' (or more correctly 'Formatter') may log part of the info and another 'Appender' may log different info from the same object. In another case, two 'Layout(s)' may need the same info, but the string format may need to be different for each. I do not see a clean way to currently handle either of these cases.

My suggestion would be to add a new concept of 'ObjectRendererBundle(s)', where a bundle includes a set of 'ObjectRenderer(s)' that a 'Layout' will use. In the configuration, the 'Layout' can be assigned an 'ObjectRendererBundle' in a similar way that an 'Appender' is assigned a 'Layout'. The 'Layout' class could include a couple of new methods, something like 'setObjectRendererBundle' and 'getObjectRendererBundle', to access the set of object renderers for the 'Layout'. There would probably be a default 'ObjectRendererBundle' that was global, as in the current case, but the 'ObjectRenderer' model would be more extensible.

Are there any other thoughts or comments on this? Does this seem like a reasonable approach? Is this something that you are interested in including in 'log4j'?
-Lance Larsen -- To unsubscribe, e-mail: <mailto:log4j-dev-unsubscribe@jakarta.apache.org> For additional commands, e-mail: <mailto:log4j-dev-help@jakarta.apache.org>
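The proposal is straightforward to prototype outside log4j itself. The sketch below (all names invented, plain Java, no log4j dependency) shows two renderer bundles producing different strings for the same object, which is the capability the mail is asking for:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class RendererBundleDemo {

    // Stand-in for the FTPCommandInfo example from the mail.
    static class FtpInfo {
        final String clientIp;
        final String command;
        final String password;

        FtpInfo(String clientIp, String command, String password) {
            this.clientIp = clientIp;
            this.command = command;
            this.password = password;
        }
    }

    // An "ObjectRendererBundle": a per-layout map from class to renderer.
    static class RendererBundle {
        private final Map<Class<?>, Function<Object, String>> renderers = new HashMap<>();

        <T> void register(Class<T> type, Function<T, String> r) {
            renderers.put(type, o -> r.apply(type.cast(o)));
        }

        String render(Object o) {
            Function<Object, String> r = renderers.get(o.getClass());
            return r != null ? r.apply(o) : String.valueOf(o);
        }
    }

    static String demo() {
        FtpInfo info = new FtpInfo("1.2.3.4", "RETR", "secret");

        // Bundle for the full comma-separated layout.
        RendererBundle full = new RendererBundle();
        full.register(FtpInfo.class, i -> i.clientIp + "," + i.command + "," + i.password);

        // Bundle for the public log: same object, password omitted.
        RendererBundle redacted = new RendererBundle();
        redacted.register(FtpInfo.class, i -> i.clientIp + "," + i.command);

        return full.render(info) + " | " + redacted.render(info);
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Each Layout would hold its own RendererBundle, and a shared default bundle would reproduce log4j's existing single-global-renderer-set behavior.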
http://mail-archives.us.apache.org/mod_mbox/logging-log4j-dev/200202.mbox/%3C006601c1b3fb$8058f000$2050b0a3@netopia.com%3E
Created on 2011-12-11 19:38 by roger.serwy, last changed 2014-02-26 21:56 by terry.reedy.

"terminate abruptly"? I thought that print(file=None) silently returned, without printing but without an error. A delayed popup to display (otherwise discarded) output is a nice feature, though.

You can try triggering bug #8900 quite simply. From a shell or an editor, press Ctrl+N and then Ctrl+O. Open a file and watch IDLE terminate abruptly. Also, see #12274. If you want to play with this problem further, try adding a "raise Exception" to a constructor for one of the extensions. Running IDLE using python.exe will display a traceback in the console, but IDLE keeps running. However, IDLE won't even bring up a window when using pythonw.exe; it just fails to launch from the user's perspective. Here is a list of open issues that describe IDLE suddenly crashing, which can be traced back to pythonw.exe: #4765, #5707, #6257, #6739, #9404, #9925, #10365, #11437, #12274, #12988, #13052, #13071, #13078, #13153. This patch does not fix these errors, but at least these errors will no longer be fatal. This patch also "future-proofs" against yet to be discovered (or yet to be introduced) bugs in IDLE.

IDLE.app on OS X also has the issue of stderr messages not being presented to users unless you know to look in the system log where they get written. So writing to stderr is not fatal, but displaying the messages in a popup would be an improvement.

I don't have a Mac to test against. Is there anything I need to do to improve the patch? Todd, do you have a Mac to test this on?

This patch treats sending messages to a widget as a backup option. In #18318 I propose we make Idle a true gui app, with all messages other than 'No tkinter' handled by the gui. Console writing, when available, would then be a secondary option for those who want it. But in the meanwhile, I would like this or something like it applied.

Yes, I have a Mac and I am glad to help, so I gave it a test run tonight.
The first thing I did was apply the patch, then I ran idle from the console like so:

./python.exe Lib/idlelib/idle.py

For testing I used a simple write to stderr:

sys.stderr.write("spam/n")

which gave me the output spam/n6. I also tried to use the newer print function:

print("fatal error", file=sys.stderr)

Python 3.4 on the Mac behaved exactly the same way with or without the patch. I got the stderr output in the Python shell and nothing appeared in the console. With the patch applied I saw no dialog box to capture the stderr output. Maybe I didn't perform the test correctly?

Another thing to consider is that on the Mac, IDLE runs in a special mode via macosxSupport.py, which I turn on by forcing runningAsOSXApp() to always return True. Even after setting runningAsOSXApp() to true, a dialog box does not appear when writing to stderr. Maybe I am not testing this patch correctly? Let me know if I can do anything else to help, thanks.

Print in the user process goes to the shell window. You need to stimulate (or just add) print or warn in the idle process, which normally goes to console, or nowhere. It is hard (intentionally, I am sure) to dynamically manipulate idle process code. Roger said "try adding a "raise Exception" to a constructor for one of the extensions". I would start with a warning in PyShell. See doc or test_warnings for an example warnings call.

I may be mistaken but I thought, as of not too long ago, that pythonw.exe is no longer needed by current versions.

2.7.5 No change. pythonw.exe is still present and in use.

Terry,

Bottom line: I can't seem to get this patch to do anything for me. Before the patch is applied, IDLE seems to be handling warnings and exceptions just fine in PyShell on the Mac. I get no crash and the output matches the normal console. Here is a small test program I wrote:

import warnings

def fxn():
    # user warnings are not ignored by default and will cause a dump of
    # information to standard error.
    warnings.warn("User warning: Warn on purpose for IDLE", UserWarning)

if __name__ == "__main__":
    fxn()
    print("the program should not terminate with the warning, but keep on running")
    a = 10 * 1000
    print(a)
    # exception testing: each of these will stop the program
    # divide by zero
    b = 10 * (1/0)
    print(b)
    # variable not defined
    c = 4 + spam*3
    print(c)
    # can't convert 'int' object to str implicitly
    d = '2' + 2
    print(d)

Then I wanted to make sure I was executing the patched code, so I made sure I called idle.pyw (normally I wouldn't do that, I would use idle.py). After I called the correct script I changed the code to force stderr to the ErrorNotify class around line 101:

import sys

##if 0:  # For testing
##    sys.__stderr__ = None
##    sys.stderr = None

if sys.__stderr__ is None:
    sys.__stderr__ = ErrorNotify()
if sys.stderr is None:
    sys.stderr = ErrorNotify()
if sys.__stdout__ is None:
    sys.__stdout__ = ErrorNotify(devnull=True)
if sys.stdout is None:
    sys.stdout = ErrorNotify(devnull=True)

sys.__stderr__ = ErrorNotify()
sys.stderr = ErrorNotify()

I would expect that after this code runs, any message sent to stderr would go through the ErrorNotify class and a widget should appear with the stderr output. Even after this change I can't get the error widget to appear. I have not given up yet but I wanted to provide a status update.

Idle start up seems unnecessarily fragmented into multiple files. idlelib/__main__.py, idlelib/idle.py and idlelib/idle.pyw can all be started from the command line by name with either python or pythonw, or run once by import. idlelib/__main__.py can also be started by 'python(w) -m idlelib'. I checked that in Command Prompt:

C:\Programs\Python34>python lib/idlelib/idle.py
C:\Programs\Python34>python lib/idlelib/idle.pyw
C:\Programs\Python34>pythonw lib/idlelib/idle.py
C:\Programs\Python34>pythonw lib/idlelib/idle.pyw

all do the same thing, except that the first two caused a new console python icon to appear on the taskbar.
All three files contain

import idlelib.PyShell
idlelib.PyShell.main()

which is equivalent to 'from idlelib.PyShell import main; main()'. PyShell can also be run by 'python(w) -m idlelib.PyShell', though I believe that is somewhat 'deprecated'.

idle.py also has a path addition that seems obsolete. I believe the addition was intended for developing idle with installed python and a subrepository consisting of idlelib/*. Subrepositories were possible with svn but are not with hg. In any case, proper development testing ultimately requires testing revised idle with current tkinter.py and compiled _tkinter.c. I suppose that the addition would still work to let someone clone the repository and run the repository Lib/*.py, etc, with installed binaries, but that seems ultimately chancy.

idle.pyw (which is also called by idle.bat) has additions for a scenario that I don't understand: idle.pyw is present and running but apparently not in idlelib, Idle is 'not installed', but PyShell and the rest of idlelib are somewhere on sys.path. There is nothing that I see as pythonw specific. I think this file should be dropped and any needed path manipulations added to idle.py.

A single start up file (idle.py) should first import tkinter (and _tkinter) as in the patch, but in try/except. If the import fails, print to stderr if there is one (a console) or use subprocess to start python in a console to display the message. Ditto for creating root. Once that succeeds, I think stderr of the idle process should be replaced unconditionally. A message box as in the patch is one possibility. An error log window is another. The latter could accumulate all non-fatal error messages to be edited and saved if the user wishes.

I think the arg parsing code in PyShell that decides whether to open a Shell or an Editor should be moved to the startup file. Ditto for any other configuration stuff that precedes opening one or the other.
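As an illustration of the stream replacement discussed in this thread, a file-like stand-in for a lost stderr can be sketched roughly as follows. ErrorNotify is the name used in the patch, but this buffering version is only a headless approximation of its dialog-based behavior, not the patch itself:

```python
import sys

class ErrorNotify:
    """File-like object standing in for a lost stderr/stdout stream.

    A real implementation would show the collected text in a Tk dialog;
    this sketch just buffers it so the idea can be tested headlessly.
    """
    def __init__(self, devnull=False):
        self.devnull = devnull   # True: silently discard (the stdout case)
        self._chunks = []

    def write(self, text):
        if not self.devnull:
            self._chunks.append(text)
        return len(text)         # mimic the file protocol's return value

    def flush(self):             # part of the file protocol; nothing to do
        pass

    def getvalue(self):
        return "".join(self._chunks)

# Under pythonw.exe the standard streams are None; replacing them means
# writes no longer raise AttributeError and kill IDLE.
if sys.stderr is None:
    sys.stderr = ErrorNotify()
if sys.stdout is None:
    sys.stdout = ErrorNotify(devnull=True)
```

Because the object implements write() and flush(), both sys.stderr.write(...) and print(..., file=sys.stderr) work against it unchanged.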
http://bugs.python.org/issue13582
Please always reply all so a copy goes to the tutor list. We all participate and learn.

Jason DeBord wrote:
> Bob,
>
> Thanks for the reply.
>
> When I said I don't understand the above, that was just to give you
> guys a reference for where I am at.
>
> At my current job I rarely touch the web server. I have just been
> using php for server side development, thus, installing python and
> mod_python was new territory for me, though I managed to get it
> working surprisingly quick. As we speak I am trying to get Django up
> and running. This is proving difficult and I think I am getting ahead
> of myself.
>
> I'm lost with the code below because up to this point all I have ever
> had to do is type in " <?php " and away I go.

OK - line-by-line commentary:

mod_python is an apache add-on that creates a "long-running" Python process. This means the Python interpreter is loaded once (minimizing response time) and memory persists from one request invocation to the next.

> from mod_python import apache

Here mod_python is a python module that cooperates with the add-on. import loads the module (once) and brings the object apache from the module namespace into the local namespace. The reference to apache.OK is resolved by looking at the imported object for a property (attribute) named OK.

> def handler(req):

Defines a function handler. Apache by default calls handler for each request, and passes the request object (here named req - arbitrarily).

> req.write("Hello World!")

calls the req object's write method, passing a character string. The write method adds the string to the response under construction.

> return apache.OK

Terminates the function and returns apache.OK (whatever that is). I believe that returning apache.OK from the handler call tells apache that the request is complete and to return the page to the browser.

-- Bob Gailer Chapel Hill NC 919-636-4239
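Putting Bob's commentary back together, the whole handler can be exercised outside Apache with a stand-in request object. FakeRequest below is purely illustrative; under mod_python the real request object comes from Apache and OK comes from the apache module:

```python
OK = 0  # stand-in for apache.OK, the constant that signals success

def handler(req):
    # Apache calls handler() once per request, passing the request object.
    req.write("Hello World!")   # append text to the response under construction
    return OK                   # tell Apache the request completed

class FakeRequest:
    """Minimal stand-in for mod_python's request object, for testing."""
    def __init__(self):
        self.sent = []
    def write(self, text):
        self.sent.append(text)

req = FakeRequest()
status = handler(req)
print("".join(req.sent))  # the accumulated response body
```

This mirrors the flow Bob describes: the framework constructs the request, your function writes to it, and the return value tells the server how the request ended.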
https://mail.python.org/pipermail/tutor/2008-November/065463.html
Re: Moving computer and user accounts over to a new server
From: Jeff Middleton [SBS-MVP] (jeff_at_cfisolutions.com)
Date: Tue, 24 Aug 2004 09:52:06 -0500

Hi Tim,

I apologize if I contributed to the confusion that was apparently building; let me try to clean up the picture here.

First of all, to get my last brief post out of the way: I said to "snap a mirror", which isn't actually official language, rather an abbreviated reference to the idea of creating a mirror of an existing volume exclusively as a way to get a snapshot of the volume, not for a fault tolerant installation process. I can expand that thought. If we are all familiar with the idea that you can mirror a pair of drives, including the OS/system drive and additional partitions, then we also know the idea is that you get the identical partition contents on both. If you break the mirror apart, you have the original volume and a separate drive that contains the same information, either of which can be made to run the machine. It may also be a familiar idea, perhaps not, that you can do mismatched mirroring where you have, for instance, (3) drives of 20G that are not in any kind of array, and yet you can add a fourth drive like an 80G and mirror against all three of the others. The one drive would then contain the same partitions on the single drive that the others held on separate drives. If you split this, you have a way to pull a "snapshot" of the last running condition of the production drives. When you power off, break the mirror and pull the production drives out, just switch over to run from the single 80G instead. Now you have the original server hardware running with the original AD/system configuration, but on a different drive than the production ones you want to move. Now you build a different server as you described, and you can use the production drives again.
<backtracking> The previous part of the thread that Cris introduced, and then Chris Puckett elaborated upon, is a fairly technical concept of how to sustain AD via a migration across DCs. It's no surprise to me that you might find the process that Chris described to be a little over the edge in either complexity or unfamiliarity. Essentially that is the core of a process that I've written a migration document on that will be released to the public soon, but isn't yet. However, I put 90+ pages of information into making it a fully rounded concept as a strategic migration document. My details extend beyond what Chris described to actually loop the process twice to come back to the original SBS server name. However, I'm going to tell you that I wouldn't have put together 90 pages of information on this if I thought it was trivial enough to be described in 2 pages that anyone could just glance at and go with it. It really depends upon how much of that is familiar concept or not. I would say that 1 in 50 SBS techs at most would find the entire process familiar, if that. There's a whole lot of "need to know" distance between what is required to do that whole process and what we all typically think of as the day to day stuff we make a living doing.

<options> You asked an original question that was a simple question for which MS, quite honestly, doesn't have a simple answer that is a great answer. Here's an outline of options:

ADMT: The closest answer that MS has on official terms is the ADMT based migration on the SBS website that describes how to migrate from SBS 2000 to SBS 2003. That process breaks the namespace of your domain, servername, therefore Exchange Org....and it is a one-way process because it alters your original SBS server in the process of migrating it. You have some significant technical issues there, but the process works. That's how MS typically answers this question of "moving a domain".
Hardware Shove: MS has a KB on how to recover a Domain Controller by moving to different hardware. This one talks about the idea of lifting the entire drive contents to another box, then repairing the installation for the breaks that it causes due to the "shove". You can optimize this process by preparing the HAL, boot drive controllers, and configuration for the move in advance if the original server is still operational. However, this doesn't repair the installation, it just transports it to new hardware. In your case, this doesn't do what you want. (See KB 263532, KB 249694, KB 822052, and 292175 How to perform an in-place upgrade of Windows 2000.)

FSMO Transfer: That's what Chris Puckett described. You retain the AD by replication to another server, then in-place upgrade it to be your new SBS. Unfortunately, this will break the server namespace, even though the domain is retained, because you can't have two domain controllers in the same domain with the same name. If you change the name of your server, you change all the UNC paths from the workstations to the server, you break ISA installs, Outlook configuration, recently used documents, all related shortcuts. It's a blunt force trauma to the UNC, but the same AD. You have to knock out the SBS server, DNS configurations, and Exchange, then reconstruct that again. (See 216498 How To Remove Data in Active Directory After an Unsuccessful Domain Controller Demotion.)

Swing Migration: This is what I have documented, but it has complications that are solvable if you have the right steps, in the right sequence, with the right planning. I think it's the best way to do an upgrade, and it's a very nice way to rescue a dead SBS configuration, or a damaged one if you don't have a System State backup you can use to roll back prior to the damage event.

System State Migration: This last idea is functional, but it doesn't solve your problem.
It's valuable when you can move your intact configuration because the hardware is totally hosed, perhaps stolen. In this case, you build a baseline Windows Server installation, then do a System State DSRM restore of the old server to the new hardware. From there, you next repair problems with the network bindings and IP assignments, clean up DNS as needed, and repair whatever problems you have in mounting the Exchange, if you are dealing with that at the same time. (See KB 249694; 810161 Network adapters are missing or incorrect in Device Manager after you run NTBackup to restore system state data; 292175 How to perform an in-place upgrade of Windows 2000.)

The last part of what you asked about refers to the Exchange information migration. If you want to do an Exmerge, you have an entire answer there. If you were looking to move it intact, then ADMT can do that for you, so can a Shove, so can Swing, so can System State Migration. Making that happen with FSMO transfer means you have to get creative and use LegacyDN to solve the change in namespace.

At this point, that's just over 2 pages of summary statements on the options you have. But as you can see, the options you have all have strings attached. IMO, MS doesn't have a good answer to this problem, and that's why I spent the time to research and document the Swing Migration process. My documentation will be available quite obviously when the book is published with that chapter included. I will be presenting a session at SMB Nation on the Swing Migration process. In addition, I believe that in some manner, it will be possible to get this chapter in electronic form before the release of the book, perhaps even at SMB Nation itself.

I hope that I've answered more questions for you than I have raised, but I'm afraid that if you want a really short answer to your question, it's the link to the ADMT migration at this point. All the other answers are longer, even if they have desirable options provided.
"Timothy Morris" <tim@online.kingswoodhouse.e7even.com> wrote in message news:Om3lraZiEHA.3608@TK2MSFTNGP09.phx.gbl... > All points taken and I perfectly understand that my question is confusing. > Apologies, it is late here (now 04:32am) and it has been a long day. > > I understand what a mirror is, but the term "snap a mirror" is foreign to > me. > > I have four ATA-133 120Gb HDs (Maxtor 6Y0P0s) connected to a Promise Ultra > IDE controller. The first is split into two partitions - the first third > (C:) houses Windows and Program Files, and the remainder (D:) has folders > containing the contents of all the install CDs (SBS, Office 2003, NAI etc). > E: is a Combo DVD Rom/ CD-RW, and F: is the remaining 3 HDs striped in a > RAID 0 configuration for performance purposes. > > I'm upgrading the mainboard, CPU and RAM (ABit NF7/AMD 3200 XP+/2Gb Corsair > 3200 400MHz RAM. The new machine will have a 400Gb Ultrium tape drive, but I > do have a 500Gb LaCie USB 2.0 that I can back up the contents of the > existing system's HD to. > > Now perhaps we can start again (please)? I want to start again using the > same disk configuration but wiped clean (apart from D:). The one thing that > will make life easy for me is if I can maintain the machine and user SIDs > rather than have to set them all up again from scratch. > > Apologies and thanks! > > Tim. > > > "Cris Hanna (SBS-MVP)" <crishannanospam@computingpossibilities.net> wrote in > message news:ulaqVRYiEHA.2908@TK2MSFTNGP10.phx.gbl... > > Nor are we suggesting that you are being dense but if the term mirror is > > foreign to you, then its understandable that none of this is making sense > > > > But the question you've restated here is not the question you ask in your > > subject line > > > > Your subject says moving to a new server, now you're asking about fresh > > install on the same hardware. > > > > These are two different things. Do you have a tape backup that you can > > restore from?
Are your drives configured as a Raid 5 or each drive a > > separate single drive?? Are these SCSI, IDE or SATA?? > > > > -- > > CRIS HANNA > > SBS-MVP > > -------------------------------------------------------- > > Please do not respond to me directly by email but only in the newsgroups > so > > that all can benefit from the information
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.sbs/2004-08/7670.html
C++ is an object oriented programming language that was developed by Bjarne Stroustrup. The language derives its name from the C language, since most of its functionality is derived from C itself. However, C is a procedural programming language, while C++ also has the concept of classes; initially it was thought it would be named 'C with classes', and the final name C++ came from the increment operator (++), which is a part of the language. In this article we will be showing 50 C++ Interview Questions and Answers.

- Who is the father of C++?

Bjarne Stroustrup

- Define token in C++.

A token in C++ can be a keyword, identifier, literal, constant or symbol.

- What is a class in C++?

A class in C++ is a collection of functions and related data under a single name. It is a blueprint of objects. A C++ program can have any number of classes.

- What is the syntax of declaring a class in C++?

class name {
// some data
// some functions
};

- What is C++?

- C++ is an object-oriented programming language that was created by Bjarne Stroustrup. It was released in the year 1985.
- C++ is a superset of the C programming language, with the major addition of classes.

- What are the advantages of C++?

- C++ retains all aspects of the C language, and simplifies memory management as well.
- C++ can run on any platform.
- C++ is an object-oriented programming language, including concepts such as classes, objects, inheritance, polymorphism and abstraction.
- C++ has the concept of inheritance.
- Data hiding helps the programmer to build secure programs.
- Message passing is a technique used for communication between objects.

- Explain the concepts of OOPs in C++

OOPs stands for Object Oriented Programming. OOPs is an important feature in C++, through which the concepts of classes, objects, inheritance, data encapsulation, data hiding, abstraction and polymorphism come into play.
All these features help in building a strong foundation for C++.

- What is void main () in C++ language?

The first step in running a C++ program is compilation, where the conversion of C++ code to object code takes place. The second step is linking, where object code from the programmer and from libraries is combined. Execution then starts at the main() function. Here void is declared as the return type to indicate that main() does not return anything; note, however, that standard C++ requires main() to return int, so void main() is a non-standard extension.

- What are C++ objects?

A class gives the blueprint for an object, so an object is created from a class and is an instance of a class. The data and functions are bundled together as a self-contained unit called an object. An object is a real world entity that has a real existence.

- What is Inheritance in C++?

Inheritance provides reusability. Reusability ensures that the functionality of an existing class extends to new classes, and it eliminates redundancy of code. Inheritance is a technique of deriving a new class from an old class. The old class is known as the base class, and the new class is known as the derived class.

- What is the syntax of inheriting classes in C++?

class derived_class : visibility-mode base_class;

where visibility-mode can be public, private or protected.

- What is Encapsulation in C++?

Encapsulation is a technique of wrapping the data members and member functions in a single unit. It encapsulates the data within a class, so that no outside method can access the data. It is an important feature to hide the data from unauthorized access.

- What is Abstraction in C++?

Abstraction is a technique of showing only the essential details of a class without revealing the implementation details. If members are declared with the public keyword, they are accessible outside the class as well; otherwise restrictions are placed depending on the visibility type.

- What is Data binding in C++?
Data binding is a process of binding the application UI and business logic. Any change made in the business logic will reflect directly in the application UI.

- What is Polymorphism in C++?

Polymorphism means multiple forms: having more than one function with the same name but with different functionality. Polymorphism is of two types: static polymorphism, also known as early binding, and dynamic polymorphism, also known as late binding.

- Define storage class in C++.

Storage classes in C++ specify the lifetime, scope, accessibility and initial values of symbols, including variables, functions, etc. There are four storage classes in C++: auto, static, extern and register.

- What are the two types of Polymorphism in C++? Explain with example

Runtime polymorphism: Runtime polymorphism, or dynamic polymorphism, occurs when the child class contains a method which is already present in the parent class. Function overriding is an example of runtime polymorphism.

Compile time polymorphism: Compile-time polymorphism, or static polymorphism, is resolved at compile time. Function overloading is an example of compile-time polymorphism.

- What is 'this' pointer in C++?

The 'this' pointer is a constant pointer that holds the memory address of the current object. It is passed as a hidden argument to all non-static member function calls.

- How is function overloading different from operator overloading?

Function overloading allows two or more functions with a different type and number of parameters to have the same name. Operator overloading, on the other hand, allows redefining the way an operator works for user-defined types.

- Is it possible for a C++ program to be compiled without the main() function?

Yes, it is possible to compile the program. However, as the main() function is essential for the execution of the program, the program will stop after compiling and will not execute.

- What is the difference between a structure and a class in C++?
When deriving a structure from a class or some other structure, the default access specifier for the base class or structure is public; when deriving a class, the default access specifier is private. Likewise, the members of a structure are public by default, while the members of a class are private by default.

- What is a namespace in C++?

A namespace is a logical division of the code, designed to stop naming conflicts. It specifies the scope where identifiers such as variables, classes and functions are declared, and is used to remove ambiguity. For example, if two functions named sum() exist, they can be declared in different namespaces to prevent ambiguity.

- What is the standard namespace in C++?

The standard namespace, named std, contains the inbuilt classes and functions. So, by using the statement "using namespace std;" the namespace "std" is included in a C++ program.

- What are the characteristics of Class Members in C++?

Data and functions are known as the members of a C++ class. Their characteristics are:

- Data members and methods must be declared within the class definition.
- Within a class, a member cannot be re-declared.
- Other than in the class definition, no new member can be added to the class.

- What is a Member Function in a C++ Class?

A member function is like a regular function; however, it regulates the behavior of the class. It provides a definition for supporting various operations on data held in the form of an object. Member functions can be used to initialize and use the class data members (variables).

- What is a Loop in C++?

A loop is used to execute a set of statements repeatedly until a particular condition is satisfied. The loop statements are kept under the curly braces { }, known as the loop body.

- How many different types of loops are there in C++?

In the C++ language, three types of loops are used: the while loop, the for loop, and the do-while loop.

- What is the difference between While and DO While Loop in C++?
The while loop is an entry controlled loop, while the do-while loop is an exit controlled loop. In a while loop, if the condition is false then no statements are executed, as the condition is checked on entry itself; in a do-while loop, even if the condition is false the statements are executed once, as the condition is checked on exit.

- How are functions classified in C++?

In C++, functions are classified on the basis of:

Return type – void or a data type
Function name – different function names
Parameters – accepting arguments or no arguments

- Explain what are Access specifiers in C++ class? What are the types?

Access specifiers determine the access rights for the statements. They decide how the members of the class can be accessed, and they are also used in inheritance. There are three types of specifiers: private, public and protected.

- What is a reference variable in C++?

A reference variable is declared using the & operator. A reference variable stores another name for an already existing variable.

int a = 10;
int &b = a; // Here b is a reference variable, holding the reference of a.

- What are the different types of Member Functions in C++?

They are of 5 types:

- Simple functions
- Static functions
- Const functions
- Inline functions
- Friend functions

- How is delete[] different from delete?

delete is used to release a single object allocated with new; delete[] is used to release an array allocated with new[].

- What is recursion in C++?

The process through which a function repeatedly calls itself is known as recursion. The corresponding function is called a recursive function.

- What is an Inline function in C++?

An inline function is a function whose code is substituted by the compiler at the point of calling. This makes execution faster by avoiding function-call overhead. Such a function is defined by prefixing the function prototype with the keyword "inline".

- What is the keyword auto used for in C++?
By default, a local variable of a function is automatic, i.e. auto.

void f() {
    int a;       // automatic by default
    auto int b;  // explicitly declared auto (pre-C++11 usage; since
                 // C++11 the auto keyword instead deduces the type)
}

- What is a Static Variable in C++?

A static variable is a local variable that retains its value across function calls. Static variables are declared using the keyword "static", and their default value is zero. Example:

void f() {
    static int i;
    ++i;
    printf("%d ", i); // successive calls print 1 2 3 ....
}

- What is Name Mangling in C++?

The C++ compiler encodes the parameter types with the function name into a unique name. This process is called name mangling. The reverse process is called demangling.

- What is a Constructor and how is it called?

A constructor is a member function of the class having the same name as the class. It is generally used for initializing the members of the class. By default constructors are public. There are two ways in which constructors are called:

Implicit calling: Constructors are implicitly called by the compiler when an object of the class is created. This creates an object on the stack.
Explicit calling: When an object of a class is created using the new keyword, the constructor is called explicitly. This usually creates an object on the heap.

- What is upcasting in C++?

The act of converting a subclass reference or pointer into its superclass reference or pointer is called upcasting.

- What is pre-processor in C++?

Pre-processors are directives which give instructions to the compiler to pre-process the information before actual compilation starts, such as #include directives that pull in the declarations behind names like cout, cin, sin and cos.

- What is a copy constructor in C++?

A copy constructor is a constructor that accepts an object of the same class and copies all of its data members into the object being constructed.

- What is the difference between new and malloc() in C++?

new is an operator, while malloc() is a function.
new calls the constructor for object initialization as well, while malloc() only allocates the memory and does not call the constructor; the memory malloc() returns is uninitialized (garbage). (Note that plain new does not zero the memory for built-in types either; initialization happens only through constructors or value-initialization.) new is also type-safe: it returns a pointer of the correct type, while malloc() returns void*. - What is a friend function in C++? A friend function can access the private and protected members of a class. It is not a member of the class, but it must be listed in the class definition. It therefore has more access privileges than an ordinary non-member function. - What is a virtual function? A virtual function is a member function that is declared in the base class and redefined (overridden) by a derived class. The derived class's implementation replaces the one provided by the base class, and the replacement is called when the object is of the derived type. It is recognized by the keyword 'virtual'. - What is a destructor? A destructor is used to release the resources allocated by the object. A destructor is called automatically once the object goes out of scope. It is the complete opposite of a constructor and has the same name as the class, preceded by a tilde (~). Destructors do not take any arguments and have no return type. - What is an overflow error? It is a type of arithmetic error. It occurs when the result of an arithmetic operation is larger than the storage the system provides for it. - What is virtual inheritance? Virtual inheritance ensures that only one copy of a shared base class exists, even if that base class appears more than once in the inheritance hierarchy. - What does the Scope Resolution operator do? The scope resolution operator (::) is used to define a member function outside the class definition, or to refer to a name in the global scope. - What is a pure virtual function in C++? A pure virtual function is a virtual function that does not have a definition; it has only the declaration part.
The declaration of a pure virtual function ends with = 0. Syntax: virtual void function_name() = 0; // pure virtual function
https://www.studymite.com/cpp/cpp-interview-question-frequently-asked/?utm_source=home_recentpost&utm_medium=home_recent
Agenda See also: IRC log <yamx> This is Yam. <scribe> Scribe: Steven Steven: Partial regrets from Alex ... he's only on irc Mark: I propose Steven as chair Yam: Second <scribe> New members: Susan and John from Progeny (sent regrets for today) FtF in NYC in June Steven: My feeling is that not enough people could come in June ... so the next FtF will be in September ... In Madrid if I remember right ... 10 and 11 September Mark: Yes, Madrid Steven: So do we want to have an editor's meeting on the Monday and/or Tuesday? ... (let's do it offline) Reports back from conferences <Alex-DERI> As I already said via mail, I got the OK from DERI, so I could attend the FTF meeting in june Steven: Mark and I were at the two main web conferences the last two weeks ... There was a panel on the future of HTML at XTech ... quite a lot of panelists ... only 45 minutes, so not a lot of time ... BBC interviewed me and Mike Smith (BBC) afterwards ... Another interesting presentation was Joost ... who showed a lovely system using compound documents, XHTML, CSS, SVG ... really nice, and using our idea of how it should be Mark: The mobile Ajax session was good too, talking about XForms, XHTML2 and so on ... I argued that you can do Ajax declaratively, and the session ended up talking about that as the main point ... Volantis backed me up on this ... they deliver XHTML2 and XForms to mobile devices by transformations Rich: Do we have any data on how much declarative saves? Steven: Yes, that is the core of my talks at the moment ... you can show that costs can be reduced to 3% by doing it declaratively ... and we have a couple of real-life examples to back that up Shane: I wonder if Tina can summarise the thread on www-html Steven: She sent regrets Shane: there were two main threads: 1) Why is HTML5 codifying broken behaviour ... why not separate implementation guidelines from spec ...
2) changing the semantics between HTML4 and 5 Steven: It is not clear to me the best way to reach that group of people ... whether it is worth talking on those discussions or not <Alex-DERI> As a side note, I'm on HTML5 too Mark: We need to clarify what the difference is between what we are doing and what they are doing Alex, so is Mark <Alex-DERI> I know Mark: You can use XHTML2/XForms to show how much work you save compared with using libraries Shane: I fixed the xmlns problem, and passed the changes on, and it will be in the next version ... so you can use xmlns:xxx in markup and not get validation errors ... but you still need a schema Steven: There was another mail from someone from Japan, mentioning a fix Shane: That was about C1 characters ... the validator is now fixed, but we need to update our specs ... because we ship xml declarations ... and they now need to be fixed Steven: In TR space? Shane: Yes ... but they don't show up in the spec ... I will approach Ian about it Steven: When will the xmlns fix be shipped? Shane: It is in beta now Steven: An excellent demo at XTech of an RDFa extension for Mozilla ... seems to point to maturity of RDFa ... we can move forward on this quite fast Mark: Lots of talks mentioned RDFa at WWW2007 Shane: I have just raised the problem with Ian Jacobs ... he says we can't fix it in place ... we have to reissue the specs Steven: Why? Shane: Just because Steven: So it would be a PER Shane: Yes Steven: Painless enough Shane: I had an action item to ask how XML parsing works when QNames get turned into tuples ... the answer was "they don't" ... inside attributes ... and what we are trying to do is allow scoped attribute values in some taxonomy ... so an application has to do the interpretation itself ... So then I said to myself, if the parser isn't helping us, why are we using xmlns to establish the relationship? ... 
and the conclusion I reached with Mark, is that xmlns is something everyone understands, so we should use it ... but I wonder if we shouldn't use CURIEs, since they don't carry the baggage with them Mark: Do you mean to change the syntax of CURIEs? Shane: No Mark: It should always have said CURIEs, but because of the pushback we were getting, we said let's use QNames for now Steven: Are you suggesting we use something other than xmlns for the prefixes? Mark: We can unbind the connection ... use xmlns, or triples using <link> for instance ... Jeremy Carroll was enthusiastic to use it for SPARQL ... and we can persuade IPTC to use it too ... and it would allow us to have default prefixes ... if profile = X, then dc maps to this, foaf maps to that, etc. ... and then making XHTML documents much easier to write, cut/paste and so on Steven: Will the same notation (like namespaces) confuse people Mark: Good question; one option is to change the syntax of CURIEs ... so they look different ... we could even let people define the separator ... but one scenario is that within the flexibility, we use xmlns as an option [ADJOURN] This is scribe.perl Revision: 1.128 of Date: 2007/02/23 21:38:13 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/nex/next/ Succeeded: s/cary/cary/ Succeeded: s/cary/carry/ Succeeded: s/DO/Do/ Succeeded: s/namespaces/prefixes/ Found Scribe: Steven Inferring ScribeNick: Steven Default Present: Steven, Rich, yamx, markbirbeck, ShaneM Present: Steven Rich yamx markbirbeck ShaneM Regrets: Alessio Tina Susan Agenda: Got date from IRC log name: 23 May 2007 Guessing minutes URL: People with action items:[End of scribe.perl diagnostic output]
http://www.w3.org/2007/05/23-xhtml-minutes
An Easter egg As a result, this article is instead about developing a Java applet that draws textured Easter eggs. The textures are just a tile pattern built from a straightforward mathematical function of sines and cosines. We will transform these planar textures onto a sphere's surface to produce our finished product. To quickly draw these images to the screen we will render them into Java Image objects using classes from the java.awt.image package, letting the browser take care of any issues involved in actually displaying the resulting pictures. See Resources for the complete source code. I must admit that the original inspiration for this article comes from Clifford A. Pickover's Computers, Pattern, Chaos and Beauty: Graphics from an Unseen World (St. Martin's Press, ISBN: 031206179X). If pretty computer-generated pictures interest you, I recommend you pick up a copy of this book. The first issue we encounter when generating our eggs is what texture to use. So as not to unduly restrict ourselves I'm going to start by defining a generic Texture interface that can be supported by a variety of different texture functions. A texture function public interface Texture { public RGB getTexel (double i, double j); } An implementation of this interface must provide a method that returns the color of the texture element at the specified texture coordinate (i,j). The texture coordinate will be a value 0.0 <= i,j < 1.0, meaning that a texture function will define a texture over a square domain parameterized by i and j. The texture function should, however, accept values outside this range, clipping, replicating, or extending the texture as appropriate.
The value returned from the getTexel() method is of type RGB: public class RGB { double r, g, b; public RGB (double r, double g, double b) { this.r = r; this.g = g; this.b = b; } public RGB (int rgb) { r = (double) (rgb >> 16 & 0xff) / 255; g = (double) (rgb >> 8 & 0xff) / 255; b = (double) (rgb >> 0 & 0xff) / 255; } public void scale (double scale) { r *= scale; g *= scale; b *= scale; } public void add (RGB texel) { r += texel.r; g += texel.g; b += texel.b; } public int toRGB () { return 0xff000000 | (int) (r * 255.99) << 16 | (int) (g * 255.99) << 8 | (int) (b * 255.99) << 0; } } Our RGB class is similar to the standard Color class, except that it stores RGB colors in double precision; the color components should have values 0.0 <= r,g,b <= 1.0. We also provide some helper methods to convert, scale, and combine colors. An image-based texture This class implements a texture that uses an Image object as a source. We can use this class to map images onto a sphere by first converting the image into an array of integer RGB values (using the java.awt.image.PixelGrabber class) and then using this array to calculate texel values (as pixel is to picture element, so texel is to texture element). 
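As a quick sanity check of the color packing, here is a standalone version of the toRGB() arithmetic from the class above; the class and method names in this sketch are made up for illustration:

```java
// Standalone check of the toRGB() packing used above: double components in
// [0.0, 1.0] are mapped to 8-bit channels and packed as 0xAARRGGBB, with the
// alpha channel fixed at 0xff (fully opaque).
public class PackDemo {
    static int toRGB(double r, double g, double b) {
        return 0xff000000
                | (int) (r * 255.99) << 16
                | (int) (g * 255.99) << 8
                | (int) (b * 255.99);
    }

    public static void main(String[] args) {
        // Bright red packs to ffff0000 (alpha ff, red ff, green 00, blue 00).
        System.out.printf("%08x%n", toRGB(1.0, 0.0, 0.0)); // prints ffff0000
    }
}
```

The 255.99 factor ensures that a component of exactly 1.0 still maps to 255 rather than overflowing to 256.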
An image-mapped sphere import java.awt.*; import java.awt.image.*; public class ImageTexture implements Texture { int[] imagePixels; int imageWidth, imageHeight; public ImageTexture (Image image, int width, int height) throws InterruptedException { PixelGrabber grabber = new PixelGrabber (image, 0, 0, width, height, true); if (!grabber.grabPixels ()) throw new IllegalArgumentException ("Invalid image; pixel grab failed."); imagePixels = (int[]) grabber.getPixels (); imageWidth = grabber.getWidth (); imageHeight = grabber.getHeight (); } public RGB getTexel (double i, double j) { return new RGB (imagePixels[(int) (i * imageWidth % imageWidth) + imageWidth * (int) (j * imageHeight % imageHeight)]); } } Note that we simply convert the texture coordinate into an integer location on the surface of the image and then return the image color at that exact point. If the texture is sampled at a greater or lower frequency than the original image, the result will be jagged as pixels are skipped or replicated. Properly addressing this problem requires us to interpolate between colors of the image; however, such a task is difficult to do properly when we don't know where the texture will finally be displayed. Ideally, we would determine the amount of texture area covered by a single pixel on screen, and would then sample this amount of the actual texture. This approach is not practical, however, so we will not attempt to address it; supersampling, which we will examine later, is a much simpler way to reduce the effects of the problem. An algorithmic texture We may wish to experiment with an alternate texture, a completely artificial mathematical function. We could go with something like the Mandelbrot set or a Lyapanov function, but we'll instead go with a texture computed from the sin() function (described in Pickover's book). 
public class SineTexture implements Texture { double multiplier, scale; int modFunction; public SineTexture (double multiplier, double scale, int modFunction) { this.multiplier = multiplier; this.scale = scale; this.modFunction = modFunction; } public RGB getTexel (double i, double j) { i *= multiplier; j *= multiplier; double f = scale * (Math.sin (i) + Math.sin (j)); return ((int) f % modFunction == 0) ? new RGB (1.0, 0.0, 0.0) : new RGB (0.0, 1.0, 0.0); } } This class computes a simple sinusoidal function of (i,j). If the result, modulo a certain value, is 0, it returns bright red; otherwise, it returns bright green. The function uses three constants that control details of the resulting texture. Now that we have a texture function, we must decide how to map a square, flat texture onto the closed surface of a sphere; or in other words, how to transform a point on the surface of the sphere into an (i,j) texture coordinate. An obvious transformation is simply from longitude to i and latitude to j. The primary problem with this solution is that near the poles, the i coordinate will be greatly compressed: Walking around the earth at latitude 89° North is a lot quicker than at latitude 0°. In other words, our uniform flat texture will be squashed at the poles.
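The longitude/latitude transformation described above can be sketched as follows; the angle conventions (theta as longitude in [0, 2π), phi as latitude in [-π/2, π/2]) and the class name are assumptions for this illustration:

```java
// Sketch of the obvious sphere-to-texture mapping: longitude -> i, latitude -> j.
public class SphereMapping {
    // theta: longitude in [0, 2*pi), phi: latitude in [-pi/2, pi/2].
    static double[] toTexel(double theta, double phi) {
        double i = theta / (2 * Math.PI);          // longitude scaled to [0, 1)
        double j = (phi + Math.PI / 2) / Math.PI;  // latitude scaled to [0, 1]
        return new double[] {i, j};
    }

    public static void main(String[] args) {
        // A point on the equator, halfway around the sphere, maps to the
        // center of the texture.
        double[] t = toTexel(Math.PI, 0.0);
        System.out.println(t[0] + " " + t[1]); // prints 0.5 0.5
    }
}
```

This makes the squashing problem concrete: one step in i corresponds to a much shorter physical distance near the poles (where |phi| is close to π/2) than at the equator.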
http://www.javaworld.com/javaworld/jw-06-1998/jw-06-step.html
Since Facebook released their GraphQL technology back in 2015, it has evolved into a wonderful query language perfectly suited for all kinds of today's API endpoints. Why? Because of its advantages, it's a great alternative to REST. Convinced? Then let's create an application that fetches data from a GraphQL endpoint! Let's start with creating a new empty directory and moving into it: Now let's create the project's package.json file inside that directory: { "version": 1, "name": "graphql-api", "scripts": { "start": "micro index.js" } } The code above tells Now the name of the project ("graphql-api") and also to execute the index.js file (using micro, which we'll install in the next paragraph) when the npm start command is run in your terminal or on the server. Next, we need to install a few packages: Run this command in your terminal to install them using npm: Now we need to create the index.js file and populate it with content. As the first step, load the packages: const {buildSchema} = require('graphql') const server = require('express-graphql') const CORS = require('micro-cors')() Then define the type of data provided by the API. In our case, we'll only respond with a single key named hello, holding a "Hello world" message. In turn, it will be of type String: const schema = buildSchema(` type Query { hello: String } `) And now the value for each key of the response: const rootValue = { hello: () => 'Hello world' } As the last line of code in the file, you need to call express-graphql, wrap it with micro-cors and export it: module.exports = CORS(server({ schema, rootValue })) Now you should be able to run npm start inside the directory containing the API and you should be handed a URL which will show the following when opened in a browser window: At first glance, it looks like an error. Well... It actually is one. But in our case, it's a sign that you've managed to set up the code for the GraphQL API properly. Why?
Because it indicates that the endpoint is able to accept data (the error comes directly from express-graphql). After we've finished building the API endpoint, we need to deploy it. Simply run this command in your terminal: Once Now has finished uploading the files, you'll see a URL that points to your freshly created API. We'll use this address later in the application and load some data from it. But in the case of a real API (not used for testing purposes), you would now have to assign an alias to it. Now that we have the deployment for the API endpoint in place, we need to build an application that loads the data from there and shows it to the visitor. As a framework, we'll use create-react-app, a neat way of building React apps. Creating the project's file structure is as easy as running this command (the directory will be called "graphql-client"): Then you can start the development server by running this command: This works because inside the package.json file, there's a script property named start defined that executes the react-scripts start command when run in the terminal or on the server. In order to make the application capable of loading data using GraphQL, we need to first install react-apollo, a package that provides all of the tools necessary for interacting with a GraphQL API using React. To install it, run this command in a separate terminal tab (please ensure that you're inside the "graphql-client" directory): Once all of them have finished installing, open the index.js file inside the src directory.
Now remove all the code and start fresh with loading all packages (including React) and the built-in <App/> component: import React from 'react' import ReactDOM from 'react-dom' import { ApolloClient, ApolloProvider, createNetworkInterface } from 'react-apollo' import App from './App' Continue with creating an instance of ApolloClient and pointing it to your GraphQL server created earlier: const client = new ApolloClient({ networkInterface: createNetworkInterface({ uri: 'REPLACE_THIS_WITH_YOUR_API_URL' }) }) As the next step, we need to connect your client instance to the component tree. This can be done using the ApolloProvider component. Generally, it should be placed somewhere high in your view hierarchy, above all places where you need to access data. In our case, we only have one existing component (<App/>, which was already there when we generated the application). In turn, we only need to wrap this one with the ApolloProvider component: ReactDOM.render( <ApolloProvider client={client}> <App /> </ApolloProvider> , document.getElementById('root') ) Now that the interface for communicating with the API endpoint is in place, the only thing left for displaying the "Hello World" example is telling the client exactly which data to request. Open the App.js file inside the src directory, remove its content and start with loading all packages required: import React from 'react' import { gql, graphql } from 'react-apollo' Next, use gql to create the data query and assign it to a constant. This will tell react-apollo to only load the hello property (which we've defined earlier while writing the GraphQL API): const myQuery = gql`{ hello }` Now we only need to create the <App/> component. Let's just render a heading with the data from the API inside it: class App extends React.Component { render() { return <h1>{this.props.data.hello}</h1> } } Because we want to use it inside the index.js file (we already loaded it there), we need to export it now.
But to receive the data using the react-apollo package, we also need to wrap the component into the graphql() helper and pass the query to it: export default graphql(myQuery)(App) That's all for building the client! You should now be able to access the app on the address you saw in the terminal while running the npm start command earlier. By default, it should be the following one: In the browser, the client should look like this: Pat yourself on the back! You've managed to build your first GraphQL API, deploy it and even create a client that pulls data from it. Isn't that cool? Absolutely! So guess what's next! Now we'll make the client accessible from all over the world as well. But before we can do that, we need to prepare it by installing a tiny tool of ours (named serve). Because create-react-app doesn't come with a built-in webserver that can be used in production, you need to provide your own. So let's install serve by running this command in the terminal (inside the "graphql-client" directory): Then you need to add the now-start property to your package.json file. The command specified inside it will be run on now when the deployment is about to start. "scripts": { ... "now-start": "serve -s ./build" } Now you can do the same you did for the API. Deploy the client by running this command: Open the URL provided by Now and you should see the "Hello world!" example again. This means that you did everything right and your first GraphQL application is online. Well done!
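Besides the React client, the deployed endpoint can be queried by any HTTP client, since a GraphQL request is just a POST with a JSON body. A minimal sketch — the URL below is a placeholder for your own deployment:

```javascript
// Hypothetical endpoint URL — use the one `now` printed for your API.
const endpoint = 'https://your-graphql-api.now.sh';

// The body carries the same query the React client sends via react-apollo.
const body = JSON.stringify({ query: '{ hello }' });

// In any fetch-capable environment, the request would look like:
// fetch(endpoint, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body,
// })
//   .then((res) => res.json())
//   .then((json) => console.log(json.data.hello)); // expected: "Hello world"
```

This is also a handy way to smoke-test the API deployment independently of the client.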
https://zeit.co/docs/v1/examples/graphql
Problem I got this error when I tried running pip: from pip import main ImportError: cannot import name 'main' Solution I had installed pip from Ubuntu repositories using this command: $ sudo apt install python-pip However, I had previously used a pip from Anaconda on this computer. Though I had uninstalled it, it looked like it had left some files that were affecting how the Ubuntu pip worked. I found the ~/.local/bin and ~/.local/lib directories that Anaconda pip had created and deleted those directories. Ubuntu pip worked fine after that. Tried with: Ubuntu 16.04
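A quick way to confirm which installation Python is actually importing (useful for spotting stale ~/.local copies like the ones described above) is to ask importlib where a module comes from; "pip" is used here only as the example module name:

```python
import importlib.util

def module_origin(name):
    """Return the file a module would be imported from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return None if spec is None else spec.origin

# If this prints a path under ~/.local/lib, a leftover per-user install
# (e.g. from Anaconda) is shadowing the system package.
print(module_origin("pip"))
```

Per-user packages under ~/.local take precedence over system site-packages, which is why deleting those directories restored the Ubuntu pip.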
https://codeyarns.com/2018/08/03/cannot-import-name-main/
I have a list of ranges { start, end } and a value (point). Now I am looking for an effective way to get the index of the range n positions before the one in which the given value is present. For example: List: [ { 0, 4 }, {5, 10 }, {11, 14 }, {15, 20} , {21, 25} ] n : 2 value: 22 So here, 22 is in range {21, 25}, which is at index 4 (0-based), and since n is 2, the function should return the index of {11, 14}, because this is the nth range before the matching range. Here, I could write a binary search function easily since I have a sorted list of ranges. But I do not want to write while/for loops; I am looking for C++11/14 algorithms/lambdas, if available, which can solve this issue. Can you please help? Thanks, Kailas I like Jan's answer, but since the appropriate solution is notably different if your data is known to be sorted, here's an answer for the question as-asked: #include <cstddef> #include <utility> #include <stdexcept> #include <algorithm> #include <iterator> template<typename RngT, typename ValT, typename OffsetT> std::size_t find_prev_interval(RngT const& rng, ValT const& value, OffsetT const offset) { using std::begin; using std::end; auto const first = begin(rng), last = end(rng); auto const it = std::lower_bound( first, last, value, [](auto const& ivl, auto const& v) { return ivl.second < v; } ); // optional if value is *known* to be present if (it == last || value < it->first) { throw std::runtime_error("no matching interval"); } auto const i = std::distance(first, it); return offset <= i ? i - offset : throw std::runtime_error("offset exceeds index of value"); } As the implementation only needs forward-iterators, this will work for any standard library container or C-array; however for std::set<> or something like boost::containers::flat_set<>, you'll want to alter the logic to call rng.lower_bound() rather than std::lower_bound(). Also, replace exceptions with something like boost::optional if it is usual for offset to be too high to return a valid index.
https://codedump.io/share/9sZOxEMbrfH1/1/efficient-search-through-list-of-ranges
Why will Symfony 2.0 finally use PHP 5.3? The framework shootout As you might know, I'm back from the 2009 Zend Conference. I have already blogged about the conference, but I intentionally forgot to talk about the framework shootout. That's because I felt the need to think about the feedback I had during and after the session, and I wanted to mature my reflection with the Symfony core team before going further into the discussion. Instead of a classic closing keynote, the Zend Conference ended with a framework shootout. Several frameworks were represented on stage, mostly by their lead developer, and the audience were given the opportunity to ask any question they wanted. One of them was about PHP 5.3 and how each framework will make the transition to this latest and greatest PHP version. Will Symfony 2.0 be compatible with PHP 5.2? Both the upcoming Zend Framework and CakePHP 2.0 versions will rely on PHP 5.3. And for Symfony, I said it will still be compatible with PHP 5.2. From my point of view at the time, it would be a mistake to upgrade major frameworks to PHP 5.3 now for one main reason: major frameworks are used by many big companies, and upgrading to the latest version of a software fast in such companies is not always feasible. Sensio works for a lot of them, and we know what we are talking about. And no, some of our customers cannot "simply" compile their own version, because they rely on standard Linux distributions and their associated professional support; or because they have hundreds of machines and don't want to install a specific version of PHP just for some projects. If you attended the keynote, don't feel angry yet, keep reading. Apparently, many people were quite disappointed with this decision, so I started to think about this matter again. I made the decision to still support PHP 5.2 for Symfony 2.0 almost a year and a half ago. 
At that time, it was obviously out of question to only support PHP 5.3 and I thought Symfony 2.0 was right around the corner. Time flies and Symfony 2.0 is far from being ready. So, given that PHP 5.3 is now stable and released, and given that Symfony 2.0 is definitely not right around the corner (probably for late 2010), does it still make sense to continue supporting PHP 5.2? Or do we need to reconsider my decision? As hinted in this post title, the Symfony core team has changed its mind, and Symfony 2.0 will leverage PHP 5.3 and drop PHP 5.2 compatibility. I hope some of you feel better now, but I can already hear some others starting to grumble. Bear with us, as this decision was not made overnight. Keep reading this post to understand the reasoning behind this decision. Symfony 2.0 on PHP 5.3: A better language From a technical standpoint, using PHP 5.3 for Symfony 2.0 is great news. First, because the language has evolved and sports a lot of new exciting features like namespaces, closures, anonymous functions, late static binding, SPL enhancements, better Windows support, and much more. PHP 5.3 is also the fastest PHP release ever, which is a great benefit for complex beasts like frameworks. Symfony 2.0 on PHP 5.3: Better tools PHP 5.3 is also about a new ecosystem. The tools are getting better. The old PEAR installer (the one used to manage symfony plugins) is being phased out and replaced with Pyrus, a modern and robust software installer. Symfony 2.0 will definitely be able to leverage this new tool and take advantage of the native phar support to provide simple and better plugin management tools. Speaking of tools and third party libraries, let's talk about those we use natively in Symfony. First, our beloved Doctrine ORM. As announced some time ago, Doctrine 2.0 will be only compatible with PHP 5.3. And man, Doctrine 2.0 is gorgeous. Doctrine 2.0 is one of best things that's happened to PHP in a long time. Then, there is the Zend Framework. 
Symfony does not rely on the Zend Framework for its core features, but a large number of Symfony developers use some Zend Framework components in their Symfony projects (to add a search engine, connect to some well-known web services, create PDF documents on the fly, ...). Of course, using the new Zend Framework 2.0 components will require that we share the same requirements. Using PHP 5.3 is about having a better integration with the next generation of frameworks, tools, and third-party libraries out of the box. Symfony 2.0 on PHP 5.3: Helping the community Last but not the least, it is also about helping the PHP community. Symfony was one of the first frameworks to join the "Go PHP5" initiative some years ago when the community tried to help people adopting PHP5 faster. The adoption rate problem is probably what made our decision much more easier to make. As some people said on Twitter and during the keynote, if all major PHP frameworks and libraries start using PHP 5.3 for their next major version, Linux distributions, third-party tools (IDEs, ...), and companies will probably need to embrace the new version faster. And if Symfony can help in this effort, we would be very happy. Of course, we won't change Redhat's plans overnight. They have a quite strict roadmap, and because of their long-term support, they won't upgrade to the latest PHP version anytime soon; but if some companies use one of these conservative Linux distributions, they will also be conservative with the version of the framework they use. The good news for these companies is that symfony 1.4 will be maintained for three full years (see below). The Linux distribution problem is also mitigated by the native support of PHP 5.3 in the upcoming Zend Server. If companies are serious about PHP support, they can also switch to the Zend Server and have support for PHP 5.3 out of the box. 
Symfony 2.0 on PHP 5.3: Backward compatibility As Symfony 2.0 is about breaking backward compatibility anyway, the Symfony core team thinks that it is better to break backward compatibility once and for all and avoid yet another major compatibility break in coming years in order to finally support PHP 5.3. Switching to PHP 5.3 means dropping PHP 5.2 support by embracing PHP 5.3 new features like namespaces. It means that all Symfony libraries (from the MVC framework to the Symfony components) will require some drastic changes like renaming every class (renaming sfRequest to symfony\core\request for instance; say goodbye to the good old sf prefix!). For components we have already published, like the dependency injection container, we will soon create a 1.0 branch and release a 1.0 version. Then, the trunk will start a new life and be migrated to PHP 5.3. But we definitely don't want to lose people who won't be able to upgrade to PHP 5.3. And Symfony is in a very good position to cope with this drastic change, thanks to its Long Term Support (LTS) releases. Symfony 1.4, to be released by the end of this year, is our upcoming LTS release. Like any Symfony LTS release, it will be maintained for three years by the Symfony core team. In concrete terms, developers using this version will have support until early 2013. That's plenty of time for projects which cannot afford migrating to PHP 5.3 yet. And keep in mind that the maintenance covers forward compatibility with newer PHP versions. Symfony 1.3/1.4 already works on PHP 5.3.0, and will work on upcoming 5.3.X minor releases.
I hope you will all understand and approve this move. The Symfony core team is really excited about the opportunities it gives us, and we think the Symfony 2.0 release will be a blast. Wish us luck! Thanks to Kris Wallsmith, Fabian Lange, Jonathan Wage, Dustin Whittle, and Stefan Koopmanschap, members of the Symfony core team, who helped me to thoroughly mature my decision and for reviewing this blog post. Thanks Fabien, for this effort. I'm happy to hear your points, and certainly feel you're doing the right thing -- you'll get flack with either decision you make, but your points are quite solid. Symfony has to stay innovative and you choose the good option. Regarding to symfony 1.4 LTS for existing projects, I couldn't be so enthusiastic considering the quantity of 1.0 stuff not supported on 1.4... I think this is the right decision. I agree with your points on the upgrade path for some large and small companies on 5.3 but like it was said in the shootout I agree that the PHP community and Open Source projects are drivers in making these upgrades happen. If no one adopted 5.3 for another 2-3 yrs we would have the same issue as we do now. The goal should be to keep the code simple and clean, but as far as I've seen them in action, scripts became verbose in a bad way. I just hope that this will happen only to the core symfony files, and not to the ones the developers write. That will be a huge #fail (ops, that's not twitter :) ) That being said, looking at their previous release history () they are do for an immediate update, but I have heard nothing in that direction. Is 1.4 backwards compatible to 1.2? Obviously not to 1.0 without ugliness. @tayhimself: We are cutting "a lot" to make supporting "a clean" version for 3 years. If you consider something listed there in error please discuss on the developer mailing list. 
If you really need a deprecated feature, but nobody else thinks it should be supported, you are free to take the code and just use it for your apps (as a plugin?)

Though PHP 5.3 is already out, I can't use it as much as I'd like to, because all the projects I manage and maintain use Symfony 1.2 or a minor version, so I'm still bound to PHP 5.2. With this news now I'm sure I'll use Symfony 2.0 with PHP 5.3 and finally have the power of namespaces and my beloved closures :D, in my everyday work!!!

\symfony\core\Request ?

The PHP community could learn a lot from the Ruby community's agility in embracing new versions. It makes me sick that some people still do their main dev in PHP4!!! If you let people say "I am not ready to upgrade" then people will never be ready. If you tell them to be ready by a date then you are more likely to get people to follow you. If they don't want to move forward they probably aren't that helpful in the first place.

How was the audience? Hope you impressed the audience with symfony. Anyway, good decision too.

Anna said on Oct 27, 2009 at 18:56 #1
http://symfony.com/blog/why-will-symfony-2-0-finally-use-php-5-3
NAME
libroar - RoarAudio sound library
roarvio - RoarAudio virtual IO layer

SYNOPSIS
#include <roaraudio.h>

struct roar_vio_calls;

DESCRIPTION
The RoarAudio VIO interface is RoarAudio's IO abstraction layer. It provides basic IO functions such as read and write independently of the underlying IO. For example, you can open a plain or a gzipped file via the VIO layer. After a successful open both objects behave the same; libroar takes care of the compression in the gzip case.

TUTORIALS
Tutorials can be found in roartutvio(7).

IMPORTANT FUNCTIONS
There are several important functions. This is a small list of the most important ones.

Opening
roar_vio_open_file(3), roar_vio_open_fh(3), roar_vio_open_stdio(3), roar_vio_open_dstr(3), roar_vio_open_proto(3).
While there are a lot of functions important for opening files, the most important one is roar_vio_open_dstr(3). It opens a stream based on URLs that can point to local files or files on remote machines. It also can handle compression and encryption.

Closing
roar_vio_close(3), roar_vio_shutdown(3)

Reading and writing
roar_vio_read(3), roar_vio_write(3)

Seeking and positioning
roar_vio_lseek(3)

Non-Blocking and Asynchronous IO
roar_vio_nonblock(3), roar_vio_sync(3), roar_vio_select(3)

Networking and Sockets
roar_vio_accept(3)

String handling
roar_vio_printf(3)

BUGS
A lot...

SEE ALSO
roar-config(1), roartypes(1), roartutvio(7), RoarAudio(7).
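The claim that a plain and a gzipped file "behave the same" after a successful open is the whole point of such an abstraction layer. The following Python sketch is an illustrative analogy only (it is not libroar's API; `vio_open` is a hypothetical helper): pick an opener based on the file name, then use one uniform read interface for both.

```python
import gzip

# Hypothetical helper mirroring the VIO idea: choose an opener by
# file name, then treat both handles identically afterwards.
def vio_open(path, mode="rt"):
    opener = gzip.open if path.endswith(".gz") else open
    return opener(path, mode)

# Write the same payload as a plain file and as a gzipped file...
with open("plain.txt", "w") as f:
    f.write("hello\n")
with gzip.open("packed.txt.gz", "wt") as f:
    f.write("hello\n")

# ...and read both back through the one uniform interface.
for name in ("plain.txt", "packed.txt.gz"):
    with vio_open(name) as f:
        print(f.read().strip())   # prints "hello" for both files
```

As in the VIO layer, the caller never branches on the underlying IO after the open; the decompression is handled behind the uniform interface.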
http://manpages.ubuntu.com/manpages/precise/man7/roarvio.7.html
Vista an Uneasy Sleeper

Emmy King writes: "One thing we just can't wrap our mind around is the terrible, broken, and completely pitiful support for waking Vista up from a Deep Sleep or hibernation."

Re: (Score:2)
Indeed. But how come Win2k and XP hibernation features work damn near perfectly and Vista doesn't? I hibernate my XP laptop every night and I've yet to encounter a BSOD or another problem. Well, I did encounter a couple of instances where the network wouldn't work, but disabling and re-enabling the network fixed that.

Re: (Score:3, Informative)
I'll assume you wanted an answer. Hardware was designed with W2K and XP as their development test cases, and was specifically made to work with those OSes. XP is only an incremental update over W2K (ver 5.1 from 5.0), whereas Vista is a complete rewrite in many areas. So they code power management according to the spec, such that all ACPI-compliant devices will work, then they make tweaks and exceptions for

Re: (Score:3, Interesting)
If you used something native or if the manufacturer supported linux you'd probably be OK. I've experienced this myself.

Re:How hard can it be? (Score:4, Insightful)
I don't see how this is such a huge deal in Vista, anyway. It seems to work fine under XP, and you're going to be running most of the same apps for now...

Re: (Score:3, Funny)
They say each successive version is "more stable and secure!"

And with 9 shut down options to boot... (Score:5, Funny)

Re: (Score:2, Funny)
What options must I think of???
Option 1. Shutdown Vista.
Option 2. Hibernate mode.
Option 3. Restart.
Option 4. Force Shutdown.
Option 5. Shutdown all Users in Usergroup.
Option 6. Shutdown all Users in Network.
Option 7. Restart in Safemode.
Option 8. Restart in Safemode (network conn).
And finally the last one!
Option 9.
Shutdown every Vista User PC located in the world!!! whahahaha!
Le Marquis

S3 is not hibernate/deep sleep. (Score:5, Interesting)

Re: (Score:2)
IIRC "Deep Sleep" is not the official name of any ACPI state, although some refer to S3 that way ("deep" relative to S1).

ACPI Sucks. (Score:2, Interesting)
I've concluded that power management is just insanely tricky. APM/ACPI must be inconsistently implemented on every device, otherwise it could never work as poorly as it does. ACPI [wikipedia.org] does suck. It's a typical M$, "extensible," "do it in software" nightmare described in 500 pages of spec. It reminds me of nothing more than a winmodem. It will be hard even for careful hardware makers to follow and that's what M$ likes. APM, on the other hand, worked well for laptops and still does if supported. I close t

Re: (Score:3, Interesting)
Here's something you may not have known: Sun boxen (some workstations) can actually suspend to disk (and power down) and when you resume (such as the next day when you power up the workstation) the unix o/s resumes gracefully and FULLY architected (not a hack but a proper part of solaris). It surprised me since you don't think of Sun as an 'APM' implementation company, but it is true for at least some S

Re: (Score:2)
If you want that, get a Mac. My iBook has been waking from sleep reliably (and almost instantaneously!) since 2003, and the new Intel Macs can hibernate, too.

Re:Screw Ups (Score:5, Interesting)
Sleep did not work on either of them under winxp. This sounds like unfounded MS bashing by someone who got frustrated.

Only in the *final* (Score:2, Informative)
Throughout the beta, Deep Sleep in Windows Vista went great. [...]
But in the final version of Windows Vista, something is very, very majorly wrong. The problem is in the final version only, not a beta. This wasn't mentioned in the Slashdot summary, though, which could have saved confusion for those that don't RTFA.

Gee, I musta been sleeping... (Score:4, Funny)
Or maybe I'm still sleeping and this is a dream. Vista released with major operational flaws. Now that's a Linux promotion!

"no buggy software" (Score:5, Funny)

Re: (Score:2)
Really? The last time I ran "Hello World" a virus did a low level format of my hard disk... or was that "ILOVEU"???

Re: (Score:2, Funny)
Nope, it has an output format bug. It should end with an exclamation point, as in: "Hello World!"

Hello World bug free? (Score:2)
Yeah, right -- I bet it had a buffer overflow in printf or something!

Re: (Score:2, Funny)
int main(void) { char msg[4]; sprintf(msg, "Hello, World!"); printf("%s\n", msg); return 0; }

Re: (Score:3, Funny)
#include <stdio.h> int main(void) { char msg[10]; gets(msg); printf("%d\n", (int) msg); printf(msg, "Hello, World!"); printf("%s\n", msg); return 1; }

Hibernating (Score:2)
Interesting that TFA says Vista hibernated fine in beta but not in the release version. Oddly, XP hibernated flawlessly on my laptop but openSUSE 10.1 hangs every time. No Linux distro hibernates this particular laptop (Toshiba). We'll see if 10.2 will as soon as ATI gets done developing Vista drivers and gives us a driver for Xorg 7.2.

Re: (Score:2)
Do you mean suspend? Hibernate is only very slightly better than shutting down and restarting again. Suspend, on the other hand, is overwhelmingly fantastic.

Not a single issue with sleep or standby (Score:2, Insightful)
They don't quite hit Bill's 6 second boot time either - but both systems clock in right around 10 seconds, and that's pretty hard to complain about.

Uneasy lies the head...
(Score:5, Funny)
-Henry IV, Part II.

Re: (Score:2)
That's a good version, but for some reason they plan to make it obsolete rather quickly and replace it with Henry VI, which is far inferior. This, plus the ensuing civil war and an unavoidable Richard III rootkit, tends to drive up the TCO.

Not exactly great with other OSes (Score:5, Insightful)

Re: (Score:2)
Just as a point of comparison, since you mention external devices and motherboards, I have an oldish Powerbook, say I

Re: (Score:2)
Why is it Microsoft's job to simplify a process so it can be better implemented by their competitors? If you have evidence that they are demanding companies test against Windows exclusively, maybe you have a legitimate gripe. Short of that, they aren't doing anything wrong (by not forcing to a spec) that I can see. Besides which, it sounds like the companies are making a choice that testing against Windows and ignoring other markets is a cost-effective tradeoff.

Point of view (Score:2)
From a business standpoint? It isn't Microsoft's job. From a technical standpoint? That should be obvious. (Quality control, ease of implementation, etc.) The fact they prefer to use their dominant market position to make it harder for competitors, rather than making it better for everyone, is one of the reasons Microsoft is not good for computing, nor for business.

Re: (Score:3, Insightful)
The main reason is that ACPI bugs can be worked around in software (if you know everything there is to know about the hardware and BIOS implementation) and the manufacturer has to write a driver anyhow. So they do a quick, one-off driver that just barely works, and don't care about all the problems that will result from that mindset in

Re: (Score:2)
I like to test my application software on a fresh virtual machine. You'd be surprised how often having a stray dll around saves your ass while you're running, so you need to test on a fresh machine.
On the other hand, you can't test for every case where a stray dll will do you in. I wouldn't be surprised if a lot of the people having this problem are upgraders.

Pun... (Score:5, Insightful)
The pun was clearly intended, otherwise there would not have been quotation marks around 'power'. Why can't we all just be honest about our use of puns? Puns are not always bad. There's no need to be ashamed of them.

Works for me syndrome... (Score:3, Informative)
"but it sounds to me like this is a classic case of 'not enough research'" A rather funny comment coming from someone who presumably tested one system and found it to work, so therefore all systems must work. The article mentions that the author had problems with "deep sleep" on 6 of 8 systems. So he's obviously not making the claim that hibernate/Deep Sleep is broken on ALL systems, since there w

Only In America!! (Score:2)
Global Warming Denial USA Rules OK!!

Questions about sleeping (Score:2, Informative)
What happens with network applications (take google earth as an example - it connects and logs in at program start)? How about a domain? What happens if you go to sleep on one domain and wake up plugged into another? What happens when you wake up outside the login hours? What happens if your server slot is taken for an application (because you disconnected and someone else took it)? What happens if you are editing a

Apparently ... (Score:2, Informative)
Anyhow, I've been running Vista RC1 since it was released (and the beta before that) and never had a problem with the sleep function. Other problems, yes, but none with sleep and none so bad I'd complain about them (mostly my preferences vs. Microsoft's, predictable stuff like that).
In fact, I was just telling my wife the other day (she just melts when I talk sweet to her like this) that the sleep/hibernat

My Experience is Completely the Opposite (Score:3, Informative)
The OP makes it sound like their experience applies to everyone, so I have to call FUD on this. At any rate, I have zero problems with these features, using Vista Home Ultimate 64 bit.

Re: (Score:3, Interesting)
But your comment about my Windows experience is off the mark. I never used pirated windows, and I've used pretty much every version of windows in some form or another from 98 SE to Vista (although using that horrible abortion ME wasn't my choice). I worked in tech support for a university campus for a couple of years. I also use Linux. Right now I've got a dis

Re: (Score:2)
That method wastes a fair amount of power. I prefer the way I go now: I close the lid, the computer hibernates; in the morning, I open it and press the power button, find and plug in the mouse and power, say good morning and so on... and then the laptop is good to go. Also, stuffing the laptop in a bag without powerdown would make it rather hot? I wish I could do this with my main computer, but alas,

Re: (Score:2)
You talk to your computer? Man, you need to get laid, lol.

Re: (Score:2)
Sometimes you just have to hold a company to its promises. If the OS is released, it is being installed on computers that will be sold for this Christmas. If there are bugs that affect simple operations it is a serious problem.

Re: (Score:2, Funny)
BTW, it's funny how the parent is flamebait, while replacing a few words makes you insightful. Moderators, make up your mind.
http://slashdot.org/story/06/12/10/1338254/vista-an-uneasy-sleeper
Adding a new column in Python is an easy task. But have you tried to add a column whose values are based on some condition — a column whose values depend on the values of another column? For a small data set with a few rows it may be easy to do manually, but for a large dataset with hundreds of rows it may be quite difficult to do by hand. We can do this hectic manual work with a few lines of code: we can create a function which will do it for us for all the rows. So this recipe is a short example of how to insert a new column with values in it based on some condition.

import pandas as pd
import numpy as np

We have imported pandas and numpy. No other library is needed for this function.

Here we have created a DataFrame with columns 'bond_name' and 'risk_score'. We have used a print statement to view our initial dataset.

raw_data = {'bond_name': ['govt_bond_1', 'govt_bond_2', 'govt_bond_3',
                          'pvt_bond_1', 'pvt_bond_2', 'pvt_bond_3', 'pvt_bond_4'],
            'risk_score': [1.6, 0.9, 2.3, 3.0, 2.7, 1.8, 4.1]}
df = pd.DataFrame(raw_data, columns=['bond_name', 'risk_score'])
print(df)

First, we have created an empty list named rating, which we will append to according to the condition.

rating = []

Now we have created a loop which will iterate over all the rows in the column 'risk_score' and append values to the list. We are using an if-elif chain to express the condition on which we want to assign the values in the new column. Here, we want to assign a rating based on risk_score.
The condition we are applying is:

rating = []
for row in df['risk_score']:
    if row < 1.0:
        rating.append('AA')
    elif row < 2.0:
        rating.append('A')
    elif row < 3.0:
        rating.append('BB')
    elif row < 4.0:
        rating.append('B')
    elif row < 5.0:
        rating.append('C')
    else:
        rating.append('Not_Rated')

So finally we add that list as a column to the DataFrame and print the final dataset to see the changes.

df['rating'] = rating
print(df)

As output we get:

     bond_name  risk_score
0  govt_bond_1         1.6
1  govt_bond_2         0.9
2  govt_bond_3         2.3
3   pvt_bond_1         3.0
4   pvt_bond_2         2.7
5   pvt_bond_3         1.8
6   pvt_bond_4         4.1

     bond_name  risk_score rating
0  govt_bond_1         1.6      A
1  govt_bond_2         0.9     AA
2  govt_bond_3         2.3     BB
3   pvt_bond_1         3.0      B
4   pvt_bond_2         2.7     BB
5   pvt_bond_3         1.8      A
6   pvt_bond_4         4.1      C

Here we can see that a new column has been added with values according to the risk_score.
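The row-by-row loop works, but pandas can do the same bucketing without an explicit Python loop. A vectorized sketch using pd.cut with the same thresholds as the loop version (note: scores of 5.0 or above come out as NaN here rather than 'Not_Rated', which would need an extra fill step):

```python
import pandas as pd

raw_data = {'bond_name': ['govt_bond_1', 'govt_bond_2', 'govt_bond_3',
                          'pvt_bond_1', 'pvt_bond_2', 'pvt_bond_3', 'pvt_bond_4'],
            'risk_score': [1.6, 0.9, 2.3, 3.0, 2.7, 1.8, 4.1]}
df = pd.DataFrame(raw_data, columns=['bond_name', 'risk_score'])

# right=False makes the bins half-open: [-inf, 1), [1, 2), [2, 3), ...
# mirroring the "row < 1.0", "row < 2.0", ... thresholds of the loop.
df['rating'] = pd.cut(df['risk_score'],
                      bins=[float('-inf'), 1.0, 2.0, 3.0, 4.0, 5.0],
                      labels=['AA', 'A', 'BB', 'B', 'C'],
                      right=False)
print(df)
```

This produces the same ratings as the loop for all the rows above, and stays fast even for hundreds of thousands of rows, because the binning happens inside pandas rather than in a Python-level loop.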
https://www.projectpro.io/recipes/insert-new-column-based-on-condition-in-python
Returns true if the server was started. This starts a new server. This uses the networkPort property as the listen port.

//This is a script that creates a Toggle that you enable to start the Server.
//Attach this script to an empty GameObject
//Create a Toggle GameObject by going to Create>UI>Toggle.
//Click on your empty GameObject.
//Click and drag the Toggle GameObject from the Hierarchy to the Toggle section in the Inspector window.

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Networking;

//This makes the GameObject a NetworkManager GameObject
public class Example : NetworkManager
{
    public Toggle m_Toggle;
    Text m_ToggleText;

    void Start()
    {
        //Fetch the Text of the Toggle to allow you to change it later
        m_ToggleText = m_Toggle.GetComponentInChildren<Text>();
        OnOff(false);
    }

    //Connect this function to the Toggle to start and stop the Server
    public void OnOff(bool change)
    {
        //Detect when the Toggle returns false
        if (change == false)
        {
            //Stop the Server
            StopServer();
            //Change the text of the Toggle
            m_ToggleText.text = "Connect Server";
        }

        //Detect when the Toggle returns true
        if (change == true)
        {
            //Start the Server
            StartServer();
            //Change the Toggle Text
            m_ToggleText.text = "Disconnect Server";
        }
    }

    //Detect when the Server starts and output the status
    public override void OnStartServer()
    {
        //Output that the Server has started
        Debug.Log("Server Started!");
    }

    //Detect when the Server stops
    public override void OnStopServer()
    {
        //Output that the Server has stopped
        Debug.Log("Server Stopped!");
    }
}
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/Networking.NetworkManager.StartServer.html
type the words in lower case (However, @pxref{core-idef}).

The outer (aka text) interpreter converts numbers containing a dot into
a double precision number. Note that only numbers with the dot as last
character are standard-conforming.

doc-d+
doc-d-
doc-dnegate
doc-dabs
doc-dmin
doc-dmax

@node Floating Point, , Double precision, Arithmetic
@subsection Floating Point

The format of floating point numbers recognized by the outer (aka text)
interpreter is: a signed decimal number, possibly containing a decimal
point (@code{.}), followed by @code{E} or @code{e}, optionally followed
by a signed integer (the exponent). E.g., @code{1e} is the same as
@code{+1.0e+1}. Note that a number without @code{e} is not interpreted
as floating-point number, but as double (if the number contains a
@code{.}) or single precision integer. Also, conversions between string
and floating point numbers always use base 10, irrespective of the value
of @code{BASE}. If @code{BASE} contains a value greater than 14, the
@code{E} may be interpreted as digit and the number will be interpreted
as integer, unless it has a signed exponent (both @code{+} and @code{-}
are allowed as signs).

Angles in floating point operations are given in radians (a full circle
has 2 pi radians). Note, that gforth has a separate floating point
stack, but we use the unified notation.

Floating point numbers have a number of unpleasant surprises for the
unwary (e.g., floating point addition is not associative) and even a few
for the wary. You should not use them unless you know what you are doing
or you don't care that the results you get are totally bogus. If you
want to learn about the problems of floating point numbers (and how to
If you 448: want to learn about the problems of floating point numbers (and how to 1.11 anton 449: avoid them), you might start with @cite{David Goldberg, What Every 1.6 anton 450: Computer Scientist Should Know About Floating-Point Arithmetic, ACM 451: Computing Surveys 23(1):5@minus{}48, March 1991}. 1.4 anton 452: 453: doc-f+ 454: doc-f- 455: doc-f* 456: doc-f/ 457: doc-fnegate 458: doc-fabs 459: doc-fmax 460: doc-fmin 461: doc-floor 462: doc-fround 463: doc-f** 464: doc-fsqrt 465: doc-fexp 466: doc-fexpm1 467: doc-fln 468: doc-flnp1 469: doc-flog 1.6 anton 470: doc-falog 1.4 anton 471: doc-fsin 472: doc-fcos 473: doc-fsincos 474: doc-ftan 475: doc-fasin 476: doc-facos 477: doc-fatan 478: doc-fatan2 479: doc-fsinh 480: doc-fcosh 481: doc-ftanh 482: doc-fasinh 483: doc-facosh 484: doc-fatanh 485: 486: @node Stack Manipulation, Memory access, Arithmetic, Words 1.1 anton 487: @section Stack Manipulation 488: 489: gforth has a data stack (aka parameter stack) for characters, cells, 490: addresses, and double cells, a floating point stack for floating point 491: numbers, a return stack for storing the return addresses of colon 492: definitions and other data, and a locals stack for storing local 493: variables. Note that while every sane Forth has a separate floating 494: point stack, this is not strictly required; an ANS Forth system could 495: theoretically keep floating point numbers on the data stack. As an 496: additional difficulty, you don't know how many cells a floating point 497: number takes. It is reportedly possible to write words in a way that 498: they work also for a unified stack model, but we do not recommend trying 1.4 anton 499: it. Instead, just say that your program has an environmental dependency 500: on a separate FP stack. 501: 502: Also, a Forth system is allowed to keep the local variables on the 1.1 anton 503: return stack. This is reasonable, as local variables usually eliminate 504: the need to use the return stack explicitly. 
So, if you want to produce
a standard complying program and if you are using local variables in a
word, forget about return stack manipulations in that word (see the
standard document for the exact rules).

@menu
* Data stack::
* Floating point stack::
* Return stack::
* Locals stack::
* Stack pointer manipulation::
@end menu

@node Data stack, Floating point stack, Stack Manipulation, Stack Manipulation
@subsection Data stack
doc-drop
doc-nip
doc-dup
doc-over
doc-tuck
doc-swap
doc-rot
doc--rot
doc-?dup
doc-pick
doc-roll
doc-2drop
doc-2nip
doc-2dup
doc-2over
doc-2tuck
doc-2swap
doc-2rot

@node Floating point stack, Return stack, Data stack, Stack Manipulation
@subsection Floating point stack
doc-fdrop
doc-fnip
doc-fdup
doc-fover
doc-ftuck
doc-fswap
doc-frot

@node Return stack, Locals stack, Floating point stack, Stack Manipulation
@subsection Return stack
doc->r
doc-r>
doc-r@
doc-rdrop
doc-2>r
doc-2r>
doc-2r@
doc-2rdrop

@node Locals stack, Stack pointer manipulation, Return stack, Stack Manipulation
@subsection Locals stack

@node Stack pointer manipulation, , Locals stack, Stack Manipulation
@subsection Stack pointer manipulation
doc-sp@
doc-sp!
doc-fp@
doc-fp!
doc-rp@
doc-rp!
doc-lp@
doc-lp!
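The stack effects of the basic data-stack shuffling words (drop, dup, swap, over, rot, nip, tuck) can be modeled on a Python list with the top of stack at the end. This is an illustrative sketch of the documented stack effects only, not gforth's implementation:

```python
# Top of stack is the end of the list; comments give the Forth
# stack effect of each word.
def dup(s):  s.append(s[-1])                # ( a -- a a )
def drop(s): s.pop()                        # ( a -- )
def swap(s): s[-1], s[-2] = s[-2], s[-1]    # ( a b -- b a )
def over(s): s.append(s[-2])                # ( a b -- a b a )
def rot(s):  s.append(s.pop(-3))            # ( a b c -- b c a )
def nip(s):  s.pop(-2)                      # ( a b -- b )
def tuck(s): s.insert(-2, s[-1])            # ( a b -- b a b )

s = [1, 2, 3]
rot(s)          # ( 1 2 3 -- 2 3 1 )
print(s)        # [2, 3, 1]
```

Reading the stack effect comments against the one-line bodies is a quick way to check your mental model of each word before using it in Forth code.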
@node Memory access, Control Structures, Stack Manipulation, Words
@section Memory access

@menu
* Stack-Memory transfers::
* Address arithmetic::
* Memory block access::
@end menu

@node Stack-Memory transfers, Address arithmetic, Memory access, Memory access
@subsection Stack-Memory transfers

doc-@
doc-!
doc-+!
doc-c@
doc-c!
doc-2@
doc-2!
doc-f@
doc-f!
doc-sf@
doc-sf!
doc-df@
doc-df!

@node Address arithmetic, Memory block access, Stack-Memory transfers, Memory access
@subsection Address arithmetic

ANS Forth does not specify the sizes of the data types. Instead, it
offers a number of words for computing sizes and doing address
arithmetic. Basically, address arithmetic is performed in terms of
address units (aus); on most systems the address unit is one byte. Note
that a character may have more than one au, so @code{chars} is no noop
(on systems where it is a noop, it compiles to nothing).

ANS Forth also defines words for aligning addresses for specific
addresses. Many computers require that accesses to specific data types
must only occur at specific addresses; e.g., that cells may only be
accessed at addresses divisible by 4. Even if a machine allows unaligned
accesses, it can usually perform aligned accesses faster.

For the performance-conscious: alignment operations are usually only
necessary during the definition of a data structure, not during the
(more frequent) accesses to it.

ANS Forth defines no words for character-aligning addresses. This is not
an oversight, but reflects the fact that addresses that are not
char-aligned have no use in the standard and therefore will not be
created.
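The computation behind rounding an address up to an alignment boundary is simple. A Python sketch of the usual round-up bit trick (an illustration of the idea, not gforth's definition of any particular word; it assumes the alignment is a power of two, which is the common case):

```python
def aligned(addr, alignment):
    """Round addr up to the next multiple of alignment.
    Assumes alignment is a power of two, so -alignment is a
    mask with the low bits cleared."""
    return (addr + alignment - 1) & -alignment

# E.g., with 4-byte cells: 13 rounds up to 16, 16 stays put.
print(aligned(13, 4))   # 16
print(aligned(16, 4))   # 16
```

Adding alignment-1 before masking is what makes already-aligned addresses map to themselves while everything else rounds up, which matches the intent of alignment words: they are no-ops on aligned addresses.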
The standard guarantees that addresses returned by @code{CREATE}d words
are cell-aligned; in addition, gforth guarantees that these addresses
are aligned for all purposes.

Note that the standard defines a word @code{char}, which has nothing to
do with address arithmetic.

doc-chars
doc-char+
doc-cells
doc-cell+
doc-align
doc-aligned
doc-floats
doc-float+
doc-falign
doc-faligned
doc-sfloats
doc-sfloat+
doc-sfalign
doc-sfaligned
doc-dfloats
doc-dfloat+
doc-dfalign
doc-dfaligned
doc-maxalign
doc-maxaligned
doc-cfalign
doc-cfaligned
doc-address-unit-bits

@node Memory block access, , Address arithmetic, Memory access
@subsection Memory block access

doc-move
doc-erase

While the previous words work on address units, the rest works on
characters.

doc-cmove
doc-cmove>
doc-fill
doc-blank

@node Control Structures, Locals, Memory access, Words
@section Control Structures

Control structures in Forth cannot be used in interpret state, only in
compile state, i.e., in a colon definition. We do not like this
limitation, but have not seen a satisfying way around it yet, although
many schemes have been proposed.
@menu
* Selection::
* Simple Loops::
* Counted Loops::
* Arbitrary control structures::
* Calls and returns::
* Exception Handling::
@end menu

@node Selection, Simple Loops, Control Structures, Control Structures
@subsection Selection

@example
@var{flag}
IF
@var{code}
ENDIF
@end example
or
@example
@var{flag}
IF
@var{code1}
ELSE
@var{code2}
ENDIF
@end example

You can use @code{THEN} instead of @code{ENDIF}. Indeed, @code{THEN} is
standard, and @code{ENDIF} is not, although it is quite popular. We
recommend using @code{ENDIF}, because it is less confusing for people
who also know other languages (and is not prone to reinforcing negative
prejudices against Forth in these people). Adding @code{ENDIF} to a
system that only supplies @code{THEN} is simple:
@example
: endif POSTPONE then ; immediate
@end example

[According to @cite{Webster's New Encyclopedic Dictionary}, @dfn{then
(adv.)} has the following meanings:
@quotation
... 2b: following next after in order ... 3d: as a necessary consequence
(if you were there, then you saw them).
@end quotation
Forth's @code{THEN} has the meaning 2b, whereas @code{THEN} in Pascal
and many other programming languages has the meaning 3d.]

We also provide the words @code{?dup-if} and @code{?dup-0=-if}, so you
can avoid using @code{?dup}.

@example
@var{n}
CASE
@var{n1} OF @var{code1} ENDOF
@var{n2} OF @var{code2} ENDOF
@dots{}
ENDCASE
@end example

Executes the first @var{codei}, where the @var{ni} is equal to
@var{n}. A default case can be added by simply writing the code after
the last @code{ENDOF}.
It may use @var{n}, which is on top of the stack,
but must not consume it.

@node Simple Loops, Counted Loops, Selection, Control Structures
@subsection Simple Loops

@example
BEGIN
@var{code1}
@var{flag}
WHILE
@var{code2}
REPEAT
@end example

@var{code1} is executed and @var{flag} is computed. If it is true,
@var{code2} is executed and the loop is restarted; if @var{flag} is
false, execution continues after the @code{REPEAT}.

@example
BEGIN
@var{code}
@var{flag}
UNTIL
@end example

@var{code} is executed. The loop is restarted if @code{flag} is false.

@example
BEGIN
@var{code}
AGAIN
@end example

This is an endless loop.

@node Counted Loops, Arbitrary control structures, Simple Loops, Control Structures
@subsection Counted Loops

The basic counted loop is:
@example
@var{limit} @var{start}
?DO
@var{body}
LOOP
@end example

This performs one iteration for every integer, starting from @var{start}
and up to, but excluding @var{limit}. The counter, aka index, can be
accessed with @code{i}. E.g., the loop
@example
10 0 ?DO
i .
LOOP
@end example
prints
@example
0 1 2 3 4 5 6 7 8 9
@end example
The index of the innermost loop can be accessed with @code{i}, the index
of the next loop with @code{j}, and the index of the third loop with
@code{k}.

The loop control data are kept on the return stack, so there are some
restrictions on mixing return stack accesses and counted loop
words. E.g., if you put values on the return stack outside the loop, you
cannot read them inside the loop.
If you put values on the return stack
within a loop, you have to remove them before the end of the loop and
before accessing the index of the loop.

There are several variations on the counted loop:

@code{LEAVE} leaves the innermost counted loop immediately.

@code{LOOP} can be replaced with @code{@var{n} +LOOP}; this updates the
index by @var{n} instead of by 1. The loop is terminated when the border
between @var{limit-1} and @var{limit} is crossed. E.g.:

@code{4 0 ?DO i . 2 +LOOP} prints @code{0 2}

@code{4 1 ?DO i . 2 +LOOP} prints @code{1 3}

The behaviour of @code{@var{n} +LOOP} is peculiar when @var{n} is negative:

@code{-1 0 ?DO i . -1 +LOOP} prints @code{0 -1}

@code{ 0 0 ?DO i . -1 +LOOP} prints nothing

Therefore we recommend avoiding using @code{@var{n} +LOOP} with negative
@var{n}. One alternative is @code{@var{n} S+LOOP}, where the negative
case behaves symmetrically to the positive case:

@code{-2 0 ?DO i . -1 S+LOOP} prints @code{0 -1}

@code{-1 0 ?DO i . -1 S+LOOP} prints @code{0}

@code{ 0 0 ?DO i . -1 S+LOOP} prints nothing

The loop is terminated when the border between @var{limit@minus{}sgn(n)} and
@var{limit} is crossed. However, @code{S+LOOP} is not part of the ANS
Forth standard.

@code{?DO} can be replaced by @code{DO}. @code{DO} enters the loop even
when the start and the limit value are equal. We do not recommend using
@code{DO}. It will just give you maintenance troubles.

@code{UNLOOP} is used to prepare for an abnormal loop exit, e.g., via
@code{EXIT}. @code{UNLOOP} removes the loop control parameters from the
return stack so @code{EXIT} can get to its return address.
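The border-crossing termination rule for @code{+LOOP} can be checked with a small simulation. This Python sketch is written purely for illustration (gforth does not work this way internally); it reproduces the four outputs listed above, including the empty output of @code{?DO} when start equals limit:

```python
def plus_loop(limit, start, n):
    """Indices produced by: limit start ?DO i . n +LOOP
    ?DO skips the loop entirely when start equals limit; the loop
    terminates once adding n crosses the border that lies between
    limit-1 and limit."""
    out = []
    if start == limit:              # ?DO semantics
        return out
    i = start
    while True:
        out.append(i)
        new = i + n
        # Crossed the boundary between limit-1 and limit?
        # (i.e. the sides of "below limit" differ before and after)
        if (i < limit) != (new < limit):
            return out
        i = new

print(plus_loop(4, 0, 2))    # [0, 2]
print(plus_loop(4, 1, 2))    # [1, 3]
print(plus_loop(-1, 0, -1))  # [0, -1]
print(plus_loop(0, 0, -1))   # []
```

The asymmetry the text warns about falls out of the rule directly: for negative @var{n} the index can step well past @var{limit} without ever crossing the single border between @var{limit-1} and @var{limit}, which is why @code{0 0 ?DO i . -1 +LOOP} prints nothing while @code{-1 0 ?DO i . -1 +LOOP} prints two values.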
Another counted loop is
@example
@var{n}
FOR
@var{body}
NEXT
@end example
This is the preferred loop of native code compiler writers who are too
lazy to optimize @code{?DO} loops properly. In GNU Forth, this loop
iterates @var{n+1} times; @code{i} produces values starting with @var{n}
and ending with 0. Other Forth systems may behave differently, even if
they support @code{FOR} loops.

@node Arbitrary control structures, Calls and returns, Counted Loops, Control Structures
@subsection Arbitrary control structures

ANS Forth permits and supports using control structures in a non-nested
way. Information about incomplete control structures is stored on the
control-flow stack. This stack may be implemented on the Forth data
stack, and this is what we have done in gforth.

An @i{orig} entry represents an unresolved forward branch, a @i{dest}
entry represents a backward branch target. A few words are the basis for
building any control structure possible (except control structures that
need storage, like calls, coroutines, and backtracking).

doc-if
doc-ahead
doc-then
doc-begin
doc-until
doc-again
doc-cs-pick
doc-cs-roll

On many systems control-flow stack items take one word; in gforth they
currently take three (this may change in the future). Therefore it is a
really good idea to manipulate the control-flow stack with
@code{cs-pick} and @code{cs-roll}, not with data stack manipulation
words.
Some standard control structure words are built from these words:

doc-else
doc-while
doc-repeat

Counted loop words constitute a separate group of words:

doc-?do
doc-do
doc-for
doc-loop
doc-s+loop
doc-+loop
doc-next
doc-leave
doc-?leave
doc-unloop
doc-done

The standard does not allow using @code{cs-pick} and @code{cs-roll} on
@i{do-sys}. Our system allows it, but it's your job to ensure that for
every @code{?DO} etc. there is exactly one @code{UNLOOP} on any path
through the definition (@code{LOOP} etc. compile an @code{UNLOOP} on the
fall-through path). Also, you have to ensure that all @code{LEAVE}s are
resolved (by using one of the loop-ending words or @code{DONE}).

Another group of control structure words are

doc-case
doc-endcase
doc-of
doc-endof

@i{case-sys} and @i{of-sys} cannot be processed using @code{cs-pick} and
@code{cs-roll}.

@subsubsection Programming Style

In order to ensure readability we recommend that you do not create
arbitrary control structures directly, but define new control structure
words for the control structure you want and use these words in your
program.

E.g., instead of writing

@example
begin
  ...
if [ 1 cs-roll ]
  ...
again then
@end example

we recommend defining control structure words, e.g.,

@example
: while ( dest -- orig dest )
  POSTPONE if
  1 cs-roll ; immediate

: repeat ( orig dest -- )
  POSTPONE again
  POSTPONE then ; immediate
@end example

and then using these to create the control structure:

@example
begin
  ...
while
  ...
repeat
@end example

That's much easier to read, isn't it? Of course, @code{BEGIN} and
@code{WHILE} are predefined, so in this example it would not be
necessary to define them.

@node Calls and returns, Exception Handling, Arbitrary control structures, Control Structures
@subsection Calls and returns

A definition can be called simply by writing the name of the
definition. When the end of the definition is reached, it returns. An
earlier return can be forced using

doc-exit

Don't forget to clean up the return stack and @code{UNLOOP} any
outstanding @code{?DO}...@code{LOOP}s before @code{EXIT}ing. The
primitive compiled by @code{EXIT} is

doc-;s

@node Exception Handling, , Calls and returns, Control Structures
@subsection Exception Handling

doc-catch
doc-throw

@node Locals, Defining Words, Control Structures, Words
@section Locals

Local variables can make Forth programming more enjoyable and Forth
programs easier to read. Unfortunately, the locals of ANS Forth are
laden with restrictions. Therefore, we provide not only the ANS Forth
locals wordset, but also our own, more powerful locals wordset (we
implemented the ANS Forth locals wordset through our locals wordset).

@menu
* gforth locals::
* ANS Forth locals::
@end menu

@node gforth locals, ANS Forth locals, Locals, Locals
@subsection gforth locals

Locals can be defined with

@example
@{ local1 local2 ... -- comment @}
@end example
or
@example
@{ local1 local2 ...
@}
@end example

E.g.,
@example
: max @{ n1 n2 -- n3 @}
  n1 n2 > if
    n1
  else
    n2
  endif ;
@end example

The similarity of locals definitions with stack comments is intended. A
locals definition often replaces the stack comment of a word. The order
of the locals corresponds to the order in a stack comment and everything
after the @code{--} is really a comment.

This similarity has one disadvantage: It is too easy to confuse locals
declarations with stack comments, causing bugs and making them hard to
find. However, this problem can be avoided by appropriate coding
conventions: Do not use both notations in the same program. If you do,
they should be distinguished using additional means, e.g., by position.

The name of the local may be preceded by a type specifier, e.g.,
@code{F:} for a floating point value:

@example
: CX* @{ F: Ar F: Ai F: Br F: Bi -- Cr Ci @}
  \ complex multiplication
  Ar Br f* Ai Bi f* f-
  Ar Bi f* Ai Br f* f+ ;
@end example

GNU Forth currently supports cells (@code{W:}, @code{W^}), doubles
(@code{D:}, @code{D^}), floats (@code{F:}, @code{F^}) and characters
(@code{C:}, @code{C^}) in two flavours: a value-flavoured local (defined
with @code{W:}, @code{D:} etc.) produces its value and can be changed
with @code{TO}. A variable-flavoured local (defined with @code{W^} etc.)
produces its address (which becomes invalid when the variable's scope is
left). E.g., the standard word @code{emit} can be defined in terms of
@code{type} like this:

@example
: emit @{ C^ char* -- @}
  char* 1 type ;
@end example

A local without type specifier is a @code{W:} local. Both flavours of
locals are initialized with values from the data or FP stack.
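As a further illustration (a hypothetical example, not from the Gforth
sources), a variable-flavoured local can be updated through its address
with the ordinary memory words:

@example
: bump ( n1 -- n2 )
  @{ W^ x @} \ x is initialized with n1 and produces an address
  1 x +!    \ increment the cell x points to
  x @@ ;    \ fetch the updated value
@end example

E.g., @code{3 bump} leaves 4 on the stack. Note that the address
produced by @code{x} must not be used after @code{bump} returns.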
Currently there is no way to define locals with user-defined data
structures, but we are working on it.

GNU Forth allows defining locals everywhere in a colon definition. This
poses the following questions:

@menu
* Where are locals visible by name?::
* How long do locals live?::
* Programming Style::
* Implementation::
@end menu

@node Where are locals visible by name?, How long do locals live?, gforth locals, gforth locals
@subsubsection Where are locals visible by name?

Basically, the answer is that locals are visible where you would expect
it in block-structured languages, and sometimes a little longer. If you
want to restrict the scope of a local, enclose its definition in
@code{SCOPE}...@code{ENDSCOPE}.

doc-scope
doc-endscope

These words behave like control structure words, so you can use them
with @code{CS-PICK} and @code{CS-ROLL} to restrict the scope in
arbitrary ways.

If you want a more exact answer to the visibility question, here's the
basic principle: A local is visible in all places that can only be
reached through the definition of the local@footnote{In compiler
construction terminology, all places dominated by the definition of the
local.}. In other words, it is not visible in places that can be reached
without going through the definition of the local. E.g., locals defined
in @code{IF}...@code{ENDIF} are visible until the @code{ENDIF}, locals
defined in @code{BEGIN}...@code{UNTIL} are visible after the
@code{UNTIL} (until, e.g., a subsequent @code{ENDSCOPE}).

The reasoning behind this solution is: We want to have the locals
visible as long as it is meaningful. The user can always make the
visibility shorter by using explicit scoping.
In a place that can
only be reached through the definition of a local, the meaning of a
local name is clear. In other places it is not: How is the local
initialized at the control flow path that does not contain the
definition? Which local is meant, if the same name is defined twice in
two independent control flow paths?

This should be enough detail for nearly all users, so you can skip the
rest of this section. If you really must know all the gory details and
options, read on.

In order to implement this rule, the compiler has to know which places
are unreachable. It knows this automatically after @code{AHEAD},
@code{AGAIN}, @code{EXIT} and @code{LEAVE}; in other cases (e.g., after
most @code{THROW}s), you can use the word @code{UNREACHABLE} to tell the
compiler that the control flow never reaches that place. If
@code{UNREACHABLE} is not used where it could be, the only consequence is
that the visibility of some locals is more limited than the rule above
says. If @code{UNREACHABLE} is used where it should not be (i.e., if you
lie to the compiler), buggy code will be produced.

Another problem with this rule is that at @code{BEGIN}, the compiler
does not know which locals will be visible on the incoming
back-edge. All problems discussed in the following are due to this
ignorance of the compiler (we discuss the problems using @code{BEGIN}
loops as examples; the discussion also applies to @code{?DO} and other
loops). Perhaps the most insidious example is:
@example
AHEAD
BEGIN
x
[ 1 CS-ROLL ] THEN
@{ x @}
...
UNTIL
@end example

This should be legal according to the visibility rule. The use of
@code{x} can only be reached through the definition; but that appears
textually below the use.
From this example it is clear that the visibility rules cannot be fully
implemented without major headaches. Our implementation treats common
cases as advertised and the exceptions are treated in a safe way: The
compiler makes a reasonable guess about the locals visible after a
@code{BEGIN}; if it is too pessimistic, the
user will get a spurious error about the local not being defined; if the
compiler is too optimistic, it will notice this later and issue a
warning. In the case above the compiler would complain about @code{x}
being undefined at its use. You can see from the obscure examples in
this section that it takes quite unusual control structures to get the
compiler into trouble, and even then it will often do fine.

If the @code{BEGIN} is reachable from above, the most optimistic guess
is that all locals visible before the @code{BEGIN} will also be
visible after the @code{BEGIN}. This guess is valid for all loops that
are entered only through the @code{BEGIN}, in particular, for normal
@code{BEGIN}...@code{WHILE}...@code{REPEAT} and
@code{BEGIN}...@code{UNTIL} loops, and it is implemented in our
compiler. When the branch to the @code{BEGIN} is finally generated by
@code{AGAIN} or @code{UNTIL}, the compiler checks the guess and
warns the user if it was too optimistic:
@example
IF
@{ x @}
BEGIN
\ x ?
[ 1 cs-roll ] THEN
...
UNTIL
@end example

Here, @code{x} lives only until the @code{BEGIN}, but the compiler
optimistically assumes that it lives until the @code{THEN}. It notices
this difference when it compiles the @code{UNTIL} and issues a
warning.
The user can avoid the warning, and make sure that @code{x}
is not used in the wrong area, by using explicit scoping:
@example
IF
SCOPE
@{ x @}
ENDSCOPE
BEGIN
[ 1 cs-roll ] THEN
...
UNTIL
@end example

Since the guess is optimistic, there will be no spurious error messages
about undefined locals.

If the @code{BEGIN} is not reachable from above (e.g., after
@code{AHEAD} or @code{EXIT}), the compiler cannot even make an
optimistic guess, as the locals visible after the @code{BEGIN} may be
defined later. Therefore, the compiler assumes that no locals are
visible after the @code{BEGIN}. However, the user can use
@code{ASSUME-LIVE} to make the compiler assume that the same locals are
visible at the @code{BEGIN} as at the point where the item was created.

doc-assume-live

E.g.,
@example
@{ x @}
AHEAD
ASSUME-LIVE
BEGIN
x
[ 1 CS-ROLL ] THEN
...
UNTIL
@end example

Other cases where the locals are defined before the @code{BEGIN} can be
handled by inserting an appropriate @code{CS-ROLL} before the
@code{ASSUME-LIVE} (and changing the control-flow stack manipulation
behind the @code{ASSUME-LIVE}).

Cases where locals are defined after the @code{BEGIN} (but should be
visible immediately after the @code{BEGIN}) can only be handled by
rearranging the loop. E.g., the ``most insidious'' example above can be
rearranged into:
@example
BEGIN
@{ x @}
... 0=
WHILE
x
REPEAT
@end example

@node How long do locals live?, Programming Style, Where are locals visible by name?, gforth locals
@subsubsection How long do locals live?
The right answer to the lifetime question would be: A local lives at
least as long as it can be accessed. For a value-flavoured local this
means: until the end of its visibility. However, a variable-flavoured
local could be accessed through its address far beyond its visibility
scope. Ultimately, this would mean that such locals would have to be
garbage collected. Since this entails un-Forth-like implementation
complexities, I adopted the same cowardly solution as some other
languages (e.g., C): The local lives only as long as it is visible;
afterwards its address is invalid (and programs that access it
afterwards are erroneous).

@node Programming Style, Implementation, How long do locals live?, gforth locals
@subsubsection Programming Style

The freedom to define locals anywhere has the potential to change
programming styles dramatically. In particular, the need to use the
return stack for intermediate storage vanishes. Moreover, all stack
manipulations (except @code{PICK}s and @code{ROLL}s with run-time
determined arguments) can be eliminated: If the stack items are in the
wrong order, just write a locals definition for all of them; then
write the items in the order you want.

This seems a little far-fetched and eliminating stack manipulations is
unlikely to become a conscious programming objective. Still, the number
of stack manipulations will be reduced dramatically if local variables
are used liberally (e.g., compare @code{max} in @ref{gforth locals} with
a traditional implementation of @code{max}).

This shows one potential benefit of locals: making Forth programs more
readable.
Of course, this benefit will only be realized if the
programmers continue to honour the principle of factoring instead of
using the added latitude to make the words longer.

Using @code{TO} can and should be avoided. Without @code{TO},
every value-flavoured local has only a single assignment and many
advantages of functional languages apply to Forth. I.e., programs are
easier to analyse, to optimize and to read: It is clear from the
definition what the local stands for, it does not turn into something
different later.

E.g., a definition using @code{TO} might look like this:
@example
: strcmp @{ addr1 u1 addr2 u2 -- n @}
  u1 u2 min 0
  ?do
    addr1 c@@ addr2 c@@ - ?dup
    if
      unloop exit
    then
    addr1 char+ TO addr1
    addr2 char+ TO addr2
  loop
  u1 u2 - ;
@end example
Here, @code{TO} is used to update @code{addr1} and @code{addr2} at
every loop iteration. @code{strcmp} is a typical example of the
readability problems of using @code{TO}. When you start reading
@code{strcmp}, you think that @code{addr1} refers to the start of the
string. Only near the end of the loop do you realize that it is something
else.

This can be avoided by defining two locals at the start of the loop that
are initialized with the right value for the current iteration.
@example
: strcmp @{ addr1 u1 addr2 u2 -- n @}
  addr1 addr2
  u1 u2 min 0
  ?do @{ s1 s2 @}
    s1 c@@ s2 c@@ - ?dup
    if
      unloop exit
    then
    s1 char+ s2 char+
  loop
  2drop
  u1 u2 - ;
@end example
Here it is clear from the start that @code{s1} has a different value
in every loop iteration.

@node Implementation, , Programming Style, gforth locals
@subsubsection Implementation

GNU Forth uses an extra locals stack.
The most compelling reason for
this is that the return stack is not float-aligned; using an extra stack
also eliminates the problems and restrictions of using the return stack
as a locals stack. Like the other stacks, the locals stack grows toward
lower addresses. A few primitives allow an efficient implementation:

doc-@local#
doc-f@local#
doc-laddr#
doc-lp+!#
doc-lp!
doc->l
doc-f>l

In addition to these primitives, some specializations of these
primitives for commonly occurring inline arguments are provided for
efficiency reasons, e.g., @code{@@local0} as specialization of
@code{@@local#} for the inline argument 0. The following compiling words
compile the right specialized version, or the general version, as
appropriate:

doc-compile-@local
doc-compile-f@local
doc-compile-lp+!

Combinations of conditional branches and @code{lp+!#} like
@code{?branch-lp+!#} (the locals pointer is only changed if the branch
is taken) are provided for efficiency and correctness in loops.

A special area in the dictionary space is reserved for keeping the
local variable names. @code{@{} switches the dictionary pointer to this
area and @code{@}} switches it back and generates the locals
initializing code. @code{W:} etc.@: are normal defining words. This
special area is cleared at the start of every colon definition.

A special feature of GNU Forth's dictionary is used to implement the
definition of locals without type specifiers: every wordlist (aka
vocabulary) has its own methods for searching
etc. (@pxref{Wordlists}). For the present purpose we defined a wordlist
with a special search method: When it is searched for a word, it
actually creates that word using @code{W:}.
@code{@{} changes the search
order to first search the wordlist containing @code{@}}, @code{W:} etc.,
and then the wordlist for defining locals without type specifiers.

The lifetime rules support a stack discipline within a colon
definition: The lifetime of a local is either nested with other locals'
lifetimes or it does not overlap them.

At @code{BEGIN}, @code{IF}, and @code{AHEAD} no code for locals stack
pointer manipulation is generated. Between control structure words
locals definitions can push locals onto the locals stack. @code{AGAIN}
is the simplest of the other three control flow words. It has to
restore the locals stack depth of the corresponding @code{BEGIN}
before branching. The code looks like this:
@format
@code{lp+!#} current-locals-size @minus{} dest-locals-size
@code{branch} <begin>
@end format

@code{UNTIL} is a little more complicated: If it branches back, it
must adjust the stack just like @code{AGAIN}. But if it falls through,
the locals stack must not be changed. The compiler generates the
following code:
@format
@code{?branch-lp+!#} <begin> current-locals-size @minus{} dest-locals-size
@end format
The locals stack pointer is only adjusted if the branch is taken.

@code{THEN} can produce somewhat inefficient code:
@format
@code{lp+!#} current-locals-size @minus{} orig-locals-size
<orig target>:
@code{lp+!#} orig-locals-size @minus{} new-locals-size
@end format
The second @code{lp+!#} adjusts the locals stack pointer from the
level at the @var{orig} point to the level after the @code{THEN}.
The
first @code{lp+!#} adjusts the locals stack pointer from the current
level to the level at the @var{orig} point, so the complete effect is an
adjustment from the current level to the right level after the
@code{THEN}.

In a conventional Forth implementation a @i{dest} control-flow stack entry
is just the target address and an @i{orig} entry is just the address to be
patched. Our locals implementation adds a wordlist to every @i{orig} or
@i{dest} item. It is the list of locals visible (or assumed visible) at the
point described by the entry. Our implementation also adds a tag to identify
the kind of entry, in particular to differentiate between live and dead
(reachable and unreachable) @i{orig} entries.

A few unusual operations have to be performed on locals wordlists:

doc-common-list
doc-sub-list?
doc-list-size

Several features of our locals wordlist implementation make these
operations easy to implement: The locals wordlists are organised as
linked lists; the tails of these lists are shared, if the lists
contain some of the same locals; and the address of a name is greater
than the address of the names behind it in the list.

Another important implementation detail is the variable
@code{dead-code}. It is used by @code{BEGIN} and @code{THEN} to
determine if they can be reached directly or only through the branch
that they resolve. @code{dead-code} is set by @code{UNREACHABLE},
@code{AHEAD}, @code{EXIT} etc., and cleared at the start of a colon
definition, by @code{BEGIN} and usually by @code{THEN}.

Counted loops are similar to other loops in most respects, but
@code{LEAVE} requires special attention: It performs basically the same
service as @code{AHEAD}, but it does not create a control-flow stack
entry.
Therefore the information has to be stored elsewhere;
traditionally, the information was stored in the target fields of the
branches created by the @code{LEAVE}s, by organizing these fields into a
linked list. Unfortunately, this clever trick does not provide enough
space for storing our extended control flow information. Therefore, we
introduce another stack, the leave stack. It contains the control-flow
stack entries for all unresolved @code{LEAVE}s.

Local names are kept until the end of the colon definition, even if
they are no longer visible in any control-flow path. In a few cases
this may lead to increased space needs for the locals name area, but
usually less than reclaiming this space would cost in code size.


@node ANS Forth locals, , gforth locals, Locals
@subsection ANS Forth locals

The ANS Forth locals wordset does not define a syntax for locals, but
words that make it possible to define various syntaxes. One of the
possible syntaxes is a subset of the syntax we used in the gforth locals
wordset, i.e.:

@example
@{ local1 local2 ... -- comment @}
@end example
or
@example
@{ local1 local2 ... @}
@end example

The order of the locals corresponds to the order in a stack comment. The
restrictions are:

@itemize @bullet
@item
Locals can only be cell-sized values (no type specifiers are allowed).
@item
Locals can be defined only outside control structures.
@item
Locals can interfere with explicit usage of the return stack. For the
exact (and long) rules, see the standard. If you don't use return stack
accessing words in a definition using locals, you will be all right. The
purpose of this rule is to make locals implementation on the return
stack easier.
@item
The whole definition must be in one line.
@end itemize

Locals defined in this way behave like @code{VALUE}s
(@pxref{Values}). I.e., they are initialized from the stack. Using their
name produces their value. Their value can be changed using @code{TO}.

Since this syntax is supported by gforth directly, you need not do
anything to use it. If you want to port a program using this syntax to
another ANS Forth system, use @file{anslocal.fs} to implement the syntax
on the other system.

Note that a syntax shown in the standard, section A.13, looks
similar, but is quite different in having the order of locals
reversed. Beware!

The ANS Forth locals wordset itself consists of the following word:

doc-(local)

The ANS Forth locals extension wordset defines a syntax, but it is so
awful that we strongly recommend not to use it. We have implemented this
syntax to make porting to gforth easy, but do not document it here. The
problem with this syntax is that the locals are defined in an order
reversed with respect to the standard stack comment notation, making
programs harder to read, and easier to misread and miswrite. The only
merit of this syntax is that it is easy to implement using the ANS Forth
locals wordset.
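For illustration, here is a definition in the portable subset syntax
described above (a hypothetical example; note that the restrictions
require the locals to be defined outside control structures and the
whole definition to fit on one line):

@example
: clamp @{ n lo hi -- n' @}  n lo max hi min ;
@end example

E.g., @code{5 1 3 clamp} leaves 3 on the stack. Since these locals
behave like @code{VALUE}s, @code{n} could also be changed with
@code{TO n} within the definition.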
@node Defining Words, Wordlists, Locals, Words
@section Defining Words

@menu
* Values::
@end menu

@node Values, , Defining Words, Defining Words
@subsection Values

@node Wordlists, Files, Defining Words, Words
@section Wordlists

@node Files, Blocks, Wordlists, Words
@section Files

@node Blocks, Other I/O, Files, Words
@section Blocks

@node Other I/O, Programming Tools, Blocks, Words
@section Other I/O

@node Programming Tools, Threading Words, Other I/O, Words
@section Programming Tools

@menu
* Debugging::  Simple and quick.
* Assertions:: Making your programs self-checking.
@end menu

@node Debugging, Assertions, Programming Tools, Programming Tools
@subsection Debugging

The simple debugging aids provided in @file{debugging.fs}
are meant to support a different style of debugging than the
tracing/stepping debuggers used in languages with long turn-around
times.

A much better (faster) way in fast-compiling languages is to add
printing code at well-selected places, let the program run, look at
the output, see where things went wrong, add more printing code, etc.,
until the bug is found.

The word @code{~~} is easy to insert. It just prints debugging
information (by default the source location and the stack contents). It
is also easy to remove (@kbd{C-x ~} in the Emacs Forth mode to
query-replace them with nothing). The deferred words
@code{printdebugdata} and @code{printdebugline} control the output of
@code{~~}.
The default source location output format works well with
Emacs' compilation mode, so you can step through the program at the
source level using @kbd{C-x `} (the advantage over a stepping debugger
is that you can step in any direction and you know where the crash has
happened or where the strange data has occurred).

Note that the default actions clobber the contents of the pictured
numeric output string, so you should not use @code{~~}, e.g., between
@code{<#} and @code{#>}.

doc-~~
doc-printdebugdata
doc-printdebugline

@node Assertions, , Debugging, Programming Tools
@subsection Assertions

It is a good idea to make your programs self-checking, in particular if
you use an assumption (e.g., that a certain field of a data structure is
never zero) that may become wrong during maintenance. GForth supports
assertions for this purpose. They are used like this:

@example
assert( @var{flag} )
@end example

The code between @code{assert(} and @code{)} should compute a flag that
should be true if everything is all right and false otherwise. It should
not change anything else on the stack. The overall stack effect of the
assertion is @code{( -- )}. E.g.

@example
assert( 1 1 + 2 = ) \ what we learn in school
assert( dup 0<> )   \ assert that the top of stack is not zero
assert( false )     \ this code should not be reached
@end example

The need for assertions is different at different times. During
debugging, we want more checking; in production we sometimes care more
for speed. Therefore, assertions can be turned off, i.e., the assertion
becomes a comment.
Depending on the importance of an assertion and the
time it takes to check it, you may want to turn off some assertions and
keep others turned on. GForth provides several levels of assertions for
this purpose:

doc-assert0(
doc-assert1(
doc-assert2(
doc-assert3(
doc-assert(
doc-)

@code{Assert(} is the same as @code{assert1(}. The variable
@code{assert-level} specifies the highest assertions that are turned
on. I.e., at the default @code{assert-level} of one, @code{assert0(} and
@code{assert1(} assertions perform checking, while @code{assert2(} and
@code{assert3(} assertions are treated as comments.

Note that the @code{assert-level} is evaluated at compile-time, not at
run-time. I.e., you cannot turn assertions on or off at run-time; you
have to set the @code{assert-level} appropriately before compiling a
piece of code. You can compile several pieces of code at several
@code{assert-level}s (e.g., a trusted library at level 1 and newly
written code at level 3).

doc-assert-level

If an assertion fails, a message compatible with Emacs' compilation mode
is produced and the execution is aborted (currently with @code{ABORT"}).
If there is interest, we will introduce a special throw code. But if you
intend to @code{catch} a specific condition, using @code{throw} is
probably more appropriate than an assertion.

@node Threading Words, , Programming Tools, Words
@section Threading Words

These words provide access to code addresses and other threading stuff
in gforth (and, possibly, other interpretive Forths). This wordset more
or less abstracts away the differences between direct and indirect
threading (and, for direct threading, the machine dependences). However,
at present this wordset is still incomplete.
It is also pretty low-level;
some day it will hopefully be made unnecessary by an internals word set
that abstracts implementation details away completely.

doc->code-address
doc->does-code
doc-code-address!
doc-does-code!
doc-does-handler!
doc-/does-handler


@node ANS conformance, Model, Words, Top
@chapter ANS conformance

To the best of our knowledge, gforth is an

ANS Forth System
@itemize
@item providing the Core Extensions word set
@item providing the Block word set
@item providing the Block Extensions word set
@item providing the Double-Number word set
@item providing the Double-Number Extensions word set
@item providing the Exception word set
@item providing the Exception Extensions word set
@item providing the Facility word set
@item providing @code{MS} and @code{TIME&DATE} from the Facility Extensions word set
@item providing the File Access word set
@item providing the File Access Extensions word set
@item providing the Floating-Point word set
@item providing the Floating-Point Extensions word set
@item providing the Locals word set
@item providing the Locals Extensions word set
@item providing the Memory-Allocation word set
@item providing the Memory-Allocation Extensions word set (that one's easy)
@item providing the Programming-Tools word set
@item providing @code{AHEAD}, @code{BYE}, @code{CS-PICK}, @code{CS-ROLL}, @code{STATE}, @code{[ELSE]}, @code{[IF]}, @code{[THEN]} from the Programming-Tools Extensions word set
@item providing the Search-Order word set
@item providing the Search-Order Extensions word set
@item providing the String word set
@item providing the String Extensions word set (another easy one)
@end itemize

In addition,
ANS Forth systems are required to document certain
implementation choices. This chapter tries to meet these
requirements. In many cases it gives a way to ask the system for the
information instead of providing the information directly, in
particular if the information depends on the processor, the operating
system or the installation options chosen, or if it is likely to
change during the maintenance of gforth.

@comment The framework for the rest has been taken from pfe.

@menu
* The Core Words::
* The optional Block word set::
* The optional Double Number word set::
* The optional Exception word set::
* The optional Facility word set::
* The optional File-Access word set::
* The optional Floating-Point word set::
* The optional Locals word set::
* The optional Memory-Allocation word set::
* The optional Programming-Tools word set::
* The optional Search-Order word set::
@end menu


@c =====================================================================
@node The Core Words, The optional Block word set, ANS conformance, ANS conformance
@comment  node-name,  next,  previous,  up
@section The Core Words
@c =====================================================================

@menu
* core-idef::     Implementation Defined Options
* core-ambcond::  Ambiguous Conditions
* core-other::    Other System Documentation
@end menu

@c ---------------------------------------------------------------------
@node core-idef, core-ambcond, The Core Words, The Core Words
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item (Cell) aligned addresses:
processor-dependent.
Gforth's alignment words perform natural alignment
(e.g., an address aligned for a datum of size 8 is divisible by
8). Unaligned accesses usually result in a @code{-23 THROW}.

@item @code{EMIT} and non-graphic characters:
The character is output using the C library function (actually, macro)
@code{putchar}.

@item character editing of @code{ACCEPT} and @code{EXPECT}:
This is modeled on the GNU readline library (@pxref{Readline
Interaction, , Command Line Editing, readline, The GNU Readline
Library}) with Emacs-like key bindings. @kbd{Tab} deviates a little by
producing a full word completion every time you type it (instead of
producing the common prefix of all completions).

@item character set:
The character set of your computer and display device. Gforth is
8-bit-clean (but some other component in your system may make trouble).

@item Character-aligned address requirements:
installation-dependent. Currently a character is represented by a C
@code{unsigned char}; in the future we might switch to @code{wchar_t}
(comments on that are requested).

@item character-set extensions and matching of names:
Any character except 0 can be used in a name. Matching is
case-insensitive. The matching is performed using the C function
@code{strncasecmp}, whose behaviour is probably influenced by the
locale. E.g., the @code{C} locale does not know about accents and
umlauts, so they are matched case-sensitively in that locale. For
portability reasons it is best to write programs such that they work in
the @code{C} locale.
Then one can use libraries written by a Polish
programmer (who might use words containing ISO Latin-2 encoded
characters) and by a French programmer (ISO Latin-1) in the same program
(of course, @code{WORDS} will produce funny results for some of the
words (which ones depends on the font you are using)). Also, the locale
you prefer may not be available in other operating systems. Hopefully,
Unicode will solve these problems one day.

@item conditions under which control characters match a space delimiter:
If @code{WORD} is called with the space character as a delimiter, all
white-space characters (as identified by the C macro @code{isspace()})
are delimiters. @code{PARSE}, on the other hand, treats space like other
delimiters. @code{PARSE-WORD} treats space like @code{WORD}, but behaves
like @code{PARSE} otherwise. @code{(NAME)}, which is used by the outer
interpreter (aka text interpreter) by default, treats all white-space
characters as delimiters.

@item format of the control flow stack:
The data stack is used as control flow stack. The size of a control flow
stack item in cells is given by the constant @code{cs-item-size}. At the
time of this writing, an item consists of a (pointer to a) locals list
(third), an address in the code (second), and a tag for identifying the
item (TOS). The following tags are used: @code{defstart},
@code{live-orig}, @code{dead-orig}, @code{dest}, @code{do-dest},
@code{scopestart}.

@item conversion of digits > 35:
The characters @code{[\]^_'} are the digits with the decimal value
36@minus{}41. There is no way to input many of the larger digits.

@item display after input terminates in @code{ACCEPT} and @code{EXPECT}:
The cursor is moved to the end of the entered string. If the input is
terminated using the @kbd{Return} key, a space is typed.

@item exception abort sequence of @code{ABORT"}:
The error string is stored into the variable @code{"error} and a
@code{-2 throw} is performed.

@item input line terminator:
For interactive input, @kbd{C-m} and @kbd{C-j} terminate lines. One of
these characters is typically produced when you type the @kbd{Enter} or
@kbd{Return} key.

@item maximum size of a counted string:
@code{s" /counted-string" environment? drop .}. Currently 255 characters
on all ports, but this may change.

@item maximum size of a parsed string:
Given by the constant @code{/line}. Currently 255 characters.

@item maximum size of a definition name, in characters:
31

@item maximum string length for @code{ENVIRONMENT?}, in characters:
31

@item method of selecting the user input device:
The user input device is the standard input. There is currently no way to
change it from within gforth. However, the input can typically be
redirected in the command line that starts gforth.

@item method of selecting the user output device:
The user output device is the standard output. It cannot be redirected
from within gforth, but typically from the command line that starts
gforth. Gforth uses buffered output, so output on a terminal does not
become visible before the next newline or buffer overflow. Output on
non-terminals is invisible until the buffer overflows.

@item methods of dictionary compilation:
What are we expected to document here?

@item number of bits in one address unit:
@code{s" address-units-bits" environment? drop .}. 8 in all current
ports.

@item number representation and arithmetic:
Processor-dependent. Binary two's complement on all current ports.

@item ranges for integer types:
Installation-dependent.
Make environmental queries for @code{MAX-N},
@code{MAX-U}, @code{MAX-D} and @code{MAX-UD}. The lower bound for
unsigned (and positive) types is 0. The lower bound for signed types on
two's complement and one's complement machines can be computed
by adding 1 to the upper bound.

@item read-only data space regions:
The whole Forth data space is writable.

@item size of buffer at @code{WORD}:
@code{PAD HERE - .}. 104 characters on 32-bit machines. The buffer is
shared with the pictured numeric output string. If overwriting
@code{PAD} is acceptable, it is as large as the remaining dictionary
space, although only as much can be sensibly used as fits in a counted
string.

@item size of one cell in address units:
@code{1 cells .}.

@item size of one character in address units:
@code{1 chars .}. 1 on all current ports.

@item size of the keyboard terminal buffer:
Varies. You can determine the size at a specific time using @code{lp@@
tib - .}. It is shared with the locals stack and TIBs of files that
include the current file. You can change the amount of space for TIBs
and locals stack at gforth startup with the command line option
@code{-l}.

@item size of the pictured numeric output buffer:
@code{PAD HERE - .}. 104 characters on 32-bit machines. The buffer is
shared with @code{WORD}.

@item size of the scratch area returned by @code{PAD}:
The remainder of dictionary space. You can even use the unused part of
the data stack space. The current size can be computed with @code{sp@@
pad - .}.

@item system case-sensitivity characteristics:
Dictionary searches are case-insensitive. However, as explained above
under @i{character-set extensions}, the matching for non-ASCII
characters is determined by the locale you are using.
In the default
@code{C} locale all non-ASCII characters are matched case-sensitively.

@item system prompt:
@code{ ok} in interpret state, @code{ compiled} in compile state.

@item division rounding:
installation-dependent. @code{s" floored" environment? drop .}. We leave
the choice to gcc (what to use for @code{/}) and to you (whether to use
@code{fm/mod}, @code{sm/rem} or simply @code{/}).

@item values of @code{STATE} when true:
-1.

@item values returned after arithmetic overflow:
On two's complement machines, arithmetic is performed modulo
2**bits-per-cell for single arithmetic and 4**bits-per-cell for double
arithmetic (with appropriate mapping for signed types). Division by zero
typically results in a @code{-55 throw} (floating-point unidentified
fault), although a @code{-10 throw} (divide by zero) would be more
appropriate.

@item whether the current definition can be found after @t{DOES>}:
No.

@end table

@c ---------------------------------------------------------------------
@node core-ambcond, core-other, core-idef, The Core Words
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item a name is neither a word nor a number:
@code{-13 throw} (Undefined word)

@item a definition name exceeds the maximum length allowed:
@code{-19 throw} (Word name too long)

@item addressing a region not inside the various data spaces of the Forth system:
The stacks, code space and name space are accessible. Machine code space is
typically readable. Accessing other addresses gives results dependent on
the operating system. On decent systems: @code{-9 throw} (Invalid memory
address).
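
Since most of these conditions are reported as throw codes, a program
can intercept a specific one with @code{catch}; a minimal sketch (the
word name and the handling policy are our own, not part of the system):

@example
: safe-evaluate ( c-addr u -- )
  ['] evaluate catch  \ 0 on success, the throw code otherwise
  dup -13 = if        \ -13: Undefined word (see above)
    drop 2drop        \ discard the code and the two cells left by catch
    ." undefined word ignored" cr exit
  then
  throw ;             \ 0 throw is a no-op; other codes are passed on
@end example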

@item argument type incompatible with parameter:
This is usually not caught. Some words perform checks, e.g., the control
flow words, and issue an @code{ABORT"} or @code{-12 THROW} (Argument type
mismatch).

@item attempting to obtain the execution token of a word with undefined execution semantics:
You get an execution token representing the compilation semantics
instead.

@item dividing by zero:
typically results in a @code{-55 throw} (floating-point unidentified
fault), although a @code{-10 throw} (divide by zero) would be more
appropriate.

@item insufficient data stack or return stack space:
Not checked. This typically results in mysterious illegal memory
accesses, producing @code{-9 throw} (Invalid memory address) or
@code{-23 throw} (Address alignment exception).

@item insufficient space for loop control parameters:
like other return stack overflows.

@item insufficient space in the dictionary:
Not checked. Similar results as stack overflows. However, typically the
error appears at a different place when one inserts or removes code.

@item interpreting a word with undefined interpretation semantics:
For some words, we defined interpretation semantics. For the others:
@code{-14 throw} (Interpreting a compile-only word). Note that this is
checked only by the outer (aka text) interpreter; if the word is
@code{execute}d in some other way, it will typically perform its
compilation semantics even in interpret state. (We could change @code{'}
and relatives not to give the xt of such words, but we think that would
be too restrictive.)

@item modifying the contents of the input buffer or a string literal:
These are located in writable memory and can be modified.

@item overflow of the pictured numeric output string:
Not checked.

@item parsed string overflow:
@code{PARSE} cannot overflow. @code{WORD} does not check for overflow.

@item producing a result out of range:
On two's complement machines, arithmetic is performed modulo
2**bits-per-cell for single arithmetic and 4**bits-per-cell for double
arithmetic (with appropriate mapping for signed types). Division by zero
typically results in a @code{-55 throw} (floating-point unidentified
fault), although a @code{-10 throw} (divide by zero) would be more
appropriate. @code{convert} and @code{>number} currently overflow
silently.

@item reading from an empty data or return stack:
The data stack is checked by the outer (aka text) interpreter after
every word executed. If it has underflowed, a @code{-4 throw} (Stack
underflow) is performed. Apart from that, the stacks are not checked and
underflows can result in similar behaviour as overflows (of adjacent
stacks).

@item unexpected end of the input buffer, resulting in an attempt to use a zero-length string as a name:
@code{Create} and its descendants perform a @code{-16 throw} (Attempt to
use zero-length string as a name). Words like @code{'} probably will not
find what they search. Note that it is possible to create zero-length
names with @code{nextname} (should it not?).

@item @code{>IN} greater than input buffer:
The next invocation of a parsing word returns a string with length 0.

@item @code{RECURSE} appears after @code{DOES>}:
Compiles a recursive call to the defining word, not to the defined word.

@item argument input source different than current input source for @code{RESTORE-INPUT}:
!!???If the argument input source is a valid input source then it gets
restored. Otherwise causes a @code{-12 THROW} which, unless caught, issues
the message "argument type mismatch" and aborts.

@item data space containing definitions gets de-allocated:
Deallocation with @code{allot} is not checked. This typically results in
memory access faults or execution of illegal instructions.

@item data space read/write with incorrect alignment:
Processor-dependent. Typically results in a @code{-23 throw} (Address
alignment exception). Under Linux on a 486 or later processor with
alignment turned on, incorrect alignment results in a @code{-9 throw}
(Invalid memory address). There are reportedly some processors with
alignment restrictions that do not report them.

@item data space pointer not properly aligned, @code{,}, @code{C,}:
Like other alignment errors.

@item less than u+2 stack items (@code{PICK} and @code{ROLL}):
Not checked. May cause an illegal memory access.

@item loop control parameters not available:
Not checked. The counted loop words simply assume that the top return
stack items are loop control parameters and behave accordingly.

@item most recent definition does not have a name (@code{IMMEDIATE}):
@code{abort" last word was headerless"}.

@item name not defined by @code{VALUE} used by @code{TO}:
@code{-32 throw} (Invalid name argument)

@item name not found (@code{'}, @code{POSTPONE}, @code{[']}, @code{[COMPILE]}):
@code{-13 throw} (Undefined word)

@item parameters are not of the same type (@code{DO}, @code{?DO}, @code{WITHIN}):
Gforth behaves as if they were of the same type. I.e., you can predict
the behaviour by interpreting all parameters as, e.g., signed.

@item @code{POSTPONE} or @code{[COMPILE]} applied to @code{TO}:
Assume @code{: X POSTPONE TO ; IMMEDIATE}. @code{X} is equivalent to
@code{TO}.

@item String longer than a counted string returned by @code{WORD}:
Not checked.
The string will be ok, but the count will, of course,
contain only the least significant bits of the length.

@item u greater than or equal to the number of bits in a cell (@code{LSHIFT}, @code{RSHIFT}):
Processor-dependent. Typical behaviours are returning 0 and using only
the low bits of the shift count.

@item word not defined via @code{CREATE}:
@code{>BODY} produces the PFA of the word no matter how it was defined.

@code{DOES>} changes the execution semantics of the last defined word no
matter how it was defined. E.g., @code{CONSTANT DOES>} is equivalent to
@code{CREATE , DOES>}.

@item words improperly used outside @code{<#} and @code{#>}:
Not checked. As usual, you can expect memory faults.

@end table


@c ---------------------------------------------------------------------
@node core-other,  , core-ambcond, The Core Words
@subsection Other system documentation
@c ---------------------------------------------------------------------

@table @i

@item nonstandard words using @code{PAD}:
None.

@item operator's terminal facilities available:
!!??

@item program data space available:
@code{sp@@ here - .} gives the space remaining for dictionary and data
stack together.

@item return stack space available:
!!??

@item stack space available:
@code{sp@@ here - .} gives the space remaining for dictionary and data
stack together.

@item system dictionary space required, in address units:
Type @code{here forthstart - .} after startup. At the time of this
writing, this gives 70108 (bytes) on a 32-bit system.
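
The phrases given in this section can be combined into a small
reporting word; a sketch (the name @code{.sysinfo} is our own):

@example
: .sysinfo ( -- )
  ." dictionary used (address units): " here forthstart - . cr
  ." dictionary and data stack left:  " sp@@ here - . cr
  ." bits per address unit: "
  s" address-units-bits" environment? drop . cr ;
@end example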
@end table


@c =====================================================================
@node The optional Block word set, The optional Double Number word set, The Core Words, ANS conformance
@section The optional Block word set
@c =====================================================================

@menu
* block-idef::     Implementation Defined Options
* block-ambcond::  Ambiguous Conditions
* block-other::    Other System Documentation
@end menu


@c ---------------------------------------------------------------------
@node block-idef, block-ambcond, The optional Block word set, The optional Block word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item the format for display by @code{LIST}:
First the screen number is displayed, then 16 lines of 64 characters,
each line preceded by the line number.

@item the length of a line affected by @code{\}:
64 characters.
@end table


@c ---------------------------------------------------------------------
@node block-ambcond, block-other, block-idef, The optional Block word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item correct block read was not possible:
Typically results in a @code{throw} of some OS-derived value (between
-512 and -2048). If the blocks file was just not long enough, blanks are
supplied for the missing portion.

@item I/O exception in block transfer:
Typically results in a @code{throw} of some OS-derived value (between
-512 and -2048).

@item invalid block number:
@code{-35 throw} (Invalid block number)

@item a program directly alters the contents of @code{BLK}:
The input stream is switched to that other block, at the same
position. If the storing to @code{BLK} happens when interpreting
non-block input, the system will get quite confused when the block ends.

@item no current block buffer for @code{UPDATE}:
@code{UPDATE} has no effect.

@end table


@c ---------------------------------------------------------------------
@node block-other,  , block-ambcond, The optional Block word set
@subsection Other system documentation
@c ---------------------------------------------------------------------

@table @i

@item any restrictions a multiprogramming system places on the use of buffer addresses:
No restrictions (yet).

@item the number of blocks available for source and data:
depends on your disk space.
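
For reference, typical use of the block words looks like this; a sketch
(it writes to block 1 of the blocks file, so try it only with a scratch
@file{blocks.fb}):

@example
1 block                        \ get a buffer containing block 1
s" ( scratch )" rot swap move  \ overwrite the start of the buffer
update                         \ mark the buffer as modified
flush                          \ write modified buffers back to the file
@end example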

@end table


@c =====================================================================
@node The optional Double Number word set, The optional Exception word set, The optional Block word set, ANS conformance
@section The optional Double Number word set
@c =====================================================================

@menu
* double-ambcond::  Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node double-ambcond,  , The optional Double Number word set, The optional Double Number word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item @var{d} outside of range of @var{n} in @code{D>S}:
The least significant cell of @var{d} is produced.

@end table


@c =====================================================================
@node The optional Exception word set, The optional Facility word set, The optional Double Number word set, ANS conformance
@section The optional Exception word set
@c =====================================================================

@menu
* exception-idef::  Implementation Defined Options
@end menu


@c ---------------------------------------------------------------------
@node exception-idef,  , The optional Exception word set, The optional Exception word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i
@item @code{THROW}-codes used in the system:
The codes -256@minus{}-511 are used for reporting signals (see
@file{errore.fs}).
The codes -512@minus{}-2047 are used for OS errors
(for file and memory allocation operations). The mapping from OS error
numbers to throw codes is -512@minus{}@var{errno}. One side effect of
this mapping is that undefined OS errors produce a message with a
strange number; e.g., @code{-1000 THROW} results in @code{Unknown error
488} on my system.
@end table

@c =====================================================================
@node The optional Facility word set, The optional File-Access word set, The optional Exception word set, ANS conformance
@section The optional Facility word set
@c =====================================================================

@menu
* facility-idef::     Implementation Defined Options
* facility-ambcond::  Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node facility-idef, facility-ambcond, The optional Facility word set, The optional Facility word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item encoding of keyboard events (@code{EKEY}):
Not yet implemented.

@item duration of a system clock tick:
System-dependent. With respect to @code{MS}, the time is specified in
microseconds. How well the OS and the hardware implement this is
another question.

@item repeatability to be expected from the execution of @code{MS}:
System-dependent. On Unix, a lot depends on load. If the system is
lightly loaded, and the delay is short enough that gforth does not get
swapped out, the performance should be acceptable. Under MS-DOS and
other single-tasking systems, it should be good.

@end table


@c ---------------------------------------------------------------------
@node facility-ambcond,  , facility-idef, The optional Facility word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item @code{AT-XY} can't be performed on user output device:
Largely terminal-dependent. No range checks are done on the arguments.
No errors are reported. You may see some garbage appearing, or you may
see simply nothing happen.

@end table


@c =====================================================================
@node The optional File-Access word set, The optional Floating-Point word set, The optional Facility word set, ANS conformance
@section The optional File-Access word set
@c =====================================================================

@menu
* file-idef::     Implementation Defined Options
* file-ambcond::  Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node file-idef, file-ambcond, The optional File-Access word set, The optional File-Access word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item File access methods used:
@code{R/O}, @code{R/W} and @code{BIN} work as you would
expect. @code{W/O} translates into the C file opening mode @code{w} (or
@code{wb}): The file is cleared if it exists, and created if it does
not (both with @code{open-file} and @code{create-file}). Under Unix
@code{create-file} creates a file with 666 permissions modified by your
umask.

@item file exceptions:
The file words do not raise exceptions (except, perhaps, memory access
faults when you pass illegal addresses or file-ids).

@item file line terminator:
System-dependent. Gforth uses C's newline character as line
terminator. What the actual character code(s) of this are is
system-dependent.

@item file name format:
System-dependent. Gforth just uses the file name format of your OS.

@item information returned by @code{FILE-STATUS}:
@code{FILE-STATUS} returns the most powerful file access mode allowed
for the file: either @code{R/O}, @code{W/O} or @code{R/W}. If the file
cannot be accessed, @code{R/O BIN} is returned. @code{BIN} is applicable
along with the returned mode.

@item input file state after an exception when including source:
All files that are left via the exception are closed.

@item @var{ior} values and meaning:
The @var{ior}s returned by the file and memory allocation words are
intended as throw codes. They typically are in the range
-512@minus{}-2047 of OS errors. The mapping from OS error numbers to
@var{ior}s is -512@minus{}@var{errno}.

@item maximum depth of file input nesting:
limited by the amount of return stack, locals/TIB stack, and the number
of open files available. This should not give you trouble.

@item maximum size of input line:
@code{/line}. Currently 255.

@item methods of mapping block ranges to files:
Currently, the block words automatically access the file
@file{blocks.fb} in the current working directory. More sophisticated
methods could be implemented if there is demand (and a volunteer).

@item number of string buffers provided by @code{S"}:
1

@item size of string buffer used by @code{S"}:
@code{/line}. Currently 255.
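
Since the @var{ior}s described above are intended as throw codes, the
usual idiom is to @code{throw} them right away; a sketch (the word name
is our own):

@example
: .file-size ( c-addr u -- )
  r/o open-file throw     \ open, aborting on a non-zero ior
  dup file-size throw d.  \ print the size as a double number
  close-file throw ;
@end example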
@end table

@c ---------------------------------------------------------------------
@node file-ambcond, , file-idef, The optional File-Access word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item attempting to position a file outside its boundaries:
@code{REPOSITION-FILE} is performed as usual: afterwards,
@code{FILE-POSITION} returns the value given to @code{REPOSITION-FILE}.

@item attempting to read from file positions not yet written:
End-of-file, i.e., zero characters are read and no error is reported.

@item @var{file-id} is invalid (@code{INCLUDE-FILE}):
An appropriate exception may be thrown, but a memory fault or other
problem is more probable.

@item I/O exception reading or closing @var{file-id} (@code{include-file}, @code{included}):
The @var{ior} produced by the operation that discovered the problem is
thrown.

@item named file cannot be opened (@code{included}):
The @var{ior} produced by @code{open-file} is thrown.

@item requesting an unmapped block number:
There are no unmapped legal block numbers. On some operating systems,
writing a block with a large number may overflow the file system and
result in an error message.

@item using @code{source-id} when @code{blk} is non-zero:
@code{source-id} performs its function. Typically it will give the id of
the source which loaded the block. (Better ideas?)
@end table


@c =====================================================================
@node The optional Floating-Point word set, The optional Locals word set, The optional File-Access word set, ANS conformance
@section The optional Floating-Point word set
@c =====================================================================

@menu
* floating-idef::               Implementation Defined Options
* floating-ambcond::            Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node floating-idef, floating-ambcond, The optional Floating-Point word set, The optional Floating-Point word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item format and range of floating point numbers:
System-dependent; the @code{double} type of C.

@item results of @code{REPRESENT} when @var{float} is out of range:
System-dependent; @code{REPRESENT} is implemented using the C library
function @code{ecvt()} and inherits its behaviour in this respect.

@item rounding or truncation of floating-point numbers:
What's the question?!!

@item size of floating-point stack:
@code{s" FLOATING-STACK" environment? drop .}. Can be changed at startup
with the command-line option @code{-f}.

@item width of floating-point stack:
@code{1 floats}.
@end table


@c ---------------------------------------------------------------------
@node floating-ambcond, , floating-idef, The optional Floating-Point word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item @code{df@@} or @code{df!} used with an address that is not double-float aligned:
System-dependent. Typically results in an alignment fault like other
alignment violations.

@item @code{f@@} or @code{f!} used with an address that is not float aligned:
System-dependent. Typically results in an alignment fault like other
alignment violations.

@item Floating-point result out of range:
System-dependent. Can result in a @code{-55 THROW} (Floating-point
unidentified fault), or can produce a special value representing, e.g.,
Infinity.

@item @code{sf@@} or @code{sf!} used with an address that is not single-float aligned:
System-dependent. Typically results in an alignment fault like other
alignment violations.

@item @code{BASE} is not decimal (@code{REPRESENT}, @code{F.}, @code{FE.}, @code{FS.}):
The floating-point number is converted into decimal nonetheless.

@item Both arguments are equal to zero (@code{FATAN2}):
System-dependent. @code{FATAN2} is implemented using the C library
function @code{atan2()}.

@item Using @code{ftan} on an argument @var{r1} where cos(@var{r1}) is zero:
System-dependent. Anyway, typically the cos of @var{r1} will not be zero
because of small errors, and the tan will be a very large (or very small)
but finite number.
@item @var{d} cannot be represented precisely as a float in @code{D>F}:
The result is rounded to the nearest float.

@item dividing by zero:
@code{-55 throw} (Floating-point unidentified fault)

@item exponent too big for conversion (@code{DF!}, @code{DF@@}, @code{SF!}, @code{SF@@}):
System-dependent. On IEEE-FP based systems the number is converted into
an infinity.

@item @var{float}<1 (@code{facosh}):
@code{-55 throw} (Floating-point unidentified fault)

@item @var{float}=<-1 (@code{flnp1}):
@code{-55 throw} (Floating-point unidentified fault). On IEEE-FP systems
negative infinity is typically produced for @var{float}=-1.

@item @var{float}=<0 (@code{fln}, @code{flog}):
@code{-55 throw} (Floating-point unidentified fault). On IEEE-FP systems
negative infinity is typically produced for @var{float}=0.

@item @var{float}<0 (@code{fasinh}, @code{fsqrt}):
@code{-55 throw} (Floating-point unidentified fault). @code{fasinh}
produces values for these inputs on my Linux box (bug in the C library?).

@item |@var{float}|>1 (@code{facos}, @code{fasin}, @code{fatanh}):
@code{-55 throw} (Floating-point unidentified fault).

@item integer part of float cannot be represented by @var{d} in @code{f>d}:
@code{-55 throw} (Floating-point unidentified fault).

@item string larger than pictured numeric output area (@code{f.}, @code{fe.}, @code{fs.}):
This does not happen.
@end table



@c =====================================================================
@node The optional Locals word set, The optional Memory-Allocation word set, The optional Floating-Point word set, ANS conformance
@section The optional Locals word set
@c =====================================================================

@menu
* locals-idef::                 Implementation Defined Options
* locals-ambcond::              Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node locals-idef, locals-ambcond, The optional Locals word set, The optional Locals word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item maximum number of locals in a definition:
@code{s" #locals" environment? drop .}. Currently 15. This is a lower
bound, e.g., on a 32-bit machine there can be 41 locals of up to 8
characters. The number of locals in a definition is bounded by the size
of locals-buffer, which contains the names of the locals.

@end table


@c ---------------------------------------------------------------------
@node locals-ambcond, , locals-idef, The optional Locals word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item executing a named local in interpretation state:
@code{-14 throw} (Interpreting a compile-only word).
@item @var{name} not defined by @code{VALUE} or @code{(LOCAL)} (@code{TO}):
@code{-32 throw} (Invalid name argument)

@end table


@c =====================================================================
@node The optional Memory-Allocation word set, The optional Programming-Tools word set, The optional Locals word set, ANS conformance
@section The optional Memory-Allocation word set
@c =====================================================================

@menu
* memory-idef::                 Implementation Defined Options
@end menu


@c ---------------------------------------------------------------------
@node memory-idef, , The optional Memory-Allocation word set, The optional Memory-Allocation word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item values and meaning of @var{ior}:
The @var{ior}s returned by the file and memory allocation words are
intended as throw codes. They typically are in the range
-512@minus{}-2047 of OS errors. The mapping from OS error numbers to
@var{ior}s is -512@minus{}@var{errno}.
@end table

@c =====================================================================
@node The optional Programming-Tools word set, The optional Search-Order word set, The optional Memory-Allocation word set, ANS conformance
@section The optional Programming-Tools word set
@c =====================================================================

@menu
* programming-idef::            Implementation Defined Options
* programming-ambcond::         Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node programming-idef, programming-ambcond, The optional Programming-Tools word set, The optional Programming-Tools word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item ending sequence for input following @code{;code} and @code{code}:
Not implemented (yet).

@item manner of processing input following @code{;code} and @code{code}:
Not implemented (yet).

@item search order capability for @code{EDITOR} and @code{ASSEMBLER}:
Not implemented (yet). If they were implemented, they would use the
search order word set.

@item source and format of display by @code{SEE}:
The source for @code{see} is the intermediate code used by the inner
interpreter. The current @code{see} tries to output Forth source code
as well as possible.
@end table

@c ---------------------------------------------------------------------
@node programming-ambcond, , programming-idef, The optional Programming-Tools word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item deleting the compilation wordlist (@code{FORGET}):
Not implemented (yet).

@item fewer than @var{u}+1 items on the control flow stack (@code{CS-PICK}, @code{CS-ROLL}):
This typically results in an @code{abort"} with a descriptive error
message (it may change into a @code{-22 throw} (Control structure mismatch)
in the future). You may also get a memory access error. If you are
unlucky, this ambiguous condition is not caught.

@item @var{name} can't be found (@code{forget}):
Not implemented (yet).

@item @var{name} not defined via @code{CREATE}:
@code{;code} is not implemented (yet). If it were, it would behave like
@code{DOES>} in this respect, i.e., change the execution semantics of
the last defined word no matter how it was defined.

@item @code{POSTPONE} applied to @code{[IF]}:
After defining @code{: X POSTPONE [IF] ; IMMEDIATE}, @code{X} is
equivalent to @code{[IF]}.

@item reaching the end of the input source before matching @code{[ELSE]} or @code{[THEN]}:
Continue in the same state of conditional compilation in the next outer
input source. Currently there is no warning to the user about this.

@item removing a needed definition (@code{FORGET}):
Not implemented (yet).
@end table


@c =====================================================================
@node The optional Search-Order word set, , The optional Programming-Tools word set, ANS conformance
@section The optional Search-Order word set
@c =====================================================================

@menu
* search-idef::                 Implementation Defined Options
* search-ambcond::              Ambiguous Conditions
@end menu


@c ---------------------------------------------------------------------
@node search-idef, search-ambcond, The optional Search-Order word set, The optional Search-Order word set
@subsection Implementation Defined Options
@c ---------------------------------------------------------------------

@table @i

@item maximum number of word lists in search order:
@code{s" wordlists" environment? drop .}. Currently 16.

@item minimum search order:
@code{root root}.

@end table

@c ---------------------------------------------------------------------
@node search-ambcond, , search-idef, The optional Search-Order word set
@subsection Ambiguous conditions
@c ---------------------------------------------------------------------

@table @i

@item changing the compilation wordlist (during compilation):
The definition is put into the wordlist that is the compilation wordlist
when @code{REVEAL} is executed (by @code{;}, @code{DOES>},
@code{RECURSIVE}, etc.).

@item search order empty (@code{previous}):
@code{abort" Vocstack empty"}.

@item too many word lists in search order (@code{also}):
@code{abort" Vocstack full"}.
@end table


@node Model, Emacs and GForth, ANS conformance, Top
@chapter Model

@node Emacs and GForth, Internals, Model, Top
@chapter Emacs and GForth

GForth comes with @file{gforth.el}, an improved version of
@file{forth.el} by Goran Rydqvist (included in the TILE package). The
improvements are a better (but still not perfect) handling of
indentation. I have also added comment paragraph filling (@kbd{M-q}),
commenting (@kbd{C-x \}) and uncommenting (@kbd{C-u C-x \}) of regions, and
removing debugging tracers (@kbd{C-x ~}, @pxref{Debugging}). I left the
stuff I do not use alone, even though some of it only makes sense for
TILE. To get a description of these features, enter Forth mode and type
@kbd{C-h m}.

In addition, GForth supports Emacs quite well: the source code locations
given in error messages, debugging output (from @code{~~}) and failed
assertion messages are in the right format for Emacs' compilation mode
(@pxref{Compilation, , Running Compilations under Emacs, emacs, Emacs
Manual}), so the source location corresponding to an error or other
message is only a few keystrokes away (@kbd{C-x `} for the next error,
@kbd{C-c C-c} for the error under the cursor).

Also, if you @code{include} @file{etags.fs}, a new @file{TAGS} file
(@pxref{Tags, , Tags Tables, emacs, Emacs Manual}) will be produced that
contains the definitions of all words defined afterwards. You can then
find the source for a word using @kbd{M-.}. Note that Emacs can use
several tags files at the same time (e.g., one for the gforth sources
and one for your program).
To get all these benefits, add the following lines to your @file{.emacs}
file:

@example
(autoload 'forth-mode "gforth.el")
(setq auto-mode-alist (cons '("\\.fs\\'" . forth-mode) auto-mode-alist))
@end example

@node Internals, Bugs, Emacs and GForth, Top
@chapter Internals

Reading this section is not necessary for programming with gforth. It
should be helpful for finding your way in the gforth sources.

@menu
* Portability::
* Threading::
* Primitives::
* System Architecture::
@end menu

@node Portability, Threading, Internals, Internals
@section Portability

One of the main goals of the effort is availability across a wide range
of personal machines. fig-Forth, and, to a lesser extent, F83, achieved
this goal by manually coding the engine in assembly language for several
then-popular processors. This approach is very labor-intensive and the
results are short-lived due to progress in computer architecture.

Others have avoided this problem by coding in C, e.g., Mitch Bradley
(cforth), Mikael Patel (TILE) and Dirk Zoller (pfe). This approach is
particularly popular for UNIX-based Forths due to the large variety of
architectures of UNIX machines. Unfortunately an implementation in C
does not mix well with the goals of efficiency and with using
traditional techniques: indirect or direct threading cannot be expressed
in C, and switch threading, the fastest technique available in C, is
significantly slower. Another problem with C is that it is very
cumbersome to express double integer arithmetic.
Fortunately, there is a portable language that does not have these
limitations: GNU C, the version of C processed by the GNU C compiler
(@pxref{C Extensions, , Extensions to the C Language Family, gcc.info,
GNU C Manual}). Its labels-as-values feature (@pxref{Labels as Values, ,
Labels as Values, gcc.info, GNU C Manual}) makes direct and indirect
threading possible; its @code{long long} type (@pxref{Long Long, ,
Double-Word Integers, gcc.info, GNU C Manual}) corresponds to Forth's
double numbers. GNU C is available for free on all important (and many
unimportant) UNIX machines, VMS, 80386s running MS-DOS, the Amiga, and
the Atari ST, so a Forth written in GNU C can run on all these
machines@footnote{Due to Apple's look-and-feel lawsuit it is not
available on the Mac (@pxref{Boycott, , Protect Your Freedom---Fight
``Look And Feel'', gcc.info, GNU C Manual}).}.

Writing in a portable language has the reputation of producing code that
is slower than assembly. For our Forth engine we repeatedly looked at
the code produced by the compiler and eliminated most compiler-induced
inefficiencies by appropriate changes in the source code.

However, register allocation cannot be portably influenced by the
programmer, leading to some inefficiencies on register-starved
machines. We use explicit register declarations (@pxref{Explicit Reg
Vars, , Variables in Specified Registers, gcc.info, GNU C Manual}) to
improve the speed on some machines. They are turned on by using the
@code{gcc} switch @code{-DFORCE_REG}. Unfortunately, this feature not
only depends on the machine, but also on the compiler version: on some
machines some compiler versions produce incorrect code when certain
explicit register declarations are used. So by default
@code{-DFORCE_REG} is not used.
@node Threading, Primitives, Portability, Internals
@section Threading

GNU C's labels-as-values extension (available since @code{gcc-2.0},
@pxref{Labels as Values, , Labels as Values, gcc.info, GNU C Manual})
makes it possible to take the address of @var{label} by writing
@code{&&@var{label}}. This address can then be used in a statement like
@code{goto *@var{address}}. I.e., @code{goto *&&x} is the same as
@code{goto x}.

With this feature an indirect threaded NEXT looks like:
@example
cfa = *ip++;
ca = *cfa;
goto *ca;
@end example
For those unfamiliar with the names: @code{ip} is the Forth instruction
pointer; the @code{cfa} (code-field address) corresponds to ANS Forth's
execution token and points to the code field of the next word to be
executed; the @code{ca} (code address) fetched from there points to some
executable code, e.g., a primitive or the colon definition handler
@code{docol}.

Direct threading is even simpler:
@example
ca = *ip++;
goto *ca;
@end example

Of course we have packaged the whole thing neatly in macros called
@code{NEXT} and @code{NEXT1} (the part of NEXT after fetching the cfa).

@menu
* Scheduling::
* Direct or Indirect Threaded?::
* DOES>::
@end menu

@node Scheduling, Direct or Indirect Threaded?, Threading, Threading
@subsection Scheduling

There is a little complication: pipelined and superscalar processors,
i.e., RISC and some modern CISC machines, can process independent
instructions while waiting for the results of an instruction. The
compiler usually reorders (schedules) the instructions in a way that
achieves good usage of these delay slots.
However, on our first tries
the compiler did not do well on scheduling primitives. E.g., for
@code{+} implemented as
@example
n=sp[0]+sp[1];
sp++;
sp[0]=n;
NEXT;
@end example
the NEXT comes strictly after the other code, i.e., there is nearly no
scheduling. After a little thought the problem becomes clear: the
compiler cannot know that sp and ip point to different addresses (and
the version of @code{gcc} we used would not know it even if it was
possible), so it could not move the load of the cfa above the store to
the TOS. Indeed the pointers could be the same, if code on or very near
the top of stack were executed. In the interest of speed we chose to
forbid this probably unused ``feature'' and helped the compiler in
scheduling: NEXT is divided into the loading part (@code{NEXT_P1}) and
the goto part (@code{NEXT_P2}). @code{+} now looks like:
@example
n=sp[0]+sp[1];
sp++;
NEXT_P1;
sp[0]=n;
NEXT_P2;
@end example
This can be scheduled optimally by the compiler.

This division can be turned off with the switch @code{-DCISC_NEXT}. This
switch is on by default on machines that do not profit from scheduling
(e.g., the 80386), in order to preserve registers.

@node Direct or Indirect Threaded?, DOES>, Scheduling, Threading
@subsection Direct or Indirect Threaded?

Both! After packaging the nasty details in macro definitions we
realized that we could switch between direct and indirect threading by
simply setting a compilation flag (@code{-DDIRECT_THREADED}) and
defining a few machine-specific macros for the direct-threading case.
On the Forth level we also offer access words that hide the
differences between the threading methods (@pxref{Threading Words}).
Indirect threading is implemented completely
machine-independently. Direct threading needs routines for creating
jumps to the executable code (e.g. to docol or dodoes). These routines
are inherently machine-dependent, but they do not amount to many source
lines. I.e., even porting direct threading to a new machine is a small
effort.

@node DOES>, , Direct or Indirect Threaded?, Threading
@subsection DOES>
One of the most complex parts of a Forth engine is @code{dodoes}, i.e.,
the chunk of code executed by every word defined by a
@code{CREATE}...@code{DOES>} pair. The main problem here is: how to find
the Forth code to be executed, i.e. the code after the @code{DOES>} (the
DOES-code)? There are two solutions:

In fig-Forth the code field points directly to the dodoes and the
DOES-code address is stored in the cell after the code address
(i.e. at cfa cell+). It may seem that this solution is illegal in
Forth-79 and all later standards, because in fig-Forth this address
lies in the body (which is illegal in these standards). However, by
making the code field larger for all words this solution becomes legal
again. We use this approach for the indirect threaded version. Leaving
a cell unused in most words is a bit wasteful, but on the machines we
are targeting this is hardly a problem. The other reason for having a
code field size of two cells is to avoid having different image files
for direct and indirect threaded systems (@pxref{System Architecture}).

The other approach is that the code field points or jumps to the cell
after @code{DOES}. In this variant there is a jump to @code{dodoes} at
this address.
@code{dodoes} can then get the DOES-code address by
computing the code address, i.e., the address of the jump to dodoes,
and adding the length of that jump field. A variant of this is to have a
call to @code{dodoes} after the @code{DOES>}; then the return address
(which can be found in the return register on RISCs) is the DOES-code
address. Since the two cells available in the code field are usually
used up by the jump to the code address in direct threading, we use
this approach for direct threading. We did not want to add another
cell to the code field.

@node Primitives, System Architecture, Threading, Internals
@section Primitives

@menu
* Automatic Generation::
* TOS Optimization::
* Produced code::
@end menu

@node Automatic Generation, TOS Optimization, Primitives, Primitives
@subsection Automatic Generation

Since the primitives are implemented in a portable language, there is no
longer any need to minimize the number of primitives. On the contrary,
having many primitives is an advantage: speed. In order to reduce the
number of errors in primitives and to make programming them easier, we
provide a tool, the primitive generator (@file{prims2x.fs}), that
automatically generates most (and sometimes all) of the C code for a
primitive from the stack effect notation. The source for a primitive
has the following form:

@format
@var{Forth-name} @var{stack-effect} @var{category} [@var{pronounc.}]
[@code{""}@var{glossary entry}@code{""}]
@var{C code}
[@code{:}
@var{Forth code}]
@end format

The items in brackets are optional. The category and glossary fields
are there for generating the documentation; the Forth code is there
for manual implementations on machines without GNU C.
E.g., the source
for the primitive @code{+} is:
@example
+    n1 n2 -- n    core    plus
n = n1+n2;
@end example

This looks like a specification, but in fact @code{n = n1+n2} is C
code. Our primitive generation tool extracts a lot of information from
the stack effect notations@footnote{We use a one-stack notation, even
though we have separate data and floating-point stacks; the separate
notation can be generated easily from the unified notation.}: the number
of items popped from and pushed on the stack, their type, and by what
name they are referred to in the C code. It then generates a C code
prelude and postlude for each primitive. The final C code for @code{+}
looks like this:

@example
I_plus: /* + ( n1 n2 -- n ) */  /* label, stack effect */
/*  */                          /* documentation */
@{
DEF_CA                  /* definition of variable ca (indirect threading) */
Cell n1;                /* definitions of variables */
Cell n2;
Cell n;
n1 = (Cell) sp[1];      /* input */
n2 = (Cell) TOS;
sp += 1;                /* stack adjustment */
NAME("+")               /* debugging output (with -DDEBUG) */
@{
n = n1+n2;              /* C code taken from the source */
@}
NEXT_P1;                /* NEXT part 1 */
TOS = (Cell)n;          /* output */
NEXT_P2;                /* NEXT part 2 */
@}
@end example

This looks long and inefficient, but the GNU C compiler optimizes quite
well and produces optimal code for @code{+} on, e.g., the R3000 and the
HP RISC machines: defining the @code{n}s does not produce any code, and
using them as intermediate storage also adds no cost.

There are also other optimizations that are not illustrated by this
example: assignments between simple variables are usually for free (copy
propagation).
If one of the stack items is not used by the primitive
(e.g. in @code{drop}), the compiler eliminates the load from the stack
(dead code elimination). On the other hand, there are some things that
the compiler does not do; these are therefore performed by
@file{prims2x.fs}: the compiler does not optimize code away that stores
a stack item to the place where it just came from (e.g., @code{over}).

While programming a primitive is usually easy, there are a few cases
where the programmer has to take the actions of the generator into
account, most notably @code{?dup}, but also words that do not (always)
fall through to NEXT.

@node TOS Optimization, Produced code, Automatic Generation, Primitives
@subsection TOS Optimization

An important optimization for stack machine emulators, e.g., Forth
engines, is keeping one or more of the top stack items in
registers. If a word has the stack effect @var{in1}...@var{inx} @code{--}
@var{out1}...@var{outy}, keeping the top @var{n} items in registers
@itemize
@item
is better than keeping @var{n-1} items, if @var{x>=n} and @var{y>=n},
due to fewer loads from and stores to the stack.
@item
is slower than keeping @var{n-1} items, if @var{x<>y} and @var{x<n} and
@var{y<n}, due to additional moves between registers.
@end itemize

In particular, keeping one item in a register is never a disadvantage,
if there are enough registers. Keeping two items in registers is a
disadvantage for frequent words like @code{?branch}, constants,
variables, literals and @code{i}. Therefore our generator only produces
code that keeps zero or one items in registers. The generated C code
covers both cases; the selection between these alternatives is made at
C-compile time using the switch @code{-DUSE_TOS}.
@code{TOS} in the C
code for @code{+} is just a simple variable name in the one-item case,
otherwise it is a macro that expands into @code{sp[0]}. Note that the
GNU C compiler tries to keep simple variables like @code{TOS} in
registers, and it usually succeeds, if there are enough registers.

The primitive generator performs the TOS optimization for the
floating-point stack, too (@code{-DUSE_FTOS}). For floating-point
operations the benefit of this optimization is even larger:
floating-point operations take quite long on most processors, but can be
performed in parallel with other operations as long as their results are
not used. If the FP-TOS is kept in a register, this works. If
it is kept on the stack, i.e., in memory, the store into memory has to
wait for the result of the floating-point operation, lengthening the
execution time of the primitive considerably.

The TOS optimization makes the automatic generation of primitives a
bit more complicated. Just replacing all occurrences of @code{sp[0]} by
@code{TOS} is not sufficient. There are some special cases to
consider:
@itemize
@item
In the case of @code{dup ( w -- w w )} the generator must not
eliminate the store to the original location of the item on the stack,
if the TOS optimization is turned on.
@item
Primitives with stack effects of the form @code{--}
@var{out1}...@var{outy} must store the TOS to the stack at the start.
Likewise, primitives with the stack effect @var{in1}...@var{inx} @code{--}
must load the TOS from the stack at the end. But for the null stack
effect @code{--} no stores or loads should be generated.
@end itemize

@node Produced code, , TOS Optimization, Primitives
@subsection Produced code

To see what assembly code is produced for the primitives on your machine
with your compiler and your flag settings, type @code{make engine.s} and
look at the resulting file @file{engine.s}.

@node System Architecture, , Primitives, Internals
@section System Architecture

Our Forth system consists not only of primitives, but also of
definitions written in Forth. Since the Forth compiler itself belongs
to those definitions, it is not possible to start the system with the
primitives and the Forth source alone. Therefore we provide the Forth
code as an image file in nearly executable form. At the start of the
system a C routine loads the image file into memory, sets up the
memory (stacks etc.) according to information in the image file, and
starts executing Forth code.

The image file format is a compromise between the goals of making it
easy to generate image files and making them portable. The easiest way
to generate an image file is to just generate a memory dump. However,
this kind of image file cannot be used on a different machine, or with
the next version of the engine on the same machine; it might not even
work with the same engine compiled by a different version of the C
compiler. We would like to have as few versions of the image file as
possible, because we do not want to distribute many versions of the
same image file, and we want to make it easy for users to use their image
files on many machines.
We currently need to create a different image
file for machines with different cell sizes and different byte order
(little- or big-endian)@footnote{We consider adding information to the
image file that enables the loader to change the byte order.}.

Forth code that is going to end up in a portable image file has to
comply with some restrictions: addresses have to be stored in memory with
special words (@code{A!}, @code{A,}, etc.) in order to make the code
relocatable. Cells, floats, etc., have to be stored at the natural
alignment boundaries@footnote{E.g., store floats (8 bytes) at an address
divisible by~8. This happens automatically in our system when you use
the ANS Forth alignment words.}, in order to avoid alignment faults on
machines with stricter alignment. The image file is produced by a
metacompiler (@file{cross.fs}).

So, unlike the image file of Mitch Bradley's @code{cforth}, our image
file is not directly executable, but has to undergo some manipulations
during loading. Address relocation is performed at image load time, not
at run time. The loader also has to replace tokens standing for
primitive calls with the appropriate code-field addresses (or code
addresses in the case of direct threading).

@node Bugs, Pedigree, Internals, Top
@chapter Bugs

@node Pedigree, Word Index, Bugs, Top
@chapter Pedigree

@node Word Index, Node Index, Pedigree, Top
@chapter Word Index

@node Node Index, , Word Index, Top
@chapter Node Index

@contents
@bye
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/Attic/gforth.ds?annotate=1.16;sortby=log;f=h;only_with_tag=MAIN
bambu-webhooks 2.0

Create webhooks and allow users to assign URLs to them.

About Bambu Webhooks

This package allows web apps to provide third-party integration via webhooks. You as the developer can trigger a webhook by name, and provide an interface whereby the user can manage the URL to post the webhook's data to. The package is imported as bambu_webhooks rather than bambu.webhooks.

Installation

Install the package via Pip:

	pip install bambu-webhooks

Add it to your INSTALLED_APPS list:

	INSTALLED_APPS = (
	    ...
	    'bambu_webhooks'
	)

Add bambu_webhooks.urls to your URLconf:

	urlpatterns = patterns('',
	    ...
	    url(r'^webhooks/', include('bambu_webhooks.urls')),
	)

Run manage.py syncdb or manage.py migrate to set up the database tables.

Basic usage

Register a webhook within your models.py file:

	from hashlib import md5
	import bambu_webhooks

	bambu_webhooks.site.register('webhook_name',
	    description = 'A description of the webhook'
	)

In the save() method for your model, trigger any webhooks that have receivers attached, thus posting the data to the user's specified URL:

	def save(self, *args, **kwargs):
	    ...
	    bambu_webhooks.send('webhook_name', self.author,
	        {
	            'id': self.pk,
	            'name': self.name
	        },
	        md5('testproject.myapp.mymodel:%d' % self.pk).hexdigest()
	    )

Better with Bootstrap

This package, like most in the Bambu toolset, is designed to work with Bambu Bootstrap, a collection of flexible templates designed for web apps based on the Twitter Bootstrap framework. It's not a package requirement, but it means the template structure and the context variables exposed by the views make a little more sense.

Todo

- Allow webhooks to be categorised and/or filtered
- Prepare for internationalisation
- Write tests

Documentation

Full documentation can be found at ReadTheDocs.

Questions or suggestions? Find me on Twitter (@iamsteadman) or visit my blog.

- Author: Steadman
- Package Index Owner: iamsteadman
- DOAP record: bambu-webhooks-2.0.xml
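The send() call above fans the payload out to whichever URLs users have assigned to the hook. The package's internals aren't shown on this page, so the sketch below is a rough, hypothetical model of that dispatch pattern; an in-memory registry and a pluggable post function stand in for the package's database models and HTTP client (none of these names come from bambu-webhooks itself):

```python
import json

# Hypothetical in-memory stand-ins for the package's database models.
HOOKS = {}       # hook name -> description
RECEIVERS = {}   # (hook name, user) -> list of URLs assigned by that user

def register(name, description=''):
    """Declare a webhook by name, as site.register() does."""
    HOOKS[name] = description

def send(name, user, data, hashkey, post=None):
    """Serialise the payload and POST it to every URL the given user
    has assigned to this hook. `post` is injected so the sketch can
    be exercised without real HTTP."""
    if name not in HOOKS:
        raise KeyError('Unknown webhook: %s' % name)
    body = json.dumps({'hook': name, 'data': data, 'hash': hashkey})
    delivered = []
    for url in RECEIVERS.get((name, user), []):
        if post is not None:
            post(url, body)
        delivered.append(url)
    return delivered
```

In the real package the receiver URLs are managed by users through the bundled views, and delivery happens against those stored URLs.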
https://pypi.python.org/pypi/bambu-webhooks/2.0
Opened 8 years ago
Closed 8 years ago
Last modified 8 years ago

#9222 closed (wontfix)

Allow callable queryset in generic views

Description

I need to pass a variable queryset based on some external condition (i.e. it can be Entry.objects.all() or Entry.objects.filter(...) or something else), but with the current implementation it is impossible, because generic views want a queryset which is fixed once and for all (I'm thinking about the use in urls.py, not in custom views). Actually I wrote a simple wrapper, like

	def obj_list(request, queryset, ...):
	    if callable(queryset):
	        queryset = queryset()
	    return object_list(request, queryset, ...)

... and pass a function which returns the queryset I need. I found it could be useful if this were included directly in the generic view functions. The attached patch adds this functionality, and it doesn't break any previous code.

Attachments (1)

Change History (4)

Changed 8 years ago by

comment:1 Changed 8 years ago by

comment:2 Changed 8 years ago by

Thanks for the patch, but we aren't going to do this. The solution to any problem involving wanting to pass some kind of customised queryset to a generic view is to create your own view that creates the right queryset and then calls the generic view.

Although each little addition to generic views only feels like a few lines, it is a few extra lines each time that requires extra maintenance, prompts extra questions on the users list, adds extra documentation, requires extra tests, etc. It basically adds to the overhead when it's a three line function when the user needs it (which usually results in cleaner code, too, since it's clear what you're trying to do).

comment:3 Changed 8 years ago by

Milestone post-1.0 deleted

Is this really necessary? Since the queryset function does not get access to the request, this seems to be kind of limited. Wouldn't a custom manager method do the trick much better?
For more complex computations of the queryset it would often be useful to have access to the request and likely the view arguments too, hence a wrapper would be way more useful.

The patch is also missing tests. -1
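To make the proposal concrete, here is a runnable sketch of the wrapper from the ticket description, with a stub standing in for Django's object_list (the stub and names are illustrative, not Django code):

```python
# Stub standing in for django.views.generic.list_detail.object_list,
# which renders whatever queryset it is handed.
def object_list(request, queryset, **kwargs):
    return list(queryset)

def obj_list(request, queryset, **kwargs):
    # The proposed behaviour: resolve a callable at request time,
    # so the queryset is computed per request instead of once at
    # import time in urls.py.
    if callable(queryset):
        queryset = queryset()
    return object_list(request, queryset, **kwargs)

# In urls.py one could then pass a zero-argument callable, e.g.:
#   url(r'^entries/$', obj_list, {'queryset': lambda: Entry.objects.all()})
```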
https://code.djangoproject.com/ticket/9222
Here is my Strategy design patterns tutorial. You use this pattern if you need to dynamically change an algorithm used by an object at run time. Don't worry, just watch the video and you'll get it. The pattern also allows you to eliminate code duplication. It separates behavior from super and subclasses. It is a super design pattern and is often the first one taught. All of the code follows the video to help you learn. If you liked this video, tell Google so more people can see it.

Code & Comments from the Video

ANIMAL.JAVA

	public class Animal {

		private String name;
		private double height;
		private int weight;
		private String favFood;
		private double speed;
		private String sound;

		// Instead of using an interface in a traditional way
		// we use an instance variable that is a subclass
		// of the Flys interface.

		// Animal doesn't care what flyingType does, it just
		// knows the behavior is available to its subclasses

		// This is known as Composition : Instead of inheriting
		// an ability through inheritance the class is composed
		// with Objects with the right ability

		// Composition allows you to change the capabilities of
		// objects at run time!

		public Flys flyingType;

		// Setter used by the subclass constructors
		// (the other getters and setters are omitted here)
		public void setSound(String newSound){ sound = newSound; }

		/* BAD
		 * You don't want to add methods to the super class.
		 * You need to separate what is different between subclasses
		 * and the super class

		public void fly(){
			System.out.println("I'm flying");
		}
		*/

		// Animal pushes off the responsibility for flying to flyingType
		public String tryToFly(){
			return flyingType.fly();
		}

		// If you want to be able to change the flyingType dynamically
		// add the following method
		public void setFlyingAbility(Flys newFlyType){
			flyingType = newFlyType;
		}

	}

DOG.JAVA

	public class Dog extends Animal{

		public void digHole(){
			System.out.println("Dug a hole");
		}

		public Dog(){
			super();
			setSound("Bark");

			// We set the Flys interface polymorphically
			// This sets the behavior as a non-flying Animal
			flyingType = new CantFly();
		}

		/* BAD
		 * You could override the fly method, but we are breaking
		 * the rule that we need to abstract what is different to
		 * the subclasses

		public void fly(){
			System.out.println("I can't fly");
		}
		*/

	}

BIRD.JAVA

	public class Bird extends Animal{

		// The constructor initializes all objects
		public Bird(){
			super();
			setSound("Tweet");

			// We set the Flys interface polymorphically
			// This sets the behavior as a flying Animal
			flyingType = new ItFlys();
		}

	}

FLYS.JAVA

	// The interface is implemented by many other
	// subclasses that allow for many types of flying
	// without effecting Animal, or Flys.

	// Classes that implement new Flys interface
	// subclasses can allow other classes to use
	// that code, eliminating code duplication

	// I'm decoupling : encapsulating the concept that varies

	public interface Flys {
		String fly();
	}

	// Class used if the Animal can fly
	class ItFlys implements Flys{
		public String fly() {
			return "Flying High";
		}
	}

	// Class used if the Animal can't fly
	class CantFly implements Flys{
		public String fly() {
			return "I can't fly";
		}
	}

ANIMALPLAY.JAVA

	public class AnimalPlay{

		public static void main(String[] args){

			Animal sparky = new Dog();
			Animal tweety = new Bird();

			System.out.println("Dog: " + sparky.tryToFly());
			System.out.println("Bird: " + tweety.tryToFly());

			// This allows dynamic changes for flyingType
			sparky.setFlyingAbility(new ItFlys());
			System.out.println("Dog: " + sparky.tryToFly());

		}

	}

It is the first time I encounter this design pattern which uses interface in a nice way.

I'm glad to hear that. I'll cover many more. If you are into thinking about building great software, you'll love the tutorials I make over the next few months 🙂

You're awesome! Thank you so much for your tutorials.

You're very welcome 🙂 Thank you for the kind words

Your tutorials are pretty awesome, the best stuff I've seen on youtube hands down…, anyway thanks alot for taking the time to post. I am dutifully reviewing the code as you recommended and I came across something I wasn't sure of… In the comments line above you mention the following:

	// without effecting Animal, or Flys.

Not sure if it is my novice thinking or not, but wanted to know if "effecting" or "affecting" was the word intended here. Thanks for clarifying, Allan.

Thank you very much 🙂 I do my best to make the best videos I can. In regards to your question, honestly I don't know the difference between the two words. I'm really good at a few things, but grammar has never been my strong point.
Sorry about that Alan, it's clearly "effecting" meaning "to have consequence"

Derek was right the first time, maybe you should look up "pedantic" 🙂

Actually, "affecting" would be right in this case. "Effecting" means to bring about, or to cause/create. Affecting means to influence, or to cause/create a change in. You are not bringing Animal about. You are causing a change in Animal. If you wanted to use "effecting", you could, but you would have to say it like this: "without effecting a change in Animal".

Thank you very much for the tutorials they are just awesome !!! Learnt them pretty fast.

You're very welcome 🙂

Nice tutorials. I think, the best video tutorials I have ever seen in YouTube. Thank you so much boss. 🙂 Keep up the good work.

Thank you very much 🙂 most people don't know about my videos so I'm happy you found and liked them

Thank you for the awesome videos on design pattern. well done!!! You have done amazing videos and very simple to understand with the code. Thanks a lot 🙂

You're very welcome 🙂 I'm glad you liked them

First of all; great tutorials!! Watched most of the design patterns. Had just one question about the explanation at this pattern. You're talking about Composition (Animal.java) but i had the feeling it was Aggregation? I find it hard to tell the difference. (I know about their definitions)

Thank you 🙂 yes I misspoke and you are correct. Most people just refer to everything as composition, but you understand the difference.

Thank you for the quick reply!! it was probably…. you….that had told me the differences in some tutorial ;). i'll continue with the refactoring tutorials and i am looking forward to the upcoming Android tuts!

Your videos are easy to follow and nicely backed by the code on the website. Amazing job!

Thank you very much 🙂 I do my best to present everything in an understandable format. I'm glad you like the videos

Firstly, thanks for your great video and enclosed source code, but I also have a question for you: Why don't you encapsulate the flyingType field in the Animal class with the protected modifier, so that the subclasses can directly access this field but the clients can't; they must use the setFlyingAbility method instead?

You could definitely do that. There are many ways to create each design pattern. They are but a guide for writing flexible code

Great Video. Question: I want to design an object for exporting a file to XLS, CSV, or Word in ASP.NET. They have the same properties but have different values. The initial reaction is to use a Switch statement which breaks the Open-Close principle. However I don't want to create an object for each export type during implementation. Instead I want to take the value from the button click event for the export process and, using that value, create a reference that evaluates the value being passed and determines which object to retrieve, like an XLS export object, CSV export etc. Feedback on design would be great

I'm sorry, but I haven't used ASP for many years and I don't think I could help you.

Sorry I'm confused.. how does the Animal class know about Flys.. maybe I need to watch the video again.

Flys flyingType is stored in every Animal object as a field. In the Bird class we then give it the ability to fly with flyingType = new ItFlys(). I cover the strategy pattern again here: Code Refactoring Strategy. I hope that helps

Great stuff ! Thanks

Thank you very much 🙂

Thank you so much for your time and effort! 🙂 This really helps!

You're very welcome 🙂 I'm glad I was able to help

Hello Derek, Very good explanation of Strategy Design Pattern. I have one question regarding this. Can the strategy class have access to members of the Animal class? If yes, how should Animal class access be given to the strategy class?

Thank you 🙂 If by doing that you are increasing coupling, then that should be avoided.
i hope that helps

Derek, In my application I need to access members of the Animal class in the strategy class. I am planning to use an interface which gives only a few members access to the strategy. Is it good for avoiding coupling or do I need something else?

I may not be understanding the question. The strategy pattern is used to separate behavior from the super and subclasses. So, it would defeat the point if they communicated directly with each other.

Thanks for this perfect tutorial, and for your effort.

You're very welcome 🙂 Thank you for visiting!

amazing explanation…. 🙂 Thanx 🙂

Thank you 🙂

Very good explanation. I really liked the way you presented the patterns. Keep up the good work Sir.

Thank you very much 🙂

Great Tutorial Brian. Thanks a lot! Congratulations. This is the firt video I watched, and now I will carry on watching all the rest. Great staff!

Thank you very much 🙂 I did my best to make the design patterns easy to understand and fun to learn about

Great stuff, sir. Awesome voice too!

Thank you 🙂

Great tutorial. Thanks for sharing your wisdom.

Thank you for the compliment 🙂 You're very welcome

Hi Derek, These days I am working in C# and I had used Java around 7 years back in my college days, but you explained things with such a beauty that it helped me a lot. Hats off to you… Excellent Job I am a fan of yours… Cheers

Thank you very much 🙂 I try to do the best I can.

Awesome work !

Thank you very much 🙂

I've just started with design patterns. To begin with, I was told by my friends to start with some book but after going through the first four videos on your youtube channel, I have cancelled the order for that book. I find this sufficient enough to start implementing the patterns. Thanks for all the stuff. And, please accept my congratulations for the same.

I'm very happy that you are enjoying the design patterns videos. I also cover Refactoring, which is normally covered after design patterns. You're very welcome – Derek

I'm working through memorizing every design pattern I can and you've made it that much easier. I really appreciate the work on the videos. Thank you!

I'm very happy I have been able to help 🙂 You're very welcome.

Great video! I've started reading some book about Design Patterns, but it lacks the simple examples you show in your tutorials! would it make any difference, if I made the Fly interface an abstract class & ItsFly, CantFly derived classes of Fly?

Awesome!… I referred to multiple examples but I was not able to understand the real benefit of this pattern. But your video gave me a very clear and thorough understanding. Great work and thanks for all your effort on this!…

Thank you 🙂 I'm very happy that I was able to clear it up

Well, this is so much better than reading those theoretical books which do not give much insight into the code. The best tutorial for sure! Thanks 🙂

Thank you 🙂 I'm glad you enjoyed it.

Waw! Another great video! I got a final exam tommorow at 9 and you did a tutorial about every pattern we have covered during the semester ! Definetly the most effective study method i've used in many years! Thanks again to use those simple examples, every teacher should be like you!

I'm very happy that I was able to help. Good luck on your exams 🙂

Your explanation and so easy to remember technique is something that can't be thanked enough. Thanks a lottt..

Thank you for the nice compliment 🙂 You're very welcome.

hey Thanks for bringing up this tutorial, it is very easy to understand, I liked it now i can learn design pattern very fast.

You're very welcome 🙂 I'm glad I was able to clear up this topic for so many people

Sir, I was wondering how the Animal.java knows about the Flys interface??? I couldn't understand how they are related….
Thank you for clarifying

Take a look at this line:

	// Composition allows you to change the capabilities of objects at run time
	public Flys flyingType;

The Flys object is stored in every Animal. It can then be changed if needed without disturbing the code

I have only one word for you: YourTutorialsAreGreat!! Keep it up. I'm going to watch all of them..

Thank you very much 🙂 I'm happy that you like them. You're very welcome 🙂 I have all of them here. They aren't neat because I never expected to give them away, but every slide is here.

I have started the design pattern series, Thanks Derek. Now I am clear on the Strategy pattern, I have just one question. What is Association and Aggregation, and what is the difference between these and Composition? Thanks Pradeep

Hi Pradeep, You're very welcome 🙂 This diagram explains everything you asked about with examples as well. I hope it helps.

Thanks Derek, The diagram helps a lot, Now clear on Association, Aggregation and Composition 🙂 . Thanks Pradeep

Great I'm glad it helped 🙂

Hi Derek, thank you very much for your amazing videos… I'm learning so much thanks to you.. i just can't tell you how glad I am to have come across your videos! ♥

Thank you 🙂 I'm very happy that you enjoy them. Many more are coming.

Derek, thank you buddy. You are the best!!!

Thank you 🙂

Very good series! You are truly demystifying an important topic. Above, in the comments here to the Strategy Pattern, you say, "The strategy pattern is used to separate behavior from the super and subclasses." With that one sentence, I finally clearly understood the purpose of it. Often, with design patterns, teachers get lost in explaining the mechanics, and the reason-to-use gets lost in the wash of information. Can you make a list of these kinds of succinct formulations, and post them on your site? ie, one sentence per design pattern. I think for certain patterns, like Strategy, the "why" is not obvious whereas the mechanics are simple. I would truly appreciate it if you focused like a laser on the why's, so that we know when to use a given pattern. And, as you do in your video, please state the why's in different ways (perhaps two or more sentences instead of one) so that the clarification may come from different angles. Thanks again for your excellent efforts!

Thank you 🙂 It is always great to hear that I was able to clear everything up for people. I'll see what I can do about that list. These tutorials became popular many months after I originally posted them, so I stopped short of making a video like you requested. I'll see what i can do now.

Derek, Glad to hear! I will look forward to that list very much. I was thinking of a more complete sentence to describe the Strategy pattern, "The strategy pattern is used to separate behavior from the super and subclasses, when you want to invoke this dissimilar behavior with the same function in the client (making use of polymorphism)." Also, I was reading on the Strategy pattern page (), and it says there that the Strategy pattern and the Bridge pattern have the same UML diagram, "but differ in their intent". Again, another example of the teacher being clear on the easy stuff (mechanics/structure) but using jargon, almost maniacally trying to keep secret the main point: why. Why would you use one or the other? This is where the esoteric side of design patterns starts to lose us regular coders who are not academics, but simply want to write better code. The jargon starts to get too thick, and the reasons for using a particular pattern are obscured. We want credible examples and, more importantly, scenarios indicating when a given pattern is appropriate. Thanks for listening to my mini rant.

I'll try to explain them in simple terms. The Bridge Pattern allows you to create 2 types of abstract objects that can interact with each other in numerous extendable ways.
In my bridge design pattern example I created a way for an infinite variety of remote controls to interact with an infinite variety of different devices.

The strategy pattern allows you to change what an object can do while the program is running. In my strategy design pattern example I showed how you can add an object that represents whether an Animal can fly or not to the Animal class. This attribute was then added to every class that extended from Animal. Then at run time I was able to decide if an Animal could fly or not.

I hope that helps 🙂

I am a little bit confused on the use of strategy and factory both created at runtime. Kindly please elaborate the difference. Thank you in advance and I love your videos.

In my strategy design pattern tutorial I demonstrated how you can at run time give any Animal subclass the ability to fly.

1. I added the Flys object to Animal, which all subclasses then receive, with: public Flys flyingType;
2. ItFlys and CantFly implement the Flys interface
3. At runtime I can change the version of Flys: sparky.setFlyingAbility(new ItFlys());
4. Now sparky can fly

With the Factory pattern tutorial I showed how the EnemyShipFactory pops out a different type of ship based off of user input:

	EnemyShip newShip = null;

	if (newShipType.equals("U")){
		return new UFOEnemyShip();
	} else
	if (newShipType.equals("R")){
		return new RocketEnemyShip();
	} else
	if (newShipType.equals("B")){
		return new BigUFOEnemyShip();
	} else return null;

I hope that helps 🙂

Thank you for your explanation. I have another question if you don't mind. Why do some examples use factory in a calculator program and others use strategy?

Most patterns have the same goal which is to add flexibility or to streamline the code. They can very often be used interchangeably.

Thank you very much. For almost 12 hours I have been going back and forth to understanding strategy n factory.
They are almost exactly the same, even their UML. Your videos are good and I also like the code refactoring. When are you going to start j2ee, spring, hibernate….. I guess many are eager for those videos. I hope soon and hopefully it will happen this year

You're very welcome 🙂 Before Java Enterprise I want to make an Android tutorial that teaches everything in a format in which anyone will be able to make any Android app they can imagine. Sorry it is taking so long, but I'm very serious about teaching Android right now.

You are like the best Tutor I've ever seen. The quality of the lessons and the way you teach is amazing. Thanks a ton

Thank you very much 🙂

Hi, our professor taught us this pattern but I still don't get how to access the dog's digHole() method if I use the class Animal, other than forcing it with ((Dog) sparky).digHole(); which is not appropriate 🙂 I'm confused.

That code you have there does work. You wouldn't have to cast it if you had the original method in the super class though.

Great video and awesome teaching!! Thanks a lot for taking the effort and making things very interesting 😀

Thank you very much 🙂

Awesome tutorial, Clearly described. Thanks a lot for your videos.

Thank you 🙂 You're very welcome

Hey Derek! I have been watching most of your OOP videos. well, I am not proud of myself for not having the courtesy to thank you until now. you r an amazing teacher. I will soon convert all the code you used in the Design Pattern videos into Php and put it on github. I will let u know when I finish 🙂

Thank you for taking the time to tell me that you have found them useful. I greatly appreciate that. Yes definitely post your link.

Hello Derek, Thank you for these videos they are very informative and helpful. Can you please explain to me why you have the flyingType variable as public? If it is public why do you need the setter method?

You're very welcome 🙂 It didn't need to be public.

Really really nice tutorials, I love watching your tutorials on Design Patterns, I have planned that I'll watch all of your tutorials on youtube. Thanks a lot for all the tutorials 🙂 Can you please tell me the screencasting software you used to create these tutorials

Thank you 🙂 I use Camtasia 2 to record.

would be really nice if you redo the videos in C#. I understand that most concepts are the same but it would still be better for a whole lot of newbies. thank you so much for the videos, really great stuff although you are talking a little bit fast 🙂

I'm working on a C# tutorial. I'll see what I can do patterns-wise as well.

As a current Design Patterns student going through Head First Design Patterns, your videos are a helpful reinforcement. Thanks!

Thank you 🙂 I'm very glad that I could help

Thanks! These videos are many times more interesting for me than the boring books and lectures))) In addition, it helps to learn English (it's not my native language).

You're very welcome 🙂

Banas for president, Thank you for your help

That's funny 🙂 You're very welcome.

What if you just had an interface Flys, and have Bird implement it? Can you elaborate on why this option is duplicate code, and not good? Thanks.

You could definitely do that instead. I just wanted to demonstrate the pattern in a simple way.
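Several commenters asked about other languages, so here is the same run-time behavior swap sketched in Python (a translation of the Java classes above, not code from the video):

```python
class ItFlys:
    def fly(self):
        return "Flying High"

class CantFly:
    def fly(self):
        return "I can't fly"

class Animal:
    def __init__(self):
        # Composition: the flying behavior is held in a field,
        # not inherited, so it can be swapped at run time.
        self.flying_type = CantFly()

    def try_to_fly(self):
        return self.flying_type.fly()

    def set_flying_ability(self, new_fly_type):
        self.flying_type = new_fly_type

class Dog(Animal):
    pass

class Bird(Animal):
    def __init__(self):
        super().__init__()
        self.flying_type = ItFlys()

sparky, tweety = Dog(), Bird()
print("Dog: " + sparky.try_to_fly())   # Dog: I can't fly
print("Bird: " + tweety.try_to_fly())  # Bird: Flying High
sparky.set_flying_ability(ItFlys())    # swap the strategy dynamically
print("Dog: " + sparky.try_to_fly())   # Dog: Flying High
```

Note that Python's duck typing makes an explicit Flys interface unnecessary, which is why it is omitted from this sketch.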
http://www.newthinktank.com/2012/08/strategy-design-pattern-tutorial/?replytocom=17554
On Sun, 2008-09-14 at 10:08 +0100, Arnaud Delobelle wrote:

    On 14 Sep 2008, at 09:44, Cliff Wells wrote:

        j = range(3)

        for i in j:
            i           # evaluates to []

        for i in j:
            continue    # evaluates to []

        for i in j:
            continue i  # evaluates to [0,1,2]

    Let's not call it continue, but YIELD for now:

        for i in J:
            YIELD i

    Now this won't work for nested loops. E.g. in current python

        def flatten(I):
            for J in I:
                for j in J:
                    yield j

        >>> '-'.join(flatten(['spam', 'eggs']))
        's-p-a-m-e-g-g-s'

    Now compare with the current syntax:

        '-'.join(j for J in I for j in J)

Certainly more clear and concise, but since (luckily for me this time) we're maintaining backwards-compatibility, that form would still be available.

Cliff
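The equivalence being discussed is easy to check; this quick sanity test is not part of the original mail:

```python
def flatten(I):
    # Explicit nested-loop generator, as in the mail.
    for J in I:
        for j in J:
            yield j

words = ['spam', 'eggs']

# The generator and the nested comprehension give the same result.
assert '-'.join(flatten(words)) == 's-p-a-m-e-g-g-s'
assert '-'.join(j for J in words for j in J) == 's-p-a-m-e-g-g-s'
```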
https://mail.python.org/archives/list/python-ideas@python.org/message/GNSMSB26YMMIKWNKVYDJ3VMMJXDRYA54/
The End of Global CSS

While undoubtedly a massive step forward for taming CSS, none of these methodologies have addressed the real problem with our style sheets. No matter which convention we choose, we're still stuck with global selectors.

But all of that changed on April 22, 2015.

As we covered in an earlier post — "Block, Element, Modifying Your JavaScript Components" — we can leverage Webpack to import our CSS from within a JavaScript module. If this sounds unfamiliar to you, it's probably a good idea to go read that article now, lest you miss the importance of what's to follow.

Using Webpack's css-loader, importing a component's CSS looks like this:

	require('./MyComponent.css');

At first glance — even ignoring the fact that we're importing CSS rather than JavaScript — this is quite strange. Typically, a require call should provide something to the local scope. If it doesn't, it's a sure sign that a global side effect has been introduced — often a symptom of poor design. But this is CSS — global side effects are a necessary evil. Or so we thought.

On April 22, 2015, Tobias Koppers — the ever tireless author of Webpack — committed the first iteration of a new feature to css-loader, at the time called placeholders, now known as local scope. This feature allows us to export class names from our CSS into the consuming JavaScript code.

In short, instead of writing this:

	require('./MyComponent.css');

We write this:

	import styles from './MyComponent.css';

So, in this example, what does styles evaluate to? To see what is exported from our CSS, let's take a look at an example of what our style sheet might look like.

	:local(.foo) { color: red; }
	:local(.bar) { color: blue; }

In this case, we've used css-loader's custom :local(.identifier) syntax to export two identifiers — foo and bar. These identifiers map to class strings that we can use in our JavaScript file.
For example, when using React:

```javascript
import styles from './MyComponent.css';
import React, { Component } from 'react';

export default class MyComponent extends Component {
  render() {
    return (
      <div>
        <div className={styles.foo}>Foo</div>
        <div className={styles.bar}>Bar</div>
      </div>
    );
  }
}
```

Importantly, these identifiers map to class strings that are guaranteed to be unique in a global context. We no longer need to add lengthy prefixes to all of our selectors to simulate scoping. More components could define their own foo and bar identifiers which — unlike the traditional global selector model — wouldn’t produce any naming collisions.

It’s critical to recognise the massive shift that’s occurring here. We can now make changes to our CSS with confidence that we’re not accidentally affecting elements elsewhere in the page. We’ve introduced a sane scoping model to our CSS.

As a result of this scoping model, we’ve handed control of the actual class names over to Webpack. Luckily, this is something that we can configure.

By default, css-loader transforms our identifiers into hashes. For example, this:

```css
:local(.foo) { … }
```

Is compiled into this:

```css
._1rJwx92-gmbvaLiDdzgXiJ { … }
```

In development, this isn’t terribly helpful for debugging purposes. To make the classes more useful, we can configure the class format in our Webpack config as a parameter to css-loader:

```javascript
loaders: [
  ...
  {
    test: /\.css$/,
    loader: 'css?localIdentName=[name]__[local]___[hash:base64:5]'
  }
]
```

In this case, our foo class identifier from earlier would compile into this:

```css
.MyComponent__foo___1rJwx { … }
```

We can now clearly see the name of the identifier, as well as the component that it came from. Using the NODE_ENV environment variable, we can configure different class patterns for development and production.

```javascript
loader: 'css?localIdentName=' + (
  process.env.NODE_ENV === 'development' ?
    '[name]__[local]___[hash:base64:5]' :
    '[hash:base64:5]'
)
```

Now that Webpack has control of our class names, we can trivially add support for minified classes in production.

As soon as we discovered this feature, we didn’t hesitate to localise the styles in our most recent project. We were already scoping our CSS to each component with BEM — if only by convention — so it was a natural fit. Interestingly, a pattern quickly emerged. Most of our CSS files contained nothing but local identifiers:

```css
:local(.backdrop) { … }
:local(.root_isCollapsed .backdrop) { … }
:local(.field) { … }
:local(.field):focus { … }
etc…
```

Global selectors were only required in a few places in the application. This instinctively led towards a very important question. What if — instead of requiring a special syntax — our selectors were local by default, and global selectors were the opt-in exception? What if we could write this instead?

```css
.backdrop { … }
.root_isCollapsed .backdrop { … }
.field { … }
.field:focus { … }
```

While these selectors would normally be too vague, transforming them into css-loader’s local scope format would eliminate this issue and ensure they remain scoped to the module in which they were used.

For those few cases where we couldn’t avoid global styles, we could explicitly mark them with a special :global syntax. For example, when styling the un-scoped classes generated by ReactCSSTransitionGroup:

```css
.panel :global .transition-active-enter { … }
```

In this case, we’re not just scoping the local panel identifier to our module — we’re also styling a global class that is outside of our control.

Once we started investigating how we might implement this local-by-default class syntax, we realised that it wouldn’t be too difficult. To achieve this, we leveraged PostCSS — a fantastic tool that allows you to write custom CSS transformers as plugins. One of the most popular CSS build tools today — Autoprefixer — is actually a PostCSS plugin that doubles as a standalone tool.
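The local-by-default transform can be illustrated with a tiny selector rewriter. This is a naive, regex-based sketch of the idea only — the real postcss-local-scope plugin parses stylesheets properly via PostCSS rather than pattern-matching strings:

```javascript
// Naive sketch of "local by default" selector rewriting. Plain class
// selectors are wrapped in css-loader's :local(...) syntax, while
// anything after a :global marker is passed through untouched.
// Not the real plugin — just an illustration of the transform.
function localiseSelector(selector) {
  // Split on the :global marker: everything before it becomes local.
  const [localPart, globalPart] = selector.split(/\s*:global\s*/);
  const localised = localPart
    .replace(/\.([\w-]+)/g, ':local(.$1)') // .foo -> :local(.foo)
    .trim();
  return globalPart ? localised + ' ' + globalPart.trim() : localised;
}
```

With this transform, `.backdrop` becomes `:local(.backdrop)`, and `.panel :global .transition-active-enter` becomes `:local(.panel) .transition-active-enter` — local scoping by default, with `:global` as the explicit escape hatch.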
To formalise the usage of local CSS, I’ve open sourced a highly experimental plugin for PostCSS called postcss-local-scope. It’s still under heavy development, so use it in production at your own risk.

If you’re using Webpack, it’s a relatively straightforward process to hook postcss-loader and postcss-local-scope up to your CSS build process. Rather than document it here, I’ve created an example repository — postcss-local-scope-example — that shows a small working example.

Excitingly, introducing local scope is really just the beginning. Letting the build tool handle the generation of class names has some potentially huge implications. In the long term, we could decide to stop being human compilers and let the computer optimise the output. In the future, we could start generating shared classes between components automatically, treating style re-use as an optimisation at compile time.

Understanding the ramifications of this shift is something that we’re still working through. With your valuable input and experimentation, I’m hoping that this is a conversation we can have together as a larger community. To get involved, make sure you see it with your own eyes by checking out postcss-local-scope-example. Once you’ve seen it in action, I think you’ll agree that it’s not just hyperbole — the days of global CSS are coming to an end.

The future of CSS is local.

Note: Automatically optimising style re-use between components would be an amazing step forward, but it definitely requires help from people a lot smarter than me. Hopefully, that’s where you come in ☺

Addendum 24 May, 2015: The original ideas presented in postcss-local-scope have been accepted into Webpack by Tobias Koppers, meaning that the project is now deprecated. Support for CSS Modules — as they are tentatively known — is now available in css-loader via an opt-in module flag.
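For reference, the opt-in flag mentioned in the addendum was exposed through css-loader’s query string in webpack 1. A config fragment might have looked like the following — treat the exact query spelling as a sketch from that era, and verify against the css-loader docs for your version:

```javascript
// Sketch: enabling the opt-in CSS Modules flag on css-loader
// (webpack 1-era query syntax; the `modules` flag turns on
// local-by-default scoping, with :global as the escape hatch).
loaders: [
  {
    test: /\.css$/,
    loader: 'style!css?modules&localIdentName=[name]__[local]___[hash:base64:5]'
  }
]
```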
I’ve created a working example of CSS Modules in css-loader to demonstrate their usage, including class inheritance to intelligently share common styles between components.

Image Credit: Jay Mantri
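The class inheritance mentioned above works through the composes keyword in CSS Modules. A minimal sketch — the file and class names here are hypothetical:

```css
/* common.css — a shared base style */
.base {
  color: #333;
  padding: 10px;
}

/* MyComponent.css — pulls in .base via composes, so the compiled
   element receives both the local .foo class and the shared .base
   class, re-using the common styles instead of duplicating them */
.foo {
  composes: base from './common.css';
  background: red;
}
```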
https://medium.com/seek-blog/the-end-of-global-css-90d2a4a06284