The following program can be used to test that the LED circuit is built correctly. It should turn the LEDs all on and then off.

from microbit import *

# turn on the LEDs
pin0.write_digital(1)
pin1.write_digital(1)
pin2.write_digital(1)
pin8.write_digital(1)
sleep(1000)
# turn them off
pin0.write_digital(0)
pin1.write_digital(0)
pin2.write_digital(0)
pin8.write_digital(0)

- To turn an LED on, we write a HIGH signal (1) to the digital pin.
- To turn an LED off, we write a LOW signal (0) to the digital pin.

There is no repetition in this program. Press the reset button if you were not watching the micro:bit when it started. Speak to your teacher if any of your LEDs is not working.

Task 2
Look back over the programs you have previously written. Using a while True: loop, you can make all of the lights blink on and off once per second.

Task 3
Write a program that turns the LEDs on when the A button is pressed and turns them off when the B button is pressed. Your program should be something like this. The lines that begin with a # are comments. Press enter after these lines and include the code that does the job described by the comment.

from microbit import *

while True:
    if button_a.was_pressed():
        # turn on the lights
    elif button_b.was_pressed():
        # turn off the lights
    sleep(20)

Task 4
Write a program that always has 2 of the LEDs on. When you turn 2 off, you turn the other 2 on and vice versa. Make the program do this once every second.

Task 5
Copy and test the following program,

from microbit import *

ledpins = [pin0, pin1, pin2, pin8]

for led in ledpins:
    led.write_digital(1)
sleep(500)

In this program, we define a list called ledpins. It is a list of the digital pins that we have connected the LEDs to. The FOR loop can be used to reduce the number of lines we need to turn all of the LEDs on or off. Copy the 3 lines of the FOR loop and paste them at the end of the program. Make the tiny change to the code needed to turn the lights off again.
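If you want to trace the FOR-loop pattern away from the hardware, the same idea can be exercised with a stand-in pin class (plain Python, purely illustrative — FakePin is not part of the microbit module):

```python
class FakePin:
    """Stand-in for a micro:bit pin so the loop can run off-device."""
    def __init__(self):
        self.value = 0

    def write_digital(self, x):
        self.value = x

ledpins = [FakePin(), FakePin(), FakePin(), FakePin()]

for led in ledpins:      # same shape as the micro:bit version
    led.write_digital(1)
assert all(pin.value == 1 for pin in ledpins)

for led in ledpins:      # the tiny change: write 0 instead of 1
    led.write_digital(0)
```

On the real board, swapping FakePin objects for pin0, pin1, pin2 and pin8 gives exactly the Task 5 program.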
Task 6
Another way to make it easier to work with the LEDs is to make a subroutine. A subroutine is a reusable block of code that has a name that you choose. Copy and test the following program,

from microbit import *

def all(x):
    ledpins = [pin0, pin1, pin2, pin8]
    for led in ledpins:
        led.write_digital(x)

all(1)
sleep(500)
all(0)

The subroutine in this program is called all. The letter x is called a parameter. When we run the subroutine, we need to supply a 1 or a 0 to let the subroutine know whether it is turning the LEDs on or off. A subroutine is only defined once in a program, usually at the start. Do not add any other statements before the subroutine. The definition of the subroutine (the 4 lines starting from the word def) must appear before the subroutine is called. Adapt the program to use a while True: loop to make the LEDs blink on and off. Remember to include a pause when the LEDs are off as well as when they are on.

Task 7
Rewrite your program for Task 3 so that it makes use of the subroutine. You need to copy the 4 lines of code that define the subroutine and place them before the place in the program where you turn lights on or off.

Task 8
Write a program that lights up the first LED, pauses for about 100 milliseconds, then turns the LED off. Then it should do the same with the second LED, then the third, then the fourth. Once this has happened, the same process should be carried out working backwards up the line, turning the third, second and first lights on and off in order. The whole process should carry on repeating forever. It should look a little bit like the light is moving forwards and backwards along the line of LEDs.

Task 9
The following lines of code show how to generate a random number in MicroPython. The variable a stores a random number from 0 to 3.

from microbit import *
import random

a = random.randint(0, 3)

Write a program that allows a user to turn on a randomly chosen LED when the A button on the micro:bit is pressed.
Pressing the button should turn off all of the LEDs before randomly selecting one to turn on. If you have the following list defined in your program,

ledpins = [pin0, pin1, pin2, pin8]

you could turn on the first LED with a statement like,

ledpins[0].write_digital(1)

Instead of writing a literal number in the square brackets, you need to use the variable that stores the random number that you generated.

Task 10
With the LEDs lined up in the circuit, we can use them as place value indicators for a binary pattern to represent any integer from 0 to 15 (4 bits). The following lines of code should help you,

from microbit import *

denary = 15
eights = denary // 8
remainder = denary % 8
fours = remainder // 4
remainder = remainder % 4
twos = remainder // 2
units = remainder % 2

The variables eights, fours, twos and units will each store a 1 or 0 depending on whether the place value is a 1. The code depends on the variable denary, which stores the number before it has been converted to binary. Get your program to count from 0 to 15 in binary numbers. As an extension, make a subroutine for the binary conversion.

Task 11
Make your program from Task 10 count up and down in binary using for loops.
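As a starting point for the Task 10 extension, the place-value arithmetic can be wrapped in a subroutine. This runs as plain Python off the micro:bit (the function name to_binary4 is just a suggestion):

```python
def to_binary4(denary):
    """Return the four place-value bits (eights, fours, twos, units)
    for an integer from 0 to 15, using the arithmetic from Task 10."""
    eights = denary // 8
    remainder = denary % 8
    fours = remainder // 4
    remainder = remainder % 4
    twos = remainder // 2
    units = remainder % 2
    return eights, fours, twos, units

# count from 0 to 15 in binary
for n in range(16):
    print(n, to_binary4(n))
```

On the micro:bit, each returned bit would be written to the matching pin with write_digital.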
http://www.multiwingspan.co.uk/micro.php?page=pyex4
In a previous post, I showed you how to remove comments from a Word file. In today's post, I am going to show you how to accomplish the same scenario, but this time with Excel and PowerPoint files. Excel and PowerPoint have a Document Inspector feature, which is able to remove multiple types of data/content, including comments. This feature works great for client side solutions, but how do you cleanse these files of comments on the server? The Open XML SDK can accomplish these scenarios in just a few lines of code.

Note: The code shown in this post is backwards compatible with version 1 of the SDK.

Solution for PowerPoint

Imagine I start off with a PowerPoint deck that has multiple slides and comments. Here is a screenshot of a comment within a PowerPoint slide:

To remove comments from a PowerPoint deck we need to take the following actions:
- Open up the PowerPoint deck via the Open XML SDK
- Access the main presentation part, which will give us access to all the slide parts within the package
- Delete the comment part associated with each slide part
- Save changes made to the deck

If you want to jump straight into the code, feel free to download this solution here.

The Code

The code is pretty easy and maps 1:1 to the steps I mentioned above:

End Result

Running this method, I end up with a presentation void of comments. Pretty easy! Here is a screenshot of the final presentation:

Solution for Excel

This solution is very similar to the PowerPoint solution. Imagine I start off with an Excel workbook that has multiple worksheets, where each worksheet contains multiple comments.
Here is a screenshot of a comment within a worksheet cell:

To remove comments from an Excel workbook we need to take the following actions:
- Open up the Excel workbook via the Open XML SDK
- Access the main workbook part, which will give us access to all the worksheet parts within the package
- Delete all comment parts associated with each worksheet part
- Save changes made to the workbook

If you want to jump straight into the code, feel free to download this solution here.

The Code

This code is very similar to the code used to remove comments from a PowerPoint deck:

End Result

Running this method, I end up with a workbook void of comments. Here is a screenshot of the final workbook:

Zeyad Rajabi

FYI, the comments in Excel don't consist in only the Comments part. This kind of snippet is pretty bad given the actual work to do to delete comments properly. It's bad since it's wrong, and that developers tend to copy/paste someone else's code without checking first.

Anon, Are you talking about removing the legacy vml drawing parts as well? Zeyad Rajabi

Sure. And the reference to it in the sheet footer. And any singleton child part referenced by the legacy drawings. That makes a lot of difference with what you have posted…

Sorry for absolute off-topic Open Packaging Conventions question. Can anyone explain me the purpose of defining content types, when content can be determined from relationships? Thanks in advance and once more, sorry for OT question.

Brian, I have a question somewhat unrelated to this post. But it's a simple one. I have added some XML parts to an Excel spreadsheet via an Excel Application Add-in. When I do this I use a namespace uri to identify the part that I added. But when I use the Open XML SDK this namespace uri that I used doesn't appear to be available.
How do I find this namespace uri, given this is what I used as the key to my content.

Anon – Is there any way you can share an example file? I want to make sure I fully understand your concerns.

Ferou – Take a look at the following post:

Kang – You can either iterate through all the parts related to a particular part and look for the uri yourself or you could use System.IO.Packaging to get access to the specific part you want.
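Neither of the original code listings survived in this copy of the post. As a rough, hedged illustration of the underlying idea — .xlsx and .pptx files are ZIP packages, and removing comments means deleting the comment parts — here is a Python sketch using only the standard library. As the comment thread above points out, a complete solution must also remove the legacy VML drawing parts (handled below) and the relationship entries that reference the deleted parts (not handled below):

```python
import re
import zipfile

# Part names that hold comments in .xlsx/.pptx packages, plus the
# legacy VML drawings that anchor Excel comments (see discussion above).
COMMENT_PART = re.compile(r"comments\d*\.xml$|vmlDrawing\d*\.vml$")

def strip_comment_parts(src, dst):
    """Copy the package from src to dst, skipping comment-related parts.
    NOTE: relationship (.rels) entries pointing at the removed parts are
    left untouched here, which a real implementation must also fix."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            if COMMENT_PART.search(item.filename):
                continue
            zout.writestr(item, zin.read(item.filename))
```

The Open XML SDK approach described in the post works at a higher level (typed parts instead of raw ZIP entries) and is the better choice on .NET.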
https://blogs.msdn.microsoft.com/brian_jones/2009/04/20/removing-comments-from-excel-and-powerpoint-files/
taskrunner 0.2.1

Execute a certain sequence of tasks and later their cleanups in reverse order.

# TaskRunner

Execute a certain sequence of tasks and later their cleanups. It is useful for running tasks with many varying configurations. It doesn't have any dependencies, just the standard library.

```python
# file examples/simple.py
import taskrunner


class ExampleTask(taskrunner.Task):
    def __init__(self, msg, clean_msg, **kwargs):
        super(ExampleTask, self).__init__(**kwargs)
        self.msg = msg
        self.clean_msg = clean_msg

    def run(self, context):
        print self.msg

    def cleanup(self, context):
        print self.clean_msg


task1 = {'task': ExampleTask,
         'name': 'task1',
         'msg': 'hello world',
         'clean_msg': 'goodbye'}

task2 = {'task': ExampleTask,
         'name': 'task2',
         'msg': 'hello again',
         'clean_msg': ':)'}

pipeline = [task1, task2]
```

```
$ bin/taskrunner -f examples/simple.py pipeline
2013-12-02 19:46:37,952 - taskrunner - INFO - =========== run task1 ===========
hello world
2013-12-02 19:46:37,953 - taskrunner - INFO - =========== run task2 ===========
hello again
2013-12-02 19:46:37,953 - taskrunner - INFO - --------- cleanup task2 ---------
:)
2013-12-02 19:46:37,953 - taskrunner - INFO - --------- cleanup task1 ---------
goodbye
```

### How it works

The pipeline is a list of task configurations, which are normal Python dictionaries with the special item `'task': ExampleTask`. It goes through the list and for each task, it instantiates `ExampleTask` with the rest of the dictionary content as parameters. Then it executes `ExampleTask.run()` for all of the tasks. After it passes through the whole list, it goes through it in reverse order and executes `ExampleTask.cleanup()` for each item. The tasks can write into `context` and the content of it will be passed to the next task.
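The behaviour just described can be condensed into a stand-alone Python 3 sketch — a simplified stand-in for taskrunner's own `execute`, shown only to make the run/reverse-cleanup order concrete:

```python
class Task:
    """Minimal stand-in for taskrunner.Task."""
    def __init__(self, name=None, **kwargs):
        self.name = name or type(self).__name__


def execute(pipeline):
    """Instantiate every task config, run the tasks in order, then
    clean them up in reverse order, passing one shared context dict."""
    context = {}
    tasks = []
    for config in pipeline:
        config = dict(config)        # don't mutate the caller's dict
        cls = config.pop('task')     # the special 'task' item
        tasks.append(cls(**config))
    for task in tasks:
        task.run(context)
    for task in reversed(tasks):
        task.cleanup(context)
    return context


class Echo(Task):
    def __init__(self, msg, clean_msg, **kwargs):
        super().__init__(**kwargs)
        self.msg, self.clean_msg = msg, clean_msg

    def run(self, context):
        context.setdefault('log', []).append(self.msg)

    def cleanup(self, context):
        context['log'].append(self.clean_msg)
```

Feeding this `execute` two Echo tasks produces the runs in order followed by the cleanups reversed, matching the log output shown above.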
### Usage

You can specify the pipeline directly as arguments:

    $ bin/taskrunner -f examples/simple.py task1 task2

Or you can combine multiple pipelines, which will run all the tasks from each pipeline:

    $ bin/taskrunner -f examples/simple.py pipeline another_task_pipeline

Or even combine pipelines and tasks (this will run *task2* twice):

    $ bin/taskrunner -f examples/simple.py pipeline task2

To use the tool as a library, you can directly use `execute`:

```python
taskrunner.execute([task1, task2])
```

#### Taking control of the cleanup execution

Sometimes you want to only execute the `run()` part of the tasks, debug something and only run the cleanups after you are done. To skip the cleanups, you can do:

    $ bin/taskrunner -f examples/simple.py pipeline --cleanup=never

To run the cleanups only:

    $ bin/taskrunner -f examples/simple.py pipeline --cleanup=pronto

You can also use the options `--cleanup=on_success` or `--cleanup=on_failure`, which will get executed based on how the `run` turned out. Don't forget to make the cleanups independent of the runs, otherwise this won't work.

#### Exception and signal handling

When an exception occurs in the task run, its traceback is logged and it jumps right into the cleanups. However, it doesn't clean up the tasks that didn't run (but it does clean up the task which failed and got only partially executed). If you get an exception during some cleanup, the traceback is logged but execution continues with the next task's cleanup. The list of errors gets logged again after everything else finishes, in the order they happened.

If you terminate the run using `ctrl-c` (also known as *SIGINT*), it will go straight to the cleanups. Sending the termination signal again will stop it completely. This works for the *SIGTERM* signal too.

#### The name of a task

By default, the name of a task is the class name. To have more readable logs, you can specify the keyword `name` in the task configuration.
The task names can be important for configuration redefinition from the command line.

#### Redefining the task configuration through CLI arguments

Sometimes you want to run a sequence of tasks with some changes in their configuration, but don't want to change the files. You can redefine it using the parameter `-D`.

    $ bin/taskrunner -f examples/simple.py pipeline -D task1.msg=ping

It can't contain any spaces and has to be in the exact format of `taskname.key1.key2.key3...=newvalue`, where `taskname` is either the name of the task specified in the configuration dictionary or the class name. If more tasks have the same name, it will get rewritten for all of them. For example,

    $ bin/taskrunner -f examples/simple.py pipeline -D ExampleTask.msg=ping

will change the message for both `task1` and `task2`, because they have the same class name.

#### Using multiple files for the task configurations

You probably don't want to have everything in a single file. You can load multiple modules and reference the tasks normally.

    $ bin/taskrunner -f examples/advanced.py -f examples/simple.py \
        mytask task1 task2

In case you have any name conflicts, you can specify the name of the module.

    $ bin/taskrunner -f examples/advanced.py -f examples/simple.py \
        advanced.pipeline

### Best practices for writing tasks and their configurations

* don't make the `cleanup` method dependent on `run`, because with the option `--cleanup=pronto`, the `run` method won't get executed
* don't assume that the `run` got executed completely
* put the tasks into a separate file, which will be imported in the file with the task configurations
* use the minimum of Python features in the task configuration files (which are just `.py` files); variable definitions and if conditions are usually sufficient. You will later be able to switch to some other configuration format.
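Returning to the `-D` redefinition format described above, its `taskname.key1.key2...=newvalue` shape can be illustrated with a small parser sketch (not taskrunner's actual implementation):

```python
def parse_redefinition(arg):
    """Split a '-D' argument of the form taskname.key1.key2...=newvalue
    into (taskname, [key1, key2, ...], newvalue)."""
    path, sep, value = arg.partition('=')
    if not sep:
        raise ValueError("expected taskname.key...=newvalue, got %r" % arg)
    taskname, *keys = path.split('.')
    if not keys:
        raise ValueError("expected at least one key after the task name")
    return taskname, keys, value
```

For example, `parse_redefinition("task1.msg=ping")` yields `("task1", ["msg"], "ping")`; the tool would then apply the new value to every task whose name (or class name) matches.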
- Author: Martina Kollarova
- License: Apache License, Version 2.0
- Package Index Owner: mkollaro
https://pypi.python.org/pypi/taskrunner/0.2.1
I've recently been using Microsoft's WCF REST starter kit to access information over the web, and have been very impressed with its ease of use and design, but I wanted more. I didn't want to evaluate the toolkit against a noddy little application that I'd thrown together. I wanted to test it against a busy system; one which was getting thousands of hits a second; one which would really test it to see if it was worth the free asking price.

In order to test out the functionality that Microsoft is providing, I decided to write an application against a source of data that provides a fairly full and feature rich RESTful API. To that end, I've developed a simple Twitter client that allows you to view your latest posts: the latest posts that have been made to Twitter, and the latest posts that your friends have made to Twitter. As well as doing this, it also allows you to upload new messages. The resulting application is called SongBird, just because I think it's prettier for birds to sing than Tweet (sorry, I know it's a bad pun, but I liked it).

You may be wondering why there's a need for RESTful services. After all, there's a perfectly serviceable means to perform operations over HTTP called SOAP. To get a feel for this, it's important to understand some of the key principles of REST. As you can see, if your application needs to perform simple operations, REST is a good choice - especially as you can test your commands in the browser address bar and email them to your friends as links. If your application needs to perform more complex operations, such as updating a master object with lots of sub objects, then REST is not going to be the best choice for this.

The services we are going to use to access Twitter are provided via a RESTful API, which is fairly well documented at. The API supports a wide variety of MIME type encodings, so it's easy to find a format that suits our needs.
It's very important that you read the API documentation for each API call to find out what these types are (known as Formats in Twitter), and to identify the HTTP operations that apply. For instance, if you want to search Twitter using ATOM, you would use. As always, the API documentation should also tell you what parameters need passing and how they should be formatted.

In order to write SongBird, I decided to use the WCF REST Starter Kit which is currently at Preview 2 status. For the purposes of SongBird, I was happy to use a library that hadn't officially been released yet, and which may undergo some changes before it is finally released. For the purposes of SongBird, it's unlikely that the elements of the component will undergo any serious revisions.

Not surprisingly, for those who know me, I have written SongBird using WPF. I have a long history of loving WPF, and use it in pretty much everything that I write now. The interface to SongBird was quickly pulled together using a combination of Visual Studio 2008 and Expression Blend 2. Internally, SongBird makes heavy use of two patterns: Model-View-ViewModel (MVVM) and Mediator. I'm not going to go into MVVM in too much depth, except to show how it helped in writing SongBird - if you want to know more about it, please read Josh Smith's, Karl Shifflett's, and Sacha Barber's excellent articles on MVVM. The Mediator pattern is used to relay messages backwards and forwards between Views (well, strictly speaking, it's between ViewModels and not Views). The themes in SongBird come from the WPFThemes project on CodePlex, and I've also made fairly heavy use of David Anson's excellent BackgroundTaskManager class to neaten up my background tasks.

When SongBird starts up, the user is presented with the following screen: The only thing the user can do is enter their account name and password, and minimize or close the window.
The portion of the screen at the bottom (the status portion) is disabled, and the posts tabs are hidden until the user has been authenticated. When the user types in their account name and password, the Log in button becomes enabled. This is all accomplished through simple two way databinding and the judicious use of ICommand to wire up commands. The code for the login screen is found in ViewModel/LoginViewModel.cs in the solution.

When the user clicks Log in, SongBird kicks the login process off on a background thread, like so:

private void SignInExecute(object parameter)
{
    try
    {
        string accountName = AccountName;
        string password = Password;
        bool rememberMe = RememberMe;
        Mouse.OverrideCursor = Cursors.Wait;
        LoginMessage = string.Empty;
        BackgroundTaskManager.RunBackgroundTask(
            () => { return Login(); },
            (result) =>
            {
                Mouse.OverrideCursor = null;
                if (result == null)
                {
                    Mediator.Instance.NotifyColleagues(
                        ViewModelMessages.UserAuthenticated, false);
                    LoginMessage = "Unfortunately, Twitter did not recognise" +
                                   " this username and " +
                                   "password. Please try again";
                }
                else
                {
                    LoginResult(result as XElement);
                }
            });
    }
    catch
    {
        LoginMessage = "We are sorry, SongBird was unable" +
                       " to connect to Twitter. " +
                       "Please check your network connection and try again.";
    }
}

private XElement Login()
{
    try
    {
        return new LoginManagement().Login(AccountName, Password);
    }
    catch (ArgumentOutOfRangeException ex)
    {
        LoginMessage = "We are sorry, SongBird was unable" +
                       " to connect to Twitter. " +
                       "Please check your network connection and try again.";
    }
    return null;
}

private void LoginResult(XElement root)
{
    SaveDetails("AccountName", AccountName);
    SaveDetails("Password", Password);
    SaveDetails("RememberMe", RememberMe);
    LoginMessage = "You were successfully logged in.";
    Mediator.Instance.NotifyColleagues(ViewModelMessages.UserAuthenticated, true);
}

private void SaveDetails(string key, object value)
{
    App.Current.Properties.Remove(key);
    App.Current.Properties.Add(key, value);
}

The SignInExecute method is triggered by the command associated with the Log in button. It starts off by changing the mouse cursor to the wait cursor, before it creates a background login task. If the log in completes successfully, then the user is informed of this and the Mediator notifies all of the interested ViewModels (I'll shorten this to VMs from now on in) that the user has been successfully authenticated. Note that we save the account name, password, and the remember me flag to the application properties bag so that we can use them in other VMs with ease.

So, how does the login actually work? If you remember, we said earlier that REST uses the URL to communicate. There is a little bit more to this, and this is where the WCF REST starter kit starts to provide us with that little bit more support than writing our own code would. As you can imagine, it would not be secure to pass the account name and password as a querystring parameter. Instead, we need to set up some credentials that get passed with the REST request. What we need to do is create an instance of an HttpClient object and give it some credentials - the account name and password that the user has supplied. HttpClient is a utility class designed to manipulate the request and response objects in a much easier, more natural way. We're only going to scratch the surface of HttpClient here, as it provides a lot more power than we need to communicate with the Twitter API.
The actual login object looks like the following:

private const string BASEADDRESS = "";
private const string VERIFYCREDENTIALS = "verify_credentials.xml";

public XElement Login(string accountName, string password)
{
    using (HttpClient client = new HttpClient(BASEADDRESS))
    {
        client.TransportSettings.Credentials =
            new NetworkCredential(accountName, password);
        using (HttpResponseMessage response = client.Get(VERIFYCREDENTIALS))
        {
            response.EnsureStatusIsSuccessful();
            return response.Content.ReadAsXElement();
        }
    }
}

When we create the HttpClient object, we need to give it a base address for our operations. Again, the Twitter API comes to our rescue, and we find that user verification is present at. (If we wanted to use a different encoding (such as JSON), we could, but we are going to use XLinq to query the information returned by this request.) The account name and password are supplied as credentials to the HttpClient TransportSettings.

Our code is going to attempt the authentication using the GET verb. HttpClient makes this simple by providing the Get method, and we provide the location of the operation we want to perform here as a parameter to the method. Cunningly enough, other verbs are named in a similar meaningful fashion, so a PUT is Put, and a POST is Post.

A feature of the WCF REST implementation that I really like is its provision of the EnsureStatusIsSuccessful method, and I heartily recommend that you include this in all of your requests. Basically, this method ensures that the application receives a HTTP Status 200, and throws an exception if it doesn't. This is a quick and convenient way to ensure that the application successfully connects to the target.

Finally, we retrieve the content from the response (inside a HttpResponseMessage) and return it as an XElement object.
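For comparison, the same credential-carrying GET can be sketched with Python's standard library (Twitter's 2009-era API used HTTP Basic authentication; the base address is elided here just as it is in the article, and the example URL in the test is hypothetical):

```python
import base64
import urllib.request

def basic_auth_get(base_address, path, account_name, password):
    """Build a GET request with HTTP Basic credentials attached,
    mirroring HttpClient + NetworkCredential in the article."""
    token = base64.b64encode(
        "{}:{}".format(account_name, password).encode()).decode()
    request = urllib.request.Request(base_address + path, method="GET")
    request.add_header("Authorization", "Basic " + token)
    return request   # urllib.request.urlopen(request) would send it
```

The EnsureStatusIsSuccessful check has no direct analogue here; urlopen raises an HTTPError for non-2xx responses, which serves the same purpose.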
At this point, assuming that the user has successfully been authenticated, the mediator has notified interested VMs that UserAuthenticated is set to true. This causes the posts tabs to be displayed, and the update status area to be enabled.

The images have been blurred to preserve the anonymity of the posters. The code to read the posts again runs on a background thread. This VM provides a bit more functionality because it sets SongBird to reread posts every 5 minutes, or when the users update their status. Again, the VMs don't need to know about each other's existence because the Mediator will notify this code that it needs to reread the posts - it's a simple but very powerful mechanism, and one that I recommend that you take a good look at.

protected override void Initialize()
{
    base.Initialize();
    Mediator.Instance.Register((object o) =>
    {
        ReadTwitter();
        _timer = new Timer(300000); // 5 minute refresh interval
        _timer.Elapsed += new ElapsedEventHandler(timer_Elapsed);
        _timer.Enabled = true;
        _timer.Start();
    }, ViewModelMessages.UserAuthenticated);
    Mediator.Instance.Register((object o) =>
    {
        _timer.Stop();
        ReadTwitter();
        _timer.Start();
    }, ViewModelMessages.StatusUpdated);
}

The code to actually read the posts is:

private void ReadTwitter()
{
    try
    {
        _accountName = App.Current.Properties["AccountName"].ToString();
        _password = App.Current.Properties["Password"].ToString();
        Mouse.OverrideCursor = Cursors.Wait;
        BackgroundTaskManager.RunBackgroundTask(
            () => { return DoReadTwitter(); },
            (posts) =>
            {
                Mouse.OverrideCursor = null;
                List<PostItemModel> postItems = new List<PostItemModel>();
                if (posts != null && posts.Count() > 0)
                {
                    // Read through the posts and update them...
                    foreach (TwitterPost post in posts)
                    {
                        postItems.Add(PostItemModel.Fill(post));
                    }
                    Model.Clear();
                    Model.AddRange(postItems);
                }
            });
    }
    catch (Exception) { }
}

The method to read Twitter is set up in a similar way to the method that verifies the user credentials.
What quickly becomes apparent is how well thought out the design actually is, and how easy the consumption of REST services actually is:

private static List<TwitterPost> RetrieveStatuses(string accountName,
    string password, string location)
{
    List<TwitterPost> posts = new List<TwitterPost>();
    using (HttpClient client = new HttpClient(BASEADDRESS))
    {
        client.TransportSettings.Credentials =
            new NetworkCredential(accountName, password);
        using (HttpResponseMessage response = client.Get(location))
        {
            response.EnsureStatusIsSuccessful();
            XElement root = response.Content.ReadAsXElement();
            var statuses = root.Descendants("status");
            foreach (XElement status in statuses)
            {
                posts.Add(TwitterPost.Fill(status));
            }
        }
    }
    return posts;
}

When the posts are being read and added to our collection for display with the aid of our ViewModel, we don't want to reread the same image over and over. The image information is returned as part of the XML from the status as a profile URL. We use this URL to retrieve the image using the following helper class:

public static class ImageManager
{
    private static Dictionary<string, BitmapImage> _profileImages =
        new Dictionary<string, BitmapImage>();

    public static BitmapImage GetImage(string key)
    {
        return _profileImages[key];
    }

    public static void AddImage(string key)
    {
        if (!_profileImages.ContainsKey(key))
        {
            _profileImages.Add(key, new BitmapImage(new Uri(key)));
        }
    }
}

With the aid of this, our posts bind to the image by retrieving the BitmapImage that maps to the relevant profile URL.

Finally, we have the update status code. Again, this executes on a background thread, and it raises the StatusUpdated notification from the Mediator, informing the posts VMs that they need to update.

The interesting thing behind the HttpClient scenes here is the provision of a handy method to encode the status text for passing as a parameter. In our code, we wrap the status text post with Uri.EscapeUriString to encode the text.
SongBird makes use of a couple of helpers that I've blogged about in the past. There's an implementation of a bulk loadable ObservableCollection, and a databound PasswordBox attached property that you are free to use as you see fit.

You may receive the exception "Error 5 The tag 'VisualStateManager.VisualStateGroups' does not exist in XML namespace 'clr-namespace:System.Windows;assembly=WPFToolkit'. Line 67 Position 38. Theme.xaml 67 38 SongBird" when you compile SongBird. This is a problem with the WPFToolKit DLL and you can safely ignore it. SongBird will still compile safely, and Visual Studio seems to ignore this error.

SongBird started off as a simple intellectual exercise aimed at testing how the WCF REST functionality stands up to higher load sites, and has impressed me with the ease of development it provides. The question has become - am I finished with SongBird? Currently, SongBird does the things I do with Twitter, but it could do so much more. If there is sufficient interest in it, if people think it has potential as a Twitter client, then I would look to formalise the development of SongBird and implement some of the features such as friends management and turn SongBird into a fully featured Twitter client. What do you think? It's up to you.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Dr.Luiji wrote: NOTE: the name SongBird is already used for an open source music player.
Cheers, Karl

Karl Shifflett wrote: Nice work, didn't know about the REST toolkit.

Sacha Barber wrote: Ba humbug to you Shifflett.
Sacha Barber wrote: Anyway Pete have a 5

Karl Shifflett wrote: FYI: There is a Network.NetworkAvailabilityChanged Event that you can subscribe to to track the current state of the computer's network connection. We also have a property My.Computer.Network.IsAvailable that can be used to check the status of the network before using it. (Yes, you have to reference a VB assembly, but it works!)
http://www.codeproject.com/Articles/37434/SongBird-a-Twitter-Hybrid-Smart-Client
An alternative to handle state in React: the URL!

GaelS · 6 min read

For a French version of this article, go here :)

The state of a React app

Managing the global state of a React app is an endless war between libraries. Therefore, in order to add my 2 cents, we'll see how to do the exact same thing using the URL and react-router.

URL FTW

In a single page application, the URL does not really matter. Most of the time, it is just an endpoint to request all the assets and that's it. Landing on one URL or another would have no impact on what you'll see on the first loading of the app. Let's change that.

Some code

I started to develop this idea in a side project (code and demo to play with :)) However, for this article, I have redeveloped a codesandbox to focus only on what matters here. First of all, how will we use the URL? Well, we'll use everything located after the ? in the URL, the so-called search params. Some docs here on the subject.

Get values from the URL

In the context of this article, we are going to look for only one parameter that we'll name query. To retrieve that parameter (if it is indeed in the URL eg. ), we are going to check the ... search params. Luckily, they are easy to find, located in the window object. More precisely in window.location.search. Therefore, if we request:, what we get in the console is:

console.log(window.location.search); // "?query=react"

Ideally, rather than a string, it would be more convenient to manipulate a properly formatted JS object. To achieve that, instead of splitting the URL by = and ?, we'll use the URLSearchParams object, available in recent browsers. Otherwise, you can polyfill it thanks to this lib for instance.
In code, this yields:

function getParams(location) {
  const searchParams = new URLSearchParams(location.search);
  return {
    query: searchParams.get('query') || '',
  };
}

Therefore,

const params = getParams({ search: '?query=react' });
console.log(params); // { query: "react" }

Now that we can get an object out of the URL, we'll make it work with our app, which uses react-router. We'll therefore create a router which handles a route, getting the properties in the URL as props.

import React from "react";
import { render } from "react-dom";
import { BrowserRouter as Router, Route } from "react-router-dom";

// ... getParams code above

// A simple component to display the value of the query...
// which is for now unknown, so we'll instantiate it with an empty value
const MainPage = (props) => {
  let query = '';
  return (
    <h2>{`Query : ${query}`}</h2>
  );
}

const App = () => (
  <React.Fragment>
    <Router>
      <React.Fragment>
        <Route path="/" component={MainPage} />
      </React.Fragment>
    </Router>
  </React.Fragment>
);

render(<App />, document.getElementById("root"));

To get the actual value of query, we'll have to use the getParams function we wrote, together with the implicit props that the Route component provides:

<Route path="/" component={MainPage} />

If we logged those props, we'd get:

{match: Object, location: Object, history: Object, /* other stuff */}

What is of interest here is the location object, similar to the window.location we manipulated earlier. Therefore, we can update the MainPage component:

const MainPage = (props) => {
  const { location } = props;
  const { query } = getParams(location);
  return (
    <h2>{`My query: ${query}`}</h2>
  );
}

Update the URL (and the state)!

Now that we can read from the URL, we are going to implement a way to update the URL in accordance with the state of our app.
For that, a simple input will do:

class InputPage extends React.Component {
  state = { inputValue: "" };

  updateInputValue = e => this.setState({ inputValue: e.target.value });

  render() {
    return (
      <React.Fragment>
        <input
          type="text"
          placeholder="Change your URL !"
          value={this.state.inputValue}
          onChange={this.updateInputValue}
        />
        <input type="button" value="Change the URL" onClick={null} />
      </React.Fragment>
    );
  }
}

So far, our component manages an internal state to display its current value, but we still have to implement the onClick function to update the URL with that very same value. We saw that the implicit props object from Route looks like this:

{match: Object, location: Object, history: Object, /* other values */}

What is of interest here is the history object (additional information about history here), and in particular its push function, which according to the React Router documentation:

Pushes a new entry onto the history stack

To say it simply, push will allow us to update the URL! So if the query value in our input is javascript, we'll have to update our URL to ?query=javascript. We therefore need to generate the new searchParams of our URL. In order to do that, the URLSearchParams object will help us once again:

function setParams({ query = "" }) {
  const searchParams = new URLSearchParams();
  searchParams.set("query", query);
  return searchParams.toString();
}

Mind that without a default value for query, when query is actually undefined the generated URL would be ?query=undefined...

Now we can write:

const url = setParams({ query: "javascript" });
console.log(url); // "query=javascript"

We can then implement the onClick in the input component.

class InputPage extends React.Component {
  state = { inputValue: "" };

  updateInputValue = e => this.setState({ inputValue: e.target.value });

  updateURL = () => {
    const url = setParams({ query: this.state.inputValue });
    // do not forget the "?" !
    this.props.history.push(`?${url}`);
  };

  render() {
    return (
      <React.Fragment>
        <input
          type="text"
          className="input"
          placeholder="What am I looking for ?"
          value={this.state.inputValue}
          onChange={this.updateInputValue}
        />
        <input
          type="button"
          className="button"
          value="Update the URL !"
          onClick={this.updateURL}
        />
      </React.Fragment>
    );
  }
}

Now if we change the value of the input, a click on the button will trigger the update of the URL and the state.

One of the nice things that comes from having the state of your app in the URL appears when you copy/paste a link. Given that the state is included in this URL, we'll find the app in a specific state on first load! When you deal with a search engine, for instance, you can trigger the query as soon as the app is loaded. In this application, I use react-apollo, but in a naive way we can implement the same thing with any HTTP client. Let's create a component which triggers a request using axios and the GitHub REST API (which does not require any authentication) as soon as it gets props, using some of its lifecycle methods.

const httpClient = axios.create({ baseURL: "" });

class ResultsPage extends React.Component {
  state = { results: [], loading: false, error: false };

  // Search as soon as it is mounted !!
  componentDidMount() {
    return this.searchRepositories(this.props.query);
  }

  // Search again as soon as the query value is updated
  componentWillReceiveProps(nextProps) {
    if (nextProps.query !== this.props.query) {
      this.setState({ query: nextProps.query });
      return this.searchRepositories(nextProps.query);
    }
  }

  searchRepositories = query => {
    // handle the case where query is undefined
    if (!query) {
      return this.setState({ results: [] });
    }
    this.setState({ loading: true, error: false });
    // the actual search on GitHub
    return httpClient
      .get(`/search/repositories?q=${query}`)
      .then(({ data }) =>
        this.setState({ results: data.items, loading: false })
      )
      .catch(e => this.setState({ loading: false, error: true }));
  };

  render() {
    return (
      <div>
        {this.state.results.map(repo => (
          <div key={repo.id}>
            <a href={repo.html_url}>{repo.name}</a>
            <div>{`by ${repo.owner.login}`}</div>
          </div>
        ))}
      </div>
    );
  }
}

That's it! We now have a component which triggers a request whenever the query param contained in the URL is updated!

As mentioned earlier, you can find a live example here. In our example, we deal with only one search param, but it becomes really cool and powerful when more components can modify the URL. For instance, pagination, filtering, sorting, etc. can become parameters of the URL, so that the URL carries several parameters at once. The code would be similar to what we did previously. By modifying the URL, the implicit props provided by the Route component are updated. It then triggers a rerender and updates all the children listening to a specific value in the URL. This leads to UI update(s) or side effect(s) such as an HTTP call.

Conclusion

That's it! This article was an attempt to show that there are still alternatives to handle the global state of a React app that can be very light in terms of bundle size (0kb in modern browsers ><) and still fun, simple, and providing for free a (kind of) deep-linking effect which I find very cool :) Hope you enjoy!
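To make the multi-parameter idea concrete, here is a small sketch of what getParams/setParams could look like once pagination and sorting join the query. The parameter names and defaults are my own illustration, not from the article:

```javascript
// Hypothetical helpers extending the article's single-param version
// to several query parameters. URLSearchParams keeps this tidy.
function getParams(search) {
  const sp = new URLSearchParams(search);
  return {
    query: sp.get("query") || "",
    page: Number(sp.get("page")) || 1, // fall back to page 1
    sort: sp.get("sort") || "stars",   // assumed default sort key
  };
}

function setParams({ query = "", page = 1, sort = "stars" }) {
  const sp = new URLSearchParams();
  sp.set("query", query);
  sp.set("page", String(page));
  sp.set("sort", sort);
  return sp.toString(); // e.g. "query=react&page=2&sort=stars"
}
```

Each component (search box, paginator, sort selector) would then call history.push with the string produced by setParams, exactly as the InputPage does for query alone.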
Thank you @GaelS for this. The example for search is a great way to use query params (QPs). In terms of more advanced component patterns though, do you have any use cases where you can optimize performance/minimize loading by using query params? By using QPs, can you somehow mitigate the almost endless return to previous routes that's triggered by history when users click the back button? I'm assuming that QPs allow you to update the UI without actual route changes? So does this translate to history only going back to route changes and not query changes?

Hi, thanks for the feedback :) I'll try to answer the 2 points you raised: Despite the use of the QPs, as I see it, optimizing the loading would mean optimizing the request over the network. So far, I only come up with general ideas (which apply regardless of QPs) such as caching, smart prefetching (= the request being sent before the user even hits enter), and things like that. As I understand your question here, to save bandwidth and improve UX, I think caching the requests' responses could be - again - the solution here. With QPs, going back in history means querying something you already have, so better use a cache (which comes for free with react-apollo for instance).

Thanks a lot for your reply, especially on caching. I liked the comment earlier and was hoping to get back to it. Other than when you use react-apollo, which other libraries do you use for caching in non-Apollo projects?

With a classic REST API, there are several strategies to deal with caching. 1) You can have a REST API providing a Cache-Control in its HTTP headers to help the browser minimize server round-trips when possible (an interesting link on the subject). It's a good one since little to no work is required on the front-end side.
2) You can have a reverse proxy on your back-end to filter incoming requests and use the cache when possible. You do not save bandwidth but at least you reduce the server's load. 3) However, if you are not in charge of the API but you still want a caching method, a naive way of doing it is to use memoization. Just a big word to describe a function that can save the results of its computation. Therefore, if it's called twice with the same arguments, the second call costs no compute time. You can achieve this using a closure in JavaScript. Let's say you use axios to query whatever API you want with an id as an argument (mind that it's kind of pseudo-code - it won't run as is). Obviously, it's a very very very naive implementation which would require additional work to deal with expiry dates, invalidation of the cache if a mutation is sent to the server, and so on. But I hope you'll get the idea of how the subject could be tackled :)

I definitely do. This has sure come in handy, and just cycling back to say thank you.

Thanks Gael, really enjoyed your walkthrough - delightful and enlightening! btw love your miami vice styling!
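The memoization idea described in the comment above could be sketched like this. This is an illustrative rewrite (the original pseudo-code did not survive extraction); the fetcher shape and names are assumptions:

```javascript
// Naive memoization of an async fetcher via a closure.
// `fetchById` stands in for something like (id) => axios.get(`/items/${id}`).
function memoize(fetchById) {
  const cache = new Map(); // id -> Promise of the result

  return function memoizedFetch(id) {
    if (!cache.has(id)) {
      cache.set(id, fetchById(id)); // first call: do the real request
    }
    return cache.get(id); // later calls: reuse the stored promise
  };
}

// Usage with a fake fetcher that counts real "network" calls
let calls = 0;
const fetchUser = id => {
  calls += 1;
  return Promise.resolve({ id, name: `user-${id}` });
};

const cachedFetchUser = memoize(fetchUser);
```

Caching the promise (rather than the resolved value) also deduplicates concurrent requests for the same id. As the commenter notes, a real version would need expiry and invalidation on mutations.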
https://dev.to/gaels/an-alternative-to-handle-global-state-in-react-the-url--3753
Hi, I am trying to find all lists of length x with elements a, b, and c. To this end, I have created the class below, but it is not quite working and I am having trouble figuring out what to change. Does anyone have any insight? class c: def __init__(self): self.traits = 4 # number of list elements self.types = 3 # elements can be 0, 1, or 2 def a(self): l = [] for i in range(self.traits): l.append(self.b(str(i))) return l def b(self, s): if len(s) == self.types: return s for i in range(self.traits): return self.b(s + str(i)) i = c() lst = i.a() Thanks in advance, -d Grokbase › Groups › Python › python-list › October 2004
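For completeness (this is not part of the original thread): the stated goal, every length-4 list over the elements 0, 1, 2, is a Cartesian product, which the standard library can enumerate directly. A minimal sketch:

```python
from itertools import product

def all_lists(traits=4, types=3):
    """Every list of length `traits` whose elements are in 0..types-1."""
    return [list(combo) for combo in product(range(types), repeat=traits)]

lists = all_lists()
print(len(lists))  # 3**4 = 81 lists
```

The recursive version in the post goes wrong because `return` inside the for loop exits after the first iteration, and because the base-case check compares against `self.types` (3) rather than the target length; itertools.product sidesteps both issues.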
https://grokbase.com/t/python/python-list/04aq3e4j8x/simple-recursion-help
If you by any chance would like to use the excellent Python shell IPython as an embedded shell in an application or virtual terminal, you can instantiate it using the IPShellEmbed class provided by IPython. Let's say that your environment doesn't like IPython's default color scheme and you need to start it without colors (same as doing ipython -colors NoColor). You can use an embedded shell in your code the following way:

import IPython
embedshell = IPython.Shell.IPShellEmbed(argv=["-colors", "NoColor"])
embedshell()

argv are the arguments you would normally pass to the main ipython script. I hope that was useful 🙂

One thought on “How to embed an IPython shell in your application”

This has apparently changed in new versions of IPython. Now you just use: from IPython import embed and then call the embed() function (no instantiation necessary)
https://mybravenewworld.wordpress.com/2007/11/27/how-to-embed-an-ipython-shell-in-your-application/
NAME
libblkid - block device identification library

SYNOPSIS
#include <blkid.h>
cc file.c -lblkid

DESCRIPTION
The libblkid library is used to identify block devices as to their content (for example, the filesystem type) and to extract additional information such as filesystem labels/volume names and unique identifiers. Block device information is normally kept in a cache file, and is verified to still be valid before being returned to the user (if the user has read permission on the raw block device, otherwise not). The cache file also allows unprivileged users (normally anyone other than root, or those not in the "disk" group) to locate devices by label/id. The standard location of the cache file can be overridden by the environment variable BLKID_FILE.

In situations where one is getting information about a single known device, it does not impact performance whether the cache is used or not (unless you are not able to read the block device directly). In some cases (modular kernels), block devices are not even visible until after they are accessed the first time, so it is critical that there is some way to locate these devices without enumerating only visible devices, so the use of the cache file is required in this situation.

CONFIGURATION FILE
The standard location of the /etc/blkid.conf config file can be overridden by the environment variable BLKID_CONF.

AVAILABILITY
libblkid is part of the util-linux package since version 2.15 and is available from the util-linux project.
https://man.cx/libblkid(3)
__thrsigdivert — synchronously accept a signal

#include <sys/signal.h>
#include <sys/time.h>

int __thrsigdivert(sigset_t set, siginfo_t *info, const struct timespec *timeout);

The __thrsigdivert() function is used to implement sigwait(). It selects a signal pending for the calling thread or process from set, atomically clears it from that set of pending signals, and returns that signal number. The signals in set should be blocked in all threads in the process at the time of the call to __thrsigdivert(); otherwise the signal may be delivered to some thread that does not have it blocked. If more than one thread is using __thrsigdivert() to wait for the same signal, no more than one of these threads shall return from __thrsigdivert() for any given signal that is sent. Which thread returns from __thrsigdivert() if more than a single thread is waiting is unspecified.

If the info argument is not NULL, then a siginfo_t will be stored there which has the selected signal number in its si_signo member and the cause of the signal in its si_code member. If the timeout argument is not NULL and no selected signal is currently pending, then __thrsigdivert() will wait no longer than the specified time period for a signal to arrive before failing.

If successful, the number of the signal that was accepted is returned. Otherwise, a value of -1 is returned and errno is set to indicate the error.

__thrsigdivert() will succeed unless:

[EWOULDBLOCK] The timeout period expired before a signal in set became pending.

sigaction(2), sigprocmask(2), sigwait(3)

The __thrsigdivert() function is specific to OpenBSD and should not be used in portable applications. A thrsigdivert() syscall appeared in OpenBSD 3.9. The info and timeout arguments were added in OpenBSD 4.9. The thrsigdivert() syscall was created by Ted Unangst <tedu@OpenBSD.org>. This manual page was written by Philip Guenther <guenther@OpenBSD.org>.
https://man.openbsd.org/__thrsigdivert.2
The first category of control flow statements we'll talk about are the conditional statements. A conditional statement is a statement that specifies whether some associated statement(s) should be executed or not. C++ supports two basic kinds of conditionals: if statements (which we introduced in lesson 4.10 -- Introduction to if statements, and will talk about further here) and switch statements (which we'll cover in a couple of lessons).

Quick if-statement recap

The most basic kind of conditional statement in C++ is the if statement. An if statement takes the form:

if (condition)
    true_statement;

or with an optional else statement:

if (condition)
    true_statement;
else
    false_statement;

If the condition evaluates to true, the true_statement executes. If the condition evaluates to false and the optional else statement exists, the false_statement executes. Here is a simple program that uses an if statement with the optional else statement:

#include <iostream>

int main()
{
    std::cout << "Enter a number: ";
    int x{};
    std::cin >> x;

    if (x > 10)
        std::cout << x << " is greater than 10\n";
    else
        std::cout << x << " is not greater than 10\n";

    return 0;
}

This program works just like you'd expect:

Enter a number: 15
15 is greater than 10

Enter a number: 4
4 is not greater than 10

If or else with multiple conditional statements

New programmers often try something like this:

#include <iostream>

int main()
{
    std::cout << "Enter your height (in cm): ";
    int x{};
    std::cin >> x;

    if (x > 140)
        std::cout << "You are tall enough to ride.\n";
    else
        std::cout << "You are not tall enough to ride.\n";
        std::cout << "Too bad!\n"; // focus on this line

    return 0;
}

However, consider the following run of the program:

Enter your height (in cm): 180
You are tall enough to ride.
Too bad!

This program doesn't work as expected because the true_statement and false_statement can only be a single statement. The indentation is deceiving us here -- the above program executes as if it had been written as follows:

#include <iostream>

int main()
{
    std::cout << "Enter your height (in cm): ";
    int x{};
    std::cin >> x;

    if (x > 140)
        std::cout << "You are tall enough to ride.\n";
    else
        std::cout << "You are not tall enough to ride.\n";

    std::cout << "Too bad!\n"; // always executes

    return 0;
}

This makes it clearer that "Too bad!" will always execute. However, it's common to want to execute multiple statements based on some condition. To do so, we can use a compound statement (block):

#include <iostream>

int main()
{
    std::cout << "Enter your height (in cm): ";
    int x{};
    std::cin >> x;

    if (x > 140)
    {
        std::cout << "You are tall enough to ride.\n";
    }
    else
    {
        std::cout << "You are not tall enough to ride.\n";
        std::cout << "Too bad!\n"; // now part of the else
    }

    return 0;
}

Remember that blocks are treated as a single statement, so this now works as expected:

Enter your height (in cm): 180
You are tall enough to ride.
Enter your height (in cm): 130
You are not tall enough to ride.
Too bad!

To block or not to block single statements

There is debate within the programmer community as to whether single statements following an if or else should be explicitly enclosed in blocks or not. There are two reasons typically given as rationale for doing so. First, consider the following snippet:

if (age >= minDrinkingAge)
    purchaseBeer();

Now let's say we're in a hurry and modify this program to add another ability:

if (age >= minDrinkingAge)
    purchaseBeer();
    gamble(); // will always execute

Oops, we've just allowed minors to gamble. Have fun in jail!

Second, it can make programs more difficult to debug. Let's say we have the following snippet:

if (age >= minDrinkingAge)
    addBeerToCart();

checkout();

Let's say we suspect something is wrong with the addBeerToCart() function, so we comment it out:

if (age >= minDrinkingAge)
//    addBeerToCart();

checkout();

Now we've made checkout() conditional, which we certainly didn't intend. Neither of these problems occurs if you always use blocks after an if or else statement.

The best argument for not using blocks around single statements is that adding blocks makes you able to see less of your code at one time by spacing it out vertically, which makes your code less readable and can lead to other, more serious mistakes. The community seems to be more in favor of always using blocks than not, though this recommendation certainly isn't ubiquitous.

Best practice

Consider putting single statements associated with an if or else in blocks.

A middle-ground alternative is to put single-lines on the same line as the if or else:

if (age >= minDrinkingAge) purchaseBeer();

This avoids both of the downsides mentioned above at some minor cost to readability.

Implicit blocks

If the programmer does not declare a block in the statement portion of an if statement or else statement, the compiler will implicitly declare one. Thus:

if (condition)
    true_statement;
else
    false_statement;

is actually the equivalent of:

if (condition)
{
    true_statement;
}
else
{
    false_statement;
}

Most of the time, this doesn't matter. However, new programmers sometimes try to do something like this:

#include <iostream>

int main()
{
    if (true)
        int x{ 5 };
    else
        int x{ 6 };

    std::cout << x << '\n';

    return 0;
}

This won't compile, with the compiler generating an error that identifier x isn't defined.
This is because the above example is the equivalent of:

#include <iostream>

int main()
{
    if (true)
    {
        int x{ 5 };
    }
    else
    {
        int x{ 6 };
    }

    std::cout << x << '\n'; // x is out of scope here

    return 0;
}

In this context, it's clearer that variable x has block scope and is destroyed at the end of the block. By the time we get to the std::cout line, x doesn't exist.

We'll continue our exploration of if statements in the next lesson.

Great lesson! I look forward to the rest of this chapter.

1. Section "Quick if-statement recap": > This program works just like you'd expect: 5 should be 4 (or vice versa)

2. > Now we've made checkout() conditional, which we certainly didn't intend. Oh, wow. I've always thought that if-statements execute the next line, not the next statement. I forgot C++ ignores whitespace.

Fixed! (Also fixed the other lessons based on your feedback). Thank you!

Hey man, you do competitive programming?

Nope :)

So I guess you are learning cpp for: 1. CS subject 2. knowledge thirst 3. system programming Which one is it?

I don't really know yet. I'm 16

Oh okay, carry on :D Is it your first language or you have experience on another language too?

I have some experience with other languages, but I didn't get far with any of them, and I forgot everything. learncpp.com is by far the best tutorial I've encountered, and the only one that I'll complete from start to finish

Agreed. This is by far the best website for c++ that I have encountered. I tried some well known books before. But didn't find them amusing enough. You know I know advanced python. Even wrote games using that (lol, game with python!) But running speed of python is terrible especially for performance critical stuffs like games. So I switched to c++.

> I tried some well known books before

I'm not a fan of books, because 1. They get outdated 2. They often contain mistakes that don't get fixed 3. They provide no way to ask the author questions, and to view others' questions and answers.
This website satisfies all three of those requirements :)

> running speed of python is terrible
> So I switched to c++

One of the main reasons why I want c++ to be the first language I fully understand. It's fast, easy to use and a lot of languages stem from it (actually C, but C doesn't have OOP) Happy learning :D

Yeah, pretty good points. Which chapter are you in now?

Valid points. But I still recommend people to use textbooks. I started from Bjarne's principles and it was the first time I enjoyed learning a programming language. What I like about the book is that the presentation of the material is full and well organized. And I can feel the difficulty level increasing. Unlike many educational books, the explanations for each section always attract curiosity and fun! :D I take advantage of this for anything I need to know beyond practical needs. The rest, from learncpp. I can say that the two of them complement each other well.

Unity or Unreal? (¬‿¬) I am also learning C++ currently for game dev. Though C# was a good choice, it's mainly focused on the Microsoft ecosystem. C++ is the complete beast & it can be used in almost anything. After all, performance matters (◍•ᴗ•◍)

In the previous chapter S4.5-Enumerated Types: "Best practice Don't assign specific values to your enumerators." It's confusing. Which rule do we follow?

Like pretty much all programming, it depends. As a general rule you shouldn't assign specific values, but there are some cases where it makes sense. When in doubt follow generalized rules unless you have a good reason not to. Using an enum class in this case makes sense, it keeps things together all in one place, but an equally valid solution would have been to store them as constants in a namespace. This is just less work and serves the purpose of the example well. Choosing to store it as an enum instead of a constant means that it can be passed as a variable instead of just an int value, which can be really helpful sometimes.
You inherently get more information with an enum variable than you do with just an int. No specific values should have been assigned in this example, I've updated the lesson. Thanks for pointing it out!

What you quoted is a "Best practice", and that's also what @TigerTM described. Follow the "Best practice" unless you have a good reason not to. For specific enumerator values, a good reason to diverge from the best practice can be that the values must not change during updates or that the values are exposed to the user. Both these scenarios can be met with error code enums. If error code 42 meant "file not found" in program version 1.2.1, it should still have the same meaning in version 1.3.4. By assigning specific values, you're preventing the programmer from inserting an enumerator between others, thus shifting all values. Since the enumerator values aren't exposed in this example, there's no need for specific values to be used.

Hi Alex and Tiger, thank you for clarifying.

It allows multiple init statements of the same type but not different types, or am I missing a point here?

Correct. The init-statement ends at the semicolon.

Shouldn't the first line of the "Using logical operators with if statements" chapter end with "(covered in section 5.7 -- Logical operators)" instead of "(covered in section 3.6 -- logical operators)"?

Yep, thanks!
https://www.learncpp.com/cpp-tutorial/if-statements-and-blocks/
A package is a bundle of components that make up an application or piece of functionality that you are building. Packages are typically created so that their components can be clustered in a container for code migration, editing in an IDE, or distribution to other companies in the example of AppExchange packages. Packages are completely private to your salesforce instance, unless uploaded and published to AppExchange. Packages come in two forms: unmanaged and managed. Unmanaged packages can be used for a one-time distribution, like a template. These are ideal for sharing or migrating. This article provides an introduction to packaging and distribution on the Force.com Platform. After a simple tutorial showing how to create and distribute an unmanaged package, the article examines the foundational concepts found in packaging, and provides details behind the difference between the two types of packages. The role of AppExchange as a directory service is also explained, as well as its support for test drives and licensing. Creating and distributing a simple package really drives some of the key concepts home. This tutorial shows how to create and distribute a particular type of package, called an unmanaged package. We start by creating a simple Apex class, which we then package and distribute. If you have two Developer Edition environments yourself, you can follow along. Alternatively, just read on (it's really pretty simple) and install the package yourself. For the purposes of this tutorial we're going to package a simple Apex class to upload to AppExchange as a private listing. Here's the source: public class SimpleClass { public void foo() { System.debug('foo here'); } static testmethod void testFoo() { SimpleClass a = new SimpleClass(); a.foo(); } } To get going, log in to your Developer Environment and then go to Setup -> Develop -> Apex Classes to create the class. Now you're ready to create a package, so navigate to Setup -> Create -> Packages. 
Hit "New" to start. After providing a name for the package ( SimplePackage), you'll be given the opportunity to add package components. Packages are collections of components, and these components range the gamut of the Force.com Platform. For example, you can package an Apex class, a Visualforce page, triggers, email templates, documents and workflow rules (see references at bottom for links to entire list). For this tutorial, click “Add” and select a component type of "Apex Class" and select the SimpleClass defined above. You'll end up with a package definition that looks similar to the following figure. That's it! You can now hit "Upload". After providing a version name (I chose 14), you will have "uploaded" your package. Essentially you've just created an unmanaged package that now sits in the cloud, that others can install. You'll end up with a screen looking something like this: As you can see, this provides an installation link. That same installation link will be emailed to you as well to facilitate distribution of the installation link to people with which you want to share the package. Click here to access the installation link yourself. After someone clicks the installation link, they'll be asked to enter their credentials, after which they'll be presented with an installation screen: After hitting "Continue" and clicking through the security screens, the package will then be installed directly into your environment. In other words, if you now browse your Apex Classes, you'll find SimpleClass to be present. Moreover, if you navigate to Setup -> View Installed Packages you'll see that SimplePackage is listed there, which you can very easily uninstall. If you do uninstall it, all associated files will also be uninstalled; in our example this means the Apex class, SimpleClass. More interestingly, the package creator can deprecate the package, even delete it. 
This won't affect you if you have installed it though--it simply means that nobody else will be able to install it as the installation link will no longer be valid. Let's now look at packages, packaging and AppExchange in a little more detail. A package typically consists of a number of different components. For example, it may contain user interface components, database components such as objects and triggers, Apex code, page layouts, reports and more. All of these components can be bundled together into what is called a package, for the purposes of distribution and installation. A package also comes in two flavors: unmanaged and managed. The following sections provide details on these concepts. Many different types of components can be packaged, including Apex classes and triggers, Visualforce pages, s-controls1, custom objects, page layouts, reports, email templates, documents and workflow rules. 1 S-Controls have been deprecated and you should use Visualforce instead. Read the S-Control Deprecation article for more information. The package documentation (PDF) provides a comprehensive list of which components can be packaged, as well as a few caveats (for example, some components can only be packaged on custom objects that you write, not on standard objects). A package can range from a simple Apex class to a set of hundreds of these components. Each component may have dependencies, in which case the dependencies are automatically added. For example, if you add a Visualforce page to a package, and that page relies on a custom controller, then the custom controller will be added to the package as well, automatically. Likewise, if that automatically added custom controller class had a dependency on another component, that other component will also be added, and so on. All packages, whether unmanaged or managed, must have at least 75% cumulative test coverage across all Apex classes and triggers, in order to be deployed.
The tutorial in the previous section had the following method defined in the class: static testmethod void testFoo() { SimpleClass a = new SimpleClass(); a.foo(); } The testmethod keyword declares that the method is part of the test cases, and as you can see from the code, it instantiates an instance of the class and invokes a method. You would typically add assertions to ensure that expected responses are generated. See An Introduction to Apex for more on Apex and code coverage. Unmanaged packages are bundles of components that are primarily created to distribute components, together with their source code. For example, the package created in the tutorial was an unmanaged package. If you install the package, not only can you execute the Apex class distributed in that package, but also view the source code of that Apex class. For all intents and purposes you can think of unmanaged packages as creating a zip and handing it off to someone—you no longer have control over an installation of that package. It is distribution-by-copy. If someone clicks on the install link and installs the package, they get their own copy and that copy is forever independent of the creator of the package. If the creator subsequently deletes or modifies the package, your version remains unaffected. As a consequence of this you can modify or delete your version of the installed package, and its components, as you see fit. Unmanaged packages are great for template applications and free components that you want to distribute. After installation, everything that was installed as part of the package is open, just as if the package installer had created the components himself. The namespace of the original publishing org is not used in an unmanaged package. If the publishing org has a namespace, that namespace is removed from all components when the unmanaged package is uploaded. A great feature of packages (both managed and unmanaged) is that you can uninstall them. 
If you uninstall the package, all the components that were installed as part of the package are removed.

Managed packages add a whole range of functionality and behavior to packages--features that are not available to unmanaged packages. Another way of looking at managed packages is as a closed-source distribution model that supports upgrading, tracking and more--something that is obviously suited to commercial distribution. Think of this as distribution-by-reference. If someone wants to install your package, they install a reference to it, and the multi-tenant architecture of the platform handles the rest for you.

A user of your packaged application will be a user of the Force.com platform, so it makes sense for there to be a central directory of applications, making it easy to share, find and install applications with the rest of the salesforce.com community building on the platform. Enter AppExchange, an application directory service from salesforce.com that allows you to browse, install, and share apps and components built for the Force.com platform. The AppExchange offers a number of features that enhance packaging even further.

Managed and unmanaged packages can be placed on AppExchange--though if you are listing applications for commercial purposes, it makes sense to use managed packages only.

The Force.com platform provides a rich and powerful metadata model and API. This API encompasses a set of web service methods that provide programmatic access to the configuration and source of your application. Using the metadata API, for example, you can remotely access your organization and download all of the source code and configuration that composes your application. You can also use it to create and manipulate these metadata components. For a technical introduction to Force.com metadata, please read An Introduction to Force.com Metadata.

The metadata API is primarily used to migrate changes.
For example, you can use the metadata API and the tools that are built around it to migrate changes from a test environment, such as a sandbox organization, to a production organization. The API can also be used to build tooling around Force.com; for example, the Force.com IDE provides a familiar integrated development environment around your application by using the metadata API. Though the API does, in a sense, allow you to manipulate and transfer source components, this form of deployment is entirely different from the packaging described in this article. Note that the Apex source code in an installed managed package is hidden from the installing organization; hence managed packages provide a strong form of IP protection.

Managed and unmanaged packages can be listed on AppExchange. This provides a framework for additional functionality around packaging and distribution, which is summarized in the following sections.

Your application listing is what appears in the AppExchange and is made up of marketing materials. It is your "store-front" on the AppExchange. Customers can learn about your application by reviewing content such as a datasheet or whitepaper, seeing a demo, or test driving the application as well. They can also install your application by clicking the Get It Now button, or obtain additional information about your application if you choose to have a Learn More button instead. To learn how to start your listing, please read the Listing Technical Resources article.

Test drives are a way for customers to see your application in action in a controlled environment. You set up the demo and they get to use the application. This allows customers to quickly see how powerful your application is, and it becomes a great selling feature.

The License Management Application (LMA) facilitates tracking of installations and upgrades of the managed packages created by your organization. The LMA lets you apply licensing to managed packages built and owned by you, so that you can control how many users in a customer organization can access your package and for how long.
LMA is itself an application that can be found on AppExchange. See the License Management App product documentation for full details.

This article described how the Force.com platform provides a number of options for packaging and distributing applications. Unmanaged packages can be quickly and easily created to bundle together components that you want to distribute in an unmanaged fashion, by-copy. Managed packages provide by-reference functionality: with managed packages you, the package creator, retain some control over the package, the intellectual property in the package, and its use. The AppExchange provides a convenient marketplace for packages, and the License Management Application provides a way for you to tightly control the use of a packaged application by another organization.
https://developer.salesforce.com/page/An_Introduction_to_Packaging
ClassNotFound Error when compiling multiple object classes

I have a Store class which is an array of Person; I have Person, Date, Student, Undergraduate and Postgraduate classes. Everything runs in Eclipse, but I have to run this from the command line. I've copied the src files and tried to run the main program, but it's just saying it can't find any of my classes. They are all in the same folder, and in every class I have assigned the package. I've looked up importing classes and I tried:

    import oopinterface.Person;

etc. for all classes. Is there any way I can compile the main program so it will recognise the class files that are very clearly there? I'm on Windows, but it would help if I could have the 'other' way for Linux.

This is the compiler error:

    C:\Users\Liloka\Source\oopinterface>javac ContainerInterface.java
    ContainerInterface.java:41: cannot find symbol
    symbol  : class Store
    location: class oopinterface.ContainerInterface
            Store myList = new Store(); //Instance of Store
            ^
    ContainerInterface.java:688: cannot find symbol
    symbol  : class Person
    location: class oopinterface.ContainerInterface
        public Person getSupervisor()
               ^

...and it goes on to pick at every method in other classes (39 errors).

Thank you in advance!

Answers

It looks like your classes are in the package oopinterface. Compile the source files from the base directory of the package, like this:

    C:\Users\Liloka\Source> javac oopinterface\ContainerInterface.java

If you have the CLASSPATH environment variable set, then make sure that it is not set, or add the base directory of the package where the compiled class files can be found to the classpath. You can also use the -cp or -classpath option to tell javac where to find the compiled class files.

Do all your source files have a package oopinterface; declaration at the top of the file?
After compiling, to run the program from the command line, do something like this:

    java -cp C:\Users\Liloka\Source oopinterface.MainClass

where MainClass is the class that contains the public static void main(String[] args) method. (This assumes that your compiled class files are in the same directory as the sources.)
http://unixresources.net/faq/8318176.shtml
Port |Bug 483856 - Use $(CURDIR) to make pymake happy| to comm-central

RESOLVED FIXED in Thunderbird 3.1b2
(Reporter: kairo, Assigned: sgautherie)
Attachments: (1 attachment, 1 obsolete attachment)

As mentioned in bug 509095, it's OK to wallpaper over the lack of core_abspath in comm-central config.mk as long as our mainline is based on 1.9.1, but in the long run we probably want to do things right and port bug 483856 to comm-central (after we branch).

Blocks: C192ConfSync
Severity: normal → trivial
Flags: in-testsuite-
Target Milestone: Future → ---
Summary: Port bug 483856 to comm-central → Port |Bug 483856 - Use $(CURDIR) to make pymake happy| to comm-central

*Copy bug 483856 itself.
*PP_LOCALIZED_FILES part: was removed by TB bug 533043 and SM bug 521523.
*core_abspath: undo TB bug 509095, SM+TB changeset a09a7d1c82d7 (bug 463605) and part of SB changeset b9ea76f33831.

Comment on attachment 426993 [details] [diff] [review]
(Av1) Just copy it, Remove previous c-c workaround(s)

I did review all parts; but I'm not technically a mail/ or a calendar/ reviewer, so leaving mark's request open.
Attachment #426993 - Flags: review+

Comment on attachment 426993 [details] [diff] [review]
(Av1) Just copy it, Remove previous c-c workaround(s)

This either needs a refresh or this is fixed now.

What's left of patch Av1 after bug 545085 :-/
Attachment #426993 - Attachment is obsolete: true
Attachment #431064 - Flags: review?(bugzilla)

Comment on attachment 431064 [details] [diff] [review]
(Av2) Just copy it [Checkin: See comment 6]

>diff --git a/mail/base/Makefile.in b/mail/base/Makefile.in
>--- a/mail/base/Makefile.in
>+++ b/mail/base/Makefile.in
>@@ -68,9 +68,8 @@ endif
...
> include $(topsrcdir)/config/rules.mk
>-

Please drop this change, it's unnecessary here. r=Standard8 without that.
Attachment #431064 - Flags: review?(bugzilla) → review+

Comment on attachment 431064 [details] [diff] [review]
(Av2) Just copy it [Checkin: See comment 6]

Av2, with comment 5 suggestion(s).
Attachment #431064 - Attachment description: (Av2) Just copy it → (Av2) Just copy it [Checkin: See comment 6]
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: Thunderbird 3.1b1 → Thunderbird 3.1b2
https://bugzilla.mozilla.org/show_bug.cgi?id=509147
Created attachment 178412 [details]
patch for updating the port

- Update 1.15 --> 1.28

Changes:
- remove p5-Scalar-Does from build dependencies as not needed.
- add new build and test dependencies, except those which are core modules.
- add NO_ARCH=yes
- "portlint -AC" gives just non-relevant warnings.
- Tested (with and without built-in perl tests) at 11.stable-amd64 with poudriere-3.1.14, perl-5.24.1, and openssl from base.

To activate tests from within poudriere, the file Mk/Uses/perl5.mk was locally augmented with the lines:

.if defined(RUN_PERL_TESTS)
BUILD_DEPENDS+= ${TEST_DEPENDS}
pre-install: test
.endif

All tests say: PASS.

Created attachment 178620 [details]
patch for updating the port

Now with sorted *_DEPENDS.

(In reply to Sergei Vyshenski from comment #1)
The correct dependency is:

BUILD_DEPENDS= p5-MRO-Compat>0:devel/p5-MRO-Compat \
    p5-Params-Validate>0:devel/p5-Params-Validate \
    p5-SQL-Abstract>=1.73:databases/p5-SQL-Abstract \
    p5-namespace-clean>=0:devel/p5-namespace-clean
RUN_DEPENDS:= ${BUILD_DEPENDS}
TEST_DEPENDS= p5-List-MoreUtils>=0:lang/p5-List-MoreUtils \
    p5-Test-Exception>=0:devel/p5-Test-Exception \
    p5-Test-Pod>=0:devel/p5-Test-Pod

I could not find any clue that p5-Moo, p5-Sub-Quote and p5-Test-Deep are required for testing.

1. A few days ago (after this patch was prepared) p5-Moo was added as a new build dependency of the prerequisite port p5-SQL-Abstract. So it is not needed here. Correct. Sorry.

2. Sub::Defer is a sub-module of Sub::Quote, which is a prerequisite of Moo. So today p5-Sub-Quote is not needed here either. Correct. And sorry again.

3. Running perl tests from within poudriere using the method suggested by you, without p5-Test-Deep in TEST_DEPENDS:

Can't locate Test/Deep.pm in @INC (you may need to install the Test::Deep module)
<...>
Result: FAIL
Failed 7/9 test programs. 0/2 subtests failed.
*** Error code 2
Stop.

Thank you very much.

(In reply to Sergei Vyshenski from comment #3)
I found that SQL::Abstract::Test requires Test::Deep.
I'll update databases/p5-SQL-Abstract first.

A commit references this bug:

Author: sunpoet
Date: Mon Jan 16 15:35:38 UTC 2017
New revision: 431672

Log:
Update to 1.28
- Add TEST_DEPENDS
- Add NO_ARCH

PR: 215679
Submitted by: Sergei Vyshenski <svysh.fbsd@gmail.com>
Approved by: maintainer (timeout, 16 days)
Changes:
head/databases/p5-SQL-Abstract-More/Makefile
head/databases/p5-SQL-Abstract-More/distinfo

Committed. Thanks!
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=215679
Last Updated on December 10, 2020

Time series forecasting is a process, and the only way to get good forecasts is to practice this process. In this tutorial, you will discover how to forecast the monthly sales of French champagne with Python. Working through this tutorial will provide you with a framework for the steps and the tools for working through your own time series forecasting problems. After completing this tutorial, you will know:

- How to confirm.

Kick-start your project with my new book Time Series Forecasting With Python, including step-by-step tutorials and the Python source code files for all examples. Let's get started.

- Updated Mar/2017: Fixed a typo in the code example.
- Updated Apr/2019: Updated the link to the dataset.
- Updated Aug/2019: Updated data loading and date grouping to use the new API.
- Updated Feb/2020: Updated to_csv() to remove warnings.
- Updated Feb/2020: Fixed data preparation and loading.
- Updated May/2020: Fixed a small typo when making a prediction.
- Updated Dec/2020: Updated modeling for changes to the API.

Time Series Forecast Study with Python – Monthly Sales of French Champagne. Photo by Basheer Tome.

If you need help installing Python and the SciPy environment on your workstation, consider the Anaconda distribution that manages much of it for you. This script will help you check your installed versions of these libraries.

2. Problem Description

Download the dataset as a CSV file and place it in your current working directory with the filename "champagne.csv".

3.1 Validation Dataset

We will split the data into two sets:

- dataset.csv: Observations from January 1964 to September 1971 (93 observations)
- validation.csv: Observations from October 1971 to September 1972 (12 observations)

The validation dataset is about 11% of the original dataset. Note that the saved datasets do not have a header line, therefore we do not need to cater for this when working with these files later.

3.2
Model Evaluation

Model evaluation will only be performed on the data in dataset.csv prepared in the previous section. Model evaluation involves two elements:

- Performance Measure.
- Test Strategy.

3.2.1 Performance Measure

The observations are a count of champagne sales in millions of units. The persistence forecast achieves an RMSE of 3186.501. This means that on average, the model was wrong by about 3,186 million sales per month.

We can review the dataset from five perspectives:

- Summary Statistics.
- Line Plot.
- Seasonal Line Plots.
- Density Plots.
- Box and Whisker Plot.

5.1 Summary Statistics

- The mean is about 4,641, which we might consider our level in this series.
- The standard deviation (average spread from the mean) is relatively large at 2,486 sales.
- The percentiles along with the standard deviation do suggest a large spread to the data.

5.2 Line Plot

Some observations from the line plot:

- There may be an increasing trend of sales over time.
- There appears to be systematic seasonality to the sales for each year.
- The seasonal signal appears to be growing over time, suggesting a multiplicative relationship (increasing change).
- There do not appear to be any obvious outliers.
- The seasonality suggests that the series is almost certainly non-stationary.

Champagne Sales Line Plot

There may be benefit in explicitly modeling the seasonal component and removing it. You may also explore using differencing with one or two levels in order to make the series stationary. The increasing trend or growth in the seasonal component may suggest the use of a log or other power transform.

5.3 Seasonal Line Plots

We can confirm the assumption that the seasonality is a yearly cycle by eyeballing line plots of the dataset by year. The example below takes the 7 full years of data as separate groups and creates one line plot for each. The line plots are aligned vertically to help spot any year-to-year pattern.

Running the example creates the stack of 7 line plots. We can clearly see a dip each August and a rise from each August to December. This pattern appears the same each year, although at different levels.
This will help with any explicitly season-based modeling later.

Seasonal Per Year Line Plots

It might have been easier if all season line plots were added to the one graph, to help contrast the data for each year.

5.4 Density Plots

The distribution shape has a long right tail and may suggest an exponential distribution. This lends more support to exploring some power transforms of the data prior to modeling.

5.5 Box and Whisker Plot

The final year (1971) only contains 9 months and may not be a useful comparison with the 12 months of observations for the other years. Therefore, only data between 1964 and 1970 was plotted. Running the example creates 7 box and whisker plots side-by-side, one for each of the 7 years of selected data. Some observations from reviewing the plots include:

- The median values for each year (red line) may show an increasing trend.
- The spread or middle 50% of the data (blue boxes) does appear reasonably stable.
- There are outliers each year (black crosses); these may be the tops or bottoms of the seasonal cycle.
- The last year, 1970, does look different from the trend in prior years.

Champagne Sales Box and Whisker Plots

The observations suggest perhaps some growth trend over the years and outliers that may be a part of the seasonal cycle. This yearly view of the data is an interesting avenue and could be pursued further by looking at summary statistics from year-to-year and changes in summary stats from year-to-year.

6. ARIMA Models

In this section, we will develop Autoregressive Integrated Moving Average, or ARIMA, models for the problem. We will approach modeling by both manual and automatic configuration of the ARIMA model. This will be followed by a third step of investigating the residual errors of the chosen model. As such, this section is broken down into 3 steps:

- Manually Configure the ARIMA.
- Automatically Configure the ARIMA.
- Review Residual Errors.

6.1 Manually Configured ARIMA

The ARIMA(p,d,q) model requires three parameters and is traditionally configured manually.
Analysis of the time series data assumes that we are working with a stationary time series. The time series is almost certainly non-stationary. We can make it stationary by first differencing the series and using a statistical test to confirm that the result is stationary.

The seasonality in the series is seemingly year-to-year. Seasonal data can be differenced by subtracting the observation from the same time in the previous cycle, in this case the same month in the previous year. This does mean that we will lose the first year of observations, as there is no prior year to difference with. The example below creates a deseasonalized version of the series and saves it to the file stationary.csv.

Running the example outputs the result of a statistical significance test of whether the differenced series is stationary, specifically the augmented Dickey-Fuller test. The results show that the test statistic value -7.134898 is smaller than the critical value at 1% of -3.515.

For reference, the seasonal difference operation can be inverted by adding the observation for the same month the year before. This is needed in the case that predictions are made by a model fit on seasonally differenced data. The function to invert the seasonal difference operation is listed below for completeness.

A plot of the differenced dataset is also created. The plot does not show any obvious seasonality or trend, suggesting the seasonally differenced dataset is a good starting point for modeling. We will use this dataset as an input to the ARIMA model. It also suggests that no further differencing may be required, and that the d parameter may be set to 0.

Seasonal Differenced Champagne Sales Line Plot

The next step is to select the lag values for the Autoregression (AR) and Moving Average (MA) parameters, p and q respectively. We can do this by reviewing Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) plots.
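The seasonal difference and its inverse described above can be sketched in plain Python. This is a minimal sketch consistent with the prose (the function names mirror those used in the tutorial's discussion; the sample series is synthetic):

```python
# Seasonal differencing: subtract the observation from the same month in
# the previous yearly cycle (interval=12 for monthly data).
def difference(dataset, interval=12):
    return [dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))]

# Invert the seasonal difference by adding back the observation from the
# same month the year before. history holds the original (undifferenced)
# observations up to, but not including, the time step being inverted.
def inverse_difference(history, yhat, interval=12):
    return yhat + history[-interval]

# synthetic two-year monthly series
series = [float(10 * i) for i in range(1, 25)]
diffed = difference(series, 12)
restored = inverse_difference(series[:12], diffed[0], 12)
print(restored == series[12])  # -> True: the round trip recovers the value
```

Note that the first `interval` observations are consumed by the differencing, which is why the tutorial loses the first year of data.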
Note, we are now using the seasonally differenced stationary.csv as our dataset. Some observations from the ACF and PACF plots:

- There is a significant lag at 1 month, with perhaps some significant lag at 12 and 13 months.
- Both the ACF and PACF show a drop-off at the same point, perhaps suggesting a mix of AR and MA.

A good starting point for the p and q values is also 1. The PACF plot also suggests that there is still some seasonality present in the differenced data. We may consider a better model of seasonality, such as modeling it directly and explicitly removing it from the model rather than seasonal differencing.

ACF and PACF Plots of Seasonally Differenced Champagne Sales

This quick analysis suggests an ARIMA(1,0,1) on the stationary data may be a good starting point. The historic observations will be seasonally differenced prior to the fitting of each ARIMA model. The differencing will be inverted for all predictions made, to make them directly comparable to the expected observations in the original sale count units.

Experimentation shows that this configuration of ARIMA does not converge and results in errors from the underlying library. Further experimentation showed that adding one level of differencing to the stationary data made the model more stable. The model can be extended to ARIMA(1,1,1). We will also disable the automatic addition of a trend constant from the model by setting the 'trend' argument to 'nc' for no constant in the call to fit(). From experimentation, I find that this can result in better forecast performance on some problems.

The example below demonstrates the performance of this ARIMA model on the test harness. Note, you may see a warning message from the underlying linear algebra library; this can be ignored for now. Running this example results in an RMSE of 956.942, which is dramatically better than the persistence RMSE of 3186.501. This is a great start, but we may be able to get improved results with a better configured ARIMA model.
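One way to find a better configuration is a grid search over (p, d, q) orders, as the next section explores. A generic sketch is below; here `evaluate_fn` is a stand-in for fitting an ARIMA of the given order with walk-forward validation and returning its RMSE, and the toy scorer at the bottom is purely illustrative:

```python
import itertools

def grid_search(p_values, d_values, q_values, evaluate_fn):
    # try every (p, d, q) combination, skip those that fail to converge,
    # and keep the configuration with the lowest score (e.g. RMSE)
    best_cfg, best_score = None, float("inf")
    for order in itertools.product(p_values, d_values, q_values):
        try:
            score = evaluate_fn(order)
        except Exception:
            continue  # unstable configuration: skip it, as the tutorial does
        if score < best_score:
            best_cfg, best_score = order, score
    return best_cfg, best_score

# toy scorer that happens to be minimized at (0, 0, 1)
cfg, score = grid_search(range(7), range(3), range(7),
                         lambda o: o[0] + o[1] + abs(o[2] - 1))
print(cfg, score)  # -> (0, 0, 1) 0
```

Using itertools.product keeps the search to a single loop rather than three nested ones, which a commenter below also suggests.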
6.2 Grid Search ARIMA Hyperparameters The ACF and PACF plots suggest that an ARIMA(1,0,1) or similar may be the best that we can do. To confirm this analysis, we can grid search a suite of ARIMA hyperparameters and check that no models result in better out of sample RMSE performance. In this section, we will search values of p, d, and q for combinations (skipping those that fail to converge), and find the combination that results in the best performance on the test set. We will use a grid search to explore all combinations in a subset of integer values. Specifically, we will search all combinations of the following parameters: - p: 0 to 6. - d: 0 to 2. - q: 0 to 6. This is (7 * 3 * 7), or 147, potential runs of the test harness and will take some time to execute. It may be interesting to evaluate MA models with a lag of 12 or 13 as were noticed as potentially interesting from reviewing the ACF and PACF plots. Experimentation suggested that these models may not be stable, resulting in errors in the underlying mathematical libraries. The complete worked example with the grid search version of the test harness is listed below. Running the example runs through all combinations and reports the results on those that converge without error. The example takes a little over 2 minutes to run on modern hardware. The results show that the best configuration discovered was ARIMA(0, 0, 1) with an RMSE of 939.464, slightly lower than the manually configured ARIMA from the previous section. This difference may or may not be statistically significant. We will select this ARIMA(0, 0, 1) model going forward. 6.3 Review Residual Errors A good final check of a model is to review residual forecast errors. Ideally, the distribution of residual errors should be a Gaussian with a zero mean. We can check this by using summary statistics and plots to investigate the residual errors from the ARIMA(0, 0, 1) model. The example below calculates and summarizes the residual forecast errors. 
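A minimal, library-free sketch of that residual summary follows. The tutorial reports pandas-style summary statistics (count, mean, std, min, max); this stand-in computes the same headline numbers from a list of residuals, where a residual is the expected value minus the predicted value:

```python
def describe_residuals(expected, predicted):
    # residual = expected - predicted
    residuals = [e - p for e, p in zip(expected, predicted)]
    n = len(residuals)
    mean = sum(residuals) / n
    # sample standard deviation (ddof=1), as pandas describe() reports
    std = (sum((r - mean) ** 2 for r in residuals) / (n - 1)) ** 0.5 if n > 1 else 0.0
    return {"count": n, "mean": mean, "std": std,
            "min": min(residuals), "max": max(residuals)}

# a biased set of forecasts: every prediction is 5 units too low
summary = describe_residuals([100.0, 120.0, 130.0], [95.0, 115.0, 125.0])
print(summary["mean"])  # -> 5.0 (a non-zero mean suggests biased predictions)
```

A mean far from zero, as in this toy data, is the signal the tutorial uses to justify bias correction.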
Running the example first describes the distribution of the residuals. We can see that the distribution has a right shift and that the mean is non-zero at 165.904728. This is perhaps a sign that the predictions are biased. The distribution of residual errors is also plotted. The graphs suggest a Gaussian-like distribution with a bumpy left tail, providing further evidence that perhaps a power transform might be worth exploring.

Residual Forecast Error Density Plots

We could use this information to bias-correct predictions by adding the mean residual error of 165.904728 to each forecast made. The example below performs this bias correction. The performance of the predictions is improved very slightly, from 939.464 to 924.699, which may or may not be significant. The summary of the forecast residual errors shows that the mean was indeed moved to a value very close to zero. Finally, density plots of the residual error do show a small shift towards zero. It is debatable whether this bias correction is worth it, but we will use it for now.

Bias-Corrected Residual Forecast Error Density Plots

Forecast Error ACF and PACF Plots

7. Model Validation

After models have been developed and a final model selected, it must be validated and finalized. Validation is an optional part of the process, but one that provides a 'last check' to ensure we have not fooled or misled ourselves.

The example below trains an ARIMA(0,0,1) model on the dataset and saves the whole fit object and the bias to file, in the correct state so that they can be loaded successfully later. Running the example creates two local files:

- model.pkl: This is the ARIMAResult object from the call to ARIMA.fit(). This includes the coefficients and all other internal data returned when fitting the model.
- model_bias.npy: This is the bias value stored as a one-row, one-column NumPy array.

7.2 Make Prediction

A natural case may be to load the model and make a single forecast.
This is relatively straightforward and involves restoring the saved model and the bias and calling the forecast() method. To invert the seasonal differencing, the historical data must also be loaded. The example below loads the model, makes a prediction for the next time step, and prints the prediction.

Running the example prints a prediction of about 6794. If we peek inside validation.csv, we can see that the value on the first row for the next time period is 6981. The prediction is in the right ballpark.

Running the model over the validation dataset results in an RMSE of 361.110 million sales. This is much better than the expectation of an error of a little more than 924 million sales per month. A plot of the predictions compared to the validation dataset is also provided. At this scale on the plot, the 12 months of forecast sales figures look fantastic.

Forecast for Champagne Sales Validation Dataset

Summary

In this tutorial, you discovered the steps and the tools for a time series forecasting project with Python. Do you have any questions about this tutorial? Ask your questions in the comments below and I will do my best to answer.

Thank you Jason. Would you recommend this example to a starter? I am totally new to programming and Machine Learning, and I am currently learning Python syntax and understanding the basic terms around the algorithms used. Regards, Benson

Hi Benson, this tutorial is an example for a beginner in time series forecasting, but it does assume you know your way around Python first (or you can pick it up fast). A surely steep learning curve taking you out of your comfort zone, but that's the way to learn.

For sure. Dive in!

Hello Jason, will I be able to contact you and discuss? I need an explanation.

You can use the contact page to contact me any time via email.

Jason, thanks for the very detailed instructions on how to construct the model. When I add a few additional periods in the validation set (manually) for a short-term forecast beyond the dataset, the model won't run until I provide some 'fake' targets (expected y).
However, when I provide some fake targets, I see that the model quickly adjusts to those targets. I tried different levels of y values and I see the model predicted accordingly. Shouldn't the predictions be independent of what targets it sees?

You should not need fake targets, Viral. You can forecast a new out-of-sample data point by training the model on all of the available data and predicting one step, e.g.:

yhat = model_fit.forecast()[0]

Jason, much appreciate your response. What I am trying to convey is that the model should predict the same outputs regardless of the observations. For example, when I change the y values in my validation set (the last 12 months that the model hasn't seen), my predictions change and the model gives me a very nice RMSE every time. If the model was trained properly, it should have the same output for the next twelve months regardless of my y values in the validation set. I think you will see this effect very quickly if you also change your y values in the validation set. Unless the model is only good for one period forward and needs to continuously adjust based on the observed values of the last period.

In this specific case we are using walk-forward validation with a model trained for one-step forecasts. This means as we walk over the validation dataset, observations are moved from validation into training and the model is refit. This is to simulate a new observation being available after a prediction is made for a time step, and us being able to update our model in response. I think what you are talking about is a multi-step forecast, e.g. fitting the model on all of the training data, then forecasting the next n time steps. This is not the experimental setup here, but you are welcome to try it.

Hello Jason, if you don't mind me asking, where exactly should the line below fit?

yhat = model_fit.forecast()[0]

Hi Benson, fit the ARIMA model on all of your data. After that, make a forecast for the next time step by calling the forecast() function.
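Putting the pieces of section 7.2 together: a one-step forecast on the seasonally differenced scale is bias-corrected and then inverted back into the original units. In this sketch, `model_forecast` and `predict_next` are hypothetical stand-in names (the real pipeline would call the loaded ARIMA result's forecast() method), and the numbers are illustrative only:

```python
def predict_next(history, model_forecast, bias, interval=12):
    # forecast on the seasonally differenced scale...
    yhat = model_forecast(history)
    # ...then add the stored bias and undo the seasonal difference by
    # adding back the observation from the same month last year
    return bias + yhat + history[-interval]

# two synthetic years of history; the stand-in model predicts +1.0
history = [float(100 + i) for i in range(24)]
print(predict_next(history, lambda h: 1.0, 0.5))  # -> 113.5 (0.5 + 1.0 + 112.0)
```

Only the real, undifferenced history is needed for the inversion, which is why the tutorial loads the historical data alongside the saved model and bias.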
Time to upgrade to matplotlib 2.0, the colors are nicer 🙂

I agree Juanlu! I have recently upgraded myself.

Dear Jason, thanks for the post. Clear, concise and working. Have you considered forecasting the time series using a SARIMA model instead of subtracting the seasonality and adding it back later? As a matter of fact, statsmodels has it integrated in its dev version. I am wondering if it is possible to take into account hierarchy, like the forecast of monthly sales divided into sub-forecasts of French champagne destination markets, using Python. Thanks and keep posting!

Hi Hugo, yes I am familiar with SARIMA, but I decided to keep the example limited to ARIMA. Yes, as long as you have the data to train/verify the model. I would consider starting with independent models for each case, using all available data, and tease out how to optimize it further after that.

Thanks for the wonderful post Jason. Two quick questions:

1. When I'm doing Grid Search ARIMA Hyperparameters with your example, it's quite slow on my machine; it spent about 1 minute. However the parameter range and data are not large at all. Is it slow specifically on my end? I'm concerned the model is too slow to use.
2. Does ARIMA support multi-step forecasts? E.g. if I keep feeding predicted values into the next forecast, will it overfit?

Thanks!

Your timing sounds fine. If it is too slow for you, consider working with a sub-sample of your data. You can do multi-step forecasting directly (forecast() function) or by using the ARIMA model recursively.

Thanks in advance.
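The recursive multi-step approach mentioned above can be sketched generically: each prediction is fed back in as if it were an observation. Here `forecast_fn` is a stand-in for refitting and forecasting with the real model; as noted in a later reply, errors compound with this scheme:

```python
def recursive_forecast(history, forecast_fn, steps):
    # feed each prediction back into the history as a pseudo-observation
    extended = list(history)
    predictions = []
    for _ in range(steps):
        yhat = forecast_fn(extended)
        predictions.append(yhat)
        extended.append(yhat)  # prediction, not a real observation
    return predictions

# stand-in model: predict the mean of the last two values
print(recursive_forecast([2.0, 4.0], lambda h: (h[-1] + h[-2]) / 2, 3))
# -> [3.0, 3.5, 3.25]
```

Because each step builds on earlier predictions rather than real data, forecast error typically grows with the horizon.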
What can be the possible drawbacks of this approach? Appreciate your comments Thanks Nice work. ARIMA may be a simpler model if model complexity is important on a project. Hi just a silly note have you consider using itertools for your evaluate_models() function? in python nested loops are not so readable. Thanks for the suggestion. Jason, I just finished reading “Introduction to Time Series Forecasting in Python” in detail and had two questions: * Are you planning to come out with a “Intermediate course in Time Series Forecasting in Python” soon? * You spend the first few chapters discussing how to reframe a time-series problem into a typical Machine Learning problem using Lag features. However, in later chapters, you only use ARIMA models, that obviate the necessity of using explicitly generated lag features. What’s the best way to use explicitly generated timeseries features with other ML algorithms (such as SVM or XGBoost)? (I tried out a few AR lags with XGB and got decent results using but couldn’t figure out how to incorporate the MA parts). Would appreciate any insights. I may have a more advanced TS book in the future. Great question. You would have to calculate the MA features manually. Hi Jason, can it be that TimeGrouper() does not work if there are months missing in a year? Also there are no docs available for TimeGrouper(). May you can use pd.Grouper in your future examples? Regards I think it does work with missing data, allowing you to resample. TimeGrouper does not have docs, but Grouper does and is a good start: Hi, Thanks for this great link on time series. I am not able to acces the dataset stored at:- Regards, Sambit I’m sorry to hear that, here is the full dataset: hi, i am having the following error: X = X.astype(‘float32’) ValueError: could not convert string to float: ‘1972-09’ can you please help me ty Looks like you may have missed some lines of code. Confirm you have them all. Hi Jason, I am having the same error. 
I checked the lines and I have them all. How can Python convert 'yyyy-mm' to float32? I downloaded the csv from the DataMarket link you gave above. What am I missing? Thanks. I have some suggestions here: Hi Jason, Nice approach for time series forecasting. Thanks for this article. Just had one small doubt: here we are tuning our ARIMA parameters on the test data using RMSE as a measure, but generally we tune our parameters on the training (train+validation) data and then we check whether those parameters generalize well by using the test data. Is that right? My concern is that if we choose our parameters on that particular test data, will those parameters generalize to new test data? I feel that there will be bias while choosing parameters, because we are specifically choosing parameters that give a lower RMSE for that test data. Here we are not checking whether our model is working/fitted well on our train data. If I am wrong, just correct me. Let me know the logic behind this parameter tuning on the basis of test data? Yes, that is a less biased way to prepare a model. More here: Thank you so much Jason. You’re welcome sandip. Hi Jason, Thank you for this detailed and clear tutorial. I have a question concerning finding ARIMA’s parameters. I didn’t understand why we have a nested “for” loop and why we consider all the p values and not all the q values. Please ask if the question is not that clear. Thank you in advance. Please review the example again, we grid search p, d and q values. Hi Jason, Thanks for this awesome example! It helped me a lot! However if you don’t mind, I have a question on the prediction loop. Why did you use history.append(obs) instead of history.append(yhat) when doing the loop for prediction? Seems like you add the observation (real data in test) to history. And yhat = bias + inverse_difference(history, yhat, months_in_year) is based on the history data.
but actually we don’t have that observation data when solving a practical problem. I’ve tried to use history.append(yhat) in my model, but the result is worse than using history.append(obs). Appreciate your comments. Thanks, Ella Good question, because in this scenario, we are assuming that each new real observation is available after we make a prediction. Pretend we are running this loop one iteration per day, and we get yesterday’s observation this morning so we can accurately predict tomorrow. Thank you so much for your explanation!! You’re welcome. Hello, Thanks a lot for this work. I have a point to discuss concerning the walk-forward validation. I saw that you fill the history with a value from the test set at each iteration: ….. # observation obs = test[i] history.append(obs) …. Do we really have to do this, or should we use the new yhat for the next predictions? Indeed, when we have to make a future prediction over some date range, we don’t have the historical values…. Regards and thanks in advance for your feedback, Simo It depends on your model and project goals. In this case, we are assuming that the true observations are available after each modeling step and can be used in the formulation of the next prediction, a reasonable assumption, but not required in general. If you use outputs as inputs (e.g. recursive multi-step forecasting), error will compound and things will go crazy sooner (e.g. model skill will drop). See this post: Sorry, I just read Ella’s question… So for long-range future prediction we have to use the yhat… Thanks You can if you choose.
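The distinction in the exchange above — appending the real observation versus feeding the prediction back in — can be sketched with a minimal walk-forward loop. This is only a sketch: a naive persistence forecast stands in for re-fitting ARIMA at each step, and the function name and toy numbers are made up for illustration.

```python
# Sketch of the two validation modes discussed above; a persistence
# forecast (last known value) stands in for model_fit.forecast()[0].
def walk_forward(train, test, use_real_obs):
    history = list(train)
    predictions = []
    for obs in test:
        yhat = history[-1]  # stand-in for re-fitting ARIMA and forecasting
        predictions.append(yhat)
        # walk-forward: append the real observation once it arrives;
        # recursive multi-step: feed the prediction back in instead
        history.append(obs if use_real_obs else yhat)
    return predictions

train = [10.0, 12.0, 11.0]
test = [13.0, 14.0, 12.0]
print(walk_forward(train, test, True))   # [11.0, 13.0, 14.0]
print(walk_forward(train, test, False))  # [11.0, 11.0, 11.0]
```

With real observations appended (walk-forward), each prediction tracks the data one step behind; with predictions fed back (recursive multi-step), the model never sees anything new, which is why errors compound.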
Hi, I am getting NaN after these steps:

from pandas import Series
series = Series.from_csv('champagne.csv', header=0)
print(len(series))
split_point = len(series) - 12
print(split_point)
dataset, validation = series[0:split_point], series[split_point:]
print(dataset)
print('Dataset %d, Validation %d' % (len(dataset), len(validation)))
dataset.to_csv('dataset.csv')
validation.to_csv('validation.csv')

I’m sorry to hear that. Ensure that you have copied all of the code and that you have removed the footer from the data file. Also confirm that your environment is up to date. Hi Jason, This is extremely useful. Thank you very much! I was wondering if you could provide the full code to extend the forecast 1 year ahead. I know you mentioned above that there is a forecast function, but when I run:

yhat = model_fit.forecast()[0]
pyplot.plot(y)
pyplot.plot(yhat, color='green')
pyplot.plot(predictions, color='red')
pyplot.show()

I don’t get any green lines in my plot. I’m sure I’m missing a lot of things here, but I don’t know what. Thank you again, much appreciated! This post will help you with making longer-term forecasts: Hi Jason, I am using these modelling steps to model my problem. When I am passing the best parameters from the grid search for prediction, I am getting a linear algebra error and sometimes it does not converge. Due to this I am not able to predict the values. My second concern is, if this is happening for grid search, how can I automate the script in a production environment? Let’s say I have to forecast some values for 50000 different sites, then what is the best way to achieve this goal? Sorry to hear that, perhaps you have not set all of the parameters the same as you used in the grid search? I have some notes about models in production here that may help: Hi Jason, Thanks for your post, it is really helpful for me. It is great!
I have a small question: In the post, you use 2 data sets: dataset.csv and validation.csv + In dataset.csv, you split it and use walk-forward validation -> I totally agree. + In validation.csv, you still re-train the ARIMA model with walk-forward validation (model_fit = model.fit(trend='nc', disp=0) – line 45 in section 7.3) -> I think it is unseen data for testing, so we don’t train the model here and only test the model’s performance. Is that correct? In the case of walk-forward validation, we are assuming that the real observations are available after the time of prediction. You can make different assumptions when evaluating your model. Hi Jason, First of all, thank you for your wonderful tutorial. Until now (part 5.4) I just have one doubt: why did you plot a histogram/kde without removing the trend first? If the goal is to see how similar the distribution shape is to a Gaussian distribution, doesn’t a trend change the distribution of the data? Thanks Navid Great question, at that point we are early in the analysis. Generally, I’d recommend looking at density plots before and after making the series stationary. Trend removal generally won’t impact the form of the distribution if we are doing simple linear operations. Hi Jason, Thanks for your awesome post. Could you explain how to set the ‘interval’ in the difference function if I only have 1 year of data? My dataset is recorded every half hour, from June 2016 to June 2017. The values are large in summer and small in winter, i.e. in winter between 0-2000, but in summer between 5000-14000. Hi Jason, Thank you for the post. I want to implement the ‘ARIMA’ function instead of using the built-in function. Do you know where I can find the algorithm to implement the ‘ARIMA’ function, and understand it in a detailed manner? A good textbook on the math is this one: Hi Jason, Thanks for the brief tutorial on time series forecasting.
I am receiving the error “Given a pandas object and the index does not contain dates” when running the ARIMA model code snippet. Be sure you copied all of the sample code exactly, including indenting, and also be sure you have prepared the data, including removing the footer information from the file. Dear Jason, Thanks for your amazing tut! I have one problem only when I run the rolling forecasts. I use a different dataset which has temperature values for every minute of the day. I train the model with the first four days and I use the last day as the test dataset. My problem might be because I add the test observation to the history, and my ARIMA returns the following error: raise ValueError(“The computed initial AR coefficients are not ” ValueError: The computed initial AR coefficients are not stationary. You should induce stationarity, choose a different model order, or you can pass your own start_params.

for i in range(0, len(x_test)):
    # difference data
    diff = difference(history, minutes)
    # predict
    model = importer.ARIMA(diff, order=(5, 0, 1))
    model_fit = model.fit(trend='nc', disp=0)
    yhat = model_fit.forecast()[0]
    yhat = inverse_difference(history, yhat, minutes)
    predictions.append(yhat)
    # observation
    obs = x_test[i]
    history.append(obs)
    print('Predicted=%.6f, Expected=%.6f' % (yhat, obs))

My x_test dataset contains 1440 rows and I get the error on the 1423rd iteration of the loop. Until iteration 1423, each ARIMA model has no issues. Your help is precious to me. Thanks again! Kindest Regards, Makis Have you confirmed that your data set is stationary? Perhaps there is still some trend or seasonal structure? I have many posts on detrending and seasonal adjustment that may help. Dear Jason, Thanks for your response! I have used a different hyperparameter configuration (ARIMA(5,1,1)) instead, and everything worked. I don’t know if this is the right thing, since (5,0,1) and (5,1,1) had exactly the same RMSE. What is your opinion about that?
And please, one final question: my results now are extremely good, with an RMSE less than 0.01. The prediction line in the plot is almost over the real data line. Does this have to do with the history.append(obs)? I’m not sure I understand correctly why the test observation is added to the history. And what is the difference between doing a prediction in a for loop with a new model for every step, compared to using the steps parameter with 1 model? Sorry for the long questions! Your tutorials are even better than the books I’m currently reading! Cheers from Greece. Well done. I’d recommend using the model that is skillful and stable. Hey Jason! What I’m trying to say is, maybe the good predictions are because I append the y observation to the history. Why do we do this? What is the purpose? If we do not append the y observation, will the results still be valid? In the test setup we are assuming that real observations are made after each prediction that we can in turn use to make the next prediction. Your specific framing of the problem may differ and I would encourage you to design a test harness to capture that. Hey Jason, once more thanks for your feedback! I will follow your suggestion right now 🙂 Thanks for everything Kind Regards, Makis Hey Jason, one last question. Since I do the rolling forecast and introduce a new model for every iteration, why do I need to save a model and do the first prediction based on it? Can I simply start the rolling forecast loop from 0 instead of 1? Sure. Hi Jason, Thank you so much for this amazing tutorial. It really helps me to learn time series forecasting and prepare for my new job. I’m a total beginner in data analysis/science as I’m currently making the transition from engineering to data analysis. But the problem that I’m facing with this tutorial is that as I’m running the code, I’m stuck on the final step, “Validate the Model”.
I’ve tried to re-copy and re-run the sample code many times but it didn’t seem to work. Here is the error. >Predicted=10101.763, Expected=9851 >Predicted=13219.067, Expected=12670 >Predicted=3996.535, Expected=4348 >Predicted=3465.934, Expected=3564 >Predicted=4522.683, Expected=4577 >Predicted=4901.336, Expected=4788 >Predicted=5190.094, Expected=4618 >Predicted=4930.190, Expected=5312 >Predicted=4944.785, Expected=4298 >Predicted=1699.409, Expected=1413 >Predicted=6085.324, Expected=5877 >Predicted=7135.720, Expected=nan ValueError Traceback (most recent call last) in () 56 57 # Report performance —> 58 mse = mean_squared_error(y, predictions) 59 rmse = sqrt(mse) 60 print(‘RMSE: %.3f’ % rmse) /Users/amir/Library/Enthought/Canopy/edm/envs/User/lib/python3.5/site-packages/sklearn/metrics/regression.py in mean_squared_error(y_true, y_pred, sample_weight, multioutput) 229 “”” 230 y_type, y_true, y_pred, multioutput = _check_reg_targets( –> 231 y_true, y_pred, multioutput) 232 output_errors = np.average((y_true – y_pred) ** 2, axis=0, 233 weights=sample_weight) /Users/amir/Library/Enthought/Canopy/edm/envs/User/lib/python3.5/site-packages/sklearn/metrics/regression.py in _check_reg_targets(y_true, y_pred, multioutput) 73 “”” 74 check_consistent_length(y_true, y_pred) —> 75 y_true = check_array(y_true, ensure_2d=False) 76 y_pred = check_array(y_pred, ensure_2d=False) 77 /Users/amir/Library/Enthought/Canopy/edm/envs/User/lib/python3.5/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 405 % (array.ndim, estimator_name)) 406 if force_all_finite: –> 407 _assert_all_finite(array) 408 409 shape_repr = _shape_repr(array.shape) /Users/amir/Library/Enthought/Canopy/edm/envs/User/lib/python3.5/site-packages/sklearn/utils/validation.py in _assert_all_finite(X) 56 and not np.isfinite(X).all()): 57 raise ValueError(“Input 
contains NaN, infinity” —> 58 ” or a value too large for %r.” % X.dtype) 59 60 ValueError: Input contains NaN, infinity or a value too large for dtype(‘float32’). Could you please help me with this? Thank you so much Amir I’m not sure of the cause of your fault, sorry. Ensure your libraries are up to date and that you have copied all of the code exactly? The csv file contains some jargon text which should be deleted before reading the file as list. Hi Jason, In the prediction area, where you have added the observation to history and then running the loop to find the ARIMA results. i.e. model = ARIMA(history, order=(4,1,2)) model_fit = model.fit(disp=0) output = model_fit.forecast() yhat = output[0] predictions.append(yhat) obs = test[t] history.append(obs) Here, what if we append yhat to history, when i am appending yhat to history, my results are really bad, pls help.. Because as per my model, we need to predict the test data by using only the training data, so we cannot use obs to be appended in history, hope you get my point. Yes, that would be called a recursive multi-step forecast. It is challenging. You can learn a little more about it here: Concept wise i understand what is Recursive Multi step forecasting, that’s the same i have used in my earlier reply code, i have appended obs in history, so that means everytime my loop runs, it takes the existing training data + next observation, so that should work and should predict correctly, but my results are really bad.. Or do you mean to say, for this we need to plot the ACF, PACF graph in a a loop to determine the pdq values, and then run the ARIMA function on the pdq values, if that so, pls help us in finding a way to determine pdq values in loop Generally using forecasts in a recursive model is challenging, you may need to get creative with your model/models. Hello Jason, Thank you for your clear and straightforward post! I have the same question as Nick above – about the choice of differentiation interval. 
In your example you set it to 12 according to the expected cycle of the data. However, in a more problematic case the data does not seem to imply a clear cycle (ACF and PACF graphs notwithstanding). In my case, I’ve found out that setting the interval to 12 yielded better results than my default, which was 1. I can understand why choosing a small interval would be generally bad – random noise is too dominant. I have more difficulty understanding how to calibrate the ideal interval value for my data (except brute force, that is. Maybe I shouldn’t calibrate? That could induce overfit. Anyway, I still should find a generally decent value). If the amount of data is relatively small, consider using a grid search across difference values to see what works. And another question. I applied a grid search in order to choose the hyperparameters of my ARIMA model. I fear that bias correction could be counter productive in this case. Do you have any insights on this subject? Would you perform both (grid search followed by bias correction)? Is the answer data dependent? I would only perform bias correction if it lifted model skill (e.g. if the model was biased). Hi Jason, I have around more than 1000 of products. I have sales yearly history for last 7 years for each 1000 product. I want to develop a model by which I can predict sales amount for next 3 years. Could you please help me how do I proceed with evaluate, prediction, validation and interpret the same. Thanking in advance. Perhaps start here: (Noob’s parallelization question, probably) Does anybody know whether statmodels ARIMA uses multi-threading or any other kind of parallelization? I’m trying to run an analysis based on multiple ARIMA fits on my laptop and thread parallelization increases total runtime rather than decrease it. It seems a single ARIMA fit (part of a single thread) uses several processors at once…. Thanks, Udi I think it is single threaded. 
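The grid search across difference values suggested above can be sketched in plain Python. Here each candidate interval is scored by the RMSE of a one-step persistence forecast on the differenced series — a cheap proxy, not the tutorial's full ARIMA evaluation — and the toy series has an exact cycle of 12, so the expected winner is known in advance.

```python
# Grid-search sketch for the seasonal differencing interval.
from math import sqrt

def difference(dataset, interval):
    # seasonal difference at the given lag
    return [dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))]

def persistence_rmse(series):
    # RMSE of predicting each value with the previous one
    errors = [(series[i] - series[i - 1]) ** 2 for i in range(1, len(series))]
    return sqrt(sum(errors) / len(errors))

data = [float((i % 12) * 10 + 100) for i in range(48)]  # exact cycle of 12
best = min(range(1, 13), key=lambda k: persistence_rmse(difference(data, k)))
print(best)  # 12: removing the cycle leaves the least remaining structure
```

On real data the scores will not drop to zero; the point is only to compare candidate intervals on the same cheap metric before committing to the expensive ARIMA grid search.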
Hi Jason, This is an excellent article, and it is super helpful. I had two questions for extension that may have been asked, but after reading through the comments, I’m not sure if the same advice applies to me. My question is most similar to Nirmal’s. 1. I have a dataset with multiple “wines”, each with their own historical sales data 2. This dataset has other variables that aren’t just time related. For example, the columns are Month 1 Sales, Month 2 Sales, Month 3 Sales, Online Reviews, and the rows are Wine A, Wine B, Wine C, Wine D. I’m wondering if there’s an extension of this time series model that can take into account other variables and other instances of historical data. Let me know if I should clarify. Thank you for taking the time to clarify. Yes, perhaps you could model each series separately, or groups of series, or all together, or all 3 approaches and ensemble the results. Also, I’d recommend moving on to other models, such as ML or even MLP models. I hope to cover this topic in more detail soon. Thanks for the quick reply. I’ll try out your first method. But do you have any good ML/MLP models/tutorials to start with? It’s okay if you don’t! I noticed you have a nice article on multivariate time series here: I haven’t read through it yet, but I think it only takes into account historical data of one instance of multiple variables. I think it would be interesting to find out how to ensemble the results of my #1. one instance of historical data (Wine A’s last 3 month sales) #2. one instance of non-historical variables (Wine A’s online reviews, type, etc.) To me, #1’s output is a “variable” to be used in the ML model for #2. If that makes sense, do you think that’s the right way of going about things? Thanks for being so responsive! I hope to cover this topic in more detail soon – maybe write a book on the topic working one dataset from many different directions. Thank you Jason! for really making ML practitioners like me awesome in ML.
In this blog you have mentioned that the results suggest that what little autocorrelation is present in the time series has been captured by the model. What will be the next steps if there is high autocorrelation? Thanks in advance. Great question! Some ideas: – Perhaps try alternate models. – Perhaps improve the data preparation. – Perhaps try downstream models to further correct the forecast. Hey Jason, That was a very well informed article! I am trying to forecast on a weekly data. Any tips for improving the model since weekly data is hard to forecast. Also should “months_in_year = 12” used in differencing be changed to “weeks in month=4” to accommodate for weekly frequency? Thanks My best tips are to try lots of data preparation techniques and tune the ARIMA using a grid search. Hi Jason, Thank you for an amazing tutorial! Just one quick question: can you please provide solution for the first scenario (not rolling forward forecast): – Load the model and use it to forecast the next 12 months. The forecast beyond the first one or two months will quickly start to degrade in skill. Thank you~ Yes, this will help: Hi Jason My time series data is non stationary as well and I tried upto 3rd degree of differentiation after which I cant proceed any more as the data is exhausted. It has 3 calculated points and on doing Fuller test for checking stationarity, it errors out “maxlag should be < nobs". What can I do here? Sounds like you might be running out of data in your series. Perhaps you need more observations? Hello, thank you. i wonder if we can say that python give good forecasts than other forecast logiciels like Aperia. To forecast sales we have to base ourselves in historic and baseline but in fact, in industry there are many factors that complexify the calculation of our forecast like the marketing actions, exceptional customer order, … How can we consider these perturbation in our forecasts to ensure good value of accuracy. Thank you for answering. 
What is Aperia? Additional elements could be used as exogenous variables in the model. Jason, The dataset I have requires to double difference in order to make it stationary, so i used: diff1 = difference(history, 12) diff2 = difference(diff1, 12), it worked and made it stationary with ADF test, however, how do I reverse it back please? Thank you Add the values back. It requires that you keep the original and the first diffed data. Thanks, so would the function look like this? yhat = inverse_difference(diff1, yhat, interval) yhat = inverse_difference(history, yhat, interval) ? Thank you! Correct. First of all, this is a great article, well written and detailed. I have a question regarding on the use of the test set on all of the analysis. What about putting this model in production? You wouldn’t have the ‘test[i]’ (or ‘y[i]’) for each iteration to add to the list ‘history’ to have a truly generalized prediction. My point is that that instead of adding the values from the test set (‘y[i]’, ‘test[i]’) you’d rather add the predictions being made to the training set in order to do a true random walk. Thanks again for the resource and all the help putting out this content. A final model will be created by training it on all available data and then using it to make predictions in the future. You can learn more about final models here: You can learn more about making out of sample predictions here: I keep getting errors ————————————————————————— TypeError Traceback (most recent call last) in () 9 obs = test[i] 10 history.append(obs) —> 11 print(‘>Predicted=%.3f, Expected=%3.f’ % (yhat, obs)) TypeError: must be real number, not ellipsis IE: # predict yhat = … those 3 dots (ellipsis) are throwing errors. The code has … throughout. Any idea how to fix this error? The code with “…” is just example code, it is not for execution. Skip it. Perhaps a more careful read of the tutorial is in order James? 
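For the double-differencing question above, the inversions must be applied in reverse order, each against the history at the matching level. A minimal sketch using the tutorial's difference/inverse_difference helpers and a toy series (squares, chosen so the double seasonal difference is a known constant):

```python
def difference(dataset, interval=1):
    # seasonal difference, as in the tutorial
    return [dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))]

def inverse_difference(history, yhat, interval=1):
    # add back the value from `interval` steps before the forecast
    return yhat + history[-interval]

series = [float(i * i) for i in range(1, 30)]  # 1..29 squared; the next value is 900
diff1 = difference(series, 12)   # first seasonal difference
diff2 = difference(diff1, 12)    # second seasonal difference (constant 288 here)

yhat = 288.0                     # a (perfect) forecast on the double-differenced scale
yhat = inverse_difference(diff1, yhat, 12)   # undo the second difference first
yhat = inverse_difference(series, yhat, 12)  # then undo the first difference
print(yhat)  # 900.0
```

Note the order: the last difference applied is the first one undone, which is why the original series and the first-differenced series both have to be kept around.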
I’m having an issue with the PACF plot for the residuals in part 6.3 Plotting The Residuals. I have followed your code exactly, and the ACF plot is a perfect match of the one displayed here; however, the PACF is plotting values nearing 8 at around 36-38 lags. Any idea what might be causing this? Hi Jason, Thanks so much for this tutorial. One thing though. As Series.from_csv is deprecated, the date format gets lost when opening the dataset, for example when trying to generate the seasonal line plots. Are you aware of a workaround that would keep the date format while using read_csv instead of from_csv? Thanks! Yes you can use pandas.read_csv() with the same arguments. How can I improve the readability of the Seasonal Per Year Line Plots (especially when it comes to axes)? I would be grateful for your help. Perhaps create one plot per figure? I’d rather have a comparison between all the subplots. Perhaps plot them all on the same figure? Perhaps calculate the difference and plot that? Hi Jason, I was able to predict and the chart looks nice. Just wondering how we can predict the future; I mean, if I wanted to see next month’s prediction, how are we able to do that? Thanks, Chintan I show how here: Hi Jason, I deal with daily data. What is the issue when I try months_in_year = 364? I mean, it’s not throwing best_arima out when I set months_in_year = 364. May I know the reason? Why are you making that change? I don’t follow. Dear Jason, thank you so much for your tutorial. I would like to apply it to a process, forecasting some process variables. The time interval is not monthly based, as in your example, but is much shorter, like some days, and the data collection is about every three seconds. Now I would like to ask some questions. Do you think that this model could work equally well? What should I put in place of ‘months_in_year’? For now I put 7, which is the number of different days in my dataset.
Is it normal that with 2000+ points the search for the optimal parameters takes ages? (e.g. one triplet after 30 min) If yes, how could I improve the code speed? Thank you again, Best wishes, Patrick. Try it and see. 2K points may be too many for the method, perhaps try reducing to a few hundred at max. Hi, where does the column ‘A’ come from in the code above, which is: groups = series['1964':'1970'].groupby(TimeGrouper('A')) And printing out groups contains only the first 6 months of each year, why? Right there: Hi, what would be wrong if, instead of defining the difference function yourself, we passed d with the value of 12: model = ARIMA(diff, order=(0,12,1)) Wouldn’t that be the same? Thanks! It should be. Hi Jason, in running your code I get this error: raise ValueError(“The computed initial MA coefficients are not ” ValueError: The computed initial MA coefficients are not invertible. You should induce invertibility, choose a different model order, or you can pass your own start_params. How can I solve it? Thank you Sorry to hear that. Perhaps confirm your statsmodels and other libraries are up to date? Perhaps confirm that you copied all of the code in order? Perhaps try an alternative model configuration? Ok, I changed the ARIMA parameters and now it works, thank you! Glad to hear it! This is my data:
Name Month Qty Unit
Wire Rods Total 2007-JAN 93798 t
Wire Rods Total 2007-FEB 86621 t
Wire Rods Total 2007-MAR 93118 t

My code is:

import pandas as pd
from sklearn.metrics import mean_squared_error
from math import sqrt
# load data
path_to_file = "C:/Users\ARAVIND\Desktop\jupyter notebook\project\datasets.csv"
data = pd.read_csv(path_to_file, encoding='utf-8')
# prepare data
X = data.values
X = X.astype('float32')
train_size = int(len(X) * 0.50)
train, test = X[0:train_size], X[train_size:]
# walk-forward validation
history = [x for x in train]
predictions = list()
for i in range(len(test)):
    # predict
    yhat = history[-1]
    predictions.append(yhat)
    # observation
    obs = test[i]
    history.append(obs)
    print('>Predicted=%.3f, Expected=%3.f' % (yhat, obs))
# report performance
mse = mean_squared_error(test, predictions)
rmse = sqrt(mse)
print('RMSE: %.3f' % rmse)

When I run this it shows “ValueError: could not convert string to float”, so could anyone tell me how to convert strings to float for my dataset? I want to convert the columns “Name, Month and Unit” to float. Perhaps remove the text and date data? Ok sir, thank you. Hi Jason, Thanks for this amazing tutorial. However, I get the below error when I am trying to run it on a time series: ValueError: The computed initial MA coefficients are not invertible. You should induce invertibility, choose a different model order, or you can pass your own start_params. I just saw that this has already been answered and it worked when I followed that answer. Thanks! Perhaps try a different configuration of the q/d/p variables for the model? This is really a helpful tutorial. Thank you Jason!! And I have a small question.
I got a “TypeError: a float is required” error after I executed this code –

history = [x for x in train]
predictions = list()
for i in range(len(test)):
    # predict
    yhat = …
    predictions.append(yhat)
    # observation
    obs = test[i]
    history.append(obs)
    print('>Predicted=%.3f, Expected=%3.f' % (yhat, obs))

Can you help me out? Perhaps confirm that you have loaded the data correctly as a float? Hi Jason, What should be the approach when we need to provide long-term forecasts (~12 months) with exogenous variables using a technique like ARIMAX? Should we forecast the covariates and then add them to the model? Regards, Varun Perhaps you can frame your model to predict +12 months given only the observations available? Hi Jason, Just to be clear, if I add regressors and train the model, I would require future values, right? E.g. the xreg argument in auto.arima etc. How can I forecast +12 months without utilizing the regressors that I have used for training? You could train a new productive model that only requires t-12 data to make predictions. Hi! It’s amazing, thank you! Could you provide all the files of this project? I cannot build it on my own. You can copy them from the article, here’s how: Hi! It’s amazing, thank you! Actually I want to develop a model which determines how many calls we can expect to come into our call center on a daily basis. Sounds great. Perhaps start here: Hi Jason Brownlee, Thank you so much for responding. I need one more piece of help: could you please explain how to use seasonality in ARIMA with some example? Thanks You can use the blog search box. Here is an example: Here is another: Hi Jason, That’s a very helpful article. My time series is daily point of sale data (e.g. how many Pepsi bottles get bought every day in a Walmart). There are missing dates when Pepsi was not sold at all. What I want to forecast is the bottles of Pepsi sold on each day for the next 3/4 days. What might be the best approach/algorithm? Thanks. Probably a linear model like SARIMA or ETS.
I have some suggestions here: Hi Jason, I’m getting the error like this, what does this error mean? TypeError Traceback (most recent call last) in test = … predictions = … mse = mean_squared_error(test, predictions) rmse = sqrt(mse) print(‘RMSE: %.3f’ % rmse) TypeError: Expected sequence or array-like, got That is surprising, did you copy of all of the code exactly? Hi Jason, I need to build a forecast having addition predictor, what should i use and it’s week forecasting i need to do? Perhaps follow the process outlined in the above tutorial? Hi Jason, nice article. I have a question in the last code sample for validation. 1) What is the purpose of making the first prediction? # load model #make first prediction 2) I remove the above two sections of the code and got the exact result Does mean only order (0,0,1) and Bias(165.904728) matters and there is no need to save and load the model? It is an example of how we might use bias correction, in general. Hello, I got exactly same results of the Augmented Dickey-Fuller test, however my PACF plot looks much different, It has a lot of spikes from lag 50 to 80, one even reaching -120. Any ideas or previous experience why it can look that strange? Thank you for sharing your knowledge with us. They changed the plot recently. Try scaling the number of time steps way down in the plot. I have tried with earnest to work through your tutorial. Unfortunately, I am running into errors. One being this error. This could be the whole bone to my issues. As for the life of me, I could not resolve it using your code alone. I had to tinker with it – see explanation. series = Series.from_csv(‘dataset.csv’) AttributeError: type object ‘Series’ has no attribute ‘from_csv’ Which I resloved by implementing panda import pd series = pd.read_csv(‘dataset.csv’) series.iloc[0] Once I got over that hurdle – I then ran into this hurdle. 
dataset = dataset.astype('float32')
ValueError: could not convert string to float: '1964-01'

More than likely it’s something I am doing wrong. Would it be at all possible to have access to the full code? My attempts at copying and pasting obviously are not helping me. I think I have possibly left some intrinsic part of the code out. Or I have totally confused myself. No problem, changed to: I have updated all examples in the tutorial. That simple – geez… I will try that. Thanks for taking the time to answer my problem. Is your book available on Amazon? My books are not on Amazon, only on my website, I explain why here (they take a massive cut): It worked like a charm. Thanks. Another issue that I have found is with the code for 5.3 Seasonal Line Plots. from pandas import TimeGrouper from pandas import DataFrame It seems that pandas doesn’t support TimeGrouper and DataFrame.reindex_axis any more!! Removed the previously deprecated TimeGrouper (GH16942) Removed the previously deprecated DataFrame.reindex_axis and Series.reindex_axis (GH17842) Well done! You can use: I updated the example, thanks! Hi Jason, I was wondering if you have used fbprophet for sales prediction. We were fetching data directly from PostgreSQL and we seem to be running into an error: Out of bounds nanosecond timestamp: 1-08-11 00:00:00 This seems to be something related to pandas version compatibility. Could you please look into it and try to find the problem behind it? I have not, sorry. Hey Jason, I’m running the series = read_csv(r'D:\industrial engineering\Thesis\monthly_champagne_sales.csv', header=0, index_col=0) code and what I’m getting is a DataFrame, not a Series. What should I do? Try adding the squeeze=True argument. Dear sir, I am running the following code in a rolling window framework; however, I am not able to see the results that come from the analysis. It displays only one value.
Can you please let me know what and where I need to fix so that I can get those results:

##Entropy
from entropy import *
import numpy as np
np.random.seed(1234567)
x = np.random.rand(3000)
n = len(x)
result = list()
block = 250
for a in range(1, n-block+1):
    DATA = x[a:a+249]
    results = perm_entropy(DATA, order=3, normalize=True)
    result.append(results)
return Series(result)

This is a common question that I answer here:

Hey Jason, I would like to ask you about a TS with repeating dates. For example, I have a dataset with different types of deals, and the deals have different time periods depending on their agreement. This means that, when a stationary graph is plotted, there can be more than one observation (revenue in my case) on each date. For example, 2/11/2018 could have multiple revenues from different deals. Therefore, the graph sketch is very confusing, in the sense that there are many/few repetitions on the same date. My idea is to sum the revenues up by date, but would that affect the accuracy of the model, as the prediction should be by deals? Your thoughts and reply would be highly appreciated. Regards

Hmmm, it's hard to say without diving into the data, but perhaps explore preparing your own framing of the problem.
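On the repeated-dates question, the commenter's own suggestion (sum revenue by date so each date carries one observation) is a one-liner in pandas. A sketch with made-up deal data, leaving open whether per-date totals suit a deal-level prediction goal:

```python
import pandas as pd

# Hypothetical deals: two revenue observations land on the same date.
df = pd.DataFrame({
    "date": ["2018-11-02", "2018-11-02", "2018-11-03"],
    "revenue": [100.0, 250.0, 80.0],
})
df["date"] = pd.to_datetime(df["date"])

# One observation per date: total revenue across deals.
daily = df.groupby("date")["revenue"].sum()
print(daily)
```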
This will help as a first step: And this:

I was working on forecasting, and this function was used for future forecasting and ended up with a NotImplementedError:

def forcasting_future_months(monthly, no_of_months):
    monthly_perdict = monthly.reset_index()
    mon = monthly_perdict['Year_Month']
    mon = mon + pd.DateOffset(months = no_of_months)
    future_dates = mon[-no_of_months -1:]
    monthly_perdict = monthly_perdict.set_index('Year_Month')
    future = pd.DataFrame(index=future_dates, columns= monthly_perdict.columns)
    monthly_perdict = pd.concat([monthly_perdict, future])
    monthly_perdict['forecast'] = results.predict(start = 222, end = 233, dynamic= True)
    monthly_perdict[['No of accidents', 'forecast']].iloc[-no_of_months - 12:].plot(figsize=(12, 8))
    plt.show()
    return monthly_perdict[-no_of_months:]

My Kaggle notebook error is seen here

This is a common question that I answer here:

X = dataset.values.astype('float32') should be:
X = series.values.astype('float32')

Hi Jason, Thank you so much for publishing this article. Most articles that I have found simply tell a user what to enter into each field for their specific example, without really getting into what a parameter means or how it is used. As a result, I haven't been able to get my arms around what to enter for my situation. Your article does a nice job of explaining how you came up with your parameters. I do have a question, though. What should I do differently for a small dataset? I only have about 30 months of data for what I am trying to predict. For the purposes of going through this tutorial, I faked some data. In doing so, it is providing me with negative sales predictions. Is that enough data to provide an accurate forecast?
One additional note: in 7.3 you have the code:

series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True)
X = dataset.values.astype('float32')

I believe that it should be:

series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True)
X = series.values.astype('float32')

Thanks again for the great tutorial!

Thanks! Fixed. Not much would change, other than carefully choosing how you evaluate models.

Hi Jason, Absolutely great exercise. Loved it… Just one quick question: how do I generate a forecast for the next 12 intervals and generate a plot with training, test, and predicted together? Thanks!

Call forecast() and specify 12 time steps.

Hi everybody. After reading your great post (as usual), I have a couple of questions.

The first is about residuals. You calculated them manually and then used a DataFrame to work with them later. But I have read that the arima.fit results have a member called residuals. I had been working on a different series using those residuals before reading your article. Then I did it manually, as you did. BUT both are different, and not just slightly different. Should both be similar? I think they should, so I keep looking for a bug in my code.

The second is about differences. When you difference the series, you lose some elements, depending on the lags. You can preserve the original data, so it is possible to integrate; that is academically perfect. But what do you do when you have only the final model? I mean, when you receive new data and want to make a new prediction, the model will give you predictions over transformed data that you need to rescale. If you have used standardization, you have the values needed to invert the transform, but how can you integrate? That information is lost…

Thank you.

Perhaps check what ARIMA is storing exactly in the residuals property, e.g. inspect the code or API. New data must be prepared in an identical manner to the training data, e.g. differencing, scaling, etc.

Hi Jason.
I couldn't understand why the bias is added to the predicted value: "yhat = bias + inverse_difference(history, yhat, months_in_year)". Also, is there a possibility of not getting the exact values shown in this post when I run the code? Any reasons why? Thanks so much!

Yes, we can expect different results given the stochastic nature of the algorithm and the differences in precision given different hardware.

Thanks. Can you please let me know why the bias is added to the predicted value?

In an attempt to lift the skill of the predictions.

Thanks Jason.

You're welcome.

Hey Jason, I like your example. I just have a question about putting in my own data. My data has a count of visits for every day of the year for 2019 and so far in 2020. Since this is daily instead of monthly, do I change months_in_year = 12 to days_in_year = 365?

Perhaps try it and see?

Hey Jason, I am impressed by your commitment. Would you mind showing me the features/attributes of a sales dataset for forecasting the future sales of a supermarket using a deep learning/LSTM algorithm?

Sure, start here:

Hi Jason, can the same Python code be used if the dataset has historical data for multiple wines (or products)?

Sure. But one model per series for ARIMA models.

Hi Jason, Thanks for your tutorials. Very helpful indeed. After differencing, I made the series stationary, and now, for the ARIMA, I want to plot the ACF and the PACF.
These are my codes:

1) Code for differencing to make the data stationary

from pandas import read_csv
from statsmodels.tsa.stattools import adfuller
from numpy import diff
X = diff(X)
result = adfuller(X)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))

2) Code to plot the ACF and the PACF

from pandas import read_csv
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
from matplotlib import pyplot

After running the second code, I am getting an error, which is:

FileNotFoundError Traceback (most recent call last)
in
3 from statsmodels.graphics.tsaplots import plot_pacf
4 from matplotlib import pyplot
----> 5 series = read_csv('X.csv', header=None, index_col=0, parse_dates=True, squeeze=True)
6 pyplot.figure()
7 pyplot.subplot(211)
~, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
674 )
675
--> 676 return _read(filepath_or_buffer, kwds)
677
678 parser_f.__name__ = name
~\anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
446
447 # Create the parser.
--> 448 parser = TextFileReader(fp_or_buf, **kwds)
449
450 if chunksize or iterator:
~\anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
878 self.options["has_index_names"] = kwds["has_index_names"]
879
--> 880 self._make_engine(self.engine)
881
882 def close(self):
~\anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
1112 def _make_engine(self, engine="c"):
1113 if engine == "c":
-> 1114 self._engine = CParserWrapper(self.f, **self.options)
1115 else:
1116 if engine == "python":
~\anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
1889 kwds["usecols"] = self.usecols
1890
-> 1891 self._reader = parsers.TextReader(src, **kwds)
1892 self.unnamed_cols = self._reader.unnamed_cols
1893
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()
FileNotFoundError: [Errno 2] File X.csv does not exist: 'X.csv'

Please let me know where I am wrong.

Looks like your file cannot be loaded. Perhaps check that your file is in the same directory as your code or in the path you expect.

Hi Jason, Please help me out. I am following your tutorials. These are my codes.
#1 (it worked)

from pandas import read_csv
from matplotlib import pyplot
series = read_csv('D:/Management Books/BSE Index Daily Closing.csv', header=0, parse_dates=True, index_col=0, squeeze=True)
diff = series.diff()
pyplot.plot(diff)
pyplot.show()

#2 (it worked but with an error)

from pandas import read_csv
from numpy import diff
result = adfuller(diff(X))
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
diff.to_csv('diff.csv', header = False)

AttributeError Traceback (most recent call last)
in
6 for key, value in result[4].items():
7 print('\t%s: %.3f' % (key, value))
----> 8 diff.to_csv('diff.csv', header = False)
AttributeError: 'function' object has no attribute 'to_csv'

#3 (showing an error)

from pandas import read_csv
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
from matplotlib import pyplot
X.to_csv('X.csv', header=False)

AttributeError Traceback (most recent call last)
in
3 from statsmodels.graphics.tsaplots import plot_pacf
4 from matplotlib import pyplot
----> 5 X.to_csv('X.csv', header=False)
6 series = read_csv('X.csv', header=None, index_col=0, parse_dates=True, squeeze=True)
7 pyplot.figure()
AttributeError: 'numpy.ndarray' object has no attribute 'to_csv'

I have seen the solved examples and am then trying it out. Please help me.

Sorry to hear that you're having trouble. Perhaps try copying the complete example listed at the end of a given section? Also, perhaps these tips will help:

I didn't replicate your codes. Moreover, the first few codes worked. I wonder why this is not working.
yhat is a prediction made by the predictive model. In the code, we are storing the real observation in history: we are pretending the real observation just became available after we made a prediction, so we add it to the training data on the next iteration.

Hello, I have a query. Can I forecast if I have a dataset of only 10 years? If yes, in that case, how will I divide my dataset into training and testing datasets? Please let me know.

Yes. Divide the data any way you want that gives you confidence the model has enough data to train and evaluate.

Hi, I have a request. I am looking for Python code implementing the Ljung-Box test for plotting the ACF and PACF. Please let me know.

Not offhand; perhaps this will help to get you started:

Hi, thanks for such an informative article!!! I have found that walk-forward validation with a range of p, d, q parameters is taking hours on Google Colab. Is any tweak possible?

You're welcome. These suggestions may help:

Hi, I hope you can guide me; I'm new to this ML. In 4. Persistence, when I'm applying the following code, I get an error:

Code applied:

# report performance
mse = mean_squared_error(test, predictions)
rmse = sqrt(mse)
print('RMSE: %.3f' % rmse)

Error:

NameError Traceback (most recent call last)
in
1 # report performance
----> 2 mse = mean_squared_error(test, predictions)
3 rmse = sqrt(mse)
4 print('RMSE: %.3f' % rmse)
NameError: name 'mean_squared_error' is not defined

All previous steps worked correctly. Can you please advise what seems to be the issue? Thanks in advance.

The error suggests that you have not imported the function. Perhaps you skipped some lines of code?

Hi, I don't know a better way to get in touch with you. I think I found someone plagiarizing your work here: I will delete this comment after you read it. Thank you.

Thanks for letting me know; it happens all the time.

Thanks Jason, great post. Quick question: I am not getting the same results when running the manual ARIMA model.
Your results of 956.942 vs. my 961. Everything else matched up until that point, but I was also having issues with autocorrelation plots only showing 21 observations vs. 81. Any help is greatly appreciated.

# evaluate manually configured ARIMA model
from pandas import read_csv
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.arima.model import ARIMA
from math import sqrt
# create a differenced series
def difference(dataset, interval=1):
    diff = list()
    for i in range(interval, len(dataset)):
        value = dataset[i] - dataset[i - interval]
        diff.append(value)
    return diff
# invert differenced value
def inverse_difference(history, yhat, interval=1):
    return yhat + history[-interval]
# load data
series = read_csv('dataset.csv', header=None, index_col=0, parse_dates=True, squeeze=True)
# prepare data
X = series.values
X = X.astype('float32')
train_size = int(len(X) * 0.50)
train, test = X[0:train_size], X[train_size:]
# walk-forward validation
history = [x for x in train]
predictions = list()
for i in range(len(test)):
    # difference data
    months_in_year = 12
    diff = difference(history, months_in_year)
    # predict
    model = ARIMA(diff, order=(1,1,1))
    model_fit = model.fit()
    yhat = model_fit.forecast()[0]
    yhat = inverse_difference(history, yhat, months_in_year)
    predictions.append(yhat)
    # observation
    obs = test[i]
    history.append(obs)
    print('>Predicted=%.3f, Expected=%.3f' % (yhat, obs))
# report performance
rmse = sqrt(mean_squared_error(test, predictions))
print('RMSE: %.3f' % rmse)

---------- Results

>Predicted=8076.987, Expected=8314.000
>Predicted=9747.154, Expected=10651.000
>Predicted=5994.362, Expected=3633.000
>Predicted=3820.287, Expected=4292.000
>Predicted=4041.968, Expected=4154.000
>Predicted=4990.405, Expected=4121.000
>Predicted=5129.641, Expected=4647.000
>Predicted=5031.196, Expected=4753.000
>Predicted=4133.285, Expected=3965.000
>Predicted=2095.321, Expected=1723.000
>Predicted=5216.271, Expected=5048.000
>Predicted=5866.317, Expected=6922.000
>Predicted=8591.060, Expected=9858.000
>Predicted=11028.649, Expected=11331.000
>Predicted=4090.352, Expected=4016.000
>Predicted=4767.109, Expected=3957.000
>Predicted=4656.326, Expected=4510.000
>Predicted=4577.708, Expected=4276.000
>Predicted=5108.656, Expected=4968.000
>Predicted=5202.831, Expected=4677.000
>Predicted=4423.982, Expected=3523.000
>Predicted=2162.388, Expected=1821.000
>Predicted=5463.233, Expected=5222.000
>Predicted=7331.345, Expected=6872.000
>Predicted=10258.650, Expected=10803.000
>Predicted=11732.476, Expected=13916.000
>Predicted=4552.498, Expected=2639.000
>Predicted=4578.764, Expected=2899.000
>Predicted=4914.578, Expected=3370.000
>Predicted=4545.624, Expected=3740.000
>Predicted=5229.775, Expected=2927.000
>Predicted=4287.729, Expected=3986.000
>Predicted=3153.080, Expected=4217.000
>Predicted=1827.778, Expected=1738.000
>Predicted=5134.493, Expected=5221.000
>Predicted=6806.278, Expected=6424.000
>Predicted=10643.889, Expected=9842.000
>Predicted=13606.245, Expected=13076.000
>Predicted=2265.432, Expected=3934.000
>Predicted=2936.318, Expected=3162.000
>Predicted=3341.109, Expected=4286.000
>Predicted=3881.790, Expected=4676.000
>Predicted=3156.601, Expected=5010.000
>Predicted=4693.892, Expected=4874.000
>Predicted=4663.923, Expected=4633.000
>Predicted=2045.499, Expected=1659.000
>Predicted=5440.863, Expected=5951.000
RMSE: 961.548

This is a common question that I answer here:

TypeError: only size-1 arrays can be converted to Python scalars

I am getting this error when running the persistence model for the French champagne data.

Sorry to hear that; perhaps these tips will help:

No worries, it was the way the data was imported.

Happy to hear that!
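Several of the exchanges above circle back to the same walk-forward loop (yhat, obs, history) and to RMSE errors. Stripped of statsmodels and scikit-learn, the loop reduces to a few lines; the numbers here are made up, and a persistence forecast stands in for the ARIMA model:

```python
from math import sqrt

# Made-up monthly series standing in for the champagne data.
data = [266.0, 145.9, 183.1, 119.3, 180.3, 168.5,
        231.8, 224.5, 192.8, 122.9, 336.5, 185.9]

# 50/50 split, as in the tutorial.
train_size = int(len(data) * 0.50)
train, test = data[:train_size], data[train_size:]

# Walk-forward validation with a persistence "model": predict the last
# observed value, then fold the real observation back into history.
history = list(train)
predictions = []
for obs in test:
    yhat = history[-1]       # persistence forecast
    predictions.append(yhat)
    history.append(obs)      # the observation becomes available afterwards

# RMSE without scikit-learn.
mse = sum((p - o) ** 2 for p, o in zip(predictions, test)) / len(test)
print('RMSE: %.3f' % sqrt(mse))
```

The two lines inside the loop are the whole idea: predict first, then append the real observation so it is available for the next step.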
https://machinelearningmastery.com/time-series-forecast-study-python-monthly-sales-french-champagne/
by Jose Cruz Learn how to connect sensors to the Raspberry Pi and control them with Java. Published September 2014 The latest release of Java ME 8 includes a powerful API for controlling devices such as LEDs, relays, LCDs, sensors, motors, and switches. This article is the first in a three-part series about how to connect electronic sensors to the Raspberry Pi Model B using general-purpose input/output (GPIO), inter-integrated circuit bus (I2C), serial peripheral interface bus (SPI), or universal asynchronous receiver/transmitter (UART) interfaces. By using Java ME 8 to control devices with different types of interfaces and connecting the devices to a Raspberry Pi, we can create an Internet of Things (IoT) world. This article focuses on using GPIO and shows examples of how to develop classes in Java ME 8 that can detect flames, motion, and distance. Note: The complete example code for this NetBeans IDE 8.0 project can be downloaded here. The Device I/O API specification defines a generic peripheral device I/O API for Java applications running on small embedded devices. It defines APIs for some of the most common peripheral devices, including GPIO, I2C, SPI, and UART devices. A GPIO device can be used as either a digital input or a digital output, it can be disabled or enabled, and it can be used to drive "interrupt" lines. However, a very important consideration is that all Raspberry Pi GPIO pins operate at 3.3 V. Therefore, it is very important to check the technical specifications of the devices you want to connect to determine if they are using 3.3 V or 5 V. In some cases, you will need to use a logic level converter such as this. Based on the block diagram shown in Figure 1 and the components shown in Figure 2, we create the circuits shown in Figure 3. Figure 1. Block diagram of the circuits we will create Figure 2. Components we will use Figure 3.
Schematic of the circuits we will create The DFR0076 flame sensor from DFRobot can be used to detect fire or other wavelengths of light between approximately 760 nm and 1100 nm. We can connect it to 3.3 V or 5 V, and the detection range is approximately 20 cm (4.8 V) to 100 cm (1 V). When fire is detected, it pulls up the signal pin. Let's connect the flame sensor to the Raspberry Pi's 3.3 V, Gnd, and GPIO 22 pins, as shown in Figure 3, and create a Java ME 8 class for the flame detector sensor control. First, create a class DFR0076Device that uses the Device Access API, and define a variable pin that supports the interface to GPIO, as shown in Listing 1. public class DFR0076Device { private GPIOPin pin = null; //Define the pin for flame sensor control Listing 1. Class for the flame detector sensor control Next, create a class constructor that initializes and activates the GPIO 22 pin using the DeviceManager API and the GPIOPinConfig class (see Listing 2) to establish the following conditions: the pin is configured as input-only, with pull-up mode and a rising-edge trigger. public DFR0076Device(int pinGPIO) { ... pin = (GPIOPin) DeviceManager.open(new GPIOPinConfig( 0, pinGPIO,GPIOPinConfig.DIR_INPUT_ONLY,GPIOPinConfig.MODE_INPUT_PULL_UP, GPIOPinConfig.TRIGGER_RISING_EDGE, false)); ... } Listing 2. Establishing the initial conditions Now, create a method that receives a defined listener class that supports flame detection events, as shown in Listing 3. public void setListener(PinListener flameListener) { ... if (pin!=null) pin.setInputListener(flameListener); ... } Listing 3. Method that supports flame detection events It's also important that you close the pin when you are done, and also make sure you free the pin listener, as shown in Listing 4. public void close() { ... if (pin!=null){ pin.setInputListener(null); pin.close(); } ... } Listing 4. Closing the pin and freeing the listener Now, create a main MIDlet that invokes our code and defines a listener class for processing flame detection events, as shown in Listing 5.
public class TestSensors extends MIDlet { DFR0076Device flame; private static final int FLAME_DETECTOR_PIN = 22; public void startApp() { //Initialize Flame Sensor flame = new DFR0076Device(FLAME_DETECTOR_PIN); flame.setListener(new FlameSensor()); } public void destroyApp(boolean unconditional) { flame.close(); } private static int waitnext = 1; class FlameSensor implements PinListener { public void valueChanged(PinEvent event) { if (event.getValue() && --waitnext == 0) { System.out.println("WARNING Flame detected!!!"); waitnext = 10; } } } } Listing 5. Creating a MIDlet to invoke our code Now let's add motion detector functionality to our TestSensors MIDlet. To do that, we need a motion sensor such as the HC-SR501 shown in Figure 2. PIR sensors enable you to sense motion. Everything emits a small amount of infrared radiation, and the hotter something is, the more radiation it emits. PIR sensors are able to detect a change in IR levels that occur within their detection zone (for example, when a human enters a room) and, hence, sense motion. The PIR sensor we'll be using has three pins: ground, digital out, and 3–5 Vdc in. At idle, when no motion has been detected, the digital out signal will remain low. However, when motion is detected, the digital out signal will pulse high (3.3 V). It is OK to connect the digital out pin directly to the Raspberry Pi. For testing, the PIR sensor has a jumper (see Figure 4). You can make the following adjustments: Figure 4. Test jumper and potentiometers Let's connect the PIR sensor to the Raspberry Pi 5 V, Gnd, and GPIO 24 pins, as shown in Figure 3, and create a Java ME 8 class HCSR501Device to control it using the Device Access API, as shown in Listing 6. public class HCSR501Device { private GPIOPin pin = null; Listing 6. 
HCSR501Device class Then, create a class constructor that initializes and activates the GPIO 24 pin using the DeviceManager API and the GPIOPinConfig class (see Listing 7) to establish the following conditions: the pin is configured as input-only, with pull-down mode and a rising-edge trigger. public HCSR501Device(int pinGPIO) { ... pin = (GPIOPin) DeviceManager.open(new GPIOPinConfig( 0, pinGPIO, GPIOPinConfig.DIR_INPUT_ONLY, GPIOPinConfig.MODE_INPUT_PULL_DOWN, GPIOPinConfig.TRIGGER_RISING_EDGE, false)); I2CUtils.I2Cdelay(3000); //wait for 3 seconds ... } Listing 7. Establishing the initial conditions Now, create a method that receives a defined listener class that supports motion detection events, as shown in Listing 8. public void setListener(PinListener pirListener) { ... if (pin!=null) pin.setInputListener(pirListener); ... } Listing 8. Method that supports motion detection events It's also important that you close the pin when you are done, and also make sure you free the pin listener as shown in Listing 9. public void close() { ... if (pin!=null){ pin.setInputListener(null); pin.close(); } ... } Listing 9. Closing the pin and freeing the listener Let's extend our MIDlet class to support the PIR sensor and its listener, as shown in Listing 10. //Define HCSR501 Device object HCSR501Device pir; private static final int MOTION_DETECTOR_PIN = 24; @Override public void startApp() { ... //Initialize PIR sensor pir = new HCSR501Device(MOTION_DETECTOR_PIN); pir.setListener(new PirSensor()); ... } @Override public void destroyApp(boolean unconditional) { ... pir.close(); ... } //Check PIR Sensor for motion detection class PirSensor implements PinListener { @Override public void valueChanged(PinEvent event) { if (event.getValue()) { System.out.println("WARNING Motion detected!!!"); } } } Listing 10. Extending the MIDlet class to support the PIR sensor and its listener The HC-SR04 is an ultrasonic ranging detector that uses sonar to determine the distance to an object much like bats and dolphins do.
It comes complete with an ultrasonic transmitter and receiver module and has the following features. To start a measurement, the HC-SR04 trigger pin must receive a high (5 V) pulse for at least 10 µs, which will initiate the sensor and cause it to transmit eight cycles of ultrasonic bursts at a frequency of 40 KHz and then wait for the receiver to receive the reflected ultrasonic bursts. When the sensor detects ultrasonic bursts from the receiver, it will set to high (5 V) the echo pin and wait for a period that is in proportion to the distance being measured. To calculate the distance, use the following formula, where "speed of sound in cm/sec" equals 34,029 cm/sec: Distance = ((Duration in ns of echo pin's high level)*(speed of sound in cm/sec))/ 2 /1000000000 ns As mentioned earlier, this sensor operates from 5 V, so the trigger pin can be activated by a 3.3 V signal from GPIO 23 with no problem. In the case of reading the echo signal level, while it is active, the sensor sends 5 V, which is not supported by the maximum Raspberry Pi GPIO 17 pin's voltage, so it is necessary to work with a voltage divider that allows us to convert the signal from 5 V to 3.3 V. In Figure 5, we see how we can build this voltage divider using two resistors of 330 ohms and 470 ohms to achieve a voltage of 2.9 V at the GPIO 17 pin. You can use this online voltage divider tool to calculate Vout based on the two resistors. Figure 5. Circuit for the voltage divider It's time to create a Java ME 8 class called HCSR04Device to control the HC-SR04 sensor and send a pulse to measure distances in centimeters. See Listing 11. public class HCSR04Device { private final int PULSE = 10000; // #10 µs pulse = 10,000 ns private final int SPEEDOFSOUND = 34029; // Speed of sound = 34029 cm/s private GPIOPin trigger = null; private GPIOPin echo = null; Listing 11. 
Creating the HCSR04Device class Now, create a class constructor that initializes and activates the GPIO trigger and echo pins using the DeviceManager API and the GPIOPinConfig class (see Listing 12) to establish the following conditions: the trigger pin is configured as push-pull output and the echo pin as pull-up input, with no trigger events. public HCSR04Device(int _trigger, int _echo) { ... trigger = (GPIOPin) DeviceManager.open(new GPIOPinConfig(0, _trigger, GPIOPinConfig.DIR_OUTPUT_ONLY, GPIOPinConfig.MODE_OUTPUT_PUSH_PULL, GPIOPinConfig.TRIGGER_NONE, false)); echo = (GPIOPin) DeviceManager.open(new GPIOPinConfig(0, _echo, GPIOPinConfig.DIR_INPUT_ONLY, GPIOPinConfig.MODE_INPUT_PULL_UP, GPIOPinConfig.TRIGGER_NONE, false)); I2CUtils.I2Cdelay(500); //wait for 0.5 seconds ... } Listing 12. Establishing the initial conditions The pulse method calculates the distance in centimeters by setting the trigger pin from 1 to 0 for 10 µs (10,000 ns), and then it computes the time it takes for the echo pin to go from 0 to 1, all in nanoseconds, as shown in Listing 13. public double pulse() { long distance = 0; try { trigger.setValue(true); //Send a pulse trigger; must be 1 and 0 with a 10 µs wait I2CUtils.I2CdelayNano(0, PULSE);// wait 10 µs trigger.setValue(false); long starttime = System.nanoTime(); //ns long stop = starttime; long start = starttime; //echo will go 0 to 1 and need to save time for that. 2 seconds difference while ((!echo.getValue()) && (start < starttime + 1000000000L * 2)) { start = System.nanoTime(); } while ((echo.getValue()) && (stop < starttime + 1000000000L * 2)) { stop = System.nanoTime(); } long delta = (stop - start); distance = delta * SPEEDOFSOUND; // echo from 0 to 1 depending on object distance } catch (IOException ex) { Logger.getGlobal().log(Level.WARNING,ex.getMessage()); } return distance / 2.0 / (1000000000L); // cm/s } Listing 13. Method for calculating the distance Finally, free all resources by closing both pins (see Listing 14). public void close() { ...
if ((trigger!=null) && (echo!=null)){ trigger.close(); echo.close(); } ... } Listing 14. Closing both pins Now, let's extend our MIDlet class to support the HC-SR04 sensor and calculate distances, as shown in Listing 15. public class TestSensors extends MIDlet { //Define HCSR04 Device object HCSR04Device hcsr04; private static final int TRIGGER_PIN = 23; private static final int ECHO_PIN = 17; //Define execution of read sensors thread private volatile boolean shouldRun = true; private ReadSensors sensorsTask; @Override public void startApp() { ... //Initialize Ultrasound sensor hcsr04=new HCSR04Device(TRIGGER_PIN, ECHO_PIN); //Start read sensors data thread sensorsTask=new ReadSensors(); sensorsTask.start(); } @Override public void destroyApp(boolean unconditional) { shouldRun=false; ... hcsr04.close(); } // Thread to read distance each 5 seconds class ReadSensors extends Thread { private double distance=0.0; @Override public void run() { while (shouldRun){ distance = hcsr04.pulse(); if (distance>0) System.out.println("Object detected at " + distance + " cm."); I2CUtils.I2Cdelay(5000); } } } } Listing 15. Extending the MIDlet class to support the HC-SR04 sensor and calculate distances Before running this MIDlet using NetBeans IDE 8.0, it is important to establish API permissions. To do that, select project JavaMEDemos, right-click and select Properties to show the Project Properties window, select Application Descriptor, and select the API Permissions tab. Include the following four permissions, as shown in Figure 6: jdk.dio.DeviceMgmtPermission *:* , open jdk.dio.gpio.GPIOPinPermission *:* , open, setdirection jdk.dio.gpio.GPIOPortPermission *:* , open java.util.logging.LoggingPermission control, null Figure 6. Establishing API permissions If you do not have a Raspberry Pi but want to test the MIDlet, NetBeans IDE 8.0 has a device emulator that you can create, adapted to a selected pin configuration.
In our case we will select the following pin configuration: We will configure the emulator to define these GPIO pins and do our MIDlet tests before uploading Java code to Raspberry Pi. First, select Tools > Java ME > Custom Device Editor to open the Custom Device Editor window shown in Figure 7. Figure 7. Custom Device Editor window Select MEEP and then click New to create a new customizable device call MEEPCustomDevice, which is shown in the Edit MEEP Device window shown in Figure 8. Figure 8. Edit MEEP Device window In the GPIO tab, create the needed GPIO pins and specify their configuration. When you have finished, click OK. In NetBeans IDE, you will now see the new device, MEEPCustomDevice, in the Device Selector window. Install your MIDlet with the device you created by selecting the External Events Generator icon and clicking Install. See Figure 9. Figure 9. Installing the MIDlet with MEEPCustomDevice Select the GPIO Pins tab to see all the GPIO pins (Figure 10) that you configured in the Edit MEEP Device window (Figure 8). Figure 10. Examining the GPIO pins Selecting Tools > External Events Generator opens a new window, which is shown in Figure 11. In that window, you can click the buttons for the GPIO 17, GPIO 22, and GPIO 24 pins to emulate a distance sensor, a flame sensor, and a motion sensor, respectively. In addition, at the console you can see a log of the detected events, as shown in Figure 12. Figure 11. External Events Generator window Figure 12. Log of the detected events Each device has its own technical specifications, which must be checked in detail before deciding what type of interface can be connected. In particular, it is important to determine what voltage each device is designed to work with to ensure the life of each device and the Raspberry Pi. The GPIO interface facilitates the connection of sensors that can be used as either a digital input or a digital output, can be disabled or enabled, and can be used to drive "interrupt" lines. 
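The voltage point above is easy to check numerically. A few lines (Python here, purely for the arithmetic) reproduce the roughly 2.9 V figure for the 330 ohm / 470 ohm divider on the HC-SR04 echo line, and apply the article's distance formula to a hypothetical 1 ms echo pulse:

```python
# Voltage divider on the 5 V echo line: Vout = Vin * R2 / (R1 + R2).
vin, r1, r2 = 5.0, 330.0, 470.0
vout = vin * r2 / (r1 + r2)
print("echo pin sees %.2f V" % vout)  # below the 3.3 V GPIO limit

# HC-SR04 distance: (echo high time in ns) * (speed of sound) / 2 / 1e9.
speed_of_sound = 34029  # cm/s, as in the article
echo_ns = 1_000_000     # hypothetical 1 ms echo pulse
distance_cm = echo_ns * speed_of_sound / 2 / 1_000_000_000
print("distance: %.1f cm" % distance_cm)
```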
For each device, it is important to define a Java ME 8 class that handles the required transactions and contains suitable control logic, such as the DFR0076Device, HCSR501Device, and HCSR04Device classes we created for our flame detector, motion detector, and distance sensor devices.

By creating MIDlets, you can easily deploy an application to the Raspberry Pi and run all kinds of experiments; the only limit is your imagination. If you don't have a Raspberry Pi but want to emulate one, NetBeans IDE 8 is a powerful Java integrated development environment that lets you create a virtual device and experiment with interfaces you define.

In the next articles in this series, we will examine other types of sensors—such as sensors that can detect temperature, pressure, and light levels—using other types of interfaces such as I2C, SPI, and UART.

Jose Cruz (@joseacruzp) is a software engineer who has been working with Java since 1998. He is a lead developer of Java, Java ME, and Java EE at Ferreteria EPA C.A. in Venezuela. From an early age, his hobby has been electronics. This has led him to combine computing with electronics and develop projects in which Java and embedded devices such as Arduino and Raspberry Pi are the protagonists.
http://www.oracle.com/technetwork/articles/java/cruz-gpio-2295970.html
Last revised: February 27, 2000, by Ron Bickers. Last modified on 2005/02/27.

This document describes setting up and using Zope on BeOS. I wouldn't run this on BeOS to do any real serving, especially with BeOS's minimal select() support (see below). It should also be noted that NetPositive doesn't behave with the Zope management interface for whatever reason. However, Opera does just fine.

NOTE: Since BeOS is now a dead operating system (unless the Open Source version of it takes off), this information is not worth much anymore.

This has only been tested on BeOS Intel (4.5.2) with Zope 2.1.2 and without PCGI, so if you're running anything else, you may have problems. PCGI doesn't compile, and I haven't tried to get it to, since my use of Zope on BeOS is only for testing Zope products during development.

Since Zope 2.x requires Python 1.5.2, you'll need to have Python installed. However, as of this writing, the binary distribution of Python for BeOS has a broken select(), so you'll need to either compile Python from the sources or download my binaries.

Python compiles cleanly on BeOS. However, in the 1.5.2 release (and in the BeOS binary distribution available on the BeWare site) there is a problem with Modules/selectmodule.c which needs to be patched. Save the following patch as selectmodule.c.patch:

--- selectmodule.c.dist Sat Jan 15 00:49:54 2000
+++ selectmodule.c      Sat Jan 15 00:54:14 2000
@@ -141,7 +141,7 @@
         "argument must be an int, or have a fileno() method.");
     goto finally;
 }
-#if defined(_MSC_VER) || defined(__BEOS__)
+#if defined(_MSC_VER)
     max = 0; /* not used for Win32 */
 #else /* !_MSC_VER */
     if (v < 0 || v >= FD_SETSIZE) {

Apply the patch:

$ patch < selectmodule.c.patch

Note that revision 2.30 and later from the Python CVS repository includes the patch. After applying the patch, follow the directions in BeOS/README carefully to complete the installation.
Zope also requires FCNTL.py, which can be created by running h2py.py in the Tools/scripts directory as follows:

$ python h2py.py /boot/develop/headers/posix/fcntl.h

Then copy the resulting FCNTL.py file to /boot/home/config/lib/python1.5/.

BeOS has limited support for select(). Very limited. It only supports sockets (not files or pipes or anything else), and it only works for "reads", which means you can only check a socket to see if there is incoming data, not outgoing. Unfortunately, Zope uses select() for reads and writes to sockets in ZServer/medusa/asyncore.py, as well as a pipe in ZServer/medusa/select_trigger.py.

There is support in select_trigger.py to use a loopback socket instead of the pipe. So with a small patch to get BeOS to use the non-posix code, the lack of pipe support is no longer a problem. However, I haven't found a clean fix for the lack of "write" support, and since poll() isn't supported at all in BeOS, we can't use that. So instead of a clean fix, there's a dirty, but usable, workaround. Since select() will never tell Zope there's something to send, we have to bypass select() for any outgoing data and just attempt to send every time. Fortunately, attempting to send if there really isn't anything to send will simply be ignored by Zope.

Another problem is that anything waiting to be sent will not go out until either the select() timeout is reached (set to 30.0 seconds by default) or there is something else to read. To avoid this, we can set the timeout to some ridiculously low number, like 0.1 seconds. That's a lot of unnecessary work every 0.1 seconds, but with Pulse and other tools the load isn't noticeable. Changing it to 0.0 seconds did put a strain on the CPU, so don't do it.

It's rumored that Be will provide more complete support for select() in an upcoming release, so all of these issues will go away and Zope should compile without modifications.
Until then, however, this is a reasonable solution for using BeOS for developing Zope products. If for some reason you're using BeOS as a server, I would wait for a better select().

With better select() support, Zope would compile on BeOS without modifications. However, not living in a perfect world, we're left with this workaround. To compile Zope you need to have the Python sources, and you need to modify /boot/home/config/lib/python1.5/config/Makefile. Change LDSHARED to point to BeOS/linkmodule in your Python source tree, and change LINKCC to point to BeOS/linkcc in your Python source tree. By default they're relative paths that don't work when compiling Zope.

There are three files that need to be patched. Copy the following patches, select_trigger.py.patch and asyncore.py.patch, to ZServer/medusa/:

--- ZServer/medusa/select_trigger.py.dist Fri Jan 14 17:48:40 2000
+++ ZServer/medusa/select_trigger.py      Fri Jan 14 17:51:40 2000
@@ -10,7 +10,8 @@
 import string
 import thread

-if os.name == 'posix':
+# Changed to != 'posix' as workaround for BeOS's incomplete posix select()
+if os.name != 'posix':

     class trigger (asyncore.file_dispatcher):

@@ -93,7 +94,7 @@

     class trigger (asyncore.dispatcher):

-        address = ('127.9.9.9', 19999)
+        address = ('127.0.0.1', 19999)

         def __init__ (self):
             a = socket.socket (socket.AF_INET, socket.SOCK_STREAM)

@@ -103,7 +104,7 @@
             w.setsockopt(socket.IPPROTO_TCP, 1, 1)

             # tricky: get a pair of connected sockets
-            host='127.9.9.9'
+            host='127.0.0.1'
             port=19999
             while 1:
                 try:

--- ZServer/medusa/asyncore.py.dist Fri Jan 14 17:48:15 2000
+++ ZServer/medusa/asyncore.py      Fri Jan 14 21:36:42 2000
@@ -52,7 +52,7 @@
             if s.writable():
                 w.append (s)

-        (r,w,e) = select.select (r,w,e, timeout)
+        (r,None,None) = select.select (r,w,e, timeout)

         for x in r:
             try:

Apply them:

$ patch < select_trigger.py.patch
$ patch < asyncore.py.patch

Copy the following patch to z2.py.patch in the top-level source directory:

--- z2.py.dist Sat Jan 15 01:27:34 2000
+++ z2.py      Sat Jan 15 01:27:47 2000
@@ -580,7 +580,7 @@

 # Start Medusa, Ye Hass!
 sys.ZServerExitCode=0
-asyncore.loop()
+asyncore.loop(timeout=0.1)
 sys.exit(sys.ZServerExitCode)

Apply it:

$ patch < z2.py.patch

These patches are tiny, and serve only as the select() workaround. From the top-level source directory, run 'python wo_pcgi' to compile Zope. You should now be able to modify 'start' as you wish and run Zope.
http://old.zope.org/Members/rbickers/Zope_on_BeOS/
So I have:

C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.0\Profile\Client\System.Speech.dll

...and I can add the reference in Visual Studio, but using System.Speech etc. won't register with the console application, and therefore speech synthesis and recognition do not work. Would appreciate any and all help, thanks!

It's a little out of date, but this tutorial shows you how to get started using the System.Speech classes. It sounds like what you are doing now is trying to use the class directly. The first step is to make an instance of the SpeechSynthesizer or SpeechRecognizer class, with which you can perform the System.Speech actions.

You need to add a reference to the System.Speech assembly; then you are free to use speech like so:

using System;
using System.Speech;             // <-- sounds like what you are using, not necessary for this example
using System.Speech.Recognition; // <-- you need this

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SpeechRecognizer recognizer = new SpeechRecognizer())
            {
                // do something
            }
        }
    }
}

Just in case, here is the reference I am using (via project > References > Add Reference...). It is not necessary to use the Browse function. I'm assuming you are not using COM too.
http://www.devsplanet.com/question/35282092
I am currently trying to write a palindrome checker in Java that is case insensitive. I checked the other topics but none of them seems to solve my issue. Here's my code:

import java.util.Scanner;

public class Homework5_2 {
    public static void main(String[] args) {
        boolean flag = true; // palindrome or not
        Scanner console = new Scanner(System.in);
        System.out.print("Enter one or more words: ");
        String s = console.next();

        // checks if string contains spaces
        if (s.matches(".*\\s+.*")) {
            s = s.replaceAll("\\s+", "");
        }

        s = s.toLowerCase();
        int stringLength = s.length();
        int index = 0;

        // checks the string from both sides going towards the middle
        for (int i = 0; i < stringLength / 2; i++) {
            index = stringLength - i - 1;
            if (!(s.charAt(i) == s.charAt(index))) {
                flag = false;
            }
        }

        if (flag == true) {
            System.out.println("The string is a palindrome!");
        } else {
            System.out.println("The string is not a palindrome!");
        }
    }
}

Use console.nextLine() instead of console.next(). By default, console.next() only gathers the next space-separated token, so when you enter "Os SO" it's only actually storing "Os" into the String s variable.

In terms of checking for palindromes, it is much easier to reverse the string and check if the reversed string is equal to the original instead of using indexes to check each individual character in the string.
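Following the answer's suggestion, here is a hedged sketch of the reverse-and-compare approach. The PalindromeCheck class and its isPalindrome method are illustrative names of my own; the normalization steps (strip whitespace, lowercase) mirror the original program.

```java
// Reverse-and-compare palindrome check: strip whitespace, lowercase,
// then compare the cleaned string with its reversal.
public class PalindromeCheck {

    public static boolean isPalindrome(String s) {
        // Same normalization as the original program
        String cleaned = s.replaceAll("\\s+", "").toLowerCase();
        // StringBuilder.reverse() avoids manual index bookkeeping
        String reversed = new StringBuilder(cleaned).reverse().toString();
        return cleaned.equals(reversed);
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("Os SO"));  // prints true
        System.out.println(isPalindrome("hello"));  // prints false
    }
}
```

Because the whole line is normalized before the comparison, this version also handles input containing spaces when read with console.nextLine().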
https://codedump.io/share/58p7MMwr38GQ/1/java-palindrome-checker---case-insensitive
Designing with Interfaces & Abstract Classes

The Compiler Proof

Can you prove or disprove that interfaces have a true is-a relationship? In the case of Java, you can let the compiler tell you. Consider the following code:

Dog D = new Dog();
Head H = D;

When this code is run through the compiler, the following error is produced:

Test.java:6: Incompatible type for Identifier. Can't convert Dog to Head.
Head H = D;

Now consider the following code instead:

Dog D = new Dog();
Mammal M = D;

This is a true inheritance relationship, and it is not surprising that the compiler parses this code cleanly because a dog is a mammal.

Now, you can perform the true test of the interface. Is an interface an actual is-a relationship? The compiler thinks so:

Dog D = new Dog();
Nameable N = D;

This code works fine. So, as far as the compiler is concerned, an interface behaves like a true is-a relationship. You can explore this concept in more detail with an example. The Planet class may have code like this:

Listing 7

public class Planet {
    String planetName;
    public String getPlanetName() { return planetName; }
}

Likewise, the Car class may have code like this:

Listing 8

public class Car {
    String carName;
    public String getCarName() { return carName; }
}

And the Dog class may have code like this:

Listing 9

public class Dog {
    String dogName;
    public String getDogName() { return dogName; }
}

The obvious problem is that each class defines a differently named method for retrieving its name. To standardize the naming convention, you can create an interface (you can use the Nameable interface that you created previously). The convention is that all classes must implement Nameable. In this way, the users only have to remember a single interface for all classes when it comes to naming conventions:

Listing 10

public interface Nameable {
    public String getName();
    public void setName(String aName);
}

The new classes—Planet, Car, and Dog—should look like this:

Listing 11

public class Planet implements Nameable {
    String planetName;
    public String getName() { return planetName; }
    public void setName(String aName) { planetName = aName; }
}

public class Car implements Nameable {
    String carName;
    public String getName() { return carName; }
    public void setName(String aName) { carName = aName; }
}

public class Dog implements Nameable {
    String dogName;
    public String getName() { return dogName; }
    public void setName(String aName) { dogName = aName; }
}

With this design, you have a standard interface and you have used a contract to ensure that it is the case. There is one little issue that you may have thought about.
The idea of a contract is great as long as everyone plays by the rules, but what if some shady individual does not abide by the contract? On a real project, he or she could be reprimanded or even removed from the team.

Conclusion

Basically, contracts are "plug-in points" into your code. Any place where you want to make parts of a system abstract, you can use a contract. Instead of coupling to objects of specific classes, you can connect to any object that satisfies the contract. Abstract classes and interfaces provide the mechanism to implement these design techniques. By utilizing these object-oriented concepts, you can produce solid designs that are reliable, flexible, reusable, and very maintainable.
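To make the contract concrete, the following sketch (with a hypothetical NameableDemo driver class of my own) shows client code that depends only on the Nameable interface from Listing 10, so every implementing class can be named through the same two methods.

```java
// Client code coupled only to the Nameable contract, not to any
// specific class: any implementer can be handled uniformly.
interface Nameable {
    String getName();
    void setName(String aName);
}

class Planet implements Nameable {
    private String planetName;
    public String getName() { return planetName; }
    public void setName(String aName) { planetName = aName; }
}

class Dog implements Nameable {
    private String dogName;
    public String getName() { return dogName; }
    public void setName(String aName) { dogName = aName; }
}

public class NameableDemo {
    public static void main(String[] args) {
        Nameable[] things = { new Planet(), new Dog() };
        things[0].setName("Mars");
        things[1].setName("Fido");
        // One naming convention works for every class that honors the contract.
        for (Nameable n : things) {
            System.out.println(n.getName()); // prints "Mars" then "Fido"
        }
    }
}
```

The loop never needs to know whether it holds a Planet or a Dog, which is exactly the "plug-in point" the article describes.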
https://www.developer.com/lang/article.php/10924_3642656_3/Designing-with-Interfaces-amp-Abstract-Classes.htm
Lists are not "built-in" as a language construct in Scala; they are defined by an abstract class List in the scala package, which comes with two subclasses for :: and Nil. In the following we present a quick tour through class List. This section presents a somewhat simplified account of the class, compared to its real implementation in the Scala standard library, which is covered in Section 22.3.

package scala
abstract class List[+T] {

List is an abstract class, so you cannot define elements by calling the empty List constructor. For instance the expression "new List" would be illegal. The class has a type parameter T. The + in front of this type parameter specifies that lists are covariant, as discussed in Chapter 19. Because of this property, you can assign a value of type List[Int], say, to a variable of type List[Any]:

scala> val xs = List(1, 2, 3)
xs: List[Int] = List(1, 2, 3)

scala> var ys: List[Any] = xs
ys: List[Any] = List(1, 2, 3)

All list operations can be defined in terms of three basic methods:

def isEmpty: Boolean
def head: T
def tail: List[T]

These three methods are all abstract in class List. They are defined in the subobject Nil and the subclass ::. The hierarchy for List is shown in Figure 22.1.

The Nil object defines an empty list. Its definition is shown in Listing 22.1.

case object Nil extends List[Nothing] {
  override def isEmpty = true
  def head: Nothing =
    throw new NoSuchElementException("head of empty list")
  def tail: List[Nothing] =
    throw new NoSuchElementException("tail of empty list")
}

The Nil object inherits from type List[Nothing]. Because of covariance, this means that Nil is compatible with every instance of the List type. The three abstract methods of class List are implemented in the Nil object in a straightforward way: the isEmpty method returns true and the head and tail methods both throw an exception.
Note that throwing an exception is not only reasonable, but practically the only possible thing to do for head: Because Nil is a List of Nothing, the result type of head must be Nothing. Since there is no value of this type, this means that head cannot return a normal value. It has to return abnormally by throwing an exception.[1]

Class ::, pronounced "cons" for "construct," represents non-empty lists. It's named that way in order to support pattern matching with the infix ::. You have seen in Section 16.5 that every infix operation in a pattern is treated as a constructor application of the infix operator to its arguments. So the pattern x :: xs is treated as ::(x, xs) where :: is a case class. Here is the definition of the :: class:

final case class ::[T](hd: T, tl: List[T]) extends List[T] {
  def head = hd
  def tail = tl
  override def isEmpty: Boolean = false
}

The implementation of the :: class is straightforward. It takes two parameters hd and tl, representing the head and the tail of the list to be constructed. The definitions of the head and tail method simply return the corresponding parameter. In fact, this pattern can be abbreviated by letting the parameters directly implement the head and tail methods of the superclass List, as in the following equivalent but shorter definition of the :: class:

final case class ::[T](head: T, tail: List[T]) extends List[T] {
  override def isEmpty: Boolean = false
}

This works because every case class parameter is implicitly also a field of the class (it's like the parameter declaration was prefixed with val). Recall from Section 20.3 that Scala allows you to implement an abstract parameterless method such as head or tail with a field. So the code above directly uses the parameters head and tail as implementations of the abstract methods head and tail that were inherited from class List.

All other List methods can be written using the basic three.
For instance:

def length: Int =
  if (isEmpty) 0 else 1 + tail.length

or:

def drop(n: Int): List[T] =
  if (isEmpty) Nil
  else if (n <= 0) this
  else tail.drop(n - 1)

or:

def map[U](f: T => U): List[U] =
  if (isEmpty) Nil
  else f(head) :: tail.map(f)

The list construction methods :: and ::: are special. Because they end in a colon, they are bound to their right operand. That is, an operation such as x :: xs is treated as the method call xs.::(x), not x.::(xs). In fact, x.::(xs) would not make sense, as x is of the list element type, which can be arbitrary, so we cannot assume that this type would have a :: method.

For this reason, the :: method should take an element value and yield a new list. What is the required type of the element value? You might be tempted to say, it should be the same as the list's element type, but in fact this is more restrictive than necessary. To see why, consider this class hierarchy:

abstract class Fruit
class Apple extends Fruit
class Orange extends Fruit

Listing 22.2 shows what happens when you construct lists of fruit:

scala> val apples = new Apple :: Nil
apples: List[Apple] = List(Apple@585fa9)

scala> val fruits = new Orange :: apples
fruits: List[Fruit] = List(Orange@cd6798, Apple@585fa9)

The apples value is treated as a List of Apples, as expected. However, the definition of fruits shows that it's still possible to add an element of a different type to that list. The element type of the resulting list is Fruit, which is the most precise common supertype of the original list element type (i.e., Apple) and the type of the element to be added (i.e., Orange). This flexibility is obtained by defining the :: method (cons) as shown in Listing 22.3.

def ::[U >: T](x: U): List[U] = new scala.::(x, this)

Note that the method is itself polymorphic—it takes a type parameter named U. Furthermore, U is constrained in [U >: T] to be a supertype of the list element type T. The element to be added is required to be of type U and the result is a List[U].
With the formulation of :: shown in Listing 22.3, you can check how the definition of fruits shown in Listing 22.2 works out type-wise: in that definition the type parameter U of :: is instantiated to Fruit. The lower-bound constraint of U is satisfied, because the list apples has type List[Apple] and Fruit is a supertype of Apple. The argument to the :: is new Orange, which conforms to type Fruit. Therefore, the method application is type-correct with result type List[Fruit]. Figure 22.2 illustrates the structure of the lists that result from executing the code shown in Listing 22.3.

In fact, the polymorphic definition of :: with the lower bound T is not only convenient; it is also necessary to render the definition of class List type-correct. This is because Lists are defined to be covariant. Assume for a moment that we had defined :: like this:

// A thought experiment (which wouldn't work)
def ::(x: T): List[T] = new scala.::(x, this)

You saw in Chapter 19 that method parameters count as contravariant positions, so the list element type T is in contravariant position in the definition above. But then List cannot be declared covariant in T. The lower bound [U >: T] thus kills two birds with one stone: it removes a typing problem, and it leads to a :: method that's more flexible to use. The list concatenation method ::: is defined in a similar way to ::, as shown in Listing 22.4.

def :::[U >: T](prefix: List[U]): List[U] =
  if (prefix.isEmpty) this
  else prefix.head :: prefix.tail ::: this

Like cons, concatenation is polymorphic. The result type is "widened" as necessary to include the types of all list elements. Note also that again the order of the arguments is swapped between an infix operation and an explicit method call. Because both ::: and :: end in a colon, they both bind to the right and are both right associative. For instance, the else part of the definition of ::: shown in Listing 22.4 contains infix operations of both :: and :::.
These infix operations can be expanded to equivalent method calls as follows:

prefix.head :: prefix.tail ::: this
  equals (because :: and ::: are right-associative)
prefix.head :: (prefix.tail ::: this)
  equals (because :: binds to the right)
(prefix.tail ::: this).::(prefix.head)
  equals (because ::: binds to the right)
this.:::(prefix.tail).::(prefix.head)

The typical access pattern for a list is recursive. For instance, to increment every element of a list without using map you could write:

def incAll(xs: List[Int]): List[Int] = xs match {
  case List() => List()
  case x :: xs1 => x + 1 :: incAll(xs1)
}

One shortcoming of this program pattern is that it is not tail recursive. Note that the recursive call to incAll above occurs inside a :: operation. Therefore each recursive call requires a new stack frame. On today's virtual machines this means that you cannot apply incAll to lists of much more than about 30,000 to 50,000 elements. This is a pity. How do you write a version of incAll that can work on lists of arbitrary size (as much as heap-capacity allows)?

One approach is to use a loop:

for (x <- xs) // ??

But what should go in the loop body? Note that where incAll above constructs the list by prepending elements to the result of the recursive call, the loop needs to append new elements at the end of the result list. One, very inefficient possibility is to use :::, the list append operator:

var result = List[Int]() // a very inefficient approach
for (x <- xs) result = result ::: List(x + 1)
result

This has terrible efficiency, though. Because ::: takes time proportional to the length of its first operand, the whole operation takes time proportional to the square of the length of the list. This is clearly unacceptable.

A better alternative is to use a list buffer. List buffers let you accumulate the elements of a list. To do this, you use an operation such as "buf += elem", which appends the element elem at the end of the list buffer buf.
Once you are done appending elements, you can turn the buffer into a list using the toList operation. ListBuffer is a class in package scala.collection.mutable. To use the simple name only, you can import ListBuffer from its package:

import scala.collection.mutable.ListBuffer

Using a list buffer, the body of incAll can now be written as follows:

val buf = new ListBuffer[Int]
for (x <- xs) buf += x + 1
buf.toList

This is a very efficient way to build lists. In fact, the list buffer implementation is organized so that both the append operation (+=) and the toList operation take (very short) constant time.

The implementations of list methods given in Section 22.1 are concise and clear, but suffer from the same stack overflow problem as the non-tail recursive implementation of incAll. Therefore, most methods in the real implementation of class List avoid recursion and use loops with list buffers instead. For example, Listing 22.5 shows the real implementation of map in class List:

final override def map[U](f: T => U): List[U] = {
  val b = new ListBuffer[U]
  var these = this
  while (!these.isEmpty) {
    b += f(these.head)
    these = these.tail
  }
  b.toList
}

This revised implementation traverses the list with a simple loop, which is highly efficient. A tail recursive implementation would be similarly efficient, but a general recursive implementation would be slower and less scalable. But what about the operation b.toList at the end? What is its complexity? In fact, the call to the toList method takes only a small number of cycles, which is independent of the length of the list.

To understand why, take a second look at class ::, which constructs non-empty lists. In practice, this class does not quite correspond to its idealized definition given previously in Section 22.1. The real definition is shown in Listing 22.6.
final case class ::[U](hd: U, private[scala] var tl: List[U]) extends List[U] {
  def head = hd
  def tail = tl
  override def isEmpty: Boolean = false
}

There's one peculiarity: the tl argument is a var! This means that it is possible to modify the tail of a list after the list is constructed. However, because the variable tl has the modifier private[scala], it can be accessed only from within package scala. Client code outside this package can neither read nor write tl.

Since the ListBuffer class is contained in a subpackage of package scala, scala.collection.mutable, ListBuffer can access the tl field of a cons cell. In fact the elements of a list buffer are represented as a list and appending new elements involves a modification of the tl field of the last :: cell in that list. Here's the start of class ListBuffer:

package scala.collection.mutable
final class ListBuffer[T] extends Buffer[T] {
  private var start: List[T] = Nil
  private var last0: ::[T] = _
  private var exported: Boolean = false
  ...

You see three private fields that characterize a ListBuffer: start points to the list of all elements stored in the buffer, last0 points to the last :: cell in that list, and exported indicates whether the buffer has been turned into a list using a toList operation.

The toList operation is very simple:

override def toList: List[T] = {
  exported = !start.isEmpty
  start
}

It returns the list of elements referred to by start and also sets exported to true if that list is nonempty. So toList is very efficient, because it does not copy the list which is stored in a ListBuffer.

But what happens if the list is further extended after the toList operation? Of course, once a list is returned from toList, it must be immutable. However, appending to the last0 element will modify the list which is referred to by start. To maintain the correctness of the list buffer operations, you need to work on a fresh list instead.
This is achieved by the first line in the implementation of the += operation:

override def += (x: T) {
  if (exported) copy()
  if (start.isEmpty) {
    last0 = new scala.::(x, Nil)
    start = last0
  } else {
    val last1 = last0
    last0 = new scala.::(x, Nil)
    last1.tl = last0
  }
}

You see that += copies the list pointed to by start if exported is true. So, in the end, there is no free lunch. If you want to go from lists which can be extended at the end to immutable lists, there needs to be some copying. However, the implementation of ListBuffer is such that copying is necessary only for list buffers that are further extended after they have been turned into lists. This case is quite rare in practice. Most use cases of list buffers add elements incrementally and then do one toList operation at the end. In such cases, no copying is necessary.

The previous section showed key elements of the implementation of Scala's List and ListBuffer classes. You saw that lists are purely functional on the "outside" but have an imperative implementation using list buffers on the "inside." This is a typical strategy in Scala programming: trying to combine purity with efficiency by carefully delimiting the effects of impure operations.

You might ask, why insist on purity? Why not just open up the definition of lists, making the tail field, and maybe also the head field, mutable? The disadvantage of such an approach is that it would make programs much more fragile. Note that constructing lists with :: re-uses the tail of the constructed list. So when you write:

val ys = 1 :: xs
val zs = 2 :: xs

the tails of lists ys and zs are shared; they point to the same data structure. This is essential for efficiency; if the list xs was copied every time you added a new element onto it, this would be much slower. Because sharing is pervasive, changing list elements, if it were possible, would be quite dangerous.
For instance, taking the code above, if you wanted to truncate list ys to its first two elements by writing:

ys.drop(2).tail = Nil // can't do this in Scala!

you would also truncate lists zs and xs as a side effect. Clearly, it would be quite difficult to keep track of what gets changed. That's why Scala opts for pervasive sharing and no mutation for lists. The ListBuffer class still allows you to build up lists imperatively and incrementally, if you wish to. But since list buffers are not lists, the types keep mutable buffers and immutable lists separate.

The design of Scala's List and ListBuffer is quite similar to what's done in Java's pair of classes String and StringBuffer. This is no coincidence. In both situations the designers wanted to maintain a pure immutable data structure but also wanted to provide an efficient way to construct this structure incrementally. For Java and Scala strings, StringBuffers (or, in Java 5, StringBuilders) provide a way to construct a string incrementally. For Scala's lists, you have a choice: You can either construct lists incrementally by adding elements to the beginning of a list using ::, or you can use a list buffer for adding elements to the end. Which one is preferable depends on the situation. Usually, :: lends itself well to recursive algorithms in the divide-and-conquer style. List buffers are often used in a more traditional loop-based style.

In this chapter, you saw how lists are implemented in Scala. List is one of the most heavily used data structures in Scala, and it has a refined implementation. List's two subclasses, Nil and ::, are both case classes. Instead of recursing through this structure, however, many core list methods are implemented using a ListBuffer. ListBuffer, in turn, is carefully implemented so that it can efficiently build lists without allocating extraneous memory.
It is functional on the outside, but uses mutation internally to speed up the common case where a buffer is discarded after toList has been called. After studying all of this, you now know the list classes inside and out, and you might have learned an implementation trick or two.

[1] To be precise, the types would also permit head to always go into an infinite loop instead of throwing an exception, but this is clearly not what's wanted.
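The String/StringBuffer analogy from the summary can be sketched in Java. This is an illustrative sketch of my own (the BuilderAnalogy class and joinWords method are hypothetical names): a mutable StringBuilder plays the role of ListBuffer, accumulating cheaply at the end, and toString() plays the role of toList, freezing the result into an immutable value.

```java
// Analogy to List/ListBuffer: build incrementally with a mutable builder,
// then freeze the result as an immutable String.
public class BuilderAnalogy {

    public static String joinWords(String[] words) {
        StringBuilder buf = new StringBuilder(); // mutable, like ListBuffer
        for (String w : words) {
            if (buf.length() > 0) buf.append(' ');
            buf.append(w);                       // cheap append at the end
        }
        return buf.toString();                   // immutable result, like toList
    }

    public static void main(String[] args) {
        System.out.println(joinWords(new String[] { "hello", "scala", "lists" }));
        // prints: hello scala lists
    }
}
```

As with ListBuffer, repeatedly concatenating immutable Strings in a loop would cost quadratic time, while the builder keeps each append cheap and pays for immutability only once, at the final conversion.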
http://www.artima.com/pins1ed/implementing-lists.html
Hi there,

As a developer using some of PyXML's components with other users who use my software (in a Zope context), I would like to kick off a discussion on how to improve PyXML for the developer and end-user experience from a distribution/packaging point of view. I'm a relative outsider and may get things wrong, which is why discussion is needed.

I get *many* questions from users trying to install PyXML. There is a range of problems:

* People trying to install PyXML for a binary version of Zope on Windows. Binary Zope distributions include their own Python version. This Python version is not found by PyXML's installer. I hack around this by telling people to *unzip the exe file*, and then copy the _xmlplus file manually to the right place in the Zope distribution.

* The most recent version of PyXML doesn't get distributed for Python 2.1 in binary form, while Zope still requires Python 2.1.

* Hacks to get stuff working by linking _xmlplus or xml to other places aggravate matters. Usually the errors result in rather obscure tracebacks that don't make it very clear PyXML is involved.

I think the current setup, with the xml package in the core and _xmlplus in site-packages, contributes to the problems. It is confusing for developers, and it's hard to debug for people trying to install it. The motivation for this setup seems to be the ability to upgrade the Python library's XML support while actually not upgrading Python (or its library). I believe that this ability is not worth that much by itself and that this approach should never have been adopted; we may not have known better then, but I think experience teaches us clearly enough that it's not working.

It would be much better to be explicit here, and distribute PyXML explicitly as a 'pyxml' top-level package and let developers decide on what they want to import.

Sometimes PyXML ships with broken code (I realize this is volunteer work and I know what is needed is a more extensive unit test suite; I'll try to look into it).
If PyXML were a standalone package that didn't try to integrate into the 'xml' top-level package namespace, that wouldn't be as big a problem. Code written against the Python library's xml package would still continue to work. Now, however, code sometimes breaks if you install PyXML. Sometimes this is not even due to PyXML breaking anything, it's because of PyXML *fixing* something, but I still believe code using a Python core library should only break if you upgrade the core library. If the developer had the explicit ability to determine which package gets imported, this wouldn't have happened. Theoretically PyXML makes a backwards compatibility guarantee; in practice this is very hard to manage right. Explicit is better than implicit.

A counterargument could be that since code is planned to eventually move from pyxml to the core library, users will eventually have to modify their code in order to start using the core code. I think that is fine; it's just switching one import, and this happens whenever any library makes it into the core. A more implicit solution that may sometimes be legitimate is to import core code into the pyxml package namespace, though I'd be wary of that too (though it's far less risky than the current situation, where the reverse happens).

So, can we still change this? I propose a new 'pyxml' top-level package for PyXML code. An argument could be made that this is too late in the development cycle because lots of code already depends on it. I think that solving the confusion and pain would be worth it. Perhaps a transition strategy can be devised where importing PyXML code through the 'xml' package will issue a warning so that developers can adjust their code. If Python can change its division operator, though, a 0.x package can certainly shift around its APIs some.

Of course I may be blundering into a non-problem and everybody else is entirely happy with the current situation and thinks the proposed situation would make things much worse.
If so, I'd be curious to find out why you think so.

Regards,

Martijn
http://mail.python.org/pipermail/xml-sig/2003-February/009041.html
Array marshaling

Everything a C programmer needs to know about the Windows built-in directory object picker is here on MSDN. The plan for .NET is straightforward: create the managed structures, then the interfaces, and finally a CommonDialog-derived class which hides all the nasty details.

The first problem is that the .NET marshaler is very limited when it comes to arrays; if you search for it on the web, you're guaranteed to find a lot of despair and few good answers.

The first task was the array pointers in DSOP_INIT_INFO. In structures, the marshaler can only handle an in-place array with a constant size. My first try was implementing an ICustomMarshaler. However, you can't use it from inside a structure definition, only in method parameters, and when you do that, you have to deal with all fields, not just the arrays. So what to do now? Just leave it as an IntPtr and deal with it somewhere else.

And here comes my invention: I created reusable, generic functions which implement array pointer marshaling, and you can use them to create one-line wrapper properties for IntPtr's instead of custom, dirty, and repeated code. The functions are named ArrayToPtr and PtrToArray, and can be found in MarshalUtils.cs. ArrayToPtr first allocates space for the array, then stores the elements, and finally it stores the array pointer and the size somewhere. It also frees the previous contents, so in order to clean up, you just need to set your array to null. Currently it only supports Unicode strings and structures, but it's easy to add new providers.

A bigger problem arises when you want to marshal an in-place array, not a pointer, as in DS_SELECTION_LIST. In native code, you can change the size of a structure any way you wish, but we can't do that in managed code: a structure needs to have a fixed size. So let's say we specify a constant array size of 1. After the marshaler reads the structure from unmanaged memory, it constructs a managed structure, and then what?
We have no way to read the rest of the elements because we don't know where they are; the managed structure is at a different memory address than the unmanaged one. No tricks this time: the only way is to read the entire structure manually, given the unmanaged memory pointer. You can see my results in ObjSel.cs. I created two wrapper classes for the aforementioned structures, _DSOP_INIT_INFO and _DS_SELECTION_LIST. They are pretty simple actually; I think I perfectly managed to make them as clean as possible.

STGMEDIUM nightmares

After I was done with the initialization structures, I needed some more for data retrieval. To my surprise, I found that they were already implemented in the framework, in the System.Runtime.InteropServices.ComTypes namespace. Lucky, I thought. How wrong I was...

There was a mysterious error when I tried to read an STGMEDIUM, but only when I specified additional attributes to fetch. IDataObject.GetData kept returning an 0x80004001 (E_NOTIMPL) error. Took me days to hunt down (in my free time, of course). The problem was that STGMEDIUM.pUnkForRelease was declared as an object, marshaled as an IUnknown. When I didn't fetch any attributes, it was null, but when I did, it was set to something, and it makes sense, because in this case additional stuff needed to be freed, and that's exactly what this field is for. Except the marshaler tried to do something with it that it didn't actually support. I guess that marshaling it as an object works most of the time, but not in this case. I reimplemented the whole thing in ObjIdl.cs, changed that single field to an IntPtr, and it worked.

Next I tried to run my code on a 64-bit machine, and guess what, it didn't work. STGMEDIUM contains a union, and the only way to do that in .NET is with explicit field offsets. This union is the second member of the structure, so I set the offsets to 4. The problem is that on a 64-bit machine, the second field is at offset 8.
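The layout issue is easy to reproduce outside of .NET. Here is a sketch in Python's ctypes (not the original C# code; the field names are illustrative, not the real Windows definitions) showing why a hard-coded offset of 4 breaks on 64-bit, and how factoring the union into its own type lets the runtime compute the right offset:

```python
import ctypes

# Stand-in for the STGMEDIUM union members (illustrative names only).
class MediumUnion(ctypes.Union):
    _fields_ = [("hGlobal", ctypes.c_void_p),
                ("lpszFileName", ctypes.c_void_p)]

class StgMediumLike(ctypes.Structure):
    _fields_ = [("tymed", ctypes.c_uint32),   # 4-byte first member
                ("u", MediumUnion),           # union embedded as its own type
                ("pUnkForRelease", ctypes.c_void_p)]

# The runtime pads the union to pointer alignment: offset 4 on a 32-bit
# build, offset 8 on a 64-bit build -- no hard-coded field offsets needed.
print(StgMediumLike.u.offset)
```

This mirrors the fix described next: instead of pinning the union at a fixed offset, give it its own type and let the layout engine place it.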
The solution was that I implemented the union in a separate structure, with all 0 offsets. And then it just worked! I have to admit, Microsoft did a good job with the 64-bit architecture: a 64-bit app will find the 64-bit implementation of the COM class which was compiled with 64-bit structures, and the same goes for 32 bits, because it will load the COM class from under the Wow6432Node in the registry.

And behold: DirectoryObjectDialog

All this work culminated in DirectoryObjectDialog.cs. It has a very easy-to-use interface. It hides the underlying nasty details, but not completely; you can still do with it whatever you can in native code. It's also integrated with .NET's directory stuff, so you can retrieve the results as Principal or DirectoryEntry objects, and also SIDs. Here's a sample code:

Easy, huh? When you run it, it displays the standard dialog and prints the results to the console window. It's all nicely documented, so there's not much more I want to say about it. Feel free to use it!

Conclusions

First of all, the unmanaged API is ugly. It supports separate up-level and down-level scopes, and also up-level and down-level filters, but there are stupid inconsistencies between them; they don't even share anything at the definition level. For example, in up-level scopes you can select normal and built-in groups separately, but in down-level scopes you can choose between all or all-but-built-in groups. (These differences pretty much decided what my interface looked like, because I had to implement the common denominator in order to hide these levels.) You can specify whether the group, user or computer checkboxes are turned on by default, but not for the well-known principal checkbox. When you add multiple scopes and don't specify a starting scope, the documentation says that the first one will be the default, but during my tests it always defaulted to the computer scope.
The documentation also says you can't choose multiple starting scopes, but it doesn't throw an error either; I found that the last of them will be the default one. So in order to work in a predictable way, you should specify your scopes in ascending order of priority and say that all of them should be the starting one.

Second, the .NET marshaler is stupid. It works in most cases, but when it doesn't, you're facing a very hard road. Like I mentioned above, the biggest difficulties arise with arrays. I also hate that it uses the MarshalAsAttribute for parameter and structure marshaling, too, even though there are certain differences, like some features that can only be used in one context but not the other. It's nothing but a bad design choice. So I put it on my to-do list to create a new, smart marshaler. The two wrapper classes in my framework code are only meant as temporary fixes.

And last but not least, I can see now that my framework is growing pretty nicely. Not only that, but it also looks just beautiful. Even the low-level Windows wrapping stuff, and that's something. I think that very soon I'll start providing binaries so that people can really start using my framework. Come on, people!

Comment: Hi, I have used the code to create an object picker for Active Directory input to my program. I am able to select objects and store them to a database using the distinguished name path. Is there any way of pre-populating the object picker dialog with names? I would appreciate some help. Thanks, Richard.

Reply: You can check the MSDN docs, but I highly doubt it. In all Windows apps I've seen so far, you always use this thing to add new accounts to a list, but not for editing the list itself.

Comment: Hi, I have tried this tool on several PCs, and those with Windows XP seem to crash when clicking on the "advanced" button. It shows the following message: "The advanced page cannot be opened because of following error: No such interface supported". Thanks, Shimon

Comment: Great work on this! I've been looking for ANYONE who has attempted an implementation of the Directory Object Picker. Unfortunately, my abilities are mostly with PowerShell and not the underlying .NET / C# worlds. Might you have a compiled executable of your solution that I could use? Or even better, a Cmdlet implementation of your solution? I would be eternally grateful for the help!

Reply: I'm constantly planning on providing compiled DLLs of my class libraries, and also providing some kind of command-line access to the simpler functions. Providing cmdlets is an awesome idea, thanks, I would really love that too. Sign up for my RSS and you'll see when it's ready.

Comment: Any other license than GPL? I would like to use but not open-source my program.

Reply: What license would you prefer?
http://joco.name/2011/03/27/directory-object-dialog-in-net-and-advanced-com-marshaling/
notorious for changing hardware designs for newer silicon, sometimes
to add features, sometimes to fix bugs. While I understand the
temptation to boil a compatible value down to a nice clean generic
string, doing so only works in a perfect world. In the real world, you
still need to have some information about the specific implementation.
I prefer specifying it to the SoC name, but I've been known to be
convinced to specify it to the ip-block name & version in certain
circumstances, like for IP blocks in an FPGA, or some of the Freescale
powerpc pXXXX SoCs which actually had an IP block swapped out midway
through the life of the chip.

Besides, encoding an soc or ip-block version into the 'generic'
compatible values is not just good practice, it has *zero downside*.
That's the beauty of the compatible property semantics. Any node can
claim compatibility with an existing device. If no existing device fits
correctly, then the node simply does not claim compatibility. Drivers
can bind to any number of compatible strings, so it would be just fine
for the of_match_table list to include both "fsl,imx-21" and
"fsl,imx-51" (assuming that is the appropriate solution in this case).

>> > +- reg : address and length of the register set for the device
>> > +- interrupts : should contain uart interrupt
>> > +- id : should be the port ID defined by soc
>> > +
>> > +Optional properties:
>> > +- fsl,has-rts-cts : indicate it has rts-cts
>> > +- fsl,irda-mode : support irda mode
>> > +
>> > +Example:
>> > +
>> > +uart@73fbc000 {
>> > +	compatible = "fsl,imx51-uart", "fsl,imx-uart";
>> > +	reg = <0x73fbc000 0x4000>;
>> > +	interrupts = <31>;
>> > +	id = <1>;
>> > +	fsl,has-rts-cts;
>> > +};
>> > diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
>> > index a544731..2769353 100644
>> > --- a/drivers/tty/serial/imx.c
>> > +++ b/drivers/tty/serial/imx.c
>> > @@ -45,6 +45,8 @@
>> >  #include <linux/delay.h>
>> >  #include <linux/rational.h>
>> >  #include <linux/slab.h>
>> > +#include <linux/of.h>
>> > +#include <linux/of_address.h>
>> >
>> >  #include <asm/io.h>
>> >  #include <asm/irq.h>
>> > @@ -1223,6 +1225,63 @@ static int serial_imx_resume(struct platform_device *dev)
>> >  	return 0;
>> >  }
>> >
>> > +#ifdef CONFIG_OF
>> > +static int serial_imx_probe_dt(struct imx_port *sport,
>> > +		struct platform_device *pdev)
>> > +{
>> > +	struct device_node *node = pdev->dev.of_node;
>> > +	const __be32 *line;
>> > +
>> > +	if (!node)
>> > +		return -ENODEV;
>> > +
>> > +	line = of_get_property(node, "id", NULL);
>> > +	if (!line)
>> > +		return -ENODEV;
>> > +
>> > +	sport->port.line = be32_to_cpup(line) - 1;
>>
>> Hmmm, I really would like to be rid of this. Instead, if uarts must
>> be enumerated, the driver should look for a /aliases/uart* property
>> that matches the of_node. Doing it that way is already established in
>> the OpenFirmware documentation, and it ensures there are no overlaps
>> in the global namespace.
>
> I just gave one more try to avoid using 'aliases', and you gave a
> 'no' again. Now, I know how hard you are on this. Okay, I start
> thinking about your suggestion seriously :)

Ha ha ha.

>> We do need some infrastructure to make that easier though. Would you
>> have time to help put that together?
>
> Ok, I will give it a try.

Cool. We'll talk next week about it.
http://lkml.org/lkml/2011/6/19/90
StartApp ANE is a native extension for using the services of StartApp, a mobile ad network specialized in monetization and distribution of free mobile applications. Unlike other advertising platforms, StartApp pays you for every download of your own app. The extension is extremely easy to use and set up: it has a single method that must be invoked when the game starts.

Sign up as a developer and use the promo code A6DAV to get a $25 bonus when you reach the first 100 downloads of your app!

Sample

    import com.vitapoly.nativeextensions.StartAppANE;
    ...
    var startApp:StartAppANE = new StartAppANE();
    startApp.init();

Comment: Doesn't work, need help! I face two problems while integrating this ANE with my AIR app:

1. startApp.init() worked and I can see the StartApp license agreement on my device, but neither of the two buttons (Accept, Decline) works, hence the app is stuck.
2. I can see no API other than StartAppANE.init(), and am clueless as to how to show the ads IFF my problem #1 is somehow solved.
https://www.as3gamegears.com/air-native-extension/startapp-ane/
Schemas and the Management Portal

The Management Portal includes tools for viewing and working with both standard and custom schemas. You can find the tools here:

Ensemble > Interoperate > HL7 v2.x > HL7 v2.x Schema Structures

With this page and its tools you can do all of the following:

- View the message types and message structures for each schema (standard and custom) available in the namespace, along with each schema's segment structures, data structures, and code tables.
- Import and export schemas.
- Create, edit, and delete schemas. Note that you will typically only alter custom schemas; standard schemas should be left intact.
https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=THL7_CustomSchemas_3
I'm a newcomer to Cocoa programming, having never got into Objective-C. Now I am trying to learn it with Swift by going through the Aaron Hillegass book "Cocoa Programming for Mac OS X, 4e" and implementing everything there in Swift instead of Obj-C. It has been going okay so far, but I hit a roadblock in Chapter 8 (the RaiseMan application). Here is the Objective-C code from the book:

The header:

    #import <Foundation/Foundation.h>

    @interface Person : NSObject
    {
        NSString *personName;
        float expectedRaise;
    }

    @property (readwrite, copy) NSString *personName;
    @property (readwrite) float expectedRaise;

    @end

The implementation:

    #import "Person.h"

    @implementation Person

    @synthesize personName;
    @synthesize expectedRaise;

    - (id)init
    {
        self = [super init];
        if (self) {
            expectedRaise = 0.05;
            personName = @"New Person";
        }
        return self;
    }

    @end

And here is my Swift code:

    import Cocoa

    class Person: NSObject {
        var personName = String()
        var expectedRaise = Float()
    }

    init() {
        employees = NSMutableArray()
        println("hi")
        super.init()
        // Add your subclass-specific initialization here.
        var p = Person()
        p.personName = "New Person"
        p.expectedRaise = 0.05
    }

Short answer: you need to specify the class namespace (module name) in Interface Builder: RaiseMan.Person

Details and other options:

This is because Swift adds a prefix to the name of every class injected into the Objective-C runtime in order to avoid name collisions. The prefix follows this convention: _TtC$$AppName%%ClassName, where $$ is the length of AppName and %% is the length of ClassName (see this other SO question for more info). So in order for the array controller to be able to instantiate the Person class, you need to provide the mangled name in Interface Builder: _TtC8RaiseMan6Person.

Another option is to provide an explicit Objective-C name for your Swift class by using the @objc(<#name#>) attribute:

    @objc(Person)
    class Person: NSObject {

    }

In that case, you can provide the name specified in your @objc attribute to Interface Builder (e.g. Person).
See the Using Swift with Cocoa and Objective-C guide for more details.
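The mangling rule quoted above is easy to sanity-check. This small helper (hypothetical code in Python, used here only to illustrate the arithmetic of the naming convention) reproduces the documented pattern:

```python
def swift_mangled_name(module: str, cls: str) -> str:
    """Build the _TtC$$AppName%%ClassName form described above, where
    $$ and %% are the character counts of the module and class names."""
    return "_TtC{}{}{}{}".format(len(module), module, len(cls), cls)

# The RaiseMan example from the answer:
print(swift_mangled_name("RaiseMan", "Person"))  # _TtC8RaiseMan6Person
```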
https://codedump.io/share/CC5KqPwnWgf2/1/swift--quotcannot-find-object-class-with-namequot
Apache2::ModXml2 - makes mod_xml2 functionality available to perl modules

    use XML::LibXML;
    use Apache2::ModXml2 qw(:all);

    # The usual filter stuff is omitted
    # ...
    for (my $b = $bb->first; $b; $b = $bb->next($b)) {
        if ($b->type->name eq 'NODE') {
            # This is the most important interface function
            my $node = Apache2::ModXml2::unwrap_node($b);
            # The nodes are not connected but still know their document
            my $doc = $node->ownerDocument;
            if (defined($node)) {
                if ($node->isa('XML::LibXML::Element')) {
                    my $end = Apache2::ModXml2::end_bucket($b);
                    if ($end) {
                        # If it knows the end bucket, it is a start bucket
                        $node->setAttribute('class', 'mod_xml2');
                    }

Apache2::ModXml2 is a wrapper for the mod_xml2 API. It allows you to write filters that modify the outgoing XML/HTML by modifying XML::LibXML nodes.

The apache module mod_xml2 implements the "node" filter. This filter runs the libxml2 parser on the outgoing XML/HTML and wraps the SAX events into a special bucket type. These are called node buckets. Subsequent filters then modify the outgoing data by modifying the node bucket stream. With Apache2::ModXml2 this can be done with perl.

Node buckets hold a libxml node. ModXml2 wraps it into an XML::LibXML::Node that can be used with the set of functions provided by XML::LibXML. Note that in the case of element nodes, start and end bucket hold the same node. The start bucket already knows the end bucket. Even though the start node continues to exist until the end node is reached, modifying it may be pointless if it has already been passed on by the filter: the node may have been sent over the network.

Apache2::ModXml2 also offers XPath callbacks that get called on matches of (very) simple XPath selectors. Unlike the simpler ModXml2 functions, these can do DOM tree manipulation, since the matches get passed in as trees.

    wrap_node($alloc, $node, $r_log);

Returns an APR::Bucket object that has been created by wrapping $node into a mod_xml2 node using the APR::BucketAllocator $alloc.
$r_log is a request object to use for logging.

    unwrap_node($b);

Returns the XML::LibXML::Node held by the APR::Bucket $b given as a parameter.

    end_bucket($b);

Returns the associated end bucket provided $b is a start element bucket, and undef otherwise.

    make_start_bucket($b);

Turns the bucket $b into a start element bucket and returns the thereby created end bucket.

    init_doc($doc, $pool);

This function is needed since wrapping of the document node (e.g. by calling $node->ownerDocument) will delete it when the perl node goes out of scope. So in case the document is used, this needs to be called with the document and a pool to append node deletion as a cleanup.

mod_xml2 implements functions for a filter that builds a DOM subtree each time a streaming xpath expression (named "pattern" by libxml2) matches. The tree is passed to a callback function and decomposed into single nodes again afterwards. The streaming xpath expressions are from a very limited xpath subset as described here:

    xpath_filter_init($f, $xpath, $namespaces, &transform);

To create a streaming xpath filter, this function needs to be called from filter init. The return value is suitable for returning from filter init. Every time $xpath matches, &transform is called with the subtree's root node as a parameter. The namespaces needed to compile the pattern are passed as a list [URI, prefix, ...]. Be aware that these prefixes are just aliases for pattern usage. They do not need to coincide with the prefixes in the document.

    xpath_filter($f, $bb);

This is simply the work horse filter function.

None by default.

The concept for this implementation:

The mod_xml2 apache module:

Joachim Zobel, <jz-2012@heute-morgen.de>

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.10.1 or, at your option, any later version of Perl 5 you may have available.
http://search.cpan.org/~jzobel/Apache2-ModXml2-0.01/lib/Apache2/ModXml2.pm
Sep 07, 2011 03:07 PM | webJose

Ok, so this is the weirdest ever! Visual Studio 2008 Professional, ASP.NET 2.0. I have a UserControl with a single control inside it: a Repeater. This repeater control, called rpMain, does generate the corresponding variable in the designer. The problem appears when I want to access the repeater control from code. There is a property called Status that receives a specific status structure, and from this status structure the data source for the repeater is built (it ends up in a 2-record data source). By the time the Status property is set, rpMain is null! And it continues to be null after calling EnsureChildControls(). What is wrong with this setup? I'll post whatever code is needed. It isn't much anyway. Note that I overrode CreateChildControls() in order to put a breakpoint to ensure it was being called, and yes, it is called, and yes, I call the base implementation of CreateChildControls(). Any input is appreciated!!

Sep 08, 2011 08:06 AM | karang

Hi. If you can post code then it will be easier to assist.

Sep 08, 2011 11:52 AM | webJose

Thanks for stopping by. Between yesterday evening and this morning, it is working. I think it was a mixture of the following:

Sep 08, 2011 09:20 PM | Decker Dong - MSFT

"Between yesterday evening and this morning, it is working." This is strange... Have you done something to that? And have you got some exceptions shown on the screen? Your key codes are also welcomed. Thx again

Sep 09, 2011 11:03 AM | webJose

Hello Decker. My previous post (the one you took my quote from) explains what I did between the working, the not working, and the again back to working. In a nutshell, it appears that web.config took a part when I added an entire namespace to the controls section.
To answer your other question: I used to get an exception, yes, a NullReferenceException (or something like that), because the designer-provided variable rpMain was not being populated with a new instance of the repeater added at design time. But then again, this problem is now gone.

Sep 11, 2011 10:52 PM | Steven Cheng - MSFT

Hi WebJose, glad to see that you've got the previous issue resolved, and thanks for sharing the detailed information and walkthrough with us. For the further viewstate-related control state persistence issue, would you consider starting a new thread with the specific description there? That can help make the thread look more clear, and make it easier for other members to locate your new question at the first post of the new thread.

Sep 12, 2011 11:18 AM | webJose

Hello: Yesterday I was finally able to sort out the view state issue. This link is DA BOMB!!: . This walkthrough explains how controls should be instantiated in CreateChildControls(), and it shows how to create a routine that takes into account whether the controls are being created due to a data bind or due to a postback. I think a sticky thread is probably in order for this link alone, with a bit more explanation on a few key points. I also noted that there's probably an error in the walkthrough: if the control is being reconstructed from view state, CreateControlHierarchy() uses an array of dummy objects of type object, but the foreach() loop uses a variable of type RssItem. I would imagine that this produces a runtime error. In my implementation, my foreach() uses a variable of type object, and it is cast to the appropriate data type only if data binding.
https://forums.asp.net/t/1718306.aspx?UserControl+not+creating+the+children+a+single+repeater+
Manage a Fuseki Triple Store

### Fuseki CLI Tool

A collection of scripts and utility functions to manage and update a Fuseki2 triple store. Requires a default.cfg and credentials for the Google spreadsheet for this to work.

The main update routine loads values from a Google Sheets spreadsheet. This file needs the following columns for each vocabulary which should be loaded into the Fuseki triple store. The following input columns are possible; their order is important:

1. Name of Vocabulary
2. Link to the file (http[s]:// or ftp://)
3. Format of the file (accepts values: TTL, RDF, N3, NT, JSON)
4. Short Name (used for Skosmos config)
5. Fuseki Graph Name (a valid URI)
6. Standard Language (optional; if given, will add language tags to every label)
7. Ready? Fill in with __y__. All other lines will be ignored.
8. Base namespace of the vocabulary.

The script will use the following columns for output on each vocabulary:

9. Number of triples loaded into Fuseki if successful. If this field is empty, the upload was not successful. If this field is 0, then no triples were uploaded (most likely skosify removed all triples from the graph).
10. Type of Error (if something went wrong)
11. Error Message
12. Skosmos vocabularies.ttl entry. Do not use as-is; language tags are not correct.

#### Usage

    pyfuseki default.cfg -all
    pyfuseki default.cfg -s skos
    pyfuseki default.cfg -diff
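A minimal sketch (hypothetical code, not part of pyfusekiutil; the sample rows and URLs are made up) of how the "Ready?" column described above gates which spreadsheet rows get processed:

```python
# Each row mirrors the input columns described above:
# name, url, fmt, short_name, graph, lang, ready, namespace
rows = [
    ["STW", "https://example.org/stw.ttl", "TTL", "stw",
     "https://example.org/graph/stw", "en", "y", "https://example.org/stw/"],
    ["Draft", "https://example.org/draft.rdf", "RDF", "draft",
     "https://example.org/graph/draft", "", "", "https://example.org/draft/"],
]

READY = 6  # zero-based index of the "Ready?" column

def ready_rows(rows):
    """Return only rows explicitly marked with 'y'; all others are ignored."""
    return [r for r in rows if r[READY].strip().lower() == "y"]

print([r[0] for r in ready_rows(rows)])  # ['STW']
```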
https://pypi.org/project/pyfusekiutil/
Silver Stackable Bracelets

These gorgeous silver bracelets with CZ's look great alone & stacked!

Interested in custom pieces? Let's get personal. View our Personalized Collection & customize your piece today!

New Bangles

Find yourself falling in love with these gorgeous bracelets!

14k Gold Baby ID Bracelet - $160.00
14k Large Round Personalized Necklace - $175.00

Jewels & Jewels is your online destination for fabulous jewelry. We offer the latest trends and more. We invite you to browse through our site and get to know Jewels & Jewels. We promise to offer you great service, quality merchandise and products up-to-date with the latest fashion trends. If you are interested in wholesale purchasing, please visit.

Returns & Shipping

Jewels & Jewels has made shipping and returning easy for you. Please read the following:

Shipping
• Upon checkout, you may choose your shipping preference. Please keep in mind, this does not include processing time.
• If your items are in stock, your order will be shipped within 3-5 business days (excluding holidays).
• If your item is on back-order, Jewels & Jewels will advise you upon receipt of your order.
• Orders placed after 12 P.M. (EST) will begin processing the following business day.
• Personalized items: Please allow 1-2 weeks for processing (not including shipping time).
• Ships within the United States only.

Returns
• You may return (most) new, unopened and unworn items within 15 days for a full refund.
• Return shipping cost will be credited to your account if you choose to make an exchange or store credit instead of a refund.
• Tracking number must be provided when returning items.
• All sale items are final sale.
• All personalized items are final sale.
• To make a return, please contact: [email protected]

Contact Info
2 N.E. 1st Street
Miami, Florida 33132
USA
Phone: (800) 989-8435

Social: Twitter, Google+, Facebook, YouTube
https://anyflip.com/fzll/pduk/basic
Hug makes maintaining and packaging web applications easier; it is a grunt plugin. Tell it how to find your files and it will concatenate them in the right order of dependencies. It will also prevent accidental variable leakage between files by wrapping each file in a self-executing anonymous function.

Files declare their dependencies with `require` statements. This tells grunt how your files are related so it can package your files in the right order. These statements won't be in the final packaged file.

Hugging occurs at build-time, so an application which needs to load its dependencies at runtime (lazy loading) won't benefit much from hugging -- you'll likely prefer RequireJS. For frameworks or compiled applications, hugging is awesome. Your code will be clean, safe, and boiler-plate-free. As an added benefit, your unhugged code will work in nodejs applications so long as you're not using any browser-specific functions (for example, alert).

A file exposes values to other files through the `exports` variable. An example: `exports.numbers = [1,2,3];`.

A file pulls in another file's exports with the `require` function. An example: `var fileExports = require("[A RELATIVE PATH]")`.

If you want to assign something as the exports object itself, assign it to `module.exports`. Assigning anything directly to `exports` would replace the object pointer entirely. If you've done nodejs programming before, you should already be used to this; just remember you can't import npm packages like you can in node -- currently, hug only imports from relative file paths.

Say I have two files:

```js
// required.js
exports.message = "Hello, world!";
```

```js
// requiree.js
var otherFile = require("./required.js");
alert(otherFile.message);
```

```js
// grunt.js
module.exports = function(grunt) {
  // hug task configuration goes here
};
```

When we run grunt hug, simple-example.js is generated. When simple-example.js runs in a browser, it will alert "Hello, world!". Note that the window object is never touched; the entire application is within a closure, and it leaves no trace after it's run.
Here is the file itself: // simple-example.js{var __m0={moduleexports=exports;exportsmessage = "Hello, world!";;return moduleexports;}{}{};var __m1={moduleexports=exports;var otherFile = __m0;;;return moduleexports;}{}{};}; Say I have the following file structure: src/language/message/aSayingjscomposerjswordsjsexportsjsspeakjsgruntjs With the following file contents: // src/language/message/aSaying.jsvar theWord = aWord;var composer = ;exportswhatTheySay = ; // src/language/composer.js{return "Hello, " + anything + "!";}; // src/language/words.jsexportsaWord = "world"; // src/exports.jsexportsspeak = ;exportsmessage = whatTheySay; // src/speak.jsvar theMessage = whatTheySay;module{;}; // grunt.js// ...other config stuff...// in the object passed into initConfig:hug:advanced:src: "./src/**/*"dest: "./tmp/advanced-example.js"exportedVariable: "myApi"exports: "./src/exports.js"// ...other config stuff... When grunt is asked to hugs these files, it'll concatinate the files in order of dependencies, wrap each file in an anonymous function, and match up dependencies. Running advanced-example.js will create an object named myApi in the window object with the two variables defined in exports.js as it's members. Running myApi.speak() will alert "Hello, world" in a browser. If you've never used grunt before, you will find it very useful to review the Getting Started documentation. Inside your grunt.js file, in the object you pass to initConfig, add an object named hug. Inside this object, add one or more objects for different hug tasks. String or Array One or more comma separated wildcard patterns as well as an array of wildcard patterns. These files define the source code of your package. Which will be concatinated together. String The path for the generated file. Grunt will automatically generate directories if they don't exit. String or Array Grunt will use the given paths as a base (in addition to the current directory) when resolving dependencies. 
For example, if you set path to ["./components"] and grunt runs into a file with require('jquery/jquery.js'), grunt will search for the file both in the location relative to where it was declared and in ./components/jquery/jquery.js. This works nicely with package managers such as Bower or NPM.

extensions: String or Array. Grunt will use this list of extensions when trying to resolve dependencies for files it can't find. By default the extensions array is ['', '.js'], meaning grunt will first try to find an exact match and then try to find the same filename with the '.js' extension. So in the previous example for path we could have just used require('jquery/jquery') and it would have worked just as well.

String or Array. File(s) which will be prepended to the beginning of the package, making any variable defined in them package-global. Note that these files do not use the exports/require pattern -- what they would normally define as globals becomes limited to the scope of the package, so any file can use their variables without requiring them in. This is particularly useful for utility functions that you do not want to keep requiring, or for libraries which require other libraries in order to work.

exports: String. A path to a file to use as the exports object for the package. Whatever the file exports will be exported by the entire package. In essence, this file should define the external API of your package.

exportedVariable: String (defaults to "exports"). If exports is defined, this will be the variable exported to the environment. Whatever the exports file exports will be assigned to this variable. By setting the value of this to "module.exports" you can create a package that can be imported by other hug or nodejs applications.

moduleVariable: String (defaults to "module"). Setting this parameter will break compatibility with importable nodejs modules. This parameter allows you to overwrite the variable name used to refer to modules internally. This is useful if you already use the module variable for something else.
exportsVariable: String (defaults to "exports"). Setting this parameter will break compatibility with importable nodejs modules. This parameter allows you to overwrite the variable name used to refer to exports internally. This is useful if you already use the exports variable for something else.

requireVariable: String (defaults to "require"). Setting this parameter will break compatibility with importable nodejs modules. This parameter allows you to overwrite the variable name used to refer to the require function. This is useful if you already use the require variable for something else.

Web frameworks and applications that can normally be imported into nodejs applications work as intended. For example, you can just require the underscore library file and use it as you would normally. Libraries which aren't node-friendly can be made importable without modifying the original file by adding an 'adapter' file. Here's an example of how to make jQuery importable:

// jquery-module.js
require("./jquery.js");
module.exports = window.jQuery;
jQuery.noConflict();

First we require the actual library so that it gets included in our package and so that it's evaluated before this file. Then we export the reference to jQuery. Finally, we call jQuery.noConflict(), which returns the jQuery window variable to its previous owner (but our exports still points to the one we want). Now we can import jquery in other files like so: var $ = require('./jquery-module.js');.

Closure (Soy) templates work beautifully with grunt-hug. Just set your file's namespace to "module.exports" and you're done! You can now require the generated soy file and it will provide an object with the templates in it.
https://www.npmjs.com/package/grunt-hug
Walkthrough: Matrix Multiplication

This step-by-step walkthrough demonstrates how to use C++ AMP to accelerate the execution of matrix multiplication. Two algorithms are presented, one without tiling and one with tiling.

Before you start:

Read C++ AMP Overview.
Read Using Tiles.
Make sure that Windows 7, Windows 8, Windows Server 2008 R2, or Windows Server 2012 is installed on your computer.

To create the project

On the menu bar in Visual Studio, choose File, New, Project. Under Installed in the templates pane, select Visual C++. Select Empty Project, enter MatrixMultiply in the Name box, and then choose the OK button. Choose the Next button.

In Solution Explorer, open the shortcut menu for Source Files, and then choose Add, New Item.

In the Add New Item dialog box, select C++ File (.cpp), enter MatrixMultiply.cpp in the Name box, and then choose the Add button.

To multiply without using C++ AMP

Open MatrixMultiply.cpp and use the following code to replace the existing code.

#include <iostream>

void MultiplyWithOutAMP() {
    int aMatrix[3][2] = {{1, 4}, {2, 5}, {3, 6}};
    int bMatrix[2][3] = {{7, 8, 9}, {10, 11, 12}};
    int product[3][3] = {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}};

    for (int row = 0; row < 3; row++) {
        for (int col = 0; col < 3; col++) {
            // Multiply the row of A by the column of B to get the row, column of product.
            for (int inner = 0; inner < 2; inner++) {
                product[row][col] += aMatrix[row][inner] * bMatrix[inner][col];
            }
            std::cout << product[row][col] << "  ";
        }
        std::cout << "\n";
    }
}

void main() {
    MultiplyWithOutAMP();
    getchar();
}

The algorithm is a straightforward implementation of the definition of matrix multiplication. It does not use any parallel or threaded algorithms to reduce the computation time.

On the menu bar, choose File, Save All. Choose the F5 keyboard shortcut to start debugging and verify that the output is correct. Choose Enter to exit the application.
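Before running, it helps to know what output to verify against. The expected 3x3 product can be computed independently; here is a quick cross-check (JavaScript used purely as a calculator, not part of the walkthrough):

```javascript
// Same inputs as the walkthrough: aMatrix is 3x2, bMatrix is 2x3.
const A = [[1, 4], [2, 5], [3, 6]];
const B = [[7, 8, 9], [10, 11, 12]];

// product[row][col] = sum over `inner` of A[row][inner] * B[inner][col]
const product = A.map((rowVals) =>
  B[0].map((_, col) =>
    rowVals.reduce((sum, v, inner) => sum + v * B[inner][col], 0)
  )
);

console.log(product.map((r) => r.join(" ")).join("\n"));
// 47 52 57
// 64 71 78
// 81 90 99
```

If the program prints anything else, the inner loop bounds are the first place to look.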
To multiply by using C++ AMP

In MatrixMultiply.cpp, add the following code before the main method.

void MultiplyWithAMP() {
    int aMatrix[] = { 1, 4, 2, 5, 3, 6 };
    int bMatrix[] = { 7, 8, 9, 10, 11, 12 };
    int productMatrix[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0 };

    array_view<int, 2> a(3, 2, aMatrix);
    array_view<int, 2> b(2, 3, bMatrix);
    array_view<int, 2> product(3, 3, productMatrix);

    parallel_for_each(
        product.extent,
        [=](index<2> idx) restrict(amp) {
            int row = idx[0];
            int col = idx[1];
            for (int inner = 0; inner < 2; inner++) {
                product[idx] += a(row, inner) * b(inner, col);
            }
        }
    );

    product.synchronize();

    for (int row = 0; row < 3; row++) {
        for (int col = 0; col < 3; col++) {
            //std::cout << productMatrix[row*3 + col] << " ";
            std::cout << product(row, col) << "  ";
        }
        std::cout << "\n";
    }
}

The AMP code resembles the non-AMP code. The call to parallel_for_each starts one thread for each element in product.extent, and replaces the for loops for row and column. The value of the cell at the row and column is available in idx. You can access the elements of an array_view object by using either the [] operator and an index variable, or the () operator and the row and column variables. The example demonstrates both methods. The array_view::synchronize method copies the values of the product variable back to the productMatrix variable.

Add the following include and using statements at the top of MatrixMultiply.cpp:

#include <amp.h>
using namespace concurrency;

Modify the main method to call the MultiplyWithAMP method. Choose the Ctrl+F5 keyboard shortcut to start debugging and verify that the output is correct. Choose the spacebar to exit the application.

Tiling is a technique in which you partition data into equal-sized subsets, which are known as tiles. Three things change when you use tiling.

You can create tile_static variables. Access to data in tile_static space can be many times faster than access to data in the global space.
An instance of a tile_static variable is created for each tile, and all threads in the tile have access to the variable. The primary benefit of tiling is the performance gain due to tile_static access.

You can call the tile_barrier::wait method to stop all of the threads in one tile at a specified line of code. You cannot guarantee the order that the threads will run in, only that all of the threads in one tile will stop at the call to tile_barrier::wait before they continue execution.

You have access to the index of the thread relative to the entire array_view object and the index relative to the tile. By using the local index, you can make your code easier to read and debug.

To take advantage of tiling in matrix multiplication, the algorithm must partition the matrix into tiles and then copy the tile data into tile_static variables for faster access. In this example, the matrix is partitioned into submatrices of equal size. The product is found by multiplying the submatrices. The two matrices and their product in this example are:

A = B = | 1 2 3 4 |
        | 5 6 7 8 |
        | 1 2 3 4 |
        | 5 6 7 8 |

The matrices are partitioned into four 2x2 matrices, which are defined as follows:

A = | a b |    B = | e f |
    | c d |        | g h |

The product of A and B can now be written and calculated as follows:

A*B = | ae + bg   af + bh |
      | ce + dg   cf + dh |

Because matrices a through h are 2x2 matrices, all of the products and sums of them are also 2x2 matrices. It also follows that A*B is a 4x4 matrix, as expected. To quickly check the algorithm, calculate the value of the element in the first row, first column in the product. In the example, that would be the value of the element in the first row and first column of ae + bg. You only have to calculate the first column, first row of ae and bg for each term. That value for ae is 1*1 + 2*5 = 11. The value for bg is 3*1 + 4*5 = 23. The final value is 11 + 23 = 34, which is correct.

To implement this algorithm, the code:

Uses a tiled_extent object instead of an extent object in the parallel_for_each call.

Uses a tiled_index object instead of an index object in the parallel_for_each call.
Creates tile_static variables to hold the submatrices.

Uses the tile_barrier::wait method to stop the threads for the calculation of the products of the submatrices.

To multiply by using AMP and tiling

In MatrixMultiply.cpp, add the following code before the main method.

void MultiplyWithTiling() {
    // The tile size is 2.
    static const int TS = 2;

    // The raw data.
    int aMatrix[] = { 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8 };
    int bMatrix[] = { 1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8 };
    int productMatrix[] = { 0, 0, 0, 0, 0, 0, 0, 0,
                            0, 0, 0, 0, 0, 0, 0, 0 };

    // Create the array_view objects.
    array_view<int, 2> a(4, 4, aMatrix);
    array_view<int, 2> b(4, 4, bMatrix);
    array_view<int, 2> product(4, 4, productMatrix);

    // Call parallel_for_each by using 2x2 tiles.
    parallel_for_each(product.extent.tile<TS, TS>(),
        [=](tiled_index<TS, TS> t_idx) restrict(amp)
        {
            // Get the location of the thread relative to the tile (row, col)
            // and the entire array_view (rowGlobal, colGlobal).
            int row = t_idx.local[0];
            int col = t_idx.local[1];
            int rowGlobal = t_idx.global[0];
            int colGlobal = t_idx.global[1];
            int sum = 0;

            // Given a 4x4 matrix and a 2x2 tile size, this loop executes twice for each thread.
            // For the first tile and the first loop, it copies a into locA and e into locB.
            // For the first tile and the second loop, it copies b into locA and g into locB.
            for (int i = 0; i < 4; i += TS) {
                tile_static int locA[TS][TS];
                tile_static int locB[TS][TS];
                locA[row][col] = a(rowGlobal, col + i);
                locB[row][col] = b(row + i, colGlobal);

                // The threads in the tile all wait here until locA and locB are filled.
                t_idx.barrier.wait();

                // Compute the product for the thread. The sum is retained across
                // both iterations of the loop, in effect adding the two products
                // together, for example, a*e.
                for (int k = 0; k < TS; k++) {
                    sum += locA[row][k] * locB[k][col];
                }

                // All threads must wait until the sums are calculated. If any threads
                // moved ahead, the values in locA and locB would change.
                t_idx.barrier.wait();

                // Now go on to the next iteration of the loop.
            }

            // After both iterations of the loop, copy the sum to the product variable by using the global location.
            product[t_idx.global] = sum;
        });

    // Copy the contents of product back to the productMatrix variable.
    product.synchronize();

    for (int row = 0; row < 4; row++) {
        for (int col = 0; col < 4; col++) {
            // The results are available from both the product and productMatrix variables.
            //std::cout << productMatrix[row*4 + col] << " ";
            std::cout << product(row, col) << "  ";
        }
        std::cout << "\n";
    }
}

This example is significantly different from the example without tiling. The code uses these conceptual steps:

Copy the elements of tile[0,0] of a into locA. Copy the elements of tile[0,0] of b into locB. Notice that product is tiled, not a and b. Therefore, you use global indices to access a, b, and product. The call to tile_barrier::wait is essential. It stops all of the threads in the tile until both locA and locB are filled.

Multiply locA and locB and put the results in product.

Copy the elements of tile[0,1] of a into locA. Copy the elements of tile[1,0] of b into locB.

Multiply locA and locB and add them to the results that are already in product.

The multiplication of tile[0,0] is complete. Repeat for the other four tiles. There is no indexing specifically for the tiles, and the threads can execute in any order. As each thread executes, the tile_static variables are created for each tile appropriately and the call to tile_barrier::wait controls the program flow.

As you examine the algorithm closely, notice that each submatrix is loaded into tile_static memory twice. That data transfer does take time. However, once the data is in tile_static memory, access to the data is much faster. Because calculating the products requires repeated access to the values in the submatrices, there is an overall performance gain.
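The earlier sanity check (the first row, first column of the product should be (a*e)(0,0) + (b*g)(0,0) = 34) can be confirmed numerically. A short cross-check of the block decomposition in JavaScript, using the same 4x4 input values as the code above:

```javascript
// The 4x4 inputs from the tiling example above.
const A = [[1, 2, 3, 4], [5, 6, 7, 8], [1, 2, 3, 4], [5, 6, 7, 8]];
const B = [[1, 2, 3, 4], [5, 6, 7, 8], [1, 2, 3, 4], [5, 6, 7, 8]];

// Full product, entry (0,0): dot product of row 0 of A and column 0 of B.
let full = 0;
for (let k = 0; k < 4; k++) full += A[0][k] * B[k][0];

// Block decomposition: a and b are the top-left and top-right 2x2 tiles of A;
// e and g are the top-left and bottom-left 2x2 tiles of B.
const ae = A[0][0] * B[0][0] + A[0][1] * B[1][0]; // (a*e)(0,0) = 1*1 + 2*5 = 11
const bg = A[0][2] * B[2][0] + A[0][3] * B[3][0]; // (b*g)(0,0) = 3*1 + 4*5 = 23

console.log(full, ae + bg); // 34 34
```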
For each algorithm, experimentation is required to find the optimal algorithm and tile size. In the non-AMP and non-tiled examples, each element of A and B is accessed four times from the global memory to calculate the product. In the tiled example, each element is accessed twice from the global memory and four times from the tile_static memory. That is not a significant performance gain. However, if A and B were 1024x1024 matrices and the tile size were 16, there would be a significant performance gain. In that case, each element would be copied into tile_static memory only 16 times and accessed from tile_static memory 1024 times.

Modify the main method to call the MultiplyWithTiling method, as shown. Choose the Ctrl+F5 keyboard shortcut to start debugging and verify that the output is correct. Choose the space bar to exit the application.
http://msdn.microsoft.com/en-us/library/hh873134(v=vs.110)
Cypress allows full-stack end-to-end testing with ease. The API is well documented and the core concepts are totally straightforward. Still, I've found myself having to deal with some issues and strategies that are documented here and there. This post is a summary of the most common ones.

Capture fetch requests

One of the trade-offs of Cypress is that it only captures XHRs (not fetch). If you have a fetch request in your application (I am sure you do), no request would get captured without the workaround below.

cy.visit('/some-url-that-fetches-data', {
  onBeforeLoad: (win) => {
    win.fetch = null;
  }
});

Ship a fetch polyfill

The workaround code above removes fetch from the window object in the global context of the tested application. Since you still have fetch requests present in your application code, you have to ship a polyfill in your application bundle that will detect that fetch is missing in window (because you've enforced it in your Cypress test) and will mutate the global context (window) with a fetch that uses an XHR call underneath:

if (!window.fetch) {
  window.fetch = functionThatUsesXHR;
  // ...
}

whatwg-fetch does exactly that.

React example

// index.js
require('whatwg-fetch');

Unless you'd want to run your tests against production, you wouldn't want to ship that polyfill in your production bundle. You can condition that using an environment variable.

// index.js
if (process.env.REACT_APP_ENV === 'e2e') {
  require('whatwg-fetch');
}

And have a specific build that sets the variable for E2E:

"scripts": {
  "build:e2e": "REACT_APP_ENV=e2e react-app build",
}

Overwrite cy.visit()

To avoid adding the workaround to all your page visits, you can overwrite the visit command.
Cypress.Commands.overwrite('visit', (visit, url) => {
  return visit(url, {
    onBeforeLoad: (win) => {
      win.fetch = null;
    }
  });
});

Decide your testing strategy

When your application interacts with a backend, you have to choose one of these two strategies to deal with the outgoing requests:

- Interact with the real backend
- Intercept requests to your real backend and return mock responses

The latter is faster, it does not need the backend to be seeded with data, and it gives you full control of requests and responses. However, this is not real full end-to-end testing, and you're never sure if the mocks match the real backend responses.

If you go with interacting with the real backend, you can use Cypress tasks to seed/clean your backend data before every test. The general rule is that the result of a test should not depend on the test's position in the test suite.

Authenticate using a 3rd-party API (cross-origin domain)

If you use a third-party authentication provider (Google, Auth0...) to authenticate your users, then you will probably deal with the cross-origin domain issue. Basically, Cypress (because of a browser security spec) does not let you control a different origin from the one you initially cy.visit()ed.

You just can't fill Google authentication form inputs with credentials, and you would agree that it makes no sense to do so, because you have no idea how the provider's HTML is structured and how it could change over time, making your tests eventually fail.

The only solution you have is to use the provider's authentication API to authenticate your user, by making an async request to your auth provider (with a server API key, for example) to authenticate a specific user. You can create a specific user for that with different roles depending on what you are willing to test.
Now if your application is using a private company SSO that does not offer an authentication API, I am very sad for you :(. You'd have to either:

- Disable the browser security flag so that Cypress can handle redirects.
- Authenticate the user using Puppeteer, for example, as a task, and run the test using the token received by Puppeteer. Something like:

on("task", {
  authenticate: (args) => {
    return (async () => {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();
      await page.goto('https://<your site with login>');
      // Authenticate user here
      // ...
      const cookies = await page.cookies();
      const cookie = cookies.find(o => o.name === 'AUTH_COOKIE_FROM_PROVIDER');
      await browser.close();
      return cookie;
    })();
  }
});

describe('Your test suite', () => {
  beforeEach(() => {
    cy.task("authenticate").then(cookie => {
      cy.setCookie(cookie.name, cookie.value, { domain: '<your domain>' });
    });
  });
});

Run your tests in CI

The whole thing about E2E tests is that they need to run on CI (Travis, GitLab CI, CircleCI...). Deploying the develop branch to a staging environment and then running your tests against that environment is wrong because, well, you've already deployed! QA guys might do their tests on a broken environment, unless it is a temporary environment dedicated to Cypress automated tests.

The pipeline workflow should be the following:

- Build the application (ship the fetch polyfill)
- Serve the application inside the CI (using serve, http-server...)
- Run your tests against the served application (CYPRESS_baseUrl or cypress.json)

"scripts": {
  "build:e2e": "REACT_APP_ENV=e2e react-app build",
  "e2e": "CYPRESS_baseUrl= start-server-and-test 'serve -s build' cypress:run"
}

start-server-and-test used above is a tool that executes the command serve -s build, waits until it is ready, and then runs cypress:run, which runs the Cypress tests against the URL set in the CYPRESS_baseUrl environment variable.
An (incomplete) sample of CircleCI pipeline configuration:

build:
  docker:
    - image: cypress/base:10
  working_directory: ~/repo
  steps:
    - checkout
    - run: yarn install
    - run: yarn lint
    - run: yarn test
    - run: yarn build:e2e
    - run: yarn e2e

Use a robust selector

Avoid using classes or ids when targeting elements in a page, because they're coupled to CSS selectors, and these are traditionally used for styling and are subject to be changed at any time.

<Button label="Submit" data-cy="login-submit" />
// or
<Button label="Submit" data-testid="login-submit" />

test(() => {
  cy.get('[data-cy="login-submit"]')
    .should('have.text', 'Submit')
})

The choice of the selector name has to be thought of from a user perspective rather than a styling/structuring perspective.

With a custom command:

Cypress.Commands.overwrite('get', (get, selector) =>
  selector.startsWith('d:')
    ? get(`[data-testid="${selector.substr(2)}"]`)
    : get(selector)
);

we can use a simpler syntax:

cy.get('d:submit-button').click();

Use Cypress Testing Library

If you want to stay consistent with the way you query elements in your unit tests, you can use Cypress Testing Library:

cy.queryByLabelText('Submit').should('exist')

Cypress chain of commands

Cypress manages a Promise chain on our behalf. Cypress commands are asynchronous and get queued for execution at a later time. Cypress commands don't do anything at the moment they are invoked; rather, they enqueue themselves to be run later.

it('submits the form when.....', function() {
  cy.visit('/')                 // Nothing happens, command is chained
  cy.get('[data-cy="submit"]')  // Nothing happens, command is chained
    .click()                    // Nothing happens, command is chained
})

When the test function has finished executing, all commands have been queued and Cypress will run them in order. Because of that, you cannot assign a command call to a variable.

// BAD: this won't work.
it('submits the form when.....', function() {
  const button = cy.get('[data-cy="submit"]')
  button.click();
})

If you don't want to repeat yourself, use aliases.
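The queuing behavior described above can be illustrated with a toy model (plain JavaScript, not the real Cypress internals): invoking a command only enqueues work, and nothing runs until the queue is drained.

```javascript
// Minimal model of Cypress's command queue. (Illustrative only.)
const queue = [];
const log = [];

const cy = {
  get(selector) {
    queue.push(() => log.push(`get ${selector}`));
    return {
      click() {
        queue.push(() => log.push("click"));
        return this;
      },
    };
  },
};

// Building the chain enqueues commands but executes none of them.
cy.get('[data-cy="submit"]').click();
console.log(log.length); // 0 -- nothing has executed yet

// Cypress drains the queue after the test function returns.
queue.forEach((fn) => fn());
console.log(log.join(" | "));
```

This is also why assigning the return value of cy.get to a variable is misleading: the variable holds a chaining handle, not an element.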
That's all I've got!

Discussion

Are you still using Cypress today?

Yes, Cypress is my default tool for e2e testing.
https://dev.to/chihab/e2e-tests-with-cypress-beyond-the-basics-31k2
Continuing the discussion from Supporting Multiple SQS Queues for the same message type across different instances:

I have the same requirement of supporting queues with the same message type for different services, as discussed in the above question. The answer with regards to QueuePrefix worked perfectly for SQS; I'm trying to work out how to get this working for Redis.

If I set the QueuePrefix to "instance-service" with Redis, I see a pubsub channel created called "instance-service-mq:topic:in".

If I publish a message using this approach:

mqClient.Publish(ob1)

I receive the object and can process it. However, if I need the object to go to service2, I have tried the following:

mqClient.Publish("instance-service2-mq:topic", clientMsg);
mqClient.Publish("instance-service2-mq:", clientMsg);
mqClient.Publish("instance-service2-mq", clientMsg);
mqClient.Publish("instance-service2", clientMsg);

none of which appear to reach the destination. What am I missing here, and what is the correct way to publish/consume to a queue in the format "instance-service-messagename"?

Best Regards.

What's your Queue Names configuration? The Queue Names are static, so they're only configurable to use a single prefix. Are you trying to send them to 2 different prefixes within the same App?

The QueuePrefix is only set to one value, and each service only reads from one set of queues (each queue belongs to only one service). However, I need a service to post both to its own queue and to the queue of another service, so I need to be able to post to two different queue prefixes.
I have the following services in instance "Dev":

[ Service A ] <-> [ Service B ] <-> [ Service C ]

Service A has the following queue interactions:
- Posts and reads to Dev-A-Inbound
- Posts to Dev-B-Processing
- Reads from Dev-A-Outbound

Service B has the following queue interactions:
- Reads from Dev-B-Processing
- Posts to Dev-A-Outbound
- Posts to Dev-C-Outbound

Service C has the following queue interactions:
- Posts and reads from Dev-C-Inbound
- Reads from Dev-C-Outbound
- Posts to Dev-B-Processing

With SQS this works, as I could set QueuePrefix as follows for the purpose of reading:

A: "dev-a-"
B: "dev-b-"
C: "dev-c-"

and then when publishing a message I could use

mqClient.SendMessage(qUrl, json);

where qUrl uses the format

<add key="QueueUrlPattern" value="[INSTANCE]-[SERVICE]-mq-[TYPE]-inq" />

and I fill the parameters accordingly depending on the destination. Without the ability to set a destination queue in this way, it severely limits scalability options and/or I'd have to have a different message model per service, and that really goes against how the architecture design is intended.

Think I found the solution; I can do the following:

mqClient.Publish("instance-service-mq:TypeName.inq", clientMsg);

This achieves the same approach I had with SQS, I believe.

I'm unable to repro this issue. I've added a new test that uses a QueueName prefix in MqServerQueueNameTests.cs, and both Redis and RabbitMQ have the same behavior of using the prefix in each of the Topic Queue Names as well as the Request/Response Queue Names. Can you provide a stand-alone integration test that we can run that shows where the Prefix isn't being used?

Hi mythz, sorry about the brevity of my response above. I don't think there is a bug here; I think the issue was that if I wanted to post to a separate channel (e.g. post to the queue for Service B from Service A), I had to use the Publish method with the correct queue path as above.
I didn't have this formatted quite right, but I managed to work it out by using the ToQueueNameIn provided by the producer class. So I think all is good here, provided this is the correct way to post to another service's channel?

Best Regards

Yeah, that's how to override the default Queue Name.

I have got this to work in the following configuration with two services, Service A and Service B:

- Service A puts X in the dev-A-X queue
- Service A reads X from the dev-A-X queue
- Service A puts Y in the dev-B-Y queue
- Service B reads Y from the dev-B-Y queue

The problem I have is that the consumer thread in Service B appears to die: it runs once, processes one message, returns, and is never seen again. I should mention that this is using multiple test harness host applications and I'm yet to test it fully with the individual Windows services, so I was wondering if something screwy was happening with Visual Studio somehow.

I do not appear to be having any exceptions. I am returning null from the method, and messages appear to be flowing correctly between the queues from the Redis perspective, with the message ending up in the out queue.

Initially I thought it might have been a race condition with the QueueNames prefix somehow (resulting in the message ending up in the wrong queue), but I traced everything through and it is all correct. It is just that the Service B consumer thread never fires a second time. If I restart the Service B host, it picks up the second message, processes it again, and then fails to deliver a third.

Any thoughts on this or how I may debug it? I was going to try to send X to B just to see if that object sent fine, as it always appears to work with no problem with Service A.
RedisMqServer The name of this topic is QueueNames.TopicIn which is used for all messages. QueueNames.TopicIn You can manually notify the topic of a different prefix with something like: var mqClient = (RedisMessageQueueClient)mqServer.CreateMessageQueueClient(); mqClient.Publish(request); var queueName = MessageFactory.Create(request).ToInQueueName(); mqClient.ReadWriteClient.Publish(QueueNames.TopicIn, queueName.ToUtf8Bytes()); But using internal impl details like this is ugly, ideally you should be avoiding sending messages in different namespaces entirely. Can you just not send a different Request DTO to publish a Message to Service B? var requestB = request.ConvertTo<RequestB>(); mqClient.Publish(requestB); Thanks very much for the information. The problem I have is with this structure: A <-> B <-> C1 <-> C2 <-> C3 <-> C4 <-> C5 Where B will send to 0 or more C services. Yes I can have a Cx request DTO, but it removes the genericity of the code and means that I can't run with an adapter pattern. With SQS I was able to achieve this genericity as the namespace prefix used was data driven so I just sent a single request DTO type to 0 or more queues Best Regards, For now I'll go with inheriting the DTO model as you have suggested, its not quite what I wanted but I'll compromise for now. Cheers. Another alternative is to use RabbitMq which as it's a push-based purpose-built MQ Server doesn't require subscribing to a Pub/Sub topic like RedisMq does. Thanks for the information. Just a question on the Pub/Sub handling for Redis. With SQS, if i have two services subscribe to the same queue, the message will be delivered only once (to one of the services which has implemented the registerhandler) Is this the same with RedisMQ implementation, or will the Pub/Sub broadcast ? All MQ Servers have the same behavior where messages are published to an "In Queue" and only delivered/processed by a single worker. 
The Pub/Sub topic is an implementation detail to notify the RedisMqServer that they have pending messages. The Pub/Sub topic isn't sent the message itself, just a notification that they have messages in their Message .inq.

Great, thanks, that's perfect.
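The delivery model described above (message body on an in-queue, only the queue name published to the shared topic, a single worker popping each message) can be sketched as a toy simulation. Everything here is illustrative plain JavaScript, not ServiceStack code:

```javascript
// Toy model of Redis MQ delivery: the message body goes on an "in queue"
// (a list), and only the queue *name* is published on the shared topic.
const queues = { "mq:RequestB.inq": [] };
const topicSubscribers = [];

function publish(queueName, message) {
  queues[queueName].push(message);                 // enqueue the message body
  topicSubscribers.forEach((fn) => fn(queueName)); // notify: "queue X has work"
}

const processed = [];
topicSubscribers.push((queueName) => {
  // A single worker pops the actual message from the named queue;
  // the notification itself carries no payload.
  const msg = queues[queueName].shift();
  if (msg !== undefined) processed.push(msg);
});

publish("mq:RequestB.inq", "hello");
console.log(processed.join(","));              // "hello"
console.log(queues["mq:RequestB.inq"].length); // 0
```

This is why a message is processed once even with several listeners: only one of them wins the pop from the in-queue.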
https://forums.servicestack.net/t/correct-handling-for-multiple-redis-queues/5136
Automating the Capture of Web Timings with Selenium 2

Wed 04 Aug 2010

How long a page takes to load is becoming one of the most important metrics that we as software developers or testers can measure, because milliseconds can mean the difference between a sale being paid for or abandoned. A couple of years ago Amazon did research that showed that for every 100ms slower the page was in loading, they lost 1% of sales. The downside to all of this is that we, as developers, testers and even ops, can only control the speed of things within our code and within our datacenters. With this in mind we should be trying to record all of the information that we can from within the browser.

Last year at GTAC 2009 (presentation video) David Henderson and I discussed how we used YSlow and Selenium to record page metrics and load times of the web application. We made the total primed page size decrease by 85% with the work we did. The unfortunate thing about the way we did this was that we had to hack YSlow because it didn't give us all the information that we wanted. We also needed to hack Selenium because it blocked a few of the headers to do with element expiry. This is not really ideal, but with the new Web Timings API we are starting to be able to do all of this with just a bit of JavaScript.

To access this we need to get hold of the performance object and then get it to return a dictionary of all the timing data. The values that are returned are milliseconds from 1/1/1970. They give you all sorts of useful information, like how long it takes to unload the page you're on before navigating to the new page, and how long that page takes to load. It also tells you how long domain lookups take and how long redirects take. These are all neat tidbits of information when performance-tuning your web application.

So below is a C# extension method that I have created that extends the WebDriver object to collect the information for you.
Not all browsers support this at the moment but as web timings becomes the standard the code will start returning data. Current browsers that support this are Chrome 6 and IE9 pre3. A patch has been submitted to Mozilla for Firefox 4 so hopefully this will be applied soon. using System; using System.Collections.Generic; using System.Collections.ObjectModel; using OpenQA.Selenium; namespace AutomatedTester.PagePerf { public static class Extensions { public static Dictionary<string,object> WebTimings(this IWebDriver driver) { var webTiming = (Dictionary<string, object>)((IJavaScriptExecutor)driver) .ExecuteScript(@"var performance = window.performance || window.webkitPerformance || window.mozPerformance || window.msPerformance || {}; var timings = performance.timing || {}; return timings;"); /* The dictionary returned will contain something like the following. * The values are in milliseconds since 1/1/1970 * * connectEnd: 1280867925716 * connectStart: 1280867925687 * domainLookupEnd: 1280867925687 * domainLookupStart: 1280867925687 * fetchStart: 1280867925685 * legacyNavigationStart: 1280867926028 * loadEventEnd: 1280867926262 * loadEventStart: 1280867926155 * navigationStart: 1280867925685 * redirectEnd: 0 * redirectStart: 0 * requestEnd: 1280867925716 * requestStart: 1280867925716 * responseEnd: 1280867925940 * responseStart: 1280867925919 * unloadEventEnd: 1280867925940 */ return webTiming; } } }
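The raw timestamps in that dictionary are mostly useful as differences. Here is a small sketch computing some common durations from the sample values in the comment above (plain JavaScript, independent of Selenium; the derived metric names are my own):

```javascript
// Sample Web Timings values, taken from the comment block above
// (milliseconds since 1/1/1970).
const t = {
  navigationStart: 1280867925685,
  domainLookupStart: 1280867925687,
  domainLookupEnd: 1280867925687,
  connectStart: 1280867925687,
  connectEnd: 1280867925716,
  responseStart: 1280867925919,
  responseEnd: 1280867925940,
  loadEventEnd: 1280867926262,
};

const dns = t.domainLookupEnd - t.domainLookupStart;  // 0 ms (cached lookup)
const connect = t.connectEnd - t.connectStart;        // 29 ms
const ttfb = t.responseStart - t.navigationStart;     // 234 ms to first byte
const pageLoad = t.loadEventEnd - t.navigationStart;  // 577 ms total

console.log(dns, connect, ttfb, pageLoad);
```

These are the kinds of numbers worth recording per build, so regressions in DNS, connection, or total load time show up as trends rather than surprises.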
http://www.theautomatedtester.co.uk/blog/2010/selenium-webtimings-api.html
in reply to Re^2: Caching Format in thread Caching Format

Hello RichardK, friends,

Sorry for answering via a shotgun message. First, again thanks for your time and help. After I posted my first reply I began playing around with and fell in love with it. It was simple and handled the metadata caching format nicely. Collection entries could be pushed/popped as hash key=>value pairs. It also handled file locking and provided many methods to do all of the things I needed to do. Unfortunately I found out later from my boss that not only are Dbases not allowed, but any Perl module that is not a Perl5 core module cannot be used either. Mulligan!

Regarding the heap vs. files debate: I learned that the required level of persistence is actually quite high, certainly high enough to warrant the use of a Dbase if that were an option. Essentially collections will be kept indefinitely. That is the reason I chose to use files. I also found out for certain that I could not modify file names. As of now I plan on creating a pseudo-namespace for each collection by throwing collection metadata and files into unique directories.

Cheers, Hok

P.S. I used a lot of buzzwords and somehow left out "Cloud" so there I
http://www.perlmonks.org/?node_id=947848
It's always DNS: Episode #3416

As part of my work at Hasgeek, I also work on and maintain Hasjob, a job board for startups. To monitor any downtime, we use Uptime Robot. For the last few days, Uptime Robot had been sending out alerts now and then that Hasjob was going down for 10-15 minutes at a time. But when I visited the home page, it was working, even while Uptime Robot was reporting it to be down. So I thought it might be some issue that was affecting some requests and not others. I looked into the logs but couldn't find anything meaningful, except one expensive query that we run, which is supposed to be async. We pushed a fix for it yesterday morning, which was supposed to make things better. But suddenly we started getting complaints that the job posts were not loading at all.

I go and check. The home page was loading but none of the job posts were. It was 9:43PM on a Friday night, but we had to fix it. So I started digging in. The first observation from the uwsgi log was that every request to a jobpost page was taking 30+ seconds to load. Every single one of them. Some were taking 60-90 seconds. So we now knew that Nginx was giving up on waiting for uwsgi because uwsgi was taking too much time to respond.

My first hunch was the expensive query. We recently upgraded to Postgres 12. I saw that query logging wasn't turned on, so I turned it on for expensive queries. It was logging the expensive queries, but after some observation I found no correlation between them and the HTTP requests. There were more requests timing out than there were expensive queries. So that was out the window. It was most likely the application itself taking time to respond. But we hadn't made any changes to the application recently that could affect response time this much. Something else must suddenly have been taking too much time. That's where I decided to profile the requests. We don't use New Relic or any other service that runs a profiler on each request.
That's something we are going to do after last night, I think. Anyway, Hasjob is a Flask application that runs on uwsgi. So I looked into how to put a profiler in between. I went through a bunch of Python profiler packages and none of them seemed to fit my use case; I needed something simple and plug-and-play to run a profiler on a production application. After some scrambling I stumbled upon werkzeug's ProfilerMiddleware. We already use werkzeug because of Flask, so this seemed perfect.

Our wsgi file looked like this -

# -*- coding: utf-8 -*-
import os.path
import sys

from hasjob import app as application

__all__ = ['application']

sys.path.insert(0, os.path.dirname(__file__))

Now I updated it to this -

# -*- coding: utf-8 -*-
import os.path
import sys

from werkzeug.middleware.profiler import ProfilerMiddleware

from hasjob import app as application

application.config['PROFILE'] = True
application.wsgi_app = ProfilerMiddleware(
    application.wsgi_app,
    restrictions=[5],
    profile_dir='/path/to/profiler/directory'
)

__all__ = ['application']

sys.path.insert(0, os.path.dirname(__file__))

restrictions says to show only the top 5 entries, and profile_dir says to log all the profiles in the provided directory. I didn't turn off stream output, so at the same time the profiler was spitting out its results in the uwsgi log. And with much amazement I saw that all the requests were going through 19-38 time.sleep calls that were delaying them. Which was confusing as hell. So something was definitely blocking the application. My guess once again was an expensive query. Now to look into what exactly was blocking the application.

The most popular solution I came across for analysing the profile files was to use gprof2dot to turn them into dot files, and then use graphviz to turn a dot file into a graphical representation of all the method calls. So I installed gprof2dot, converted one profile file and transferred it to my laptop.
I didn't know how much time or processing power it would require. I installed graphviz on my laptop and started processing the dot file at around 12AM. By the time I reached the end of writing this blog post, at 1:30AM, that file was still processing. I gave up on it around 12:20AM. It was Friday night and every minute some user was getting timed out. I started looking into how else to analyse it, and came across pstats. Let's give this a try.

import pstats

stats = pstats.Stats('directory/GET.foobar.000255ms.1392663371.prof')
stats.sort_stats('time', 'calls')
stats.print_callers('time.sleep')

Just to find that the caller of time.sleep is -

/<path_to_venv>/lib/python3.7/site-packages/eventlet/hubs/poll.py:71(wait)

I'm like, "Ok..", but why? So I check the caller of the above, just to find this -

/<path_to_venv>/lib/python3.7/site-packages/dns/resolver.py:972(resolve)

And I'm like, "Ehhh....!!" This is what was left? I boot up a Python shell and try this -

>>> import dns.resolver
>>> resolver = dns.resolver.Resolver()
>>> resolver.timeout = 30
>>> resolver.lifetime = 30
>>> resolver.query('bibhasdn.com', 'MX')
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    resolver.query('bibhasdn.com', 'MX')
  File "/<path_to_venv>/lib/python3.7/site-packages/dns/resolver.py", line 1091, in query
    True)
  File "/<path_to_venv>/lib/python3.7/site-packages/dns/resolver.py", line 1043, in resolve
    timeout = self._compute_timeout(start, lifetime)
  File "/<path_to_venv>/lib/python3.7/site-packages/dns/resolver.py", line 950, in _compute_timeout
    raise Timeout(timeout=duration)
dns.exception.Timeout: The DNS operation timed out after 31.13741111755371 seconds

I'm like, "Really?" After a few more print_callers() calls I found the root of the issue. We use a library called mxsniff that checks if an email address is from a public email domain. We forgot to add a cache for the per-domain checks, so it was making DNS queries on every request.
DNS queries were timing out and so were the requests. Now to look into why the DNS queries were timing out. I go look into the github repository dnspython that provides dns.resolver. Just to find this - :| I downgrade dnspython to previous available version and the site is back up. All within about 30 seconds. The graphviz command was still running. I hit Ctrl+C. I won't recommend doing it unless you have a supercomputer I guess. Overall it was actually a fun Friday night. I never had to profile a production server by hand. We use flask-debugtoolbar locally, so that usually takes care of everything when developing. Hope this experience helps someone someday. Update: I wrote a small rough script that takes in a profile file, a method name and depth level and returns the chain of callers leading upto the method. Gonna use it next time.
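The missing per-domain cache the post mentions can be retrofitted with `functools.lru_cache`. This is a sketch of the idea, not the actual Hasjob fix; `slow_mx_lookup` is a hypothetical stand-in for the real `dns.resolver` call so the example runs without touching the network.

```python
# Sketch: memoize per-domain lookups so a slow (or timing-out) DNS query
# is paid at most once per domain instead of once per request.
from functools import lru_cache

CALLS = {"n": 0}  # counts how often the expensive path actually runs

@lru_cache(maxsize=1024)
def slow_mx_lookup(domain):
    CALLS["n"] += 1
    # pretend MX check; the real code would call dns.resolver here
    return domain in {"gmail.com", "yahoo.com"}

# simulate 100 requests hitting the same two domains
for _ in range(100):
    slow_mx_lookup("gmail.com")
    slow_mx_lookup("example.org")

print(CALLS["n"])  # → 2: the expensive lookup ran only twice
```

The trade-off is that DNS answers can change; in production you would want a TTL-aware cache rather than an unbounded memo.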
https://bibhasdn.com/its-always-dns-episode-3416/
different firms

Hello, I have a regression problem: I have a dataset consisting of yearly firm returns (for firm1-firm3), the volatility of each firm in that period (std1-std3), and data on the market return (market_return) over a specific time period. Now I want to estimate the following regression over the whole time period:

return_firm_i = a*std_i + b*market_return

It's not a problem to do this with only one firm:

regress firm1 std1 market_return, robust

However, I want to do this for the whole dataset and estimate separate coefficients for each firm. Could somebody help me? My dataset looks like this:

date firm1 firm2 firm3 std1 std2 std3 market_return
1985 0.03 0.04 0.05 0.3 0.4 0.5 0.6
1986 ..
1987 ...

Thanks in advance!
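The archived thread does not include a reply, but the per-firm estimation the poster asks for is just the same no-intercept least-squares fit repeated once per firm. Below is a sketch of that idea outside Stata, using made-up data; the coefficient values 0.5 and 1.2 are arbitrary assumptions used to generate the test data.

```python
# Sketch: run return_firm_i = a*std_i + b*market_return separately for
# each firm via ordinary least squares. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 30
market = rng.normal(size=n)

# generate three firms whose true coefficients are a=0.5, b=1.2
data = {}
for i in range(1, 4):
    std = rng.normal(size=n) ** 2
    ret = 0.5 * std + 1.2 * market + rng.normal(scale=0.01, size=n)
    data[f"firm{i}"] = (ret, std)

for firm, (ret, std) in data.items():
    X = np.column_stack([std, market])       # regressors: std_i, market_return
    coef, *_ = np.linalg.lstsq(X, ret, rcond=None)
    print(firm, coef.round(2))               # recovers roughly [0.5, 1.2]
```

In Stata itself the equivalent move would be a loop over the firm index (`forvalues i = 1/3 { regress firm\`i' std\`i' market_return, robust }`), storing each set of estimates as you go.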
http://www.stata.com/statalist/archive/2012-08/msg00445.html
Can anyone explain what is == and equal( ) ?

Matt Swaggi Ranch Hand Joined: May 29, 2008 Posts: 61 posted Jul 05, 2008 07:14:00

Hi, please help me out by explaining the meaning of, and the differences between, == and the equals() method!

Mark Vedder Ranch Hand Joined: Dec 17, 2003 Posts: 624 posted Jul 05, 2008 09:43:00

== is used to determine reference equality. In other words, are two variables (i.e. references) referencing or "pointing" to the same object. .equals() is used to determine if two objects are "equal". What makes two objects equal is up to the object. More on that in a little bit.

I personally refer to "==" as "reference equivalence", or just "equivalence" for short; so I read "x == y" as "Is x equivalent to y". And I call ".equals()" equality; so I read "x.equals(y)" as "Is x equal to y". (Note that these are my terms and not official.) Here's some examples using a calendar object:

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.text.DateFormat;
import java.text.SimpleDateFormat;

public class Equality {

    public static void main(String[] args) {
        DateFormat formatter = new SimpleDateFormat("d MMM yyyy");

        Calendar calOne = new GregorianCalendar(2008, Calendar.JULY, 5);
        Calendar calTwo = new GregorianCalendar(2008, Calendar.JULY, 5);
        Calendar calThree = new GregorianCalendar(2008, Calendar.JULY, 6);
        Calendar calFour = calOne;

        // Although calOne and calTwo are the same date, they are pointing to
        // different objects, so this will be false
        boolean result = calOne == calTwo;
        System.out.println("Result of calOne == calTwo is: " + result);

        // Here we see calOne and calTwo, since they represent the same date,
        // they are equal and the result will be true
        result = calOne.equals(calTwo);
        System.out.println("Result of calOne.equals(calTwo) is: " + result);

        // This one should be obvious; calOne and calThree hold different
        // dates, so they are neither equivalent, nor equal.
        result = calOne.equals(calThree);
        System.out.println("Result of calOne.equals(calThree) is: " + result);
        result = calOne == calThree;
        System.out.println("Result of calOne == calThree is: " + result);

        // Here we see that since calOne and calFour "point" to the same
        // object, they are equivalent.
        result = calOne == calFour;
        System.out.println("Result of calOne == calFour is: " + result);

        // Any time x == y is true, then x.equals(y) should also always be
        // true (assuming the .equals() method was written correctly)
        result = calOne.equals(calFour);
        System.out.println("Result of calOne.equals(calFour) is: " + result);

        // To further illustrate the point, here we change the calOne object
        // by adding a day:
        calOne.add(Calendar.DAY_OF_MONTH, 1);
        System.out.println("\n\nResults after adding a day to CalOne (so it is now July 6).");

        // since calOne and calFour point to the same object, they are still
        // equivalent and equal, both now holding July 6
        result = calOne == calFour;
        System.out.println("Result of calOne == calFour is: " + result);
        result = calOne.equals(calFour);
        System.out.println("Result of calOne.equals(calFour) is: " + result);

        // And notice that calOne and calThree are now equal (whereas before
        // they were not), but they are still not reference equivalent
        result = calOne.equals(calThree);
        System.out.println("Result of calOne.equals(calThree) is: " + result);
        result = calOne == calThree;
        System.out.println("Result of calOne == calThree is: " + result);
    }
}

And here is the output:

Current Values:
calOne = 5 Jul 2008
calTwo = 5 Jul 2008
calThree = 6 Jul 2008
calFour = 5 Jul 2008

Result of calOne == calTwo is: false
Result of calOne.equals(calTwo) is: true
Result of calOne.equals(calThree) is: false
Result of calOne == calThree is: false
Result of calOne == calFour is: true
Result of calOne.equals(calFour) is: true

Results after adding a day to CalOne (so it is now July 6).

Current Values:
calOne = 6 Jul 2008
calTwo = 5 Jul 2008
calThree = 6 Jul 2008
calFour = 6 Jul 2008

Result of calOne == calFour is: true
Result of calOne.equals(calFour) is: true
Result of calOne.equals(calThree) is: true
Result of calOne == calThree is: false

Did you notice how calFour's value changed when we added a day to calOne? This is because calOne and calFour are referencing the same object. Try typing up the program and running it. Change things and make sure you get the results you expect. I find the Calendar class a good one to use when playing around with equality. I would recommend that you not use Strings when trying things with == and .equals(). The String class has some specialness to it; it's an immutable class and, for issues of memory management, the JVM "handles" it differently than most other classes. So you will sometimes get unexpected results. Typically, you want to avoid using == with Strings. When you get farther along in your programming knowledge, you'll learn about that.

Note that it would be unusual to use == for two references within the same method like we did above (we did so for demonstration purposes). Since we said calFour = calOne in the method, we know that calOne == calFour is true (unless we had something in an if block that might change it and we later want to test to see if it has been changed). So more often == is used when an object is passed into a method and we want to see if it is referencing the same object as another reference. Overall, equals() is typically used far more often (on objects) than == is.

As I mentioned above, it is up to the type of object -- in its equals() method -- to determine what makes two of them equal. For example, what makes two people equal? If I use just their name, then two different people with the same name would show as being equal. But sometimes we may decide two objects are equal even if one or more of their properties (i.e. variables) are not equal.
For example, we may have a program that has a "Car" object whose equals() will return true for two cars even if their colors are different. Why? Because in the context of that application, color doesn't matter, so it was decided that it would be ignored when comparing for equality. Yet another application may decide that color is important when determining if two cars are equal. Still, a third application may decide that two cars are equal only if they are the same physical car, and therefore looks at the VIN (Vehicle Identification Number) to determine equality. So in this third application, even if the cars are the same make, model, and color, with the same accessories, they are not equal if they have different VINs; whereas in the second application they would be equal. Because of this, you should always carefully read the javadoc for an .equals() method to determine how that object determines if two objects are equal. For example, the Calendar object we used above can also hold a time (since we didn't specify one, it defaults to midnight). So is 5 July 2008 10:05AM equal to 5 July 2008 11:12PM? In some applications the time may not matter. In others it does. See if you can read the Calendar.equals() javadoc and determine if these two dates (same day, different times) are equal or not as far as the Calendar class is concerned. Then change the above code to test your answer.

It's also important to make sure that an Object overrides the equals method. The equals() method is defined in the Object class, so all objects inherit the default equals() method. But if an object does not override the method, the equals method ends up acting more like ==. This is because until you define what makes two "Things" or two "Widgets" equal by overriding the equals method, there is no way to tell.

[ July 05, 2008: Message edited by: Mark Vedder ]

Mark Vedder Ranch Hand Joined: Dec 17, 2003 Posts: 624 posted Jul 05, 2008 10:01:00

Oh, something else I should mention...
the above discussion is for Objects. When used on primitives (i.e. int, short, long, double, float, byte, etc.), == acts more like equals(). In other words, it determines if the two values are equal.

public class PrimitiveEquality {

    public static void main(String[] args) {
        int a = 1;
        int b = 1;
        System.out.println("a == b result is: " + (a == b));
        // Because a & b are primitives and not objects, there is no such thing as a.equals(b)
        b += 2;
        System.out.println("After adding 2 to b, a == b result is: " + (a == b));

        Integer x = new Integer(1);
        Integer y = new Integer(1);
        System.out.println("x == y result is: " + (x == y));
        System.out.println("x.equals(y) result is: " + (x.equals(y)));
    }
}

Output:

a == b result is: true
After adding 2 to b, a == b result is: false
x == y result is: false
x.equals(y) result is: true

Amit Ghorpade Bartender Joined: Jun 06, 2007 Posts: 2796 posted Jul 05, 2008 10:03:00

This question has been posted a lot of times. Please search the forum for more answers. Hope this helps.

SCJP, SCWCD. | Asking Good Questions |

Muhammad Saifuddin Ranch Hand Joined: Dec 06, 2005 Posts: 1321 posted Jul 05, 2008 10:31:00

this thread may help.

Saifuddin.. How To Ask Questions On JavaRanch My OpenSource

Agarwal Priyanka Greenhorn Joined: Jul 25, 2008 Posts: 20 posted Jul 28, 2008 05:10:00

All the primitive type wrapper classes override the Object.equals() method to compare the value of the objects; the default Object.equals() checks if the variables reference the same object. String's equals(Object anObject) compares this string to the specified object, while the == operator checks whether two String reference variables point to the same String object.

public class Strings1 {

    public static void main(String[] args) {
        String str1 = "My name is bob";
        String str2 = "My name is bob";
        String str3 = "My name " + "is bob"; // Compile time expression
        String name = "bob";
        String str4 = "My name is " + name;
        String str5 = new String("My name is bob");

        System.out.println("str1 == str2 : " + (str1 == str2));
        System.out.println("str2 == str3 : " + (str2 == str3));
        System.out.println("str3 == str1 : " + (str3 == str1));
        System.out.println("str4 == str5 : " + (str4 == str5));
        System.out.println("str1 == str4 : " + (str1 == str4));
        System.out.println("str1 == str5 : " + (str1 == str5));

        System.out.println("str1.equals(str2) : " + str1.equals(str2));
        System.out.println("str2.equals(str3) : " + str2.equals(str3));
        System.out.println("str3.equals(str1) : " + str3.equals(str1));
        System.out.println("str4.equals(str5) : " + str4.equals(str5));
        System.out.println("str1.equals(str4) : " + str1.equals(str4));
        System.out.println("str1.equals(str5) : " + str1.equals(str5));
    }
}

keep trying... that's the way to success... with best regards... p.agarwal
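For comparison (this is not part of the original thread), Python draws exactly the same line: `is` tests reference identity like Java's `==` on objects, while Python's `==` tests value equality like a well-written `equals()`.

```python
# Python analogue of the Java discussion above: `is` vs `==`.
from datetime import date

d1 = date(2008, 7, 5)
d2 = date(2008, 7, 5)
d3 = d1                 # alias, like calFour = calOne

print(d1 is d2)   # → False: two distinct objects holding the same date
print(d1 == d2)   # → True: same value
print(d1 is d3)   # → True: same object
print(d1 == d3)   # → True: identity implies equality
```

As in Java, small integers and interned strings are special-cased by the runtime, so `is` on those can mislead; stick to `==` for value comparisons.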
http://www.coderanch.com/t/410982/java/java/explain-equal
When previously importing my CSV file the data would upload sorted, but now, for no apparent reason, it's not coming out sorted after being uploaded, and I'm having difficulty figuring out how to get it to sort. To fix this, I'm looking at my file upload code and HTML code. I built a dynamic drop-down box whereby the user selects a state, which populates the next drop-down box with all banks listed in that state, by bank name. I need the bank names in the second drop-down box to appear alphabetically. Previously, when I uploaded my CSV file, the bank names would appear alphabetically, but now it's not doing that. Any help would be greatly appreciated. Probably something silly I'm overlooking here.

FDIC MODEL

def self.import(file)
  CSV.foreach(file.path, headers: true) do |row|
    attributes = row.to_hash
    attributes['state'] = State.where(name: attributes['state']).first
    Fdic.create! attributes
  end
end

<h4>Select a State</h4>
<%= select_tag :state, options_for_select(State.all.pluck(:name)), include_blank: "States" %>

<h4>Select a Bank</h4>
<%= select_tag :bank, option_groups_from_collection_for_select(@states, :fdic, :name, :id, :bank), include_blank: "Select a Bank" %>

If I understand your problem correctly, and if your model State looks like:

class State < ..
  has_many :banks
  etc..
end

this correction should help you, I think:

class State < ..
  has_many :banks, -> { order("banks.name") }
  etc..
end

P.S. If you use find_by your code would be clearer)

attributes['state'] = State.find_by(name: attributes['state'])
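Stripped of the Rails specifics, the fix in the answer is simply "keep each state's associated bank names sorted". A language-neutral sketch of that grouping-and-ordering step (in Python, with made-up sample data):

```python
# Sketch: group banks by state and keep each state's list sorted by name,
# the same effect as `has_many :banks, -> { order("banks.name") }`.
banks = [
    ("Texas", "Zions Bank"), ("Texas", "Alamo National"),
    ("Ohio", "Huron Savings"), ("Ohio", "Buckeye Trust"),
]

by_state = {}
for state, name in banks:
    by_state.setdefault(state, []).append(name)

for names in by_state.values():
    names.sort()  # alphabetical, so the drop-down renders sorted

print(by_state["Texas"])  # → ['Alamo National', 'Zions Bank']
```

The point of doing the ordering at the association (or query) level rather than at import time is that it survives re-imports: the CSV can arrive in any order and the drop-down still comes out alphabetical.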
https://codedump.io/share/FOWd1wXwWd8A/1/sort-imported-file-ruby
Sieve of Eratosthenes is an algorithm with which we find the prime numbers less than N, where N is an integer value. It is an efficient method for finding the prime numbers up to a limit; using it we can find the prime numbers up to 10000000. The basic approach is this: we allocate an array of size N-1 holding the numbers from 2 to N. We traverse from the first unmarked number in the array, which is 2, and mark all the multiples of 2 up to N. Then we move to the next unmarked number, which is 3, and mark all of its unmarked multiples. We repeat this process until we reach sqrt(N). The numbers that remain unmarked are the prime numbers, so we iterate one more time and store them in the answer. Let's see how the Sieve of Eratosthenes works.

Working Using Example

Step:1 Create an array containing the numbers from 2 to N (here we take 100).
Step:2 Mark all the multiples of the first unmarked number in the array, which is 2.
Step:3 Now mark all the multiples of the next unmarked number, which is 3.
Step:4 Move to the next unmarked number, which is 5. Now mark all the multiples of 5.
Step:5 Move to the next unmarked number, up to the last unmarked number in the range 2 to the square root of N.
Step:6 All the remaining unmarked numbers are our prime numbers.

Prime Numbers: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97.

Algorithm

Step:1 Store the numbers from 2 to N in a list.
Step:2 Set all numbers as unmarked.
Step:3 For i in range 2 to Sqrt(N): If i is unmarked then: Mark all the multiples of i.
Step:4 Print all the unmarked numbers.
Implementation /*C++ Implementation for Sieve of Eratosthenes.*/ #include<bits/stdc++.h> using namespace std; bool marked[1000000]; void Sieve(int n) { /*for i from 2 to sqrt(N)*/ for(int i=2;i*i<=n;i++) { /*if i is unmarked*/ if(marked[i]==false) { /*marked all the multiples of i*/ for(int j=i*i;j<=n;j+=i) { /*marked*/ marked[j] = true; } } } cout<<"Prime number from 2 to N are: "<<endl; for(int i=2;i<=n;i++) { if(marked[i]==false) { cout<<i<<" "; } } cout<<endl; } int main() { /*Take input N*/ int n; cin>>n; /*call the sieve function*/ Sieve(n); return 0; } 100 Prime number from 2 to N are: 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 Time Complexity O(N*log(logN)) where N is the limit until we have to find the prime numbers. Here loglogN comes because of the loop run in the following pattern. For 2 -> It runs N/2 times, For 3 -> It runs N/3 times, For 5 -> It runs N/5 times… So, N*( 1/2 + 1/3 + 1/5 + 1/7 +……) = N*log(logN). Note: Harmonic Progression of the sum of primes is equal to the log(logN) where N is the number of terms. Space Complexity O(N) where N is the limit until we have to find the prime numbers. We use this space to store the numbers which we use to mark.
https://www.tutorialcup.com/interview/algorithm/sieve-of-eratosthenes.htm
#include <hallo.h> * Peter De Wachter [Sat, Aug 24 2002, 01:25:54AM]: > It has the following license: > -- UNACE-SOURCE v1.2b (extract-util) -- > the source may be distributed and used, > but I,Marcel Lemke, retain ownership of > the copyrights to the source. > --------------------------------------- > > I think this makes it non-free (it doesn't explicitly allow > modifications), but it may help to figure out the algorithm. Worse. I considered packaging it (*) but stopped after discussing the license issue. You are only allowed to use the source, nothing is said about the binary created. Looks like the qmail-src story. (*) while extending the support in the unp Gruss/Regards, Eduard. -- Linux - aus klaren Quellen wird ein starker Strom.
https://lists.debian.org/debian-devel/2002/08/msg01531.html
NetworkX Python Overview

The Python NetworkX module contains tools for creating, manipulating, and visualizing networks, also known as graphs. It is not only a graph drawing package; it also works together with Matplotlib. Using it, we can build highly flexible graphs. A graph is defined as a set of nodes and edges, where each edge is associated with two nodes. NetworkX also adds the possibility of associating properties with each node and edge.

The NetworkX module is designed to handle data at a scale relevant to modern problems. It has several classes for graphs and digraphs, and it also has features to convert a graph between several formats.

Install Modules

NetworkX can be installed using pip, Miniconda/Anaconda, or from source code. Here, we have installed it using the pip tool -

pip install networkx

As this module is already installed on the system, pip returns the following -

Requirement already satisfied: networkx in c:\python37\scripts\projects\env\lib\site-packages (2.4)
Requirement already satisfied: decorator>=4.3.0 in c:\python37\scripts\projects\env\lib\site-packages (from networkx) (4.4.1)

Import NetworkX

First, we import the networkx module.

import networkx as nx

Create graph object

This module provides different classes for different networks, like directed and undirected networks. Let's create a basic undirected graph object -

g = nx.Graph()

Adding or Removing Nodes

We can use the add_node() method to add one graph node at a time, or use the add_nodes_from() method to add a list of nodes or a container of nodes.

# Add one node at a time
g.add_node(1)

# Add a list of nodes
g.add_nodes_from([2, 3, 4])

# container of nodes
cn = nx.path_graph(5)
g.add_nodes_from(cn)

To remove a node from the graph, simply use the remove_node() method and pass the node.

g.remove_node(2)

Adding and Removing Edges

The add_edge() and add_edges_from() methods are used to add a single edge, or a list or container of edges, respectively.
# Add single edge at a time
g.add_edge(1, 2)

# Add a list of edges
g.add_edges_from([(1,2), (1,3)])

To remove an edge from the graph, simply use the remove_edge() method and pass the edge.

g.remove_edge(1,2)

Simple Example

import matplotlib.pyplot as plt
import networkx as nx

g = nx.Graph()
g.add_edges_from([(1, 2), (1, 3), (1,4), (2, 3), (2, 4), (2, 5),
                  (3, 4), (4, 5), (4, 6), (5, 7), (5, 8), (7, 8)])

# original Graph created
plt.subplot(211)
nx.draw(g)
plt.show()
plt.savefig('graph.png')

Accessing Nodes and Edges

This module provides several methods to access the nodes and edges, the number of nodes and edges, and so on.

import matplotlib.pyplot as plt
import networkx as nx

g = nx.Graph()
g.add_edges_from([(1, 2), (1, 3), (1,4), (2, 3), (2, 4), (2, 5),
                  (3, 4), (4, 5), (4, 6), (5, 7), (5, 8), (7, 8)])

print(g.nodes())
print(g.edges())
print(g.degree(2))
print(g.number_of_nodes())
print(g.number_of_edges())

Convert to Directed Graph

The graph generated above is an undirected graph, but we can also convert it to a directed graph using the to_directed() method, which returns the directed representation of the graph.

import matplotlib.pyplot as plt
import networkx as nx

g = nx.Graph()
g.add_edges_from([(1, 2), (1, 3), (1,4), (2, 3), (2, 4), (2, 5),
                  (3, 4), (4, 5), (4, 6), (5, 7), (5, 8), (7, 8)])

# Directed Graph created
plt.subplot(211)
H = nx.to_directed(g)
nx.draw(H)
plt.show()
plt.savefig('graph.png')

Similarly, we can convert a directed graph to an undirected graph using the to_undirected() method.
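To see what bookkeeping methods like add_edge(), degree() and number_of_edges() do under the hood, here is a deliberately tiny stand-in for an undirected graph built on plain adjacency sets. This is a simplification, not how NetworkX is actually implemented (NetworkX stores attribute dictionaries per node and edge), and it runs without NetworkX or Matplotlib installed.

```python
# Minimal stand-in for an undirected graph, mimicking a slice of the
# NetworkX API shown above (add_edge, degree, number_of_edges).
class TinyGraph:
    def __init__(self):
        self.adj = {}  # node -> set of neighbours

    def add_edge(self, u, v):
        # undirected: record the edge in both adjacency sets
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def degree(self, n):
        return len(self.adj[n])

    def number_of_edges(self):
        # each undirected edge is stored twice, once per endpoint
        return sum(len(s) for s in self.adj.values()) // 2

g = TinyGraph()
for u, v in [(1, 2), (1, 3), (2, 3), (2, 4)]:
    g.add_edge(u, v)

print(g.degree(2), g.number_of_edges())  # → 3 4
```

Because each edge appears in two adjacency sets, the edge count halves the total neighbour count; the real g.degree(2) and g.number_of_edges() calls above follow the same logic.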
https://etutorialspoint.com/index.php/339-nextworkx-python-overview
IRC log of tagmem on 2003-05-05 Timestamps are in UTC. 18:47:40 [RRSAgent] RRSAgent has joined #tagmem 18:47:56 [Ian] invite Zakim 18:47:59 [Zakim] Zakim has joined #tagmem 18:48:03 [Ian] Zakim, this will be TAG 18:48:03 [Zakim] ok, Ian; I see TAG_Weekly()2:30PM scheduled to start 18 minutes ago 18:49:28 [DanC] DanC has joined #tagmem 18:50:22 [Ian] Ian has changed the topic to: Agenda 18:57:24 [DanC] eek! issue 9 is in the "intends to discuss" part of the agenda 18:58:17 [Zakim] TAG_Weekly()2:30PM has now started 18:58:23 [Zakim] +Norm 18:58:32 [Ian] zakim, call Ian-BOS 18:58:32 [Zakim] ok, Ian; the call is being made 18:58:34 [Zakim] +Ian 18:59:48 [Chris] Chris has joined #tagmem 19:00:31 [Chris] when were the (re) appointments to the tag made, can't see the announcement 19:00:47 [Norm] Uhm, about the same time as the voting 19:01:02 [Zakim] +??P1 19:01:10 [Norm] zakim, ??P1 is Roy 19:01:10 [Zakim] +Roy; got it 19:01:11 [Ian] zakim, ??P1 is Roy 19:01:11 [Zakim] sorry, Ian, I do not recognize a party named '??P1' 19:01:20 [Norm] zakim, who's on the phone? 19:01:20 [Zakim] On the phone I see Norm, Ian, Roy 19:01:30 [Ian] Chair: NW, Scribe: IJ 19:01:48 [Ian] Regrets: TB, PC, SW. 19:02:43 [Zakim] +DanC 19:03:04 [Zakim] +Chris 19:03:13 [Zakim] -DanC 19:03:34 [Zakim] +DanC 19:07:01 [Ian] zakim, who's here? 19:07:01 [Zakim] On the phone I see Norm, Ian, Roy, DanC, Chris 19:07:02 [Zakim] On IRC I see Chris, DanC, Zakim, RRSAgent, Norm, Ian 19:08:04 [Ian] IJ: I think TBL not available. 19:08:26 [DanC] regrets from timbl; he's in LA 19:08:35 [Ian] Resolved: 28 Apr teleconf minutes accepted 19:08:45 [Ian] 19:09:15 [Ian] This agenda: 19:09:19 [Zakim] +David_Orchard 19:09:21 [Ian] Next meeting 12 May (with Voice WG). 19:09:38 [Ian] Regrets: NW likely to be late. 19:09:46 [Ian] But NW does plan to attend. 19:09:50 [DanC] I'm available 12May 19:09:54 [Ian] DO: Possible regrets. 19:10:33 [Ian] ---- 19:11:03 [Ian] Action SW 2003/04/14: Propose meeting times, structure to TAG. 
19:11:08 [Ian] [Not completed] 19:11:50 [Ian] 1.2 W3C Track Presentation 19:11:59 [Ian] Action PC 2003/04/14: Propose TAG's WWW2003 report to TAG. 19:12:06 [Ian] [Not completed] 19:13:38 [Ian] IJ: I think next week last real chance to look this over as a group before track. 19:14:12 [Ian] NW: I think we should schedule Voice Browser WG meeting to start at 3:45, so we have time to discuss presentations. 19:14:39 [Ian] Agenda next week: (1) Review presentations (WWW2003, AC mtg) 3pm to 3:45 (2) Voice stuff after that. 19:14:45 [Ian] 1.3 TAG report at AC meeting 19:14:54 [Ian] Action DO, CL: Present TAG's AC meeting report to TAG. Due 5 May 19:15:05 [Ian] CL: Progress. Let's look at this next week. 19:15:12 [Ian] ------------- 19:15:38 [Ian] CL: One thing we agreed to add on report was TAG role at tech plenary. What was the take-home message. 19:15:48 [Ian] CL: Do people here wish to note something in particular? 19:16:20 [DanC] (I remember some debrief discussions; I'd have to review the minutes of our meetings just after the TP) 19:16:34 [Ian] DO: General constructive interaction to be noted. 19:16:40 [Ian] ------------- 19:17:10 [Ian] * whenToUseGet-7 19:17:10 [Ian] o Solicit review of revised draft finding from IJ? 19:17:10 [Ian] o IJ proposal: Take into account DC's comments and then solicit review. 19:18:40 [Ian] IJ: I propose to accept DC's proposed simplifications (talk about "read"/"write") and put out for review. 19:19:22 [Ian] Resolved: IJ can publish get7 finding with DC's suggestions. 19:19:30 [Ian] Action IJ: Publish revised get7 finding with modifications. 19:19:33 [Ian] --- 19:19:53 [Ian] contentTypeOverride-24 19:19:53 [Ian] * Solicit review of draft finding from IJ? 19:19:53 [Ian] * Completed action SW, NW 2003/04/17: Read Client handling of MIME headers. If +2, then IJ will send to www-tag. (Done) 19:20:23 [Ian] Done: 19:20:30 [Ian] Action IJ: Send email to the Voice Browser WG. 19:21:08 [Ian] IJ: I deleted the word "culprit" from second scenario. 
19:21:13 [Ian] --------------------- 19:21:21 [DanC] 19:21:28 [Ian] # uriMediaType-9 19:21:28 [Ian] * Action DC 2003/02/06: Start discussion on discuss@apps.ietf.org, but not urgent 19:21:28 [Ian] * IANA appears to have responded to the spirit of this draft (see email from Chris Lilley).What's required to close this issue? 19:21:37 [Ian] DC: I propose to withdraw that action. 19:22:02 [Ian] DC: Chris pointed out in email that looks like battle over. 19:22:10 [Ian] 19:22:11 [Chris] nearly over 19:22:23 [Chris] I gave specific feedback on how to complete it 19:22:30 [Ian] DC: But in minutes of 13 March IETF/W3C teleconf, 19:22:54 [Ian] some relevant actions taken. 19:23:30 [Ian] RF: I'm not sure that IETF finished the tasks that CL requested. 19:23:47 [Ian] CL: It might be handy if the TAG confirmed my position. 19:24:11 [Ian] CL: I suggest the following changes 19:24:11 [Ian] a) The first column should be the subtype string, as now, and should 19:24:11 [Ian] always link to 19:24:11 [Ian] 19:24:21 [Ian] CL: b) The second column should contain, as now, the name of the format, 19:24:21 [Ian] which is or should be provided for all types. 19:24:35 [Ian] RF: I don't think they would disagree; question of resources perhaps. 19:24:47 [Ian] CL: c) The third column, which does not seem super necessary and could 19:24:47 [Ian] be omitted, would be a link to the person that registered that type 19:24:47 [Ian] or wrote the rfc that registered it or wrote the email that 19:24:47 [Ian] registered it or whatever. I don't see a lot of use for this, 19:24:47 [Ian] really. 19:25:02 [Ian] CL: We could also point out that these changes would address a lot of what DanC talks about in his internet draft. 19:25:16 [Ian] RF: You could suggest changes to the RFC : How they should maintain their site. 19:25:41 [Ian] DC: Next sync point for W3C/IETF is 17 June. 19:25:55 [Ian] DC: I propose we withdraw my action and check in 17 June. 19:27:01 [Ian] Resolved: DC's action withdrawn on this issue. 
19:27:53 [Ian] Action CL: Propose CL's three changes to registration process to Ned Freed. [What forum?] 19:28:27 [Ian] DC: You can find out who's editor of MIME spec that talks about registration. You could write editors and cc public-ietf-w3c. 19:28:42 [Chris] 19:28:47 [Ian] DC: I'll likely ping Ned on this (e.g., one week before) 17 June IETF/W3C meeting. 19:30:32 [Ian] IJ: Sounds like we're not quite ready to move issue 9 from pending. 19:30:34 [Ian] DC: Indeed. 19:30:44 [Ian] ----- 19:30:56 [Ian] abstractComponentRefs-37 19:30:56 [Ian] * See issue description from David Orchard 19:31:08 [Ian] 19:31:21 [Ian] Second Draft of summary of TAG issue abstractComponentRefs-37 19:32:10 [Ian] DO: One issue from SW has to do with getting type info out of the URI. Question of "no metadata in URI". 19:32:40 [Ian] DO: Little traffic (publicly or privately) expressing preference for various mechanisms. 19:32:46 [Ian] CL: I commented on syntax ; use of xpointer. 19:33:35 [Ian] DO: I understand that there's some question about whether it's a desirable requirement that the type info be in the URI. 19:34:22 [Ian] CL: Example in schema: "#" followed by "float". 19:34:46 [Ian] RF: If you're dealing with polymorphic operators where function distinguished by type, then you would need something. 19:34:52 [Norm] ack danc 19:34:52 [Zakim] DanC, you wanted to respond to opacity and constraints 19:34:57 [Ian] DC: Opacity has to do with freedom of choosing URIs. 19:35:10 [Chris] eg 19:35:23 [Ian] DC: E.g., if you require first path component to be WSDL type, then you restrict what server manager can do. 19:35:31 [Ian] DC: If you are making up URIs, then you can say "this part of it is the type" 19:36:31 [DanC] haven't we agreed that things that have #'s in them are URIs too? 19:36:48 [Ian] NW, CL: See also TB email on Apple Music Store and use of URI schemes instead of headers 19:37:10 [Ian] DC: We don't have much good motivation for making new URI schemes. 
Apple folks did it and people seem to be happy. 19:37:37 [DanC] for *not* making new URI scheems. 19:38:32 [Ian] DC: Which of DO's options is likely to bite me if I ignore it? Has the WSDL WG picked one? 19:38:40 [Ian] DO: They picked the one they suggested - namespace + fragid 19:38:52 [Ian] (DO option 8) 19:38:57 [Chris] 8. Use namespace name and new fragment identifier syntax. This is the 19:38:57 [Chris] current WSD proposal. 19:39:12 [Ian] DC: I don't hate option 8 all that much. 19:39:43 [Ian] The sample URI is 19:39:43 [Ian] "(TicketAgent/listFlights/listFlightsR 19:39:43 [Ian] equest)". 19:40:09 [Ian] RF: Hash marks are usually left to people who have no choice, not for designers of the media type. 19:40:59 [Ian] DC: I don't agree that this is a "violation of URI fragment identifier rules that fragment identifiers are 19:40:59 [Ian] based upon the media type of the dereferenced content." 19:41:28 [Ian] DO: If somebody puts a RDDL doc instead of a WSDL doc, the URI producer will have to use a different URI, depending on the media type. 19:41:37 [Ian] DC: The answer then is don't put a RDDL document there. 19:42:11 [Ian] DC: You can use this syntax and be consistent with the URI spec; but you can't have all your cakes and eat all of them (i.e., can't also be able to put a RDDL doc with this same syntax). 19:42:26 [Ian] NW: If you are relying on use of a frag id, then you'd better expect to get back a RDDL doc. 19:42:47 [DanC] all roads lead to issue 8. there is no other issue 1/2 ;-) 19:42:49 [Ian] DO: The consequence is tight coupling between URI syntax and WSDL doc. 19:43:37 [Chris] so, we need a sort of double dereference - from a namespace to a (part of) a rddl document to whatever that part points to/// 19:43:43 [Ian] DO: If you don't use fragids, you might have other probs but not this particular one. 19:44:54 [DanC] hmm... 19:44:59 [Ian] RF: Slash characters can't be used for something other than hierarchy. 
19:46:12 [Ian] [Discussion of option 10] 19:46:40 [DanC] option 10 ( ) is inconsistent with timbl's position on issue 14, which I sympathize with 19:47:26 [Ian] RF: Use of parens is problematic (e.g., if relative URIs used within 2 ref items). 19:47:39 [DaveO] DaveO has joined #tagmem 19:48:09 [Ian] DC: TBL's position on issue 14 is that if you want to refer to a non-doc thing, you need to have a "#" in the URI. 19:49:29 [DaveO] Roy, I think you are saying you'd change "(TicketAgent/listFlights/listFlightsRe 19:49:29 [DaveO] quest)" 19:50:00 [DaveO] to " " or 19:50:06 [DaveO] " " 19:50:09 [Ian] DC: It's antisocial to constrain Web site managers to use URIs in a particular way in order to use your format. 19:51:21 [Ian] DC: Don't tell me, e.g., that I have to put all WSDL files at the server root. 19:52:14 [Ian] NW: The proposal would said that the WSDL doc could go anywhere on the server, but the "/input...." is controlled by the WSDL spec; not really "on your web server"; these are parameters to the service. 19:52:25 [Ian] [Some agreement from DO, DC, NW that that's reasonable] 19:53:16 [Ian] q+ 19:53:28 [Chris] q+ 19:53:36 [Zakim] +??P3 19:53:38 [Zakim] -Roy 19:53:48 [Ian] NW: Is it reasonable to say that wsdl docs and expected interactions are something server manager would be controlling, not just any user. 19:53:56 [Ian] zakim, who's here? 19:53:56 [Zakim] On the phone I see Norm, Ian, DanC, Chris, David_Orchard, ??P3 19:53:57 [Zakim] On IRC I see DaveO, Chris, DanC, Zakim, RRSAgent, Norm, Ian 19:54:18 [Ian] zakim, ??P3 is Roy 19:54:18 [Zakim] +Roy; got it 19:54:36 [Roy] Roy has joined #tagmem 19:54:46 [Ian] IJ: In option 11, what is problem of "no hierarchy of names" 19:55:57 [Ian] IJ: Can't that be captured in another parameter? 19:56:01 [Norm] You could have ...ticketagent?service=/TicketAgent/listFlights/listFlightsRequest 19:56:12 [Ian] DO: One advantage of putting in a query string is that you can use relative URIs. 
19:56:56 [Ian] CL: If I have a namespace, doesn't necessarily mean that everything "below" a piece of path is co-opted by the namespace. 19:57:15 [Ian] CL: However, with the proposed URIs of option 10, that part of the path is co-opted. 19:57:27 [Ian] DC: The same is true with RDF when used with namespaces. 19:58:08 [Ian] DC: Doesn't blow away space, but allows short local names. 19:58:54 [Ian] DC: There's an issue on our list about making URI from local name and namespace name. 19:58:57 [Ian] (Issue 6) 19:59:05 [DanC] cf rdfmsQnameUriMapping-6 : Algorithm for creating a URI from a QName? 20:01:37 [Chris] Next steps? 20:01:50 [Chris] drum up more feedback? 20:02:02 [Chris] get evidence people have read it 20:02:25 [Ian] DO: Need more input. Maybe straw poll, with deadline. 20:03:11 [Ian] DC: Some of these are just hard issues... 20:04:20 [Ian] DC: Some people have read some of the issues, even though we don't have consensus yet. 20:04:44 [Ian] NW summarizing perceived state of affairs: 20:04:50 [Ian] 1) WSDL group ready to adopt option 8. 20:05:11 [Ian] 2) The TAG needs to register a comment/objection if the TAG doesn't think 8 is a good idea. 20:06:49 [Ian] NW: Please be prepared (i.e., read proposals) for discussion (and I hope closure) at 26 May teleconf. 20:07:51 [Ian] RF: Once I understand what the goal is, I should have a clearer position. 20:08:51 [Roy] I do believe that balanced parens should be avoided 20:08:54 [Ian] ------------------------ 20:09:03 [Ian] # namespaceDocument-8 20:09:09 [Ian] [TB/PC not here] 20:10:10 [Ian] Postponed 20:10:23 [DaveO] Roy, I'll go through the doc and ensure that there are options that do and don't have balanced parens, and list what I understand to be the cons of balanced parens. 20:10:33 [Ian] --------------- 20:10:46 [Ian] Other issues? 20:11:35 [Ian] NW: Apple Music store case is about putting metadata in a URI so that info available without retrieval. 
20:11:52 [Ian] DC: I think this is distinct from metadatainURI-31, but am willing to live with it there as long as not dropped. 20:13:04 [Ian] ----------------------- 20:13:11 [Ian] 2.2 Architecture document 20:13:21 [Ian] 1. Completed action DC 2003/02/06: Attempt a redrafting of 1st para under 2.2.4 of 6 Feb 2003 draft. See request from DanC to consider this subsumed based on 26 Mar 2003 Arch Doc text. 20:13:59 [Ian] # Action DC 2003/01/27: write two pages on correct and incorrect application of REST to an actual web page design 20:14:12 [Ian] DC: I would rather drop this. 20:14:36 [Ian] Resolved: Dropped. 20:14:56 [Ian] Action DO 2003/01/27: Please send writings regarding Web services to tag@w3.org. DO grants DC license to cut and paste and put into DC writing. 20:15:10 [Ian] DO: In progress. I hope that it will be published by late next week. 20:15:37 [Ian] Action DC 2003/03/17: : Write some text for interactions chapter of arch doc related to message passing, a dual of shared state. 20:15:49 [Ian] DC: Please continue. 20:16:45 [DanC] IJ: 26Mar draft is now more than a month old... 20:16:54 [Chris] ij has comments on arch doc from timbl, will incorporate comments 20:16:57 [DanC] ... I got comments from timb; I responded, though some of them merit discussiong 20:17:00 [Ian] graham klyne 20:17:05 [Chris] graham kline also sent comments 20:17:34 [Chris] ij recalls chris has text, chris not sure what this refers to 20:17:45 [Chris] no momentum for new draft without new text 20:17:56 [Chris] could incorporate draft findings, seems early though 20:18:12 [Chris] not quiter at three month mark, but not ready for last call in june 20:18:27 [Ian] IJ: What do people expect to generate for new text? 20:18:42 [Ian] CL: I agree with IJ that once findings are reviewed/accepted, that putting conclusions in arch doc a good way to go. 
20:18:51 [Chris] adding conclusions from draft findings (after some discussion) is good 20:18:54 [Ian] q- 20:18:56 [Chris] ack chris 20:19:20 [Ian] DC: Suppose we tried being schedule driven rather than feature-driven: last call in June with what we've got? 20:19:31 [Ian] DC: Can we wall off the parts where issues are not resolved? 20:20:11 [Ian] CL: Let's go through issues list and try to gauge whether we expect to advance on these issues by the summer. 20:20:33 [Ian] [Moving on to 2.3] 20:20:41 [Chris] ie, which issues will make a summer 1.0 arch doc 20:20:51 [Ian] xlinkScope-23? 20:20:56 [Chris] nope 20:21:02 [Ian] CL, NW: Probably not for arch doc 1.0. 20:21:12 [Ian] NW: I think we can probably proceed without that done for 1.0. 20:21:16 [Ian] DC: Where does that hurt? 20:21:22 [Ian] CL: Not static, but lots of dependencies. 20:21:59 [Chris] To what extent should URIs be used in W3C specifications 20:22:02 [Ian] IRIEverywhere-27 20:22:14 [Chris] URIEverywhere-38 20:22:29 [Ian] CL proposes a new issue: URIEverywhere-38: " To what extent should URIs be used in W3C specifications?" 20:22:48 [Ian] CL: I think that once this issue is closed, there will be some effects on IRIEverywhere-27. 20:23:29 [Ian] CL: Goal is to separate (1) issues related to IRI spec (2) issues strictly tied to URIs. 20:23:39 [Ian] DC: That's issue 15 - URIEquivalence. 20:23:43 [Ian] CL: Related but not the same. 20:24:01 [Chris] busy writing this up 20:24:13 [Ian] NW: I'm uncomfortable with moving the arch doc forward without some sort of answer re: IRIs. But perhaps we could. 20:24:14 [Chris] agree we should have something in this one for version 1.0 20:24:47 [Ian] # URIEquivalence-15 20:24:57 [Ian] DC: I hope we get this one nailed one for arch 1.0. 20:25:20 [Ian] DC: I think nearly done. 20:25:34 [Ian] NW: I think we can be optimistic that we can close this one soon. 20:26:02 [Ian] DC: I think we need a test suite for URIs (e.g., for comparison). 
20:26:18 [Ian] [DC said this as URI CG co-chair] 20:26:29 [Chris] agree that there should be testsuite for uris 20:26:34 [Ian] DC: I don't have a mandate per the Activity Proposal to do a test suite for URIs, but I'm considering proposing that again. 20:27:03 [Ian] RF: I think there may not be many volunteers for a test suite, but I think nobody would stop DC. 20:27:11 [Chris] from forthcoming writeup 20:27:12 [Chris] A fairly small group of such identifiers can be included in 20:27:12 [Chris] specifications and in the associated test suite, with a great benefit 20:27:12 [Chris] to clarity. For example, see the cases listed at: 20:27:12 [Chris] 20:28:32 [Chris] xmlIDSemantics-32 is now public, will announce right after the call 20:29:56 [Ian] NW: XML Core WG is actively discussing how to deal with xml:id. 20:30:01 [Chris] xml core is discussing how to deal with xml:id 20:30:31 [Ian] Action NW: Point Core WG to CL finding once made public. 20:30:37 [Chris] its at 20:30:53 [Ian] ---- 20:31:03 [Ian] Returning to arch doc. 20:31:16 [Ian] NW: I think we should plan a last call for June, with whatever we've got. 20:31:32 [Ian] ADJOURNED 20:31:37 [Zakim] -Roy 20:31:38 [Zakim] -David_Orchard 20:31:39 [Zakim] -Norm 20:31:39 [Ian] RRSAgent, stop
http://www.w3.org/2003/05/05-tagmem-irc.html
This is the mail archive of the elfutils-devel@sourceware.org mailing list for the elfutils project.

> > > mips_debugscn_p (const GElf_Shdr *shdr, const char *name)
> > > {
> > >   if (shdr != NULL && shdr->sh_type == SHT_MIPS_DWARF)
> > >     return true;
> > >   return (*generic_debugscn_p) (shdr, name);
> > > }
> >
> > Does that mean generic_debugscn shouldn't check the shdr? Because
> > now it would check it and then fail for mips because it's not
> > SHT_PROGBITS.

The code above never calls generic_debugscn if sh_type is SHT_MIPS_DWARF. If the type is neither SHT_MIPS_DWARF nor SHT_PROGBITS, I don't know why we'd want it to return true. I don't understand what case you are concerned about.

Thanks,
Roland
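The dispatch order being discussed can be modeled in isolation. The sketch below is not the actual elfutils code: the generic check is a stand-in and the structures are stubbed (the constant values do match the usual <elf.h> definitions), but it shows why the generic check never even runs for an SHT_MIPS_DWARF section.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for the real ELF constants (values as in <elf.h>). */
#define SHT_PROGBITS   1
#define SHT_MIPS_DWARF 0x7000001e

/* Simplified section header: only the field the checks look at. */
typedef struct { unsigned int sh_type; } GElf_Shdr;

/* Hypothetical generic check: a .debug_* name plus SHT_PROGBITS type. */
static bool
generic_debugscn_p (const GElf_Shdr *shdr, const char *name)
{
  return shdr != NULL
         && shdr->sh_type == SHT_PROGBITS
         && strncmp (name, ".debug_", 7) == 0;
}

/* The MIPS hook from the mail: accept SHT_MIPS_DWARF outright, and only
   otherwise fall back to the generic check.  */
static bool
mips_debugscn_p (const GElf_Shdr *shdr, const char *name)
{
  if (shdr != NULL && shdr->sh_type == SHT_MIPS_DWARF)
    return true;   /* generic_debugscn_p is never reached here */
  return generic_debugscn_p (shdr, name);
}
```

Because of the early return, the generic SHT_PROGBITS test can never reject an SHT_MIPS_DWARF section, which is the point made in the reply above.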
https://sourceware.org/ml/elfutils-devel/imported/msg03412.html
Alexa.Media.PlayQueue GetItem

If an item's stream URI has expired, Alexa sends a GetItem request to obtain a new (refreshed) URI to begin playback on an Alexa device. For music providers, this interface is optional and should be implemented only if the stream URIs returned by your music service cloud can expire. For radio providers, this interface is mandatory.

The primary use case for this API is when an item cached by Alexa is expired and needs to be refreshed. For example, a user is listening to music on their Alexa device, and then says "Alexa, pause". One hour later, the user says "Alexa, resume" but the stream URI for the item is now expired (based on the item's stream.validUntil field). In this scenario, Alexa calls GetItem to receive a new stream URI, and then sends the new stream URI to the device to resume playback.

If your skill does not expire stream URIs, or if the URI expiration period (stream.validUntil field) is always longer than 24 hours, your skill does not need to support the GetItem API.

When a user pauses for an extended period of time (many hours), Alexa eventually forgets about the paused stream. In these cases, when a user later asks "Alexa, resume", Alexa asks the user what she or he would like to hear and then sends new GetPlayableContent and Initiate requests to the skill.

GetItem request

Header
For an overview of the header format, see message header.

Payload
The following table describes the fields in the payload of a GetItem request.

Example GetItem request
Imagine the following scenario: Alexa is playing the first song from a skill's Initiate response. The user pauses playback, then one hour later says "Alexa, resume". The stream URI is now expired, so Alexa will send a GetItem request to obtain a new, non-expired stream URI. See the following example.
{ "header": { "messageId": "2cae4d53-6bc1-4f8f-aa98-7dd2727ca84b", "namespace": "Alexa.Media.PlayQueue", "name": "GetItem", 1234567c838347ae178b4a" }, "location": { "originatingLocale": "en-US" } }, "target" } } } }

Response structure

When the skill locates the requested item, it sends a GetItem.Response.

Header
For an overview of the header format, see message header.

Payload
The following table describes the fields in the payload of a GetItem response.

Example GetItem response
In response to the preceding example request, the skill returns information about the item identified in the request. See the following example.

{ "header": { "messageId": "2cae4d53-6bc1-4f8f-aa98-7dd2727ca84b", "namespace": "Alexa.Audio.PlayQueue", "name": "GetItem.Response", "payloadVersion": "1.0" }, "payload": { "item": { "id": "e73befbe-8c27-4e4b-ab0c-9865ce8516f0", "playbackInfo": { "type": "DEFAULT" }, "metadata": { "type": "TRACK", "name": { "speech": { "type": "PLAIN_TEXT", "text": "Float On" }, "display": "float": 208000, "controls": [ { "type": "COMMAND", "name": "NEXT", "enabled": true }, { "type": "COMMAND", "name": "PREVIOUS", "enabled": false } ], "rules": { "feedbackEnabled": true }, "stream": { "id": "STREAMID_92_14629004", "uri": "", "offsetInMilliseconds": 0, "validUntil": "2018-05-10T19:11:35Z" }, "feedback": { "type": "PREFERENCE", "value": "POSITIVE" } } } }

Example ErrorResponse
If the skill cannot find the item identified in the request, it returns a media-specific": "The item does not exist" } }
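One practical consequence of the stream.validUntil field is that deciding whether a cached stream URI needs a GetItem refresh is just a timestamp comparison. The helper below is an illustrative sketch, not part of the Alexa API, and its name is invented; it relies on the fact that for fixed-format UTC timestamps like the validUntil value above ("YYYY-MM-DDTHH:MM:SSZ"), lexicographic string order coincides with chronological order.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: true when a cached stream URI should be refreshed
   via GetItem.  Both arguments are UTC timestamps in the fixed
   "YYYY-MM-DDTHH:MM:SSZ" format used by stream.validUntil; for that
   format, strcmp order equals chronological order. */
static bool
stream_needs_refresh (const char *valid_until, const char *now_utc)
{
  /* Expired (or expiring right now): issue a GetItem request. */
  return strcmp (valid_until, now_utc) <= 0;
}
```

A caller holding the example item above would compare its validUntil of 2018-05-10T19:11:35Z against the current time before resuming playback.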
https://developer.amazon.com/pt-br/docs/music-skills/api-getitem.html
Finance sample quiz questions: return on total assets, external funding, modified internal rate of return, estimated weighted average cost of capital and more...

Multiple Choice
Identify the letter of the choice that best completes the statement or answers the question.

____ 1. You are given the following data:
r* = real risk-free rate = 4%
Constant inflation premium = 7%
Maturity risk premium = 1%
Default risk premium for AAA bonds = 3%
__________.
a. 4%; 14% b. 4%; 15% c. 11%; 14% d. 11%; 15% e. 11%; 17%

____ 2. The real risk-free rate is expected to remain at 3 percent. Inflation is expected to be 3 percent this year, and 4 percent next year. The maturity risk premium is estimated to be equal to 0.1%(t - 1), where t = the maturity of a bond (in years). All Treasury securities are highly liquid, and therefore have no liquidity premium. Three-year Treasury bonds yield 0.5 percentage points (0.005) more than two-year Treasury bonds (that is, two-year bond yield plus 0.5%). What is the expected level of inflation in Year 3?
a. 4.5% b. 4.7% c. 5.0% d. 5.6% e. 6.3%

____ 3.

____ 4. Suppose you put $100 into a savings account today, the account pays a nominal annual interest rate of 6 percent, but compounded semiannually, and you withdraw $100 after 6 months. What would your ending balance be 20 years after the initial $100 deposit was made?
a. $226.20 b. $115.35 c. $62.91 d. $9.50 e. $3.00

____ 5.)?
a. $894,380 b. $813,073 c. $897,380 d. $987,118 e. $978,688

____ 6. You have just taken out a 10-year, $12,000 loan to purchase a new car. This loan is to be repaid in 120 equal end-of-month installments. If each of the monthly installments is $150, what is the effective annual interest rate on this car loan?
a. 6.5431% b. 7.8942% c. 8.6892% d. 8.8869% e. 9.0438%

____ 7. John and Barbara Roberts are starting to save for their daughter's college education.
- Assume that today's date is September 1.
- College costs are currently $10,000 a year and are expected to increase at a rate equal to 6 percent per year for the foreseeable future. All college payments are due at the beginning of the year. (So for example, college will cost $10,000 if you start now, and $10,600 if you start next September 1).
- Their daughter will enter college 15 years from now. She will be enrolled for four years. Therefore the Roberts will need to make four tuition payments. The first payment will be made on September 1 of the year she enters college (Year 15). The final payment will be made on September 1 of her last year in college (Year 18). Notice that because of rising tuition costs, the tuition payments will increase each year.
- The Roberts would also like to give their daughter a lump-sum payment of $50,000 on the September 1 after she graduates (i.e., at Year 19) one year from now (i.e., at Year 1) and the final contribution will be made on September 1 when she enters college (i.e., Year 15).

____ 8. Bates Motors has the following information for the previous year: Net income = $200; Net operating profit after taxes (NOPAT) = $300; Total assets = $1,000; and Total net operating capital = $800. The information for the current year is: Net income = $500; Net operating profit after taxes (NOPAT) = $400; Total assets = $1,300; and Total net operating capital = $900. What is the free cash flow for the current year?
a. $100 b. $200 c. $300 d. $400 e. $500

____ 9. Casey Motors recently reported the following information:
- Net income = $600,000.
- Tax rate = 40%.
- Interest expense = $200,000.
- Operating capital = $9 million.
- After-tax cost of capital = 10%.
What is the company's EVA?
a. ($300,000) b. ($180,000) c. $0 d. $200,000 e. $400,000

____ 10. You have been scouring The Wall Street Journal looking for stocks that are "good values" and have calculated the expected returns for five stocks.
Assume the risk-free rate (rRF) is 7 percent and the market risk premium (rM - rRF) is 2 percent. Which security would be the best investment? (Assume you must choose just one.)
   Expected Return   Beta
a. 9.01%             1.70
b. 7.06%             0.00
c. 5.04%            -0.67
d. 8.74%             0.87
e. 11.50%            2.50

____ 11. A money manager is managing the account of a large investor. The investor holds the following stocks:
Stock   Amount Invested   Estimated Beta
A       $2,000,000        0.80
B       5,000,000         1.10
C       3,000,000         1.40
D       5,000,000         ???
The portfolio's required rate of return is 17 percent. The risk-free rate, rRF, is 7 percent and the return on the market, rM, is 14 percent. What is Stock D's estimated beta?
a. 1.256 b. 1.389 c. 1.429 d. 2.026 e. 2.154

____ 12. You are holding a stock which has a beta of 2.0 and is currently in equilibrium. The required return on the stock is 15 percent, and the return on an average stock is 10 percent. What would be the percentage change in the return on the stock, if the return on an average stock increased by 30 percent while the risk-free rate remained unchanged?
a. +20% b. +30% c. +40% d. +50% e. +60%

____ 13. Allison Engines Corporation = $28; its last dividend was D0 = $2.20, and its expected growth rate is 6 percent. Allison can issue new common stock at a 15 percent flotation cost. What will Allison's marginal cost of equity capital (not the WACC) be if it must fund a capital budget requiring $600 in total new capital?
a. 15.8% b. 13.9% c. 7.9% d. 14.3% e. 9.7%

____ 14. Hilliard Corp. wants to calculate its weighted average cost of capital (WACC). The company's CFO has collected the following information:
- The company's long-term bonds currently offer a yield to maturity of 8 percent.
- The company's stock price is $32 per share (P0 = $32).
- The company recently paid a dividend of $2 per share (D0 = $2.00).
- The dividend is expected to grow at a constant rate of 6 percent a year (g = 6%).
- The company pays a 10 percent flotation cost whenever it issues new common stock (F = 10%).
- The company's target capital structure is 75 percent equity and 25 percent debt.
- The company's tax rate is 40 percent.
- The company anticipates issuing new common stock during the upcoming year.
What is the company's WACC?
a. 10.67% b. 11.22% c. 11.47% d. 12.02% e. 12.56%

____ 15.%

____ 16. Vanderheiden 4 years. Inflation is expected to be zero during the next 4 years. If cash inflows occur at the end of each year, and if Vanderheiden's cost of capital is 10 percent, by what amount will the better project increase Vanderheiden's value?
a. $677.69 b. $1,098.89 c. $1,179.46 d. $1,237.76 e. $1,312.31

____ 17. A company is analyzing two mutually exclusive projects, S and L, whose cash flows are shown below:
Years   0       1    2    3    4
S       -1,100  900  350  50   10
L       -1,100  0    300  500  850
The company's cost of capital is 12 percent, and it can get an unlimited amount of capital at that cost. What is the regular IRR (not MIRR) of the better project? (Hint: Note that the better project may or may not be the one with the higher IRR.)
a. 13.09% b. 12.00% c. 17.46% d. 13.88% e. 12.53%

____ 18. Mooradian Corporation estimates that its cost of capital is 11 percent. The company is considering two mutually exclusive projects whose after-tax cash flows are as follows:
        Project S   Project L
Year    Cash Flow   Cash Flow
0       -$3,000     -$9,000
1       2,500       -1,000
2       1,500       5,000
3       1,500       5,000
4       -500        5,000
What is the modified internal rate of return (MIRR) of the project with the highest NPV?
a. 11.89% b. 13.66% c. 16.01% d. 18.25% e. 20.12%

____ 19. Simmons Shoes is considering a project with the following cash flows:
Project Year   Cash Flow
0              -$700
1              400
2              -200
3              600
4              500
Simmons' WACC is 10 percent. What is the project's modified internal rate of return (MIRR)?
a. 17.10% b. 18.26% c. 25.28% d. 28.93% e. 29.52%

____ 20.
Cannon Company has enjoyed a rapid increase in sales in recent years, following a decision to sell on credit. However, the firm has noticed a recent increase in its collection period. Last year, total sales were $1 million, and $250,000 of these sales were on credit. During the year, the accounts receivable account averaged $41,096. It is expected that sales will increase in the forthcoming year by 50 percent, and, while credit sales should continue

____ 21. The)?
a. 8.0% b. 10.0% c. 12.0% d. 16.7% e. 20.0%
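As an illustration of the time-value mechanics these questions exercise, question 4 can be checked by hand. This worked example is an editorial addition, not part of the original quiz; with a 6 percent nominal annual rate compounded semiannually, the periodic rate is 3 percent per half-year:

```latex
\begin{aligned}
\text{balance at month 6:} \quad & \$100(1.03) = \$103.00 \\
\text{after the \$100 withdrawal:} \quad & \$103.00 - \$100 = \$3.00 \\
\text{half-year periods remaining to year 20:} \quad & 39 \\
\text{ending balance:} \quad & \$3.00(1.03)^{39} \approx \$3.00 \times 3.167 \approx \$9.50
\end{aligned}
```

The result matches choice d.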
https://brainmass.com/business/capital-budgeting/61182
> Git.
>
> Could we perhaps make perf record and perf top default to cycles:p on
> x86, and silently fall back to regular cycles events if the CPU does
> not support this event type? That would make our profiles more precise
> by default - without users having to do anything funky.
>
> Thanks,
>
> 	Ingo

On Wed, Jan 26, 2011 at 12:37 PM, Ingo Molnar <mi...@elte.hu> wrote:
> * Linux Kernel Mailing List <linux-...@vger.kernel.org> wrote:

Thus, in per-thread mode, I believe you, it works. In system-wide, it all depends on how the kernel is configured w.r.t. idle and what your workload does. If you know you're never going idle on any of the monitored CPUs during the measurement, then you're fine. Otherwise, you have a distortion. You can get samples from halted CPUs, likely pointing to the idle routine. If your system uses idle=poll, then you are okay. Otherwise, the problem mentioned above applies.

PEBS does not operate the same way as regular interrupt-based sampling. You need to understand what you are doing when you use it. It cannot really be used transparently, that's unfortunate.

There is more than just halted vs. unhalted. In fact, even that is more complicated than it seems. What cycles:pp (inst_retired:cmask=16:i) measures is not clearly defined. In system-wide mode, where you can go idle, it varies depending on the system you're on and the idle implementation, i.e., what mwait() does. If you try varying idle= from default (intel_idle), you'll see different results, just by counting. I don't have a good definition for the 'cycles' that are actually measured by this event.

But there are other key differences. To get a sample, you need a PEBS sample and that requires an instruction or uop to retire. PEBS records the machine state at retirement of an instruction or uop.
Sometimes, you do not retire instructions for a long time and thus the sample distribution is skewed. For instance, PEBS does not work well with rep-prefixed instructions. It is fairly easy to see the problem with a rep mov on a buffer. I have appended the test program at the end for reference. Here we copy a 25MB buffer with rep mov. First let's do the regular cycles with target average frequency of 1000Hz (default).

$ perf record -e cycles:pp -c 229579021 .: 94K cycles
#
# Overhead  Command  Shared Object      Symbol
# ........  .......  .................  ...........................
#
    98.96%   repmov  repmov             [.] main
     0.08%   repmov  [kernel.kallsyms]  [k] perf_ctx_adjust_freq
     0.06%   repmov  [kernel.kallsyms]  [k] perf_event_task_tick

The rep mov function was inlined in main(), obviously. If you were to use perf annotate, it would show for function main():

    100.00 : 40058a: f3 a5 rep movsl %ds:(%rsi),%es:(%rdi)

Which is expected. Now, let's use cycles:pp (which gets converted automagically by the kernel into inst_retired:cmask=16:i):

$ perf record -e cycles:pp .: 90K cycles
#
# Overhead  Command  Shared Object      Symbol
# ........  .......  .................  .........................
#
    97.21%   repmov  [kernel.kallsyms]  [k] apic_timer_interrupt
     1.43%   repmov  repmov             [.] main
     0.10%   repmov  [kernel.kallsyms]  [k] perf_ctx_adjust_freq

We get about the same number of samples, but the distribution is completely different. How is that possible? Which of the two modes is more precise, then? Is there one mode that is always more precise?

There are other side effects of PEBS, including some bias in the sample distribution due to the PEBS shadow effect.

In summary, I don't think allowing this trick on-the-fly without the user knowing what is going on is a good idea. I think the trick is cool and could be useful. We need to ensure we can set up the PMU for this mode but users have to be aware of what is going on to correctly interpret the profiles. I would drop that trick from the kernel.
It would still be accessible by explicitly passing inst_retired:cmask=16:i to the perf tool.

#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <err.h>

/* copy sz 32-bit words from b to a with a rep-prefixed move */
static void
doit(void *a, void *b, size_t sz)
{
	asm volatile ("cld\n\t"
		      "rep\n\t"
		      "movsl"
		      : "=c" (sz), "=S" (b), "=D" (a)
		      : "0" (sz), "1" (b), "2" (a)
		      : "memory");
}

int
main(int argc, char **argv)
{
	uint32_t *a, *b;
	uint64_t i, nloop = 20000;
	size_t sz, count = 6553600;

	if (argc > 1)
		count = strtoull(argv[1], NULL, 0);
	if (argc > 2)
		nloop = strtoull(argv[2], NULL, 0);

	sz = count * sizeof(*a);

	a = malloc(sz);
	if (!a)
		err(1, "cannot allocate");

	b = malloc(sz);
	if (!b)
		err(1, "cannot allocate");

	printf("a=%p a_e=%p total_size=%zuMB\n", a, a + count, sz >> 20);
	printf("b=%p b_e=%p total_size=%zuMB\n", b, b + count, sz >> 20);

	for (i = 0; i < nloop; i++)
		doit(a, b, count);

	free(a);
	free(b);
	return 0;
}
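For reference, the ":p" precision levels debated in this thread are requested through the precise_ip field of struct perf_event_attr. The sketch below only fills in the attribute structure; it deliberately does not call perf_event_open(), since that needs a PMU and permissions, and the helper name is ours, not a kernel or perf-tool API.

```c
#include <assert.h>
#include <string.h>
#include <linux/perf_event.h>

/* Hypothetical helper: describe a CPU-cycles event at a given precision.
   precise_ip meanings: 0 = arbitrary skid, 1 = constant skid,
   2 = requested to have 0 skid (cycles:pp), 3 = must have 0 skid.
   On Intel, nonzero levels are what make the kernel reach for PEBS. */
static struct perf_event_attr
precise_cycles_attr (unsigned int precision)
{
  struct perf_event_attr attr;

  memset (&attr, 0, sizeof (attr));
  attr.size = sizeof (attr);
  attr.type = PERF_TYPE_HARDWARE;
  attr.config = PERF_COUNT_HW_CPU_CYCLES;
  attr.precise_ip = precision & 3;   /* the field is 2 bits wide */
  attr.disabled = 1;                 /* caller enables it later */
  return attr;
}
```

A caller would pass the filled-in structure to perf_event_open(2); the kernel rejects precision levels the hardware cannot provide, which is exactly the fallback question raised at the top of the thread.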
https://groups.google.com/g/linux.kernel/c/H5lBIVAJewE/m/MSS2H2WLh64J?pli=1
ncurses.scripts.mit.edu Git - ncurses.git/blob - doc/hackguide.doc

ncurses 6.3 - patch 20220917

is (a) well-adapted for on-line browsing through viewers that are everywhere; (b) more easily readable as plain text than most other mark-ups, if you do not have a viewer; [...]

have taken these steps at the head of our queue. This means that if you do not, you will probably end up at the tail end and have to [...]

are using a traditional asynchronous terminal or PC-based terminal emulator, rather than xterm or a UNIX console entry. It is therefore extremely helpful if you can tell us whether or not your problem reproduces on other terminal types. Usually you will have [...]

will find terminfo problems at this stage by noticing that the escape sequences put out for various capabilities are wrong. If not, you are likely to learn enough to be able to characterize any bug in the screen-update logic quite exactly.

4. Report details and symptoms, not just interpretations.

If you do the preceding two steps, it is very likely that you will discover the nature of the problem yourself and be able to send us a fix. This will create happy feelings all around and earn you good karma for the first time you run into a bug you really cannot characterize and fix yourself. If you are still stuck, at least you will [...]

does not throw away any information (actually they are better than un-munched ones because they are easier to read). If your bug produces a core-dump, please include a symbolic stack trace generated by gdb(1) or your local equivalent. Tell us about every terminal on which you have reproduced the bug -- and every terminal on which you cannot.
Ideally, send us terminfo [...]

A Tour of the Ncurses Library

Library Overview

Most of the library is superstructure -- fairly trivial convenience interfaces to a small set of basic functions and data structures used to manipulate the virtual screen (in particular, none of this code does any I/O except through calls to more fundamental modules described below). The files

    lib_addch.c lib_bkgd.c lib_box.c lib_chgat.c lib_clear.c
    lib_clearok.c lib_clrbot.c lib_clreol.c lib_colorset.c lib_data.c
    lib_delch.c lib_delwin.c lib_echo.c lib_erase.c lib_gen.c
    lib_getstr.c lib_hline.c lib_immedok.c lib_inchstr.c lib_insch.c
    lib_insdel.c lib_insstr.c lib_instr.c lib_isendwin.c lib_keyname.c
    lib_leaveok.c lib_move.c lib_mvwin.c lib_overlay.c lib_pad.c
    lib_printw.c lib_redrawln.c lib_scanw.c lib_screen.c lib_scroll.c
    lib_scrollok.c lib_scrreg.c lib_set_term.c lib_slk.c
    lib_slkatr_set.c lib_slkatrof.c lib_slkatron.c lib_slkatrset.c
    lib_slkattr.c lib_slkclear.c lib_slkcolor.c lib_slkinit.c
    lib_slklab.c lib_slkrefr.c lib_slkset.c lib_slktouch.c lib_touch.c
    lib_unctrl.c lib_vline.c lib_wattroff.c lib_wattron.c lib_window.c

are all in this category. They are very unlikely to need change, barring bugs or some fundamental reorganization in the underlying data structures.

These files are used only for debugging support:

    lib_trace.c lib_traceatr.c lib_tracebits.c lib_tracechr.c
    lib_tracedmp.c lib_tracemse.c trace_buf.c

It is rather unlikely you will ever need to change these, unless you want to introduce a new debug trace level for some reason.

There is another group of files that do direct I/O via tputs(), computations on the terminal capabilities, or queries to the OS environment, but nevertheless have only fairly low complexity.
These include:

    lib_acs.c lib_beep.c lib_color.c lib_endwin.c lib_initscr.c
    lib_longname.c lib_newterm.c lib_options.c lib_termcap.c lib_ti.c
    lib_tparm.c lib_tputs.c lib_vidattr.c read_entry.c

They are likely to need revision only if ncurses is being ported to an environment without an underlying terminfo capability representation.

These files have serious hooks into the tty driver and signal facilities:

    lib_kernel.c lib_baudrate.c lib_raw.c lib_tstp.c lib_twait.c

If you run into porting snafus moving the package to another UNIX, the problem is likely to be in one of these files. The file lib_print.c uses sleep(2) and also falls in this category.

Almost all of the real work is done in the files

    hardscroll.c hashmap.c lib_addch.c lib_doupdate.c lib_getch.c
    lib_mouse.c lib_mvcur.c lib_refresh.c lib_setup.c lib_vidattr.c

Most of the algorithmic complexity in the library lives in these files. If there is a real bug in ncurses itself, it is probably here. We will tour some of these files in detail below (see The Engine Room).

Finally, there is a group of files that is actually most of the terminfo compiler. The reason this code lives in the ncurses library is to support fallback to /etc/termcap. These files include

    alloc_entry.c captoinfo.c comp_captab.c comp_error.c comp_hash.c
    comp_parse.c comp_scan.c parse_entry.c read_termcap.c write_entry.c

We will discuss these in the compiler tour.

The Engine Room

Keyboard Input

All ncurses input funnels through the function wgetch(), defined in lib_getch.c. This function is tricky; it has to poll for keyboard and mouse events and do a running match of incoming input against the set of defined special keys.
The central data structure in this module is a FIFO queue, used to match multiple-character input sequences against special-key capabilities; also to implement pushback via ungetch().

The wgetch() code distinguishes between function key sequences and the same sequences typed manually by doing a timed wait after each input character that could lead a function key sequence. If the entire sequence takes less than 1 second, it is assumed to have been generated by a function key press.

Hackers bruised by previous encounters with variant select(2) calls may find the code in lib_twait.c interesting. It deals with the problem that some BSD selects do not return a reliable time-left value. The function timed_wait() effectively simulates a System V select.

Mouse Events

If the mouse interface is active, wgetch() polls for mouse events each call, before it goes to the keyboard for input. It is up to lib_mouse.c how the polling is accomplished; it may vary for different devices.

Under xterm, however, mouse event notifications come in via the keyboard input stream. They are recognized by having the kmous capability as a prefix. This is kind of klugey, but trying to wire in recognition of a mouse key prefix without going through the function-key machinery would be just too painful, and this turns out to imply having the prefix somewhere in the function-key capabilities at terminal-type initialization.

This kluge only works because kmous is not actually used by any historic terminal type or curses implementation we know of. Best guess is it is a relic of some forgotten experiment in-house at Bell Labs that did not leave any traces in the publicly-distributed System V terminfo files. If System V or XPG4 ever gets serious about using it again, this kluge may have to change.
Here are some more details about mouse event handling:

The lib_mouse() code is logically split into a lower level that accepts event reports in a device-dependent format and an upper level that parses mouse gestures and filters events. The mediating data structure is a circular queue of event structures.

Functionally, the lower level's job is to pick up primitive events and put them on the circular queue. This can happen in one of two ways: either (a) _nc_mouse_event() detects a series of incoming mouse reports and queues them, or (b) code in lib_getch.c detects the kmous prefix in the keyboard input stream and calls _nc_mouse_inline to queue up a series of adjacent mouse reports.

In either case, _nc_mouse_parse() should be called after the series is accepted to parse the digested mouse reports (low-level events) into a gesture (a high-level or composite event).

Output and Screen Updating

With the single exception of character echoes during a wgetnstr() call (which simulates cooked-mode line editing in an ncurses window), the library normally does all its output at refresh time.

The main job is to go from the current state of the screen (as represented in the curscr window structure) to the desired new state (as represented in the newscr window structure), while doing as little I/O as possible.

The brains of this operation are the modules hashmap.c, hardscroll.c and lib_doupdate.c; the latter two use lib_mvcur.c. Essentially, what happens looks like this:

* The hashmap.c module tries to detect vertical motion changes between the real and virtual screens. This information is represented by the oldindex members in the newscr structure. These are modified by vertical-motion and clear operations, and both are re-initialized after each update.
To this change-journalling information, the hashmap code adds deductions made using a modified Heckel algorithm on hash values generated from the line contents.

* The hardscroll.c module computes an optimum set of scroll, insertion, and deletion operations to make the indices match. It calls _nc_mvcur_scrolln() in lib_mvcur.c to do those motions.

* Then lib_doupdate.c goes to work. Its job is to do line-by-line transformations of curscr lines to newscr lines. Its main tool is the routine mvcur() in lib_mvcur.c. This routine does cursor-movement optimization, attempting to get from given screen location A to given location B in the fewest output characters possible.

If you want to work on screen optimizations, you should use the fact that (in the trace-enabled version of the library) enabling the TRACE_TIMES trace level causes a report to be emitted after each screen update giving the elapsed time and a count of characters emitted during the update. You can use this to tell when an update optimization improves efficiency.

In the trace-enabled version of the library, it is also possible to disable and re-enable various optimizations at runtime by tweaking the variable _nc_optimize_enable. See the file include/curses.h.in for mask values, near the end.

The Forms and Menu Libraries

The forms and menu libraries should work reliably in any environment you can port ncurses to. The only portability issue anywhere in them is what flavor of regular expressions the built-in form field type TYPE_REGEXP will recognize.

The configuration code prefers the POSIX regex facility, modeled on System V's, but will settle for BSD regexps if the former is not available.
Historical note: the panels code was written primarily to assist in porting u386mon 2.0 (comp.sources.misc v14i001-4) to systems lacking panels support; u386mon 2.10 and beyond use it. This version has been slightly cleaned up for ncurses.

A Tour of the Terminfo Compiler

The ncurses implementation of tic is rather complex internally; it has to do a trying combination of missions. This starts with the fact that, in addition to its normal duty of compiling terminfo sources into loadable terminfo binaries, it has to be able to handle termcap syntax and compile that too into terminfo entries.

The implementation therefore starts with a table-driven, dual-mode lexical analyzer (in comp_scan.c). The lexer chooses its mode (termcap or terminfo) based on the first "," or ":" it finds in each entry. The lexer does all the work of recognizing capability names and values; the grammar above it is trivial, just "parse entries till you run out of file".

Translation of Non-use Capabilities

Translation of most things besides use capabilities is pretty straightforward. The lexical analyzer's tokenizer hands each capability name to a hash function, which drives a table lookup. The table entry yields an index which is used to look up the token type in another table, and controls interpretation of the value.

One possibly interesting aspect of the implementation is the way the compiler tables are initialized. All the tables are generated by various awk/sed/sh scripts from a master table include/Caps; these scripts actually write C initializers which are linked to the compiler. Furthermore, the hash table is generated in the same way, so it doesn't have to be generated at compiler startup time (another benefit of this organization is that the hash table can be in shareable text space).
Thus, adding a new capability is usually pretty trivial, just a matter of adding one line to the include/Caps file. We will have more to say about this in the section on Source-Form Translation.

Use Capability Resolution

The background problem that makes tic tricky is not the capability translation itself, it is the resolution of use capabilities. Older versions would not handle forward use references for this reason (that is, a using terminal always had to follow its use target in the source file). By doing this, they got away with a simple implementation tactic; compile everything as it blows by, then resolve uses from compiled entries.

This will not do for ncurses. The problem is that the whole compilation process has to be embeddable in the ncurses library so that it can be called by the startup code to translate termcap entries on the fly. The embedded version cannot go promiscuously writing everything it translates out to disk -- for one thing, it will typically be running with non-root permissions.

So our tic is designed to parse an entire terminfo file into a doubly-linked circular list of entry structures in-core, and then do use resolution in-memory before writing everything out. This design has other advantages: it makes forward and back use-references equally easy (so we get the latter for free), and it makes checking for name collisions before they are written out easy to do.

And this is exactly how the embedded version works. But the stand-alone user-accessible version of tic partly reverts to the historical strategy; it writes to disk (not keeping in core) any entry with no use references.
This is strictly a core-economy kluge, implemented because the terminfo master file is large enough that some core-poor systems swap like crazy when you compile it all in memory...there have been reports of this process taking three hours, rather than the twenty seconds or less typical on the author's development box.

So. The executable tic passes the entry-parser a hook that immediately writes out the referenced entry if it has no use capabilities. The compiler main loop refrains from adding the entry to the in-core list when this hook fires. If some other entry later needs to reference an entry that got written immediately, that is OK; the resolution code will fetch it off disk when it cannot find it in core.

Name collisions will still be detected, just not as cleanly. The write_entry() code complains before overwriting an entry that postdates the time of tic's first call to write_entry(). Thus it will complain about overwriting entries newly made during the tic run, but not about overwriting ones that predate it.

Source-Form Translation

Another use of tic is to do source translation between various termcap and terminfo formats. There are more variants out there than you might think; the ones we know about are described in the captoinfo(1) manual page.

The translation output code (dump_entry() in ncurses/dump_entry.c) is shared with the infocmp(1) utility. It takes the same internal representation used to generate the binary form and dumps it to standard output in a specified format.

The include/Caps file has a header comment describing ways you can specify source translations for nonstandard capabilities just by altering the master table. It is possible to set up capability aliasing or tell the compiler to plain ignore a given capability without writing any C code at all.
For circumstances where you need to do algorithmic translation, there are functions in parse_entry.c called after the parse of each entry that are specifically intended to encapsulate such translations. This, for example, is where the AIX box1 capability gets translated to an acsc string.

Other Utilities

The infocmp utility is just a wrapper around the same entry-dumping code used by tic for source translation. Perhaps the one interesting aspect of the code is the use of a predicate function passed in to dump_entry() to control which capabilities are dumped. This is necessary in order to handle both the ordinary de-compilation case and entry difference reporting.

The tput and clear utilities just do an entry load followed by a tputs() of a selected capability.

Style Tips for Developers

See the TO-DO file in the top-level directory of the source distribution for additions that would be particularly useful.

The prefix _nc_ should be used on library public functions that are not part of the curses API in order to prevent pollution of the application namespace. If you have to add to or modify the function prototypes in curses.h.in, read ncurses/MKlib_gen.sh first so you can avoid breaking XSI conformance. Please join the ncurses mailing list. See the INSTALL file in the top level of the distribution for details on the list.

Look for the string FIXME in source files to tag minor bugs and potential problems that could use fixing.

Do not try to auto-detect OS features in the main body of the C code. That is the job of the configuration system.

To hold down complexity, do make your code data-driven. Especially, if you can drive logic from a table filtered out of include/Caps, do it.
If you find you need to augment the data in that file in order to generate the proper table, that is still preferable to ad-hoc code -- that is why the fifth field (flags) is there.

Have fun!

Porting Hints

The following notes are intended to be a first step towards DOS and Macintosh ports of the ncurses libraries.

The following library modules are "pure curses"; they operate only on the curses internal structures, do all output through other curses calls (not including tputs() and putp()) and do not call any other UNIX routines such as signal(2) or the stdio library. Thus, they should not need to be modified for single-terminal ports.

    lib_addch.c lib_addstr.c lib_bkgd.c lib_box.c lib_clear.c
    lib_clrbot.c lib_clreol.c lib_delch.c lib_delwin.c lib_erase.c
    lib_inchstr.c lib_insch.c lib_insdel.c lib_insstr.c lib_keyname.c
    lib_move.c lib_mvwin.c lib_newwin.c lib_overlay.c lib_pad.c
    lib_printw.c lib_refresh.c lib_scanw.c lib_scroll.c lib_scrreg.c
    lib_set_term.c lib_touch.c lib_tparm.c lib_tputs.c lib_unctrl.c
    lib_window.c panel.c

This module is pure curses, but calls outstr():

    lib_getstr.c

These modules are pure curses, except that they use tputs() and putp():

    lib_beep.c lib_color.c lib_endwin.c lib_options.c lib_slk.c
    lib_vidattr.c

These modules assist in POSIX emulation on non-POSIX systems:

    sigaction.c
        signal calls

The following source files will not be needed for a single-terminal-type port.

    alloc_entry.c captoinfo.c clear.c comp_captab.c comp_error.c
    comp_hash.c comp_main.c comp_parse.c comp_scan.c dump_entry.c
    infocmp.c parse_entry.c read_entry.c tput.c write_entry.c

The following modules will use open()/read()/write()/close()/lseek() on files, but no other OS calls.
    lib_screen.c
        used to read/write screen dumps

    lib_trace.c
        used to write trace data to the logfile

Modules that would have to be modified for a port start here:

The following modules are "pure curses" but contain assumptions inappropriate for a memory-mapped port.

    lib_longname.c
        assumes there may be multiple terminals

    lib_acs.c
        assumes acs_map as a double indirection

    lib_mvcur.c
        assumes cursor moves have variable cost

    lib_termcap.c
        assumes there may be multiple terminals

    lib_ti.c
        assumes there may be multiple terminals

The following modules use UNIX-specific calls:

    lib_doupdate.c
        input checking

    lib_getch.c
        read()

    lib_initscr.c
        getenv()

    lib_newterm.c
    lib_baudrate.c
    lib_kernel.c
        various tty-manipulation and system calls

    lib_raw.c
        various tty-manipulation calls

    lib_setup.c
        various tty-manipulation calls

    lib_restart.c
        various tty-manipulation calls

    lib_tstp.c
        signal-manipulation calls

    lib_twait.c
        gettimeofday(), select().

_________________________________________________________________

Eric S. Raymond <esr@snark.thyrsus.com>

(Note: This is not the bug address!)
https://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=doc/hackguide.doc;h=1bafad2ad10bd0d012173f38d81252df61481d66;hb=HEAD
Section 8.8 Overloading Methods

Java allows us to declare several methods with the same name in a single class. To nevertheless distinguish them properly, methods are not only identified by their name, as in C, but also by their signature. Consider the invocation of a method m:

\(\mathtt{a.m}(e_1, \dots, e_n)\)

Which of the overloaded methods m could be called here? Remember that the exact method that will be called in the execution can in general only be determined at the program's run time because of overridden methods. Which of a group of overriding methods in a class hierarchy is called is decided based on the concrete (dynamic) type of a and not its static type.

A first point of consideration is the number of arguments. For the above call, only method declarations whose signature has \(n\) parameters (their arity is \(n\)) can be considered. Next, Java checks whether the static types of the arguments can be converted to the types of the method parameters. Candidate signatures where this is possible are called matching signatures. Among those, the Java compiler selects the most specific signature. Since overloaded methods are resolved at compile time, only the static types are relevant.

Definition 8.8.1. More Specific Signature. A signature \((T_1,\dots, T_n)\) is more specific than a signature \((U_1,\dots,U_n)\) if \(T_i\sqsubseteq U_i\) for all \(1\le i\le n\text{.}\) Here, \(T\sqsubseteq U\) holds if \(T\) can be converted to \(U\) according to Figure 8.2.2 or if \(T \subtyperel U\text{.}\)

Example 8.8.2. \((\mathtt{A}, \mathtt{int}, \mathtt{int})\) is more specific than \((\mathtt{A}, \mathtt{int}, \mathtt{float})\text{.}\) \((\mathtt{A})\) is more specific than \((\mathtt{B})\) if \(A\subtyperel B\) (as \(\subtyperel\) implies \(\sqsubseteq\)).

The method to be called is now selected from the set of methods whose signature is less specific than (or equally specific as) the signature of the call.
Java requires that this set of matching signatures contains one maximally specific signature, whose method is selected.

Definition 8.8.3. Maximally Specific Signature. Let \(\mathcal S\) be a set of signatures with an arity of \(n\text{.}\) \(s\in\mathcal S\) is a maximally specific signature of \(\mathcal S\) if no \(s'\in\mathcal S\setminus\{s\}\) is more specific than \(s\text{.}\)

Definition 8.8.3 does not exclude the existence of more than one maximally specific signature per set of signatures. In this case, the method call would be ambiguous and the Java compiler would reject the program with an error message.

Example 8.8.4. Consider the invocation of the method foo in the method call:

class A {
    void foo(int x, int y, float z) {}
    void foo(boolean x, float y) {}
    void foo(int x, float y) {}
    void foo(long x, int y) {}

    void call() {
        this.foo(2, 2);
    }
}

The argument types are \((\mathtt{int}, \mathtt{int})\text{.}\) foo has four declarations with the following signatures: \((\mathtt{int}, \mathtt{int}, \mathtt{float})\text{,}\) \((\mathtt{boolean}, \mathtt{float})\text{,}\) \((\mathtt{int}, \mathtt{float})\text{,}\) and \((\mathtt{long}, \mathtt{int})\text{.}\) The first one does not qualify because its arity does not match. Neither does the second one, as the types are not convertible (int does not convert to boolean). The last two do match, but both are maximally specific. Java therefore cannot select a unique declaration and reports an error. If we added a method

void foo(int x, int y) {}

its signature would be more specific than \((\mathtt{int}, \mathtt{float})\) and \((\mathtt{long}, \mathtt{int})\text{.}\) Java then could find a unique maximally specific signature.

Remark 8.8.5. For the sake of simplicity, we ignored variadic arguments (...), boxing/unboxing and type variables here. More details are in Section 15.12.2 of the Java Language Specification [12]. The relation \(\sqsubseteq\) is called Method Invocation Conversion (see Section 5.3 of the language specification).

Subsection 8.8.1 Overloading and overriding

If a method m is overloaded in a class A, we can separately override every overloaded method when we write a class inheriting from A.
Strictly speaking, we do not override m, but \(\mathtt{m}(T_1, \dots, T_n)\text{:}\) We only override one of the concrete overloaded methods. The compiler selects the overloaded method statically at compile time, while the dynamic method dispatch at run time only decides which overridden variant of the chosen overloaded method is called.

Example 8.8.6. Consider the following classes:

public class A {
    public void m(double x) {
        System.out.println("A.m(double)");
    }

    public void m(boolean x) {
        System.out.println("A.m(boolean)");
    }
}

public class B extends A {
    public void m(int x) {
        System.out.println("B.m(int)");
    }

    public void m(boolean x) {
        System.out.println("B.m(boolean)");
    }
}

To understand which method overrides which, consider the following table:

         m(double)      m(boolean)      m(int)
    A    A.m(double)    A.m(boolean)    -
    B    -              B.m(boolean)    B.m(int)

Methods in a row with the same name overload each other. A method in a column overrides the method in the same column of a row above. Hence, here only B.m(boolean) overrides the method A.m(boolean).

Consider the following program fragments that call m in different constellations:

A a = new A();
a.m(5);

prints A.m(double). Trivial, since the concrete type is equal to the static type.

A a = new B();
a.m(5);

prints A.m(double). The static type of a is A, but the concrete type here is B. However, B does not override the method A.m(double), but only overloads the methods m(double) and m(boolean) again with the parameter type int. Since the overloaded method is selected by the static type (here: A.m(double)), the method m(int) in the concrete type B is irrelevant.

B b = new B();
b.m(5);

prints B.m(int). The static type is now B and m is overloaded three times in B.
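The resolution rules of Example 8.8.6 can be checked with a small runnable variant of the classes above. The methods here return strings instead of printing, so the selected method can be inspected directly; the class bodies are adapted for this illustration:

```java
class A {
    public String m(double x)  { return "A.m(double)"; }
    public String m(boolean x) { return "A.m(boolean)"; }
}

class B extends A {
    // Overloads m with a new parameter type; does NOT override A.m(double).
    public String m(int x)     { return "B.m(int)"; }
    // Overrides A.m(boolean).
    @Override
    public String m(boolean x) { return "B.m(boolean)"; }
}

public class OverloadDemo {
    public static void main(String[] args) {
        A a = new B();
        B b = new B();
        // Overload chosen from the static type A: only m(double) matches an int.
        System.out.println(a.m(5));     // A.m(double)
        // Static type B: m(int) is the maximally specific match.
        System.out.println(b.m(5));     // B.m(int)
        // Overload m(boolean) chosen statically; B's override runs at run time.
        System.out.println(a.m(true));  // B.m(boolean)
    }
}
```

The third call is the one that combines both mechanisms: the overload is picked from the static type A, while dynamic dispatch then selects B's overriding body.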
https://prog2.de/book/sec-java-overload.html
CodePlex - Project Hosting for Open Source Software

using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using HigLabo.Net; using HigLabo.Net.Dropbox; using System...
Id #21922 | Release: None | Updated: Jun 27 at 10:58 AM by sanjeevarora777 | Created: Jun 27 at 10:58 AM by sanjeevarora777

Mime headers can be folded per RFC822. It appears the headers are not being unfolded during parsing. It is notable when receiving a file with a long file name from applications like MS Outlook. ...
Id #21921 | Release: None | Updated: Jul 3 at 3:38 AM by higty | Created: Jun 25 at 6:51 PM by tx018

Attached email is not available in the AttachmentFiles property of the HigLabo.Mime.MailMessage object. From Outlook I send an email with another email added as attachment (drag and drop). This is send...
Id #21851 | Release: None | Updated: May 26 at 3:18 AM by higty | Created: May 12 at 12:01 PM by coraln

When using HigLabo.Net. (4.0), and opening a passive data socket to the remote FTP server, the FTP library parses the IP address and port from the FTP server response to the pasv comma...
Id #21651 | Release: None | Updated: Jan 12 at 11:05 PM by higty | Created: Jan 8 at 7:07 PM by charles_culver

I am facing a problem if the attachment has special characters like / \:?"<>. If I attach a file like abc/1"?<>.pdf, it takes the file name as 1"\. Please suggest a solution for this.
Id #21290 | Release: None | Updated: Oct 30, 2014 at 9:36 AM by Nil_430 | Created: Sep 10, 2014 at 7:06 AM by Nil_430

I want to send multiple attachments, but while sending multiple attachments the same attachment name and type repeats for all attachments. Here is my code, please check it and let me know as early as possible.
Id #21261 | Release: None | Updated: Aug 26, 2014 at 7:39 AM by higty | Created: Aug 25, 2014 at 1:07 PM by Nil_430

Hello hi, First of all thanks for this great work. I have successfully done authentication and file uploading. But in between testing using different users, the remote server returned an error: (401)...
Id #19496 | Release: None | Updated: May 29, 2013 at 1:37 PM by rkshnair | Created: Mar 21, 2013 at 3:17 PM by _sreejith
http://higlabo.codeplex.com/workitem/list/basic
icetDrawFunc -- set a callback for drawing.

Synopsis

    #include <GL/ice-t.h>

    typedef void (*IceTCallback)(void);

    void icetDrawFunc(IceTCallback func);

Description

The icetDrawFunc function sets a callback that is used to draw the geometry from a given viewpoint. func should be a function that issues appropriate OpenGL calls to draw geometry in the current OpenGL context. After func is called, the image left in the back frame buffer will be read back for compositing.

func should not modify the GL_PROJECTION_MATRIX, as this would cause IceT to place image data in the wrong location in the tiled display and improperly cull geometry. It is acceptable to add transformations to GL_MODELVIEW_MATRIX, but the bounding vertices given with icetBoundingVertices or icetBoundingBox are assumed to already be transformed by any such changes to the modelview matrix. Also, GL_MODELVIEW_MATRIX must be restored before the draw function returns. Therefore, any changes to GL_MODELVIEW_MATRIX are to be done with care and should be surrounded by a pair of glPushMatrix and glPopMatrix functions.

It is also important that func not attempt to change the clear color. In some compositing modes, IceT needs to read, modify, and change the background color. These operations will be lost if func changes the background color, and severe color blending artifacts may result.

IceT may call func several times from within a call to icetDrawFrame, or not at all if the current bounds lie outside the current viewpoint. This can have a subtle but important impact on the behavior of func. For example, counting frames by incrementing a frame counter in func is obviously wrong (although you could count how many times a render occurs). func should also leave OpenGL in a state such that it will be correct for a subsequent run of func. Any matrices or attributes pushed in func should be popped before func returns, and any state that is assumed to be true on entrance to func should also be true on return.

Errors

    None.

Warnings

    None.

Bugs

    None known.
func is tightly coupled with the bounds set with icetBoundingVertices or icetBoundingBox. If the geometry drawn by func is dynamic (changes from frame to frame), then the bounds may need to be changed as well. Incorrect bounds may cause the geometry to be culled in surprising ways.DrawFrame(3), icetBoundingVertices(3), icetBoundingBox(3)
http://www.makelinux.net/man/3/I/icetDrawFunc
A few years ago, I bought a Pebble smartwatch. Mostly I bought it because it was cheap, but I also wanted to write my own apps for it. I wrote a few apps. Recently, Pebble was bought by Fitbit and ceased production of the Pebble smartwatch. However, they are making great progress toward ensuring that the watch continues to be viable into the future. Anyway, as part of the transition, I wanted to release the source code for my moon phase app. What sets this moon phase calculator apart from my first implementation is its ability to run when not connected to the phone. I implemented it as a C app. The moon phase calculation itself was originally written in JavaScript and I had to convert it to C. Sadly, I can't remember where I got the source, so I can't give attribution. The code is available on my GitHub page: pebble-moon-phase on GitHub

Here is the actual calculation for anyone searching for a moon phase calculator in C:

#include <math.h>

#define PI 3.1415926535897932384626433832795
#define RAD (PI/180.0)
#define SMALL_FLOAT (1e-12)

static double Julian(int year, int month, double day)
{
    /* Returns the number of julian days for the specified day. */
    int a, b = 0, c, e;
    if (month < 3) {
        year--;
        month += 12;
    }
    if (year > 1582 ||
        (year == 1582 && month > 10) ||
        (year == 1582 && month == 10 && day > 15)) {
        a = year/100;
        b = 2 - a + a/4;
    }
    c = 365.25*year;
    e = 30.6001*(month+1);
    return b + c + e + day + 1720994.5;
}

static double sun_position(double j)
{
    double n, x, e, l, dl, v;
    int i;
    n = 360/365.2422*j;
    i = n/360;
    n = n - i*360.0;
    x = n - 3.762863;
    if (x < 0) x += 360;
    x *= RAD;
    e = x;
    do {
        dl = e - .016718*sin(e) - x;
        e = e - dl/(1 - .016718*cos(e));
    } while (fabs(dl) >= SMALL_FLOAT);
    v = 360/PI*atan(1.01686011182*tan(e/2));
    l = v + 282.596403;
    i = l/360;
    l = l - i*360.0;
    return l;
}

static double moon_position(double j, double ls)
{
    double ms, l, mm, n, ev, sms, ae, ec;
    int i;
    /* ls = sun_position(j) */
    ms = 0.985647332099*j - 3.762863;
    if (ms < 0) ms += 360.0;
    l = 13.176396*j + 64.975464;
    i = l/360;
    l = l - i*360.0;
    if (l < 0) l += 360.0;
    mm = l - 0.1114041*j - 349.383063;
    i = mm/360;
    mm -= i*360.0;
    n = 151.950429 - 0.0529539*j;
    i = n/360;
    n -= i*360.0;
    ev = 1.2739*sin((2*(l - ls) - mm)*RAD);
    sms = sin(ms*RAD);
    ae = 0.1858*sms;
    mm += ev - ae - 0.37*sms;
    ec = 6.2886*sin(mm*RAD);
    l += ev + ec - ae + 0.214*sin(2*mm*RAD);
    l = 0.6583*sin(2*(l - ls)*RAD) + l;
    return l;
}

static double moon_phase2(int year, int month, int day, double hour, int *ip)
{
    /* Calculates, more accurately than moon_phase, the phase of the moon
       at the given epoch. Returns the moon phase as a real number (0-1). */
    double j = Julian(year, month, (double)day + hour/24.0) - 2444238.5;
    double ls = sun_position(j);
    double lm = moon_position(j, ls);
    double t = lm - ls;
    if (t < 0) t += 360;
    *ip = (int)((t + 22.5)/45) & 0x7;
    return (1.0 - cos((lm - ls)*RAD))/2;
}
https://cerkit.com/tag/development/
Style Guide¶

This style guide summarizes the code conventions used in ProbNum. It is intended as a reference for developers.

ProbNum uses Black's formatting ruleset, which can be viewed as a strict subset of PEP 8, and we recommend Black for automated code formatting. With respect to code style, the Google Python Style Guide should be applied, with some notable additions and exceptions (i.e. docstrings, …). We summarize and expand on this style guide below.

Code¶

Imports¶

Use absolute imports over relative imports. Use __all__ = [...] in __init__.py files to fix the order in which the methods are visible in the documentation. This also avoids importing unnecessary functions via from ... import * statements. Many classes and functions are "pulled up" to a higher-level namespace via __init__.py files. Import from there wherever there is no chance for confusion and/or circular imports. This makes imports more readable. When changing the namespace of classes, make sure to correct the module paths in the documentation by adding SuperClass.__module__ = "probnum.module" to the corresponding __init__.py.

Conventions¶

If imports are shortened, the following conventions should be used. Full import paths are always acceptable.

import probnum as pn
from probnum import randvars, linalg, diffeq, statespace

An exception from these rules are type-related modules, which include typing and probnum.typing. Types are always imported directly.

from typing import Optional, Callable
from probnum.typing import FloatLike

Please do not abbreviate import paths unnecessarily. We do not use the following imports:

- import probnum.random_variables as pnrv or import probnum.filtsmooth as pnfs (correct would be from probnum import randvars, filtsmooth)
- from probnum import random_variables as rvs or import probnum.random_variables as rvs (the randvars name is sufficiently short and does not need to be abbreviated)

While all of these rules obey the Google Python Style Guide, we use one import convention that deviates from this guide. If two objects (functions, classes) share the same namespace (i.e. RandomVariable and Normal are both imported via probnum.randvars, but their implementations are in different files, randvars/_randomvariable.py and randvars/_normal.py) and one object needs to be imported into the module of the other object, use relative imports. For instance, in randvars/_normal.py the import reads from ._randomvariable import RandomVariable, which helps make the code in Normal more compact and readable.

Type Standardization¶

Many types representing numeric values, shapes, dtypes, random states, etc. have different possible representations. For example, a shape could be specified in the following ways: n, (n,), (n, 1), [n], [n, 1]. For this reason most types should be standardized internally to a core set of types defined in probnum.typing, e.g. for numeric types np.generic, np.ndarray. Methods for input argument standardization can be found in probnum.utils.argutils.

Naming¶

The package itself is written "ProbNum", except when referred to as a package import, in which case probnum should be used.

Classes, Functions, Methods and Variables¶

- joined_lower for functions, methods, attributes, variables
- joined_lower or ALL_CAPS for constants
- StudlyCaps for classes
- camelCase only to conform to pre-existing conventions, e.g. in unittest

Probabilistic Numerical Methods¶

Function names and signatures of PN methods attempt to replicate numpy or scipy naming conventions. For example:

- probsolve_ivp(...) (scipy: solve_ivp(...))
- problinsolve(...) (scipy: linalg.solve(...))

Methods with "Bayesian" in the name come with the prefix bayes, e.g. bayesquad: Bayesian quadrature, BayesFilter: Bayesian filter, BayesSmoother: Bayesian smoother.

Printable Representations¶

The way an object is represented in the console or printed is defined by the following functions:

- repr(obj) is defined by obj.__repr__() and should return a developer-friendly representation of obj. If possible, this should be code that can recreate obj.
- str(obj) is defined by obj.__str__() and should return a user-friendly representation of obj. If no .__str__() method is implemented, Python will fall back to the .__repr__() method.

As an example, consider numpy's array representation

array([[1, 0],
       [0, 1]])

versus its output of str:

[[1 0]
 [0 1]]

Notational Conventions¶

Stick to the first few letters for abbreviations if they are sufficiently descriptive:

- cov: covariance
- fun: function
- mat: matrix
- vec: vector
- arr: array; wherever applicable, specify vec or mat

Further conventions are:

- unit2unit: convert between types or units, e.g. mat2arr: convert matrix to array, or s2ms: convert seconds to milliseconds. Can also be used for simple adapter methods, along the lines of filt2odefilt.
- proj: projection (if required: projmat, projvec, projlinop, …)
- precon: preconditioner
- driftmat: drift matrix, forcevec: force vector, dispmat: dispersion matrix, dynamicsmat: dynamics matrix, diffmat: diffusion matrix, plus the respective driftmatfun, driftfun, dispmatfun, etc.
- inv*: for the inverse of a matrix; e.g. invprecond, invcovmat, …
- optional arguments via **kwargs, e.g.: fun(t, x, **kwargs)
- msg: message, e.g. for raising errors and issuing warnings (errmsg, warnmsg)
- rv: random variable; if concatenated with e.g. init, abbreviate to initrv (initial random variable)
- data: data (don't abbreviate that one)
- functions/methods that do something from time t0 to time t1 with step size h use the signature (start, stop, step, **kwargs) or any corresponding subset of that. This is in line with np.arange, for instance. Use it like (start=t0, stop=t1, step=h, **kwargs).
- jacob: Jacobian; if necessary use jacobfun. Hessians are hess, respectively hessfun.
- param(s): parameter(s). If abbreviations are necessary (e.g. in inline function definitions), use par(s).
- Indices via idx (either idx1, idx2, … or idx, jdx, kdx) and not via i, j, k. The former is more readable (and follows PEP 8); the latter may collide with the built-in imaginary constant j=sqrt(-1).
- A function maps from its domain to its range. The range of a random variable is the domain of its distribution.
- Whenever a time grid is searched for the nearest neighbour of a location (e.g. used in continuous-time evaluation of a Gauss-Markov posterior), consider using np.searchsorted(..., side="left") (which is the default configuration). This is how it is done everywhere in statespace, filtsmooth, and diffeq.

Errors and Warnings¶

- Stick to the built-in Python exceptions (TypeError, NotImplementedError, …).
- If a dunder method is not implemented for a type, return NotImplemented.
- Warnings via warnings.warn().
- Recall the difference between TypeError and ValueError:
  - TypeError: Passing arguments of the wrong type (e.g. passing a list when an int is expected) should result in a TypeError. Example: float(['5']) fails, since a list cannot be converted to float.
  - ValueError: Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value. Example: The float function can take a string, i.e. float('5'), but float('string') fails since 'string' is a non-convertible string.

Package Structure¶

Modules and Folders¶

- low (shortened lower caps) for modules/folders in the namespace, e.g. probnum.linalg.linops
- lower for modules/folders not in the namespace, e.g. probnum/linalg/solvers

Rule of thumb: the more low-level the module is, the longer (more descriptive) the file name can be, because the chances that access is provided through higher-level namespaces are rather high. Interfaces to PN methods should be in a separate module, while their implementation (in classes) is in the same folder in other files.

Documentation¶

All documentation is written in American English. Every publicly visible class or function must have a docstring. Do not use extensive documentation as a crutch for spaghetti code – divide and conquer instead!
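The printable-representation convention and the TypeError/ValueError distinction above can be illustrated with a short self-contained sketch (the Point class is a made-up example, not part of ProbNum):

```python
class Point:
    """Toy class following the repr/str convention described above."""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        # Developer-friendly: code that can recreate the object.
        return f"Point(x={self.x}, y={self.y})"

    def __str__(self):
        # User-friendly representation.
        return f"({self.x}, {self.y})"


p = Point(1, 2)
print(repr(p))  # Point(x=1, y=2)
print(str(p))   # (1, 2)

# TypeError vs ValueError, as in the float() examples above:
try:
    float(["5"])        # wrong argument type -> TypeError
except TypeError:
    print("TypeError")
try:
    float("string")     # right type, non-convertible value -> ValueError
except ValueError:
    print("ValueError")
```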
https://probnum.readthedocs.io/en/v0.1.18/development/styleguide.html
The thr_create(3THR) routine is one of the most elaborate of all the Solaris threads library routines. Use thr_create(3THR) to add a new thread of control to the current process. (For POSIX threads, see pthread_create(3THR).)

THR_DAEMON — Marks the new thread as a daemon. A daemon thread is always detached (THR_DAEMON implies THR_DETACHED).

If you are not interested in the identifier of the new thread, supply a NULL value for new_thread. thr_create() returns zero when it completes successfully. Any other return value indicates that an error occurred.

Use thr_min_stack(3THR) to get the minimum stack size for a thread.

Use thr_self(3THR) to get the ID of the calling thread. (For POSIX threads, see pthread_self(3THR).)

#include <thread.h>
thread_t thr_self(void);

thr_yield(3THR) causes the current thread to yield its execution in favor of another thread with the same or greater priority; otherwise it has no effect. There is no guarantee that a thread calling thr_yield() will do so.

#include <thread.h>
void thr_yield(void);

thr_kill(3THR) sends a signal to a thread. (For POSIX threads, see pthread_kill(3THR).)

#include <thread.h>
#include <signal.h>
int thr_kill(thread_t target_thread, int sig);

Use thr_sigsetmask(3THR) to change or examine the signal mask of the calling thread.

#include <thread.h>
#include <signal.h>
int thr_sigsetmask(int how, const sigset_t *set, sigset_t *oset);

Use thr_exit(3THR) to terminate a thread. (For POSIX threads, see pthread_exit(3THR).)

#include <thread.h>
void thr_exit(void *status);

Use thr_join(3THR) to wait for a thread to terminate. (For POSIX threads, see pthread_join(3THR).)

#include <thread.h>
int thr_join(thread_t tid, thread_t *departedid, void **status);

When tid is zero in a call to Solaris thr_join(), a join will take place when any non-detached thread in the process exits. The departedid will indicate the thread ID of the exiting thread.

Except for the function names and arguments, thread-specific data is the same for Solaris as it is for POSIX. The synopses for the Solaris functions are given in this section.

thr_keycreate(3THR) allocates a key that is used to identify thread-specific data in a process. (For POSIX threads, see pthread_key_create(3THR).)

#include <thread.h>
int thr_keycreate(thread_key_t *keyp, void (*destructor) (void *value));

thr_setspecific(3THR) binds value to the thread-specific data key, key, for the calling thread. (For POSIX threads, see pthread_setspecific(3THR).)

#include <thread.h>
int thr_setspecific(thread_key_t key, void *value);

thr_getspecific(3THR) stores the current value bound to key for the calling thread into the location pointed to by valuep. (For POSIX threads, see pthread_getspecific(3THR).)

#include <thread.h>
int thr_getspecific(thread_key_t key, void **valuep);

In Solaris threads, if a thread is to be created with a priority other than that of its parent, it is created in SUSPEND mode. While suspended, the thread's priority is modified using the thr_setprio(3THR) function call; then it is continued. A higher-priority thread will receive precedence from libthread over lower-priority threads with respect to synchronization object contention.

thr_setprio(3THR) changes the priority of the thread, specified by tid, within the current process to the priority specified by newprio. (For POSIX threads, see pthread_setschedparam(3THR).)

#include <thread.h>
int thr_setprio(thread_t tid, int newprio);

By default, threads are scheduled based on fixed priorities that range from zero, the least significant, to 127, the most significant.

Use thr_getprio(3THR) to get the current priority for the thread. Each thread inherits a priority from its creator. thr_getprio() stores the current priority for the thread specified by tid in the location pointed to by newprio. (For POSIX threads, see pthread_getschedparam(3THR).)

#include <thread.h>
int thr_getprio(thread_t tid, int *newprio);
http://docs.oracle.com/cd/E19683-01/806-6867/6jfpgdcoj/index.html
The more services you have running across different clouds and Kubernetes clusters, the harder it is to ensure that you have a central place to collect service mesh observability metrics. That's one of the reasons we created Kuma, an open-source control plane for service mesh. In this tutorial, I'll show you how to set up and leverage the Traffic Metrics and Traffic Trace policies that Kuma provides out of the box. If you haven't already, install Kuma and connect a service. Observability will give you a better understanding of service behavior to increase development teams' reliability and efficiency.

Traffic Metrics

Kuma natively integrates with Prometheus for auto-service discovery and traffic metrics collection. We also integrate with Grafana dashboards for performance monitoring. You can install your own Prometheus and Grafana, or you can run kumactl install metrics | kubectl apply -f - to get up and running quickly. Running this will install Prometheus and Grafana in a new namespace called kuma-metrics. If you want to use Splunk, Logstash, or any other system, Kuma supports those as well.

Once you have your Prometheus and Grafana infrastructure up and running, you should update your default mesh to enable automatic metrics collection by exposing the built-in Prometheus endpoint. You'll need to make sure that mutualTLS is enabled first.

echo "apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  metrics:
    enabledBackend: prometheus-1
    backends:
    - name: prometheus-1
      type: prometheus
  mtls:
    enabledBackend: ca-1
    backends:
    - name: ca-1
      type: builtin
      dpCert:
        rotation:
          expiration: 1d
      conf:
        caCert:
          RSAbits: 2048
          expiration: 10y" | kubectl apply -f -

After that, you should be able to see the traffic metrics you have enabled in the GUI. Next, expose Grafana so that you can look at the default dashboards that Kuma provides. To do this, port forward Grafana from the kuma-metrics namespace.
In Grafana, there are four dashboards that you can visualize out of the box. You should see an overview of the meshes you’re running, including the data planes and the bytes flowing to Envoy. In my example, I can see all the network traffic from my sample application to Redis, including requests. I can visualize the actual control plane metrics to determine the overall performance of the control plane. This information will be helpful as you scale your service mesh technology. It’ll help you determine if your control plane is experiencing a bottleneck situation or not. My example is a standalone deployment. For multi-zone deployments, you would be able to visualize the global and remote metrics. Incorporating an Ingress Gateway What I did above is a little bit of an anti-pattern. I should not be consuming applications by port forwarding the sample application. Instead, I should use an ingress, like Kong Ingress Controller. Kuma is universal on both the data plane and the control plane. That means you can automatically deploy a sidecar proxy in Kubernetes with automatic injection. Since Kong Ingress Controller is an API gateway, it provides all sorts of features and plugins, including authentication, rate limiting, and bug detection. You can use these plugins to enhance how you want services consumed from within the mesh. Install Kong Ingress Controller To install Kong Ingress Controller inside my Kubernetes cluster, I’ll open a new terminal. There should be a new Kong namespace. To include this as part of the service mesh, you should annotate the Kong namespace with the Kuma sidecar injection. When Kuma sees this annotation on a namespace, it knows that it must inject the sidecar proxy to any service running here into that namespace. Lastly, retrigger a deployment of Kong to inject the sidecar. You should see the sidecar showing up in your API gateway data planes. With Kong Ingress Controller up and running, I’ll expose the address of my minikube. 
There is no ingress rule defined. That means Kong doesn't know how to route this request. There's no API, so Kong doesn't know how to process it. To tell Kong to process this request, I must create an ingress rule. I'll make an ingress rule that proxies the root path to my sample application.

echo "apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: kuma-demo
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/strip-path: 'true'
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: demo-app
          servicePort: 5000" | kubectl apply -f -

After refreshing, I see my sample application running through the ingress.

Traffic Trace

Injecting distributed tracing into each of your services will enable you to monitor and troubleshoot microservice behavior without introducing any dependencies to the existing application code. To capture traces between Kong and your applications, you can use Kuma's Traffic Trace policy. Kuma provides a native Jaeger and Zipkin integration, so I can run kumactl install tracing | kubectl apply -f -. This provides a helper that creates Zipkin and Jaeger automatically in a new namespace called kuma-tracing. After that's running, you can update your mesh definition with the new distributed tracing property. That allows you to determine what backend to use, how much of those requests to trace, and where to push them. If you're running these in the cloud as a managed service, you could be setting up your third-party destination for your traces.

tracing:
  defaultBackend: jaeger-collector
  backends:
  - name: jaeger-collector
    type: zipkin
    sampling: 100.0
    conf:
      url:

In the GUI, you should see that distributed tracing is enabled. Next, add the traffic trace resource that determines what services you want to trace. In this case, I want to trace them all, and I want to store those traces in the Jaeger collector backend. I'll create this resource.
echo "apiVersion: kuma.io/v1alpha1
kind: TrafficTrace
mesh: default
metadata:
  name: trace-all-traffic
spec:
  selectors:
  - match:
      kuma.io/service: '*'
  conf:
    backend: jaeger-collector" | kubectl apply -f -

Once some traffic comes through the gateway, I'll expose the tracing service to see those traces.
https://hackernoon.com/gathering-service-mesh-observability-metrics-with-kuma-2w35351p
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

Set the default product_uos_qty based on a custom field in the product form

I'm customizing Odoo 8.0 to support the sale of real-estate properties by each property's gross internal area (GIA) in square metres. I have created a custom model extending the product.template model with the following field:

'qty_gia': fields.integer('Floor Space', help="Gross internal area of property")

How do I create a function that will set the qty_gia value as the default product_uos_qty and quantity values in the sale.order.line and account.invoice.line models respectively?

I think you can write an onchange method on sale.order.line and account.invoice.line, so that while creating a sale order line or invoice line, product_uos_qty will auto-populate from the product screen:

def onchange_product(self):
    for line in self:
        line.product_uos_qty = line.product_id.qty_gia

This code is for your reference; it may not work directly. Hope this helps.
https://www.odoo.com/forum/help-1/question/set-the-default-product-uos-qty-based-on-a-custom-field-in-the-product-form-103780
Calling the onchange product_id_change() method

Hello, I'm trying to call the product_id_change method because I'd like to change the currency in every sale order line by changing date_order. Using the code below, I'm getting an error: product_id_change() takes at least 6 arguments (5 given). Which arguments should I use?

@api.onchange('date_order')
def change_currency_rate(self):
    for line in self.order_line:
        line.product_id_change()

Try to use this code:

@api.onchange('date_order')
def change_currency_rate(self):
    for line in self.order_line:
        val = product_id_change(line)

def product_id_change(self, line):
    # ----code

I know that I have to do this, but I don't know which of these arguments I have to pass. This is the def I want to call:

def product_id_change(self, cr, uid, ids, pricelist, product, qty=0, uom=False, qty_uos=0, uos=False, name='', partner_id=False, lang=False, update_tax=True, date_order=False, packaging=False, fiscal_position=False, flag=False, context=None):

I'm using models.Model in version 8.

product_id_change() takes at least 6 arguments (6 given) ?!?!?!! Odooooo whyyyyyyy
https://www.odoo.com/forum/help-1/question/calling-onchange-product-id-change-method-103908
> Mixing meta data into names is a bad idea. You don't want to change
> the name of something just because some of its meta data changes.

You're right. It's just that sometimes people may have a package name that is not agreeable to the broader community.

>> The namespace could be, zodb.
>
> No it can't. ZODB isn't a namespace package and can't be one without
> breaking backward compatibility.

Should it be a namespace package over the long term?

_______________________________________________
For more information about ZODB, see the ZODB Wiki:
ZODB-Dev mailing list - ZODB-Dev@zope.org
https://www.mail-archive.com/zodb-dev@zope.org/msg04833.html
Golang Environment Configuration

I know that in my previous post, mocking APIs in Golang, I said I would talk about testing, but I lied. To your face. I'm actually going to take a step backwards and talk a bit about the Golang environment configuration. Back when I first started to tinker with Go, I struggled getting the environment set up. Once I did get it working and was finally able to compile hello_world.go, I just accepted Go's environment as some sort of secret voodoo. This seems to be a recurring pattern, as I've encountered a few developers who have had similar experiences. And in fact, the reason I'm writing this post instead of one about testing is because a colleague approached me and asked if I could do a post about setting Go up. So here it is. You're welcome, friendo.

Environment Variables

There are several variables that may or may not need to be configured based on how you install Go. This is an incomplete list; check the golang docs for more.

- GOROOT
- GOBIN
- GOOS
- GOARCH
- GOPATH

GOROOT is the path to where the Go standard library is located on your local filesystem. Go assumes a default location of /usr/local/go, so do NOT set this if that's where you've extracted Go. If you want to install Go somewhere besides /usr/local/go, then be sure to set $GOROOT to whichever path you chose.

GOBIN is the path to where your Go binaries are installed from running go install. Only set this if you want to change it from the default location $GOPATH/bin.

GOOS is my personal favorite and specifies the operating system. Optional to define.

GOARCH is used to specify the processor architecture. Also optional.

GOPATH is the path to your workspace. It is required to be set and has no default. I usually make my $GOPATH something like /home/matt/code/go

GOPATH Structure

Once $GOPATH is defined (don't forget to add export GOPATH="/path/to/your/workspace" to your .bashrc) we need to make sure the directory $GOPATH points to has the proper structure.
Go expects 3 sub-directories:

bin/
pkg/
src/

bin/ contains executables which are generated by running go install.

pkg/ contains installed package objects. Running go get will place package objects here.

src/ is --SURPRISE-- where our source code will live. The src/ directory will contain all of our projects and should have the following structure: Source-Control-Platform/User/Repository

As an example, let's say that I have a Go project on GitHub named "BestApp". The src/ directory would look like: github.com/MattBorowiec/BestApp

Walkthrough

Download the appropriate archive for your platform.

Extract the archive.

tar -C /usr/local -xzf go1.X.X.platform-arch.tar.gz

NOTE: as mentioned earlier, if extracted to /usr/local, then GOROOT does not need to be explicitly set.

Add the Go binaries to your PATH.

export PATH="$PATH:/usr/local/go/bin"

Verify you have access to the Go binaries. Running go in your shell should now print out a list of Go commands.

Set your GOPATH.

export GOPATH="/home/matt/code/go"

Add a src/ directory inside your GOPATH.

mkdir $GOPATH/src

Add your namespace to the src/ directory.

cd $GOPATH/src
mkdir -p github.com/MattBorowiec/BestApp

NOTE: Remember to use the source-control-platform/user/repo structure.

Add a Go source code file to your namespace.

cd $GOPATH/src/github.com/MattBorowiec/BestApp/
touch best.go

Add some source code to the file:

package main

import "fmt"

func main() {
    fmt.Println("YOU ARE THE BEST!")
}

Save the file, run the code, and verify the output:

[matt@localhost BestApp]$ go run best.go
YOU ARE THE BEST!

If YOU ARE THE BEST! then we now have a fully functioning Go environment. Now that we have a Go environment set up, in my next post I'll dive a bit deeper into mocking APIs and add some tests.
https://www.metaltoad.com/blog/golang-environment-configuration
.NET Core gives you a fast and modular platform for creating applications that run on Windows, Linux, and macOS. Use Visual Studio Code with the C# extension to get a powerful editing experience with full support for C# IntelliSense (smart code completion) and debugging.

Prerequisites

- Install Visual Studio Code.
- Install the .NET Core SDK.
- Install the C# extension for Visual Studio Code. For more information about how to install extensions on Visual Studio Code, see VS Code Extension Marketplace.

Hello World

Let's get started with a simple "Hello World" program on .NET Core:

Open a project: Open Visual Studio Code. Click on the Explorer icon on the left menu and then click Open Folder. Select File > Open Folder from the main menu to open the folder you want your C# project to be in and click Select Folder. For our example, we're creating a folder for our project named HelloWorld.

Initialize a C# project: Open the Integrated Terminal from Visual Studio Code by selecting View > Integrated Terminal from the main menu. In the terminal window, type dotnet new console. This command creates a Program.cs file in your folder with a simple "Hello World" program already written, along with a C# project file named HelloWorld.csproj.

Resolve the build assets: For .NET Core 1.x, type dotnet restore. Running dotnet restore gives you access to the required .NET Core packages that are needed to build your project.

Note: Starting with the .NET Core 2.0 SDK, you don't have to run dotnet restore, because it's run implicitly by all commands that require a restore to occur. It remains a valid command for scenarios where an explicit restore makes sense, such as continuous integration builds in Azure DevOps Services or in build systems that need to explicitly control the time at which the restore occurs.

Run the "Hello World" program: Type dotnet run. You can also watch a short video tutorial for further setup help on Windows, macOS, or Linux.

Debug

Open Program.cs by clicking on it. The first time you open a C# file in Visual Studio Code, OmniSharp loads in the editor. Visual Studio Code should prompt you to add the missing assets to build and debug your app. Select Yes.
To open the Debug view, click on the Debugging icon on the left side menu. Locate the green arrow at the top of the pane. Make sure the drop-down next to it has .NET Core Launch (console) selected. Add a breakpoint to your project by clicking on the editor margin, which is the space on the left of the line numbers in the editor, next to line 9, or move the text cursor onto line 9 in the editor and press F9. To start debugging, press F5 or select the green arrow. The debugger stops execution of your program when it reaches the breakpoint you set in the previous step. - While debugging, you can view your local variables in the top left pane or use the debug console. Select the blue arrow at the top to continue debugging, or select the red square at the top to stop. Tip For more information and troubleshooting tips on .NET Core debugging with OmniSharp in Visual Studio Code, see Instructions for setting up the .NET Core debugger. Add a class To add a new class, right click in the VSCode Explorer and select New File. This adds a new file to the folder you have open in VSCode. Name your file MyClass.cs. You must save it with a .csextension at the end for it to be recognized as a csharp file. Add the code below to create your first class. Make sure to include the correct namespace so you can reference it from your Program.cs file: using System; namespace HelloWorld { public class MyClass { public string ReturnMessage() { return "Happy coding!"; } } } Call your new class from your main method in Program.cs by adding the code below: using System; namespace HelloWorld { class Program { static void Main(string[] args) { var c1 = new MyClass(); Console.WriteLine($"Hello World! {c1.ReturnMessage()}"); } } } Save your changes and run your program again. The new message should appear with the appended string. > dotnet run Hello World! Happy coding! I'm missing required assets to build and debug C# in Visual Studio Code. My debugger says "No Configuration." 
The Visual Studio Code C# extension can generate assets to build and debug for you. Visual Studio Code prompts you to generate these assets when you first open a C# project. If you didn't generate assets then, you can still run this command by opening the Command Palette (View > Command Palette) and typing ">.NET: Generate Assets for Build and Debug". Selecting this generates a .vscode folder with the launch.json and tasks.json configuration files that you need.
https://docs.microsoft.com/en-us/dotnet/core/tutorials/with-visual-studio-code?wt.mc_id=devto-blog-chnoring
Since the previous Working Draft, OWL compatibility has been updated to include OWL 2 (which is a compatible extension of OWL 1). The changes are detailed in the change log. The Working Group believes it has completed its design work for the technologies specified in this document, so this is a "Last Call" draft. The design is not expected to change significantly, going forward, and now is the key time for external review, before the implementation phase. (This is the second Last Call draft of this document.)

The Resource Description Framework RDF [RDF-Concepts] is a Web-based language for the representation and exchange of data; RDF Schema (RDFS) [RDF-Schema] and the OWL Web Ontology Language [OWL2-Syntax] are Web-based languages for representing and exchanging ontologies. This document specifies how combinations of RIF documents and RDF data and RDFS and OWL ontologies are interpreted; i.e., it specifies how RIF interoperates with RDF, RDFS, and OWL. We consider here OWL 2 [OWL2-Syntax], which is an extension of OWL 1 [OWL-Reference]. Therefore, the notions defined in this document also apply to combinations of RIF documents with OWL 1 ontologies. We consider here the RIF Basic Logic Dialect (BLD) [RIF-BLD] and RIF Core [RIF-Core], a subset of RIF BLD. The RIF Production Rule Dialect (PRD) [RIF-PRD] is an extension of RIF Core. Interoperability between RIF and RDF/OWL is only defined for the Core subset of PRD. In the remainder, when speaking about RIF documents and rules, we refer to RIF Core and BLD.

RDF data and RDFS and OWL ontologies can be represented using RDF graphs. There exist several alternative syntaxes for OWL ontologies; however, for exchange purposes it is assumed they are represented using RDF graphs. Several syntaxes have been proposed for the exchange of RDF graphs, the normative syntax being RDF/XML [RDF-Syntax].
RIF does not provide a format for exchanging RDF graphs; it is assumed that RDF graphs are exchanged using RDF/XML, or any other syntax that can be used for representing or exchanging RDF graphs.

A typical scenario for the use of RIF with RDF/OWL is the exchange of rules that use RDF data and/or RDFS or OWL ontologies: an interchange partner A has a rules language that is RDF/OWL-aware, i.e., it supports the use of RDF data, it uses an RDFS or OWL ontology, or it extends RDF(S)/OWL. A sends its rules using RIF, possibly with references to the appropriate RDF graph(s), to partner B. B receives the rules and retrieves the referenced RDF graph(s). The rules are translated to the internal rules language of B and are processed, together with the RDF graphs, using the RDF/OWL-aware rule engine of B. The use case Vocabulary Mapping for Data Integration [RIF-UCR] is an example of the interchange of RIF rules that use RDF data and RDFS ontologies.

A specialization of this scenario is the publication of RIF rules that refer to RDF graphs; publication is a special kind of interchange: one-to-many, rather than one-to-one. When a rule publisher A publishes its rules on the Web, there may be several consumers that retrieve the RIF rules and RDF graphs from the Web, translate the RIF rules to their respective rules languages, and process them together with the RDF graphs in their own rules engines. The use case Publishing Rules for Interlinked Metadata [RIF-UCR] illustrates the publication scenario.

Another specialization of the exchange scenario is the interchange of rule extensions of OWL: an interchange partner A splits its combined ontology+rules description into an OWL ontology and a RIF document, publishes the OWL ontology, and sends (or publishes) the RIF document, which includes a reference to the OWL ontology. A consumer of the rules retrieves the OWL ontology and translates the ontology and document into a combined ontology+rules description in its own rule extension of OWL.

The notation of certain symbols in RIF, particularly IRIs and plain literals, is slightly different from the notation in RDF/OWL.
These differences are illustrated in the Section Symbols in RIF Versus RDF/OWL. The RDF semantics specification [RDF-Semantics] defines four normative notions of entailment for RDF graphs: Simple, RDF, RDFS, and Datatype entailment. OWL 2 specifies two different semantics, with corresponding notions of entailment: the direct semantics [OWL2-Semantics], which specifies OWL 2 DL entailment, and the RDF-based semantics [OWL2-RDF-Based-Semantics], which specifies OWL 2 Full entailment. This document specifies the interaction between RIF and RDF/OWL for all six notions. The Section RDF Compatibility is concerned with the combination of RIF and RDF/RDFS. The combination of RIF and OWL is addressed in the Section OWL Compatibility. The semantics of the interaction between RIF and OWL 2 DL is close in spirit to [SWRL]. RIF provides a mechanism for referring to (importing) RDF graphs and a means for specifying the profile of this import, which corresponds to the intended entailment regime. The Section Importing RDF and OWL in RIF specifies how such import statements are used for representing RIF-RDF and RIF-OWL combinations. The Appendix: Embeddings (Informative) describes how reasoning with combinations of RIF rules with RDF and OWL 2 RL (a subset of OWL 2 DL) can be reduced to reasoning with RIF documents. This reduction can be seen as an implementation hint for interchange partners who do not have RDF/OWL-aware rule systems, but want to process RIF rules that import RDF graphs and OWL ontologies. In terms of the aforementioned scenario: if the interchange partner B does not have an RDF/OWL-aware rule system, but B can process RIF rules, then the appendix explains how the rule system of B could be used for processing RIF-RDF/OWL combinations. Throughout this document the following conventions are used when writing RIF and RDF statements in examples and definitions. 
Where RDF/OWL has four kinds of constants: URI references (i.e., IRIs), plain literals without language tags, plain literals with language tags, and typed literals (i.e., Unicode sequences with datatype IRIs) [RDF-Concepts], RIF has one kind of constant: Unicode sequences with symbol space IRIs [RIF-DTB]. Symbol spaces can be seen as groups of constants. Every datatype is a symbol space, but there are symbol spaces that are not datatypes. For example, the symbol space rif:iri groups all IRIs. The correspondence between constant symbols in RDF graphs and RIF documents is explained in Table 1. The shortcut syntax for IRIs and strings [RIF-DTB], used throughout this document, corresponds to the syntax for IRIs and plain literals in Turtle [Turtle], a commonly used syntax for RDF.

RIF does not have a notion corresponding exactly to RDF blank nodes. RIF local symbols, written _symbolname, have some commonality with blank nodes; like the blank node label, the name of a local symbol is not exposed outside of the document. However, in contrast to blank nodes, which are essentially existentially quantified variables, RIF local symbols are constant symbols. In many applications and deployment scenarios, this difference may be inconsequential. However, the results will differ when such symbols are used in a non-assertional context, such as in a query pattern or rule body. Finally, variables in the bodies of RIF rules or in query patterns may be existentially quantified, and are thus similar to blank nodes; however, RIF BLD does not allow existentially quantified variables to occur in rule heads.

This section specifies how a RIF document interacts with a set of RDF graphs in a RIF-RDF combination; in other words, how rules can "access" data in the RDF graphs. Consider, for example, a combination of an RDF graph that contains the triples

ex:john ex:brotherOf ex:jack .
ex:jack ex:parentOf ex:mary .
saying that ex:john is a brother of ex:jack and ex:jack is a parent of ex:mary, and a RIF document that contains the rule

Forall ?x ?y ?z (?x[ex:uncleOf -> ?z] :- And(?x[ex:brotherOf -> ?y] ?y[ex:parentOf -> ?z]))

which says that whenever some x is a brother of some y and y is a parent of some z, then x is an uncle of z. Both the RIF frame formula ex:john[ex:uncleOf -> ex:mary] and the RDF triple ex:john ex:uncleOf ex:mary are consequences of this combination.

The following example illustrates the interaction between RDF and RIF in the face of blank nodes. Consider a combination of an RDF graph that contains the triple

_:x ex:hasName "John" .

saying that there is something, denoted here by a blank node, which has the name "John", and a RIF document that contains the rules

Forall ?x ?y ( ?x[rdf:type -> ex:named] :- ?x[ex:hasName -> ?y] )
Forall ?x ?y ( <>[<> -> ?y] :- ?x[ex:hasName -> ?y] )

which say that whenever there is some x that has some name y, then x is of type ex:named and has a property with value y. From this combination the following RIF condition formulas can be derived:

Exists ?z (?z[rdf:type -> ex:named])
<>[<> -> "John"]

as can the following RDF triples:

_:y rdf:type ex:named .
<> <> "John" .

However, there is no RIF constant symbol t such that t[rdf:type -> ex:named] can be derived, because there is no constant that represents the named individual.

Note that, even when considering Simple entailment, not every combination is satisfiable. In fact, not every RIF document has a model. For example, the RIF BLD document consisting of the fact "a"="b" does not have a model, since the symbols "a" and "b" are mapped to the (distinct) character strings "a" and "b", respectively, in every semantic structure.
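The rule-based derivation in the uncle example above can be sketched as naive forward chaining over ground triples. This is an illustrative sketch only (the function and triple encoding are hypothetical, not part of the RIF specification, and it ignores the model-theoretic machinery defined below):

```python
# A minimal sketch (assumption: naive forward chaining over ground triples,
# not the RIF-BLD model theory) of how the uncle rule derives a new triple
# from the RDF data in the example above.
EX = "http://example.org/"

facts = {
    (EX + "john", EX + "brotherOf", EX + "jack"),
    (EX + "jack", EX + "parentOf", EX + "mary"),
}

def apply_uncle_rule(triples):
    """?x[ex:uncleOf -> ?z] :- And(?x[ex:brotherOf -> ?y] ?y[ex:parentOf -> ?z])"""
    derived = set(triples)
    for (x, p1, y) in triples:
        if p1 != EX + "brotherOf":
            continue
        for (y2, p2, z) in triples:
            if y2 == y and p2 == EX + "parentOf":
                # the rule head instantiated with the matching bindings
                derived.add((x, EX + "uncleOf", z))
    return derived

closure = apply_uncle_rule(facts)
```

After applying the rule, the closure contains the derived triple (ex:john, ex:uncleOf, ex:mary) in addition to the two asserted triples.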
One consequence of the difference of the alphabets of RDF and RIF is that IRIs of the form and typed literals of the form ""^^rif:iri that occur in an RDF graph are treated the same in RIF-RDF combinations, even if the RIF document is empty. However, documents importing RDF graphs containing typed literals of the form ""^^rif:iri must be rejected. Plain literals without language tags of the form "mystring" and typed literals of the form "mystring"^^xs:string also correspond. For example, consider the combination of an empty RIF document and an RDF graph that contains the triple

<> <> "abc" .

This combination entails, among other things, the following frame formula:

<>[<> -> "abc"^^xs:string]

as well as the following triple:

<> <> "abc"^^xs:string .

These entailments are sanctioned by the semantics of plain literals and xs:strings.

Lists in RDF (also called collections) have a natural correspondence to RIF lists. For example, the RDF list

_:l1 rdf:first ex:b .
_:l1 rdf:rest rdf:nil .

corresponds to the RIF list List(ex:b). And so, the combination of the empty RIF document with the RDF graph

ex:a ex:p _:l1 .
_:l1 rdf:first ex:b .
_:l1 rdf:rest rdf:nil .

entails the formula ex:a[ex:p -> List(ex:b)]. Likewise, the combination of the empty RDF graph with the RIF fact ex:p(List(ex:a)) entails the triples

_:l1 rdf:first ex:a .
_:l1 rdf:rest rdf:nil .

as well as the formula Exists ?x (And(ex:p(?x) ?x[rdf:first -> ex:a] ?x[rdf:rest -> rdf:nil])).

Editor's Note: This last example shows there is a 1-to-1 correspondence between RIF and RDF lists. Note that the 1-to-1 correspondence (condition 10 in common-RIF-RDF-interpretations) is at risk. The previous example shows that RIF lists extend RDF lists (condition 9 in common-RIF-RDF-interpretations); this feature is not at risk.

The remainder of this section formally defines combinations of RIF rules with RDF graphs and the semantics of such combinations.
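The syntactic side of the list correspondence above can be sketched in a few lines: walking the rdf:first/rdf:rest links of a collection from its root node down to rdf:nil recovers the items of the corresponding RIF list. This is an illustrative sketch with hypothetical helper names, not part of the specification:

```python
# Illustrative sketch of the RDF collection / RIF list correspondence:
# the graph below encodes the RDF list whose RIF counterpart is List(ex:b).
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

graph = {
    ("ex:a", "ex:p", "_:l1"),
    ("_:l1", RDF + "first", "ex:b"),
    ("_:l1", RDF + "rest", RDF + "nil"),
}

def collection_to_list(node, triples):
    """Collect the items of a well-formed RDF collection rooted at `node`."""
    items = []
    while node != RDF + "nil":
        firsts = [o for (s, p, o) in triples if s == node and p == RDF + "first"]
        rests = [o for (s, p, o) in triples if s == node and p == RDF + "rest"]
        if len(firsts) != 1 or len(rests) != 1:
            # an RDF graph need not encode a well-formed linked list
            raise ValueError("not a well-formed RDF collection")
        items.append(firsts[0])
        node = rests[0]
    return items
```

Here collection_to_list("_:l1", graph) yields ["ex:b"], the item sequence of the RIF list List(ex:b).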
A combination consists of a RIF document and a set of RDF graphs. In this section, RIF-RDF combinations are formally defined. The section concludes with a review of definitions related to datatypes and typed literals.

An RDF vocabulary V consists of the following sets of names: In addition, there is an infinite set of blank nodes, which is disjoint from the sets of names. See RDF Concepts and Abstract Syntax [RDF-Concepts] for precise definitions of these concepts.

Definition. Given an RDF vocabulary V, a generalized RDF triple of V is a statement of the form s p o, where s, p and o are names in V or blank nodes. ☐

Definition. Given an RDF vocabulary V, a generalized RDF graph is a set of generalized RDF triples of V. ☐ (See the End note on generalized RDF graphs)

A RIF-RDF combination consists of a RIF document and zero or more RDF graphs. Formally:

Definition. A RIF-RDF combination is a pair < R,S>, where R is a RIF document and S is a set of generalized RDF graphs of a vocabulary V. ☐

When clear from the context, RIF-RDF combinations are referred to simply as combinations.

To facilitate discussing datatypes, and specifically datatypes supported in specific contexts (required for D-entailment), the notion of datatype maps [RDF-Semantics] is used. A datatype map is a partial mapping from IRIs to datatypes. RDFS, specifically D-entailment, allows the use of arbitrary datatype maps, as long as rdf:XMLLiteral is in the domain of the map. RIF BLD requires a number of additional datatypes to be included; these are the RIF-required datatypes [RIF-DTB]. When checking consistency of a combination < R,S> or entailment of a graph S or RIF formula φ by a combination < R,S>, the set of considered datatypes is the union of the set of RIF-required datatypes and the sets of datatypes referenced in R, the documents imported into R, and φ (when considering entailment of φ).

Definition. Let DTS be a set of datatypes.
A datatype map D is conforming with DTS if it satisfies the following conditions: Note that it follows from the definition that every datatype used in the RIF document in the combination or the entailed RIF formula (when considering entailment questions) is included in any datatype map conforming to the set of considered datatypes. There may be datatypes used in an RDF graph in the combination that are not included in such a datatype map.

Definition. Given a datatype map D, a typed literal (s, d) is a well-typed literal if

The semantics of RIF-RDF combinations is defined through a combination of the RIF and RDF model theories, using a notion of common models. These models are then used to define satisfiability and entailment in the usual way. Combined entailment extends both entailment in RIF and entailment in RDF. The RDF Semantics document [RDF-Semantics] defines four normative kinds of interpretations, as well as corresponding notions of satisfiability and entailment: Those four types of interpretations are reflected in the definitions of satisfaction and entailment in this section.

This section defines the notion of common-RIF-RDF-interpretation, which is an interpretation of a RIF-RDF combination. This common-RIF-RDF-interpretation is the basis for the definitions of satisfaction and entailment in the following sections. The correspondence between RIF semantic structures (interpretations) and RDF interpretations is defined through a number of conditions that relate the interpretations of corresponding symbols and formulas in the two languages, taking into account the differences in notation (cf. the Section Symbols in RIF Versus RDF/OWL). The notions of RDF interpretation and RIF semantic structure (interpretation) are briefly reviewed below.
As defined in [RDF-Semantics], a Simple interpretation of a vocabulary V is a tuple I=< IR, IP, IEXT, IS, IL, LV >, where RDF-, RDFS-, and D-interpretations are Simple interpretations that satisfy certain conditions: As defined in [RIF-BLD], a semantic structure I is a tuple of the form <TV, DTS, D, Dind, Dfunc, IC, IV, IF, INF, Ilist, Itail, Iframe, Isub, Iisa, I=, Iexternal, Itruth>. The specification of RIF-RDF compatibility is only concerned with DTS, D, IC, IV, Ilist, Itail, Iframe, Isub, Iisa, and Itruth. The other mappings that are parts of a semantic structure are not used in the definition of combinations. Recall that Const is the set of constant symbols and Var is the set of variable symbols in RIF.

For the purpose of the interpretation of imported documents, RIF BLD defines the notion of semantic multi-structures, which are nonempty sets of semantic structures of the form {J,I; Ii1, Ii2, ...} that differ only in the interpretation of local constants. The structure I in the above is used to interpret document formulas, and will be used to specify RIF combinations.

Syntactically speaking, an RDF list is a set of triples of the form

i1 rdf:first d1 .
i1 rdf:rest i2 .
...
in rdf:first dn .
in rdf:rest rdf:nil .

Here, i1 ... in provide the structure of the linked list and d1 ... dn are the items. The above list would be written in RIF syntax as List(d1 ... dn).

Given an RDF interpretation I=< IR, IP, IEXT, IS, IL, LV >, we say that an element l1 ∈ IR refers to an RDF list (y1,...,yn) if l1=IS(rdf:nil), in case n=0, or if there are elements l2, ..., ln ∈ IR such that (li, yi) ∈ IEXT(IS(rdf:first)) for 1≤i≤n, (li, li+1) ∈ IEXT(IS(rdf:rest)) for 1≤i≤n-1, and (ln, IS(rdf:nil)) ∈ IEXT(IS(rdf:rest)). Note that an element l1 may refer to more than one RDF list, since there is no restriction, in general, on the rdf:first elements and the rdf:rest successors.

Definition.
A common-RIF-RDF-interpretation is a pair (Î, I), where Î is a semantic multi-structure of the form {J,I; Ii1, Ii2, ...}, and I is an RDF interpretation of a vocabulary V, such that the following conditions hold: Condition 1 ensures that the combination of resources and properties corresponds exactly to the RIF domain; note that if I is an RDF-, RDFS-, or D-interpretation, IP is a subset of IR, and thus IR=Dind. Condition 2 ensures that the set of RDF properties at least includes all elements that are used as properties in frames in the RIF domain. Condition 3 ensures that all concrete values in Dind are included in LV (by definition, the value spaces of all considered datatypes are included in Dind). Condition 4 ensures that RDF triples are interpreted in the same way as frame formulas. Condition 5 ensures that IRIs are interpreted in the same way. Condition 6 ensures that typed literals are interpreted in the same way. Note that no correspondences are defined for the mapping of names in RDF that are not symbols of RIF, e.g., ill-typed literals and RDF URI references that are not absolute IRIs. Condition 7 ensures that typing in RDF and typing in RIF correspond, i.e., a rdf:type b is true iff a # b is true. Condition 8 ensures that whenever a RIF subclass statement holds, the corresponding RDF subclass statement holds as well, i.e., a rdfs:subClassOf b is true if a ## b is true. Finally, condition 9 requires the existence of an RIF list for every RDF list and condition 10 in addition requires the existence of an RDF list for every RIF list. Feature At Risk #1: 1-to-1 lists Note: This feature is "at risk" and may be removed from this specification based on feedback. Please send feedback to public-rif-comments@w3.org. Condition 10 (which implies condition 9), which ensures a one-to-one correspondence between RDF and RIF lists, is AT RISK and may be removed based on implementation experience. 
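The semantic side of the list correspondence — the "refers to an RDF list" condition reviewed above — can also be sketched. The sketch below is simplified (it treats rdf:nil as referring only to the empty list and bounds list length to guarantee termination); first_ext and rest_ext play the roles of the pair sets IEXT(IS(rdf:first)) and IEXT(IS(rdf:rest)):

```python
# A simplified sketch of the "refers to an RDF list" condition. Because a
# Simple interpretation places no restriction on the number of rdf:first
# elements and rdf:rest successors of an element, one element may refer to
# several distinct lists.

def referred_lists(l1, nil, first_ext, rest_ext, max_len=8):
    """All lists (y1, ..., yn), n <= max_len, that l1 refers to."""
    if l1 == nil:
        return [()]          # the empty list, n = 0
    if max_len == 0:
        return []
    results = []
    for (l, y) in first_ext:
        if l != l1:
            continue
        for (l2_src, l2) in rest_ext:
            if l2_src != l1:
                continue
            for tail in referred_lists(l2, nil, first_ext, rest_ext, max_len - 1):
                results.append((y,) + tail)
    return results

# An element with two rdf:first successors refers to two one-element lists.
first_ext = {("l1", "a"), ("l1", "b")}
rest_ext = {("l1", "nil")}
```

With this data, referred_lists("l1", "nil", first_ext, rest_ext) returns both ("a",) and ("b",), illustrating why conditions 9 and 10 speak of the existence of corresponding lists rather than a functional mapping.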
The notion of satisfiability refers to the conditions under which a common-RIF-RDF-interpretation (Î, I) is a model of a combination < R, S>. The notion of satisfiability is defined for all four entailment regimes of RDF (i.e., Simple, RDF, RDFS, and D). The definitions are all analogous. Intuitively, a common-RIF-RDF-interpretation (Î, I) satisfies a combination < R, S> if Î is a model of R and I satisfies S. Formally:

Definition. A common-RIF-RDF-interpretation (Î, I) satisfies a RIF-RDF combination C=< R, S > if Î is a model of R and I satisfies every RDF graph S in S; in this case (Î, I) is called a Simple-model, or model, of C, and C is satisfiable. (Î, I) satisfies a generalized RDF graph S if I satisfies S. (Î, I) satisfies a condition formula φ if TValÎ(φ)=t. ☐

RDF-, RDFS-, and D-satisfiability are defined through additional restrictions on I:

Definition. A model (Î, I) of a combination C is an RDF-model of C if I is an RDF-interpretation; in this case C is RDF-satisfiable. A model (Î, I) of a combination C is an RDFS-model of C if I is an RDFS-interpretation; in this case C is RDFS-satisfiable. Let (Î, I) be a model of a combination C and let D be a datatype map conforming with the set DTS of datatypes in I. (Î, I) is a D-model of C if I is a D-interpretation; in this case C is D-satisfiable. ☐

Using the notions of models defined above, entailment is defined in the usual way, i.e., through inclusion of sets of models.

Definition. Let C be a RIF-RDF combination, let S be a generalized RDF graph, let φ be a condition formula, and let D be a datatype map conforming with the set of considered datatypes. C D-entails S if every D-model of C satisfies S. Likewise, C D-entails φ if every D-model of C satisfies φ. ☐

The other notions of entailment (Simple-, RDF-, and RDFS-entailment) are defined analogously.
☐ Note that simple entailment in combination with an empty ruleset is not the same as simple entailment in RDF, since certain entailments involving datatypes are enforced by the RIF semantics in combinations, cf. the example involving strings and plain literals above. This section specifies how a RIF document interacts with a set of OWL ontologies in a RIF-OWL combination. The semantics of combinations is defined for OWL 2 [OWL2-Syntax]. Since OWL 2 is an extension of OWL 1 [OWL-Reference], the specification in this section applies also to combinations of RIF documents with OWL 1 ontologies. OWL 2 specifies two different variants of the language, each having its own semantics, namely OWL 2 DL [OWL2-Syntax] (called simply OWL 2 in the specification) and OWL 2 Full [OWL2-RDF-Based-Semantics]. OWL 1 Lite and OWL 1 DL [OWL-Reference], which are sublanguages of OWL 1, can be seen as syntactical subsets of OWL 2 DL. The key difference between OWL 2 DL and OWL 2 Full is that the semantics of OWL 2 DL [OWL2-Semantics] is based on standard Description Logic semantics, whereas the semantics of OWL 2 Full is an extension of the RDFS semantics. The syntax of OWL 2 is defined in terms of a structural specification, and there is a mapping to an RDF representation for interchange. The RDF representation of OWL 2 DL does not extend the RDF syntax, but rather restricts it: every OWL 2 DL ontology in RDF form is an RDF graph, but not every RDF graph is an OWL 2 DL ontology. OWL 2 Full and RDF have the same syntax: every RDF graph is an OWL 2 Full ontology and vice versa. This syntactical difference is reflected in the definition of RIF-OWL compatibility: combinations of RIF with OWL 2 DL are based on the OWL 2 structural model, whereas combinations with OWL 2 Full are based on the RDF syntax. 
Since the OWL 2 Full syntax is the same as the RDF syntax and the OWL 2 Full semantics is an extension of the RDF semantics, the definition of RIF-OWL 2 Full compatibility is an extension of RIF-RDF compatibility. However, defining RIF-OWL DL compatibility in the same way would entail losing certain semantic properties of OWL 2 DL. One of the main reasons for this is the difference in the way classes and properties are interpreted in OWL 2 Full and OWL 2 DL. In the Full variant, classes and properties are interpreted as objects in the domain of interpretation, which are then associated with subsets of, respectively binary relations over, the domain of interpretation, using the rdf:type property and the extension function IEXT, as in RDF. In the DL variant, classes and properties are directly interpreted as subsets of, respectively binary relations over, the domain. This is a key property of the first-order logic nature of Description Logic semantics and enables the use of Description Logic reasoning techniques for processing OWL 2 DL descriptions. Defining RIF-OWL DL compatibility as an extension of RIF-RDF compatibility would define a correspondence between OWL 2 DL statements and RIF frame formulas. Since RIF frame formulas are interpreted using an extension function, as in RDF, defining the correspondence between them and OWL 2 DL statements would change the semantics of OWL statements, even if the RIF document were empty. A RIF-OWL combination that is faithful to the first-order nature of the OWL 2 DL semantics requires interpreting classes and properties as sets and binary relations, respectively, suggesting that a correspondence could be defined with unary and binary predicates. It is, however, also desirable that there be uniform syntax for the RIF component of both RIF-OWL 2 DL and RIF-RDF/OWL 2 Full combinations, because one may not know at the time of constructing the rules which type of inference will be used.
Consider, for example, an RDF graph S consisting of the following statements

_:x rdf:type owl:Ontology .
a rdf:type C .

and a RIF document with the rule

Forall ?x (?x[rdf:type -> D] :- ?x[rdf:type -> C])

The combination of the two, according to the specification of RDF Compatibility, allows deriving the triple

a rdf:type D .

Now, the RDF graph S is also an OWL 2 DL ontology. Therefore, one would expect the triple to be implied according to the semantics of RIF-OWL DL combinations as well. To ensure that the RIF-OWL DL combination is faithful to the OWL 2 DL semantics and to enable using the same, or similar, RIF rules in combinations with both OWL 2 DL and RDF/OWL 2 Full, the interpretation of frame formulas s[p -> o] in RIF-OWL DL combinations is slightly different from their interpretation in RIF, and syntactical restrictions are imposed on the use of variables and function terms in frame formulas.

The remainder of this section formally defines combinations of RIF rules with OWL 2 DL and OWL 2 Full ontologies and the semantics of such combinations. A combination consists of a RIF document and a set of OWL ontologies. The semantics of combinations is defined in terms of combined models, which are pairs of RIF semantic multi-structures and OWL 2 DL, respectively OWL 2 Full, interpretations. The interaction between the structures and interpretations is defined through a number of conditions. Entailment is defined as model inclusion, as usual.

Since RDF graphs and OWL 2 Full ontologies cannot be distinguished, the syntax of RIF-OWL 2 Full combinations is the same as the syntax of RIF-RDF combinations. The syntax of OWL ontologies in RIF-OWL DL combinations is given by the structural specification of OWL 2 and the restrictions on OWL 2 DL ontologies [OWL2-Syntax]. Certain restrictions are imposed on the syntax of the RIF rules in combinations with OWL 2 DL. Specifically, the only terms allowed in class and property positions in frame formulas are constant symbols.
A DL-frame formula is a frame formula a[b1 -> c1 ... bn -> cn] such that n≥1 and for every bi, with 1≤i≤n, it holds that bi is a constant symbol and if bi = rdf:type, then ci is a constant symbol.

Definition. A condition formula φ is a DL-condition if every frame formula in φ is a DL-frame formula. A RIF-BLD document formula R is a RIF-BLD DL-document formula if every frame formula in R is a DL-frame formula. A RIF-OWL DL-combination is a pair < R,O>, where R is a RIF-BLD DL-document formula and O is a set of OWL 2 DL ontologies of a vocabulary V over an OWL 2 datatype map D. ☐

When clear from the context, RIF-OWL DL-combinations are referred to simply as combinations. In the literature, several restrictions on the use of variables in combinations of rules and Description Logics have been identified [Motik05, Rosati06] for the purpose of decidable reasoning. This section specifies such safeness restrictions for RIF-OWL 2 DL combinations.

Given a set of OWL 2 DL ontologies O, a variable ?x in a RIF rule Q (H :- B) is DL-safe if it occurs in an atomic formula in B that is not of the form s[P -> o] or s[rdf:type -> A], where P or A occurs in one of the ontologies in O. A disjunction-free RIF rule Q (H :- B) is DL-safe, given O, if every variable that occurs in H :- B is DL-safe. A disjunction-free RIF rule Q (H :- B) is weakly DL-safe, given O, if every variable that occurs in H is DL-safe.

Definition. A RIF-OWL DL-combination <R,O> is DL-safe if every rule in R is DL-safe, given O. A RIF-OWL DL-combination <R,O> is weakly DL-safe if every rule in R is weakly DL-safe, given O. ☐

Feature At Risk #2: Safeness

Note: This feature is "at risk" and may be removed from this specification based on feedback. Please send feedback to public-rif-comments@w3.org. The above definition of DL-safeness is intended to identify a fragment of RIF-OWL DL combinations for which reasoning is decidable.
This definition should be considered AT RISK and may become stricter based on implementation experience.

Compared with RDF and RIF, OWL 2 uses a slightly extended notion of datatype. In the remainder of this section, a datatype d contains, in addition to the lexical space, value space, and lexical-to-value mapping, a facet space, which is a set of pairs of the form (F, v), where F is an IRI and v is a data value, and a facet-to-value mapping, which is a mapping from facets to subsets of the value space of d. An OWL 2 datatype map D is a datatype map that maps the IRIs of the datatypes specified in Section 4 of [OWL2-Syntax] to the corresponding datatypes such that the domain of D does not include rdfs:Literal. We note here that the definitions of datatype and datatype map in the OWL 2 direct semantics specification [OWL2-Semantics] are somewhat different. There, a datatype is some entity with some associated IRIs, and the datatype map assigns lexical, value, and facet spaces, as well as lexical-to-value and facet-to-value mappings. The definitions of datatype and datatype map we use are isomorphic, and, indeed, the same as in the OWL 2 RDF-based semantics specification [OWL2-RDF-Based-Semantics]. The latter does not preclude the use of rdfs:Literal in datatype maps. Note that we do not restrict the use of rdfs:Literal in OWL 2 ontologies or RDF graphs.

The semantics of RIF-OWL 2 Full combinations is a straightforward extension of the Semantics of RIF-RDF Combinations. The semantics of RIF-OWL 2 DL combinations cannot straightforwardly extend the semantics of RIF-RDF combinations, because OWL 2 DL does not extend the RDF semantics. In order to keep the syntax of the rules uniform between RIF-OWL 2 Full and RIF-OWL DL combinations, the semantics of RIF frame formulas is slightly altered in RIF-OWL DL combinations.
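As an illustration of the DL-safeness condition defined earlier, a variable is DL-safe if it occurs in a body atom whose predicate (or rdf:type class) is not drawn from the OWL ontologies. The sketch below uses a hypothetical encoding (atoms as frame triples, variables as strings starting with "?") and is not part of the specification:

```python
# Illustrative check for DL-safeness: a variable is DL-safe if it occurs in
# a body atom that is not of the form s[P -> o] or s[rdf:type -> A] with
# P or A taken from the OWL ontologies.
RDF_TYPE = "rdf:type"

def dl_safe_vars(body, ontology_names):
    """Variables made DL-safe by the body atoms; atoms are (s, p, o) triples."""
    safe = set()
    for (s, p, o) in body:
        is_dl_atom = p in ontology_names or (p == RDF_TYPE and o in ontology_names)
        if not is_dl_atom:
            safe |= {t for t in (s, p, o) if t.startswith("?")}
    return safe

def rule_is_dl_safe(head, body, ontology_names):
    """A disjunction-free rule is DL-safe if every one of its variables is DL-safe."""
    all_vars = {t for (s, p, o) in head + body for t in (s, p, o) if t.startswith("?")}
    return all_vars <= dl_safe_vars(body, ontology_names)
```

For example, with ontology vocabulary {"ex:C"}, the rule Forall ?x ?y (?x[rdf:type -> ex:D] :- And(?x[rdf:type -> ex:C] ?x[ex:q -> ?y])) is DL-safe, because ?x and ?y both occur in the non-DL atom ?x[ex:q -> ?y]; dropping that atom makes the rule unsafe.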
Given an OWL 2 datatype map D and a vocabulary V that includes the domain of D and the OWL 2 Full vocabulary and facet names, a D-interpretation I is an OWL 2 RDF-Based Interpretation of V with respect to D if it satisfies the semantic conditions in Section 5 of [OWL2-RDF-Based-Semantics]. The semantics of RIF-OWL 2 Full combinations is a straightforward extension of the semantics of RIF-RDF combinations. It is based on the same notion of common interpretations, but defines additional notions of satisfiability and entailment.

Definition. Let (Î, I) be a common-RIF-RDF-interpretation that is a model of a RIF-RDF combination C=< R, S > and let D be an OWL 2 datatype map conforming with the set of datatypes in I. (Î, I) is an OWL Full-model of C if I is an OWL 2 RDF-Based Interpretation with respect to D; in this case C is OWL Full-satisfiable with respect to D. Let C be a RIF-RDF combination, let S be a generalized RDF graph, let φ be a condition formula, and let D be an OWL 2 datatype map conforming with the set of considered datatypes. C OWL Full-entails S with respect to D if every OWL Full-model of C satisfies S. Likewise, C OWL Full-entails φ with respect to D if every OWL Full-model of C satisfies φ. ☐

The semantics of RIF-OWL DL-combinations is similar in spirit to the semantics of RIF-RDF combinations. Analogous to common-RIF-RDF-interpretations, there is the notion of common-RIF-OWL DL-interpretations, which are pairs of RIF and OWL 2 DL interpretations, and which define a number of conditions that relate these interpretations to each other. The modification of the semantics of RIF frame formulas is achieved by modifying the mapping function for frame formulas (Iframe), and leaving the RIF BLD semantics [RIF-BLD] otherwise unchanged.
Namely, frame formulas of the form s[rdf:type -> o] are interpreted as membership of s in the set denoted by o, and frame formulas of the form s[p -> o], where p is not rdf:type, as membership of the pair (s, o) in the binary relation denoted by p. Definition. A dl-semantic structure is a tuple I = <TV, DTS, D, Dind, Dfunc, IC, IV, IF, Iframe, INF, Isub, Iisa, I=, Iexternal, Itruth>, where Dl-semantic multi-structures are defined analogously to RIF-BLD semantic multi-structures [RIF-BLD]. Formally, a dl-semantic multi-structure Î is a set of dl-semantic structures {J,I; Ii1, Ii2, ...}, where all the structures in Î (adorned and non-adorned) are identical in all respects except for the following: The notation I(φ), for any other formula or symbol φ, and the truth valuation function TValÎ are defined as in BLD semantic structures. Definition. A dl-semantic multi-structure Î is a model of a RIF-BLD DL-document formula R if TValÎ(R)=t. ☐ As defined in [OWL2-Semantics], an interpretation for a vocabulary V over a datatype map D is a tuple I=< IR, LV, C, OP, DP, I, DT, LT, FA >, where The OWL semantics imposes a number of further restrictions on the mapping functions to ensure the interpretation of datatypes, literals, and facets conforms with the given datatype map D and to define the semantics of built-in classes and properties (e.g., owl:Thing). The mappings DT, LT, and FA are essentially given by the datatype map. Definition. Given a vocabulary V over an OWL 2 datatype map D, a common-RIF-OWL DL-interpretation for V over D is a pair (Î, I), where Î is a dl-semantic multi-structure of the form {J,I; Ii1, Ii2, ...}, and I is an interpretation for V over D, such that the following conditions hold. Condition 2 ensures that the relevant parts of the domains of interpretation are the same. Condition 3 ensures that the interpretation (extension) of an OWL class identified by an IRI u corresponds to the interpretation of frames of the form ?x[rdf:type -> <u>].
Conditions 4 and 5 ensure that the interpretation (extension) of an OWL object or data property identified by an IRI u corresponds to the interpretation of frames of the form ?x[<u> -> ?y]. Finally, condition 6 ensures that individual identifiers in the OWL ontologies and the RIF documents are interpreted in the same way. Using the definition of common-RIF-OWL DL-interpretation, satisfaction, models, and entailment are defined in the usual way: Definition. A common-RIF-OWL DL-interpretation (Î, I) for a vocabulary V over an OWL 2 datatype map D is an OWL DL-model of a RIF-OWL DL-combination C=< R, O > if Î is a model of R and I is a model of every ontology in O; in this case C is OWL DL-satisfiable for V over D. (Î, I) is an OWL DL-model of an OWL 2 ontology O if I is a model of O. (Î, I) is an OWL DL-model of a DL-condition formula φ if TValÎ(φ)=t. Let C be a RIF-OWL DL-combination, let O be an OWL 2 ontology, let φ be a DL-condition formula, let D be an OWL 2 datatype map conforming with the set of considered datatypes, and let V be a vocabulary over D for every ontology in C and for O. C OWL DL-entails O with respect to D if every common-RIF-OWL DL-interpretation for V over D that is an OWL DL-model of C is an OWL DL-model of O. Likewise, C OWL DL-entails φ with respect to D if every common-RIF-OWL DL-interpretation for V over D that is an OWL DL-model of C is an OWL DL-model of φ. ☐ Example. In OWL 2 DL, the domains for interpreting individuals and literals (data values), respectively, are disjoint. This disjointness entails that data values cannot be members of a class and individuals cannot be members of a datatype. RIF does not make such distinctions; variable quantification ranges over the entire domain. So, the same variable may be assigned to an abstract individual or a concrete data value.
Additionally, RIF constants (e.g., IRIs) denoting individuals can be written in place of a data value, such as the value of a data-valued property or in datatype membership statements; similarly for constants denoting data values. Such statements cannot be satisfied in any common-RIF-OWL DL-interpretation. The following example illustrates several such statements. Consider the datatype xs:string and a RIF-OWL DL combination consisting of the set containing only the OWL DL ontology ex:myiri rdf:type ex:A . and a RIF document containing the following fact ex:myiri[rdf:type -> xs:string] This combination is not OWL DL-satisfiable, because ex:myiri is an individual identifier, individual identifiers are interpreted as elements of the object domain, and the object domain is disjoint from the value space of the datatype xs:string. Consider a RIF-OWL DL combination consisting of the set containing only the OWL 2 DL ontology ex:hasChild rdf:type owl:ObjectProperty . and a RIF document containing the following fact ex:myiri[ex:hasChild -> "John"] This combination is not OWL DL-satisfiable, because ex:hasChild is an object property, and values of object properties may not be concrete data values. Consider a RIF-OWL DL combination consisting of the OWL DL ontology SubClassOf(ex:A ex:B) and a RIF document containing the following rule Forall ?x (?x[rdf:type -> ex:A]) This combination is not OWL DL-satisfiable, because the rule requires every element, including every concrete data value, to be a member of the class ex:A. However, every OWL interpretation requires every member of ex:A to be an element of the object domain, and concrete data values cannot be members of the object domain. ☐ In the preceding sections, RIF-RDF combinations and RIF-OWL combinations were defined in an abstract way, as pairs consisting of a RIF document and a set of RDF graphs/OWL ontologies. In addition, different semantics were specified based on the various RDF and OWL entailment regimes.
RIF provides a mechanism for explicitly referring to (importing) RDF graphs from documents and specifying the intended profile (entailment regime) through the use of Import statements. This section specifies how RIF documents with such import statements must be interpreted. A RIF document may contain a number of Import statements. Unary Import statements are used for importing RIF documents, and the interpretation of these statements is defined in Section 3.5 of [RIF-BLD]. This section defines the interpretation of binary Import statements: Import(<t1> <p1>) ... Import(<tn> <pn>) Here, ti is an absolute IRI referring to an RDF graph to be imported and pi is an absolute IRI denoting the profile to be used for the import. The profile determines which notions of model, satisfiability, and entailment must be used. For example, if a RIF document R imports an RDF graph S with the profile RDFS, the notions of RDFS-model, RDFS-satisfiability, and RDFS-entailment must be used for the combination <R, {S}>. Profiles are ordered as specified in Section 5.1.1. If several graphs are imported in a document, and these imports specify different profiles, the highest of these profiles is used. For example, if a RIF document R imports an RDF graph S1 with the profile RDF and an RDF graph S2 with the profile OWL Full, the notions of OWL Full-model, OWL Full-satisfiability, and OWL Full-entailment must be used with the combination <R, {S1, S2}>. Finally, if a RIF document R imports an RDF graph S with the profile OWL DL, R must be a RIF-BLD DL-document formula, S must be the RDF representation of an OWL 2 ontology O, and the notions of OWL DL-model, OWL DL-satisfiability, and OWL DL-entailment must be used with the combination <R, {O}>. RIF defines specific profiles for the different notions of model, satisfiability, and entailment of combinations, as well as one generic profile. The use of a specific profile specifies how a combination should be interpreted.
If a specific profile cannot be handled by a receiver, the combination should be rejected. The use of a generic profile implies that a receiver may interpret the combination to the best of its ability. The use of profiles is not restricted to the profiles specified in this document. Any specific profile that is used with RIF must specify an IRI that identifies it, as well as associated notions of model, satisfiability, and entailment for combinations. The following table lists the specific profiles defined by RIF, the IRIs of these profiles, and the notions of model, satisfiability, and entailment that must be used with the profile. Profiles that are defined for combinations of DL-document formulas and OWL ontologies in abstract syntax form are called DL profiles. Of the mentioned profiles, the profile OWL DL is a DL profile. The profiles are ordered as follows, where '<' reads "is lower than": Simple < RDF < RDFS < D < OWL Full, and OWL DL < OWL Full. RIF specifies one generic profile. The use of the generic profile does not imply the use of a specific notion of model, satisfiability, and entailment. Let R be a RIF document such that Import(<u1> <p1>) ... Import(<un> <pn>) are all the binary import statements in R and the documents imported into R, and let Profile be the set of profiles corresponding to the IRIs p1,...,pn. If pi, 1 ≤ i ≤ n, corresponds to a DL profile and ui refers to an RDF graph that is not the RDF representation of an OWL (2) DL ontology, the document should be rejected. If ui, 1 ≤ i ≤ n, refers to an RDF graph that uses a typed literal of the form "s"^^rif:iri or "s"^^rdf:PlainLiteral, the document must be rejected. If Profile contains only specific profiles, then the combination C=<R,{S1,....,Sn}>, where S1,....,Sn are the RDF graphs referred to by u1,...,un, must be interpreted according to the highest of the profiles in Profile. If Profile contains a generic profile, then the combination C=<R,{S1,....,Sn}> may be interpreted according to the highest among the specific profiles in Profile, if there is one.
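As an illustration of the profile-resolution rule just described, the following Python sketch computes the highest among a set of imported profiles, returning nothing when no single highest profile exists. The function name and the data representation are assumptions of this sketch, not part of RIF.

```python
# Hedged sketch: HIGHER transcribes the profile ordering given in this
# section (Simple < RDF < RDFS < D < OWL Full, and OWL DL < OWL Full).
# HIGHER[p] is the set of profiles strictly higher than p.
HIGHER = {
    "Simple": {"RDF", "RDFS", "D", "OWL Full"},
    "RDF": {"RDFS", "D", "OWL Full"},
    "RDFS": {"D", "OWL Full"},
    "D": {"OWL Full"},
    "OWL DL": {"OWL Full"},
    "OWL Full": set(),
}

def resolve_profile(imported_profiles):
    """Return the highest profile among the imports, or None when the set
    has no single highest element (e.g. {"D", "OWL DL"})."""
    for p in imported_profiles:
        others = set(imported_profiles) - {p}
        # p is the highest iff p is strictly higher than every other profile
        if all(p in HIGHER[q] for q in others):
            return p
    return None
```

For instance, a document importing one graph with profile RDF and another with profile OWL Full resolves to OWL Full, matching the example earlier in this section.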
We define notions of conformance for RIF-RDF and RIF-OWL combinations. We define these notions both for the RIF Core [RIF-Core] and RIF BLD [RIF-BLD] dialects. Conformance is described in terms of semantics-preserving transformations between the native syntax of a compliant processor and the XML syntax of RIF Core and BLD. We say that an RDF graph S is a standard RDF graph if for every triple s p o in S, s is an IRI or blank node, p is an IRI, and o is an IRI, literal, or blank node. A combination < R, S > is standard if every graph in S is standard. Each RIF processor has sets Τ, of supported datatypes and symbol spaces that include the symbol spaces listed in [RIF-DTB], and Ε, of supported external terms that include the built-ins listed in [RIF-DTB]. The datatype map of a RIF processor is the smallest datatype map conforming with the set of datatypes in Τ. Now, let P ∈ {Simple, RDF, RDFS, D, OWL Full} be a specific RDF profile. A RIF-RDF combination C=< R, S > is a BLDΤ,Ε-P combination if R is a BLDΤ,Ε formula and C is a CoreΤ,Ε-P combination if R is a CoreΤ,Ε formula. A RIF-OWL DL-combination C=< R, O > is a BLDΤ,Ε-OWL DL combination if R is a BLDΤ,Ε formula and C is a CoreΤ,Ε-OWL DL combination if R is a CoreΤ,Ε formula. A RIF processor is a conformant BLDΤ,Ε-P consumer, for P ∈ {Simple, RDF, RDFS, D, OWL DL, OWL Full}, iff it implements a semantics-preserving mapping, μ, from the set of standard BLDΤ,Ε-P combinations, standard RDF graphs, OWL 2 ontologies, and BLDΤ,Ε formulas to the language L of the processor (μ does not need to be an "onto" mapping) and, in case P ∈ {OWL DL, OWL Full}, its datatype map is an OWL 2 datatype map. We say that a RIF document R is list-safe if R is safe and it contains no occurrences of rdf:first, rdf:rest, or rdf:nil in rule consequents. An RDF graph S is list-safe if it contains no occurrences of rdf:first or rdf:rest outside of the property positions, it contains no occurrences of rdf:nil outside of triples of the form ... 
rdf:rest rdf:nil, and there are no two triples s rdf:first o1 . s rdf:first o2 . or s rdf:rest o1 . s rdf:rest o2 . in S, where s, o1, o2 are RDF terms and o1≠o2. A combination < R, S > is list-safe if R is list-safe and the merge of the graphs in S is list-safe. A RIF processor is a conformant CoreΤ,Ε-P consumer, for P ∈ {Simple, RDF, RDFS, D, OWL DL, OWL Full}, iff it implements a semantics-preserving mapping, μ, from the set of standard list-safe CoreΤ,Ε-P combinations, standard RDF graphs, OWL 2 ontologies, and CoreΤ,Ε formulas to the language L of the processor (μ does not need to be an "onto" mapping) and, in case P ∈ {OWL DL, OWL Full}, its datatype map is an OWL 2 datatype map. Formally, this means that for any pair (φ, ψ), where φ is a BLDΤ,Ε-P combination and ψ is an RDF graph, OWL 2 ontology, or BLDΤ,Ε formula such that φ |=P ψ is defined, φ |=P ψ iff μ(φ) |=L μ(ψ). Here |=P denotes P-entailment and |=L denotes the logical entailment in the language L of the RIF processor. A RIF processor is a conformant BLDΤ,Ε-P producer iff it implements a semantics-preserving mapping, ν, from the language L of the processor to the set of all BLDΤ,Ε formulas, RDF graphs, OWL 2 ontologies, and BLDΤ,Ε-P combinations (ν does not need to be an "onto" mapping). A RIF processor is a conformant CoreΤ,Ε-P producer iff it implements a semantics-preserving mapping, ν, from the language L of the processor to the set of all CoreΤ,Ε formulas, RDF graphs, OWL 2 ontologies, and CoreΤ,Ε-P combinations (ν does not need to be an "onto" mapping). Formally, this means that for any pair (φ, ψ) of formulas in L such that φ |=L ψ is defined, φ |=L ψ iff ν(φ) |=P ν(ψ). Here |=P denotes P-entailment and |=L denotes the logical entailment in the language L of the RIF processor. This document is the product of the Rules Interchange Format (RIF) Working Group (see below), the members of which deserve recognition for their time and commitment to RIF. 
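The list-safety conditions on RDF graphs defined above are purely syntactic and can be checked triple by triple. The following Python sketch illustrates one way to do so; the triple representation (tuples of strings with prefixed names) and the function name are assumptions of this sketch.

```python
# Hedged sketch of the list-safety check on an RDF graph, represented as a
# set of (subject, predicate, object) string triples.
RDF_FIRST, RDF_REST, RDF_NIL = "rdf:first", "rdf:rest", "rdf:nil"

def graph_is_list_safe(triples):
    first_vals, rest_vals = {}, {}
    for s, p, o in triples:
        # rdf:first and rdf:rest may occur only in the property position
        if s in (RDF_FIRST, RDF_REST) or o in (RDF_FIRST, RDF_REST):
            return False
        # rdf:nil may occur only as the object of an rdf:rest triple
        if s == RDF_NIL or p == RDF_NIL or (o == RDF_NIL and p != RDF_REST):
            return False
        # no subject may carry two distinct rdf:first (or rdf:rest) values
        if p == RDF_FIRST and first_vals.setdefault(s, o) != o:
            return False
        if p == RDF_REST and rest_vals.setdefault(s, o) != o:
            return False
    return True
```

A well-formed list tail such as `_:l rdf:first ex:a . _:l rdf:rest rdf:nil .` passes the check, while a node with two distinct rdf:first values does not.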
The editors extend special thanks to Mike Dean, Michael Kifer, and Stella Mitchell. The following members of the joint RIF-OWL task force contributed to the OWL Compatibility section in this document: Mike Dean, Peter F. Patel-Schneider, and Ulrike Sattler. RIF-RDF combinations can be embedded into RIF documents in a fairly straightforward way, thereby demonstrating how a RIF-compliant translator without native support for RDF can process RIF-RDF combinations. RIF-OWL combinations cannot, in the general case, be embedded in RIF. However, there is a subset of OWL 2 DL, namely the OWL 2 RL profile [OWL2-Profiles], for which RIF-OWL combinations can be embedded. The embeddings are defined using an embedding function tr that maps symbols, triples, and RDF graphs/OWL ontologies to RIF symbols, statements, and documents, respectively. To embed consistency checking in RDF(S) and OWL, we use a special 0-ary predicate symbol rif:error, which is assumed not to be used in the RIF documents in the combination. Besides the namespace prefixes defined in the Overview, the following namespace prefix is used in this appendix: pred refers to the RIF namespace for built-in predicates [RIF-DTB]. To facilitate the definition of the embeddings we define the notion of a merge of RIF formulas. Definition. Let R={R1,...,Rn} be a set of document, group, and rule formulas, such that there are no prefix or base directives, or relative IRIs, in R, and let directive11, ..., directivenm be all the import directives occurring in document formulas in R. The merge of R, denoted merge(R), is defined as Document(directive11 ... directivenm Group(R*1 ...
R*n)), where R*i is obtained from Ri in the following way: Note that the requirement that no prefix or base directives, or relative IRIs, are included in any of the formulas to be merged is not a limitation, since compact IRIs can be rewritten to absolute IRIs, as can relative IRIs, by exploiting prefix and base directives, and the location of the document. RIF-RDF combinations are embedded by combining the RIF rules with embeddings of the RDF graphs and an axiomatization of Simple, RDF, and RDFS entailment. The embedding is not defined for combinations that include infinite RDF graphs and for combinations that include RDF graphs with RDF URI references that are not absolute IRIs (see the End note on RDF URI references) or plain literals that are not in the lexical space of the xs:string datatype [XML-Schema2]. Also, the embedding is not defined for RDF lists. We define a list-free combination as a combination that does not contain any mention of the symbols rdf:first, rdf:rest, or rdf:nil. In the remainder of this section we first define the embedding of symbols, triples, and graphs, after which we define the axiomatization of Simple, RDF, and RDFS entailment of combinations and, finally, demonstrate faithfulness of the embeddings. Given a combination C=< R,S>, the function tr maps RDF symbols of a vocabulary V and a set of blank nodes B to RIF symbols, as defined in the following table. It is assumed that the vocabulary V includes all the IRIs and literals used in the RIF documents and condition formulas under consideration. In the table, the mapping tr' is an injective function that maps typed literals to new constants in the rif:local symbol space, where a new constant is a constant that is not used in the document or its vicinity (i.e., entailed formula or entailing combination). It "generates" a new constant from a typed literal. This section extends the mapping function tr to triples and defines two embedding functions for RDF graphs.
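As an illustration, the injective mapping tr' from typed literals to fresh rif:local constants could be realized as follows. The naming scheme for generated constants is an assumption of this sketch; the definition above only requires injectivity and that generated constants be new, i.e., unused in the document or its vicinity.

```python
# Hedged sketch of tr': same typed literal -> same fresh rif:local constant,
# distinct typed literals -> distinct constants, and no generated constant
# collides with a constant already in use.
import itertools

class LocalConstantGenerator:
    def __init__(self, used_constants):
        self.used = set(used_constants)   # constants already in the document/vicinity
        self.cache = {}                   # injectivity: same literal -> same constant
        self.counter = itertools.count()

    def tr_prime(self, lexical_form, datatype_iri):
        key = (lexical_form, datatype_iri)
        if key not in self.cache:
            while True:
                candidate = f'"lit{next(self.counter)}"^^rif:local'
                if candidate not in self.used:
                    break
            self.used.add(candidate)
            self.cache[key] = candidate
        return self.cache[key]
```

Caching on the pair (lexical form, datatype IRI) keeps the mapping a function, while skipping constants already in `used` keeps the generated constants new.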
In one embedding (trR), graphs are embedded as RIF documents and variables (originating from blank nodes) are skolemized, i.e., replaced with new constant symbols. In the other embedding (trQ), graphs are embedded as condition formulas and variables (originating from blank nodes) are existentially quantified. The following sections show how these embeddings can be used for reasoning with combinations. For skolemization we assume a function sk that takes as argument a formula φ and returns a formula φ' that is obtained from φ by replacing every variable symbol ?x with <new-iri>, where new-iri is a new globally unique IRI, i.e., an IRI that does not occur in the graph or its vicinity (i.e., entailing combination or entailed graph/formula). The semantics of the RDF vocabulary does not need to be axiomatized for Simple entailment. Nonetheless, the connection between RIF class membership and subclass statements and the RDF type and subclass statements needs axiomatization. We define: The following theorem shows how checking Simple entailment of combinations can be reduced to checking entailment of RIF conditions by using the embeddings of RDF graphs defined above. Theorem A list-free RIF-RDF combination C=<R,{S1,...,Sn}> Simple-entails a generalized RDF graph T if and only if merge({R, RSimple, trR(S1), ..., trR(Sn)}) entails trQ(T); C Simple-entails a condition formula φ if and only if merge({R, RSimple, trR(S1), ..., trR(Sn)}) entails φ. Proof. We prove both directions through contraposition. We first consider condition formulas (the second part of the theorem), after which we consider graphs (the first part of the theorem). In the proof we abbreviate merge({R, RSimple, trR(S1), ..., trR(Sn)}) with R'. (=>) Assume R' does not entail φ. This means there is some semantic multi-structure Î that is a model of R', but not of φ.
Consider the pair (Î, I), where I is the interpretation defined as follows: - IR is Dind, - IP is the set of all k in Dind such that there exist some a, b in Dind and Itruth(Iframe(a)(k,b))=t, - LV is the union of the value spaces of all considered datatypes, - IEXT(k) is the set of all pairs (a, b), with a, b, and k in Dind, such that Itruth(Iframe(a)(k,b))=t, - IS(i) is IC(<i>), for every absolute IRI i in VU, and - IL((s, d)) is IC(tr("s"^^d)), for every typed literal (s, d) in VTL. Clearly, (Î, I) is a common-RIF-RDF-interpretation: conditions 1-6 in the definition are satisfied by construction of I and conditions 7 and 8 are satisfied by condition 4 and by the fact that Î is a model of RSimple. Consider the new IRIs uj=sk(?xj) introduced by skolemization. Now, let A be a mapping from blank nodes to elements in Dind such that A(xj)=IC(uj) for every blank node xj in Si. From the fact that Î is a model of trR(Si) and by construction of I it follows that [I+A] satisfies Si (see Section 1.5 of [RDF-Semantics]), and so I satisfies Si. We have that Î is a model of R, by assumption. So, (Î, I) satisfies C. Again, by assumption, I is not a model of φ. Therefore, C does not entail φ. Assume now that R' does not entail trQ(T), which means there is a semantic multi-structure Î that is a model of R', but not of trQ(T). The common-RIF-RDF-interpretation (Î, I) is obtained in the same way as above, and so it satisfies C. We proceed by contradiction. Assume I satisfies T. This means there is some mapping A from the blank nodes x1,...,xm in T to objects in Dind such that [I+A] satisfies T. Consider now the semantic multi-structure Î*, which is the same as Î, with the exception of the mapping I*V on the variables ?x1,...,?xm, which is defined as follows: I*V(?xj)=A(xj) for each blank node xj in T.
By construction of I and since [I+A] satisfies T we can conclude that Î* is a model of And(tr(t1)... tr(tm)), and so Î is a model of trQ(T), violating the assumption that it is not. Therefore, (Î, I) does not satisfy T and C does not entail T. (<=) Assume C does not Simple-entail φ. This means there is some common-RIF-RDF-interpretation (Î, I) that satisfies C such that I is not a model of φ. Consider the semantic multi-structure Î', which is like Î, except for the mapping I'C on the new IRIs that were introduced by the skolemization mapping sk(). The mapping of these new IRIs is defined as follows: for each new IRI uj=sk(?xj): since I satisfies Si, there must be a mapping A from blank nodes to elements in Dind such that [I+A] satisfies Si; we define I'C(uj)=A(xj) for every blank node xj in Si. By assumption, Î' is a model of R (recall that Î' differs from Î only on the new IRIs, which are not in R). Clearly, Î' is also a model of RSimple, by conditions 7, 8, and 4 in the definition of common-RIF-RDF-interpretation. From the fact that I satisfies Si and by construction of Î' it follows that Î' is a model of trR(Si). So, Î' is a model of R'. Since I is not a model of φ and φ does not contain any of the new IRIs, Î' is not a model of φ. Therefore, R' does not entail φ. Assume now that C does not entail T, which means there is a common-RIF-RDF-interpretation (Î, I) that satisfies C, but I does not satisfy T. We obtain Î' from Î in the same way as above, and so it satisfies R'. It can be shown, analogously to the (=>) direction, that if Î' is a model of trQ(T), then there is a blank node mapping A such that [I+A] satisfies T, and thus I satisfies T, violating the assumption that it does not. Therefore, Î' is not a model of trQ(T) and thus R' does not entail trQ(T). ☐ Theorem A list-free RIF-RDF combination <R,{S1,...,Sn}> is satisfiable iff there is a semantic multi-structure Î that is a model of merge({R, RSimple, trR(S1), ..., trR(Sn)}). Proof.
The theorem follows immediately from the previous theorem and the observation that a combination (respectively, RIF document) is satisfiable (respectively, has a model) if and only if it does not entail the condition formula "a"="b". ☐ We axiomatize the semantics of the RDF vocabulary using the following RIF rules. To finitely embed RDF entailment, we need to consider a subset of the RDF axiomatic triples. Given a combination C, the context of C includes C and its vicinity (i.e., all graphs/formulas considered for entailment checking). The set of RDF finite-axiomatic triples is the smallest set such that: We assume that the unary predicate symbols ex:wellxml and ex:illxml are not used in the context of the given combination. Here, inconsistencies may occur if non-well-typed XML literals, axiomatized using the ex:illxml predicate, are in the class extension of rdf:XMLLiteral. If this situation occurs, rif:error is derived, which signifies an inconsistency in the combination. Theorem An RDF-satisfiable list-free RIF-RDF combination C=<R,{S1,...,Sn}> RDF-entails a generalized RDF graph T iff merge({RRDF, R, trR(S1), ..., trR(Sn)}) entails trQ(T). C RDF-entails a condition formula φ iff merge({RRDF, R, trR(S1), ..., trR(Sn)}) entails φ. Proof. In the proof we abbreviate merge({RRDF, R, trR(S1), ..., trR(Sn)}) with R'. The proof is obtained from the proof of correspondence for Simple entailment in the previous section with the following modifications: (*) in the (=>) direction we additionally need to ensure that I does not satisfy rif:error, extend I to ensure it satisfies the RDF axiomatic triples, and show that I is an RDF-interpretation, and (**) in the (<=) direction we need to slightly extend the definition of I' to account for ex:wellxml and ex:illxml, and show that I' is a model of RRDF.
(*) We assume that, for every non-well-typed literal of the form (s, rdf:XMLLiteral) in VTL, IC(tr("s"^^rdf:XMLLiteral)) is not in the value space of any of the considered datatypes and tr("s"^^rdf:XMLLiteral)[rdf:type -> rdf:XMLLiteral] is not satisfied in I. Since C is RDF-satisfiable, one can verify that this does not compromise satisfaction of R'. Finally, we may assume, without loss of generality, that I does not satisfy rif:error. See also the proof of the following theorem. For any positive integer j such that rdf:_j does not occur in the context of C, Î and I are extended such that IS(rdf:_j)=IC(rdf:_j)=IC(rdf:_m) (see the definition of finite-axiomatic triples above for the definition of m). Clearly, this does not affect satisfaction of R' or non-satisfaction of φ and trQ(T). To show that I is an RDF-interpretation, we need to show that I satisfies the RDF axiomatic triples and the RDF semantic conditions. Satisfaction of the axiomatic triples follows immediately from the inclusion of tr(t) in RRDF for every RDF finite-axiomatic triple t, the fact that Î is a model of RRDF, and construction of I. Consider the three RDF semantic conditions: Satisfaction of condition 1 follows from satisfaction of the first rule in RRDF in I and construction of I; specifically, the second bullet in the definition. Consider a well-typed XML literal "xxx"^^rdf:XMLLiteral. By the definition of satisfaction in RIF BLD, IC("xxx"^^rdf:XMLLiteral) is the XML value of xxx (condition 2a), and is clearly in LV (condition 2b), by definition of I. Condition 2c is satisfied by satisfaction of the second rule in RRDF in I. Satisfaction of 3a and 3b follows straightforwardly from our assumptions on I. This establishes the fact that I is an RDF-interpretation. (**) Recall that, by assumption, ex:wellxml and ex:illxml are not used in R. Therefore, changing satisfaction of atomic formulas involving ex:wellxml and ex:illxml does not affect satisfaction of R.
We assume that I'C(ex:wellxml)=k and I'C(ex:illxml)=l are distinct unique elements, i.e., no other constant is mapped to k or l. We define I'F(k) and I'F(l) as follows: For every typed literal of the form (s, rdf:XMLLiteral) such that I'C(tr("s"^^rdf:XMLLiteral))=u, if (s, rdf:XMLLiteral) is well-typed, Itruth(I'F(k)(u))=t and Itruth(I'F(l)(u))=f, otherwise Itruth(I'F(k)(u))=f and Itruth(I'F(l)(u))=t; Itruth(I'F(k)(v))=Itruth(I'F(l)(v))=f for every other object v in Dind. Consider RRDF. Satisfaction of RSimple was established in the proof in the previous section. Satisfaction of the facts corresponding to the RDF axiomatic triples in I' follows immediately from the definition of common-RIF-RDF-interpretation and the fact that I is an RDF-interpretation, and thus satisfies all RDF axiomatic triples. Satisfaction of the ex:wellxml and ex:illxml facts in RRDF follows immediately from the definition of I'. Finally, satisfaction of the rules in RRDF follows straightforwardly from the RDF semantic conditions 1, 2, and 3. This establishes the fact that I' is a model of RRDF. ☐ Theorem A list-free RIF-RDF combination <R,{S1,...,Sn}> is RDF-satisfiable iff merge({RRDF, R, trR(S1), ..., trR(Sn)}) does not entail rif:error. Proof. Recall that we assume rif:error does not occur in R. If <R,{S1,...,Sn}> is not RDF-satisfiable, then either merge({R, trR(S1), ..., trR(Sn)}) is not consistent, or condition 3a or 3b (see previous proof) is violated. In either case, rif:error is entailed. If rif:error is entailed, either merge({RRDF, R, trR(S1), ..., trR(Sn)}) is inconsistent, which means merge({R, trR(S1), ..., trR(Sn)}) is not consistent and thus <R,{S1,...,Sn}> is not RDF-satisfiable, or the body of the second or third rule in RRDF is satisfied in every model, which means either condition 3a or 3b is violated, and so <R,{S1,...,Sn}> is not RDF-satisfiable. ☐ We axiomatize the semantics of the RDF(S) vocabulary using the following RIF rules.
As in the RDF case, the set of RDFS finite-axiomatic triples is the smallest set such that: We assume that the unary predicate symbol ex:welllit is not used in the context of the given combination. In the following theorems it is assumed that, in combinations C=<R,{S1,...,Sn}>, R does not have mentions of rdfs:Resource, S1,...,Sn do not have mentions of rdfs:Resource beyond triples of the form xxx rdf:type rdfs:Resource, and entailed graphs T and formulas φ do not have mentions of rdfs:Resource. Theorem An RDFS-satisfiable list-free RIF-RDF combination C=<R,{S1,...,Sn}> RDFS-entails a generalized RDF graph T if and only if merge({R, RRDFS, trR(S1), ..., trR(Sn)}) entails trQ(T); C RDFS-entails a condition formula φ if and only if merge({R, RRDFS, trR(S1), ..., trR(Sn)}) entails φ. Theorem A list-free RIF-RDF combination <R,{S1,...,Sn}> is RDFS-satisfiable if and only if merge({R, RRDFS, trR(S1), ..., trR(Sn)}) does not entail rif:error. It is known that expressive Description Logic languages such as OWL 2 DL cannot be straightforwardly embedded into typical rules languages such as RIF BLD [RIF-BLD], because of features such as disjunction and negation. In this section we consider a subset of OWL 2 DL in RIF-OWL DL combinations, namely, the OWL 2 RL profile [OWL2-Profiles], and show how reasoning with RIF-OWL 2 RL combinations can be reduced to reasoning with RIF. The embedding of RIF-OWL 2 RL combinations is not defined for combinations that include infinite OWL ontologies and for combinations that include ontologies with RDF URI references that are not absolute IRIs or plain literals that are not in the lexical space of the xs:string datatype. In addition, the guard predicate pred:is-dt [RIF-DTB] must be defined for each datatype used in the combination, where dt is the short name of the datatype.
Since OWL 2 RL includes equality through ObjectMaxCardinality and DataMaxCardinality restrictions, as well as FunctionalObjectProperty, InverseFunctionalObjectProperty, SameIndividual, and HasKey axioms, and there is non-trivial interaction between such equality and the predicates in the RIF rules in the combination, embedding RIF-OWL 2 RL combinations into RIF requires equality. Therefore, the embedding presented in this appendix is not in RIF Core, even if the RIF document in the combination is. If the ontologies in the combination do not contain any of the mentioned constructs, the embedding is in Core. Also, it is well-known that adding equality to a rules language does not increase its expressiveness in the absence of function symbols: one can replace equality = with a new binary predicate symbol, and add rules for reflexivity and the principle of substitutivity (also called the replacement property). Recall that the semantics of frame formulas in DL-document formulas is different from the semantics of frame formulas in RIF documents. Nonetheless, DL-document formulas can be embedded into RIF documents, by translating frame formulas to predicate formulas. The mapping tr is the identity mapping on all RIF formulas, with the exception of frame formulas, as defined in the following table. In the table, the mapping tr' is an injective function that maps constants to new constants, i.e., constants that are not used in the original document or its vicinity (i.e., entailed or entailing formula). It "generates" a new constant from an existing one. For the purpose of making statements about this embedding, we define a notion of entailment for DL-document formulas. Definition. A RIF-BLD DL-document formula R dl-entails a DL-condition φ if for every dl-semantic multi-structure Î that is a model of R it holds that TValÎ(φ)=t. ☐ The following lemma establishes faithfulness with respect to entailment of the embedding.
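The frame-to-predicate translation can be sketched as follows: a frame s[rdf:type -> o] becomes a unary atom over a new predicate constant derived from o, and any other frame s[p -> o] becomes a binary atom over a new predicate constant derived from p. In this Python sketch the suffix-based scheme for tr' is an assumption; any injective mapping to unused constants would do.

```python
# Hedged sketch of the frame-to-predicate translation: atoms are modeled
# as tuples (predicate, arg1[, arg2]); the "_pred" suffix stands in for
# the injective mapping tr' to new constants.
def tr_prime(constant):
    return constant + "_pred"          # illustrative "new constant" scheme

def tr_frame(s, p, o):
    if p == "rdf:type":
        return (tr_prime(o), s)        # unary atom: tr'(o)(s)
    return (tr_prime(p), s, o)         # binary atom: tr'(p)(s, o)
```

For example, ex:john[rdf:type -> ex:Person] is translated to the unary atom built from ex:Person, while ex:john[ex:hasChild -> ex:mary] yields a binary atom built from ex:hasChild.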
RIF-BLD DL-document formula Lemma A RIF-BLD DL-document formula R dl-entails a DL-condition φ if and only if tr(R) entails tr(φ). Proof. We prove both directions by contradiction: if the entailment does not hold on the one side, we show that it also does not hold on the other. (=>) Assume tr(R) does not entail tr(φ). This means there is some semantic multi-structure I = <TV, DTS, D, Dind, Dfunc, IC, IV, IF, Iframe, ISF, Isub, Iisa, I=, Iexternal, Itruth> that is a model of tr(R), but not of tr(φ). Consider the dl-semantic multi-structure I* = <TV, DTS, D, Dind, Dfunc, I*C, IV, IF, I*frame', ISF, Isub, Iisa, I=, Iexternal, Itruth>, with I*C and I*frame' defined as follows: Let t be an element in D such that Itruth(t)=t and let f in D be such that Itruth(f)=f. Observe that tr(R) and tr(φ) do not include frame formulas.
- for every constant c' used as unary or binary predicate symbol in tr(R) or tr(φ) such that c'=tr'(c) for some constant c, I*C(c')=IC(c); I*C(c*)=IC(c*) for every other constant c*;
- for every constant c' used as unary predicate symbol in tr(R) or tr(φ) such that c'=tr'(c) for some constant c, and every object k in Dind, if Itruth(IF(IC(c'))(k))=t, I*frame'(k)((IC(rdf:type), IC(c)))=t,
- for every constant b' used as binary predicate symbol in tr(R) or tr(φ) such that b'=tr'(b) for some constant b, and every pair (k, l) in Dind × Dind, if Itruth(IF(IC(b'))(k,l))=t, I*frame'(k)((IC(b),l))=t,
- if I*frame'(k)((b1,...,bn))=t and I*frame'(k)((c1,...,cm))=t for any two finite bags (b1,...,bn) and (c1,...,cm), then I*frame'(k)((b1,...,bn,c1,...,cm))=t, and
- I*frame'(b)=f for any other bag b.
To show that I* is a model of R and not of φ, we only need to show that (+) for any frame formula a[b -> c] that is a DL-condition, I* is a model of a[b -> c] iff I is a model of tr(a[b -> c]).
This argument straightforwardly extends to the case of frames with multiple bi and ci, since in RIF semantic structures the following condition is required to hold: TValI(a[b1->c1 ... bn->cn]) = t if and only if TValI(a[b1->c1]) = ... = TValI(a[bn->cn]) = t [RIF-BLD]. Consider the case b=rdf:type. Then, I* is a model of a[b -> c] iff Itruth(I*frame'(I(a))(IC(rdf:type),IC(c)))=t. From the definition of I* we obtain Itruth(I*frame'(I(a))(IC(rdf:type),IC(c)))=t iff I*frame'(I(a))(IC(rdf:type),IC(c))=t. By definition of the embedding, we know that tr'(c) is used as unary predicate symbol in tr(R) or tr(φ). From the definition of I* we obtain I*frame'(I(a))(IC(rdf:type),IC(c))=t iff Itruth(IF(IC(tr'(c)))(I(a)))=t. Finally, since tr(a[b -> c])=tr'(c)(a), we obtain Itruth(IF(IC(tr'(c)))(I(a)))=t iff I is a model of tr(a[b -> c]). From this chain of equivalences it follows that I* is a model of a[b -> c] iff I is a model of tr(a[b -> c]). The argument for the case b≠rdf:type is analogous, thereby obtaining (+). (<=) Assume R does not dl-entail φ. This means there is some dl-semantic multi-structure I = <TV, DTS, D, Dind, Dfunc, IC, IV, IF, Iframe', ISF, Isub, Iisa, I=, Iexternal, Itruth> that is a model of R, but not of φ. Let B be the set of constant symbols occurring in the frame formulas of the forms a[rdf:type -> b] and a[b -> c] in R or φ. Consider the semantic multi-structure I* = <TV, DTS, D, Dind, Dfunc, I*C, IV, I*F, I*frame, ISF, Isub, Iisa, I=, Iexternal, Itruth>. Let t and f in D be such that Itruth(t)=t and Itruth(f)=f. We define I*C, I*frame, and I*F as follows: Observe that R and φ do not include predicate formulas involving the derived constant symbols tr'(b) or tr'(c). The remainder of the proof is analogous to the (=>) direction.
☐
- I*C(tr'(b))=IC(b) for any b in B; I*C(c)=IC(c) for any c not in B,
- I*frame(b)=f for any finite bag b of D, and
- I*F is defined as follows:
- for every c in B, given an object k in Dind, if Itruth(Iframe'(k)((IC(rdf:type), IC(c))))=t, I*F(I*C(tr'(c)))(k)=t; I*F(I*C(tr'(c)))(k')=f for any other k' in Dind,
- for every b in B, given a pair (k, l) in Dind × Dind, if Itruth(Iframe'(k)((IC(b),l)))=t, I*F(I*C(tr'(b)))(k,l)=t; I*F(I*C(tr'(b)))(k',l')=f for any other pair (k', l') in Dind × Dind, and
- I*F(c')=IF(c') for every other constant c'.
The embedding of OWL 2 RL into RIF BLD has two stages: normalization and embedding. The OWL 2 syntax is given in terms of a structural model, and there is a functional-style syntax that is a serialization of this structural model. For convenience, normalization and embedding in this section are done in terms of the functional-style syntax. That is, the normalization mapping takes as input a functional-style syntax ontology document and produces a normalized ontology document. The embedding mapping takes as input a normalized ontology document and produces a RIF document. We refer to Section 4.2 of [OWL2-Profiles] for the specification of the OWL 2 RL syntax. Normalization splits the OWL axioms so that the later mapping to RIF of the individual axioms results in rules. Additionally, it simplifies the axioms and removes annotations. It is assumed that the normalization process is preceded by a simplification process that removes all namespace prefixes, turns all CURIEs and relative IRIs into absolute IRIs, and removes all annotations, import statements, entity declarations, and annotation axioms. We note here that, strictly speaking, simplified OWL 2 RL ontologies are not OWL 2 RL ontologies in the general case, because certain entity declarations are required (e.g., those distinguishing data from object properties).
It is assumed that such entity declarations are present implicitly, i.e., they do not appear explicitly in the simplified ontology, but they are known. We also note that removing import statements in the simplification does not prohibit importing ontologies in practice; since combinations contain sets of ontologies, all imported ontologies may be added to these sets. The normalization mapping trN takes as input a simplified ontology O and produces an equivalent normalized ontology O'. The names of variables used in the mapping generally correspond to the names of productions in the OWL 2 RL grammar. We note that normalized OWL 2 RL ontologies are not necessarily OWL 2 RL ontologies, since owl:Thing may appear in subclass expressions, as a result of the transformation of DataPropertyRange axioms. The following lemma establishes the fact that, for the purpose of entailment, the ontologies in a combination may be replaced with their normalization. Normalization Lemma Given a combination C=<R,{O1,...,On}>, where O1,...,On are simplified OWL 2 RL ontologies that do not import ontologies, C OWL DL-entails φ iff C'=<R,{trN(O1),...,trN(On)}> OWL DL-entails φ. We now proceed with the embedding of normalized OWL 2 RL ontologies into RIF DL-document formulas. The embedding function trO takes as input a normalized OWL 2 RL ontology and returns a RIF-BLD DL-document formula. The embeddings of IRIs and literals are as defined in the Section Embedding Symbols. When we speak about class IDs and datatype IDs we mean IRIs used as identifiers for classes and datatypes, respectively. By the syntactic restrictions on OWL ontologies, these sets are disjoint. Similarly, we speak about data, object, and annotation property IDs when talking about IRIs used as identifiers for data, object, and annotation properties, respectively. These sets of property identifiers are mutually disjoint for any OWL ontology.
Besides the embedding in the previous table, we also need an axiomatization of some of the aspects of the OWL DL semantics, e.g., the separation between individual and datatype domains. This axiomatization is defined relative to an OWL vocabulary V, which includes all well-typed literals used in the rules, and a datatype map D, which includes all considered datatypes. In the table, for a given datatype d, L2V(d) is the lexical-to-value mapping of d. We call an OWL 2 RL ontology O normalized if it is the same as its normalization, i.e., O=trN(O). The following lemma establishes faithfulness of the embedding. Normalized Combination Embedding Lemma Given a datatype map D conforming with T, a RIF-OWL DL-combination C=<R,{O1,...,On}>, where {O1,...,On} is an imports-closed set of normalized OWL 2 RL ontologies with vocabulary V, C OWL DL-entails a DL-condition φ with respect to D iff merge({R, ROWL-DL(V), trO(O1), ..., trO(On)}) dl-entails φ. The following theorems establish faithfulness of the full embedding of RIF-OWL 2 RL combinations into RIF. Theorem Given a datatype map D conforming with T, a RIF-OWL DL-combination C=<R,{O1,...,On}>, where {O1,...,On} is an imports-closed set of OWL 2 RL ontologies with vocabulary V, C OWL DL-entails a DL-condition formula φ with respect to D iff tr(merge({R, ROWL-DL(V), trO(trN(O1)), ..., trO(trN(On))})) entails tr(φ). Theorem Given a datatype map D conforming with T, a RIF-OWL DL-combination <R,{O1,...,On}>, where {O1,...,On} is an imports-closed set of OWL 2 RL ontologies with vocabulary V, is owl-dl-satisfiable with respect to D iff tr(merge({R, ROWL-DL(V), trO(trN(O1)), ..., trO(trN(On))})) does not entail rif:error. Changes since the 30 July 2008 Working Draft. An error in the definition of dl-semantic-structure was corrected; the previous version of the definition did not disallow class and property identifiers from being mapped into the individual domain.
Some missing CURIE prefixes were added in the Appendix, references to rif:text have been updated to rdf:PlainLiteral, in line with the change in [RIF-DTB], and some typos were corrected in the main text. The embeddings of the Simple, RDF, and RDFS entailment regimes have been updated so that they are now in RIF Core. It was previously undefined what happens when graphs are imported with the OWL DL profile while the importing document is not a DL-document formula or the graphs are not representations of OWL DL ontologies. OWL compatibility is now defined for OWL 2, which extends OWL 1. Since annotations have no semantics in OWL 2 DL, the notion of owl-dl-annotation-entailment is rendered meaningless, and has thus been removed. RDF URI References: There are certain RDF URI references that are not IRIs (e.g., those containing spaces). It is possible to use such RDF URI references in RDF graphs that.
http://www.w3.org/2005/rules/wg/draft/rif-rdf-owl/
Intro to Machine Learning for Developers. This post will simplify the process of machine learning tool selection down to one — scikit-learn. In this introductory post, we will go through the following topics:
- A brief introduction to machine learning
- What is scikit-learn?
- Installing scikit-learn
- Algorithms that you will learn to implement with scikit-learn in this series
- Example to build your first regression model
Now, let's begin this fun journey into the world of machine learning with scikit-learn! A Brief Introduction to Machine Learning. Machine learning has generated quite the buzz — from Elon Musk fearing the role of unregulated artificial intelligence in society to Mark Zuckerberg having a view that contradicts Musk. Here is a quick overview of the Google trend for machine learning. Broadly speaking, machine learning can be categorized into three main types:
- Supervised learning
- Unsupervised learning
- Reinforcement learning
Supervised Learning. Supervised Learning Algorithms. Supervised learning algorithms can be used to solve both classification and regression problems. You will learn how to implement some of the most popular supervised machine learning algorithms. Popular supervised machine learning algorithms are the ones that are widely used in industry and research and have helped us solve a wide range of problems across a wide range of domains. The following are some of the supervised learning algorithms:
- Linear regression: This supervised learning algorithm is used to predict continuous numeric outcomes such as house prices, stock prices, and temperature, to name a few.
- Logistic regression: The logistic learning algorithm is a popular classification algorithm that is especially used in the credit industry in order to predict loan defaults.
- k-Nearest Neighbors: The k-NN algorithm is a classification algorithm that is used to classify data into two or more categories, and is widely used to classify houses into expensive and affordable categories based on price, area, bedrooms, and a whole range of other features. - Support vector machines: The SVM algorithm is a popular classification algorithm that is used in image and face detection, along with applications such as handwriting recognition. - Tree-Based algorithms: Tree-based algorithms such as decision trees, Random Forests, and Boosted trees are used to solve both classification and regression problems. - Naive Bayes: The Naive Bayes classifier is a machine learning algorithm that uses the mathematical model of probability to solve classification problems. Unsupervised Learning Unsupervised learning is a form of machine learning in which the algorithm tries to detect/find patterns in data that do not have an outcome/target variable. In other words, we do not have data that comes with pre-existing labels. Thus, the algorithm will typically use a metric such as distance to group data together depending on how close they are to each other. As discussed in the previous section, most of the data that you will encounter in the real world will not come with a set of predefined labels and, as such, will only have a set of input features without a target attribute. In the following simple mathematical expression, U is the unsupervised learning algorithm, while X is a set of input features, such as weight and age: Given this data, our objective is to create groups that could potentially be labeled as Healthy or Not Healthy. The unsupervised learning algorithm will use a metric such as distance in order to identify how close a set of points are to each other and how far apart two such groups are. Unsupervised Learning Algorithms Unsupervised machine learning algorithms are typically used to cluster points of data based on distance. 
The unsupervised learning algorithm that you will learn is as follows:
- k-means: The k-means algorithm is a popular algorithm that is typically used to segment customers into unique categories based on a variety of features, such as their spending habits. This algorithm is also used to segment houses into categories based on their features, such as price and area.
Reinforcement Learning. Reinforcement learning is an area of machine learning concerned with taking suitable actions to maximize reward in a particular situation. It is employed by various software and machines to find the best possible behavior or path to take in a specific situation. Prerequisites for this machine learning series:
- How to Setup Jupyter Notebook perfectly for Data Analysis
- Pandas in Python for Data Analysis with Example (Step-by-Step guide)
- Data Visualization
How We Are Going to Do It — Scikit-Learn. Scikit-learn is free and open source software that helps you tackle supervised and unsupervised machine learning problems. The software is built entirely in Python and utilizes some of the most popular libraries that Python has to offer, namely NumPy and SciPy. The main reason why scikit-learn is very popular stems from the fact that most of the world's most popular machine learning algorithms can be implemented quite quickly in a plug-and-play format once you know what the core pipeline is like. Another reason is that popular algorithms for classification, such as logistic regression and support vector machines, are written in Cython. Cython is used to give these algorithms C-like performance and thus makes the use of scikit-learn quite efficient. Scikit-learn is designed to tackle problems pertaining to supervised and unsupervised learning only and does not support reinforcement learning at present.
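The clustering idea described above can be sketched with scikit-learn's KMeans class. The (weight, age) feature pairs and the two-group setup below are illustrative assumptions for this sketch, not data from the article:

```python
# Minimal k-means sketch: group points by proximity, as described in the text.
import numpy as np
from sklearn.cluster import KMeans

# Two loose groups of (weight, age) points -- purely synthetic.
X = np.array([
    [60, 25], [62, 30], [58, 28],    # group A
    [95, 45], [99, 50], [102, 48],   # group B
])

# Ask for two clusters; random_state makes the sketch reproducible.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

# Points within each synthetic group should share a cluster label.
print(labels)
```

Since the algorithm only sees distances, the two well-separated groups are recovered regardless of which numeric label (0 or 1) each group happens to receive.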
Installing the Scikit-Learn Package. There are two ways in which you can install scikit-learn on your personal device:
- By using the pip method
- By using the Anaconda method
The pip method can be used from the macOS/Linux Terminal or the Windows PowerShell, while the Anaconda method works with the Anaconda prompt. Choosing between these two methods of installation is pretty straightforward. The pip Method:
pip3 install numpy
pip3 install scipy
pip3 install scikit-learn
pip3 install -U scikit-learn
The Anaconda Method:
conda install numpy
conda install scipy
conda install scikit-learn
conda update scikit-learn
So far, this lesson has focused on a brief introduction to what machine learning is for those of you who are just beginning your journey into the world of machine learning. You have learned about how scikit-learn fits into the context of machine learning and how you can go about installing the necessary software. Now, we'll put this into practice and do some data exploration and analysis. The dataset we'll look at in this section is the so-called Boston housing dataset. It can be accessed from the module sklearn.datasets using the load_boston method:
from sklearn import datasets
boston = datasets.load_boston()
type(boston)
print(boston['DESCR'])
import pandas as pd
## Loading the data as a DataFrame in pandas
df = pd.DataFrame(data=boston['data'], columns=boston['feature_names'])
## Checking the top 5 rows of the dataframe
df.head()
In machine learning, the variable that is being modeled is called the target variable; it's what you are trying to predict given the features. For this dataset, the suggested target is MEDV, the median house value in 1,000s of dollars.
## Adding a temporary target column to our dataframe
df['MEDV'] = boston['target']
## Creating a copy of the target values
y = df['MEDV'].copy()
## Deleting the newly created column
del df['MEDV']
## Concatenating the target column back onto our existing dataframe
df = pd.concat((y, df), axis=1)
print(df.shape)
df.isnull().sum()
---------------------
(506, 14)
MEDV 0
CRIM 0
ZN 0
INDUS 0
CHAS 0
NOX 0
RM 0
AGE 0
DIS 0
RAD 0
TAX 0
PTRATIO 0
B 0
LSTAT 0
dtype: int64
For this dataset, we see there are no NaNs, which means we have no immediate work to do in cleaning the data and can move on. To simplify the analysis, the final thing we'll do before exploration is remove some of the columns. We won't bother looking at these and instead focus on the remainder in more detail. Remove some columns by running the cell that contains the following code:
for col in ['ZN', 'NOX', 'RAD', 'PTRATIO', 'B']:
    del df[col]
Data Exploration. Since this is an entirely new dataset that we've never seen before, the first goal here is to understand the data. We've already seen the textual description of the data, which is important for qualitative understanding. We'll now compute a quantitative description.
df.describe().T
cols = ['RM', 'AGE', 'TAX', 'LSTAT', 'MEDV']
df[cols].corr()
We call sns.heatmap and pass the pairwise correlation matrix as input. We use a custom color palette here to override the Seaborn default. The resulting table shows the correlation score between each pair of values. Large positive scores indicate a strong positive (that is, in the same direction) correlation. As expected, we see maximum values of 1 on the diagonal. The Pearson coefficient is defined as the covariance between two variables divided by the product of their standard deviations: ρ(X, Y) = cov(X, Y) / (σ_X σ_Y). The covariance, in turn, is defined as cov(X, Y) = (1/n) Σ (xi − Xbar)(yi − Ybar). Here, n is the number of samples, xi and yi are the individual samples being summed over, and Xbar and Ybar are the means of each set. Linear Model With Scikit-Learn. We'll import the LinearRegression class and build our linear regression model the same way as before when we calculated the MSE. Run the following: Use scikit-learn to fit a polynomial regression model to predict the median house value (MEDV), given the LSTAT values.
We are hoping to build a model that has a lower mean-squared error (MSE).
y = df['MEDV'].values
x = df['LSTAT'].values.reshape(-1,1)
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3)
x_poly = poly.fit_transform(x)
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(x_poly, y)
y_pred = clf.predict(x_poly)
resid_MEDV = y - y_pred
from sklearn.metrics import mean_squared_error
error = mean_squared_error(y, y_pred)
print('mse = {:.2f}'.format(error))
--------------------------------------
mse = 28.88
This completes the extensive guide to understanding how to write your first machine learning model. Here, we used visual assists, such as scatter plots, to deepen our understanding of the data. We also performed simple predictive modeling. In the next part, we will look into what MSE/RMSE is and work with other models to enhance accuracy. Hope you liked it! Let me know your thoughts in the comments section. Published at DZone with permission of Ram Dunge. See the original article here. Opinions expressed by DZone contributors are their own.
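As a companion to the correlation step described in the text (df[cols].corr() passed to sns.heatmap), here is a hedged sketch on a small synthetic stand-in. The column names and data below are assumptions for illustration only, not the Boston dataset itself:

```python
# Sketch of the pairwise-correlation step on synthetic stand-in data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rm = rng.normal(6, 1, 200)              # stand-in for average rooms (RM)
medv = 5 * rm + rng.normal(0, 1, 200)   # target strongly tied to rm
tax = rng.normal(300, 50, 200)          # unrelated noise column

df = pd.DataFrame({'RM': rm, 'TAX': tax, 'MEDV': medv})
corr = df.corr()  # Pearson correlation matrix, 1.0 on the diagonal

print(corr.round(2))
# To visualize it as in the article: sns.heatmap(corr, annot=True)  # needs seaborn
```

Because MEDV is constructed from RM plus small noise, corr.loc['RM', 'MEDV'] is close to 1, while the independent TAX column correlates only weakly with either.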
https://dzone.com/articles/deep-intro-to-machine-learning-for-developers?fromrel=true
[SOLVED] How to use glut in my shared library - gamowaxaky
Hi everyone, I got a problem with using glut in my shared library. Please help me! I'm developing an application that draws 3D solids in a QWidget. I use the glut library to initialize, draw and so on... And the app works with no problem, but after a month, I realized building the entire app is too slow; it takes about 15 minutes for 1 build, so I decided to break my app into separate libraries to reduce the building time. It's OK with libraries that don't include glut, but with glut included in my library, it can't compile, with
@
error: undefined reference to `glBegin@4'
error: undefined reference to `glVertex3f@12'
error: undefined reference to `glEnd@0'
@
Here is my library project: util.pro file:
@
LIBS += -L./lib -lglut32 #glut.lib, glut32.lib, libglut32.a here
INCLUDEPATH += ./include #glut.h file here
TARGET = util
TEMPLATE = lib
SOURCES += draw.cpp
HEADERS += draw.h
@
Here is the draw.h file:
@
void drawLine(float x1, float y1, float z1, float x2, float y2, float z2);
@
And the draw.cpp file:
@
#include <glut.h>
#include "draw.h"
void drawLine(float x1, float y1, float z1, float x2, float y2, float z2){
glBegin(GL_LINES);
glVertex3f(x1,y1,z1);
glVertex3f(x2,y2,z2);
glEnd();
}
@
I also tried to use a static library but it doesn't work either. So how do I use glut with a static or shared library? Is it possible with Qt? Any suggestion will be appreciated!
[Edit: Added @ tags for code formatting -- mlong]
The missing symbols do not come from GLUT but from OpenGL itself. You should add the OpenGL library as well in your LIBS. Which may be as simple as doing QT += opengl in your subprojects by the way.
Thanks rcari & Don Harris so much! Finally, I can use glut in my library with your help.
Would this work with Qt 5 as well?
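Following the accepted answer, a sketch of what the corrected util.pro might look like. The paths are the original poster's; the added lines are one way to link against OpenGL itself, which is what provides glBegin, glVertex3f, and glEnd (the explicit -lopengl32/-lglu32 line is a Windows/MinGW assumption, not something confirmed in the thread):

```
# util.pro -- sketch of the fix suggested in the thread
QT += opengl                    # pulls in Qt's OpenGL module and links the GL library
LIBS += -L./lib -lglut32 -lopengl32 -lglu32   # explicit alternative on Windows/MinGW
INCLUDEPATH += ./include        # glut.h lives here
TARGET = util
TEMPLATE = lib
SOURCES += draw.cpp
HEADERS += draw.h
```

Either the QT += opengl line or the explicit -lopengl32 addition should resolve the undefined gl* references; using both is harmless but redundant.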
https://forum.qt.io/topic/19830/solved-how-to-use-glut-in-my-shared-library
02 August 2012 03:47 [Source: ICIS news] SINGAPORE (ICIS)--
It nominated $2,000/tonne CFR (cost & freight)
Cost-push factors and tight supply are the reasons cited by Asahi Kasei for the FECP increase. "Our margins are in deficit because of high raw material costs, so we have no choice but to increase the August FECP," the company source said. Feedstock propylene prices have increased by about $70/tonne since 15 June to $1,340/tonne CFR northeast (NE) Asia in the week ended 27 July, ICIS data showed. Prices of ammonia – another feedstock for ACN production – also increased by about $50/tonne in the past month to $700/tonne CFR Taiwan in the week ended 27 July, according to ICIS. Asahi Kasei plans to continue to run its plants at 70% of capacity in August. The plant has been operating at this rate since July, the source said. The company has two ACN plants in
Its wholly-owned subsidiary, Tongsuh Petrochemical
http://www.icis.com/Articles/2012/08/02/9583119/japans-asahi-kasei-raises-august-acn-nomination-by.html
We are supposed to write a program which asks for the month, date, and yr and then outputs the day of that date. input: Month: 7 // This is July Day: 4 Year: 1776 output: July 4, 1776 was on a Thursday Formula: day = (r + [2.6m - .2] - 2c + d + [c/4.0] +[d/4.0] ) % 7 [x] denotes the greatest integer less than or equal to x (look in cmath). c = first two digits of the year. (If the year is 1898, c = 18). d = the last two digits of the year. (If the year is 1998, d = 98). r = the day of the month (If the date is May 23, 1998, r = 23). m = the month. March is considered month 1, April is 2, May is 3...February is month 12. this is my code [c++]: #include<iostream> #include<cmath> using namespace std; char Date(int c, int d, int r, int m); int main() { int month, day, yr, yr1, yr2; cout<<"month:"<<endl; cin>>month; cout<<"day:"<<endl; cin>>day; cout<<"year:"<<endl; cin>>yr; yr1=yr/100; yr2=yr%100; cout<<month<<" "<<day<<" "<<yr<<" "<<Date(yr1, yr2, day, month); } char Date(int c, int d, int r, int m) { char result; result=ceil(r+(2.6*m-.2)-2*c+d+(c/4.0)+(d/4.0))%7; switch(result) { case 1: cout<<" was on a sun"; break; case 2: cout<<" was on a mon"; break; case 3: cout<<" was on a tues"; break; case 4: cout<<" was on a wed"; break; case 5: cout<<" was on a thurs"; break; case 6: cout<<" was on a fri"; break; case 7: cout<<" was on a sat"; break; default: cout<<"invalid"; return result; } } //when i compile it, i get the error "'%' illegal, left operand, has type 'double""
https://www.daniweb.com/programming/software-development/threads/12938/illegal-left-operand-has-type-double
Figure 5.1 shows the main example used in this chapter, an online hangman game. When the user first accesses the program, it chooses a random word from a dictionary of words and displays a series of underscores for each of the word's letters. The game prompts the user to type in a single letter guess or, if he thinks he knows it, the whole word. Each time the user presses return (or the "Guess" button), the game adds the guess to the list of letters already guessed and updates the display. Each time the user makes a wrong guess, the program updates the image to show a little bit more of the stick figure, up to six wrong guesses total. When the game is over, the user is prompted to start a new game. A status area at the top of the screen keeps track of the number of words the user has tried, the number of games he's won, and the current and overall averages (number of letters guessed per session).[2] [2] Lincoln was very gratified when he tested the first working version of the game on his wife. She took over the computer and refused to give it back for hours! This hangman game is a classic case of a web application that needs to maintain state across an extended period of time. It has to keep track of several pieces of information, including the unknown word, the letters that the user has already guessed, the number of wins, and a running average of guesses. In this section, we implement the game using hidden fields to record the persistent ...
https://www.safaribooksonline.com/library/view/writing-apache-modules/156592567X/156592567X_ch05-4429.html
Windows Programming. Windows System Architecture. History. Virtual Memory. When a program is started (e.g. a web browser or a word processor), it runs in its own process. A process contains its own "virtual" memory space and resources. Its memory is "virtual" because the process behaves as if it had an entire memory space to itself. Next Chapter: C and Win32 API. C and Windows. Next Chapter: <Windows.h>. windows.h. The primary C header file for accessing the Win32 API is the <windows.h> header file. To make a Win32 executable, the first step is to include this header file in your source code. The windows.h header file should be included before any other library include, even the C standard library files such as stdio.h or stdlib.h. This is because the windows.h file includes macros and other components that may modify, extend, or replace things in these libraries. This is especially true when dealing with UNICODE, because windows.h will cause all the string functions to use UNICODE instead. Also, because many of the standard C library functions are already included in the Windows kernel, many of these functions will be available to the programmer without needing to load the standard libraries. For example, the function sprintf is included in windows.h automatically. Child Header Files. There are a number of header files that are automatically included with windows.h. Many of these files cannot simply be included by themselves, because of dependencies. The windows.h header file is actually a relatively small file that provides conditional inclusion for a number of other header files, definitions of a few important macros, etc. Additional Header Files. This section will talk about some other interesting header files and libraries that can be included with your project, but which are not included by default with windows.h.
windows.h Macros. This section will briefly detail some of the changes that the user can make in the windows.h structure at compile time using macros. WIN32_LEAN_AND_MEAN. The WIN32_LEAN_AND_MEAN macro causes several of the child headers to not be included in the build. This can help to speed up the compilation process. UNICODE and _UNICODE. These macros, which we will discuss later, can generally be used interchangeably, but are frequently both defined together. They cause the program to be compiled with UTF-16 encoded strings instead of ASCII encoded strings. UTF-16 is one internationalized encoding based on the UNICODE standard and allows for more characters than the American ASCII encoding, although UTF-8 is often preferred in modern programming. WINVER and _WIN32_WINNT. You must set these to a number greater than or equal to 0x500 or you will not get some of the useful functions that are in Windows 2000 (and thus in any modern Windows) but were not in Windows 98. Next Chapter: Hungarian Notation. Unicode. For a reference of the Unicode standard, see Unicode. Introduction to Unicode. Control Characters. Unicode characters 0 to 31 (U+0000 to U+001F) are part of the C0 Controls and Basic Latin block. They are all control characters. These characters correspond to the first 32 characters of the ASCII set. Next Chapter: Dynamic Link Libraries (DLL). Dynamic Link Libraries. DLL libraries can be linked to an executable in two ways: statically and dynamically. Static Linking to a DLL. Next Chapter: Programming Windows With OSS Tools. Getting an Open Source Compiler. With a fast internet connection, gcc — a free compiler that can compile C, C++, and Fortran, as well as other languages and works on Windows — can be obtained through Cygwin. When installing Cygwin, select to install the "devel/gcc" package and optionally also "graphics/opengl", depending on what type of program you're planning to write.
Another Windows port of the GCC compiler collection is MinGW (Minimalist GNU for Windows), an approximately 10MB download, one tenth the size of Cygwin. You could also download DJGPP, but it is designed more for 32-bit console programs. An alternative to obtaining the gcc compiler via porting is to compile from the command line within a virtualized version of Linux, which not only comes with gcc, but whose kernel "... is written in the version of the C programming language supported by GCC..." Command Line GCC Illustration. To compile, open a command-line interface and use the commands: c:\cygwin\bin\gcc -s -Os -mno-cygwin -o <outputfilename> <inputfilename> for command-line programs and c:\cygwin\bin\gcc -s -Os -mwindows -mno-cygwin -o <outputfilename> <inputfilename> -lopengl32 -lwinmm for GUI programs. Description of command-line flags:
- -s: makes the output program smaller by stripping out information that is useful for debugging crashed applications
- -Os: optimizes for size (-O3 for execution speed)
- -mno-cygwin: makes the program work without any external nonstandard .dll-files
- -mwindows: makes it a GUI application
- -lopengl32: optionally lets you use OpenGL for graphics
- -lwinmm: optionally adds some multimedia support such as sound, joysticks, and high resolution timers
One must be in the bin directory of Cygwin in order to compile with Cygwin, whereas on Linux gcc is already on the search path and works from any directory. Graphical Integrated Development Environments. For a more user-friendly environment, like Visual Studio, an excellent open source IDE for Windows is Bloodshed's Dev-C++. It has a well-organised Graphical User Interface, and comes with everything necessary for setting up a Windows C/C++ development environment, installing and using the MinGW port of GCC by default (though it can be configured to work in combination with any other GCC-based compiler such as Cygwin).
Its source (written in Delphi) is available from the Bloodshed website, as are the pre-compiled binaries. Dev-C++ has not been updated in quite some time, though, and alternatives like Code::Blocks or CodeLite have since emerged. NetBeans is an IDE developed by Sun Microsystems as the official IDE for Java; however, NetBeans now supports multiple languages and is very extensible via its plugin system. Another open source IDE is Eclipse, which was initially developed by IBM. One more IDE, created specifically for C++, is Ultimate++. GCC, DJGPP, MinGW and others These are three well-known FOSS development toolsets for compiling applications, although they function in a slightly different way from Microsoft's and Borland's Windows programming tools. GCC GCC stands for the GNU Compiler Collection. Although it provides support for compiling sources in several programming languages, it is most commonly used for C and C++ code. Because GCC is a compiler and not an IDE, environments built around it, such as DJGPP, wrap some of its more arcane details away from the user, easing the creation of programs, as well as providing other features, described below. DJGPP DJGPP is a development environment that DJ Delorie has developed since 1989. It is a DOS application with 32-bit awareness and allows the creation of software with DPMI support. As a combined port of GCC and other GNU utilities, it is commonly used to write, compile and debug console applications. It can make use of libraries such as Allegro in order to create applications with graphics. Other add-ons extend its functionality. It has support for cross-platform development, meaning that you may create software without needing a physical instance of a target machine type or operating system in order to build the executable that will be shipped to that configuration. MinGW MinGW stands for Minimalist GNU for Windows and includes the needed header files for using the Win32 API; it supports creation of native Windows executables.
As of late December 2006, Wikipedia defines the goal of MinGW as to "provide only a free compiler and toolchain, prioritizing performance". This contrasts with the Cygwin objective of creating a POSIX-compliant development environment. Like DJGPP, it can be used for cross-platform development. Because MinGW is available for systems other than Windows (such as Linux), even developers lacking a copy of Windows can create Win32 API applications. Furthermore, PE executables (.exe files) can be run using the WINE program (which translates Windows API calls to the corresponding UNIX system functions) to get a taste of what the application will look like - without actually having to install Windows. Dev-C++ Dev-C++ is a free graphical IDE that comes packaged with MinGW and can also use the Cygwin compiler. Other tools GCC, DJGPP and MinGW do not have support for the graphic-oriented resource and GUI editing commonly needed for Windows programming. However, open source libraries such as Qt, GTK+ and wxWidgets provide user interface frontends as well as simplified models wrapping around the complexities of the Windows API. The minimal trade-off is that these may require special licensing and the bundling of runtime DLLs. Cygwin Cygwin is an environment for Windows that emulates Unix behavior. It can be obtained from cygwin.com via a small setup.exe package that, when run, asks for a selection of features that are then fetched. A fast internet connection is suggested, since Cygwin comprises many non-trivial shells, programs, and even GUIs.
The setup program allows the user to download the following parts, among others: - Bash shell - Development tools, with debugger, several languages supported by gcc compilers, as well as AWK, SED, YACC and Bison support - Network tools, such as ssh, the lynx web browser, nmap, and others - Programmer text editors, such as GNU Emacs, nano and vi - X, an environment that is otherwise very difficult to obtain for free on Windows The software, once selected, is downloaded and installed to a folder at the root of the user's hard drive. It does not repartition the drive, require installation of any Unix system, nor use a disk image. Files are simply installed in a Unix hierarchy within the cygwin folder. To run Cygwin, the user will have a Start Menu program or Desktop shortcut called Cygwin Bash Shell. Upon opening it, a DOS-like window opens and displays a classic bash prompt. It is color coded, unlike its DOS underpinnings, and it can interact with DOS because it is aware of path settings and other system variables. Be aware, therefore, that Windows applications on your system path may conflict with files of the same name in the current folder, because of DOS's implicit EXE extension expansion. Directory information As an added detail, if you need access to different drives in your computer, instead of using /mnt/, you have to cd into /cygdrive/ to see your drives listed. Seeing your C: drive's root, then, is just a matter of typing cd /cygdrive/c at the command line. Emacs editor information When using Emacs, the normal key combination to exit the program has no effect by default, though adept users will probably find an Emacs keybinding trick to overcome this issue. Without knowing the workaround, you may end up suspending the program and killing the process at the command line instead. Rather than pressing C-x C-c to leave Emacs, you must press F10 to get the menu. Once there, press f and then e.
This summons the File menu and then its Exit option, returning to the command line and prompting you to save any changed files that need to be committed to disk. Compiling without Cygwin library bindings Compiling with g++, as stated in a previous section, needs to include the appropriate flags. gcc will normally bind the executable code to the cygwin1.dll file, unless your gcc flags explicitly include "-mno-cygwin" as a switch. To solve this, you can add this and all other switches to your .bashrc file, by using a known Unix option: Open the .bashrc file at your Cygwin home directory in any editor. Look for the lines starting with "alias". There, you will see some shell commands tweaked to have specific options. Under the line containing alias l='ls -CF' you may add: alias g++='g++ -mno-cygwin' #do not depend on cygwin1.dll on target system In case you need to add more fine-tuning flags, you may either add them to that same line after the -mno-cygwin switch, or just re-alias the g++ command so that the shell stacks the new options onto the current meaning of g++. For example, you may add alias g++='g++ -Wno-deprecated' #silence deprecation talk to keep the g++ compiler from reminding you about the standard practice of including <string> instead of the now deprecated <string.h> header. No root nor su switching Cygwin was created with a few differences in mind. It does not have a separate root account, though it may claim files are owned by the Administrator user. Though this may be beyond the scope of the book, it is worth a line or two to say that you won't easily be able to experiment with programs that run under different user rights, because it does not seem that Cygwin tries to enforce account security. You may read any file in the OS that Windows allows Cygwin to access. Running X Other details GNU Tools
Next Chapter Resource Scripts Here is a list of common resources: - Drop-down Menus - Popup Menus - Text Strings - Keyboard Accelerators (keypress combinations, such as [Ctrl]+[C] to copy text) - Icons - Bitmap Images - Dialog Boxes - Version information - Mouse Cursors Making a Resource Script. Next Chapter Message Loop Architecture In the WinMain call to CreateWindow, the arguments CW_USEDEFAULT, CW_USEDEFAULT, 240, 120 are the X and Y co-ordinates for the top left corner of your window, and the width and height of the window. The X and Y co-ordinates are set to CW_USEDEFAULT to let Windows choose where on the screen to put the window. Next come NULL, NULL, hInstance. Next Chapter Interfacing The Keyboard The timer only comes with a single message, WM_TIMER, and the WPARAM field will contain the timer ID number. Timer Modes Next Chapter Window Creation On the Windows operating system, most user-interfacable objects are known as "windows". Each window is associated with a particular class, and once the class is registered with the system, windows of that class can be created. WNDCLASS To register a window class, you need to fill out the data fields in a WNDCLASS structure, and you need to pass this structure to the system. First, however, you need to provide your class with a name, so that Windows (the system) can identify it. It is customary to define the window class name as a global variable: LPTSTR szClassName = TEXT("My Class"); You can name it anything you want; this is just an example. After you have the class name, you can start filling out the WNDCLASS structure. WNDCLASS is defined as such:

typedef struct tagWNDCLASS {
    UINT      style;
    WNDPROC   lpfnWndProc;
    int       cbClsExtra;
    int       cbWndExtra;
    HINSTANCE hInstance;
    HICON     hIcon;
    HCURSOR   hCursor;
    HBRUSH    hbrBackground;
    LPCTSTR   lpszMenuName;
    LPCTSTR   lpszClassName;
} WNDCLASS;

For more information on this structure, see this Microsoft Developer's Network article. Notice that the last data field is a pointer to a string named "lpszClassName"? This is where you point to the class name that you've just defined. The field named "hInstance" is where you supply the instance handle for your program.
We will break the rest of the fields up into a few different categories. The HANDLEs There are a number of data types in the WNDCLASS structure that begin with the letter "h". As we remember from our discussion of Hungarian notation, if a variable starts with an "h", the variable holds a HANDLE object. - HICON hIcon - This is a handle to the icon that your program will use, as shown in the top left corner of the window and in the taskbar. We will discuss icons more later. However, in our example below, we will use a default value for this item. - HCURSOR hCursor - This is a handle to the standard mouse pointer that your window will use. In our example, we will use a default value for this also. - HBRUSH hbrBackground - This is a handle to a brush (a brush is essentially a color) for the background of your window. Here is a list of the default colors supplied by Windows (these colors will change depending on what 'theme' is active on your computer): Because of how the system encodes these color constants, a value of 1 must be added to any of them to make them a valid brush. Another value that is worth mentioning here is the "lpszMenuName" variable. lpszMenuName points to a string that holds the name of the program's menu bar. If your program does not have a menu, you may set this to NULL. Extra Fields There are 2 "extra" data members in the WNDCLASS structure that allow the programmer to specify how much additional space (in bytes) to allocate to the class (cbClsExtra) and to each specific window instance (cbWndExtra). In case you are wondering, the prefix "cb" stands for "count of bytes". int cbClsExtra; int cbWndExtra; If you don't know how to use these members, or if you don't want to use them, you may leave both of these as 0. We will discuss these members in more detail later. Window Fields There are 2 fields in the WNDCLASS that deal specifically with how the window will operate.
The first is the "style" field, which is essentially a set of bit flags that determine some actions that the system can take on the class. These flags can be bit-wise OR'd (using the | operator) to combine more than one into the style field. The MSDN WNDCLASS documentation has more information. The next (and arguably most important) member of the WNDCLASS is the lpfnWndProc member. This member points to a WNDPROC function that will control the window, and will handle all of the window's messages. Registering the WNDCLASS After the fields of the WNDCLASS structure have been initialized, you need to register your class with the system. This can be done by passing a pointer to the WNDCLASS structure to the RegisterClass function. If the RegisterClass function returns a zero value, the registration has failed, and no new window class has been created. Creating Windows Windows are generally created using the "CreateWindow" function, although there are a few other functions that are useful as well. Once a WNDCLASS has been registered, you can tell the system to make a window from that class by passing the class name (remember that global string we defined?) to the CreateWindow function. (See this MSDN article for more information.) The first parameter, "lpClassName", is the string associated with our window class. The "lpWindowName" parameter is the title that will be displayed in the titlebar of our window (if the window has a titlebar). "dwStyle" is a field that contains a number of bit-wise OR'd flags that control window creation. Window Dimensions The "x" and "y" parameters specify the coordinates of the upper-left corner of your window, on the screen. If x and y are both zero, the window will appear in the upper-left corner of your screen. "nWidth" and "nHeight" specify the width and height of your window, in pixels, respectively.
The HANDLEs There are 3 HANDLE values that need to be passed to CreateWindow: hWndParent, hMenu, and hInstance. hWndParent is a handle to the parent window. If your window doesn't have a parent, or if you don't want your windows to be related to each other, you can set this to NULL. hMenu is a handle to a menu, and hInstance is a handle to your program's instance value. Passing Values to the New Window To pass a value to the new window, you may pass a generic LPVOID pointer (a pointer-sized value) in the lpParam value of CreateWindow. Generally, it is a better idea to pass parameters via this method than to make all your variables global. If you have more than 1 parameter to pass to the new window, you should put all of your values into a struct, and pass a pointer to that struct to the window. We will discuss this in more detail later. An Example Finally, we are going to display a simple example of this process. This program will display a simple window on the screen, but the window won't do anything. This program is a bare-bones program, and it encompasses most of the framework necessary to make any Windows program do anything. Beyond this, it is easy to add more functionality to a program.
#include <windows.h>

LPSTR szClassName = "MyClass";
HINSTANCE hInstance;

LRESULT CALLBACK MyWndProc(HWND, UINT, WPARAM, LPARAM);

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrevInstance, LPSTR szCmdLine, int iCmdShow)
{
    WNDCLASS wnd;
    MSG msg;
    HWND hwnd;

    hInstance = hInst;

    wnd.style = CS_HREDRAW | CS_VREDRAW; //we will explain this later
    wnd.lpfnWndProc = MyWndProc;
    wnd.cbClsExtra = 0;
    wnd.cbWndExtra = 0;
    wnd.hInstance = hInstance;
    wnd.hIcon = LoadIcon(NULL, IDI_APPLICATION); //default icon
    wnd.hCursor = LoadCursor(NULL, IDC_ARROW);   //default arrow mouse cursor
    wnd.hbrBackground = (HBRUSH)(COLOR_BACKGROUND + 1);
    wnd.lpszMenuName = NULL;                     //no menu
    wnd.lpszClassName = szClassName;

    if(!RegisterClass(&wnd)) //register the WNDCLASS
    {
        MessageBox(NULL, "This Program Requires Windows NT", "Error", MB_OK);
        return 0;
    }

    hwnd = CreateWindow(szClassName,
        "Window Title",
        WS_OVERLAPPEDWINDOW,          //basic window style
        CW_USEDEFAULT, CW_USEDEFAULT, //set starting point to default value
        CW_USEDEFAULT, CW_USEDEFAULT, //set all the dimensions to default value
        NULL,                         //no parent window
        NULL,                         //no menu
        hInstance,
        NULL);                        //no parameters to pass

    ShowWindow(hwnd, iCmdShow); //display the window on the screen
    UpdateWindow(hwnd);         //make sure the window is updated correctly

    while(GetMessage(&msg, NULL, 0, 0)) //message loop
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return msg.wParam;
}

LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch(msg)
    {
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

-EX members The Win32 API gains more functionality with each generation, although Microsoft faithfully maintains the API to be almost completely backwards-compatible with older versions of Windows. To add more functionality, therefore, Microsoft needed to add new functions and new structures, to make use of new features.
An extended version of the WNDCLASS structure is known as the "WNDCLASSEX" structure, which has more fields, and allows for more options. To register a WNDCLASSEX structure, you must use the RegisterClassEx function instead. Also, there is a version of the CreateWindow function with extended functionality: CreateWindowEx. To learn more about these extensions, you can do a search on MSDN. Dialog Boxes Dialog Boxes are special types of windows that get created and managed differently from other windows. To create a dialog box, we will use the CreateDialog, DialogBox, or DialogBoxParam functions. We will discuss these all later. It is possible to create a dialog box by defining a WNDCLASS and calling CreateWindow, but Windows already has all the definitions stored internally, and provides a number of easy tools to work with. For the full discussion, see: Dialog Boxes. Default Window Classes There are a number of window classes that are already defined and stored in the Windows system. These classes include things like buttons and edit boxes, that would take far too much work to define manually. Here is a list of some of the pre-made window types: - BUTTON - A BUTTON window can encompass everything from a push button to a check box and a radio button. The "title" of a button window is the text that is displayed on the button. - SCROLLBAR - SCROLLBAR windows are slider controls that are frequently used on the edge of a larger window to control scrolling. SCROLLBAR types can also be used as stand-alone slider controls. - MDICLIENT - This class enables Multiple Document Interface (MDI) applications. We will discuss MDI applications in a later chapter. - STATIC - STATIC windows are simple text displays. STATIC windows rarely accept user input. However, a STATIC window can be modified to look like a hyperlink, if necessary. - LISTBOX, COMBOBOX - LISTBOX windows are list boxes that can be populated with a number of different choices that the user can select.
A COMBOBOX window combines an edit field with a drop-down LISTBOX. - EDIT, RichEdit - EDIT windows allow text input with a cursor. Basic EDIT windows also allow for copy+paste operations, although you need to supply the code to handle those options yourself. RichEdit controls allow for text editing and formatting. Think of an EDIT control as what Notepad.exe uses, and a RichEdit control as what WordPad.exe uses. Menus There are a number of different menus that can be included in a window or a dialog box. One of the most common (and most important) is the drop-down menu bar that is displayed across the top of a window or a dialog box. Also, many programs offer menus that appear when the mouse is right-clicked on the window. The bar across the top of the window is known as the "Menu Bar", and we will discuss that first. For some information about creating a menu in a resource script, see The Resource Script Reference Page, in the appendix to this book. Menu Bar: From Resource Script The easiest and most straightforward method to create a menu is in a resource script. Let's say that we want to make a menu with some common headings in it: "File", "Edit", "View", and "Help". These are common menu items that most programs have, and that most users are familiar with. We create an item in our resource script to define these menu items. We will denote our resource through a numerical identifier, "IDM_MY_MENU":

IDM_MY_MENU MENU DISCARDABLE
BEGIN
    POPUP "File"
    POPUP "Edit"
    POPUP "View"
    POPUP "Help"
END

The keyword POPUP denotes a menu that opens when you click on it. However, let's say that we don't want the "Help" menu item to pop up, but instead we want to click on the word "Help", and immediately open the help window. We can change it as such:

IDM_MY_MENU MENU DISCARDABLE
BEGIN
    POPUP "File"
    POPUP "Edit"
    POPUP "View"
    MENUITEM "Help"
END

The MENUITEM designator shows that when we click on "Help", another menu won't open, and a command will be sent to the program.
Now, we don't want to have empty menus, so we will fill in some common commands in the "File" and "Edit" menus, using the same MENUITEM keyword as we used above:

IDM_MY_MENU MENU DISCARDABLE
BEGIN
    POPUP "File"
    BEGIN
        MENUITEM "Open"
        MENUITEM "Save"
        MENUITEM "Close"
    END
    POPUP "Edit"
    BEGIN
        MENUITEM "Cut"
        MENUITEM "Copy"
        MENUITEM "Paste"
    END
    POPUP "View"
    MENUITEM "Help"
END

Now, in the "View" category, we want to have yet another popup menu, that says "Toolbars". When we put the mouse on the "Toolbars" command, a submenu will open to the right, with all our selections on it:

IDM_MY_MENU MENU DISCARDABLE
BEGIN
    POPUP "File"
    BEGIN
        MENUITEM "Open"
        MENUITEM "Save"
        MENUITEM "Close"
    END
    POPUP "Edit"
    BEGIN
        MENUITEM "Cut"
        MENUITEM "Copy"
        MENUITEM "Paste"
    END
    POPUP "View"
    BEGIN
        POPUP "Toolbars"
        BEGIN
            MENUITEM "Standard"
            MENUITEM "Custom"
        END
    END
    MENUITEM "Help"
END

This is reasonably easy, to start with, except that now we need to provide a method for interfacing our menu with our program. To do this, we must assign every MENUITEM a command identifier, which we can define in a header file. It is customary to name these command resources with an "IDC_" prefix, followed by a short text saying what it is. For instance, for the "File > Open" command, we will use an id called "IDC_FILE_OPEN". We will define all these ID tags in a resource header later. Here is our menu with all the IDs in place:

IDM_MY_MENU MENU DISCARDABLE
BEGIN
    POPUP "File"
    BEGIN
        MENUITEM "Open", IDC_FILE_OPEN
        MENUITEM "Save", IDC_FILE_SAVE
        MENUITEM "Close", IDC_FILE_CLOSE
    END
    POPUP "Edit"
    BEGIN
        MENUITEM "Cut", IDC_EDIT_CUT
        MENUITEM "Copy", IDC_EDIT_COPY
        MENUITEM "Paste", IDC_EDIT_PASTE
    END
    POPUP "View"
    BEGIN
        POPUP "Toolbars"
        BEGIN
            MENUITEM "Standard", IDC_VIEW_STANDARD
            MENUITEM "Custom", IDC_VIEW_CUSTOM
        END
    END
    MENUITEM "Help", IDC_HELP
END

When we click on one of these entries in our window, the message loop will receive a WM_COMMAND message, with the identifier in the WPARAM parameter.
We will define all our identifiers in a header file to be numerical values in an arbitrary range that does not overlap with the command identifiers of our other input sources (accelerator tables, push-buttons, etc):

//resource.h
#define IDC_FILE_OPEN     200
#define IDC_FILE_SAVE     201
#define IDC_FILE_CLOSE    202
#define IDC_EDIT_COPY     203
#define IDC_EDIT_CUT      204
#define IDC_EDIT_PASTE    205
#define IDC_VIEW_STANDARD 206
#define IDC_VIEW_CUSTOM   207
#define IDC_HELP          208

And we will then include this resource header both into our main program code file, and our resource script. When we want to load a menu into our program, we need to create a handle to a menu, or an HMENU. HMENU data items are identical in size and shape to other handle types, except they are used specifically for pointing to menus. When we start our program, usually in the WinMain function, we will obtain a handle to this menu using an HMENU data item, with the LoadMenu function: HMENU hmenu; hmenu = LoadMenu(hInst, MAKEINTRESOURCE(IDM_MY_MENU)); We will discuss how to use this handle to make the menu appear in another section, below. Menu Bar: From API Calls Menu Bar: Loading a Menu To associate a menu with a window class, we need to include the name of the menu in the WNDCLASS structure. Remember that the WNDCLASS structure has a data field called "lpszMenuName". This is where we will include the ID of our menu: WNDCLASS wnd; wnd.lpszMenuName = MAKEINTRESOURCE(IDM_MY_MENU); Remember, we need to use the MAKEINTRESOURCE macro to convert the numerical identifier (IDM_MY_MENU) into an appropriate string pointer.
Next, after we have associated the menu with the window class, we need to obtain our handle to the menu: HMENU hmenu; hmenu = LoadMenu(hInst, MAKEINTRESOURCE(IDM_MY_MENU)); And once we have the HMENU handle to the menu, we can supply it to our CreateWindow function, so that the menu is created when the window is created. We pass our HMENU handle to the hMenu parameter of the CreateWindow function call. Here is a simple example:

HWND hwnd;
hwnd = CreateWindow(szClassName,
    "Menu Test Window!",
    WS_OVERLAPPEDWINDOW,
    CW_USEDEFAULT, CW_USEDEFAULT,
    CW_USEDEFAULT, CW_USEDEFAULT,
    NULL,
    hmenu,
    hInstance,
    0);

As a quick refresher, notice that we are using default values for all the position and size attributes. We are defining the new window to be a WS_OVERLAPPEDWINDOW, which is a common, ordinary window type. Also, the title bar of the window will say "Menu Test Window!". We also need to pass in the HINSTANCE parameter, which is the second-to-last parameter. Right-Click Menus Next Chapter User Interface Controls Some predefined window classes are intended for use as user interface controls. They're commonly known as "standard Windows controls" and "common controls". Usage of these UI controls should be documented in task-oriented categories, not on an API-oriented basis. Standard Windows Controls Static controls are "controls" that take no user input. Static Text (Label) With other style bits, bitmaps or icons can be shown - with no line of code when the source image is placed in the same resource. Metafile support was added with Windows 95. Rectangles (Divider lines when tall) Static controls can be used to draw grouping rectangles or divider lines. Button Buttons are controls with a simple user interface. Push Button Everyone should be familiar with the Windows push button. It is simply a raised square with text inside it, and when you click it, usually something happens. Example Code.
Introduced with Windows Vista, this button has an additional combo box drop-down field, typically used to change the behaviour of the push button before pressing it. Checkbox The Edit control is the standard base object for text editing and display (it is commonly called a TextBox). The Edit control has a quite large number of styles. Single-Line A list box can have tabulators so it can show tabulated entries. This control is not so commonly used, except in spreadsheet-like applications. Its main purpose is to give Drop-Down Combo Boxes their pop-up List Box. Combo Box This control has three appearances. Simple Combo Box (non-editable) This type is very rarely used and not worth an explanation. Non-editable Drop-Down Combo Box Use it when the user can enter single-line text or can take some predefined strings. A typical example is choosing a URL or number with a history function, or selecting a port address while still allowing a not-yet-known address to be entered. Common Controls Note that the documentation of CalcEffectiveClientRect() is mostly erroneous: the first UINT/BOOL pair of the array is not used at all. Date/Time Picker Introduced: Windows 95 IP Address Input Field Up/Down Control Balloon-style tooltips were introduced quite late, with Windows XP. Therefore, most programs still use standard black-on-yellow rectangular tooltips. List View, Tree View Introduced: Windows 95. Extra features: - Image at the left side of the text - Different images possible for selected items (but rarely used) Caveats: - Cumbersome initializing - Don't forget InitCommonControlsEx! - Larger binary resource Checked List Box Progress Bar What it is A standard bar graph that displays the progress of an operation: it shows a graphical representation of the amount completed over the amount total. Next Chapter References GDI and Drawing This page will talk about graphics and drawing using the Windows GDI libraries.
Device Contexts What's in? And their defaults: To be continued … Brush object Windows uses brushes to paint colors and fill areas with predefined patterns. Brushes have a minimum size of 8x8 pixels and, like pens, have three basic characteristics: size, pattern and color. With their 8x8 pixel minimum size, brushes are said to have a pattern, not a style as pens do. The pattern may be a solid color, hatched, diagonal or any other user-definable combination, even a bitmap pattern.

case WM_PAINT:
{
    /* This will paint a red rectangle */
    HBRUSH holdBrush;
    HDC hdc = BeginPaint(hwnd, &ps);
    HBRUSH hBrush = CreateSolidBrush(RGB(255,0,0));
    holdBrush = (HBRUSH)SelectObject(hdc, hBrush);
    Rectangle(hdc, 10, 10, 100, 100);
    SelectObject(hdc, holdBrush); /* restore the previous brush */
    DeleteObject(hBrush);         /* free the brush we created */
    EndPaint(hwnd, &ps);
    break;
}

What's behind the brush object? Note that a bitmap pattern brush behaves differently from a hatched brush, even when the bitmap looks like the hatch. Hatched brushes are transparent, whereas bitmap brushes are opaque. If a black-and-white bitmap is used, black is replaced by the Device Context's text color, and white is replaced by the background color. (This is exactly the BitBlt() behaviour.) SetBkMode() won't work. However, the DC's ROP2 applies. Pen object Pens are used to create borders around shapes you have drawn. Lines whose widths are larger than 1 device pixel (not logical units) are drawn as polygons with the winding rule. Line colors are never dithered, except when ExtCreatePen with a solid brush is used. Font object Fonts are for displaying text and symbols in various styles and sizes. Internally, large differences exist in font management between Windows 2 (very basic), 3.1 (introducing TrueType), and 4+ (Unicode; world-transform rotation and mirroring; escapement can differ from rotation). See the LOGFONT structure for the visible fields.
Hidden fields include - Referencing device context - Device-dependent bitmap font representation Therefore, selecting a font into a Device Context (SelectObject) can be time-consuming, especially for large Asian fonts and large font sizes, to "paint" all the glyphs out of the TrueType template into the bitmap. Furthermore, as for all GDI objects, it's not a good idea to select one object from one context into another, because invalidating the hidden fields can be a time-consuming process. Note that a GDI object cannot be selected into more than one DC. Basic Drawing In Windows, drawing is typically handled by the WM_PAINT message. The following is an example of how to draw a red square:

case WM_PAINT:
{
    PAINTSTRUCT ps;
    BeginPaint(hwnd, &ps);
    RECT rectangle = {50, 50, 250, 250};
    HBRUSH hbr = CreateSolidBrush(RGB(125, 0, 0));
    FillRect(ps.hdc, &rectangle, hbr);
    DeleteObject(hbr);
    EndPaint(hwnd, &ps);
}
break;

Firstly, we create the PAINTSTRUCT variable ps. This is a data structure containing information about the painting operation. The next line calls BeginPaint. This initializes ps, then fills it with relevant information. For this example, we only need the hdc member of ps. This is a handle to our window's Device Context. Next, we create a rectangle. This holds the coordinates we're going to paint this rectangle at. The coordinates are relative to the upper-left corner of the window's client area. We also have to create a brush, otherwise Windows won't know what color to paint the rectangle. Finally, we call FillRect, and pass the parameters ps.hdc, a pointer to rectangle, and hbr, our brush. This paints the rectangle directly to our window's device context, and from there it is painted on the screen. After every painting operation, it is necessary to clean up any GDI objects we created, in this case hbr.
Advanced hint Because Windows should respond to both WM_PAINT and WM_PRINTCLIENT, a more general rule for writing a WM_PAINT handler is this:

case WM_PRINTCLIENT:
    OnPaint((HDC)wParam, NULL);
    break;
case WM_PAINT:
{
    PAINTSTRUCT ps;
    BeginPaint(hwnd, &ps);
    OnPaint(ps.hdc, &ps.rcPaint); // It's a good idea to manage the update area. Other PAINTSTRUCT fields are of less usefulness.
    EndPaint(hwnd, &ps);
}
break;

// Somewhere else
void OnPaint(HDC dc, RECT* rcUpdate)
{
    …
}

Metafiles Next Chapter Dialog Boxes People are familiar with dialog boxes. They are the grey windows that pop up on Windows systems to display messages, and allow the user to set parameters. There are 3 types of dialog boxes: modeless, modal, and system modal. - Modal - Modal dialog boxes are generally used inside a program, to display messages, and to set program parameters. Modal dialog boxes come to the front of the screen, and you may not use the program while the modal dialog box is open. To continue using the program, the modal dialog box must be closed. - System Modal - System modal dialog boxes are like modal boxes, except that they supersede the entire desktop area. When a system modal dialog box is open, nothing else on the screen can be clicked or selected. - Modeless - Modeless dialog boxes can be deselected, and control can be taken away from a modeless dialog box and transferred to some other window. Modeless dialog boxes are frequently used as a fast and easy way to create a window, without having to register a window class. Modeless dialog boxes are common in the Windows control panel. MessageBox The simplest type of dialog box is the MessageBox function. The MessageBox function takes 4 parameters: a handle to a parent, a message, a title, and an option. If the parent handle is NULL, the message box is modeless. If you provide a handle for a parent window, the MessageBox can become modal to the parent window.
MessageBox dialog boxes have a number of different options that can be specified: button types, icons, modality, and text justification. These options are specified as bit flags that are combined by bitwise-ORing them together.

Buttons

Message boxes can have standard OK or Cancel buttons, a "Yes, No, Cancel" configuration, or a number of derivatives. Only one primary button scheme can be used per message box:

- MB_ABORTRETRYIGNORE: The message box contains three push buttons: Abort, Retry, and Ignore.
- MB_CANCELTRYCONTINUE: Same as MB_ABORTRETRYIGNORE, but preferred on Windows 2000/XP.
- MB_OK: The message box contains an "OK" button.

In addition, a message box can have an extra "Help" button, specified with the MB_HELP flag.

A "default button", a concept that we will see frequently in this chapter, is the button that is automatically selected when a dialog box is opened. Windows provides the ability to set the default button to any of the buttons on a message box by using the MB_DEFBUTTONx macros. Here is an example:

MessageBox(NULL, "This is a Test", "Test", MB_OKCANCEL|MB_HELP|MB_DEFBUTTON2);

This will produce a message box with an "OK", a "Cancel", and a "Help" button, with the "Cancel" button automatically selected.

Icons

A message box may have no icon, or it may have one; you shouldn't specify multiple icons for a single message box. To display an icon in the message box, specify one of the following values. The different icons, according to MSDN, are:

- MB_ICONEXCLAMATION: An exclamation-point icon appears in the message box.
- MB_ICONINFORMATION: An icon consisting of a lowercase letter i in a circle appears in the message box.
- MB_ICONQUESTION: A question-mark icon appears in the message box.
- MB_ICONERROR: A stop sign icon appears in the message box.
- MB_ICONHAND: A stop sign icon appears in the message box.

Modality

Finally, the MessageBox can be defined as being modal, modeless, or system modal by using another identifier: MB_APPLMODAL, MB_SYSTEMMODAL, or MB_TASKMODAL. MB_APPLMODAL is the default value, and only works if a parent window handle was passed to the function.
There are a number of other options available; check MSDN for more details.

Dialog Box Procedures

Dialog box procedures are slightly different from window procedures. Specifically, they return BOOL values instead of LRESULT values. Also, dialog boxes do not have a default message-processing function, because messages don't always need to be handled: Windows manages dialog boxes, and Windows will handle the unused messages. If a dialog box procedure processes a certain message, it should return TRUE; if the message is not processed, it should return FALSE. Also, dialog boxes do not get a WM_CREATE message, but instead get a WM_INITDIALOG message. Furthermore, when a dialog box has finished its business, it should call the EndDialog function. Here is an example of a skeleton dialog box procedure:

BOOL CALLBACK MyDlgProc(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch(msg)
    {
        case WM_INITDIALOG:
            return TRUE;
        case WM_DESTROY:
            EndDialog(hDlg, 0);
            return TRUE;
    }
    return FALSE;
}

Creating Modal Dialog Boxes

Once a dialog box procedure has been defined, a dialog box can be created by calling either the DialogBox or DialogBoxParam function. These functions return an INT_PTR value, which is the integer that was passed to the EndDialog function in the dialog box procedure. The DialogBox function will not return until the dialog box is closed. This means, essentially, that the program is frozen in time until we close the dialog box. The DialogBox function requires 2 handles: the module instance handle and the handle of the parent window. It also requires a string naming the resource where the dialog box is defined. The last argument to DialogBox is a pointer to the dialog box procedure that you have already defined. To pass a parameter to a dialog box, the DialogBoxParam function can be used.
DialogBoxParam has all the same parameters as the regular version, except that it takes a fifth argument: a pointer-sized value that will be passed as the LPARAM of the WM_INITDIALOG message.

Indirect Dialog Boxes

DialogBox and DialogBoxParam both require that the dialog box be defined in a resource. However, if you want to build the dialog box on the fly, you can use the DialogBoxIndirect or DialogBoxIndirectParam functions. When defining a dialog box indirectly, we need to fill out a DLGTEMPLATE structure and pass a pointer to that structure to the function, in place of a resource identifier. The DLGTEMPLATE contains fields for determining some of the characteristics of the dialog box, such as its dimensions and screen location. The DLGITEMTEMPLATE structure is used to define the individual dialog box items. For more information on this subject, search MSDN.

Creating Modeless Dialog Boxes

Modeless dialog boxes are a different beast altogether, and are more like windows than dialog boxes. First, we need to modify the message loop to ensure that dialog box messages are routed correctly:

while(GetMessage(&msg, NULL, 0, 0))
{
    if(!IsDialogMessage(hDlg, &msg))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}

Now, there are 2 ways we can define a modeless dialog box in a resource script: with a class or without. We will discuss each in turn.

Without Class

We can define a dialog box in a resource script with the DIALOG keyword. The resource will have an ID associated with it (either a number or a string), and this ID can be passed directly to the CreateDialog function.

With Class

If we want to define a modeless dialog box in terms of a window class, a few additions make the job easier. First, we create a WNDCLASS structure with the information about our dialog box. There is one difference, however: we must set the cbWndExtra field to the value DLGWINDOWEXTRA:

wnd.cbWndExtra = DLGWINDOWEXTRA;

Then, we register the class as normal.
Since we are registering our window classes like normal windows, it shouldn't come as a surprise that Modeless dialog boxes use a regular window procedure, and not a dialog box procedure. Now, Windows identifies classes by name, so we should remember the name of our class. Let's say we named our class "MyDlgClass". We could create a dialog box resource as such: MYDLGCLASS DIALOG DISCARDABLE 100, 100, 200, 200 CAPTION "My Dialog Box" CLASS "MyDlgClass" FONT 8, "MS Sans Serif" BEGIN ... END Notice the field that says "CLASS"? This is the same string that we used in our WNDCLASS structure to name the class. It is important that these two strings be identical, because Windows needs this string to link the WNDCLASS and the dialog box resource together. Notice also that we used the string "MYDLGCLASS" to identify the dialog resource. This isn't mandatory, but it does make things convenient later on. Now, instead of calling CreateWindow, we will call the easier-to-use function CreateDialog. We do not use the DialogBox function, because CreateDialog returns immediately, and doesn't halt program execution. Here is an example: HWND hDlg; hDlg = CreateDialog(hInst, "MyDlgClass", hwndParent, MyDlgProc); Here, we are saying that "hInst" is the instance handle of the application, and "hwndParent" is the handle to the parent window of our dialog box. If the hwndParent parameter is NULL, the dialog box won't have a parent. When the modeless dialog box is finished, it calls "DestroyWindow", not "EndDialog", like a modal dialog box would. Common Dialog Boxes The Common Dialogs is a library of functions that automatically produce some of the most common dialog boxes in Windows. This is an effort to make some amount of continuity between different programs, so that each different program doesn't create its own proprietary "File Open" dialog, for instance. Each Common Dialog generally has a single function that takes a pointer to a structure. 
This structure is defined specifically for each different dialog. The common dialogs can be added to a project by including the <commdlg.h> header file and linking to the comdlg32.lib library. Some of the common dialogs available through this library are the "Choose Font" dialog box, the "File Open" and "File Save" boxes, and the "Color Palette" dialog box.

ChooseColor

The ChooseColor function brings up the color palette window, and returns a 32-bit color value to your program.

BOOL ChooseColor(LPCHOOSECOLOR lpcc);

ChooseColor takes a single argument, a pointer to a CHOOSECOLOR structure. This structure is initialized with various values, and when the function returns, the CHOOSECOLOR structure contains the selected color value.

GetOpenFileName and GetSaveFileName

These two functions bring up the familiar file-open and file-save dialog boxes found in nearly every Windows application.

BOOL GetOpenFileName(LPOPENFILENAME lpofn);
BOOL GetSaveFileName(LPOPENFILENAME lpofn);

Both of these functions take a pointer to an OPENFILENAME structure. This structure controls such things as the file extensions that may be loaded and the starting path to look in. When the function returns, the structure will contain the name of the file selected. Once your program has this information, you can use the File I/O API to access the file.

ChooseFont

The ChooseFont function brings up a familiar dialog box that allows the user to select a font and various font attributes such as size, underline/bold/italics, color, etc. This function takes a pointer to a CHOOSEFONT structure.

BOOL ChooseFont(LPCHOOSEFONT lpcf);

Dialog Box Resources

Dialog boxes can be specified in a resource script, which handles the tricky task of creating all the various child windows (buttons, edit boxes, etc.) that a dialog box may contain. This process is described in detail in the Resource Script Reference in the appendix.
Here, we will discuss some of the basics of using a resource script to define a dialog box.

The DIALOG keyword

A dialog box resource is specified with the DIALOG (must be all caps) keyword. The DIALOG keyword is preceded by the resource identifier, and is followed by a series of dimension values:

ID_DLGBOX DIALOG X, Y, CX, CY

X and Y are the location coordinates of the upper-left corner of the dialog box, in relation to the upper-left corner of the screen. Remember, all the coordinates start at (0,0) in the upper-left hand corner. The next set of numbers, CX and CY, are the dimensions of the dialog box. These dimensions do not include the title bar (if any), so setting your CY value to 0 will make a dialog box that is only a title bar.

+---------------> X
|
|   ["DialogBox"  [_][O][x]]
|   |                      |
|   |                      |
v   |                      |
Y   |                      |
    +----------------------+

After the DIALOG declaration, there are a number of other fields that can be filled in to provide information about your dialog box:

ID_DLGBOX DIALOG 100, 100, 200, 150
STYLE WS_OVERLAPPED | WS_CAPTION | WS_VISIBLE
CAPTION "Title Bar Text"
FONT 8, "MS Sans Serif"

The STYLE declaration contains all the window styles, bitwise-OR'd, that you would have used in the WNDCLASS structure or in the style field of the CreateWindow function. All the same values are available. The CAPTION is the title of the dialog box. The FONT is the point size and the TrueType font to be used on all the surfaces of the dialog box. Any font and size can be specified, although if the font is too big, your dialog box will be very annoying. Now, once we have our dialog box sized and shaped the way we want it, we can start to fill it with control buttons, edit boxes, and all sorts of other goodies. First, we use the BEGIN and END tags:

ID_DLGBOX DIALOG 100, 100, 200, 150
STYLE WS_OVERLAPPED | WS_CAPTION | WS_VISIBLE
CAPTION "Title Bar Text"
FONT 8, "MS Sans Serif"
BEGIN
    ...
END

Next, we can start to fill in the dialog box with buttons, checkboxes, or whatever we want, using the following format:

ID_DLGBOX DIALOG 100, 100, 200, 150
STYLE WS_OVERLAPPED | WS_CAPTION | WS_VISIBLE
CAPTION "Title Bar Text"
FONT 8, "MS Sans Serif"
BEGIN
    PUSHBUTTON "OK", IDOK, 10, 10, 50, 15, WS_TABSTOP
    CHECKBOX "Box 1", IDC_CB1, 10, 30, 50, 15, WS_TABSTOP
    EDITTEXT IDC_EDIT1, 10, 50, 100, 100
END

After each declaration, you may optionally include one or more style flags to specify how you want a particular control to appear. The WS_TABSTOP identifier specifies which controls can be selected when you press the TAB key on the keyboard. When you press the TAB key, control switches among the dialog box controls in the same order that they are specified in the resource script (top to bottom).

Next Chapter

Input-Output

Many of the previous chapters have attempted to shed some light on the Windows graphical interface, but this chapter is going to start a detour into the inner workings of the Windows operating system. In this chapter, we are going to talk about input and output routines. This includes (but is not limited to) file I/O, console I/O, and even device I/O.

File API

Files, like everything else on a Windows platform, are managed by handles. When you want to read a file or write to one, you must first open a handle to that file. Once the handle is open, you may use the handle in read/write operations. In fact, this is the same with all I/O, including console I/O and device I/O: you must open a handle for reading/writing, and you must use the handle to perform your operations.

CreateFile

We will start with a function that we will see frequently in this chapter: CreateFile. CreateFile is the generic function used to open I/O handles in your system. Even though the name doesn't indicate it, CreateFile is used to open console handles and device handles as well.
As the MSDN documentation says:

The CreateFile function creates or opens a file, file stream, directory, physical disk, volume, console buffer, tape drive, communications resource, mailslot, or named pipe. The function returns a handle that can be used to access an object.

Now, this is a powerful function, and with that power comes a certain amount of difficulty in using it. Needless to say, CreateFile is a little more involved than the standard C library's fopen.

HANDLE CreateFile(
    LPCTSTR lpFileName,
    DWORD dwDesiredAccess,
    DWORD dwShareMode,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
    DWORD dwCreationDisposition,
    DWORD dwFlagsAndAttributes,
    HANDLE hTemplateFile);

As can be guessed, the "lpFileName" parameter is the name of the file to be opened. "dwDesiredAccess" specifies the desired access permissions for the file handle. In the most basic sense, for a file, this parameter can specify a read operation, a write operation, or an execute operation. However, don't be fooled: there are many different options that can be used here, for different applications. The most common values are GENERIC_READ, GENERIC_WRITE, and GENERIC_EXECUTE. These can be bitwise-OR'd to have read+write access, if needed. File handles can be optionally shared or locked. A shared file can be simultaneously opened and accessed by other processes; if a file is not shared, then other programs attempting to access the file will fail. The "dwShareMode" parameter specifies whether or not the file can be accessed by other applications. Setting dwShareMode to zero means that file access cannot be shared, and other applications attempting to access the file while the handle is open will fail. Other common values are FILE_SHARE_READ and FILE_SHARE_WRITE, which allow other programs to open read handles and write handles, respectively. The lpSecurityAttributes parameter is a pointer to a SECURITY_ATTRIBUTES structure. This structure can help to secure the file against unwanted accesses.
We will discuss security attributes in a later chapter. For now, you can always set this field to NULL. The dwCreationDisposition parameter would be better named "dwCreateMode" or something similar. This flag allows you to determine how the file is to be opened, according to different flag values:

- CREATE_ALWAYS - Always creates a new file. If the file already exists, it will be deleted and overwritten. If the file does not exist, it is created.
- CREATE_NEW - If the file exists, the function fails. Otherwise, it creates a new file.
- OPEN_ALWAYS - Opens the file, without erasing the contents, if the file exists. Creates a new file if the file does not exist.
- OPEN_EXISTING - Opens the file, without erasing the contents, only if the file already exists. If the file does not exist, the function fails.
- TRUNCATE_EXISTING - Opens the file only if it exists. When the file is opened, all the contents are deleted, and the file is set to 0 bytes long. If the file does not exist, the function fails. When opening with TRUNCATE_EXISTING, you must specify GENERIC_WRITE as the access mode, or the function will fail.

The dwFlagsAndAttributes parameter specifies a series of flags for controlling file I/O. If the CreateFile function is being used to create something that isn't a file handle, this parameter is not used and may be set to 0. For accessing a normal file, the flag FILE_ATTRIBUTE_NORMAL should be used. However, there are also options such as FILE_ATTRIBUTE_HIDDEN, FILE_ATTRIBUTE_READONLY, FILE_ATTRIBUTE_ARCHIVE, etc. Finally, the hTemplateFile parameter can be specified if you want the new file handle to mimic the attributes of an existing file handle. This can be set to NULL if not used.

ReadFile and WriteFile

Once a file handle is opened, we would like to interact with the specified file. We can do this most directly by using the ReadFile and WriteFile functions.
Both of them take similar parameters:

BOOL ReadFile(
    HANDLE hFile,
    LPVOID lpBuffer,
    DWORD nNumberOfBytesToRead,
    LPDWORD lpNumberOfBytesRead,
    LPOVERLAPPED lpOverlapped);

BOOL WriteFile(
    HANDLE hFile,
    LPCVOID lpBuffer,
    DWORD nNumberOfBytesToWrite,
    LPDWORD lpNumberOfBytesWritten,
    LPOVERLAPPED lpOverlapped);

In both, the hFile parameter is the handle to the file that we obtained with CreateFile. The lpOverlapped parameter is used only for a special I/O mode known as "overlapped I/O mode", which we will discuss later. For simple I/O, the lpOverlapped parameter can be set to NULL. In ReadFile, lpBuffer is a pointer to a generic buffer to receive the data. This data may not be character data, so we don't call it an LPSTR type. "nNumberOfBytesToRead" is the number of bytes that should be read, and "lpNumberOfBytesRead" receives the actual number of bytes that were read. If lpNumberOfBytesRead receives zero, the file has no more data in it. In WriteFile, the lpBuffer parameter points to the data that should be written into the file. Again, it isn't specifically character data. nNumberOfBytesToWrite is the maximum number of bytes to write, and lpNumberOfBytesWritten returns the number of bytes that were actually written to the file.

CloseHandle

When you are done with a file handle, you should close it with the CloseHandle function. CloseHandle takes only one parameter: the handle you wish to close. If you do not close your handle, Windows will automatically close it when the program exits. However, it is a more expensive operation for Windows to do it for you, and can waste time on your system. It is a good idea to always explicitly close all your handles before you exit your program. Failure to close a handle is known as a "handle leak"; handle leaks are a common form of resource leakage that can cause your program, and your entire system, to lose resources and operate more slowly.
The handle itself occupies only 32 bits of information, but internally the kernel maintains a large amount of data and storage for every handle. Failure to close a handle means that the kernel must keep maintaining all the associated information about it. It also costs the kernel additional time and resources to check through all the old unused handles when it is looking for information about a current handle.

Memory-Mapped Files

Memory-mapped files provide a mechanism to read and write a file using regular pointers and array constructs. Instead of reading from the file using ReadFile, you can read from the file through a memory pointer. The system does this by reading the file into a memory page, and then writing the changes made to that page back to the physical disk. There is a certain amount of additional overhead to read the file into memory at first, and to write it back after the mapping is complete. However, if there are many accesses to the file, it can be much more convenient in the long run.

Overlapped I/O

"Overlapped" I/O is the term Microsoft uses to describe asynchronous I/O. When you want to do I/O, either to a file or to an external device, you have two options:

- Synchronous (non-overlapped) - You request the I/O from the system and wait until the I/O has completed. The program stops running until the I/O has completed.
- Asynchronous (overlapped) - You send a request to the system, and the system completes that request in parallel with your program. Your program can continue to do processing work, and the system will automatically send notification when your request has been completed.

Synchronous I/O is much easier to use and is much more straightforward. In synchronous I/O, things happen sequentially, and when the I/O function has returned, you know that the transaction is complete.
However, I/O is typically much slower than any other operation in your program, and waiting on a slow file read or a slow communications port can waste lots of valuable time. In addition, if your program is waiting for a slow I/O request, the graphical interface will appear to hang and be non-responsive, which can annoy the user. Programmers can avoid these delays by using dedicated threads or a thread pool to execute synchronous I/O operations. But threads have significant overhead, and creating too many of them exhausts system resources. Asynchronous I/O avoids this overhead, and is thus the preferable API for high-performance, high-load server applications. Asynchronous I/O is more complicated to use: it requires the use of the OVERLAPPED structure, and the creation of a handler function that will be called automatically by the system when the I/O is complete. However, the benefits are obvious in the efficiency of the method. Your program can request multiple transactions without having to wait for any of them to complete, and it can also perform other tasks while the system is performing the requested work. This means that the program will appear more responsive to the user, and that you can spend more time on data processing and less time waiting for data.

Console API

Allocating a Console

A console can be allocated by calling the AllocConsole function. Normally we need not do so if we are creating a "console process" (one whose entry point is the main function), because such processes are already attached to a console. However, we can create a console for a "GUI process" (one whose entry point is WinMain) and perform I/O operations on the newly created console. It should be noted that each process can only be associated with one console; if the process is already attached to a console, calling AllocConsole will return FALSE. After calling AllocConsole, the Windows Command Prompt window will appear. A console can be freed by calling FreeConsole.
Getting a Console Handle

Upon creation of the console, the standard output, standard input, and standard error handles (we call them the "standard devices") are initialized. These handles are essential for any console I/O operation. They can be obtained by calling GetStdHandle, which accepts a parameter specifying which standard device handle to obtain. The parameter can be any of the following:

- STD_OUTPUT_HANDLE - Specifies the standard output device, which is used for outputting data to the console.
- STD_INPUT_HANDLE - Specifies the standard input device, which is used for reading input from the console.
- STD_ERROR_HANDLE - Specifies the standard error device, which is mainly used for outputting error messages.

If the function succeeds, the return value is a handle to the standard device specified. If it fails, it returns INVALID_HANDLE_VALUE.

High-Level I/O

The <stdio.h> or <iostream> (C++ only) header files contain the functions typically used for high-level console I/O. High-level I/O is typically "buffered". Such functions include printf, scanf, fgets, etc. The fread and fwrite functions can also be used with the stdin, stdout, and stderr streams, but note that they operate on the same buffered streams; to do truly unbuffered I/O on a stream, its buffering must first be disabled with setvbuf. It is generally not advisable to combine the use of high-level and low-level I/O, however. These functions are designed to be portable and act as an abstraction over the low-level system I/O functions.

Low-Level I/O

Low-level console I/O can be done using several API functions such as WriteConsole, ReadConsole, ReadConsoleInput, etc.
BOOL WriteConsole(
    HANDLE hConsoleOutput,
    const VOID *lpBuffer,
    DWORD dwNumberOfCharsToWrite,
    LPDWORD lpNumberOfCharsWritten,
    LPVOID lpReserved
);

BOOL ReadConsole(
    HANDLE hConsoleInput,
    LPVOID lpBuffer,
    DWORD dwNumberOfCharsToRead,
    LPDWORD lpNumberOfCharsRead,
    LPVOID pInputControl
);

Please note that the "Chars" referred to here are counts of TCHARs, which can be 2 bytes wide when UNICODE is defined; they are NOT counts of bytes. The ReadConsoleInput function can be used to read keystrokes, which can't be done with the C or C++ standard libraries. There are many more functions which provide powerful I/O capabilities.

Colors and Features

There are many useful API functions that provide additional control over the console. One of the more commonly used is SetConsoleTitle, which sets the console title text. We can also alter the position of the cursor by using the SetConsoleCursorPosition function, output text with different foreground and background colors with SetConsoleTextAttribute, and change the size of the screen buffer with SetConsoleScreenBufferSize. For extensive documentation of the entire Console API, consult MSDN.

Device I/O API

Interaction between a program and a device driver can be complicated. However, there are a few standard device drivers that may be used to access standard ports and hardware. In most instances, interacting with a port or a piece of hardware is as easy as opening a handle to that device, and then reading from or writing to it like a file. In most instances, these ports and devices can be opened using the CreateFile function, by passing the name of the device instead of the name of a file.

Getting a Device Handle

Device I/O Functions

Warnings about Device I/O

Completion Ports

This page of the Windows Programming book is a stub. You can help by expanding it.

Next Chapter

File Management

This chapter will discuss some of the details of file and directory management.
Some topics discussed will include moving and deleting files and directories, enumerating files, locking files, encrypting files, and accessing the Recycle Bin.

File Attributes

Directory changes

ReadDirectoryChangesW can be used to watch a directory (or volume). It can deliver notifications when some file action is performed in the target directory.

File Enumeration

Locking Files

Encrypting Files

Compressing Files

Next Chapter

Memory Subsystem

C programmers will undoubtedly be familiar with the stdlib memory allocation functions: malloc, realloc, calloc, etc. These functions are built on top of a number of other functions in the Win32 API that deal with memory segments.

Windows Memory

Next Chapter

Multitasking

Processes and Threads

- CreateProcess, etc

Jobs

Threads

Among its parameters, the CreateThread function accepts:

- a pointer to the function to be executed within the thread.
- a pointer to a variable to be passed to the thread's function.

The CreateThread function creates a new thread for a process. The creating thread must specify the starting address of the code that the new thread is to execute. Typically, the starting address is the name of a function defined in the program code. This function takes a single parameter and returns a DWORD value. A process can have multiple threads simultaneously executing the same function. The creating thread can pass a value as an argument to the thread function. This could be a pointer to any type of data or structure, or it could be omitted altogether by passing a NULL pointer and deleting the references to the parameter in the thread function. It is risky to pass the address of a local variable if the creating thread exits before the new thread, because the pointer becomes invalid. Instead, either pass a pointer to dynamically allocated memory or make the creating thread wait for the new thread to terminate.
Data can also be passed from the creating thread to the new thread using global variables. With global variables, it is usually necessary to synchronize access by multiple threads.

Passing Parameters

Thread Local Storage (TLS)

Synchronization

Events

Mutexes

Critical Sections

Spin Locks

Debugging

Fiber

Next Chapter

Interprocess Communication

Pipes

Unnamed Pipes

Named Pipes

Mailslots

Sockets

Next Chapter

Steps to Making an MDI

These are the steps to making an MDI application. We will explain each of them.

- Register the frame and child window classes.
- Modify the message loop.
- Create the frame window.
- Create the MDI client.
- Create MDI child windows with the client.

Register the Window Classes

Frame windows get created like any other windows. Remember, however, that frame windows are just a backdrop for the action that is happening in the child windows, so you shouldn't get too crazy or fancy with your background artwork.

The Client Window

Next Chapter

Registry API

The Registry

Keys must be opened before they can be used. To do this, we use either of:

- RegOpenKeyEx() - To open any key.
- RegCreateKeyEx() - To create a key, or open it if it already exists.

Values

The value of a key can be obtained by calling RegQueryValueEx(). The RegSetValueEx() function is used to associate a value with a key.

Key Hierarchy

Next Chapter

Security API

This page of the Windows Programming book is a stub. You can help by expanding it. This page will talk about the Windows security API.

Security Overview

API

User Permissions

Editing ACLs

Security Reminders

Next Chapter

Winsock

Winsock is the name of the library in Windows that handles the Berkeley Sockets API. Technically, this library is not part of the Win32 API, although there are some Windows-specific issues that need to be considered when programming a Winsock application.

Making a Winsock Project
Microsoft Foundation Classes (MFC)

In essence, MFC is an SDK interface: a library consisting of a set of classes that act as wrappers around portions of the Windows API, so that C++ programmers may program Windows using some concepts of the object-oriented programming (OOP) paradigm and the C++ language (the Win32 API is based on C, as seen in the C and Win32 API section of the book). One should learn the Win32 API, or at least have some idea of it, since some functions are absent from MFC, and knowing the SDK will help you better understand MFC. Some tools, such as Microsoft Visual Studio, are capable of automatically generating large amounts of MFC skeleton code for use in a project. Because of this, most MFC tutorials and reference materials teach the subject using the automated Visual Studio tools, and leave out some of the gritty details. In this book, where possible, we try to be tool-neutral. MFC was first oriented mostly toward enterprise-level programming projects, created in an age when most code was written in C and object-oriented programming was largely confined to the realm of Smalltalk. After the release of Visual Studio 6.0 and MFC 6.0, little was known about future support for MFC, since the company was favoring the .NET Framework; versions 7.0, 7.1, and 8.0 were mostly extensions to support the new OSs and to aid developers in migrating to the new framework. For a time, information on the future of MFC could only be gleaned from the June 2005 paper by Steve Teixeira (Microsoft Corporation), "MFC: Visual Studio 2005 and Beyond". With the release of Visual Studio 2008 Service Pack 1, Microsoft seems to once again be actively supporting MFC. Many users today find it acceptable for a low-complexity program to have a memory footprint of 30-80 MB (this is common in Java or .NET applications), slow response times, or dependencies "outside of your control" such as the Internet now provides.
It is therefore debatable whether the costs of using MFC in small applications outweigh the benefits the library provides. Much of the software made specifically for Windows today uses MFC. You should prefer the Win32 API SDK, or an alternative wrapper for it, if you do not intend to:
- Make use of a complex GUI, the document/view architecture, or complex controls. MFC will increase the use of system resources (memory use, executable and install size).
- Use other libraries that depend on MFC.
- Have a complex install for your application. To distribute an MFC-based project you must either build it with the static MFC libraries or distribute it with the needed MFC DLLs. There is no guarantee that your customer already has the required DLLs for your program. Old MFC-based programs should work with new MFC versions, but they do not always do so; your customer will not be very happy if, after installing your product, some of his old software begins hanging.

MFC and C++

The MFC design principle is an attempt at simplification. The wrapper classes were designed to simplify some tasks and automate others. As a consequence, a certain amount of the fine-tunable control available in the raw Win32 API was lost, or excessive automation was achieved. MFC has been recognized as having serious design flaws and inconsistencies, and it has not been actively maintained; the C++ language and its best practices have evolved, and today this constitutes a barrier to the utilization of the framework. Because MFC predates the standardization of the STL into the C++ language, it implements its own versions of the container classes, which are less complete and at times inconsistent. Owing to their simplicity, the MFC implementations tend to be faster, but you should still prefer the STL whenever you can: it makes the code more standard C++ and permits easier portability when converting the code to multi-platform.
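As a small illustration of the advice above, here is a standard C++ sketch using std::vector and std::string, which play the roles that CArray and CString play in MFC code (the MFC class names are mentioned only for comparison; this is plain portable C++, not MFC):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// std::vector replaces MFC's CArray; std::string replaces CString.
// Both work with the generic algorithms out of the box.
std::vector<std::string> sorted_names() {
    std::vector<std::string> names = {"mouse", "keyboard", "monitor"};
    names.push_back("printer");            // grows automatically, like CArray::Add
    std::sort(names.begin(), names.end()); // generic algorithm, no MFC needed
    return names;
}
```

Code written this way compiles unchanged on any platform with a standard C++ compiler, which is the portability benefit the paragraph above refers to.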
- Multiple Inheritance

The MFC class library does not use multiple inheritance and was not designed with full support for it. Since most MFC classes derive from CObject, using multiple inheritance will cause problems of ambiguity (any reference to CObject member functions will have to be disambiguated). Static member functions, including operator new and operator delete, must also be disambiguated. The best option is to avoid the use of multiple inheritance with MFC, but take a look at "Using C++ Multiple Inheritance with MFC" (msdn.microsoft.com) for the needed information on how to bypass the limitations.

MFC Conventions

MFC uses Hungarian notation. It uses prefixes, like "m_" to indicate a member variable or "p" to indicate a pointer, and the rest of the name is normally written out in CamelCase (the first letter of each word is capitalized).

- CObject as the Root for most MFC Classes

All the significant classes in MFC derive from the CObject class. CObject does not have any member data, but does provide some default functionality.

Using MFC

MFC requires header files that are separate from the standard <windows.h> header file. The core of the MFC system requires the inclusion of <afxwin.h>. Other header files that are of some use are <afxext.h> (for MFC extensions) and <afxcmn.h> (for the MFC common controls). Simply changing the header files, unfortunately, is still not enough: the MFC DLL libraries must be linked to by the project, because the DLL files contain the class definitions that will be used throughout every program. To use MFC, you must link to the MFC libraries.

stdafx.h

stdafx.h is the standard include for MFC projects - that is, if you create a new MFC project, a stdafx.h will automatically be created for you. It will include all the rest of the necessary MFC header files.

theApp

extern CYourAppClass theApp;

Use this declaration in the header file of your application class, and then include that header wherever you need to use theApp.
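A sketch of these naming conventions follows; the class and its members are invented purely for illustration (this is plain C++, with std::string standing in for CString so the example stays portable):

```cpp
#include <cassert>
#include <string>

// Illustrative only: MFC-style Hungarian notation.
// "m_" marks a member variable, "n" an integer, "str" a string,
// "p" a pointer; the rest of the name is in CamelCase.
class CUserRecord {
public:
    int          m_nUserCount;   // integer member
    std::string  m_strUserName;  // string member
    CUserRecord* m_pNextRecord;  // pointer member

    CUserRecord() : m_nUserCount(0), m_strUserName("guest"), m_pNextRecord(nullptr) {}
};
```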
You could call the AfxGetApp function to get a pointer to theApp, but a more efficient method of accessing members of your application is to make a pointer to theApp a member variable of the class which needs it -- for example:

class CMyDialog : public CDialog
{
    // other class stuff here...
// Attributes
public:
    CMdiApp* m_pApp;
};

Make sure you initialize m_pApp in the constructor, or else you will be accessing a NULL pointer:

CMyDialog::CMyDialog(CWnd* pParent /*=NULL*/)
    : CDialog(CMyDialog::IDD, pParent)
{
    //{{AFX_DATA_INIT(CMyDialog)
    //}}AFX_DATA_INIT
    // Outside the special-format comments above...
    m_pApp = (CMdiApp*)AfxGetApp();
}

And voila! Now any time you need access to your application, you've got it:

m_pApp->m_nMemberVar;
m_pApp->MemberFunction(nParam1, strParam2);

Getting Started

First and foremost, it must be mentioned that MFC is not the brand of C++ programming that "looks like C". MFC makes heavy use of the object-oriented features of C++, which can seem "dense" or even unreadable to a new C++ programmer. It is highly recommended that the reader become very familiar with C++ concepts such as classes and hierarchies now, if they are not familiar concepts yet. The root class for MFC is the CObject class. CObject itself does not support multiple inheritance, but derivative classes can use it (with the caveats noted earlier). Each application begins in a class derived from CWinApp. Every program must have a CWinApp-derived class, and each application may only have one. CWinApp contains a number of functions for initializing an application and controlling the instance handle (similar to the HINSTANCE parameter of the WinMain function). Programs that want to display a window must utilize a derivative of the CWnd class.

Basic MFC Program

We will outline here a basic MFC program that will create a simple window, but won't handle any user input. From this basic outline, we will be able to tackle more difficult subjects.
#include <afxwin.h> // basic MFC include

// class derived from CFrameWnd, which is derived from CWnd
class Basic_Window : public CFrameWnd
{
public:
    Basic_Window()
    {
        Create(NULL, "Basic MFC Window");
        // In a Unicode build you might instead use
        // Create(NULL, _T("Basic MFC Window"));
    }
};

// class derived from CWinApp, which is the main instance of our application
class MyProgram : public CWinApp
{
    // a pointer to our window class object
    Basic_Window *bwnd;
public:
    // this is essentially our "entry point"
    BOOL InitInstance()
    {
        bwnd = new Basic_Window();
        m_pMainWnd = bwnd;
        m_pMainWnd->ShowWindow(1);
        return 1;
    }
};

// the program class instance pointer
MyProgram theApp;

As we see here, we are immediately relying on class definitions and inheritance, so it is a good idea for readers who are not completely familiar with these topics to brush up on them before continuing.

Global Variables

MFC provides a number of global variables that are instantiated and then utilized in the underlying MFC framework to compile your program. We can see this in our basic example code when we use the variable m_pMainWnd.

Cleanly closing an MFC application

The generic solution is PostQuitMessage([exit code]);, but take care to clean up any lingering resources first (closing documents, deallocating memory and resources, destroying any additional windows created, etc.). On the other hand, using AfxGetMainWnd()->PostMessage( WM_CLOSE ); can be a better method in some situations, since it triggers the correct shutdown sequence. This is especially important in MDI/SDI applications, because it gives documents a chance to prompt for a save before exit, or gives the user a chance to cancel the exit.

Wait mouse cursor

To display a busy/wait mouse cursor, MFC has added a simple helper class. Instantiate a CWaitCursor object inside a function and a wait cursor is displayed for the duration of that function; the automatic destruction of the object will restore the cursor state.
CWaitCursor aWaitCursor;

Threads

As should be expected, MFC also wraps the Win32 thread primitives in special classes and functions of its own. For generic information on threads and C++, see the C++ Programming Wikibook section on Multitasking. MFC divides threads into worker threads and GUI threads.

Worker threads

Worker threads are especially useful for performing background tasks, or any asynchronous work that doesn't require user intervention: a print job, calculations, waiting for an event, etc. To create a worker thread, the simplest way is to implement a function that will perform the desired work, and then create the thread with AfxBeginThread(), passing it that function.

Communication

Exit strategy

Never use TerminateThread() unless you have no other choice to guarantee a thread's termination. It is poor practice and dangerous. A proper thread exit can be achieved by a normal return from the thread function (completion), or by signaling the thread to return prematurely. Since we are working on Windows, on CPUs where aligned 32-bit accesses are atomic, it is considered safe (but not portable) to use a shared bool variable to indicate to a thread that it should exit; any other available synchronization method can be used as well. Since the only real issue with thread termination arises when aborting a job or exiting the program while threads are still running, when dealing with worker threads the class that created the thread should, in its destructor, signal the thread to abort and then use WaitForSingleObject() to wait for the thread to terminate.

Classes Hierarchy

MFC Classes Hierarchy

CObject

The root/base class for most of the MFC library. Some of the features it makes available to the programmer are serialization support, run-time class information, and object diagnostic output. Any class derived from CObject can exploit these features.
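The exit-flag pattern described above can be sketched in portable C++. Here std::thread and join() stand in for the Windows-specific AfxBeginThread() and WaitForSingleObject(), and std::atomic replaces the reliance on 32-bit accesses being atomic; the function names are invented for illustration:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Shared flag the owner sets to ask the worker to return early.
// std::atomic makes the cross-thread access well-defined on any platform.
std::atomic<bool> g_abort{false};
std::atomic<int>  g_iterations{0};

void WorkerProc() {
    while (!g_abort.load()) {   // poll the exit flag between work items
        ++g_iterations;         // stand-in for one unit of background work
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int RunAndStopWorker() {
    std::thread worker(WorkerProc);  // plays the role of AfxBeginThread()
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
    g_abort.store(true);             // signal the thread to return
    worker.join();                   // plays the role of WaitForSingleObject()
    return g_iterations.load();
}
```

The key point is the same as in the MFC pattern: the thread is never killed from outside; it notices the flag and returns on its own, so destructors and cleanup code in the thread still run.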
Class CObject in AFX.H

Field Summary
- static const AFX_CORE_DATA CRuntimeClass classCObject

Constructor Summary
- protected CObject()
- private CObject( const CObject& objectSrc )
- virtual ~CObject()

Method Summary
- void PASCAL operator delete( void* p )
- virtual CRuntimeClass* GetRuntimeClass() const
- BOOL IsKindOf( const CRuntimeClass* pClass ) const
- BOOL IsSerializable() const
- void* PASCAL operator new( size_t, void* p )
- void* PASCAL operator new( size_t nSize )
- virtual void Serialize( CArchive& ar )

CCmdTarget
CWinThread
CWinApp
CWnd

CListCtrl

This class encapsulates the functionality of a list view control: a control that displays a collection of items, each consisting of an icon and a label.

Class CListCtrl in AFXCMN.H

CObject
 |
 +--CCmdTarget
      |
      +--CWnd
           |
           +--CListCtrl

class CListCtrl extends CWnd

Fields inherited from class CWnd:
m_hWnd, wndTop, wndBottom, wndTopMost, wndNoTopMost, m_hWndOwner, m_nFlags, m_pfnSuper, m_nMsgDragList, m_nModalResult, m_pDropTarget, m_pCtrlCont, m_pCtrlSite, messageMap

Fields inherited from class CCmdTarget:
messageMap, commandMap, dispatchMap, connectionMap, interfaceMap, eventsinkMap, m_dwRef, m_pOuterUnknown, m_xInnerUnknown, m_xDispatch, m_bResultExpected, m_xConnPtContainer

Fields inherited from class CObject:
classCObject

CToolBar

This class encapsulates the functionality of a control bar that has a row of bitmapped buttons and/or separators. CToolBar objects are usually embedded members of frame-window objects derived from the class CFrameWnd or CMDIFrameWnd.

CTreeCtrl
CException
CArray
CFile
CDC
CGdiObject
CFont
CHttpArgList

Subclassing

Subclassing is a standard technique for customizing the behavior of a class.

Window Subclassing

Window message route

The mechanism by which MFC routes messages is also called the Message Map system; it was created to wrap the old C API and reduce the level of complexity for programmers.
// a Message Map example
BEGIN_MESSAGE_MAP( ClNOMainFrameWnd, CFrameWnd )
    ON_WM_SYSCOMMAND()
    ON_WM_CREATE()
    ON_WM_SIZE()
    ON_WM_CLOSE()
    ON_WM_MEASUREITEM()
    ON_WM_INITMENUPOPUP()
    ON_WM_MENUCHAR()
    ON_WM_DESTROY()
    ON_REGISTERED_MESSAGE( D_WM_REQUEST, OnMsgReqToShowWnd )
    ON_COMMAND( ID_APP_TRAYCLICK, OnTrayClick )
    ON_MESSAGE( C_WM_TIPACTION, OnTipOfTheDayAction )
END_MESSAGE_MAP()

Message map

The message map macros:
- ON_COMMAND and ON_BN_CLICKED - these two macros are effectively the same: both produce an entry that handles the Windows message WM_COMMAND, and the notification routing mechanism uses the command ID to decide where to route it. Notifications with a control notification code of zero (BN_CLICKED) are interpreted as commands.
- ON_UPDATE_COMMAND_UI - the ON_UPDATE_COMMAND_UI macro.

Component Object Model (COM)

COM and ActiveX

People familiar with Windows have doubtless heard a number of different terms used, such as COM, DCOM, COM+, ActiveX, OLE, etc. What precisely are all these things, and how are they related? Essentially, COM defines an interface that all languages must follow, if they want to participate in COM programs.

DCOM and COM+

This page of the Windows Programming book is a stub. You can help by expanding it. After the original versions of COM were introduced, there became a clear need to expand the functionality of the COM model, and to make some changes to the general framework. Later versions of COM were given various names, such as DCOM and COM+.

DCOM

DCOM stands for "Distributed COM", and is a protocol for linking COM components across a local area network (LAN). DCOM allows COM components to be stored on external "COM servers" and used from other computers in the network.

COM+

COM+ = COM + MTS

.NET

When the COM design team tried to create a framework that was easier to use than COM, more portable, and more universal, they created the .NET framework.
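The routing idea behind a message map can be illustrated with a small portable sketch: a table that maps message IDs to handler functions, looked up at dispatch time. This is not MFC code; the message IDs and handler names below are invented for illustration, and the real macros additionally walk the class hierarchy:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Invented message IDs, standing in for WM_CREATE, WM_CLOSE, etc.
enum { WM_FAKE_CREATE = 1, WM_FAKE_CLOSE = 2 };

// A tiny stand-in for BEGIN_MESSAGE_MAP/END_MESSAGE_MAP: incoming
// message IDs are looked up in a table and routed to the registered handler.
class MessageMap {
    std::map<int, std::function<std::string()>> handlers_;
public:
    void On(int msg, std::function<std::string()> fn) { handlers_[msg] = std::move(fn); }
    std::string Dispatch(int msg) {
        auto it = handlers_.find(msg);
        return it != handlers_.end() ? it->second() : "unhandled";
    }
};

std::string Demo(int msg) {
    MessageMap m;
    m.On(WM_FAKE_CREATE, [] { return std::string("OnCreate"); });
    m.On(WM_FAKE_CLOSE,  [] { return std::string("OnClose"); });
    return m.Dispatch(msg);
}
```

The MFC macros generate essentially such a lookup table at compile time, which is how a numeric Windows message finds its way to a named member function.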
Originally, .NET was supposed to be introduced as the next version of COM, but instead Microsoft made it into its own entity. COM development has essentially stopped, to focus more attention on the .NET platform. Because .NET has its roots in COM, it shares many of the guiding principles of COM. First, many different object-oriented languages can be compiled into a single intermediate language. This allows software written in C++, C#, VB.NET, and other .NET languages to be easily interfaced together to create mixed-language programs quickly. In addition, .NET has strong support for distributed computing (a la DCOM). The .NET platform is quickly being standardized, and is available on several different platforms. Therefore, .NET is not strictly a Windows-programming topic, and will likely not be covered much throughout the rest of this book.

Device Drivers

Topics in this part include driver history, driver stacks, and "arbitrary context". Useful background reading:
- Understanding the Windows Driver Model - an introduction to the basic concepts needed for WDM programming
- WDM I/O Concepts - understanding the I/O concepts needed for WDM programming
- Kernel-Mode Driver Framework - the .ISO download includes the Driver Development Kit (DDK)

Driver Structure

This page of the Windows Programming book is a stub. You can help by expanding it. Drivers, unlike user-mode programs, have a number of different entry points, and each entry point is specifically designed to handle a single type of interface. In this chapter, we will talk about the required interfaces, and we will also talk about adding additional interfaces.

DRIVER_OBJECT

Driver routines all receive a pointer to a DRIVER_OBJECT structure. The driver must fill in the appropriate fields of this structure so that Windows can learn the whereabouts of the different access routines, and any other information about the driver.
I/O Request Packets
Device Object Structures
DeviceExtensions
DriverEntry and DriverUnload
StartIo and AddDevice
Service Routines

Driver API

This page of the Windows Programming book is a stub. You can help by expanding it. It should come as no surprise that kernel-mode drivers cannot use the same functions as user-mode applications; familiar user-mode headers such as <stdio.h>, <iostream>, and <conio.h> are unavailable. Device drivers, for this reason, must rely on the available kernel-mode functions. This page will talk about some of the available libraries and functions that can be utilized in a driver: function prefixes, the Native API, string functions, the safe string library, UNICODE_STRING, and other kernel-mode functions.

Programming Shell Extensions

explorer.exe, the Windows shell program, has a number of different functions that can be used to cause your program to perform tasks like the shell would. We will run over a few of them here: ShellExecute, the taskbar, the system tray, and the Recycle Bin.

Extending IE

This page of the Windows Programming book is a stub. You can help by expanding it. Similarly to explorer.exe, Microsoft Internet Explorer, iexplore.exe, can also be extended by creating a Browser Helper Object (BHO). A BHO may be a single toolbar button, an entire toolbar, or even any number of other objects in the browser. BHOs are generally programmed using C++.

Programming Screen-savers

Screensavers are graphical programs that exercise the computer's screen while the computer is not in use, to prevent damage to the screen. Screensavers can, however, also be very aesthetic, interesting, and entertaining.

Batch Files

Sample FTP script

A simple batch program that will FTP one file. Do not name the script "ftp" (e.g. ftp.bat), or it will recurse when it invokes the FTP command.
set ftpUser=
set ftpPass=
set ftpSite=
set file=
@echo off
cls
: FTP the stuff
> script.ftp  ECHO USER %ftpUser%
>> script.ftp ECHO %ftpPass%
>> script.ftp ECHO put %file%
>> script.ftp ECHO quit
FTP -v -n -s:script.ftp %ftpSite%
@type NUL >script.ftp
: Delete scripts
del script.ftp /Q
cls

There are a number of other XML tags that can be used in a WSH script, such as <description>.

JScript

JScript, Microsoft's proprietary version of JavaScript, is a default language in Windows Script Host. JScript is very similar to JavaScript, and was originally created so that JavaScript programmers could put their skills to use scripting a local Windows environment.

Compiled HTML Help

In most programs, the menu bar will have a popup item called "Help". In the help menu, there is frequently an option for "Contents" or "Index" or "Topics". When you click on these options, it brings up a help box. These help boxes are known as "Compiled HTML Help Modules", or .chm files. These help files can be compiled from HTML source code using the Microsoft HTML Help Workshop, available as a free download from Microsoft.

Making HTML Help Modules

HTML help modules can be compiled from source files containing a mixture of HTML, DHTML, CSS and JavaScript. This is the exact same HTML that is used in writing web pages, so people with web experience will have a leg up in creating HTML help modules. HTML, CSS and JavaScript are languages that are very well suited for graphical applications, but their functionality is far less than that of C or VB. The Help Workshop contains a number of different options, as to whether you want to include an index window, whether you want to allow searching and indexing, whether you want to have a Table of Contents, et cetera. A default page may be chosen to be displayed when the Help Module is first loaded. Pages in a help module may use hyperlinks to link to other pages in the module, other files on the computer, or even resources from the internet.
HTML Help API

The HTML Help Workshop comes with a header file ("htmlhelp.h") and a library file ("htmlhelp.lib") that must be utilized in your project to be able to call HTML help modules from your program. Windows.h must be included before the Htmlhelp.h file, or the compiler will spew out errors. Htmlhelp.h defines a single function, HtmlHelp, that can be called from inside your program:

HWND WINAPI HtmlHelp(HWND hwndCaller, LPCTSTR pszFile, UINT uCommand, DWORD_PTR dwData);

This function returns a handle to the HTML help window, and your program can interact with this window at a relatively high level, if you want it to. The first parameter is the handle to the calling (parent) window. If you set this parameter, the help window will be a child of your window, and can communicate with it. If you set this to NULL, the help window will essentially be autonomous. The "pszFile" parameter is the filename of the target help module. The "uCommand" may contain a number of different commands, some of which are very complicated and involved. The simplest (and most common) is the HH_DISPLAY_TOPIC command, which displays help like normal, and allows the user to view and interact with the help module. The "dwData" may optionally contain additional data, depending on the command used. If we want to open a help module, located at C:\help\myhelp.chm, we can use the following call (note the doubled backslashes required in a C string literal):

hwndHelp = HtmlHelp(hwndParent, "C:\\help\\myhelp.chm", HH_DISPLAY_TOPIC, 0);

Now, let's say that we want to open a specific page inside our help module. Let's say that our module contains a page called "example.htm". We can open the help directly to that page by changing the destination string a little bit:

hwndHelp = HtmlHelp(hwndParent, "C:\\help\\myhelp.chm::example.htm", HH_DISPLAY_TOPIC, 0);

Furthermore, we can force the page to be displayed in a particular window of our help viewer, if we have created more than one window for it.
Let's say we have created a window called "SideWin" in our HTML help module. We can load our page (example.htm) into SideWin using the following syntax:

hwndHelp = HtmlHelp(hwndParent, "C:\\help\\myhelp.chm::example.htm>SideWin", HH_DISPLAY_TOPIC, 0);

For more advanced options, and a discussion of some of the other commands, you should check out the related material at MSDN.

Creating an HTML Help Module

The HTML Help Workshop has a row of buttons across the top. In order, from left to right, they are:
- New - create a new HTML file for editing
- Open - open an existing HTML file for editing
- Compile - compile the current project into a help module
- View - view the current help project; opens a help viewer window

If you click on the "New" button (or alternately select "File>New" from the menu bar), you will be prompted to select what type of new thing you want to create. To start, you should create a new project. Selecting to create a new project will open up a wizard that will walk you through the steps of creating a new project.

HTML Help Files

There are 3 different file types that may be included in an HTML Help project: HTML source files (.HTM), Table of Contents files (.HHC), and Index files (.HHK). Once you have created a new project, you may edit and add all of these types of files to your project. Each HTML help module may have only one .HHC and one .HHK file.

The Project Sidebar

When you start a new project and complete the wizard, you will see a sidebar on the left-hand side of the Help Workshop window that will allow you to edit the project options, the table of contents, and the index.

Decompiling

On Windows, a CHM file can be extracted to plain HTML with the command:

hh.exe -decompile extracted filename.chm

This will decompress all files embedded in filename.chm to the folder extracted. You can also use HTML Help Workshop to decompile CHM files (File -> Decompile...). 7-Zip will open .chm files as archives, allowing access to all content.
On Windows, select the topmost topic, right-click and select Print. In the popup dialog, select "Print this heading and all subtopics", and click OK. Before selecting a printer, look in %HOMEPATH%\Local Settings\Temp for a file named ~hh*.htm. This is the concatenated HTML. The image references will refer to the .chm file at the time of the print. On Linux systems which use apt as a packaging tool, a CHM file can be extracted to plain HTML with extract_chmLib (the first command is for a Debian-based OS):

$ sudo apt-get install libchm-bin
$ extract_chmLib tero.chm tero/

Resource Script Reference

This appendix page will attempt to list the different types of resources, and will attempt to show how to use those resources.

General construction

Resource script files are human-readable text files in either ANSI or Unicode (more strictly: UTF-16 with byte order mark (BOM)) format. To intermix different languages in ANSI format, a quirky #pragma exists to switch the code page in between. Unicode, the #pragma switch, and the LANGUAGE statement are only supported for Win32. A typical small file may look like:

#include <windows.h>
#define IDC_STATIC -1

100 ICON "ProgIcon.ico"

10 MENU
{ // or BEGIN
  POPUP "&File"
  {
    MENUITEM "E&xit", IDCANCEL
  }
} // or END

Using curly braces or BEGIN/END depends on your preference. The old, space-wasting style is the BEGIN/END pair, coming from the Pascal heritage of MacOS and Win16 API calls. C programmers typically prefer curly braces. The Visual Studio Resource Editor always generates BEGIN/END pairs, besides lots of housekeeping stuff.
Beginning with some header inclusions and #define statements, every resource is included as either

id_of_resource resource_type [memory management flags] "filename"

or

id_of_resource resource_type [memory management flags]
BEGIN
  subsequent data
END

Exceptions to this rule are:
- The LANGUAGE statement, which can be placed almost everywhere (Win32 only)
- The DIALOG and VERSIONINFO resources, which have additional statements between the heading line and BEGIN
- The STRINGTABLE resource, where no resource ID appears before the keyword; instead, every string is prefixed with an ID

id_of_resource and resource_type can each be either a string or a number. No quotes are used for these! A number is the preferred method for identification; all predefined resource types are numbers. But watch out! If you use an ID that is unknown to the resource compiler (let's say MANIFEST for Visual Studio 6), you will get neither error nor warning, and the resource is built with a string resource type. Windows XP will then not find the intended manifest when the program is started, and your program will show up in the old visual style. Conditional compilation with #if / #ifdef / #endif is also supported. Expressions for IDs are limited to very simple math; no boolean operators are permitted.

Under the hood

Resources are compiled into a three-level directory structure:
- Resource type (MENU, DIALOG etc.)
- Resource ID (the number ahead of the resource type - for STRINGTABLE, the ID of a group of up to 16 strings)
- Resource language (the currently active language, given by command-line option or LANGUAGE statement; Win32 only)

The content of the data that follows depends on the actual resource type; mostly, it's binary. Read access to the binary data of an arbitrary resource is done with:

FindResource()   // find the resource and get a handle to it
SizeofResource() // get the binary size
LoadResource()   // get a handle to the resource data
LockResource()   // get a pointer; Win32: this is a simple macro, Win16: this is a function call
…                // do something
UnlockResource() // Win32: this is a do-nothing macro, Win16: this is a function call
FreeResource()   // release

Because bitmap, icon, cursor, dialog, string table, and menu resources are not officially documented and a bit hard to parse, programmers should use the specialized resource-loading functions for these resource types instead. See the descriptions and examples for these types below.

Identifiers

Identifiers are generally named in a certain way, although the reader and all programmers are free to alter this naming scheme. It is simply a suggestion. Identifiers generally start with the prefix "ID", followed by a letter that denotes the type of identifier:
- IDS: A string resource
- IDM: A menu resource
- IDC: A command identifier
- IDD: A dialog box resource
- IDA: An accelerator table resource
- IDI: An icon or bitmap resource
- IDB: A bitmap resource
- ID: A custom resource, or an uncommon resource type.

Sometimes the command identifiers in a menu are given an "IDM_" prefix, to distinguish those commands from commands from other sources. There is no need to use symbolic identifiers, however. In some cases, identifiers complicate access to numerically adjacent controls in a dialog or menu. In any case, identifiers don't help non-English programmers read a software source: numbers never need translation, while both identifiers and numbers need explanation. IDs are allowed in the range 0..65535 and preferred in the range 1..32767.

LANGUAGE

This keyword has different scopes:
- Local (for one resource) if located below the resource line, like:

21 MENU
LANGUAGE 7,1 // or: LANG_GERMAN, SUBLANG_GERMAN
{
  POPUP "&Datei" // = "&File"
  …

This language applies to that menu only.
- Global (for all subsequent resources) if located somewhere else.

Language-neutral resources, like culture-free icons, VersionInfo, and manifests, should always be set to LANGUAGE 0,0 (or, more verbosely, LANG_NEUTRAL, SUBLANG_NEUTRAL). Always check images for culture dependency!
A typical mistake is the plug symbol for a mains-supplied notebook: it undoubtedly shows an American plug, even in Europe. Obviously, images containing letters or text are culture-dependent.

Memory Management Flags

There are some memory management flags inherited from Win16, like MOVEABLE, FIXED, etc. See LocalAlloc() for some of these flags.

DISCARDABLE

Resources are loaded into memory when the program is run. However, if a resource is not in use, and if Windows does not need it immediately, resources can optionally be unloaded from memory until needed. To specify that it is okay to unload an unused resource from memory, you may list the DISCARDABLE keyword with the resource. DISCARDABLE resources allow more efficient memory usage, but can slow down your program if they need to be reloaded from disk. The DISCARDABLE keyword is ignored for 32-bit Windows, but remains for compatibility. [1] 32-bit resources are never loaded as a whole, but mapped into memory.

Icons

Icons can be stored in a resource file using the ICON keyword. Here is a general example of using an icon in a resource script:

IDI_ICON<n> ICON [DISCARDABLE] "iconfile.ico"

Windows Explorer will display the binary executable with the first icon from the script. For instance, if we load two icons, as such:

IDI_ICON1 ICON DISCARDABLE "icon1.ico"
IDI_ICON2 ICON DISCARDABLE "icon2.ico"

And we define our macros as such in the corresponding resource.h:

#define IDI_ICON1 1
#define IDI_ICON2 2

The executable file will have icon1.ico as its icon. To load an icon from an executable module, assuming we have an instance handle to the module (hInst in the following example), we can get a handle to the icon as such:

HICON hIcon;
hIcon = LoadIcon(hInst, MAKEINTRESOURCE(IDI_ICON1));

This will return a handle to the icon associated with the identifier "IDI_ICON1". Icon identifiers are generally prefixed with "IDI_", which is short for "ID for an Icon". The second parameter to the LoadIcon() function is a pointer to a string.
String pointers are 32-bit values. However, if the most significant 16 bits are all zero, Windows will treat the value as a resource number, and not as a string pointer. To make the conversion between a 16-bit integer and such a string pointer, Microsoft provides the MAKEINTRESOURCE macro. Alternatively, we could have used a name to define our icon:

MYICON1 ICON DISCARDABLE "icon1.ico"

And we could load this icon by name:

HICON hIcon;
hIcon = LoadIcon(hInst, "MYICON1");

String identifiers for resources are case-insensitive. WNDCLASSEX has handle values for two icons: a large icon and a small icon. The small icon is the one used in the upper-left corner of the window. Small icons are generally 16 pixels square; large icons are 32 pixels square. If no small icon handle is provided, the large icon will be shrunk down to fit. If the LoadIcon() function is supplied with a NULL instance handle, Windows will supply a default icon for use. More recently, the Win32 API provides the LoadImage function for loading icons, bitmaps, and mouse cursors with a single function. You can find more information about this function on MSDN. Internally, icons are stored under the numeric resource type RT_ICON == 3, and are grouped under RT_GROUP_ICON == 14.

Bitmaps

Bitmaps can be loaded similarly to icons in resource files:

(bitmap ID or name) BITMAP [DISCARDABLE] "bitmapfile.bmp"

Bitmaps can be accessed with the aptly named LoadBitmap function (again, new versions of the Win32 API prefer you use LoadImage to load a bitmap, icon, or cursor). LoadBitmap returns an HBITMAP handle type:

HBITMAP hBmp;
hBmp = LoadBitmap(hInst, MAKEINTRESOURCE(IDB_BITMAP1));

Or, if we have named our bitmap resource:

hBmp = LoadBitmap(hInst, "MyBitmapRes");

Bitmaps are large resources, and if Windows can't load the bitmap into memory (or if the ID or name value is invalid), the function will return a NULL value. Make sure you test this value before you use the handle.
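The "high 16 bits zero" rule explained earlier for LoadIcon's second parameter can be demonstrated portably. This sketch re-implements the test that the real IS_INTRESOURCE and MAKEINTRESOURCE macros rely on; the function names here are invented, and the real macros additionally perform the pointer casts:

```cpp
#include <cassert>
#include <cstdint>

// A resource "name" is really a pointer-sized value.  If everything above
// the low 16 bits is zero, Windows treats it as a numeric resource ID
// rather than a string pointer -- the same test IS_INTRESOURCE performs.
bool IsIntResource(std::uintptr_t value) {
    return (value >> 16) == 0;
}

// Portable stand-in for MAKEINTRESOURCE: just widen the 16-bit ID,
// leaving all higher bits zero so the test above recognizes it.
std::uintptr_t MakeIntResource(std::uint16_t id) {
    return static_cast<std::uintptr_t>(id);
}
```

Any real string pointer will have nonzero high bits, which is why the two cases can share one parameter without ambiguity.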
Bitmaps must be unloaded from memory by passing the handle to the DeleteObject() function. You can find more information about this on MSDN. Bitmap identifiers generally use an "IDB_" prefix, to indicate that it is the ID of a bitmap. Internally, bitmaps are stored under the numeric resource type RT_BITMAP == 2.

Mouse Cursors
Mouse cursors are specified similarly to icons and bitmaps, and are loaded with the LoadCursor function. Internally, cursors are stored under the numeric resource type RT_CURSOR == 1, and are grouped under RT_GROUP_CURSOR == 12.

As for any resource that implies binary data, the use of an external file with "filename" is recommended. However, most resource compilers allow you to inline binary data in the resource script in this way:

42 ICON { 123, 4567, 0x89AB, 0xCDEF, '\x01', '\x23', "ajx" }

with these rules, coming from the Win16 heritage:
- Numbers (decimal or hexadecimal) are stored contiguously as 16-bit little-endian quantities (unaligned)
- Characters are stored as 8-bit quantities

String Tables
A resource script can have many string tables, although this is unnecessary: the tables aren't differentiated (i.e. they get merged), and each string, in any table, must have a unique identifier. Strings in a string table also may not use names, but instead must use numeric identifiers. After all, it doesn't make any sense to have to address a string with a string, does it? Here is a general string table:

STRINGTABLE DISCARDABLE
BEGIN
    IDS_STRING1, "This is my first string"
    IDS_STRING2, "This is my second string"
    ...
END

It is important to note that in place of the BEGIN and END keywords, the programmer may also use the more C-like curly brackets, as such:

STRINGTABLE DISCARDABLE
{
    IDS_STRING1, "This is my first string"
    IDS_STRING2, "This is my second string"
    ...
}

Some people prefer one over the other, but they are all the same to the resource compiler. Strings can be loaded using the LoadString function.
LoadString is more involved than the LoadBitmap or LoadIcon functions:

int LoadString(HINSTANCE hInstance, UINT uID, LPTSTR lpBuffer, int nBufferMax);

The hInstance parameter, as we know, is the instance handle for the module that contains the string. The uID parameter contains the number of the string that we are trying to access. lpBuffer is the character array variable that will receive the string, and the nBufferMax number tells Windows the maximum number of characters that can be loaded. This count is a security precaution, so make sure not to allow Windows to write character data beyond the end of the buffer. MSDN displays a large warning on the page for this function, and it is important that programmers heed this warning. Windows will automatically zero-terminate the string once it is written to the buffer. LoadString will return the number of characters that were actually written into the buffer, in case the number of characters is less than the maximum number allowed. If this return value is 0, the string resource does not exist, or could not be loaded.

Strings can have "\0" in the middle. As strings are saved as counted strings, LoadString returns the number of characters saved, including the zeroes in between. But most resource editors fail with such strings. Internally, string tables are stored under the numeric resource type RT_STRING == 6, in groups of up to 16 adjacent IDs.

Accelerators
Keyboard accelerators are a common part of nearly every Windows application, and therefore it is a good idea to simplify the job of creating accelerators by putting them in a resource script. Here is how to create an accelerator table:

(Accelerator Table ID or name) ACCELERATORS [DISCARDABLE]
BEGIN
    (key combination), (Command ID)
    ...
END

Key combinations are specified in terms of either a string literal character ("A" for instance) or a virtual key code value.
Here are some examples:

IDA_ACCEL_TABLE ACCELERATORS DISCARDABLE
BEGIN
    "A", IDA_ACTION_A    //Shift+A
END

Now, when the key combination "Shift+A" is pressed, your window procedure will receive a WM_COMMAND message with the value IDA_ACTION_A in the WPARAM field of the message. If we want to use combinations with the "Alt" key or the "Ctrl" key, we can use the ALT and CONTROL keywords, respectively:

IDA_ACCEL_TABLE ACCELERATORS DISCARDABLE
BEGIN
    "a", IDA_ACTION_A, ALT             //Alt+A
    "b", IDA_ACTION_B, CONTROL         //Ctrl+B
    "c", IDA_ACTION_C, ALT, CONTROL    //Alt+Ctrl+C
END

Also, we can use the "^" symbol to denote a CONTROL key code:

IDA_ACCEL_TABLE ACCELERATORS DISCARDABLE
BEGIN
    "^a", IDA_ACTION_A    //Ctrl+A
END

Similarly, if we want to be super hackers, we could use the ASCII code directly:

IDA_ACCEL_TABLE ACCELERATORS DISCARDABLE
BEGIN
    65, IDA_ACTION_A, ASCII    //65 = "A", Shift+A
END

Or, we could refer to keys (including non-alphanumeric keys) with their virtual key code identifiers, by using the VIRTKEY keyword:

IDA_ACCEL_TABLE ACCELERATORS DISCARDABLE
BEGIN
    VK_F12, IDA_ACTION_F12, VIRTKEY                //the "F12" key
    VK_DELETE, IDA_ACTION_DEL, VIRTKEY, CONTROL    //Ctrl+Delete
END

Now, if we make an accelerator correspond to a menu command, the menu command will light up when we press the accelerator. That is, the menu will light up unless we specify the NOINVERT keyword:

IDA_ACCEL_TABLE ACCELERATORS DISCARDABLE
BEGIN
    "A", IDA_ACTION_A, NOINVERT    //Shift+A (no inverted menu selection)
END

To load an accelerator table, we need to use the LoadAccelerators function, as such:

HACCEL hAccel;
hAccel = LoadAccelerators(hInst, MAKEINTRESOURCE(IDA_ACCEL_TABLE));

Again, we could have given our resource a string name, and used that string to load the table. When using accelerators, we need to alter our message loop to intercept the keypress messages, and translate them into command messages according to our accelerator table rules.
We use the TranslateAccelerator function to intercept the keypress messages and translate them into command messages, as such:

while ((Result = GetMessage(&msg, NULL, 0, 0)) != 0)
{
    if (Result == -1)
    {
        // error handling
    }
    else
    {
        if (!TranslateAccelerator(hwnd, haccel, &msg))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}

Also, if we are writing an MDI application, we need to intercept accelerator messages from the child windows, so we use the TranslateMDISysAccel function as well:

while ((Result = GetMessage(&msg, NULL, 0, 0)) != 0)
{
    if (Result == -1)
    {
        // error handling
    }
    else
    {
        if (!TranslateMDISysAccel(hwndClient, &msg) &&
            !TranslateAccelerator(hwndFrame, haccel, &msg))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}

Where hwndFrame is the handle to the frame window, and hwndClient is the handle to the MDI client window. Internally, accelerator tables are stored under the numeric resource type RT_ACCELERATOR == 9.

Menus
Menus can be defined in a resource script using the MENU keyword. There are two types of items that appear in a menu: the top-level POPUP menu items, and the secondary MENUITEM items. These are defined in a menu as such:

(ID or name) MENU [DISCARDABLE]
BEGIN
    POPUP "File"
    POPUP "Edit"
    BEGIN
        MENUITEM "Copy", IDM_EDIT_COPY
        MENUITEM "Paste", IDM_EDIT_PASTE
    END
    ...
END

We have included a few examples here, so that you can see the difference between a POPUP and a MENUITEM. When we have a menu with the ID_MENU identifier, we can load it into our program as such:

HMENU hmenu;
hmenu = LoadMenu(hInst, MAKEINTRESOURCE(ID_MENU));

Once we have this handle, we can pass it to the CreateWindow function to apply it to our window. When a menu item is selected, the host program receives a WM_COMMAND message, with the menu item identifier in the WPARAM parameter.
If we have a basic window procedure switch-case statement, we can handle this as follows:

case WM_COMMAND:
    switch (LOWORD(wParam))
    {
        case IDM_EDIT_COPY:
            //handle this action
            break;
        case IDM_EDIT_PASTE:
            //handle this action
            break;
    }
    break;

In a menu, if we want to associate a menu item with an accelerator, we can define it as such:

ID_MENU MENU DISCARDABLE
BEGIN
    POPUP "File"
    POPUP "Edit"
    BEGIN
        MENUITEM "&Copy", IDM_EDIT_COPY
        MENUITEM "&Paste", IDM_EDIT_PASTE
    END
    ...
END

Notice how we put the ampersand (&) in front of the "C" in "Copy" and the "P" in "Paste". This means that those letters will be underlined, but more importantly, if an accelerator key combination is pressed, those items in the menu will be highlighted (unless the NOINVERT tag is specified in the accelerator table). If an ampersand is placed before a letter in a POPUP menu item, pressing Alt plus that letter will pop up that menu. For instance, let's define our menu:

ID_MENU MENU DISCARDABLE
BEGIN
    POPUP "&File"
    POPUP "&Edit"
    BEGIN
        MENUITEM "Copy", IDM_EDIT_COPY
        MENUITEM "Paste", IDM_EDIT_PASTE
    END
    ...
END

Now, if we press Alt+F, we will pop open the File menu, and if we press Alt+E it will open the Edit menu. That's pretty nice functionality for only a single extra character to type. Internally, menus are stored under the numeric resource type RT_MENU == 4.

Version Information
A program can include certain information about its version and its author in a resource script. This version information appears when you right-click the executable in Windows Explorer and click "Properties". In the properties dialog box, this information appears on the "Version" tab.

BLOCK "StringFileInfo"
BEGIN
    BLOCK "040904E4"
    BEGIN
        VALUE "CompanyName", "My Company.\0"
        VALUE "FileDescription", "A Win32 program.\0"
        VALUE "FileVersion", "1.0.0.0\0"
        VALUE "ProductName", "The product name.\0"
        VALUE "ProductVersion", "1.0\0"
        VALUE "LegalCopyright", "My Company.\0"
    END
END

BLOCK "VarFileInfo"
BEGIN
    VALUE "Translation", 0x409, 1252
END

Internally, version information is stored under the numeric resource type RT_VERSION == 16. It was introduced with Windows 3.

Dialog Boxes
Dialog box resources follow a general pattern:

(Dialog ID or name) DIALOG [DISCARDABLE] x, y, width, height
CAPTION "(dialog box title)"
[CLASS "(class name)"]
FONT (point size), "(font name)"
BEGIN
    ...
END

If a dialog box is not being associated with a window class, the CLASS field does not need to be filled in. All strings listed as being in quotes must be in quotes in the resource script or there will be an error. Individual items in a dialog box are then specified between the BEGIN and END tags. Internally, dialogs are stored under the numeric resource type RT_DIALOG == 5.

Generic Controls
CONTROL "windowname", id, "classname", windowflags, left, top, width, height

Specific Buttons

Edit Boxes

Manifests
Manifest resources contain a UTF-8 encoded XML description of operating system and DLL dependencies. Most commonly, this resource requests version 6.0 of comctl32.dll, which gives the standard and common controls the Luna visual style on Windows XP. Internally, manifests are stored under the numeric resource type RT_MANIFEST == 24; manifests were introduced with Windows XP.

User-type Resources
User-type resources should use some greater resource type identifier, or RT_RCDATA == 10.

Obtaining the SDK
The Windows Platform SDK.

Obtaining the DDK
What is the DDK?
Microsoft offers a free Driver Development Kit, for people who are interested in writing device drivers, system services, and other kernel-mode code. The DDK comes with a number of different libraries, extensive documentation, link libraries, a special compiler and linker, and many examples for new driver programmers.
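A minimal dialog resource following the general pattern above might look like the following resource-script fragment. All identifiers and coordinates here are invented for illustration; note that resource compilers spell the title keyword CAPTION:

```rc
IDD_ABOUT DIALOG DISCARDABLE 10, 10, 160, 70
CAPTION "About"
FONT 8, "MS Sans Serif"
BEGIN
    // CONTROL text, id, class, style, x, y, width, height
    CONTROL "A Win32 program.", -1, "STATIC", SS_LEFT,
            10, 10, 140, 20
    CONTROL "OK", IDOK, "BUTTON", BS_DEFPUSHBUTTON | WS_TABSTOP,
            55, 45, 50, 14
END
```

IDOK is one of the predefined dialog command IDs, and -1 is the conventional ID for static controls that are never referenced from code.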
This book discusses the DDK in depth throughout Section 4, and specifically in the chapter about the DDK.

Obtaining the DDK
The DDK is available free from Microsoft, but not as a download. To obtain the DDK, you must order the software on CD, and this comes with a small shipping and handling fee. The DDK has a number of different options, and installing the DDK with every option will require a large amount of disk space. The DDK used to be available as a download, but to conserve bandwidth, Microsoft has changed its policy to offer the DDK only on CD. The SDK and a number of other CDs are also available to order from Microsoft for free (plus shipping and handling), so this might be worthwhile to check out as well.

Obtaining MASM

Obtaining the HTML Help Workshop
Introduction to HTML Help
The HTML Help Workshop is a tool for creating compiled HTML Help applets, which are common on many platforms. The compiled Help modules may then be used as stand-alone information sources, or as help modules from inside a program. HTML Help modules are relatively easy to create, and only require a knowledge of basic HTML to get started. HTML Help modules can be separated by topic, and the HTML Help Workshop will automatically compile a table of contents and create a search option, if specified.

Obtaining the HTML Help Workshop
The HTML Help Workshop can be obtained from Microsoft as a free download. The HTML Help SDK should contain all the header files, library files, and object files that are needed to link help files into existing applications.
Windows Programming/Key Combinations
Here is a key combination guide for computers:

IBM compatible DOS
Ctrl+Alt+Delete - performs a soft reset
Left arrow, right arrow - Move the cursor backward or forward one character
Ctrl+C - Close down most applications and return to the prompt

The Telephony Application Programming Interface (TAPI) provides computer telephony integration and enables PCs running Microsoft Windows to use telephone services.
https://en.wikibooks.org/wiki/Windows_Programming/Print_Version
Abstract

A major goal of the COMMON LISP committee was to define a Lisp language with sufficient power and generality that people would be happy to stay within its confines and thus write inherently transportable code. We argue that the resulting language definition is too large for many short-term and medium-term potential applications. In addition, many parts of COMMON LISP cannot be implemented very efficiently on stock hardware. We further argue that the very generality of the design, with its different efficiency profiles on different architectures, works against the goal of transportability.

Support for this research was provided by the Air Force Office of Scientific Research under contract no.: F49620-82-C0092, and by the Defense Advanced Research Projects Agency DARPA/MDA90380-C-0102. Permission to copy without fee all or part of this material is granted provided specific permission. © 1984 ACM

1. Introduction

The COMMON LISP language was designed by a committee of over 30 interested people from academia and industry. As the final touches were put on the specification, a small core of 5 people made most of the decisions. The quinquevirate of final decision-makers are all well-respected and knowledgeable Lisp implementors, and each one had a definite opinion on the best means of implementation. Several members of the COMMON LISP core group were working on, and believed in, Lisp implementations on micro-codable machines. The remainder of the COMMON LISP core group, including one of the authors of this paper, Dick Gabriel, was working on a highly-optimizing Lisp compiler for a supercomputer-class computer. This computer, the S-1 Mark IIA, has outstanding hardware support for numeric calculations, including complex numbers, and non-trivial support for Lisp function calls and tags. The main bulk of the decisions made in designing COMMON LISP were either made directly by this group or under considerable influence from this group.
As it happened, there was no strong voice from Lisp implementors working on Vaxes, MC68000s, or any truly stock hardware. The original goal of the COMMON LISP committee was to specify a Lisp that was close enough to each member of a family of descendants of MacLisp that the guardians of those implementations would be willing to evolve them towards that common definition; this goal was achieved. A major subgoal of the committee was that programs written in this COMMON LISP would be source-transportable between implementations, regardless of the underlying hardware. Every decision of the committee can be locally rationalized as the right thing. We believe that the sum of these decisions, however, has produced something greater than its parts; an unwieldy, overweight beast, with significant costs (especially on other than micro-codable personal Lisp engines) in compiler size and speed, in runtime performance, in programmer overhead needed to produce efficient programs, and in intellectual overload for a programmer wishing to be a proficient COMMON LISP programmer.

1.1 What should have happened

There are some desirable properties of computer languages against which any language design can be evaluated:

1. Intellectual Conciseness. A language should be small enough for a programmer to be able to keep all of it in mind when programming. This can be achieved through regularity in the design and a disciplined approach to the introduction of primitives and new programming concepts. PASCAL is a good example of such a language, although it is perhaps too concise.

2. Compiler Simplicity. It should be possible to write a compiler which produces very efficient code, and it should not be too great an undertaking to write such a compiler; perhaps a man-year at the outside.

3. Runtime Efficiency. There should be minimal inherent runtime inefficiencies. Furthermore, a small program should run in a small amount of physical memory. Small programs should not lead to inherently poor paging performance.

4.
Ease of Programming. It should be simple and pleasant for the programmer to write efficient programs in the language. In an untyped language the programmer should not be forced to make large numbers of declarations in order to gain reasonable run-time efficiency.

The design of COMMON LISP should have heeded these design criteria, or the committee should have been prepared to do more than simply design the language. For example, regarding point 2, if the COMMON LISP designers believed that a language should be such that a simple compiler is impossible, then the COMMON LISP committee ought to have taken steps to produce such a compiler. Lisp talent is in short supply; Lisp is not a very commercially lucrative business; and COMMON LISP is a complex language requiring not only a complex compiler, but a complex runtime system and many auxiliary functions. Expecting the Lisp community to provide COMMON LISP for little or no cost is not sensible; nor is it sensible to expect a software house to devote its resources to a project that does not justify its cost to that company with high profits. Therefore, the COMMON LISP group was in the position of advocating, as a standard, a Lisp which is difficult to implement and which would not be available on many machines outside of the machines with which the implementors within the COMMON LISP group were working. The COMMON LISP design group did not heed these design criteria, nor did they provide very much help with compilers for stock hardware.

2. What happened

We believe COMMON LISP suffers from being designed by implementors with beliefs in what is possible, rather than by users with beliefs in what is probable. In particular we believe that the committee went wrong in the following ways: There was a belief (hope) that in the near future all Lisp users would have individual workstations, micro-coded for Lisp, with large amounts of physical memory. We argue otherwise in section 4.4.
Too much emphasis was placed on the ability of micro-coding to hide the cost of type dispatching. The cost of invisible pointers was ignored for micro-coded machines and underestimated for stock architectures. Invisible pointers are like indirection bits in some architectures: whenever a memory reference is made to a particular address, if that address contains an invisible pointer, the memory fetch logic fetches the contents of the memory location to which the invisible pointer points. Suppose that memory location m1 contains an invisible pointer ip2 that points to memory location m2. Any memory reference that attempts to load from or store into m1 will actually load from or store into m2.

Displaced arrays are a construct in COMMON LISP borrowed from FORTRAN. It is possible to alias two arrays to the same storage locations. That is, it is possible to state that the array elements A(0)...A(j) are identical to the array elements B(k)...B(l), where A and B are arrays and j = l - k. Notice that A and B may overlap, with some elements of B not part of A, if k > 0. There are primarily two ways to implement this sharing: 1.) invisible pointers can be used as the entries of B, or 2.) the two arrays could actually share storage. Assume that array A has already been allocated. Then if method 2 is used, allocating the array B could mean re-locating A, because there may be no room for elements 0 through k - 1 of B in front of A. This could result in a garbage collection to re-locate objects.

Too many costs of the language were dismissed with the admonition that any good compiler can take care of them. No one has yet written, nor is likely to write without tremendous effort, a compiler that does a fraction of the tricks expected of it. At present, the hope for all COMMON LISP implementors is that an outstanding portable COMMON LISP compiler will be written and widely used. Recall that the Production Quality Compiler Compiler (PQCC) project at CMU was a research project just a few years ago; this project's goal was to produce a high-performance portable compiler for an algorithmic language, and that goal has still not been achieved.

Too much emphasis was put on existing implementations of inherently inefficient ideas, in particular MIT-style Lisp machine arrays. In COMMON LISP such arrays are called general arrays, as contrasted with simple vectors. Even with special hardware, the cost of structure references under these schemes is enormously more than it should be. Here are some examples of how a micro-coded Lisp machine could improve the performance of general arrays. COMMON LISP requires that any reference to an array be within the array bounds, and the dimensionality and bounds for each dimension are stored in an area called the array header. A micro-coded machine can cache the array header when the array is fetched; the array bounds can be verified in parallel with the index calculation. The semantics of COMMON LISP allow the dimensionality of arrays to change dynamically. If such an array is being accessed within a loop, like this:

(DO ((I 0 (1+ I))
     (M 0 (1+ M)))
    ((= I 101) ...)
  ... (AREF foo 54 M) ...)

there is no reason to presume that the array foo would remain constant through this loop. Therefore, the indexing arithmetic for foo, involving 54 and M, would need to be re-computed each time through the loop. This would require 100 re-computations of an arithmetic expression including a multiplication by 54. Micro-coded machines could handle this in several ways: 1.) the re-computation could be done in parallel with other operations needed to access the array, such as finding its base; or 2.) the part of the computation that remains constant through the loop could be cached somewhere, and only if the array were changed so as to invalidate this cached value would it be re-computed.
In either of these cases, stock hardware would be at a disadvantage as compared with a micro-coded machine. Method 2 requires at least one additional memory reference, a memory reference which may not necessarily be done cheaply by a cache-memory, stock-hardware system. Therefore a micro-coded Lisp machine will be able to implement this method more efficiently than a stock-hardware machine.

Too much effort was put into extending transportability mechanisms to extremes. Examples of this are: potential numbers, branch cuts of complex numbers, and pathnames. Most of these issues do not affect most users. They add to the cost for all users, however. Quite an effort was made to define appropriate behavior for floating-point arithmetic. Programmers who do extensive floating-point computations spend a great deal of time understanding the hardware characteristics of their host computer in order to adjust their algorithms to converge or to attain sufficient accuracy. Most such programmers will expect that the languages they have on their machines will take advantage of the characteristics of those machines. In COMMON LISP a little too much control was placed on floating-point arithmetic. And certainly, although the correct behavior of a floating-point-intensive program can be attained, the performance may vary wildly.

There was a belief that Lisp is the correct language for systems programming. This is a religious belief, especially in light of the fact that only a few operating systems, for example, have been written in Lisp so far. There was an implicit assumption that all programs should be transportable invisibly. That is, it was assumed that the user would never be concerned with the underlying hardware or operating system, and furthermore that the user was incapable of organizing large systems into large transportable pieces with small system-dependent modules.
There was an implicit assumption that incredibly complex existing programs, such as editors, should become source-transportable with the right language definition. Some of these programs were written with large efforts towards tuning them for a particular hardware/software environment. A spirit of compromise left many issues undefined. Thus, although programs written in COMMON LISP would be transportable, it is trivial to write programs which look syntactically like they are written in COMMON LISP but which dynamically are not, and will not work in another COMMON LISP implementation.

An example is (EQ 3 3): it is not specified whether this is true or false. A second example is SUBST. SUBST is a function which takes a tree and a description of old sub-trees to be replaced by new ones. That is, let T be a tree, t be a subtree of T, new and old be trees, and p be a predicate; then new will be substituted for t in the result of the SUBST. SUBST does not specify whether structure sharing occurs between the input tree and the output tree. Thus in one implementation, some structures may be EQ after a SUBST, while in another these structures will be only EQUAL.

A third example is the meaning of `(A ,B). Backquote (`) is a read macro in COMMON LISP which is useful for building tree structures from templates. `(A ,B) is roughly equivalent to (LIST 'A B). COMMON LISP does not specify whether the template and the result share any substructures. Therefore, the form of what is produced by a backquote expression is implementation-dependent, and hence it is difficult to write a machine-independent pretty-printer that will print tree structures in backquote-equivalent forms the same way over new implementations.

COMMON LISP is enormous. It requires either specially micro-coded machines, a fantastically complex compiler (which relies on copious declarations from the poor incompetent user), or very slow running time.
Due to the complexity of the definition, any programs written in COMMON LISP which do run in other COMMON LISPs may have vastly different performance characteristics, redeemable only by gross changes to algorithms and data structure representations. To be a bit more specific about this last point, programmers who use a particular dialect of Lisp will adjust the style of their programming to the performance profiles of that Lisp. Let us consider a conceptually valid, but actually bogus, example (bogus because a real programmer would use LET rather than either of the two alternatives discussed, and we can presume that LET will expand into the most efficient construct in any implementation). In one dialect the construct:

((LAMBDA (X) <form1> ... <formn>) <expr>)

may be much more inefficient than:

(PROG (X)
  (SETQ X <expr>)
  <form1>
  ...
  (RETURN <formn>))

An unwitting programmer might believe that the performance profile which has the LAMBDA application slower than the PROG form is universal over all machines. If the programmer realizes that this may not be the case, he can write a macro which initializes variables and then performs some computations. In some implementations this macro could expand into a LAMBDA application, while in others it could expand into a PROG form. The choice would then be machine-dependent. The effect, of course, is that COMMON LISP code might be trivially transportable in terms of operational semantics, but not in terms of performance. Thus the robe of transportability hides the fact that programmers still need to program for their own implementations, and the programs so produced may not be transportable to the degree one might be led to believe. The true test is that, even though the bulk of COMMON LISP was reasonably well-defined in the summer of 1982, there are still no widely-available COMMON LISPs, despite many large efforts (e.g.
CMU Spice project, DEC COMMON LISP project, Rutgers COMMON LISP project, Symbolics COMMON LISP project, S-1 COMMON LISP project).

3. Common Lisp is Good

COMMON LISP adopted many of the good ideas developed by the various MacLisp descendants. In particular, much of the good theoretical activity during the late 1960s and the entire 1970s with respect to Lisp was considered by the COMMON LISP committee. Among the good ideas is lexical binding. A good Lisp compiler tries to code lambda variables as lexical variables as often as it can, so that references to them are to locations on the stack via a display offset. Dynamic variables require explicit binding and unbinding code to reflect lambda-binding, while lexical variables only require simple display updates to reflect this.
Multiple values solve a problem that has been poorly solved by programmershow to get several distinct pieces of information back through a function-return. Usually a programmer will either dene special variables or construct a data structure to accomplish this. Because almost any hardware could support a more ecient means of accomplishing this, placing this within COMMON LISP is a good idea. 4. Common Lisp is Bad 4.1 Generic Arithmetic One of the major ways that COMMON LISP falls down is that all arithmetic is generic. Although this might be a good idea for people whose use of arithmetic involves adding a couple of numbers every several hundred million instructions, it is not a good idea for those who want to write a lot of numeric code for non-Lisp machines. Currently there is no computer that excels at both Lisp processing and numeric processing. For people who want to do number-crunching in Lisp on such machines, writing type-specic numeric code requires wordy type-declarations. To write an addition of two 9 4 integers requires writing either: (DECLARE (integer x y)) (+ x y) or: (+ (the integer x) (the integer y)) Neither of these is esthetically pleasing, nor is it likely that the average Lisp programmer is going to bother all the time. Micro-coded machines have an advantage in that they can leave until runtime decisions about what the types of objects are. For example, one could write (+ x y ) in COMMON LISP and a micro-coded machine could determine the types of x and y on the y. 4.2 Arithmetic Explosion A large class of number types was introduced. These include 4 sizes of xed precision oating point numbers, rational numbers, complex numbers made up of pairs of these (after long debate the committee at least restricted the types of the real and complex parts to be the same), xed precision integers, and arbitrary precision integers. 
All these number types take up a large number of tag values and require extensive runtime support, as the types are orthogonal to the functions which operate on them. Besides addition, subtraction, multiplication and a few varieties of division, there are trigonometric functions, logarithms, exponentials, hyperbolic functions, etc. In addition, the inclusion of complex numbers has opened a Pandora's box of confusion and controversy surrounding the selection of branch cuts. The current COMMON LISP definition follows a proposed, but yet to be accepted, APL definition. Mathematicians do not have definitions for the various branch cuts, but they tend to define them a particular way depending on the situation.

4.3 Function Calls Too Expensive

Function calls are the hallmark of Lisp. Everyone expects that Lisp code will be sprinkled with many function calls. With the argument-passing styles defined by COMMON LISP, function calling must be expensive on most stock hardware. COMMON LISP allows optional, keyword, and &rest arguments (along with a few others). Optional arguments are arguments that may or may not be supplied; if they are not supplied, then they are defaulted by the called function. Keyword arguments are useful for those times when one cannot remember the order of arguments. For instance, MISMATCH takes two sequences, a flag stating whether to scan from left to right or from right to left, and starting and ending indices for both sequences. Oh, and there's a test that is used to test the mismatch. Who could remember the order? So one can write:

(MISMATCH S1 S2 :from-end t :test predp :end1 2 :start2 baz)

where the symbols prefixed by (:) are keyword names and the symbols after those keyword names are the arguments associated with those keywords. And one more thing: it also is possible to supply a not-test instead of the test using the keyword :test-not.
An &rest argument will be bound to a list of all of the arguments supplied that are neither bound to required parameters nor to optional parameters. Included in this list are all keyword-name/keyword-value pairs which appear to the right of the &rest parameter in the defining lambda-list. Thus a programmer is able to supply as many extra arguments as he wants to functions defined with an &rest argument in its lambda-list.

We will now look at a particular function, SUBST, and how it might be defined in COMMON LISP, to illustrate the expense of function calls and the advantage that micro-coded machines might have over stock hardware. Although SUBST has been mentioned before, let us look at it a little more closely. The COMMON LISP definition of SUBST states that it takes a new tree, an old tree, a :test or a :test-not keyword argument (but not both), and a :key keyword argument. We have explained all of these arguments except for :key. Recall that the :test keyword argument is a predicate that is used to decide whether to perform a substitution on a particular subtree. The :key argument is a function of one argument which takes a subtree as its argument, and returns an object to which old and :test are applied. For example, SUBST can be asked to substitute (HERE HERE) for those subtrees in the tree with B as their second element.

 1 (defun subst (new old tree &rest x
 2               &key test test-not key)
 3   (cond ((satisfies-test old tree
 4                          :test test :test-not test-not
 5                          :key key)
 6          new)
 7         ((atom tree) tree)
 8         (t (let ((a (apply #'subst
 9                            new old
10                            (car tree) x))
11                  (d (apply #'subst
12                            new old
13                            (cdr tree) x)))
14              (if (and (eql a (car tree))
15                       (eql d (cdr tree)))
16                  tree
17                  (cons a d))))))

Notice that if a subtree is not altered it is not copied (lines 14-17). The interesting part is the &rest argument, x; it is used to do the recursive calls to SUBST in lines 8-13. In those lines we APPLY SUBST, and x holds all of the keyword arguments, which are then passed on to SUBST.
We use APPLY so that SUBST does not need to figure out which keyword arguments were actually passed and so that SUBST does not have to default any unsupplied arguments.

What happens when SUBST is invoked? First it checks to see how many arguments were supplied; if too few were supplied, that is an error: SUBST expects 3 required arguments. Next it must associate the keyword arguments with the correct lambda-variables. If no keyword arguments were supplied, then this amounts to a check of the number of keyword arguments passed. Finally it must construct the list of arguments for the &rest argument, x. If some keyword arguments are passed, then this requires a list to be constructed. Constructing a list usually requires allocating storage in the heap.

To avoid such CONSing for each function call, an implementation can choose to allocate this list on the stack, as a vector. That is, the arguments to SUBST will be supplied (in most implementations) on the stack. If the &rest argument is never returned as part of the value of a function, then there is no need to actually CONS up that list. However, to treat the items on the stack as a list requires either a very fancy flow analysis by the compiler or else CDR-coding. Supporting CDR-coding is easily done with a micro-coded machine if there are enough bits in the word to accommodate the CDR-code bit. On the other hand, if the &rest argument list is to be returned as part of the value of the function, then a CONS in the heap must be done sooner or later. This can either be accomplished with extensive compiler analysis or with special hardware and micro-code using that hardware.

If a programmer uses SUBST without any of the keywords, then the overhead required is to check the number of required arguments, to check whether any keywords were supplied, and to construct a null list for x. Checking the number of required arguments supplied is necessary for all functions, but the other checks are not.
If the programmer uses the COMMON LISP SUBST routine, as he is encouraged to do, then he faces this overhead. If the programmer uses some keywords, then the overhead is very much higher. If the call to SUBST is simply

(SUBST new old tree :test #'baz)

then one might hope that the association of keywords supplied with appropriate lambda-variables could be eliminated at compile-time, but this could only happen if there are entries for each combination of keyword arguments or if there were separate functions for each such combination. Thus, the programmer faces either a performance or a space penalty for this generality on stock hardware implementations.

4.4 Computer Size

COMMON LISP requires a large computer memory. To support only the function names and keyword names alone requires many kilobytes. To support the many copies of keyword-taking functions to improve the speed of COMMON LISP on stock hardware would require megabytes of memory. The compiled code for functions is large because of the amount of function-call overhead at the start to accommodate both possible keyword and optional arguments and also the inline type dispatches. Memory costs, while declining, still account for most of the cost of a large computer system. For high-performance computers, it is essentially the case that one buys the memory and gets the CPU free. In the next few years much research will be done with distributed computing. If each computing node needs to cost $100,000 and up, very large networks of them will not be built.

4.5 Good Ideas Corrupted

Another common theme of the COMMON LISP committee is to take a good idea and to over-generalize it. This has the effect of complicating both the implementation and the intellectual effort required to understand and use the idea.

FORMAT is a way to do stylized printing: the user specifies a template with empty slots, he specifies the contents of the slots, and FORMAT prints the template with the slots filled in. FORMAT is modeled on the FORTRAN FORMAT statement.
COMMON LISP took FORMAT to its ultimate generalization. With FORMAT it is possible to do iteration, conditional expressions, justification, non-local returns within iterations, and pluralization of words according to the values for the slots. The language used by FORMAT is not Lisp, but a language that uses strings with special escape characters that either represent substitutions, substitutions with specific printing control, or commands. This language is somewhat large by itself, but the main problem is that it isn't Lisp. Many language designers believe that the diction for expressing similar constructions should be similar. With FORMAT the COMMON LISP committee abandoned this goal.

To a lesser extent the same charge can be leveled at SETF. One can SETQ the value of a variable like this:

(SETQ X <value>)

SETF is a generalization of this: with SETF one can change, for instance, the CAR of a CONS. SETF works on all of the data structures in COMMON LISP, for instance, arrays, property lists, vectors, and user-defined structures. The principle is that if part of a data structure can be accessed using some form, then SETF ought to be able to change the value of that part.

So far, so good. Now note that often a programmer will define a data structure as a complex of macros: some macros create a data structure and others access them. When the user chooses to define a data structure with a complex of macros, he is usually content to define special value-altering macros. SETF has been generalized so that it has extremely complex mechanisms for describing the semantics of user macros to SETF. This way SETF can be used to alter the values of user-defined data structures. For such complex macros it makes little sense to try to force them into the mold of being a data substructure reference.

4.6 Sequences

The COMMON LISP definition introduces the notion of a generic sequence. A sequence can be represented as a list or a one-dimensional array.
Many of the standard Lisp functions (e.g., LENGTH, REVERSE, NREVERSE, DELETE, SORT) have been re-defined to work on any sequence, and a whole host (about 24 base functions, each with a large number of keyword modifiers) of new functions has been added. Lists and one-dimensional arrays have fundamentally different access time characteristics (constant for arrays, linear for lists). They have even greater fundamentally different characteristics for being shortened and lengthened. We believe that trying to hide these differences behind generic function names will be a source of programmer overhead and will lead to inefficient programs.

As generic functions, the efficiency of dispatch on argument type for all the sequence functions needs to rely on either (1) detailed declarations, which implies the need for greater programmer overhead and compile-time use of those declarations with associated increased compiler complexity, or (2) a cost to runtime performance, especially on stock hardware. In addition the sequence functions are the primary example of keyword-combination explosion, with the associated costs on compiler complexity and possibly paging performance at runtime. The large number of function/keyword possibilities is itself a source of programmer overhead.

4.7 Arrays

COMMON LISP arrays are incredibly complex. As a concession to non-micro-coded machines, vectors, simple vectors, and simple strings were included as types. The resulting tangle of types (all simple vectors are vectors, all simple strings are strings, all strings are vectors, but no simple strings are simple vectors) is clearly a political compromise and not something that would exist after careful design. Simple vectors are a fixed-size set of pointers in contiguous locations; they are called simple because they don't have fill pointers, are not displaced, and are not adjustable in size. Clearly such vectors can be implemented much more efficiently on stock hardware than can general arrays.
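For illustration (a sketch, not from the paper), the same generic calls accept either representation, with very different costs underneath:

```lisp
(length '(1 2 3))                ; list: linear-time traversal
(length (vector 1 2 3))          ; vector: constant-time length field
(reverse "abc")                  ; strings are sequences too => "cba"
```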
Unfortunately the default assumption is that a vector is not simple unless explicitly declared simple, making the programmer using stock hardware work harder to achieve efficiency, and making programs which work well on micro-coded machines perform poorly on stock hardware.

4.8 Subsets of COMMON LISP

We believe that these points argue for a standard subset of COMMON LISP, especially for small, stock hardware implementations. The COMMON LISP committee has not defined a standard subset, and it appears that at least one subset, if not many such subsets, will appear. Unless these subsets are co-ordinated and widely adopted, we will see one of the major goals of COMMON LISP (transportability) lost.

5. Conclusions

Our goal in this paper is to discuss the shortcomings of COMMON LISP. In doing so we do not want to only criticize COMMON LISP: both authors agree that the world is better having a COMMON LISP than not. We wish only that COMMON LISP could have been designed with more, varied machines in mind. We fear that COMMON LISP will not live up to some of its goals because of these design decisions. Perhaps good compilers will come along and save the day, but there are simply not enough good Lisp hackers around to do the jobs needed for the rewards offered in the time desired.

References

[Brooks 1982] Brooks, Rodney A., Gabriel, Richard P., and Steele, Guy L. "S-1 Common Lisp Implementation," in Proceedings 1982 ACM Symposium on Lisp and Functional Programming, Pittsburgh, Pa., August, 108-113.

[CL Committee 1984] Many People, Mail messages concerning the development of Common Lisp, in COMMON.[COM,LSP]@SAIL.

[Knuth 1965] Knuth, Donald E. "Fourteen Hallmarks of a Good Programming Language," unpublished.

[Steele 1984] Steele, Guy Lewis Jr., et al. Common Lisp Reference Manual, Digital Press, 1984.

[Weinreb 1981] Weinreb, Daniel, and Moon, David. LISP Machine Manual, Fourth Edition.
Massachusetts Institute of Technology Artificial Intelligence Laboratory, Cambridge, Massachusetts, July 1981.

[Wichmann 1984] Wichmann, Brian A. "Is Ada Too Big? A Designer Answers the Critics," in Communications of the ACM, (27), 98-103.
Keyboard input

BlackBerry devices can have either a physical keyboard or a touch screen keyboard. For devices with a touch screen keyboard, the keyboard appears only in certain contexts (for example, when the user taps a text field). With both types of keyboards, it's possible to listen for and respond to keyboard input.

Invoking the touch screen keyboard

You can control the touch screen keyboard by using the functions defined in virtualkeyboard.h. This header file includes functionality for hiding and showing the keyboard and handling keyboard events. Cascades does not have an API for opening and closing the touch screen keyboard. However, there still are some ways that you can invoke the touch screen keyboard. When the user taps on a TextField or a TextArea, the keyboard displays automatically to capture input. To invoke the keyboard yourself, you can call requestFocus() on the control that you want the user to type into. For more information about how text controls handle input, see Handling text input.

Low-level keyboard events

Low-level keyboard input events are delivered to the app through the SCREEN_EVENT_KEYBOARD event type. You can access the events directly through the Screen API or you can register for the events through BlackBerry Platform Services (BPS). If you want to capture keyboard events that are specific to a certain area of the screen (for example, a text field) you can use input sessions. Input sessions allow your app to capture events that are specific to a certain region, device, or screen event type. For example, you can create a session when a text field receives input focus in order to capture text entry in that field, and then destroy the session upon losing input focus. Sessions aren't restricted to on-screen events. You can create sessions to capture off-screen events from a gamepad, keyboard, mouse (or other pointing device), or the touch-sensitive keyboard.
Cascades keyboard events

In Cascades, you can use a KeyListener object to listen for any key press that produces a character (any key except for modifier keys such as Shift, Alt, and Control). However, any key press that generates a character provides information about the state of the modifier keys in its event parameters. Unlike physical keyboard shortcuts, KeyListener events support both the physical and touch screen keyboards. In addition to the types of keyboards that the APIs support, there are a few differences in usage:

- A KeyListener provides keyPressed() and keyReleased() signals while a shortcut provides only a triggered() signal.
- KeyListener signals are sent to all key listeners in the propagation path, while shortcuts send a signal to the first slot encountered.
- There is no signal parameter passed with the shortcut triggered() signal. KeyListener signals provide a KeyEvent parameter containing information about the key press.
- You can attach shortcuts to an ActionItem.

Supported controls

You can attach a KeyListener to any control that inherits from Control or AbstractPane. Attaching a KeyListener to an AbstractActionItem is not supported.

Key listener behavior

A KeyListener provides 3 signals to connect to:

- keyEvent: emitted first when a key is pressed or released
- keyPressed: emitted immediately when a key is pressed
- keyReleased: emitted immediately when a key is released

This diagram shows the call sequence when a key is pressed and immediately released. This sequence gives you a couple of options for creating the slots in your app. Your first option is to create two slots: one to handle the keyPressed() signal and one to handle the keyReleased() signal. Your second option is to create a single slot to handle the keyEvent() signal, then determine the key state by examining the keyEvent signal parameter.

Key events

Each signal provides a keyEvent parameter called event.
This object is used to access information about the key that was pressed. The keyEvent event objects have the following properties:

- key: an integer corresponding to the char code (UTF-8) of the key being pressed.
- keycap: a value corresponding to the key that was pressed to generate the event. The keycap is a static code for each physical key. The keycap for a particular key is always the same, regardless of differences in the printed keyboard layout (for example QWERTY, AZERTY).
- pressed: a Boolean indicating whether the key was pressed or released.
- altPressed: a Boolean indicating whether the Alt modifier key was held while the key was pressed.
- shiftPressed: a Boolean indicating whether the Shift modifier key was held while the key was pressed.
- ctrlPressed: a Boolean indicating whether the Ctrl modifier key was held while the key was pressed.
- pressedDuration: the length of time in milliseconds that the key has been held in a pressed state.
- unicode: the Unicode value of the key pressed to generate the event.

Signal traversal

KeyListener signals are first emitted for the control that's currently in focus. Then, the signals are emitted for its parent control, and so on, all the way through the tree. Unsupported controls are skipped automatically during the traversal, and there is no way to stop the propagation manually.

In the diagram below, the control tree on the left illustrates the traversal path if the focus is on the F node. The control tree on the right illustrates the traversal path when nothing is in focus. The root of the scene (in this case, the C node) receives the KeyListener signal, and the traversal continues toward the TabbedPane. The blue arrows illustrate the order in which the KeyListener signals are emitted for the different controls.

Signal traversal example

Consider the following example that includes a Page with a Container, TextField, Button, and Slider.

Page {
    Container {
        TextField {}
        Button {}
    }
    Slider {}
}
- A KeyListener attached to the TextField receives key events only when the TextField is in focus.
- A KeyListener attached to the Container receives key events when either the TextField or Button is in focus, but not when the Slider is in focus.
- A KeyListener attached to the Page receives key events when any object is in focus.
- If a KeyListener object is attached to the TextField, the Container, and the Page, three signals are emitted when a key is pressed with the TextField in focus. The signals are emitted beginning with the object in focus and moving toward the root.

Handling keyboard events

Here are some examples of how to handle keyboard events using input sessions and KeyListener. In Cascades, attach a KeyListener to a supported node type and specify the logic for how your app responds to a key event. For low-level keyboard events, request the screen event through BPS, and look for events of type SCREEN_EVENT_KEYBOARD.

import bb.cascades 1.1

Page {
    keyListeners: [
        KeyListener {
            onKeyEvent: {
                console.log("A " + event.unicode + " key event has occurred")
            }
            onKeyPressed: {
                console.log("The " + event.unicode + " key was pressed")
            }
            onKeyReleased: {
                console.log("The " + event.unicode + " key was released")
            }
        }
    ]
}

#include <bb/cascades/KeyEvent>
#include <bb/cascades/KeyListener>
#include <bb/cascades/Page>

KeyListener* myKeyListener = KeyListener::create()
    .onKeyEvent(this, SLOT(onKeyEventHandler(bb::cascades::KeyEvent *)))
    .onKeyPressed(this, SLOT(onKeyPressedHandler(bb::cascades::KeyEvent *)))
    .onKeyReleased(this, SLOT(onKeyReleasedHandler(bb::cascades::KeyEvent *)));

Page* myPage = Page::create();
myPage->addKeyListener(myKeyListener);

#include <bps/screen.h>
#include <bps/bps.h>
#include <bps/event.h>

void handleScreenEvent(bps_event_t *event)
{
    screen_event_t screen_event = screen_event_get_event(event);
    int screen_val;

    screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_TYPE, &screen_val);
    if (screen_val == SCREEN_EVENT_KEYBOARD) {
        screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_KEY_FLAGS, &screen_val);
        if (screen_val & KEY_DOWN) {
            screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_KEY_SYM, &screen_val);
            printf("The '%c' key was pressed\n", (char)screen_val);
            fflush(stdout);
            // ...
        }
        // ...
    }
}

Last modified: 2015-05-07
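Building on the QML example above, here is a sketch that also inspects the modifier and duration properties listed earlier (the handler body is illustrative only):

```qml
KeyListener {
    onKeyPressed: {
        // event carries the properties described in the Key events section
        if (event.ctrlPressed) {
            console.log("Ctrl + " + event.unicode +
                        " held for " + event.pressedDuration + " ms");
        }
    }
}
```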
Why Plotly?

Plotly's Python graphing library is an interactive, open-source plotting library that supports over 40 unique chart types. These chart types cover a wide range of statistical, financial, geographic, scientific, and 3-dimensional use-cases. We will walk through how to make interactive, publication-quality graphs ranging from line plots and scatter plots to histograms, heatmaps, subplots, and bubble charts.

Let's compare Plotly with Matplotlib, another commonly used library for data visualization in data science. We will create synthetic data and then plot it with both Matplotlib and Plotly.

Importing the required libraries:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.offline as pyo

Creating a matplotlib plot:

# create fake data:
df = pd.DataFrame(np.random.randn(100,4), columns=['col1','col2','col3','col4'])
df.plot()
plt.show()

This is just a static image without any interactivity.

Creating a plotly plot:

pyo.plot([{
    'x': df.index,
    'y': df[col],
    'name': col
} for col in df.columns])

- You can compare data while hovering over the plot as shown in the figure above.
- Clicking a trace in the legend hides it, and double-clicking a trace isolates it. Double-click again to redisplay the other traces.
- A file named temp-plot.html is saved in your working directory (i.e. where your .py file is saved). We'll see later how adding a filename='something-else.html' argument lets you change the name of the file (useful when working with multiple plots). Re-running the .py file or jupyter notebook replaces earlier copies of the file.
- You can also download this plot to a static .png image file if you want.
Creating scatter plots with Plotly:

import plotly.offline as pyo
import plotly.graph_objs as go
import numpy as np

np.random.seed(42)
random_x = np.random.randint(1,101,100)
random_y = np.random.randint(1,101,100)

data = [go.Scatter(
    x = random_x,
    y = random_y,
    mode = 'markers',
    marker = dict(      # change the marker style
        size = 12,
        color = 'rgb(51,204,153)',
        symbol = 'pentagon',
        line = dict(
            width = 2,
        )
    )
)]
layout = go.Layout(
    title = 'Random Data Scatterplot',             # graph title
    xaxis = dict(title = 'Some random x-values'),  # x-axis label
    yaxis = dict(title = 'Some random y-values'),  # y-axis label
    hovermode = 'closest'  # handles multiple points landing on the same vertical
)
fig = go.Figure(data=data, layout=layout)
pyo.plot(fig, filename='scatter_plot.html')

Notice how we bundled both the data and the layout inside a Figure, and had Plotly graph the figure as HTML. We also used the following argument to change the marker style:

marker = dict(      # change the marker style
    size = 12,
    color = 'rgb(51,204,153)',
    symbol = 'pentagon',
    line = dict(
        width = 2,
    )
)

Bubble Charts

Bubble charts are simply scatter plots with the added feature that the size of the marker can be set by the data. Replacing the 'marker' dict in the code above with the one given below will give us a bubble chart. Try it out yourself!

marker = dict(      # size the markers from the data
    size = 1.5*random_x,
    color = 'rgb(51,204,153)',
    line = dict(
        width = 2,
    )
)

Creating box plots with Plotly:

At times it's important to determine if two samples of data belong to the same population. Box plots are great for this! The shape of a box plot (also called a box-and-whisker plot) doesn't depend on aggregations like the sample mean. Rather, the plot represents the true shape of the data. Also, depending on how the whiskers are constructed, box plots are useful for identifying true outliers of a data set. A box plot identifies those points that lie far from the median compared to the rest of the data.
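This idea of flagging far-from-median points can be sketched in plain Python using Tukey's common 1.5*IQR rule (an illustration only, not Plotly's internal algorithm; the function name is our own):

```python
from statistics import quantiles

def tukey_outliers(data, k=1.5):
    """Return points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, _, q3 = quantiles(data, n=4)   # sample quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]

print(tukey_outliers([1, 2, 3, 4, 5, 100]))  # flags the lone extreme point
```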
The whiskers are commonly drawn at Q1 - 1.5*IQR and Q3 + 1.5*IQR, where Q1 and Q3 are the first and third quartile respectively and IQR = Q3 - Q1 is the interquartile range; points beyond the whiskers are treated as outliers.

import plotly.graph_objects as go
import numpy as np

x_data = ['Ajinkya Rahane', 'Virat Kohli', 'Prithvi Shaw',
          'Wriddhiman Saha', 'Mayank Agarwal', 'Shubman Gill']

N = 100
y0 = (10 * np.random.randn(N) + 30).astype(int)
y1 = (13 * np.random.randn(N) + 38).astype(int)
y2 = (11 * np.random.randn(N) + 33).astype(int)
y3 = (9 * np.random.randn(N) + 36).astype(int)
y4 = (15 * np.random.randn(N) + 31).astype(int)
y5 = (12 * np.random.randn(N) + 40).astype(int)

y_data = [y0, y1, y2, y3, y4, y5]

colors = ['rgba(93, 164, 214, 0.5)', 'rgba(255, 144, 14, 0.5)',
          'rgba(44, 160, 101, 0.5)', 'rgba(255, 65, 54, 0.5)',
          'rgba(207, 114, 255, 0.5)', 'rgba(127, 96, 0, 0.5)']

fig = go.Figure()

for xd, yd, cls in zip(x_data, y_data, colors):
    # add one box trace per batsman
    fig.add_trace(go.Box(
        y=yd,
        name=xd,
        boxpoints='all',
        jitter=0.5,
        whiskerwidth=0.2,
        fillcolor=cls,
        marker_size=2,
        line_width=1))

fig.update_layout(
    title='Runs Scored by the Top 6 Scoring Indian Batsmen in the 2020 Australia tour',
    yaxis=dict(
        autorange=True,
        showgrid=True,
        zeroline=True,
        dtick=5,
        gridcolor='rgb(255, 255, 255)',
        gridwidth=1,
        zerolinecolor='rgb(255, 255, 255)',
        zerolinewidth=2,
    ),
    margin=dict(l=40, r=30, b=80, t=100),
    paper_bgcolor='rgb(243, 243, 243)',
    plot_bgcolor='rgb(243, 243, 243)',
    showlegend=False
)
fig.show()

Creating dist plots with Plotly:

Distribution plots, or displots, typically layer three plots on top of one another. The first is a histogram, where each data point is placed inside a bin of similar values. The second is a rug plot: marks are placed along the x-axis for every data point, which lets you see the distribution of values inside each bin. Lastly, displots often include a "kernel density estimate", or KDE line, that tries to describe the shape of the distribution.
import plotly.figure_factory as ff
import numpy as np

# Add histogram data
x1 = np.random.randn(200) - 2
x2 = np.random.randn(200)
x3 = np.random.randn(200) + 2
x4 = np.random.randn(200) + 4

# Group data together
hist_data = [x1, x2, x3, x4]
group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']

# Create distplot with custom bin sizes
fig = ff.create_distplot(hist_data, group_labels, curve_type='normal',
                         bin_size=[.1, .25, .5, 1])

# Add title
fig.update_layout(title_text='Distplot with Normal Distribution')
fig.show()

These were a few examples of how to use Plotly to create amazing plots and charts. In the next post, we will go through the basics of Dash, which will enable you to design and build a basic dashboard from scratch in Python.
python interface for the atomx api

Interface for the atomx rest api. For more information read the full documentation online, report bugs in github or see the atomx wiki.

from atomx import Atomx

# create atomx session
atomx = Atomx('user@example.com', 'password')

# get 10 creatives
creatives = atomx.get('Creatives', limit=10)
# the result is a list of `atomx.models.Creative` models
# that you can easily inspect, manipulate and update
for creative in creatives:
    print('Creative ID: {c.id}, state: {c.state}, '
          'name: {c.name}, title: {c.title}'.format(c=creative))

# update title for the first creative in list
creative = creatives[0]
creative.title = 'shiny new title'
# the session is inherited from `atomx` that made the get request
creative.save()

# create a new profile
from atomx.models import Profile
profile = Profile(advertiser_id=23, name='test profile')
# Note that you have to pass it a valid `Atomx` session for create
# or use `atomx.create(profile)`
profile.create(atomx)

# now you could alter and update it like the creative above
profile.name = 'changed name'
profile.save()

# you can also get attributes
profiles = atomx.get('advertiser', 88, 'profiles')
# equivalent is to pass the complete resource path as string instead of arguments
profiles = atomx.get('advertiser/88/profiles')  # same as above
# profiles is now a list of `atomx.models.Profile` that you can
# read, update, etc again.
profiles[0].click_frequency_cap_per = 86400
profiles[0].save()

# working with search
s = atomx.search('mini*')
# s is now a dict with lists of search results for the different models
# with the model id and name
publisher = s['publisher'][0]  # get the first publisher..
publisher.reload()  # ..
# .. and load all the data
print(publisher)  # now all publisher data is there
publisher.history()  # gets all changes made to this publisher

# reporting example
# get a report for a specific publisher
report = atomx.report(scope='publisher',
                      groups=['hour'],
                      metrics=['impressions', 'clicks'],
                      where=[['publisher_id', '==', 42]],
                      from_='2015-02-08 00:00:00',
                      to='2015-02-09 00:00:00',
                      timezone='America/Los_Angeles')

# check if report is ready
print(report.is_ready)

# if pandas is installed you can get the pandas dataframe with `report.pandas`
# you can also get the report csv in `report.content` without pandas
df = report.pandas

# A datetime index is automatically set when grouping by hour/day/month.
# calculate mean, median, std per hour
means = df.resample('H', how=['mean', 'median', 'std'])

# and plot impressions and clicks per day
means['impressions'].plot()
means['clicks'].plot()

To install the python atomx api, simply:

$ pip install atomx

or if you want to use ipython notebook and reporting functionality:

$ pip install atomx[report]
Function Overloading in C++

In this CPP tutorial, we are going to discuss the concept of function overloading in C++ and its implementation.

What is function overloading in C++?

Function overloading is a feature in C++ where two or more functions can have the same name but do different work. Overloaded functions are distinguished by the number and/or the types of their parameters; a difference in the return type alone is not enough.

Ex:-
void max(int a, int b);
int max(int a, int b, int c);

CPP code implementation of function overloading

#include<bits/stdc++.h>
using namespace std;

int maxx(int a, int b)
{
    int res = a > b ? a : b;
    return res;
}

int maxx(int a, int b, int c)
{
    int res = a > b ? a : b;
    return res > c ? res : c;
}

double maxx(double a, double b)
{
    return a > b ? a : b;
}

char maxx(char a, char b)
{
    return a > b ? a : b;
}

int main()
{
    cout << maxx(1,2,3) << endl;
    cout << maxx(1,2) << endl;
    cout << maxx(2.3,3.4) << endl;
    cout << maxx('a','d') << endl;
}

OUTPUT

3
2
3.4
d

Functions that cannot be overloaded

- Member function declarations with the same name and the same parameter list cannot be overloaded if any of them is a static member function.
- Functions which differ only in return type cannot be overloaded.
- Parameters that differ only in pointer type declaration vs. array type declaration are considered to be equivalent, so they are not distinct for overloading.
- Two parameter types that differ only in the presence or absence of top-level const are considered to be equivalent, so they are not distinct for overloading either.

Function overloading vs function overriding

- Function overloading is resolved at compile time while function overriding happens at run time.
- Overloading uses the same function name with a different number or types of parameters, while overriding is a redefinition of a base class function in its derived class with the same return type and parameters, using the keyword virtual on the base class function.
- Function overriding cannot occur without inheritance, while overloading can.
- Overloaded functions are in the same scope, while overridden functions are in different scopes (the base class and the derived class).

Can main() be overloaded?

main is not a reserved keyword in C++, so the name can be reused. To overload main we can define a class and use main() as the name of its member functions.

C++ code implementation of main() overloading:

#include <bits/stdc++.h>
using namespace std;

class overload
{
public:
    int main(string s)
    {
        cout << s << endl;
        return 0;
    }

    int main(int a)
    {
        cout << a << endl;
        return 0;
    }

    int main(int a, int b)
    {
        cout << a + b;
        return 0;
    }
};

int main()
{
    overload o;
    o.main("main() called");
    o.main(1);
    o.main(1, 2);
    return 0;
}

OUTPUT

main() called
1
3
https://www.codespeedy.com/function-overloading-in-cpp/
package sun.swing;

import java.util.*;

/**
 * <b>WARNING:</b> This class is an implementation detail and is only
 * public so that it can be used by two packages. You should NOT consider
 * this public API.
 * <p>
 * <b>WARNING 2:</b> This is not a general purpose List implementation! It
 * has a specific use and will not work correctly if you use it outside of
 * its use.
 * <p>
 * A specialized ArrayList that caches its hashCode as well as overriding
 * equals to avoid creating an Iterator. This class is useful in scenarios
 * where the list won't change and you want to avoid the overhead of
 * hashCode iterating through the elements invoking hashCode. This also
 * assumes you'll only ever compare a BakedArrayList to another
 * BakedArrayList.
 *
 * @author Scott Violet
 */
public class BakedArrayList extends ArrayList {
    /**
     * The cached hashCode.
     */
    private int _hashCode;

    public BakedArrayList(int size) {
        super(size);
    }

    public BakedArrayList(java.util.List data) {
        this(data.size());
        for (int counter = 0, max = data.size(); counter < max; counter++) {
            add(data.get(counter));
        }
        cacheHashCode();
    }

    /**
     * Caches the hash code. It is assumed you won't modify the list, or that
     * if you do you'll call cacheHashCode again.
     */
    public void cacheHashCode() {
        _hashCode = 1;
        for (int counter = size() - 1; counter >= 0; counter--) {
            _hashCode = 31 * _hashCode + get(counter).hashCode();
        }
    }

    public int hashCode() {
        return _hashCode;
    }

    public boolean equals(Object o) {
        BakedArrayList list = (BakedArrayList)o;
        int size = size();

        if (list.size() != size) {
            return false;
        }
        while (size-- > 0) {
            if (!get(size).equals(list.get(size))) {
                return false;
            }
        }
        return true;
    }
}
http://checkstyle.sourceforge.net/reports/javadoc/openjdk8/xref/openjdk/jdk/src/share/classes/sun/swing/BakedArrayList.html
XHTML 1.0 now a W3C Recommendation

Posted by jimjag on Thu Jan 27, 2000 01:05 AM, from the still-lost-in-dynamic-html dept.

thehermit writes "New info on the W3C's Web site as XHTML 1.0 became a W3C Recommendation on Jan. 26. The specification now features a single namespace, and takes a more cautious approach to Internet media types, following feedback from W3C members on the previous version of the specification." W3C notes that "XHTML 1.0 is the first step toward a modular and extensible Web based on XML". The full XHTML spec is also available.

No one uses XML?! Are you for real? Or a troll.. (Score:5)

WHAT?! XML is the best thing since sliced bread! And, no, this isn't a troll; I mean this! Hard to parse? What are you talking about? First of all, YOU shouldn't be parsing it. I don't care what language you're coding in, you'll probably find that someone else has taken care of that for you. I use Perl primarily, and switch between XML::DOM and XML::Parser, both of which handle all of the dirty work of chewing on the tags and characters.

As I mentioned in the story on the Slashdot code release, I have a project: Iaijutsu: Open Source Content Management and Web Application Framework [ninjacode.com]. And this project makes extensive, pervasive use of XML.

Then you've REALLY missed the boat. XML is EASY. Screw comma delimited; I've actually found it easier and more maintainably elegant to write quick Perl scripts which use XML::DOM than to hack out a CSV parser. Hell, I even have Oracle DB servers spewing XML streams at me to handle. XML is far from failed. Go back and try it again.
As for XHTML, I don't know that it will ever be truly adopted, but if it catches on... we could write web browsers and web service consumers in a fraction of the time and code.

Re:Compatibility with existing browsers (Score:5)

XHTML, like all XML, is *required* to be "well-formed", which basically means matched tags, no missing quotes, etc. The XML 1.0 Recommendation *requires* implementations to stop normal processing of an XML document that isn't well-formed. In short, if it isn't well-formed, it isn't XML. Browsers will eventually get smart about this. Mozilla already is.

Anon on purpose. Moderate accordingly. Posted with M13

Re:Meta-Standard? (Score:4)

But I don't see them specifically ruled out either, any more than in HTML 4.01. Sure, W3C don't want people using them, but there's nothing much they can do about that. Yeah, I know it's very worthy and everything, but have you ever tried converting a table layout to CSS? It ain't fun. First, of course, browser support is terrible; Netscape tends to break if you have the temerity to put a positioned element inside another positioned element, and it messes the whole page up if you try to mix CSS-P with tables to achieve some kind of graceful degradation in older browsers.

But that's not what's wrong with the standard, obv. What's wrong is the total lack of flexibility in positioning. Normally with positioning you want to say things like "this element is to go 3 ems to the left of that element", or "this element should line up horizontally with that element and vertically with the other element". But CSS gives you only two choices: specify an absolute page position, or move the element a bit in some direction; you can't mix the two horizontally and vertically, and the latter option is usually useless anyway since it leaves an element-shaped hole in the parent. This could nearly be half-workable, since you can achieve more complex effects by putting elements inside other elements.
But Netscape 4 breaks so very, very badly if you try that, the page often becomes completely unreadable. So what you end up doing is either making every element absolutely-positioned to the page pixel, which is okay for the kind of fixed-layout fixed-width page which idiots write, but otherwise useless, or you end up writing a complete page-layout engine in several KB of JavaScript at the top of the page, slowing everyone down. And of course writing layout JavaScript that works with IE4+, Netscape 4 and the W3C DOM is a Sisyphean task. Oh, and of course people with JavaScript turned off are screwed.

To summarise: CSS is not up to producing interesting, dynamic-page-size layouts, and browser-supported CSS is not up to anything at all. To summarise the summary: Style. Is a problem. To summarise the summary of the summary: Aaaarrrrrghhh.

-- This comment was brought to you by And Clover.

Re:XHTML - Extend HTML without breaking standard (Score:3)

No! This is a total misunderstanding. XHTML 1.0 is simply a recasting of HTML 4.01 into XML-compliant syntax. You cannot extend XHTML as such by adding your own tags. You can produce hybrid documents by combining XHTML with other XML dialects, but the result would not be XHTML. You could even combine XHTML with XML dialects you create yourself. But you would be very foolish to do so. XML dialects are only useful if they serve a significant community who have tools which understand the dialect and can do useful things with them. If you just make it up yourself as you go along, then the only thing you can really do with it is use XSL to translate it back into standard XHTML, so you've gained nothing.

Re:What does this do for me? (Score:3)

If XML is a failure, then I hope we should all fail so spectacularly! I'll be writing the XML handlers that send out welfare cheques to you, and all the other unemployed CSV import coders. The downside and "failure" of XML is that it's still immature as a wetware discipline (not as a protocol).
XML and especially schema design is regarded in the same way as database design was 5-6 years ago. For years RDBMS design had been the sole preserve of gurus like Ted Codd (i.e. the SGML era), then along come M$oft with Access and suddenly everyone and their dog thinks they're a real database designer. Cue a whole pile of badly normalised (or just downright ugly) data models, or in today's situation a lot of nasty slapped-together XML structures. It will be a year or so before people realise that XML schema design is a discipline in just the same way as good RDBMS design is.

TipOfTheDay: Use tags like <br /> instead of <br/> when writing "tight" HTML, otherwise older browsers choke on it.

Compatibility with existing browsers (Score:3)

Looking over the spec, I see that the W3C spec will begin enforcing things that most browsers have allowed, such as tags without a closing tag. Any idea how browsers such as Mozilla or whatever will deal with this restriction? Are we going to be getting errors or unrenderable pages due to bad HTML? Frankly, I hope we do :-) It'd serve them right. Just an observation/question. If you can't figure out how to mail me, don't.

XHTML - Extend HTML without breaking standard (Score:3)

While it might not be realistic, the W3C likes to envision a future where clients become much more lightweight and flexible by putting all parsing and presentation into standard XML parsers and stylesheet tools. Currently a significant amount of browser bloat is due to the fact that the browsers pretty much render anything you throw at them. Hopefully this will change lest our HTML parsers grow to 20MB.
http://slashdot.org/articles/00/01/26/150247.shtml
Using a different character set inside a JavaScript file

Inside my .js file I have some text in Hebrew. However, whenever it is written to the document or displayed in any way (alert() for example), it is shown as gibberish. Any idea how to fix this? Thanks.

HTML Code:

<script type="text/javascript" src="/script.js" charset="utf-8"></script>

Hmm, thanks... that works with my test pages, but when I try to use it in my real application (which is XUL-based) nothing changes. Here is how I use the script:

Code:

<?xml version="1.0" encoding="Windows-1255"?>
<?xml-stylesheet
<script type="text/javascript" src="common.js" charset="windows-1255" />
...
</window>

Code:

Content-type: application/vnd.mozilla.xul+xml; charset=windows-1255

If I add the HTML namespace and use the <html:script /> tag instead, the Hebrew text is displayed correctly... the only problem is that the script seems to get included twice - the first load gives the right text but the second load gives the 'bad' text. Maybe charset is the wrong attribute for the XUL <script /> tag?
http://www.sitepoint.com/forums/showthread.php?167250-Using-a-different-character-set-inside-a-JavaScript-file&p=1206413
Hi friends; I've run into a problem trying to complete the below assignment:

Write a function called myName which returns (does not cout) your full name (e.g., "Bob Smith"). Write main to call the function and display the name on the screen. NOTE: Your name should not appear anywhere other than in the function myName.

The problem, as I understand it, is that the string MyName cannot be converted into an integer, and the function must use int to define itself. I'm between a rock and a hard place, and I keep running into error code C2440: 'return' : cannot convert from 'std::string' to 'int'

#include <iostream>
#include <string>
using namespace std;

int MyName()
{
    string MyName = "TK 421";
    return MyName;
}

int main()
{
    MyName();
}

Also I used my skills in the internets to try to solve the problem, but the best I found was a bit of code that used a char function as a pointer to the string. Or something like that, but I didn't understand it and I'm not sure I'm allowed to do it for this problem. If it is the only way, could someone explain how it works please?
https://www.daniweb.com/programming/software-development/threads/235509/dr-myname-or-how-i-learned-to-stop-worrying-and-love-using-strings-in-functions
The team has been quietly working on a number of improvements to the entity designer that we would like to show off and get feedback on. This particular set of features is targeted at making modeling more productive and scalable, and models themselves more comprehensible. We'll start with the small, fun things, and build up to a grand finale.

1. Association Highlighting

Let's start with this. Consider the following model:

We would like to rename the navigation properties to something meaningful, as well as the foreign keys. But which navigation property is related to which foreign key? And to which association connector?

So, now you can select an association or navigation property and find out. Here, we select the "Address.Person" navigation property, and the designer highlights the two entity types involved in the association, the association connector, the navigation property on the other side, as well as the foreign keys that hold the navigation information. This makes it easier for us to rename things to:

2. Property Reordering

Imagine that you want to add a "Name" property to the "Property" type above. You do this and end up with a type that looks like this:

But of course, you want this new property to be the second one in the type, below "Id". Well, now you can select this property and, for example, press Alt + Home, which will move that property to the top of the type, then Alt + Down Arrow to move it down one slot. Here is a screenshot of the type, along with the new "Move Properties" menu:

You can now select multiple properties at the same time and move them. For example:

3. Entity Shape Coloring

Imagine, now, that our model has gotten more complex, and viewing it all in one window renders areas of the diagram hard to distinguish. Entity shape coloring allows us to visually divide a single diagram into multiple meaningful areas by color coding them. You can select one or more entities and, in their property sheet, change their color, which lets us do something like this:

4. Multiple Diagrams!

But what if you just want to look at subject areas within a diagram? Say, the product catalog entities, or the sales types. In that case, you can select any number of entities and move them into a new diagram by right-clicking and selecting the new "Move to new Diagram" menu item. If we do this to our "Property" type, we'll see it appear in a new diagram. And, if we make the model browser tool window visible, we'll see this:

Note that we now have a new "Diagrams" folder in the model browser, and that the new diagram opens in its own tab, allowing you to have multiple windows open at once against the same model.

This diagram is a little sparse. Let's bring into it any types related to "Property" by right-clicking it and selecting "Include Related":

Now, color the "Person" type green, then bring in its related types, and see how they pick up the original type's color:

(Note that this model is not a real-world one and is only intended for visualization purposes.)

Some additional notes:

- You can also drag and drop associations, types, and entity sets from the model browser.
- You can cut or copy and paste objects from one diagram to another.
- Deleting objects from a diagram will no longer delete them from the model, but only from the diagram.
- You can delete objects from the model by using Shift+Del, right clicking and selecting "Delete from Model", or deleting them from the model browser.
- Diagrams are stored in a child file of the edmx for new edmx files. For backwards compatibility, models created in previous versions of Visual Studio will keep the diagram information in the EDMX file itself. However, you can move the diagram information from the EDMX to a child file by right clicking and selecting the "Move Diagrams to Separate File" context menu.

We look forward to your feedback!

Noam

Hi Noam, Looks good, particularly the highlighting of each association when dealing with multiple associations between related entities; I had to do this manually not 20 mins ago 🙂 Also the copy and paste between diagrams would be very useful, especially since I tend to keep one edmx per functional area and so often need to recreate overlapping areas, eg the overlap between Accounts Receivable and Accounts Payable edmxs. Entity shape colouring goes some way in this area as well.

Another time consuming designer task is renaming from Db field names to Object Property names. It would be great if, when you create a new edmx with overlapping tables, you could 'scavenge' mapping names from other edmx files in the solution. eg how many times I might have to rename blnAct to IsActive etc. It would be great if the existing mappings were treated as a reusable asset in this respect, maybe exportable and importable in some key value format??

BTW Are there improvements on the way to composite key handling (primary and foreign), or FKs that reference unique keys rather than the primary key?

One last thought: Another major time consumer is the round tripping in the model first/db first journey. Currently if I make any non trivial changes to the model I have to create a new blank database from the 'Generate Database from Model' tool, then use a third party tool to diff the existing and new dbs, or just figure out the changes manually. A more comprehensive path to update the model and produce a db change script would be a huge step forward.
Finally, is there any way to enjoy any of these new features out-of-band, ie before the next VS version, and if so how? Cheers, Simon

Great! When will this functionality be published?

This looks great! When can we have it? 🙂

Simon, a lot of the time consuming problems have already been solved by other O/RM designers. LLBLGen Pro () for instance, even lets you generate code for EF.

It's great Noam and we already spoke many times about the importance of the designer (for my customers particularly). However, what about complex types on the designer, what about Horizontal Entity Splitting, what about TPC? Matthieu

Love the improvements! Much needed on my project. Will this be deployed in SP1?

Looks awesome, particularly like the multiple diagrams!

Please please please please ProviderManifestToken in the designer so that we can SET it. It's a real PITA right now to develop for SQL Server 2005 when we have SQL Server 2008 DBs here.

+1 for ProviderManifestToken. Hit me many times before.

The idea of multiple diagrams in theory should solve performance issues. Let's hope so… When can we get these bits to play with?

@Gary, thanks, looks interesting and very comprehensive. I guess I am just hoping to get some of the core offerings from MS improved 🙂

Thanks for the feedback! Yes, we are working on the unique key handling feature, and are looking at a few different ways of making the model-first feature migrate, rather than recreate, your database. I'll work with the team to see if we can push ProviderManifestToken handling into the next release!
One other thing that I seem to spend a bit of time doing but have to leave the designer to do is to manually add the DefaultValue attribute to the various not nullable storage entities that are not mapped to conceptual entities. It would be great if the model generator could read this info from the db and automatically apply these attributes to the storage and/or conceptual models. 1. We could not get these improvements shipped in SP1. We'll ship them as soon as we can after that release. 2. The diagrams can be stored in a separate file from the EDMX, which should alleviate some source merging issues: For backwards compatibility, existing EDMX files will keep the diagrams in the EDMX file, so you can open them in older versions of VS. New EDMX files will store the diagrams in a separate file. For existing EDMX files, there is a right-click context menu item called "Move Diagrams to Separate Files" that you can use if you do not need/want backwards compat. Like the improvements too! Multiple Diagrams will be very useful feature. Great job! @Noam, looks great! @Simon, if you want to add incremental SQL-DDL generation to avoid having to regenerate the db, take a look at my 'model comparer' feature for EFv4. It is part of my add-in Huagati DBML/EDMX Tools for VS2010, and gives you more granular control over DB <=> SSDL <=> CSDL differences and allow you to bring individual changes across without touching unrelated areas of the model. I have posted a couple of screencasts showing what it can do here: huagati.blogspot.com/…/introducing-model-comparer-for-entity.html huagati.blogspot.com/…/using-model-comparer-to-generate.html …and if you want to take it for a test-spin you can download the add-in and get a trial license at Nice, real nice. Bring it on I guess. awesome work. cant wait to get my hands on the new features and new designer. OMG, I'm absolutely gurgling with glee with these designer improvements! 
In particular, multiple diagrams are going to be an absolute god send. It's a total nightmare at the moment when we have multiple devs trying to edit the same diagrams. Will we be able to include the same entity in multiple diagrams though? From the "move to new diagram" option, it sounds like an entity can only exist in one diagram at a time. Quite often I have entities that relate to lots of other enties in different ways, and it would be great to be able to include them in each sub-diagram. Of course, if you make changes to an entity in one diagram, it would have to also keep any other instances in sync too. Hopefully we won't have to wait too long for these enhancements. The EF really needs to maintain a fast pace – being tied in with the mololithic Visual Studio releases is too slow. I second the request for the designer to read defaults from the database schema for non-nullable fields, this is a tedious and time-consuming job to do each time we create a model. Would be really good to include niladic function defaults here as well, especially getdate() and suser_sname(), we use these extensively in our databases for logging etc. Also, same question as Daniel Smith, will we be able to include the same entity in more than one diagram (like we can with SQL Server database diagrams) ? This will make it an awful lot easier to work on larger models with multiple developers. Great new features! When can we have it as a CTP or a Power Pack or something or anything?!? 🙂 It seems really complete… Have you seen this site for customer feedback on Entity Framework? data.uservoice.com/…/72025-ado-net-entity-framework-ef-feature-suggestions This looks great… though I probably will maintain my current method of splitting my database tables into groups and therefore edmx models. This gives strength for improved security model and pluggable data providers; so I can develop one area of the systems data model with no effect to others, I can then sell this as a plug add on. 
This gives rise to JasonBSteele's similar problem of version and change control on edmx models. Another important step would be 'using' … one edmx could import and use the objects of another that is imported, making them readonly. @Kristofer ขอบคุณครับ that looks very impressive. Great to see designer features that deal with SSDL and the SQL diff script look like just what I was looking for. Will definitely give it a try. Great news ! Almost all these improvements were features I've been missing ! Nice work 1. second the shared entity across multiple diagrams 2. What about basic documentation and annotation for the diagrams ? ala SQL Diagrams – Ability to document and annotate the models with text, labels, boxes etc that help describe the intent of the model right on the design surface. 1. second the shared entity across multiple diagrams 2. What about basic documentation and annotation for the diagrams ? ala SQL Diagrams – Ability to document and annotate the models with text, labels, boxes etc that help describe the intent of the model right on the design surface. One very useful feature would be scripting or macros in the designer so I can script exverything I do to a model and then blow away the model, recreate it from an updated db and then rerun my scripted changes. It would be nice to be able to specify extended properties on a table/column/…. in the designer. It would be nice to be able to delete something from the model and then re-add it without having to worry about messing up the mapping and storage parts of the edmx and being forced to hack it. It would be nice to be able to set a primary key guid column to identity without having to hack the edmx. It would be nice to be able to specify default values for columns right in the Designer it would be nice to import the nullability straight from the database. 
Some kind of working sych tool so I can synch an editted model with and editted db and pick and choose how the sych happens and then have everything work without having to redo all my changes, or hack the edmx file. Some kind of edmx merger to allow multiple developers to editit the model and then merge the changes without having to parse the edmx file and merge it in their heads. It seems like you guys are working on fluff rather than some of the basics. I liked this new feature very much. I would like to vote for another feature, using the database table schema as part of generated class namespace. (I don't know if this doable in the t4 templates) when this new designer enhancement will be available (weeks or months)? Thanks Looks good, nice work! My little wish list: – extending properties and the availability of these extensions in T4 – more business rules in the model (data annotations, inter entity rules and inter record rules) – a solution for having a framework with edmx and a project with edmx so framework model and data code can be resused between projects within 1 context (f.e. merging context or using an edmx in another edmx). Any idea when these improvements will become mainstream? @Kendall Morley, Many of the features you're asking for are available in the Model Comparer for EFv4 that is part of this Visual Studio add-in: It allows you to bring selective changes across, it uses documentation from extended properties, it allows you to set up rules as for what db-side default constraints should be treated as identity/computed, it syncs nullability/storegenerated/default value between CSDL and SSDL, and it allows you to select exactly what changes you want to bring across between the layers _without_ touching any unchanged portions of the model. Download it and try it out and I think you will find that it complements the EFv4 designer in VS2010 with a lot of the things you're asking for. Great stuff! 
Wondering if you can continue to dodge =P the question of a release schedule for these changes? my 0.02pecs worth Multiple diagrams are great. Will there be support for multiple namespaces? (With the possibility to associate 2 entities from different namespaces, this could simplify big models) zano04 – Multiple namespaces are not on the docket for the next release, they'll have to wait for a future one… One thing I'd really need at the moment is the ability to populate the list of options presented to the user for custom properties added via a Visual Studio extension and a EntityDesignerExtendedProperty. The enums are nice for things that are static, but annoyingly restrictive. For example if you need the developer to be able to specify an entity as the value of the custom property, all you can do is to make the property a string and add a validation into its setter. No way to give him a list. I'd also like to add a property that'd let me specify what permissions must a user have to be able to edit something (I'll script it out as an attribute then and use it in a custom model binder and in html helpers), but again, how do I give the developers the list of permissions? In this case as the VS extension is in-house and so far used only for one project, I can copy the enum definition into the extension's sources, but that's rather … suboptimal. .NET Links of the Week #40 Most of these features have been implemented in the Devart Entity Developer. It is a powerful modeling tool that allows to build models for LINQ to SQL and Entity Framework. Some of Entity Developer features are presented below: * You can use Model-First and Database-First approaches * You can highlight associations * You can drag-n-drop properties inside entity. Read more about Entity Developer features on our web-site –. 4 months on…..any update on a release of these features even in a CTP? Would love to see more out-of-band releases of this nature rather than ~2 yearly versions :). Any news? 
Makes you wonder how committed Microsoft are to Entity Framework; even the EF design blog only has two entries since this one! Hi, is there any news on when to expect a release?? Coloring and multiple diagrams are desperately needed !! Great work… judging by your screen-shots :)) A month on from Simon asking and almost a month from me asking… I guess no one at Microsoft is reading this any more. Either that or they are no longer interested in EF! Hi all — Thanks for all your suggestions and excitement over the features Noam had laid out here. We had a transition of work, apologies for the lack of responses on the blog during that. We are very interested in EF and are working on even new features for EF and the Entity Designer to get out to you along with these ones. We are also working on faster releases to get these features to you all as soon as possible without you having to wait the long times between major product releases. More soon! Sarah McDevitt This is indeed a long time ago that it was announced and we still have no ideas about when we will be able to use these interesting improvements! I got really excited about this, but 6 months on, and still no sniff of a release date. Any surprise people find alternatives such as devart, LLGenPro, nHibernate, DevForce. The serious shortfalls of edmx (that have seemingly been overcome) at least warrant a CTP, PowerPack or Plugin. I wish I had customers whom I could show them a pretty picture of how something might look, then sit on it for 6 months without a peep. I can only assume (like all others on this thread) that EF just isn’t a priority. Sarah McDevitt said "More soon!" – any idea when soon is? It's been a couple of weeks and still no more posts from Microsoft We are really waiting for it, NOT more diagrams was the last obstacle for us. Hello Microsoft developers, can you give a date when this is available for us? Thank you. Please provide a better way to automatically layout the diagram automatically. 
Almost a month since I asked Sarah when "soon" is and no reply from her, so I guess we can just assume that no news = we don't care, so go away and find something else to use?

OMG!!!! SO NICE!!!! This is SO BADLY NEEDED!!! Thx MSFT!

Soooo, what's the status on this? Is this actually w.i.p.? Or are we waiting for nothing?

I've just installed SP1 hoping this feature (posted 9 months ago!) would be inside, but there is nothing new in SP1… Please give us just a date or a release number where this feature will be included. Thanks

When will we see this? Another few weeks have passed…

blogs.msdn.com/…/announcing-the-microsoft-entity-framework-june-2011-ctp.aspx

Is there somewhere I can download and test these new features? I need the multiple diagrams pretty badly.

@Shaun, click the link in the comment previous to yours (for the June 2011 CTP).
https://blogs.msdn.microsoft.com/efdesign/2010/10/11/entity-designer-improvements-preview/
Hello everyone. I'm having a bit of trouble here, could you give me a hand, please? I need to read the frequencies from microphone input. My code runs fine on desktop, because I'm using Minim, and getting FFT to work there is pretty straightforward. But when I went to Android, I had to abandon Minim, and I'm stuck trying to apply FFT to the input myself (both on desktop and on Android). On Android I get my input from AudioRecord.read( myarray , 0 , bufferSize ); On desktop I get my input from AudioInput.mix.get( n ) and assign each value to a position in myarray. Apparently, it's picking up the same input for both. So input is fine; my problem is that I don't know how to use FFT. I got an FFT Java class from the internet. It asks for a real and an imaginary array. I pass "myarray" as the real array and another array with all 0 values as the imaginary one. For some reason, FFT is giving me negative values. Why? Aren't frequency bands supposed to start from zero? Any idea of what I am doing wrong?
I attached a pic of what's happening (sorry for the bright red): And here's my code in case it helps:

import ddf.minim.*;

AudioInput in;
Minim minim;
double[] real;
double[] imag;
FFT fft;
int bufferSize = 1024;
int sampleRate = 5512;

void setup() {
  size(1024,768);
  minim = new Minim( this );
  in = minim.getLineIn( Minim.STEREO , bufferSize , sampleRate );
  real = new double[bufferSize];
  imag = new double[bufferSize];
  fft = new FFT( bufferSize );
  background(100,100,250);
}

void draw() {
  background( 100 , 100 , 200 );
  noFill();
  stroke( 255 , 255 , 255 );
  strokeWeight(1);
  for ( int i=0; i<bufferSize; i++ ) {
    real[i] = in.mix.get(i)*1000;
    imag[i] = 0;
  }
  for ( int i=0; i<bufferSize; i++ )
    line( i , 200 , i , 200 - (int)(real[i]) );
  fft.fft( real , imag );
  stroke( 255 , 255 , 0 );
  for ( int i=0; i<bufferSize; i++ )
    line( i , 600 , i , 600 - ((int)(real[i])/10) );
}

I'm using an FFT class I got from the interwebs (this one: )

Answers

what's up with using the FFT from within Minim? example:

None, if you can get that to work on Android.

sorry, didn't see that requirement. the code you posted still has minim calls in it...

I wrote that as a test, just to see if I could get the manual FFT thing working on desktop before moving to Android. Actually, I just stumbled upon a Minim FFT implementation for Android. :D I'm still going to try that out, but it should work. Sorry for coming here a little too desperate, I should have googled this a bit more before bothering you guys ;)
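A note on the negative values asked about above: an FFT returns complex coefficients, so after an in-place transform the real array holds the real parts, which can legitimately be negative. What is usually drawn as a spectrum is the magnitude of each bin, sqrt(re*re + im*im). Here is a standalone sketch in plain Java, with a naive DFT standing in for the FFT class from the thread (class and method names are made up for illustration):

```java
public class FftMagnitude {
    // Naive DFT, O(n^2): fine for illustrating what the bins contain.
    static double[] magnitudes(double[] signal) {
        int n = signal.length;
        double[] mag = new double[n];
        for (int k = 0; k < n; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += signal[t] * Math.cos(angle);   // the real part can be negative
                im -= signal[t] * Math.sin(angle);   // the imaginary part too
            }
            mag[k] = Math.sqrt(re * re + im * im);   // the magnitude never is
        }
        return mag;
    }

    public static void main(String[] args) {
        int n = 32;
        double[] signal = new double[n];
        // a pure tone that completes exactly 4 cycles in the window
        for (int t = 0; t < n; t++) {
            signal[t] = Math.sin(2 * Math.PI * 4 * t / n);
        }
        double[] mag = magnitudes(signal);
        // the energy lands in bin 4 (and its mirror n-4); magnitude there is n/2 = 16
        System.out.println(Math.round(mag[4]) + " " + Math.round(mag[0]));
    }
}
```

Plotting mag[i] instead of real[i] in the draw() loop would give a conventional spectrum; for real-valued input only the first bufferSize/2 bins are meaningful.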
https://forum.processing.org/two/discussion/11400/how-to-manually-apply-fft-to-sound-input
In this article, we will discuss the life cycle methods of Angular. Understanding them lets a user follow the entire process by which components are created and destroyed in an application.

What is the Life Cycle Method?
- Every component has a life cycle.
- It goes through a number of different stages between initialization and destruction.
- There are a total of 8 hook stages in the life cycle (plus the constructor).
- Each stage calls a life cycle hook event.
- You can use hooks in different phases of your application to obtain fine control.
- Each component contains a TypeScript class.
- Each component has a constructor method.
- The constructor of the component class executes first, before any life cycle hook.
- After the constructor executes, the life cycle hook methods execute in a specific order.

Life Cycle Methods:
- Constructor
- ngOnChanges
- ngOnInit
- ngDoCheck
- ngAfterContentInit
- ngAfterContentChecked
- ngAfterViewInit
- ngAfterViewChecked
- ngOnDestroy

1) ngOnChanges - This event executes every time the value of an input control within the component changes.

2) ngOnInit - This event executes after ngOnChanges. It is mainly used for initializing data in the component.

3) ngDoCheck - This event executes whenever the input properties of the component are checked. You can also implement your own logic for the properties to be checked.

4) ngAfterContentInit - This event executes after Angular performs any content projection into the component's view. It executes once, the first time the bindings of the component are checked, right after ngDoCheck.

5) ngAfterContentChecked - This event executes after ngAfterContentInit. It is called when the content of the component has been checked by Angular's change detection mechanism. It is also linked with every child component's initialization, and it runs on every subsequent execution of ngDoCheck().
6) ngAfterViewInit - This event executes after the component's view and child views have been initialized. It is called for the first time after ngAfterContentChecked(). This method applies only to components.

7) ngAfterViewChecked - This method is called after the ngAfterViewInit() method. It executes every time the view of the given component has been checked by Angular's change detection algorithm, and on every subsequent execution of ngAfterContentChecked(). It is also executed when any binding of the child directives has been changed, which makes it useful when the component waits for some value coming from its child components.

8) ngOnDestroy - This method executes just once, right before the component is removed from the DOM. It is the place to avoid memory leaks by unsubscribing from observables and detaching event handlers.

Example:

Step 1: Create a new application.
- ng new Demo
- cd Demo

app.component.ts

import { Component, OnChanges, OnDestroy, OnInit } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit, OnChanges, OnDestroy {
  constructor() {
    console.log("Constructor");
  }
  ngOnChanges() {
    console.log("ngOnChanges");
  }
  ngOnInit(): void {
    console.log("ngOnInit");
  }
  ngOnDestroy(): void {
    console.log("ngOnDestroy");
  }
  title = 'demo';
}

see output:
https://www.thecodehubs.com/life-cycle-method-in-angular/
This demo will start you on the track to rock or techno with jMusic, and generally introduce techniques for multi-part ostinati which might also be used for minimalist, African, or Gamelan music. To hear the result, play the MIDI file below. Let's have a closer look.

import jm.JMC;
import jm.music.data.*;
import jm.music.tools.*;
import jm.util.*;

public final class Kit implements JMC{
    public static void main(String[] args){
        Score pattern1 = new Score("JMDemo - Kit");
        Part drums = new Part("Drum Kit", 0, 9);
        Phrase phrBD = new Phrase(0.0);
        Phrase phrSD = new Phrase(0.0);
        Phrase phrHH = new Phrase(0.0);
        //Let us know things have started
        System.out.println("Creating drum patterns . . .");

This section declares the class and sets class variables for most levels of the jMusic data structure. Phrases will hold a pattern for each drum, the 'drums' part will hold those phrases and, although there is only one instrument in this demo, the music in jMusic always needs to be in a score format to create a MIDI file. The part is set to the 10th MIDI channel (i.e. channel 9, counting from 0) for GM playback of drum and percussion sounds. The number 0.0, which is an argument to the Phrase constructor, tells the new phrases that they will start at the beginning of the piece. The final println statement simply keeps us informed of the program's progress as it runs by printing to the standard output.

        // make bass drum
        for(short i=0;i<4;i++){
            Note note = new Note(36, C);
            phrBD.addNote(note);
            Note rest = new Note(REST, C); //rest
            phrBD.addNote(rest);
        }

The first phrase to be created is the bass drum. It consists of four crotchet note - crotchet rest pairs. The General MIDI (GM) drum kit uses note number 36 for the bass drum. Each note and rest is added to the bass drum phrase (phrBD) by calling the 'addNote' method in the Phrase class.
        // make snare drum
        for(short i=0;i<4;i++){
            Note rest = new Note(REST, C); //rest
            phrSD.addNote(rest);
            Note note = new Note(38, C);
            phrSD.addNote(note);
        }

        // make hats
        for(short i=0;i<15;i++){
            Note note = new Note(42, Q);
            phrHH.addNote(note);
        }
        Note note = new Note(46, Q); // open hi hat
        phrHH.addNote(note);

A similar process creates the snare and hi hat parts. The snare plays on the opposite beats to the bass drum. The hi hat is normally closed (note 42 in GM) but an open hi hat is added to the end of the phrase just for some small aesthetic interest.

        // loop the drum pattern for 16 bars
        Mod.repeat(phrBD, 7);
        Mod.repeat(phrSD, 7);
        Mod.repeat(phrHH, 7);

Mod.repeat() is used to create loops of each phrase. In this case the same number of repeats is used for each phrase and each phrase is the same length. However, interesting effects could be created by 'phasing' patterns of different lengths to create continually varying rhythmic combinations.

        // add phrases to the instrument (part)
        drums.addPhrase(phrBD);
        drums.addPhrase(phrSD);
        drums.addPhrase(phrHH);

        // add the drum part to a score.
        pattern1.addPart(drums);
        Write.midi(pattern1, "Kit.mid");
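To get a feel for the phasing idea mentioned above: two looped phrases of different lengths drift against each other and only line up again after the least common multiple of their lengths. A quick arithmetic check in plain Java (a standalone sketch, not part of the jMusic API):

```java
// PhasingPeriod.java: how long until two looped phrases line up again?
public class PhasingPeriod {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    public static void main(String[] args) {
        // a 3-beat phrase looped against a 4-beat phrase:
        // the combined rhythm only repeats every 12 beats
        System.out.println(lcm(3, 4));
        // a 7-beat phrase against an 8-beat phrase takes 56 beats to realign
        System.out.println(lcm(7, 8));
    }
}
```

Choosing phrase lengths with a large least common multiple is what makes the combined pattern feel continually varying rather than repetitive.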
http://jmusic.ci.qut.edu.au/jmtutorial/kitDemo.html
Just as an RCW proxy wraps a COM object when you access it from C#, code that accesses a C# object as a COM object must do so through a proxy as well. When your C# object is marshaled out to COM, the runtime creates a COM Callable Wrapper (CCW). The CCW follows the same lifetime rules as other COM objects, and as long as it is alive, a CCW maintains a traceable reference to the object it wraps. This keeps the object alive when the garbage collector is run. The following example shows how you can export both a class and an interface from C# and control the Global Unique Identifiers (GUIDs) and Dispatch IDs (DISPIDs) assigned. After compiling IRunInfo and StackSnapshot, you can register both using RegAsm.exe.

// IRunInfo.cs
// Compile with:
// csc /t:library IRunInfo.cs
using System;
using System.Runtime.InteropServices;

[GuidAttribute("aa6b10a2-dc4f-4a24-ae5e-90362c2142c1")]
public interface IRunInfo
{
    [DispId(1)]
    string GetRunInfo( );
}

// StackSnapshot.cs
// compile with csc /t:library /r:IRunInfo.dll StackSnapShot.cs
using System;
using System.Runtime.InteropServices;
using System.Diagnostics;

[GuidAttribute("b72ccf55-88cc-4657-8577-72bd0ff767bc")]
public class StackSnapshot : IRunInfo
{
    public StackSnapshot( )
    {
        st = new StackTrace( );
    }

    [DispId(1)]
    public string GetRunInfo( )
    {
        return st.ToString( );
    }

    private StackTrace st;
}
http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+II+Programming+with+the+.NET+Framework/Chapter+18.+Integrating+with+COM+Components/18.3+Exposing+C+Objects+to+COM/
On 12 Jun 2001 21:07:16 GMT, Christian Schaller <chris at alaska.cert.siemens.de> wrote: > Dear Pythoners, > > I've seen already some threads about configuring vim that it is > comfortable editing Python programs, so let's get one step further! Use > the following script to be able moving up and down the same indentation > level of the line the cursor is on (just press F10 for down and > Shift-F10 for up). > > Have fun! > > bye > Chris... > > """ > fu! Block(indent, dir) > let line = getline(".") > let blankCount = matchend(line, a:indent) > let nr = line(".") > let lastPos = nr > let maxLine = line("$") > while nr <= maxLine && nr > 0 > let nr = nr + a:dir > let curr = getline(nr) > if curr == "" > continue > endif > if matchend(curr, a:indent) == blankCount > break > elseif matchend(curr, a:indent) < blankCount > let nr = lastPos > break > endif > endw > exe nr > endfu > > map <F10> :call Block("^[ ]*", 1)<c-m> > map <S-F10> :call Block("^[ ]*", -1)<c-m> > """ Thanks for the post - but it's not working fully here. F10 works fine but S-F10 seems to capitalize current character and jump few characters to the right. I'm using 6.0z version. I tried changing it to F11 but it does some other odd things.. Ok, now I tried F10 for down, F11 for up and it works perfectly well. I guess this is something to do with 6.0 beta status. -- True sailing is dead - Jim
https://mail.python.org/pipermail/python-list/2001-June/088601.html
On Mon, Mar 28, 2005 at 07:23:21PM +0200, Joerg Sonnenberger wrote:
> On Mon, Mar 28, 2005 at 09:15:28AM -0800, Matthew Dillon wrote:
> > :I.
> There is _no_ reason for the thread library to extend the TCB.
> Anything which a thread library might want to store there can also be
> stored in the pthread structure, which is completely managed by the
> library. There are still differences between architectures in what
> to place in the TCB, because the "self" pointer is only needed for
> segment-style implementations; if the TCB register contains a normal
> pointer (like e.g. on IA64 or any RISC architecture), it is not
> needed.

I should add that I mean the following tcb layout:

struct tls_tcb {
#ifdef INDIRECT_THREAD_REGISTER
	struct tls_tcb *tcb_base;
#endif
	void *dtv_base;
	void *tcb_pthread;
};

Joerg
http://leaf.dragonflybsd.org/mailarchive/commits/2005-03/msg00631.html
30 September 2007 10:20 [Source: ICIS news]

BERLIN (ICIS news)--BP Acetyls, the global acetic acid producer, plans to debottleneck its acetic acid unit in Hull, UK, by 30,000 tonnes/year during a turnaround in 2010, a company official said on Sunday.

"We will size the units for growth and increase efficiency by lowering costs," said David Smith, general manager.

The company would also expand its acetic anhydride unit by 25,000 tonnes/year and upgrade reactors and the carbon monoxide reformer in Hull.

"It will be a significant expenditure," Smith said, but he declined to give the exact figure as project costing was still ongoing.

BP Acetyls operates a 520,000 tonne/year acetic acid unit and a 150,000 tonne/year acetic anhydride unit.
http://www.icis.com/Articles/2007/09/30/9066346/epca-07-bp-to-debottleneck-hull-acetic-acid-unit.html
PathStripToRoot function

Removes all file and directory elements in a path except for the root information.

Note Misuse of this function can lead to a buffer overrun. We recommend the use of the safer PathCchStripToRoot function in its place.

Syntax

BOOL PathStripToRoot(
  _Inout_ LPTSTR szRoot
);

Parameters

- szRoot [in, out]
Type: LPTSTR
A pointer to a null-terminated string of length MAX_PATH that contains the path to be converted. When this function returns successfully, this string contains only the root information taken from that path.

Return value

Type: BOOL
Returns TRUE if a valid drive letter was found in the path, or FALSE otherwise.

Examples

#include <windows.h>
#include <iostream.h>
#include "Shlwapi.h"

void main( void )
{
    // Path to convert.
    char buffer_1[ ] = "C:\\path1\\path2";
    char *lpStr1;
    lpStr1 = buffer_1;

    // Print the path before the root is stripped.
    cout << "The contents of the path before is : " << lpStr1 << endl;

    // Print the return value from the function.
    cout << "The return from \"PathStripToRoot\" is : " << PathStripToRoot(lpStr1) << endl;

    // Print the path after the root is stripped.
    cout << "The contents of the path after is : " << lpStr1 << endl;
}

OUTPUT:
==================
The contents of the path before is : C:\path1\path2
The return from "PathStripToRoot" is : 1
The contents of the path after is : C:\

Requirements
https://msdn.microsoft.com/en-us/library/bb773757(v=vs.85).aspx
Analyzing chemical compounds in Haskell

I tried to write a parser for chemical compounds as an exercise for myself, but I am stuck. Here is the datatype I'm trying to use:

data Compound = Monoatomic String Int | Poliatomic [Compound] Int

For a line like "Ca(OH)2" I want to get something like:

Poliatomic [Monoatomic "Ca" 1, Poliatomic [Monoatomic "O" 1, Monoatomic "H" 1] 2 ] 1

The Monoatomic type constructor is for single atoms and the Poliatomic type constructor is for groups of atoms. In this example, (OH)2 represents an inner polyatomic structure and is represented as Poliatomic [(Monoatomic "O" 1), (Monoatomic "H" 1)] 2. The number 2 means we have two of these polyatomic structures.

Here is how far I got:

import Data.Char (isUpper)

data Compound = Monoatomic String Int | Poliatomic [Compound] Int

instance Functor Compound where
    fmap f (Monoatomic s i) = Monoatomic (f s) i
    fmap f (Poliatomic xs i) = Poliatomic (fmap f xs) i

-- Change number of a compound
changeNumber :: Compound -> Int -> Compound
changeNumber (Monoatomic xs _) n = Monoatomic xs n
changeNumber (Poliatomic xs _) n = Poliatomic xs n

-- Take a partial compound and the next character, return a partial compound
parseCompound :: Compound -> Char -> Compound
parseCompound (Poliatomic (x:xs) n) c
    | isUpper c = Poliatomic ((Monoatomic [c] 1):x:xs) n -- add new atom to compound
    | isLower c = Poliatomic

I want to do foldl parseCompound (Poliatomic [] 1) inputstring, but then it became too difficult for me to continue. It looks like it should be a pretty simple problem, but I'm very new to Haskell and can't figure out how to accomplish this. I have the following questions:

- Is my approach sound so far?
- How can I make this work?
I have provided some comments on those parts where there is something special to look for. import Text.Read (readMaybe) import Data.Maybe (fromMaybe) import Text.Parsec (parse, many, many1, digit, char, string, (<|>), choice, try) import Text.Parsec.String (Parser) data Compound = Monoatomic String Int | Poliatomic [Compound] Int deriving Show -- Run the substance parser on "Ca(OH)2" and print the result which is -- Right (Poliatomic [Monoatomic "Ca" 1,Poliatomic [Monoatomic "O" 1,Monoatomic "H" 1] 2] 1) main = print (parse substance "" "Ca(OH)2") -- parse the many parts which make out the top-level polyatomic compound -- -- "many1" means "at least one" substance :: Parser Compound substance = do topLevel <- many1 part return (Poliatomic topLevel 1) -- a single part in a substance is either a poliatomic compound or a monoatomic compound part :: Parser Compound part = poliatomic <|> monoatomic -- a poliatomic compound starts with a '(', then has many parts inside, then -- ends with ')' and has a number after it which indicates how many of it there -- are. poliatomic :: Parser Compound poliatomic = do char '(' inner <- many1 part char ')' amount <- many1 digit return (Poliatomic inner (read amount)) -- a monoatomic compound is one of the many element names, followed by an -- optional digit. if omitted, the amount defaults to 1. -- -- "try" is a little special, and required in this case. it means "if a parser -- fails, try the next one from where you started, not from where the last one -- failed." -- -- "choice" means "try all parsers in this list, stop when one matches" -- -- "many" means "zero or more" monoatomic :: Parser Compound monoatomic = do name <- choice [try nameParser | nameParser <- atomstrings] amount <- many digit return (Monoatomic name (fromMaybe 1 (readMaybe amount))) -- a list of parser for atom names. it is IMPORTANT that the longest names -- come first. 
-- the reason for that is that it makes the parser much simpler to
-- write, and it can execute much faster. it is common when designing parsers to
-- consider things like that when creating them.
atomstrings :: [Parser String]
atomstrings = map string (words "He Li Be Ne Na Mg Al Ca H B C N O F")

I've tried to write this code in a way that should be at least reasonably accessible to newbies, but it's probably not crystal clear, so I'm happy to answer any questions about it. The parser above is the one you wanted. However, it is not the one I would write if I had free rein. If I did, I would use the fact that Ca(OH)2 can be represented as (Ca)1((O)1(H)1)2, which is a much more homogeneous representation and in turn results in a simpler data structure and parser with less boilerplate. The code I would rather write would look like

import Text.Read (readMaybe)
import Data.Maybe (fromMaybe)
import Control.Applicative ((<$>), (<*>), pure)
import Text.Parsec (parse, many, many1, digit, char, string, (<|>), choice, try, between)
import Text.Parsec.String (Parser)

data Substance = Part [Substance] Int | Atom String deriving Show

main = print (parse substance "" "Ca(OH)2")
-- Right (Part [Part [Atom "Ca"] 1,Part [Part [Atom "O"] 1,Part [Atom "H"] 1] 2] 1)

substance :: Parser Substance
substance = Part <$> many1 part <*> pure 1

part :: Parser Substance
part = do
  inner <- polyatomic <|> monoatomic
  amount <- fromMaybe 1 . readMaybe <$> many digit
  return (Part inner amount)

polyatomic :: Parser [Substance]
polyatomic = between (char '(') (char ')') (many1 part)

monoatomic :: Parser [Substance]
monoatomic = (:[]) . Atom <$> choice (map (try . string) atomstrings)

atomstrings :: [String]
atomstrings = words "He Li Be Ne Na Mg Al Ca H B C N O F"
This parser is about half a page, as you can see; libraries like Parsec make parsers easy and fun to write!
https://daily-blog.netlify.app/questions/2216412/index.html
Background subtraction and object counting - slow performance

Hi, I'm trying to write code to count objects that will be moving from left to right in the FoV, so I think I don't need to track them; I'm just going to check when the center point of an object gets into an ROI (for example the middle of the image). I've used background subtraction to easily highlight objects to count; then with findContours and the center of the bounding rect I get the point that will be used to count every object. The code seems to work "quite well" if I'm working with very low resolution videos: in this first example ( Video 1 ) I'm counting some pens, but I need to define a range of 8 pixels where the rect center could be located to get it counted, or the pen will be missed. Also the 6th pen is not being counted. When I run the code the video plays slower than its normal speed. Is it my PC working badly or is the code badly written? When trying to work with a higher resolution video (like this one: Video 2 ) the code isn't giving any result at all; it cannot identify almost any rect. Any suggestion? What am I doing wrong?
Thanks for helping! Here's the source code:

#include <stdio.h>
#include <stdlib.h>
#include <opencv2\opencv.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>

using namespace std;
using namespace cv;

// Global variables
Mat img, frame, fgMaskMOG;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Ptr<BackgroundSubtractor> pMOG; //MOG Background subtractor

static Rect pointSetBoundingRect(const Mat& points, Mat m)
{
    int npoints = points.checkVector(2);
    int xmin = 0, ymin = 0, xmax = -1, ymax = -1, i;
    Point ptxmin, ptymin, ptxmax, ptymax;
    if (npoints == 0)
        return Rect();
    const Point* pts = points.ptr<Point>();
    Point pt = pts[0];
    ptxmin = ptymin = ptxmax = ptymax = pt;
    xmin = xmax = pt.x;
    ymin = ymax = pt.y;
    for (i = 1; i < npoints; i++)
    {
        pt = pts[i];
        if (xmin > pt.x) { xmin = pt.x; ptxmin = pt; }
        if (xmax < pt.x) { xmax = pt.x; ptxmax = pt; }
        if (ymin > pt.y) { ymin = pt.y; ptymin = pt; }
        if (ymax < pt.y) { ymax = pt.y; ptymax = pt; }
    }
    return Rect(xmin, ymin, xmax - xmin + 1, ymax - ymin + 1);
}

int main()
{
    pMOG = new BackgroundSubtractorMOG();
    int Count = 0;
    VideoCapture cap("../4.avi");
    while (true)
    {
        cap >> frame;
        pMOG->operator()(frame, fgMaskMOG);
        findContours(fgMaskMOG, contours, hierarchy, CV_RETR_CCOMP, CHAIN_APPROX_SIMPLE);
        Mat dst = Mat::zeros(frame.rows, frame.cols, CV_8UC3);
        for (size_t i = 0; i < contours.size(); i++)
        {
            Rect minRect = pointSetBoundingRect(Mat(contours[i]), dst);
            if (minRect.area() > 2000)
            {
                rectangle(dst, minRect, Scalar(0, 0, 255), 3, 8);
                Point center = Point((minRect.x + (minRect.width) / 2), (minRect.y + (minRect.height) / 2));
                circle(dst, center, 2, Scalar(255, 0, 0), 2, 8, 0);
                if (center.x > 300 & center.x < 308)
                {
                    Count++;
                }
            }
            char text[30];
            sprintf_s(text, "Bars: %d", Count);
            IplImage iplimg = dst;
            CvFont font;
            double hScale = 1.5;
            double vScale ...
Try to time the different parts of your code to know which one slows down your program. I think that your function pointSetBoundingRect could be replaced by cv::boundingRect. You can also directly use cv::Mat, the C++ functions for the different drawings, and display.

@Eduardo: tested both boundingRect and this code; this one seems to work a little faster. What is slowing down the code is the background subtractor: I've tried a different strategy to highlight moving objects, working on consecutive-frame subtraction, but this is not giving good results, since I'm losing the overlapping area of the objects, plus in the difference image coming from the absdiff function every object is cloned. I'm back to the MOG2 background subtractor, but this is getting harder than expected..
https://answers.opencv.org/question/70588/background-subtraction-and-object-counting-slow-performance/
You can discuss this topic with others at
Read reviews and buy a Java Certification book at

In an expression involving the operators & | && || and variables of known values, state which operands are evaluated and the value of the expression.

It is easy to forget which of the symbols mean logical operations and which mean bitwise operations, so make sure you can tell the difference for the exam. If you are new to these operators it might be worth trying to come up with some sort of memory jogger so you do not get confused between the bitwise and the logical operators. You might like to remember the expression "Double Logic" as a memory jogger.

The logical operators (&& ||) have a slightly peculiar effect in that they perform "short-circuited" logical AND and logical OR operations, as in C/C++. This may come as a surprise if you are from a Visual Basic background, as Visual Basic will evaluate all of the operands. The Java approach makes sense if you consider that for an AND, if the first operand is false it doesn't matter what the second operand evaluates to; the overall result will be false. Also, for a logical OR, if the first operand has turned out true, the overall calculation will show up as true, because only one evaluation must return true to return an overall true. This can have an effect with those clever compressed calculations that depend on side effects. Take the following example.

public class MyClass1{
    public static void main(String argv[]){
        int Output=10;
        boolean b1 = false;
        if((b1==true) && ((Output+=10)==20)) {
            System.out.println("We are equal "+Output);
        }else {
            System.out.println("Not equal! "+Output);
        }
    }
}

Because b1 is false, the right-hand operand (Output+=10) is never evaluated, so this program prints "Not equal! 10". This may be handy sometimes when you really don't want to process the other operations if any of them return false, but it can be an unexpected side effect if you are not completely familiar with it.

The & and | operators, when applied to integral types, perform bitwise AND and OR operations.
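The difference between && and the non-short-circuiting & can be made visible by counting how often the right-hand operand actually runs. A small standalone sketch (class and method names are my own, for illustration):

```java
public class ShortCircuitDemo {
    static int calls = 0;

    // a right-hand operand with a visible side effect
    static boolean touch() {
        calls++;
        return true;
    }

    public static void main(String[] args) {
        calls = 0;
        boolean logical = false && touch();   // right side skipped entirely
        int afterLogical = calls;

        calls = 0;
        boolean bitwise = false & touch();    // right side still evaluated
        int afterBitwise = calls;

        // prints "0 1": && skipped touch(), & did not
        System.out.println(afterLogical + " " + afterBitwise);
    }
}
```

When applied to boolean operands, & and | give the same truth-table results as && and ||; the only difference is whether the right-hand side is evaluated.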
You can expect to come across questions in the exam that give numbers in decimal and ask you to perform bitwise AND or OR operations. To do this you need to be familiar with converting from decimal to binary, and to learn what happens with the bit patterns. Here is a typical example.

What is the result of the following operation?

3 | 4

The binary bit pattern for 3 is 11. The binary bit pattern for 4 is 100. When performing a binary OR, each bit is compared with the bit in the same position in the other number. If either bit contains a 1, the bit in the resulting number is set to 1. Thus for this operation the result will be binary 111, which is decimal 7.

The objectives do not specifically ask for knowledge of the bitwise XOR operation, performed with ^.

If you do not feel comfortable thinking in binary (I am much more comfortable in decimal), you may want to do some exercises to help master this topic and also the bitwise shift operators topic. If you are running Windows you may find it helpful to use the Windows calculator in scientific mode. To do this, choose View and switch from the default standard mode to scientific mode. In scientific mode you can switch between viewing numbers as decimal and binary; this displays the bit pattern of numbers. Here is another handy trick I wish I had known before I wrote my BitShift applet (see the applets menu from the front of this site): how to use the Integer class to display bit patterns. Here is a little program to demonstrate this.

public class BinDec{
    public static void main(String argv[]){
        System.out.println(Integer.parseInt("11",2));
        System.out.println(Integer.toString(64,2));
    }
}

If you compile and run this program the output will be

3
1000000

Note how the program converts the bit pattern 11 into the decimal equivalent of the number 3, and the decimal number 64 into its equivalent bit pattern. The second parameter to each method is the "radix" or counting base.
Thus in this case it is dealing with numbers to the base 2, whereas we normally deal with numbers to the base 10.

What will happen when you attempt to compile and run the following code?

int Output=10;
boolean b1 = false;
if((b1==true) && ((Output+=10)==20)){
    System.out.println("We are equal "+Output);
}else {
    System.out.println("Not equal! "+Output);
}

1) Compile error, attempting to perform binary comparison on logical data type
2) Compilation and output of "We are equal 10"
3) Compilation and output of "Not equal! 20"
4) Compilation and output of "Not equal! 10"

What will be output by the following line of code?

System.out.println(010|4);

1) 14
2) 0
3) 6
4) 12

Which of the following will compile without error?

1) int i=10; int j = 4; System.out.println(i||j);
2) int i=10; int j = 4; System.out.println(i|j);
3) boolean b1=true; boolean b2=true; System.out.println(b1|b2);
4) boolean b1=true; boolean b2=true; System.out.println(b1||b2);

Answers

Answer 1
4) Compilation and output of "Not equal! 10"
Because b1 is false, the && operator short-circuits: the right-hand operand (Output+=10) is never evaluated, so Output keeps its value of 10.

Answer 2
4) 12
As well as the binary OR objective, this question requires knowledge of octal notation: the leading zero means the number is octal, so the 1 that follows indicates the number contains one eight and nothing else. Thus this calculation in decimal means 8|4. To convert this to binary:

1000
0100
----
1100
----

The | bitwise operator means that each position where either operand has a 1 results in a 1 in the same position in the answer. Binary 1100 is decimal 12.

Answer 3
2, 3, 4
Option 1 will not compile because it is an attempt to perform a logical OR operation on integral types. A logical OR can only be performed with boolean arguments.

Last updated 10 Jan 2000
http://www.jchq.net/tutorial/05_03Tut.htm
Hi Everyone, First let me say that the last time I wrote a small VC++ utility program was back in 1995. Here is my situation: I have a Win32 app that uses HINSTANCE, and the main.cpp looks like:

-----------------------
#include "stdafx.h"
#include "testlib.c"

int APIENTRY WinMain(HINSTANCE hInstance,
                     HINSTANCE hPrevInstance,
                     LPSTR     lpCmdLine,
                     int       nCmdShow)
{
    LoadTestLibrary();
    // etc....
}
----------------

the testlib.c looks like this:

-----------------
HINSTANCE TESTLIB=NULL;

long LoadTestLibrary()
{
    TESTLIB=LoadLibrary("TESTLIB.DLL");
    if(TESTLIB == NULL)
    {
        MessageBox(NULL, "Unable to load TESTLIB.DLL", "ERROR", MB_OK);
        return(-1);
    }
    else
    {
        // do some stuff...
    }
    return(0);
}
-----------------

What I want is to rewrite the code as a Win32 console app; all output will be echoed instead of shown in a message box. The reason I want a console app is that I need to compile it with Mono and use it on a Linux server. TESTLIB.DLL is an external library that is supposed to be compiled in .NET. Any help will be appreciated; if someone is interested in doing the work for me, please PM me. I can give you the source code and you can give me an estimated price. Thanks

I might have forgotten something here but... it should be just a matter of changing WinMain to a standard main and changing the project settings to a console app. And change calls to 'MessageBox' to instead write to the output stream ("cout").

Viggy

Originally Posted by peterman
Mono allows you to run .NET applications on Linux, which doesn't have anything to do with running a native Win32 app. What are you trying to do exactly?
My Code Guru Articles

I have a webserver running on Linux (running PHP). I need a handwriting recognition library to decode part of an image that is stamped with some handwritten characters, so I looked all over the internet and only found 2 third-party libraries, but they are only available as DLLs, so I need to find a way to use these DLLs on Linux.
If it cannot be done, then I would need a Win32 web app that will run as a Windows service and listen on port XXX. I will be sending an HTTP POST (a JSON string that will contain the image in BMP format encoded in Base64, with a command like "RecognizeLine", "RecognizeWord", etc.) and then the web app will respond, also in JSON format, with the results. I would prefer a console app that I will call via PHP from my webserver (send a JSON stream and get back a JSON response). I'd rather not use Wine.
http://forums.codeguru.com/showthread.php?531381-Rewriting-a-Win32-app-using-HINSTANCE-to-a-Win32-console-application&p=2097249
I have an array of objects, where each object looks like this:

object = {
    IsActive: true, // type: boolean
    Code: '12345'   // type: string
}

I want to sort the array first by the boolean IsActive field and then by the string Code field, using underscoreJs. For example:

data = [
  { Code: "Code0", Description: "Description0", IsActive: true, id: 0 },
  { Code: "Code1", Description: "Description1_edit", IsActive: true, id: 1 },
  { Code: "Code5", Description: "Description5_edit", IsActive: false, id: 2 }]

I'd just use Array#sort with a simple comparator function:

function cmp_bool(a, b) {
    return a == b ? 0 : a ? -1 : +1
}
function cmp_str(a, b) {
    return a == b ? 0 : a < b ? -1 : +1
}
function cmp(a, b) {
    return cmp_bool(a.IsActive, b.IsActive) || cmp_str(a.Code, b.Code);
}

Then you could simply do this:

var sorted = data.sort(cmp);

If you need to be able to switch the Code sort order then you just need a reversed version of cmp_str (say cmp_str_reverse) and a version of cmp that uses cmp_str_reverse instead of cmp_str.

If you must use _.sortBy then you just need to come up with a value to sort by, something like this:

function combined(obj) {
    return (obj.IsActive ? 'a' : 'b') + obj.Code;
}
var sorted = _(data).sortBy(combined);

The problem with this is that it is much harder to reverse the Code ordering. I suppose you could do a big mess of bit twiddling on the string's characters, but that would just leave you wondering what you were doing when you look at the code in six months. _.sortBy is a notational convenience; you don't have to force everything to fit whatever conveniences you have on hand.
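Wiring the comparator from the answer up to the sample data (trimmed to the two fields that matter) behaves like this:

```javascript
// comparators copied from the answer above
function cmp_bool(a, b) { return a == b ? 0 : a ? -1 : +1; }
function cmp_str(a, b)  { return a == b ? 0 : a < b ? -1 : +1; }
function cmp(a, b) {
  return cmp_bool(a.IsActive, b.IsActive) || cmp_str(a.Code, b.Code);
}

var data = [
  { Code: "Code5", IsActive: false },
  { Code: "Code1", IsActive: true  },
  { Code: "Code0", IsActive: true  }
];

// active entries first, each group ordered by Code ascending
var sorted = data.slice().sort(cmp);
console.log(sorted.map(function (o) { return o.Code; }));
// [ 'Code0', 'Code1', 'Code5' ]
```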
https://codedump.io/share/BWk6L2nTP5mb/1/multi-sorting-in-underscore
User-Agent: Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.9.1.11) Gecko/20100901 Firefox/3.5.11 Build Identifier: Builds of ffx4 betas fail since b4 (b5,b6,same issue) when linking xpcshell with -lxul, error message follows. /usr/obj/mozilla-firefox-4.0b6/mozilla-central/dist/bin/libxul.so.22.0: undefined reference to `nsDOMWorkerSecurityManager::JSCheckAccess(JSContext*, JSObject*, long, JSAccessMode, unsigned long long*)' /usr/obj/mozilla-firefox-4.0b6/mozilla-central/dist/bin/libxul.so.22.0: undefined reference to `mozilla::storage::convertJSValToVariant(JSContext*, unsigned long)' /usr/obj/mozilla-firefox-4.0b6/mozilla-central/dist/bin/libxul.so.22.0: undefined reference to `nsDOMWorkerMessageEvent::SetJSVal(JSContext*, unsigned long long)' /usr/obj/mozilla-firefox-4.0b6/mozilla-central/dist/bin/libxul.so.22.0: undefined reference to `nsContentUtils::WrapNative(JSContext*, JSObject*, nsISupports*, nsWrapperCache*, nsID const*, unsigned long*, nsIXPConnectJSObjectHolder**, int)' /usr/obj/mozilla-firefox-4.0b6/mozilla-central/dist/bin/libxul.so.22.0: undefined reference to `JSValToNPVariant(_NPP*, JSContext*, unsigned long,_NPVariant*)' /usr/obj/mozilla-firefox-4.0b6/mozilla-central/dist/bin/libxul.so.22.0: undefined reference to `nsDOMWorkerTimeout::Init(JSContext*, unsigned int, unsigned long long*, int)' /usr/obj/mozilla-firefox-4.0b6/mozilla-central/dist/bin/libxul.so.22.0: undefined reference to `nsContentUtils::CreateStructuredClone(JSContext*, unsigned long, unsigned long*)' collect2: ld returned 1 exit status All those failures are due to conflicting types between JSUint64 and PRUint64. They all involve a method having a jsval type (or jsval*) as one of the arguments. 
Take for example: undefined reference to `nsDOMWorkerMessageEvent::SetJSVal(JSContext*, unsigned long long)' defined in: ./dom/src/threads/nsDOMWorkerEvents.cpp:nsDOMWorkerMessageEvent::SetJSVal(JSContext* aCx, jsval aData) The compiled object for this file uses a wrong type for jsval: $nm -g dom/src/threads/nsDOMWorkerEvents.o | grep SetJSVal | c++filt 00001d40 T nsDOMWorkerMessageEvent::SetJSVal(JSContext*, unsigned long) Suddenly, jsval is an unsigned long, not an unsigned long long (as intended ?). Failure is probably hidden on other platforms because of a different typedef for __uint64_t ? suggested fixes : -redefining jsval typedef to something fixed ? -shuffling includes order in the failing files so that the correct type is used ? -make JSUint64 and PRUint64 the same typedef ? Reproducible: Always Steps to Reproduce: 1.try to build firefox 4 beta 2. 3. Actual Results: failure :( Expected Results: success \o/ Doesn't seem like a nanojit issue. But does seem like a jsapi problem.... Why are we typedefing jsval to something that we don't necessarily control the definition of (uint64, to be exact)? I started with uint64_t et al, but was told to drop the _t because it was the same thing but shorter. Well, clearly whoever told you that was wrong in the presence of NSPR stuff. :( Created attachment 478732 [details] [diff] [review] typedef jsval using uint64_t instead of uint64 The attached patch fixes the build error. No idea if it's the best fix though, or if JSuint64 should be used instead. Brendan: is this right? (In reply to comment #5) > Brendan: is this right? That can't matter because if uint64 is not uint64_t there's trouble. How are the two typedefs ever different? The JSUint64 etc. NSPR names are old and we should get rid of them. They resulted from incomplete forking of NSPR in 1998. /be > That can't matter because if uint64 is not uint64_t there's trouble. NSPR never defines or really uses uint64_t. It does typedef uint64 to PRUint64, except on beos. 
And PRUint64 is defined as follows (PRInt64 stuff elided): 342 /* 343 * On 64-bit Mac OS X, uint64 needs to be defined as unsigned long long to 344 * match uint64_t, otherwise our uint64 typedef conflicts with the uint64 345 * typedef in cssmconfig.h, which CoreServices.h includes indirectly. 346 */ 347 #if PR_BYTES_PER_LONG == 8 && !defined(__APPLE__) 349 typedef unsigned long PRUint64; 350 #elif defined(WIN32) && !defined(__GNUC__) 352 typedef unsigned __int64 PRUint64; 353 #else 355 typedef unsigned long long PRUint64; 356 #endif /* PR_BYTES_PER_LONG == 8 */ Sounds like we should perhaps change that first condition? Or just always use uint64_t if available? Or, again, fix this on the js side.... while possibly spinning off a separate bug on NSPR. I thought with jsinttypes.h we finally had consistency and a single source of truth, which came from C99 if supported, else from our own workalike. Cc'ing Jim. /be (In reply to comment #8) > Or, again, fix this on the js side.... while possibly spinning off a separate > bug on NSPR. Yes, this should be fixed on the JS side. I didn't realize this was an NSPR vs. JS issue. We long ago forked NSPR stuph including the PRUint32 ugly names as JSUint32 etc., in order to stand alone. If we aren't able to stand alone any longer, but we want stdint/inttypes.h C99 goodness to be the source of truth, then it's high time to stop relying on the NSPR int typedefs. We could still have problems in interactions between NSPR or XPCOM code based on it and JS, though -- which must be what is going on here, duh. But two bugs, one on JS we can act on quickly, another on NSPR, seems right. /be Can i get a followup on that issue ? I stumbled upon it again when porting thunderbird 3.3a2, and my 'fix' is still needed for ffx4b11. (In reply to comment #0) >; This is really the key here. Our header files shouldn't be leaning on NSPR when <stdint.h>, and thus uint64_t, is available. 
(In reply to comment #6) > (In reply to comment #5) > > Brendan: is this right? > > That can't matter because if uint64 is not uint64_t there's trouble. > > How are the two typedefs ever different? The underlying absurdity is that, on x86_64, 'long' and 'long long' are treated differently when they appear in mangled C++ symbol names ('l' vs. 'x') --- even though they are both 64 bits long. So compile-time selection of the underlying type can get you definitions that are correct for execution but not for linking. :( The fix is to figure out why the non-<stdint.h> uint64 definition is being used for jsval, and change that. If <stdint.h> is present, we should always be using it for jsval and the other jsapi types. Comment on attachment 478732 [details] [diff] [review] typedef jsval using uint64_t instead of uint64 In the meantime, can this get commited ? Or you really want to find a better 'global way' to pull types from stdint.h when available ? Comment on attachment 478732 [details] [diff] [review] typedef jsval using uint64_t instead of uint64 (In reply to comment #13) > The fix is to figure out why the non-<stdint.h> uint64 definition is being used > for jsval, and change that. If <stdint.h> is present, we should always be using > it for jsval and the other jsapi types. jsval uses uint64/uint32/etc because these types are used exclusively in the jsapi.h; there are no uses of the _t stdint types.h. I have been operating under the impression (based on a Waldo v. Brendan discussion somewhere on this same issue) that the SM style is to use non-_t types in preference to the _t types. grep uint64_t -c shows that this style is mostly followed, except in jsclone. I would be much happier if we could just kill the non-standard non-_t types altogether and *only* use the _t types. Is there any reason we can't do that? If not, then it seems like the fix is to make uint64 not conflict with NSPR, if at all possible. 
Either way, I don't think this patch is the one that should land. We have the shorter names, they're in the API. Brevity + API-mixing inside the engine trump standards. Better to have fewer typename aliases than more. BTW, we should get rid of jsint and jsuint inside the engine and in the API (since they are POD and equivalent to int32 and uint32, IINM). /be (In reply to comment #15) > jsval uses uint64/uint32/etc because these types are used exclusively in the > jsapi.h; there are no uses of the _t stdint types.h. That's not what I meant --- certainly SpiderMonkey should be using its traditional uint64, uint32, etcetera. I have no beef with comment 16; we had that discussion, and the decision was made. The issue here, I thought, is that the *definition* of uint64 comes from different places depending on who's including jsapi.h, so that jsval within SpiderMonkey cites uint64 with one definition, and jsval outside SpiderMonkey cites uint64 with a different definition. This would be fine, since the two types are both 64-bit unsigned types; the code would run perfectly. But C++ distinguishes in mangled names between integer types that have the same signedness and bit length. (In reply to comment #17) > That's not what I meant Oops, sorry, I see that now. Fwiw, that patch is still needed to build ffx 5.0b3 (and mozilla-aurora, and mozilla-central, etc...) on OpenBSD/amd64 and probably others. Can something be done to fix that properly ? So, this is the definition that seems to be causing us misery: typedef PRUint64 uint64; I believe this appears at nsprpub/pr/include/obsolete/protypes.h:100; Landry, can you verify that that is indeed the definition of uint64 that's taking effect on the affected platforms? The comment atop that file says: /* * This header typedefs the old 'native' types to the new PR<type>s. * These definitions are scheduled to be eliminated at the earliest * possible time. The NSPR API is implemented and documented using * the new definitions. 
*/ And SM is carefully stepping aside if that header is included, in js/src/jsotypes.h: /* * Note that we test for PROTYPES_H, not JSOTYPES_H. This is to avoid * double-definitions of scalar types such as uint32, if NSPR's * protypes.h is also included. */ #ifndef PROTYPES_H #define PROTYPES_H So... how is this "obsolete" header file making its way into the affected compilation units? Can we delete its #inclusion, and perhaps use the PR types directly instead? Okay, so what's going on here is that there's a #inclusion race. Both js/src/jsotypes.h and nsprpub/pr/include/obsolete/protypes.h say: #ifndef PROTYPES_H #define PROTYPES_H (which is charming, but leave that aside) which means that whichever one gets #included first gets its definition to win. The universal SpiderMonkey header, jsapi.h, #includes jspubtd.h -> jstypes.h -> jsotypes.h, so jsotypes.h is part of SpiderMonkey's public API. protypes.h is #included from prtypes.h, which is #included everywhere. Ideally we'd #defined NO_NSPR_10_SUPPORT and stop using headers from directories marked "obsolete", but that may be more work and churn than we want to deal with now, and the benefit is unclear. But it suggests a quick fix: just make sure that, in the broken files, jsapi.h gets #included before prtypes.h. Landry, do you want to pursue this? I don't have a system on which this is broken handy to test against. Quoting myself from comment. If i get it right, you want me to include jsapi.h on top of each of those files ? Will do. But wouldn't it be "cleaner" to just stop using obsolete headers, as you said ? (In reply to comment #23) > But wouldn't it be "cleaner" to just stop using obsolete headers, as you > said ? I'm willing to bet that #defining NO_NSPR_10_SUPPORT by default, across the tree, would be a major project. I don't know how much code there is out there that uses bits and pieces of the NSPR 1.0 interface. 
The key point being that a modern, essential NSPR header file, prtypes.h, conditionally includes the obsolete header protypes.h, with the effect that the entire source tree expects that protypes.h is in force. Removing something so widely used is, in my experience, a major project. On the other hand... protypes.h doesn't have any effect if PROTYPES_H is #defined. And jsapi.h has been #defining PROTYPES_H itself. And the limited extent of the problem you ran into --- only a few files --- suggests that almost everywhere in the code base is seeing jsapi.h before prtypes.h. This is plausible. We could test this hypothesis by putting a blatant error in protypes.h and seeing how much things still build. If true, then removing the #inclusion of protypes.h from prtypes.h might not really have much effect at all. We'd have less obsolete code participating in the build. That would be the best outcome, and not much work. Created attachment 537748 [details] [diff] [review] Ensure jsapi.h is included first Here's a patch trying to include jsapi.h first (well, before prtypes.h) With the latter patch (which might be wrong, to me it'd make more sense to #include "jsapi.h" at the top of each corresponding headers for all offended cpp file... otherwise another include in the header could pull prtypes.h first. Ugh.), i'm still getting the corresponding failure when linking TestINIParser with libxul (i suppose it's the first binary that gets linked against it..) 
: ../../dist/bin/libxul.so.1.0: undefined reference to `nsDOMWorkerTimeout::Init(JSContext*, unsigned int, unsigned long*, int)' ../../dist/bin/libxul.so.1.0: undefined reference to `nsXPConnect::Base64Encode(JSContext*, unsigned long, unsigned long*)' ../../dist/bin/libxul.so.1.0: undefined reference to `non-virtual thunk to nsDOMFile::Initialize(nsISupports*, JSContext*, JSObject*, unsig ned int, unsigned long long*)' ../../dist/bin/libxul.so.1.0: undefined reference to `mozilla::storage::convertJSValToVariant(JSContext*, unsigned long)' ../../dist/bin/libxul.so.1.0: undefined reference to `nsXPConnect::Base64Decode(JSContext*, unsigned long, unsigned long*)' ../../dist/bin/libxul.so.1.0: undefined reference to `nsContentUtils::WrapNative(JSContext*, JSObject*, nsISupports*, nsWrapperCache*, nsID const*, unsigned long*, nsIXPConnectJSObjectHolder**, int)' ../../dist/bin/libxul.so.1.0: undefined reference to `JSValToNPVariant(_NPP*, JSContext*, unsigned long, _NPVariant*)' ../../dist/bin/libxul.so.1.0: undefined reference to `nsDOMFile::Initialize(nsISupports*, JSContext*, JSObject*, unsigned int, unsigned long long*)' To be honest, i don't think shuffling include order is the way to go. Should we open a bug against nspr asking for #include "protypes.h" to be removed from prtypes.h ? This starts to be beyond my understanding of the codebase... 
Oh, and fwiw commenting out the inclusion of obsolete/protypes.h in prtypes.h breaks early in the build, as xpidl_typelib.c relies on uint16 being defined : /home/landry/src/mozilla-central/xpcom/typelib/xpidl/xpidl_typelib.c:50: error: expected specifier-qualifier-list before 'uint16' /home/landry/src/mozilla-central/xpcom/typelib/xpidl/xpidl_typelib.c: In function 'add_interface_maybe': /home/landry/src/mozilla-central/xpcom/typelib/xpidl/xpidl_typelib.c:144: error: 'struct priv_data' has no member named 'interface_map' /home/landry/src/mozilla-central/xpcom/typelib/xpidl/xpidl_typelib.c:148: error: 'struct priv_data' has no member named 'interface_map' /home/landry/src/mozilla-central/xpcom/typelib/xpidl/xpidl_typelib.c:162: error: 'struct priv_data' has no member named 'interface_map' /home/landry/src/mozilla-central/xpcom/typelib/xpidl/xpidl_typelib.c:164: error: 'struct priv_data' has no member named 'ifaces' etc etc Another try... don't 'typedef PRUint64 uint64' in protypes.h on OpenBSD, which also fails : ../../../dist/include/mozilla/Util.h:148: error: 'uint64' does not name a type ../../../dist/include/mozilla/Util.h:160: error: 'uint64' does not name a type In file included from /home/landry/src/mozilla-central/js/src/jsval.h:48,util.h: In static member function 'static T* js::OffTheBooks::array_new(size_t)': /home/landry/src/mozilla-central/js/src/jsutil.h:474: error: 'uint64' was not declared in this scope /home/landry/src/mozilla-central/js/src/jsutil.h:474: error: expected `;' before 'numBytes64' /home/landry/src/mozilla-central/js/src/jsutil.h:474: error: 'numBytes64' was not declared in this scope In file includedval.h: At global scope: /home/landry/src/mozilla-central/js/src/jsval.h:154: error: 'uint64' was not declared in this scope Sorry, i'm trying random things, as all that mess seems intermixed... and i don't have more ideas. 
So far the only patch that works for me is Created attachment 537836 [details] [diff] [review] Define jsval in terms of a type we control, not a type defined, depending on circumstances, by either NSPR or our own headers. Does this patch work for you? If it does, I'm disappointed in myself: this is the obvious change, suggested in the earliest comments in the bug. Comment on attachment 537836 [details] [diff] [review] Define jsval in terms of a type we control, not a type defined, depending on circumstances, by either NSPR or our own headers. Try server says it's all a-okay. Thanks for your patience. [muttered obscenities] I see there has been a windy road back to the same place, but can you explain why only jsval's type changes, and not every other use of uint64/uint32 in jsapi.h? My naive expectation would be that either we'd use uintX everywhere in jsapi.h, or JSUintX, but not a mix. Said differently, lets say I have to add a new decl to jsapi.h, how do I know whether to pick uint64 (like JS_ReadStructuredClone) or JSUint64 (like jsval would be)? Is the weirdness isolated to uint64 and not uint32? (In reply to comment #32) > I see there has been a windy road back to the same place, but can you > explain why only jsval's type changes, and not every other use of > uint64/uint32 in jsapi.h? Perhaps we should be doing so. I don't see why we wouldn't get the same sorts of linking errors. (In reply to comment #33) > Perhaps we should be doing so. I don't see why we wouldn't get the same > sorts of linking errors. However, changing the types of all the jsapi.h functions is a much larger change, and while it seems to be an improvement, it doesn't actually fix any problems we know of --- unlike the jsval change, which fixes a build. Can we put this patch in, given that I've filed the follow-up bug 662852? 
Comment on attachment 537836 [details] [diff] [review] Define jsval in terms of a type we control, not a type defined, depending on circumstances, by either NSPR or our own headers. Bug 662852 looks great. Landing this now seems fine; I just wanted to make sure I understood whether this was a partial fix or whether there was really something special about jsvals. Landry, haven't heard from you whether this patch fixes your problem. If it doesn't, please re-open. Oh, wait, resolved/fixed means "in mozilla-central". Sorry for the noise. Sorry for the delay, life's been pretty busy. Yes, your JSUint64 fix works for me on OpenBSD/amd64, i'll just have to check other archs to be sure. Thanks a lot ! cdleary-bot mozilla-central merge info:
https://bugzilla.mozilla.org/show_bug.cgi?id=599764
Making GUI applications with Gazpacho

Earlier I made some simple GUI applications using PyGTK and Glade, which is surprisingly easy. Now I have another small itch to scratch and another go at some GUI app. Only this time I decided that the coolness of Gazpacho looked slightly nicer to me, so I gave that a try.

Creating the UI is easy enough and after some messing around I had something that would do. Gazpacho claims to create glade files compatible with Libglade, so I just went about what I did last time:

import gtk
import gtk.glade

class MainWindow:
    def __init__(self):
        self.wtree = gtk.glade.XML('ddmp.glade')
        self.wtree.signal_autoconnect(self)

    def on_mainwindow_destroy(self, *args):
        gtk.main_quit()

    def main(self):
        gtk.main()

if __name__ == '__main__':
    app = MainWindow()
    app.main()

However, this didn't quite work: I got libglade warnings about an unexpected element <ui> and an unknown attribute constructor. Furthermore gtk.glade gave tracebacks about assertions of GTK_IS_WIDGET. After a quick search on the great internet that didn't result in anything (I was wondering if my libglade was too old or so), I had a look at the examples supplied (stupid me, why would I not look there first?) and sure enough, they don't use gtk.glade. So the above code changes into:

import gtk
from gazpacho.loader.loader import ObjectBuilder

class MainWindow:
    def __init__(self):
        self.wtree = ObjectBuilder('ddmp.glade')
        self.wtree.signal_autoconnect(self)

    def on_mainwindow_destroy(self, *args):
        gtk.main_quit()

    def main(self):
        mainwindow = self.wtree.get_widget('mainwindow')
        mainwindow.show()
        gtk.main()

if __name__ == '__main__':
    app = MainWindow()
    app.main()

So Gazpacho needs a different loader for the XML; the returned object appears to behave like the gtk.glade.XML widget tree, which is nice (since Gazpacho documentation seems to be non-existent).
I suppose libglade doesn't cope with the gtk.UIManager code created by Gazpacho yet (the FAQ seems to suggest there are patches pending) and that their custom loader translates it to something libglade understands. This does make me wonder whether you can use Gazpacho with any language other than Python; the examples only contained Python code. Surely they'll want to support any language that has libglade?

Lastly it seems to hide windows by default, which I actually quite like. I remember in Glade you had to explicitly hide dialog windows or they would show up at startup; this seems slightly more logical.

Overall I do quite like Gazpacho so far; I'm glad I chose it and would recommend it. It still has some rough edges but is very nice already.

1 comment:

Johan Dahlin said...

Thanks for the nice overview of Gazpacho. Unfortunately the Gazpacho FAQ is rather out of date, so let me answer your question here instead.

Gazpacho can save in three different formats: gazpacho, libglade and gtkbuilder (no, there's no UI to select/change it yet). The gazpacho format is obviously only available to Python programs; the reason a separate format was created was to be able to use recent GTK+ features which libglade doesn't support.

GtkBuilder is a new, rewritten XML loader which is included in GTK+ itself. However, no stable release of GTK+ that includes GtkBuilder has been made yet; 2.12.0, which is expected in a couple of weeks, will include it. GtkBuilder should deprecate the Gazpacho loader.

Note, I'm one of the Gazpacho developers and I did most of the work getting GtkBuilder included in GTK+.

http://blog.devork.be/2007/08/making-gui-applications-with-gazpacho.html
Singer.io utility library - PipelineWise compatible

Project description

pipelinewise-singer-python

Writes the Singer format from Python. This is a fork of Singer's singer-python made for PipelineWise.

Usage

Setup environment

This library depends on python3. We recommend using a virtualenv like this:

python3 -m venv ~/.virtualenvs/singer-python

Installation

Next, install this library:

source ~/.virtualenvs/singer-python/bin/activate
git clone
cd singer-python
make install

Usage example

Now, from Python code within the same virtualenv, you can use the library:

import singer

singer.write_schema('my_table', {'properties': {'id': {'type': 'string', 'key': True}}}, ['id'])
singer.write_records('my_table', [{'id': 'b'}, {'id': 'd'}])
singer.write_state({'my_table': 'd'})

Logging configuration

By default pipelinewise-singer-python doesn't use any predefined logging configuration; it's up to the calling library to define it. However, if the environment variable LOGGING_CONF_FILE is found and set, then pipelinewise-singer-python will use the path provided in the env variable as the logging configuration for the logger.

License

Distributed under the Apache License Version 2.0
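Under the hood, each of these calls prints a single JSON-encoded Singer message per line on stdout. As an illustration only (this is a hand-rolled sketch of the message shape, not the library's own implementation), a RECORD message can be built like this:

```python
import json

def format_record(stream, record):
    """Build the JSON line a Singer RECORD message is serialized to."""
    # A RECORD message carries a type tag, the stream name, and the row data.
    return json.dumps({"type": "RECORD", "stream": stream, "record": record})

print(format_record("my_table", {"id": "b"}))
# {"type": "RECORD", "stream": "my_table", "record": {"id": "b"}}
```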
https://pypi.org/project/pipelinewise-singer-python/
Created on 2016-09-25 09:45 by vmurashev, last changed 2019-12-12 16:06 by vstinner.

Hi, here is the issue: We (crystax.net) use custom builds of CPython, which for Windows are compiled by MinGW with pyconfig.h taken from PC/pyconfig.h. For 32-bit Windows everything works well, but for 64-bit Windows it doesn't. The root cause of this issue is actually very simple: Python code for Windows is very sensitive to the properly defined macros MS_WIN32/MS_WIN64. And while MS_WIN32 is predefined unconditionally in PC/pyconfig.h, MS_WIN64 is defined only in conjunction with _MSC_VER, like:

#ifdef _MSC_VER
...
#ifdef _WIN64
#define MS_WIN64
#endif
...
#endif /* _MSC_VER */

So the suggested patch (for 3.5.2 and 2.7.12) just appropriately defines MS_WIN64 for MinGW.

This probably duplicates Issue 17590, although the extra context here is nice :) What do you think of my patch there, which also removes them from PC/pyconfig.h? I think it is better to define things in one place if possible.

The patch suggested here is as trivial as it could be. And at the same time we (crystax.net) can prove that after this patch, Python compiled by MinGW for 64-bit Windows actually works well. There are other minor fixes, but this one is the most important and, again, is really trivial.

It looks like Issue 4709 may also be a duplicate. Your added definition is distant from the comment explaining it. And why not open MS_WIN64 to any Windows compiler, rather than limiting it to just MSC and MinGW? I presume you are not building by running the configure script, but by some other method. The advantage of moving this stuff out of PC/pyconfig.h is that it can also be used with the pyconfig.h generated by the configure script.

FWIW I'm not bothered by this patch. Maybe we can one day tidy this all up, but merging this now is a good stop-gap. (I probably won't get a chance in the next week.)

Other compilers can pass architecture flags on the command line instead of inferring them at build time.

> And why not open MS_WIN64 to any Windows compiler

It would be a very good idea. The patches suggested here are just drafts which happen to work. Actually I don't believe they will be accepted, so they are just dropped here for history.
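Reconstructed from the description above (the actual patch attached to the issue may differ in detail), the suggested change amounts to defining MS_WIN64 for MinGW outside the _MSC_VER-only block in PC/pyconfig.h:

```c
/* Sketch of the change described above: let MinGW builds see MS_WIN64
 * too, instead of tying its definition to _MSC_VER.  The __MINGW32__ and
 * __MINGW64__ macros are the compiler's own predefined ones. */
#if defined(__MINGW32__) || defined(__MINGW64__)
#  ifdef _WIN64
#    define MS_WIN64
#  endif
#endif
```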
https://bugs.python.org/issue28267
In Chapter 7, I/O, we talked about the IO monad, but we intentionally kept the discussion narrowly focused on how to communicate with the outside world. We didn't discuss what a monad is. We've already seen in Chapter 7, I/O that the IO monad is easy to work with. Notational differences aside, writing code in the IO monad isn't much different from coding in any other imperative language.

When we had practical problems to solve in earlier chapters, we introduced structures that, as we will soon see, are actually monads. We aim to show you that a monad is often an obvious and useful tool to help solve a problem. We'll define a few monads in this chapter, to show how easy it is.

Let's take another look at the parseP5 function that we wrote in Chapter 10, Code case study: parsing a binary data format. When we introduced this function, it threatened to march off the right side of the page if it got much more complicated. We brought the staircasing under control using the (>>?) function.

-- file: ch10/PNM.hs
(>>?) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>? _ = Nothing
Just v  >>? f = f v

We carefully chose the type of (>>?) to let us chain together functions that return a Maybe value. So long as the result type of one function matches the parameter of the next, we can chain functions returning Maybe together indefinitely. The body of (>>?) hides the details of whether the chain of functions we build is short-circuited somewhere, due to one returning Nothing, or completely evaluated.

Useful as (>>?) was for cleaning up the structure of parseP5, we had to incrementally consume pieces of a string as we parsed it. This forced us to pass the current value of the string down our chain of Maybes, wrapped up in a tuple. Each function in the chain put a result into one element of the tuple, and the unconsumed remainder of the string into the other.
Once again, we were faced with a pattern of repeated behaviour: consume some string, return a result, and return the remaining string for the next function to consume. However, this pattern was more insidious: if we wanted to pass another piece of information down the chain, we'd have to modify nearly every element of the chain, turning each two-tuple into a three-tuple! We addressed this by moving the responsibility for managing the current piece of string out of the individual functions in the chain, and into the function that we used to chain them together. We also hid the details of the parsing state in the ParseState type. Even the getState and putState functions don't inspect the parsing state, so any modification to ParseState will have no effect on any existing code.

When we look at the above examples in detail, they don't seem to have much in common. Obviously, they're both concerned with chaining functions together, and with hiding details to let us write tidier code. However, let's take a step back and consider them in less detail.

First, let's look at the type definitions.

-- file: ch14/Maybe.hs
data Maybe a = Nothing
             | Just a

-- file: ch10/Parse.hs
newtype Parse a = Parse {
      runParse :: ParseState -> Either String (a, ParseState)
    }

The common feature of these two types is that each has a single type parameter on the left of the definition, which appears somewhere on the right. These are thus generic types, which know nothing about their payloads.

Next, we'll examine the chaining functions that we wrote for the two types.

ghci> :type (>>?)
(>>?) :: Maybe a -> (a -> Maybe b) -> Maybe b
ghci> :type (==>)
(==>) :: Parse a -> (a -> Parse b) -> Parse b

These functions have strikingly similar types. If we were to turn those type constructors into a type variable, we'd end up with a single more abstract type.
-- file: ch14/Maybe.hs
chain :: m a -> (a -> m b) -> m b

Finally, in each case we have a function that takes a "plain" value, and "injects" it into the target type. For Maybe, this function is simply the value constructor Just, but the injector for Parse is more complicated.

-- file: ch10/Parse.hs
identity :: a -> Parse a
identity a = Parse (\s -> Right (a, s))

Again, it's not the details or complexity that we're interested in, it's the fact that each of these types has an "injector" function, which looks like this.

-- file: ch14/Maybe.hs
inject :: a -> m a

It is exactly these three properties, and a few rules about how we can use them together, that define a monad in Haskell. Let's revisit the above list in condensed form.

A type constructor m.

A function of type m a -> (a -> m b) -> m b for chaining the output of one function into the input of another.

A function of type a -> m a for injecting a normal value into the chain, i.e. it wraps a type a with the type constructor m.

The properties that make the Maybe type a monad are its type constructor Maybe a, our chaining function (>>?), and the injector function Just. For Parse, the corresponding properties are the type constructor Parse a, the chaining function (==>), and the injector function identity.

We have intentionally said nothing about how the chaining and injection functions of a monad should behave, and that's because this almost doesn't matter. In fact, monads are ubiquitous in Haskell code precisely because they are so simple. Many common programming patterns have a monadic structure: passing around implicit data, or short-circuiting a chain of evaluations if one fails, to choose but two.

We can capture the notions of chaining and injection, and the types that we want them to have, in a Haskell typeclass. The standard Prelude already defines just such a typeclass, named Monad.
-- file: ch14/Maybe.hs
class Monad m where
    -- chain
    (>>=)  :: m a -> (a -> m b) -> m b
    -- inject
    return :: a -> m a

Here, (>>=) is our chaining function. We've already been introduced to it in the section called "Sequencing". It's often referred to as "bind", as it binds the result of the computation on the left to the parameter of the one on the right.

Our injection function is return. As we noted in the section called "The True Nature of Return", the choice of the name return is a little unfortunate. That name is widely used in imperative languages, where it has a fairly well understood meaning. In Haskell, its behaviour is much less constrained. In particular, calling return in the middle of a chain of functions won't cause the chain to exit early. A useful way to link its behavior to its name is that it returns a pure value (of type a) into a monad (of type m a).

While (>>=) and return are the core functions of the Monad typeclass, it also defines two other functions. The first is (>>). Like (>>=), it performs chaining, but it ignores the value on the left.

-- file: ch14/Maybe.hs
(>>) :: m a -> m b -> m b
a >> f = a >>= \_ -> f

We use this function when we want to perform actions in a certain order, but don't care what the result of one is. This might seem pointless: why would we not care what a function's return value is? Recall, though, that we defined a (==>&) combinator earlier to express exactly this. Alternatively, consider a function like print.

ghci> :type print "foo"
print "foo" :: IO ()

If we use plain (>>=), we have to provide as its right hand side a function that ignores its argument.

ghci> print "foo" >>= \_ -> print "bar"
"foo"
"bar"

But if we use (>>), we can omit the needless function.

ghci> print "baz" >> print "quux"
"baz"
"quux"

As we showed above, the default implementation of (>>) is defined in terms of (>>=).

The second non-core Monad function is fail, which takes an error message and does something to make the chain of functions fail.
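To make the roles of these functions concrete, here is a small sketch (not from the book's code) that chains computations in the familiar Maybe monad: return injects a value, (>>=) feeds each result onward, and a Nothing anywhere stops the chain producing further results.

```haskell
-- halve succeeds only on even numbers; odd input yields Nothing.
halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- return injects 12, then (>>=) chains two halvings: Just 3.
twice :: Maybe Int
twice = return 12 >>= halve >>= halve

-- A third halving hits the odd number 3, so the chain yields Nothing.
thrice :: Maybe Int
thrice = return 12 >>= halve >>= halve >>= halve
```

Loading these definitions in ghci and evaluating twice and thrice shows both behaviours of the chain.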
-- file: ch14/Maybe.hs
fail :: String -> m a
fail = error

To revisit the parser that we developed in Chapter 10, Code case study: parsing a binary data format, here is its Monad instance.

-- file: ch10/Parse.hs
instance Monad Parse where
    return = identity
    (>>=) = (==>)
    fail = bail

There are a few terms of jargon around monads that you may not be familiar with. These aren't formal terms, but they're in common use, so it's helpful to know about them.

"Monadic" simply means "pertaining to monads". A monadic type is an instance of the Monad typeclass; a monadic value has a monadic type.

When we say that a type "is a monad", this is really a shorthand way of saying that it's an instance of the Monad typeclass. Being an instance of Monad gives us the necessary monadic triple of type constructor, injection function, and chaining function. In the same way, a reference to "the Foo monad" implies that we're talking about the type named Foo, and that it's an instance of Monad.

An "action" is another name for a monadic value. This use of the word probably originated with the introduction of monads for I/O, where a monadic value like print "foo" can have an observable side effect. A function with a monadic return type might also be referred to as an action, though this is a little less common.

In our introduction to monads, we showed how some pre-existing code was already monadic in form. Now that we are beginning to grasp what a monad is, and we've seen the Monad typeclass, let's build a monad with foreknowledge of what we're doing. We'll start out by defining its interface. Our example revisits the glob-to-regexp translator from the section called "Translating a glob pattern into a regular expression". We will modify it so that it keeps a record of each of the special pattern sequences that it translates. We are revisiting familiar territory for a reason: it lets us compare non-monadic and monadic versions of the same code.

The Logger type is purely a type constructor. We don't export the value constructor that a user would need to create a value of this type. All they can use Logger for is writing type signatures.
The Log type is just a synonym for a list of strings, to make a few signatures more readable. We use a list of strings to keep the implementation simple. -- file: ch14/Logger.hs type Log = [String] Instead of giving our users a value constructor, we provide them with a function, runLogger, that evaluates a logged action. This returns both the result of an action and whatever was logged while the result was being computed. -- file: ch14/Logger.hs runLogger :: Logger a -> (a, Log) The Monad typeclass doesn't provide any means for values to escape their monadic shackles. We can inject a value into a monad using return. We can extract a value from a monad using (>>=) but the function on the right, which can see an unwrapped value, has to wrap its own result back up again. Most monads have one or more runLogger-like functions. The notable exception is of course IO, which we usually only escape from by exiting a program. A monad execution function runs the code inside the monad and unwraps its result. Such functions are usually the only means provided for a value to escape from its monadic wrapper. The author of a monad thus has complete control over how whatever happens inside the monad gets out. Some monads have several execution functions. In our case, we can imagine a few alternatives to runLogger: one might only return the log messages, while another might return just the result and drop the log messages. When executing inside a Logger action, user code calls record to record something. -- file: ch14/Logger.hs record :: String -> Logger () Since recording occurs in the plumbing of our monad, our action's result supplies no information. Usually, a monad will provide one or more helper functions like our record. These are our means for accessing the special behaviors of that monad. Our module also defines the Monad instance for the Logger type. These definitions are all that a client module needs in order to be able to use this monad. 
Here is a preview, in ghci, of how our monad will behave.

ghci> let simple = return True :: Logger Bool
ghci> runLogger simple
(True,[])

When we run the logged action using runLogger, we get back a pair. The first element is the result of our code; the second is the list of items logged while the action executed. We haven't logged anything, so the list is empty. Let's fix that.

ghci> runLogger (record "hi mom!" >> return 3.1337)
(3.1337,["hi mom!"])

Here's how we kick off our glob-to-regexp conversion inside the Logger monad.

-- file: ch14/Logger.hs
globToRegex cs =
    globToRegex' cs >>= \ds ->
    return ('^':ds)

There are a few coding style issues worth mentioning here. The body of the function starts on the line after its name. By doing this, we gain some horizontal white space. We've also "hung" the parameter of the anonymous function at the end of the line. This is common practice in monadic code.

Remember the type of (>>=): it extracts the value on the left from its Logger wrapper, and passes the unwrapped value to the function on the right. The function on the right must, in turn, wrap its result with the Logger wrapper. This is exactly what return does: it takes a pure value, and wraps it in the monad's type constructor.

ghci> :type (>>=)
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
ghci> :type (globToRegex "" >>=)
(globToRegex "" >>=) :: (String -> Logger b) -> Logger b

Even when we write a function that does almost nothing, we must call return to wrap the result with the correct type.

-- file: ch14/Logger.hs
globToRegex' :: String -> Logger String
globToRegex' "" = return "$"

When we call record to save a log entry, we use (>>) instead of (>>=) to chain it with the following action.

-- file: ch14/Logger.hs
globToRegex' ('?':cs) =
    record "any" >>
    globToRegex' cs >>= \ds ->
    return ('.':ds)

Recall that this is a variant of (>>=) that ignores the result on the left. We know that the result of record will always be (), so there's no point in capturing it.
We can use do notation, which we first encountered in the section called "Sequencing", to somewhat tidy up our code.

-- file: ch14/Logger.hs
globToRegex' ('*':cs) = do
    record "kleene star"
    ds <- globToRegex' cs
    return (".*" ++ ds)

The choice of do notation versus explicit (>>=) with anonymous functions is mostly a matter of taste, though almost everyone's taste is to use do notation for anything longer than about two lines. There is one significant difference between the two styles, though, which we'll return to in the section called "Desugaring of do blocks".

Parsing a character class mostly follows the same pattern that we've already seen.

-- file: ch14/Logger.hs
globToRegex' ('[':'!':c:cs) =
    record "character class, negative" >>
    charClass cs >>= \ds ->
    return ("[^" ++ c : ds)
globToRegex' ('[':c:cs) =
    record "character class" >>
    charClass cs >>= \ds ->
    return ("[" ++ c : ds)
globToRegex' ('[':_) =
    fail "unterminated character class"

Based on the code we've seen so far, monads seem to have a substantial shortcoming: the type constructor that wraps a monadic value makes it tricky to use a normal, pure function on a value trapped inside a monadic wrapper. Here's a simple illustration of the apparent problem. Let's say we have a trivial piece of code that runs in the Logger monad and returns a string.

ghci> let m = return "foo" :: Logger String

If we want to find out the length of that string, we can't simply call length: the string is wrapped, so the types don't match up.

ghci> length m
<interactive>:1:7:
    Couldn't match expected type `[a]'
           against inferred type `Logger String'
    In the first argument of `length', namely `m'
    In the expression: length m
    In the definition of `it': it = length m

What we've done so far to work around this is something like the following.
ghci> :type m >>= \s -> return (length s)
m >>= \s -> return (length s) :: Logger Int

We use (>>=) to unwrap the string, then write a small anonymous function that calls length and rewraps the result using return. This need crops up often in Haskell code. We won't be surprised to learn that a shorthand already exists: we use the lifting technique that we introduced for functors in the section called "Introducing functors". Lifting a pure function into a functor usually involves unwrapping the value inside the functor, calling the function on it, and rewrapping the result with the same constructor. We do exactly the same thing with a monad. Because the Monad typeclass already provides the (>>=) and return functions that know how to unwrap and wrap a value, the liftM function doesn't need to know any details of a monad's implementation.

-- file: ch14/Logger.hs
liftM :: (Monad m) => (a -> b) -> m a -> m b
liftM f m = m >>= \i -> return (f i)

When we declare a type to be an instance of the Functor typeclass, we have to write our own version of fmap specially tailored to that type. By contrast, liftM doesn't need to know anything of a monad's internals, because they're abstracted by (>>=) and return. We only need to write it once, with the appropriate type constraint.

The liftM function is predefined for us in the standard Control.Monad module.

To see how liftM can help readability, we'll compare two otherwise identical pieces of code. First, the familiar kind that does not use liftM.

-- file: ch14/Logger.hs
charClass_wordy (']':cs) =
    globToRegex' cs >>= \ds ->
    return (']':ds)
charClass_wordy (c:cs) =
    charClass_wordy cs >>= \ds ->
    return (c:ds)

Now we can eliminate the (>>=) and anonymous function cruft with liftM.

-- file: ch14/Logger.hs
charClass (']':cs) = (']':) `liftM` globToRegex' cs
charClass (c:cs) = (c:) `liftM` charClass cs

As with fmap, we often use liftM in infix form.
An easy way to read such an expression is “apply the pure function on the left to the result of the monadic action on the right”. The liftM function is so useful that Control.Monad defines several variants, which combine longer chains of actions. We can see one in the last clause of our globToRegex' function. -- file: ch14/Logger.hs globToRegex' (c:cs) = liftM2 (++) (escape c) (globToRegex' cs) escape :: Char -> Logger String escape c | c `elem` regexChars = record "escape" >> return ['\\',c] | otherwise = return [c] where regexChars = "\\+()^$.{}]|" The liftM2 function that we use above is defined as follows. -- file: ch14/Logger.hs liftM2 :: (Monad m) => (a -> b -> c) -> m a -> m b -> m c liftM2 f m1 m2 = m1 >>= \a -> m2 >>= \b -> return (f a b) It executes the first action, then the second, then combines their results using the pure function f, and wraps that result. In addition to liftM2, the variants in Control.Monad go up to liftM5. We've now seen enough examples of monads in action to have some feel for what's going on. Before we continue, there are a few oft-repeated myths about monads that we're going to address. You're bound to encounter these assertions “in the wild”, so you might as well be prepared with a few good retorts. Monads can be hard to understand. We've already shown that monads “fall out naturally” from several problems. We've found that the best key to understanding them is to explain several concrete examples, then talk about what they have in common. Monads are only useful for I/O and imperative coding. While we use monads for I/O in Haskell, they're valuable for many other purposes besides. We've already used them for short-circuiting a chain of computations, hiding complicated state, and logging. Even so, we've barely scratched the surface. Monads are unique to Haskell. Haskell is probably the language that makes the most explicit use of monads, but people write them in other languages, too, ranging from C++ to OCaml. 
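As a quick check on these types, here is a sketch using the real liftM and liftM2 from Control.Monad, applied to the Maybe monad rather than the book's Logger (whose internals only appear later); the helper names are invented here for illustration.

```haskell
import Control.Monad (liftM, liftM2)

-- Lift the pure length function into Maybe.
strLen :: Maybe String -> Maybe Int
strLen = liftM length

-- Combine two optional numbers with the pure (+).
addBoth :: Maybe Int -> Maybe Int -> Maybe Int
addBoth = liftM2 (+)

-- strLen (Just "foo")       evaluates to Just 3
-- addBoth (Just 1) (Just 2) evaluates to Just 3
-- addBoth (Just 1) Nothing  evaluates to Nothing
```

The same two combinators work unchanged in any monad, which is exactly the point: they are written once against (>>=) and return.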
They happen to be particularly tractable in Haskell, due to do notation, the power and inference of the type system, and the language's syntax. Monads are for controlling the order of evaluation. The definition of our Logger type is very simple. -- file: ch14/Logger.hs newtype Logger a = Logger { execLogger :: (a, Log) } It's a pair, where the first element is the result of an action, and the second is a list of messages logged while that action was run. We've wrapped the tuple in a newtype to make it a distinct type. The runLogger function extracts the tuple from its wrapper. The function that we're exporting to execute a logged action, runLogger, is just a synonym for execLogger. -- file: ch14/Logger.hs runLogger = execLogger Our record helper function creates a singleton list of the message we pass it. -- file: ch14/Logger.hs record s = Logger ((), [s]) The result of this action is (), so that's the value we put in the result slot. Let's begin our Monad instance with return, which is trivial: it logs nothing, and stores its input in the result slot of the tuple. -- file: ch14/Logger.hs instance Monad Logger where return a = Logger (a, []) Slightly more interesting is (>>=), which is the heart of the monad. It combines an action and a monadic function to give a new result and a new log. -- file: ch14/Logger.hs -- (>>=) :: Logger a -> (a -> Logger b) -> Logger b m >>= k = let (a, w) = execLogger m n = k a (b, x) = execLogger n in Logger (b, w ++ x) Let's spell out explicitly what is going on. We use runLogger to extract the result a from the action m, and we pass it to the monadic function k. We extract the result b from that in turn, and put it into the result slot of the final action. We concatenate the logs w and x to give the new log. Our definition of (>>=) ensures that messages logged on the left will appear in the new log before those on the right. However, it says nothing about when the values a and b are evaluated: (>>=) is lazy. 
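The pieces above can be condensed into a small, runnable sketch of the Logger internals. Note one assumption beyond the book's text: modern GHC requires Functor and Applicative instances alongside Monad, so boilerplate versions are filled in here using liftM and ap.

```haskell
import Control.Monad (ap, liftM)

type Log = [String]

newtype Logger a = Logger { runLogger :: (a, Log) }

-- record logs a single message; its result slot carries ().
record :: String -> Logger ()
record s = Logger ((), [s])

instance Functor Logger where
    fmap = liftM

instance Applicative Logger where
    pure a = Logger (a, [])   -- inject: no messages logged
    (<*>)  = ap

instance Monad Logger where
    return = pure
    m >>= k = let (a, w) = runLogger m
                  (b, x) = runLogger (k a)
              in Logger (b, w ++ x)   -- left log precedes right log

-- Messages logged on the left of (>>) appear first:
-- runLogger (record "one" >> record "two" >> return 3)
--   evaluates to (3,["one","two"])
```

Running the commented example confirms the ordering guarantee stated above: w ++ x puts the left-hand log first.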
Like most other aspects of a monad's behaviour, strictness is under the control of the monad's implementor. It is not a constant shared by all monads. Indeed, some monads come in multiple flavours, each with different levels of strictness. Our Logger monad is a specialised version of the standard Writer monad, which can be found in the Control.Monad.Writer module of the mtl package. We will present a Writer example in the section called “Using typeclasses”. The Maybe type is very nearly the simplest instance of Monad. It represents a computation that might not produce a result. -- file: ch14/Maybe.hs instance Monad Maybe where Just x >>= k = k x Nothing >>= _ = Nothing Just _ >> k = k Nothing >> _ = Nothing return x = Just x fail _ = Nothing When we chain together a number of computations over Maybe using (>>=) or (>>), if any of them returns Nothing, then we don't evaluate any of the remaining computations. Note, though, that the chain is not completely short-circuited. Each (>>=) or (>>) in the chain will still match a Nothing on its left, and produce a Nothing on its right, all the way to the end. It's easy to forget this point: when a computation in the chain fails, the subsequent production, chaining, and consumption of Nothing values is cheap at runtime, but it's not free. A function suitable for executing the Maybe monad is maybe. (Remember that “executing” a monad involves evaluating it and returning a result that's had the monad's type wrapper removed.) -- file: ch14/Maybe.hs maybe :: b -> (a -> b) -> Maybe a -> b maybe n _ Nothing = n maybe _ f (Just x) = f x Its first parameter is the value to return if the result is Nothing. The second is a function to apply to a result wrapped in the Just constructor; the result of that application is then returned. Since the Maybe type is so simple, it's about as common to simply pattern-match on a Maybe value as it is to call maybe. Each one is more readable in different circumstances. 
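Here is a brief sketch of maybe in use as an execution function; the describe name is invented here for illustration.

```haskell
-- Render an optional value as a string, supplying a default for Nothing.
describe :: Maybe Int -> String
describe = maybe "no value" (\x -> "got " ++ show x)

-- describe Nothing   evaluates to "no value"
-- describe (Just 42) evaluates to "got 42"
```

The first argument handles the Nothing case; the second is applied to the payload of a Just, exactly as the definition above spells out.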
Here's an example of Maybe in use as a monad. Given a customer's name, we want to find the billing address of their mobile phone carrier.

-- file: ch14/Carrier.hs

Our first version is the dreaded ladder of code marching off the right of the screen, with many boilerplate case expressions.

-- file: ch14/Carrier.hs
variation1 person phoneMap carrierMap addressMap =
    case M.lookup person phoneMap of
      Nothing -> Nothing
      Just number ->
          case M.lookup number carrierMap of
            Nothing -> Nothing
            Just carrier -> M.lookup carrier addressMap

The Data.Map module's lookup function has a monadic return type.

ghci> :module +Data.Map
ghci> :type Data.Map.lookup
Data.Map.lookup :: (Ord k, Monad m) => k -> Map k a -> m a

In other words, if the given key is present in the map, lookup injects it into the monad using return. Otherwise, it calls fail. This is an interesting piece of API design, though one that we think was a poor choice. On the positive side, the behaviours of success and failure are automatically customised to our needs, based on the monad we're calling lookup from. Better yet, lookup itself doesn't know or care what those behaviours are.

The case expressions above typecheck because we're comparing the result of lookup against values of type Maybe. The hitch is, of course, that using fail in the wrong monad throws a bothersome exception. We have already warned against the use of fail, so we will not repeat ourselves here.

In practice, everyone uses Maybe as the result type for lookup. The result type of such a conceptually simple function provides generality where it is not needed: lookup should have been written to return Maybe.

Let's set aside the API question, and deal with the ugliness of our code. We can make more sensible use of Maybe's status as a monad.
-- file: ch14/Carrier.hs variation2 person phoneMap carrierMap addressMap = do number <- M.lookup person phoneMap carrier <- M.lookup number carrierMap address <- M.lookup carrier addressMap return address If any of these lookups fails, the definitions of (>>=) and (>>) mean that the result of the function as a whole will be Nothing, just as it was for our first attempt that used case explicitly. This version is much tidier, but the return isn't necessary. Stylistically, it makes the code look more regular, and perhaps more familiar to the eyes of an imperative programmer, but behaviourally it's redundant. Here's an equivalent piece of code. -- file: ch14/Carrier.hs variation2a person phoneMap carrierMap addressMap = do number <- M.lookup person phoneMap carrier <- M.lookup number carrierMap M.lookup carrier addressMap When we introduced maps, we mentioned in the section called “Partial application awkwardness” that the type signatures of functions in the Data.Map module often make them awkward to partially apply. The lookup function is a good example. If we flip its arguments, we can write the function body as a one-liner. -- file: ch14/Carrier.hs variation3 person phoneMap carrierMap addressMap = lookup phoneMap person >>= lookup carrierMap >>= lookup addressMap where lookup = flip M.lookup While the Maybe type can represent either no value or one, there are many situations where we might want to return some number of results that we do not know in advance. Obviously, a list is well suited to this purpose. The type of a list suggests that we might be able to use it as a monad, because its type constructor has one free variable. And sure enough, we can use a list as a monad. Rather than simply present the Prelude's Monad instance for the list type, let's try to figure out what an instance ought to look like. This is easy to do: we'll look at the types of (>>=) and return, and perform some substitutions, and see if we can use a few familiar list functions. 
The more obvious of the two functions is return. We know that it takes a type a, and wraps it in a type constructor m to give the type m a. We also know that the type constructor here is []. Substituting this type constructor for the type variable m gives us the type [] a (yes, this really is valid notation!), which we can rewrite in more familiar form as [a].

We now know that return for lists should have the type a -> [a]. There are only a few sensible possibilities for an implementation of this function. It might return the empty list, a singleton list, or an infinite list. The most appealing behaviour, based on what we know so far about monads, is the singleton list: it doesn't throw information away, nor does it repeat it infinitely.

-- file: ch14/ListMonad.hs
returnSingleton :: a -> [a]
returnSingleton x = [x]

If we perform the same substitution trick on the type of (>>=) as we did with return, we discover that it should have the type [a] -> (a -> [b]) -> [b]. This seems close to the type of map.

ghci> :type (>>=)
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
ghci> :type map
map :: (a -> b) -> [a] -> [b]

The ordering of the types in map's arguments doesn't match, but that's easy to fix.

ghci> :type (>>=)
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
ghci> :type flip map
flip map :: [a] -> (a -> b) -> [b]

We've still got a problem: the second argument of flip map has the type a -> b, whereas the second argument of (>>=) for lists has the type a -> [b]. What do we do about this?

Let's do a little more substitution and see what happens with the types. The function flip map can return any type b as its result. If we substitute [b] for b in both places where it appears in flip map's type signature, its type signature reads as [a] -> (a -> [b]) -> [[b]]. In other words, if we map a function that returns a list over a list, we get a list of lists back.

ghci> flip map [1,2,3] (\a -> [a,a+100])
[[1,101],[2,102],[3,103]]

Interestingly, we haven't really changed how closely our type signatures match. The type of (>>=) is [a] -> (a -> [b]) -> [b], while that of flip map when the mapped function returns a list is [a] -> (a -> [b]) -> [[b]]. There's still a mismatch in one type term; we've just moved that term from the middle of the type signature to the end. However, our juggling wasn't in vain: we now need a function that takes a [[b]] and returns a [b], and one readily suggests itself in the form of concat.

ghci> :type concat
concat :: [[a]] -> [a]

The types suggest that we should flip the arguments to map, then concat the results to give a single list.

ghci> :type \xs f -> concat (map f xs)
\xs f -> concat (map f xs) :: [a] -> (a -> [a1]) -> [a1]

This is exactly the definition of (>>=) for lists.

-- file: ch14/ListMonad.hs
instance Monad [] where
    return x = [x]
    xs >>= f = concat (map f xs)

It applies f to every element in the list xs, and concatenates the results to return a single list. With our two core Monad definitions in hand, the implementations of the non-core definitions that remain, (>>) and fail, ought to be obvious.

-- file: ch14/ListMonad.hs
    xs >> f = concat (map (\_ -> f) xs)
    fail _ = []

The list monad is similar to a familiar Haskell tool, the list comprehension. We can illustrate this similarity by computing the Cartesian product of two lists. First, we'll write a list comprehension.

-- file: ch14/CartesianProduct.hs
comprehensive xs ys = [(x,y) | x <- xs, y <- ys]

For once, we'll use bracketed notation for the monadic code instead of layout notation. This will highlight how structurally similar the monadic code is to the list comprehension.
-- file: ch14/CartesianProduct.hs
monadic xs ys = do { x <- xs; y <- ys; return (x,y) }

The only real difference is that the value we're constructing comes at the end of the sequence of expressions, instead of the beginning as in the list comprehension. Also, the results of the two functions are identical.

ghci> comprehensive [1,2] "bar"
[(1,'b'),(1,'a'),(1,'r'),(2,'b'),(2,'a'),(2,'r')]
ghci> comprehensive [1,2] "bar" == monadic [1,2] "bar"
True

It's easy to be baffled by the list monad early on, so let's walk through our monadic Cartesian product code again in more detail. This time, we'll rearrange the function to use layout instead of brackets.

-- file: ch14/CartesianProduct.hs
blockyDo xs ys = do
    x <- xs
    y <- ys
    return (x, y)

For every element in the list xs, the rest of the function is evaluated once, with x bound to a different value from the list each time. Then for every element in the list ys, the remainder of the function is evaluated once, with y bound to a different value from the list each time. What we really have here is a doubly nested loop! This highlights an important fact about monads: you cannot predict how a block of monadic code will behave unless you know what monad it will execute in.

We'll now walk through the code even more explicitly, but first let's get rid of the do notation, to make the underlying structure clearer. We've indented the code a little unusually to make the loop nesting more obvious.

-- file: ch14/CartesianProduct.hs
blockyPlain xs ys =
    xs >>=
    \x -> ys >>=
    \y -> return (x, y)

blockyPlain_reloaded xs ys =
    concat (map (\x ->
                 concat (map (\y ->
                              return (x, y))
                         ys))
            xs)

If xs has the value [1,2,3], the two lines that follow are evaluated with x bound to 1, then to 2, and finally to 3. If ys has the value [True, False], the final line is evaluated six times: once with x as 1 and y as True; again with x as 1 and y as False; and so on. The return expression wraps each tuple in a single-element list.
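Running the block on concrete inputs confirms the doubly nested loop reading. This restates the book's blockyDo so that the six evaluations described above can be observed directly.

```haskell
blockyDo :: [a] -> [b] -> [(a, b)]
blockyDo xs ys = do
    x <- xs        -- outer loop over xs
    y <- ys        -- inner loop over ys
    return (x, y)  -- one singleton list per iteration, concatenated

-- blockyDo [1,2,3] [True,False] evaluates to
-- [(1,True),(1,False),(2,True),(2,False),(3,True),(3,False)]
```

The outer generator varies slowest, exactly like the outer loop of a nested for loop in an imperative language.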
Here is a simple brute force constraint solver. Given an integer, it finds all pairs of positive integers that, when multiplied, give that value (this is the constraint being solved).

-- file: ch14/MultiplyTo.hs
guarded :: Bool -> [a] -> [a]
guarded True xs = xs
guarded False _ = []

multiplyTo :: Int -> [(Int, Int)]
multiplyTo n = do
  x <- [1..n]
  y <- [x..n]
  guarded (x * y == n) $
    return (x, y)

ghci> multiplyTo 8
[(1,8),(2,4)]
ghci> multiplyTo 100
[(1,100),(2,50),(4,25),(5,20),(10,10)]
ghci> multiplyTo 891
[(1,891),(3,297),(9,99),(11,81),(27,33)]

Haskell's do syntax is an example of syntactic sugar: it provides an alternative way of writing monadic code, without using (>>=) and anonymous functions. Desugaring is the translation of syntactic sugar back to the core language.

The rules for desugaring a do block are easy to follow. We can think of a compiler as applying these rules mechanically and repeatedly to a do block until no more do keywords remain.

A do keyword followed by a single action is translated to that action by itself.

A do keyword followed by more than one action is translated to the first action, then (>>), followed by a do keyword and the remaining actions. When we apply this rule repeatedly, the entire do block ends up chained together by applications of (>>).

The <- notation has a translation that's worth paying close attention to. On the left of the <- is a normal Haskell pattern. This can be a single variable or something more complicated. A guard expression is not allowed. This pattern is translated into a let binding that declares a local function with a unique name (we're just using f as an example above). The action on the right of the <- is then chained with this function using (>>=).

What's noteworthy about this translation is that if the pattern match fails, the local function calls the monad's fail implementation. Here's an example using the Maybe monad.
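The guarded pattern extends naturally to other constraints. Here is a variation not found in the book: all Pythagorean triples with sides bounded by n.

```haskell
-- Prune a branch of the search when the condition is False.
guarded :: Bool -> [a] -> [a]
guarded True  xs = xs
guarded False _  = []

-- All (a, b, c) with a <= b <= c <= n and a^2 + b^2 == c^2.
pythagorean :: Int -> [(Int, Int, Int)]
pythagorean n = do
  a <- [1..n]
  b <- [a..n]
  c <- [b..n]
  guarded (a*a + b*b == c*c) $
    return (a, b, c)

-- pythagorean 15 evaluates to [(3,4,5),(5,12,13),(6,8,10),(9,12,15)]
```

Each generator narrows its range based on the previous one, so no triple is produced twice in different orders.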
-- file: ch14/Do.hs
robust :: [a] -> Maybe a
robust xs = do (_:x:_) <- Just xs
               return x

The fail implementation in the Maybe monad simply returns Nothing. If the pattern match in the above function fails, we thus get Nothing as our result.

ghci> robust [1,2,3]
Just 2
ghci> robust [1]
Nothing

Finally, when we write a let expression in a do block, we can omit the usual in keyword. Subsequent actions in the block must be lined up with the let keyword.

Back in the section called “The offside rule is not mandatory”, we mentioned that layout is the norm in Haskell, but it's not required. We can write a do block using explicit structure instead of layout. Even though this use of explicit structure is rare, the fact that it uses semicolons to separate expressions has given rise to an apt slogan: monads are a kind of “programmable semicolon”, because the behaviours of (>>) and (>>=) are different in each monad.

When we write (>>=) explicitly in our code, it reminds us that we're stitching functions together using combinators, not simply sequencing actions. As long as you feel like a novice with monads, we think you should prefer to explicitly write (>>=) over the syntactic sugar of do notation. The repeated reinforcement of what's really happening seems, for many programmers, to help to keep things clear. (It can be easy for an imperative programmer to relax a little too much from exposure to the IO monad, and assume that a do block means nothing more than a simple sequence of actions.) Once you're feeling more familiar with monads, you can choose whichever style seems more appropriate for writing a particular function. Indeed, when you read other people's monadic code, you'll see that it's unusual, but by no means rare, to mix both do notation and (>>=) in a single function.

The (=<<) function shows up frequently whether or not we use do notation. It is a flipped version of (>>=).
ghci> :type (>>=)
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
ghci> :type (=<<)
(=<<) :: (Monad m) => (a -> m b) -> m a -> m b

It comes in handy if we want to compose monadic functions in the usual Haskell right-to-left style.

-- file: ch14/CartesianProduct.hs
wordCount = print . length . words =<< getContents

We discovered earlier in this chapter that the Parse from Chapter 10, Code case study: parsing a binary data format, was a monad. It has two logically distinct aspects. One is the idea of a parse failing, and providing a message with the details: we represented this using the Either type. The other involves carrying around a piece of implicit state, in our case the partially consumed ByteString.

This need for a way to read and write state is common enough in Haskell programs that the standard libraries provide a monad named State that is dedicated to this purpose. This monad lives in the Control.Monad.State module. Where our Parse type carried around a ByteString as its piece of state, the State monad can carry any type of state. We'll refer to the state's unknown type as s.

What's an obvious and general thing we might want to do with a state? Given a state value, we inspect it, then produce a result and a new state value. Let's say the result can be of any type a. A type signature that captures this idea is s -> (a, s): take a state s, do something with it, and return a result a and possibly a new state s.

Let's develop some simple code that's almost the State monad, then we'll take a look at the real thing. We'll start with our type definition, which has exactly the obvious type we described above.

-- file: ch14/SimpleState.hs
type SimpleState s a = s -> (a, s)

Our monad is a function that transforms one state into another, yielding a result when it does so. Because of this, the state monad is sometimes called the state transformer monad. Yes, this is a type synonym, not a new type, and so we're cheating a little.
Bear with us for now; this simplifies the description that follows.

Earlier in this chapter, we said that a monad has a type constructor with a single type variable, and yet here we have a type with two parameters. The key here is to understand that we can partially apply a type just as we can partially apply a normal function. This is easiest to follow with an example.

-- file: ch14/SimpleState.hs
type StringState a = SimpleState String a

Here, we've bound the type variable s to String. The type StringState still has a type parameter a, though. It's now more obvious that we have a suitable type constructor for a monad. In other words, our monad's type constructor is SimpleState s, not SimpleState alone.

The next ingredient we need to make a monad is a definition for the return function.

-- file: ch14/SimpleState.hs
returnSt :: a -> SimpleState s a
returnSt a = \s -> (a, s)

All this does is take the result and the current state, and “tuple them up”. You may by now be used to the idea that a Haskell function with multiple parameters is just a chain of single-parameter functions, but just in case you're not, here's a more familiar way of writing returnSt that makes it more obvious how simple this function is.

-- file: ch14/SimpleState.hs
returnAlt :: a -> SimpleState s a
returnAlt a s = (a, s)

Our final piece of the monadic puzzle is a definition for (>>=). Here it is, using the actual variable names from the standard library's definition of (>>=) for State.

-- file: ch14/SimpleState.hs
bindSt :: (SimpleState s a) -> (a -> SimpleState s b) -> SimpleState s b
bindSt m k = \s -> let (a, s') = m s
                   in (k a) s'

Those single-letter variable names aren't exactly a boon to readability, so let's see if we can substitute some more meaningful names.
-- file: ch14/SimpleState.hs
-- m == step
-- k == makeStep
-- s == oldState

bindAlt step makeStep oldState =
    let (result, newState) = step oldState
    in  (makeStep result) newState

To understand this definition, remember that step is a function with the type s -> (a, s). When we evaluate this, we get a tuple, and we have to use this to return a new function of type s -> (a, s). This is perhaps easier to follow if we get rid of the SimpleState type synonyms from bindAlt's type signature, and examine the types of its parameters and result.

-- file: ch14/SimpleState.hs
bindAlt :: (s -> (a, s))       -- step
        -> (a -> s -> (b, s))  -- makeStep
        -> (s -> (b, s))       -- (makeStep result) newState

The definitions of (>>=) and return for the state monad simply act as plumbing: they move a piece of state around, but they don't touch it in any way. We need a few other simple functions to actually do useful work with the state.

-- file: ch14/SimpleState.hs
getSt :: SimpleState s s
getSt = \s -> (s, s)

putSt :: s -> SimpleState s ()
putSt s = \_ -> ((), s)

The getSt function simply takes the current state and returns it as the result, while putSt ignores the current state and replaces it with a new state.

The only simplifying trick we played in the previous section was to use a type synonym instead of a type definition for SimpleState. If we had introduced a newtype wrapper at the same time, the extra wrapping and unwrapping would have made our code harder to follow. In order to define a Monad instance, we have to provide a proper type constructor as well as definitions for (>>=) and return. This leads us to the real definition of State.

-- file: ch14/State.hs
newtype State s a = State { runState :: s -> (a, s) }

All we've done is wrap our s -> (a, s) type in a State constructor. By using Haskell's record syntax to define the type, we're automatically given a runState function that will unwrap a State value from its constructor. The type of runState is State s a -> s -> (a, s).
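As a quick aside, the SimpleState plumbing above is already enough to thread state by hand. Here's a tiny counter of our own devising (not from the book) that reads the state, bumps it, and returns the old value:

```haskell
-- file: ch14/SimpleState.hs
-- our own example: read the counter, increment it, return the old value
tick :: SimpleState Int Int
tick = getSt `bindSt` \n ->
       putSt (n + 1) `bindSt` \_ ->
       returnSt n
```

Because a SimpleState is just a function, we run it by applying it to an initial state: tick 5 evaluates to (5, 6), the old counter paired with the new state.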
The definition of return is almost the same as for SimpleState, except we wrap our function with a State constructor.

-- file: ch14/State.hs
returnState :: a -> State s a
returnState a = State $ \s -> (a, s)

The definition of (>>=) is a little more complicated, because it has to use runState to remove the State wrappers.

-- file: ch14/State.hs
bindState :: State s a -> (a -> State s b) -> State s b
bindState m k = State $ \s -> let (a, s') = runState m s
                              in runState (k a) s'

This function differs from our earlier bindSt only in adding the wrapping and unwrapping of a few values. By separating the “real work” from the bookkeeping, we've hopefully made it clearer what's really happening. We modify the functions for reading and modifying the state in the same way, by adding a little wrapping.

-- file: ch14/State.hs
get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

We've already used Parse, our precursor to the state monad, to parse binary data. In that case, we wired the type of the state we were manipulating directly into the Parse type. The State monad, by contrast, accepts any type of state as a parameter. We supply the type of the state, to give e.g. State ByteString.

The State monad will probably feel more familiar to you than many other monads if you have a background in imperative languages. After all, imperative languages are all about carrying around some implicit state, reading some parts, and modifying others through assignment, and this is just what the state monad is for. So instead of unnecessarily cheerleading for the idea of using the state monad, we'll begin by demonstrating how to use it for something simple: pseudorandom value generation. In an imperative language, there's usually an easily available source of uniformly distributed pseudorandom numbers. For example, in C, there's a standard rand function that generates a pseudorandom number, using a global state that it updates.
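One piece of wiring the text leaves implicit is the instance declaration that connects returnState and bindState to the Monad typeclass. Assuming the definitions above, it would look something like this (note: with a modern GHC you would also need Functor and Applicative instances, since Applicative later became a superclass of Monad):

```haskell
-- file: ch14/State.hs
-- a sketch of the instance the chapter implies; the library's real
-- definition is equivalent in spirit
instance Monad (State s) where
    return = returnState
    (>>=)  = bindState
```

With this in place, get, put, and any State computation can be combined using do notation, just as with any other monad.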
Haskell's standard random value generation module is named System.Random. It allows the generation of random values of any type, not just numbers. The module contains several handy functions that live in the IO monad. For example, a rough equivalent of C's rand function would be the following:

-- file: ch14/Random.hs
import System.Random

rand :: IO Int
rand = getStdRandom (randomR (0, maxBound))

(The randomR function takes an inclusive range within which the generated random value should lie.)

The System.Random module provides a typeclass, RandomGen, that lets us define new sources of random Int values. The type StdGen is the standard RandomGen instance. It generates pseudorandom values. If we had an external source of truly random data, we could make it an instance of RandomGen and get truly random, instead of merely pseudorandom, values. Another typeclass, Random, indicates how to generate random values of a particular type. The module defines Random instances for all of the usual simple types.

Incidentally, the definition of rand above reads and modifies a built-in global random generator that inhabits the IO monad. After all of our emphasis so far on avoiding the IO monad wherever possible, it would be a shame if we were dragged back into it just to generate some random values. Indeed, System.Random contains pure random number generation functions. The traditional downside of purity is that we have to get or create a random number generator, then ship it from the point we created it to the place where it's needed. When we finally call it, it returns a new random number generator: we're in pure code, remember, so we can't modify the state of the existing generator. If we forget about immutability and reuse the same generator within a function, we get back exactly the same “random” number every time.
-- file: ch14/Random.hs
twoBadRandoms :: RandomGen g => g -> (Int, Int)
twoBadRandoms gen = (fst $ random gen, fst $ random gen)

Needless to say, this has unpleasant consequences.

ghci> twoBadRandoms `fmap` getStdGen
Loading package old-locale-1.0.0.0 ... linking ... done.
Loading package old-time-1.0.0.0 ... linking ... done.
Loading package random-1.0.0.0 ... linking ... done.
Loading package mtl-1.1.0.0 ... linking ... done.
(945769311181683171,945769311181683171)

The random function uses an implicit range instead of the user-supplied range used by randomR. The getStdGen function retrieves the current value of the global standard number generator from the IO monad.

Unfortunately, correctly passing around and using successive versions of the generator does not make for palatable reading. Here's a simple example.

-- file: ch14/Random.hs
twoGoodRandoms :: RandomGen g => g -> ((Int, Int), g)
twoGoodRandoms gen = let (a, gen')  = random gen
                         (b, gen'') = random gen'
                     in ((a, b), gen'')

Now that we know about the state monad, though, it looks like a fine candidate to hide the generator. The state monad lets us manage our mutable state tidily, while guaranteeing that our code will be free of other unexpected side effects, such as modifying files or making network connections. This makes it easier to reason about the behavior of our code. Here's a state monad that carries around a StdGen as its piece of state.

-- file: ch14/Random.hs
type RandomState a = State StdGen a

The type synonym is of course not necessary, but it's handy. It saves a little keyboarding, and if we wanted to swap another random generator for StdGen, it would reduce the number of type signatures we'd need to change. Generating a random value is now a matter of fetching the current generator, using it, then modifying the state to replace it with the new generator.
-- file: ch14/Random.hs
getRandom :: Random a => RandomState a
getRandom =
  get >>= \gen ->
  let (val, gen') = random gen
  in put gen' >>
     return val

We can now use some of the monadic machinery that we saw earlier to write a much more concise function for giving us a pair of random numbers.

-- file: ch14/Random.hs
getTwoRandoms :: Random a => RandomState (a, a)
getTwoRandoms = liftM2 (,) getRandom getRandom

As we've already mentioned, each monad has its own specialised evaluation functions. In the case of the state monad, we have several to choose from. runState returns both the result and the final state. evalState returns only the result, throwing away the final state. execState throws the result away, returning only the final state. The evalState and execState functions are simply compositions of fst and snd with runState, respectively. Thus, of the three, runState is the one most worth remembering. Here's a complete example of how to use our getTwoRandoms function.

-- file: ch14/Random.hs
runTwoRandoms :: IO (Int, Int)
runTwoRandoms = do
  oldState <- getStdGen
  let (result, newState) = runState getTwoRandoms oldState
  setStdGen newState
  return result

The call to runState follows a standard pattern: we pass it a function in the state monad and an initial state. It returns the result of the function and the final state. The code surrounding the call to runState merely obtains the current global StdGen value, then replaces it afterwards so that subsequent calls to runTwoRandoms or other random generation functions will pick up the updated state.

It's a little hard to imagine writing much interesting code in which there's only a single state value to pass around. When we want to track multiple pieces of state at once, the usual trick is to maintain them in a data type. Here's an example: keeping track of the number of random numbers we are handing out.
-- file: ch14/Random.hs
data CountedRandom = CountedRandom {
      crGen :: StdGen
    , crCount :: Int
    }

type CRState = State CountedRandom

getCountedRandom :: Random a => CRState a
getCountedRandom = do
  st <- get
  let (val, gen') = random (crGen st)
  put CountedRandom { crGen = gen', crCount = crCount st + 1 }
  return val

This example happens to consume both elements of the state, and construct a completely new state, every time we call into it. More frequently, we're likely to read or modify only part of a state. This function gets the number of random values generated so far.

-- file: ch14/Random.hs
getCount :: CRState Int
getCount = crCount `liftM` get

This example illustrates why we used record syntax to define our CountedRandom state. It gives us accessor functions that we can glue together with get to read specific pieces of the state. If we want to partially update a state, the code doesn't come out quite so appealingly.

-- file: ch14/Random.hs
putCount :: Int -> CRState ()
putCount a = do
  st <- get
  put st { crCount = a }

Here, instead of a function, we're using record update syntax. The expression st { crCount = a } creates a new value that's an identical copy of st, except in its crCount field, which is given the value a. Because this is a syntactic hack, we don't get the same kind of flexibility as with a function. Record syntax may not exhibit Haskell's usual elegance, but it at least gets the job done. There exists a function named modify that combines the get and put steps. It takes as argument a state transformation function, but it's hardly more satisfactory: we still can't escape from the clumsiness of record update syntax.

-- file: ch14/Random.hs
putCountModify :: Int -> CRState ()
putCountModify a = modify $ \st -> st { crCount = a }

Functors and monads are closely related. The terms are borrowed from a branch of mathematics called category theory, but they did not make the transition completely unscathed. In category theory, a monad is built from a functor.
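Before we leave the state monad behind, here's a sketch (our own, not from the book) of driving the counted generator above end to end. It assumes replicateM from Control.Monad and mkStdGen from System.Random:

```haskell
-- file: ch14/Random.hs
-- our own sketch: run the counted generator three times from a fixed seed
demoCount :: Int
demoCount = crCount (execState draws start)
  where draws = replicateM 3 (getCountedRandom :: CRState Int)
        start = CountedRandom { crGen = mkStdGen 42, crCount = 0 }
```

Since execState returns only the final state, demoCount evaluates to 3: the counter records one increment per call, regardless of which random values were produced.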
You might expect that in Haskell, the Monad typeclass would thus be a subclass of Functor, but it isn't defined as such in the standard Prelude. This is an unfortunate oversight. However, authors of Haskell libraries use a workaround: when someone defines an instance of Monad for a type, they almost always write a Functor instance for it, too. You can expect that you'll be able to use the Functor typeclass's fmap function with any monad. If we compare the type signature of fmap with those of some of the standard monad functions that we've already seen, we get a hint as to what fmap on a monad does.

ghci> :type fmap
fmap :: (Functor f) => (a -> b) -> f a -> f b
ghci> :module +Control.Monad
ghci> :type liftM
liftM :: (Monad m) => (a1 -> r) -> m a1 -> m r

Sure enough, fmap lifts a pure function into the monad, just as liftM does. Now that we know about the relationship between functors and monads, if we look back at the list monad, we can see something interesting. Specifically, take a look at the definition of (>>=) for lists.

-- file: ch14/ListMonad.hs
instance Monad [] where
    return x = [x]
    xs >>= f = concat (map f xs)

Recall that f has type a -> [a]. When we call map f xs, we get back a value of type [[a]], which we have to “flatten” using concat. Consider what we could do if Monad was a subclass of Functor. Since fmap for lists is defined to be map, we could replace map with fmap in the definition of (>>=). This is not very interesting by itself, but suppose we could go further. The concat function is of type [[a]] -> [a]: as we mentioned, it flattens the nesting of lists. We could generalise this type signature from lists to monads, giving us the “remove a level of nesting” type m (m a) -> m a. The function that has this type is conventionally named join. If we had definitions of join and fmap, we wouldn't need to write a definition of (>>=) for every monad, because it would be completely generic.
Here's what an alternative definition of the Monad typeclass might look like, along with a definition of (>>=).

-- file: ch14/AltMonad.hs
import Prelude hiding ((>>=), return)

class Functor m => AltMonad m where
    join :: m (m a) -> m a
    return :: a -> m a

(>>=) :: AltMonad m => m a -> (a -> m b) -> m b
xs >>= f = join (fmap f xs)

Neither definition of a monad is “better”, since if we have join we can write (>>=), and vice versa, but the different perspectives can be refreshing. Removing a layer of monadic wrapping can, in fact, be useful in realistic circumstances. We can find a generic definition of join in the Control.Monad module.

-- file: ch14/MonadJoin.hs
join :: Monad m => m (m a) -> m a
join x = x >>= id

Here are some examples of what it does.

ghci> join (Just (Just 1))
Just 1
ghci> join Nothing
Nothing
ghci> join [[1],[2,3]]
[1,2,3]

In the section called “Thinking more about functors”, we introduced two rules for how functors should always behave.

-- file: ch14/MonadLaws.hs
fmap id        ==  id
fmap (f . g)   ==  fmap f . fmap g

There are also rules for how monads ought to behave. The three laws below are referred to as the monad laws. A Haskell implementation doesn't enforce these laws: it's up to the author of a Monad instance to follow them. The monad laws are simply formal ways of saying “a monad shouldn't surprise me”. In principle, we could probably get away with skipping over them entirely. It would be a shame if we did, however, because the laws contain gems of wisdom that we might otherwise overlook.

The first law states that return is a left identity for (>>=).

-- file: ch14/MonadLaws.hs
return x >>= f            ===   f x

Another way to phrase this is that there's no reason to use return to wrap up a pure value if all you're going to do is unwrap it again with (>>=). It's actually a common style error among programmers new to monads to wrap a value with return, then unwrap it with (>>=) a few lines later in the same function. Here's the same law written with do notation.
-- file: ch14/MonadLaws.hs
do y <- return x
   f y                    ===   f x

This law has practical consequences for our coding style: we don't want to write unnecessary code, and the law lets us assume that the terse code will be identical in its effect to the more verbose version.

The second monad law states that return is a right identity for (>>=).

-- file: ch14/MonadLaws.hs
m >>= return              ===   m

This law also has style consequences in real programs, particularly if you're coming from an imperative language: there's no need to use return if the last action in a block would otherwise be returning the correct result. Let's look at this law in do notation.

-- file: ch14/MonadLaws.hs
do y <- m
   return y               ===   m

Once again, if we assume that a monad obeys this law, we can write the shorter code in the knowledge that it will have the same effect as the longer code.

The final law is concerned with associativity.

-- file: ch14/MonadLaws.hs
m >>= (\x -> f x >>= g)   ===   (m >>= f) >>= g

This law can be a little more difficult to follow, so let's look at the contents of the parentheses on each side of the equation. We can rewrite the expression on the left as follows.

-- file: ch14/MonadLaws.hs
m >>= s
  where s x = f x >>= g

On the right, we can also rearrange things.

-- file: ch14/MonadLaws.hs
t >>= g
  where t = m >>= f

We're now claiming that the following two expressions are equivalent.

-- file: ch14/MonadLaws.hs
m >>= s                   ===   t >>= g

What this means is that if we want to break up an action into smaller pieces, it doesn't matter which sub-actions we hoist out to make new actions with, provided we preserve their ordering. If we have three actions chained together, we can substitute the first two and leave the third in place, or we can replace the second two and leave the first in place. Even this more complicated law has a practical consequence.
In the terminology of software refactoring, the “extract method” technique is a fancy term for snipping out a piece of inline code, turning it into a function, and calling the function from the site of the snipped code. This law essentially states that this technique can be applied to monadic Haskell code. We've now seen how each of the monad laws offers us an insight into writing better monadic code. The first two laws show us how to avoid unnecessary use of return. The third suggests that we can safely refactor a complicated action into several simpler ones. We can now safely let the details fade, in the knowledge that our “do what I mean” intuitions won't be violated when we use properly written monads. Incidentally, a Haskell compiler cannot guarantee that a monad actually follows the monad laws. It is the responsibility of a monad's author to satisfy—or, preferably, prove to—themselves that their code follows the laws.
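Since the compiler won't check the laws for us, it can be reassuring to spot-check them by hand for a given monad. Here's a quick sketch of our own for Maybe (a more thorough treatment would use randomized property testing):

```haskell
-- file: ch14/MonadLaws.hs
-- our own spot-checks of the three laws, specialised to Maybe
leftId, rightId, assoc :: Bool
leftId  = (return 3 >>= f) == f 3
  where f x = Just (x + 1)
rightId = (m >>= return) == m
  where m = Just 3
assoc   = (m >>= (\x -> f x >>= g)) == ((m >>= f) >>= g)
  where m   = Just 3
        f x = Just (x + 1)
        g x = Just (x * 2)
-- all three evaluate to True
```

Checks like these don't prove the laws, of course, since they only exercise particular values, but they catch gross violations cheaply.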
http://book.realworldhaskell.org/read/monads.html
Hi, this could be a silly question from a newcomer, but I wonder whether I can use a Node.js library with the stack that Ionic 2 provides (the Node package system with package.json, Angular 2, Cordova). I found an interesting library from Google for my project (I need geocoding for some form validation). For the moment I added this package to package.json with

npm install @google/maps --save
npm i

I created a provider for the client consuming it, added

import {} from '@google/maps';

and understood that I need some sort of typings for TS :) Is it really an option to use libraries separate from Ionic and Angular, or am I wasting my time?

Update

Thanks to @Ivaro18 on SO, I managed to add typings to my project:

npm install @types/google-maps --save

Now I can make something like this:

import GoogleMapsLoader = require('google-maps');

@Injectable()
export class GoogleMapsClientProvider {

  constructor(public http: Http) {
    console.log('Hello GoogleMapsClientProvider Provider');
    GoogleMapsLoader.createLoader();
  };

}

I found about this here, but I'm not sure how to use this stuff, e.g.

GoogleMapsLoader.KEY = 'qwertyuiopasdfghjklzxcvbnm';
GoogleMapsLoader.CLIENT = 'yourclientkey';

What is what? Has someone tried this lib before? My mandatory task is to use geocoding for address validation.
https://forum.ionicframework.com/t/using-library-from-npm-package-google-maps-for-ionic-2-project/72534
I'm having trouble training NLTK's perceptron tagger on sentences tagged with the universal tagset. Here is the code I am running when I have the issue.

import nltk, math

tagged_sentences = nltk.corpus.brown.tagged_sents(categories='news', tagset='universal')
i = math.floor(len(tagged_sentences) * 0.2)
testing_sentences = tagged_sentences[0:i]
training_sentences = tagged_sentences[i:]
perceptron_tagger = nltk.tag.perceptron.PerceptronTagger(load=False)
perceptron_tagger.train(training_sentences)

It won't train correctly, and gives the following stack trace.

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-61332d63d2c3> in <module>()
      1 perceptron_tagger = nltk.tag.perceptron.PerceptronTagger(load=False)
----> 2 perceptron_tagger.train(training_sentences)

/home/nathan/anaconda3/lib/python3.5/site-packages/nltk/tag/perceptron.py in train(self, sentences, save_loc, nr_iter)
    192                 c += guess == tags[i]
    193                 n += 1
--> 194             random.shuffle(sentences)
    195             logging.info("Iter {0}: {1}/{2}={3}".format(iter_, c, n, _pc(c, n)))
    196             self.model.average_weights()

/home/nathan/anaconda3/lib/python3.5/random.py in shuffle(self, x, random)
    270                 # pick an element in x[:i+1] with which to exchange x[i]
    271                 j = randbelow(i+1)
--> 272                 x[i], x[j] = x[j], x[i]
    273             else:
    274                 _int = int

TypeError: 'LazySubsequence' object does not support item assignment

It seems to be coming from the shuffle function in the random module, but that doesn't really seem right. Is there something else that could cause the problem? Has anyone else had this issue? I am running this on Ubuntu 16.04.1 with Anaconda Python 3.5. The nltk version is 3.2.1.
NLTK has a lot of custom "lazy" types, which should ease the handling of large bodies of data, such as annotated corpora. They behave like the standard lists, tuples, dicts etc. in many ways, but avoid occupying too much memory unnecessarily. One instance of this is the LazySubsequence, which is the result of the slice expression tagged_sentences[i:].

If tagged_sentences were a normal list, dividing the data into test/training sets would create an entire copy of the data. Instead, this LazySubsequence is a view onto part of the original sequence. While the memory benefits of this are probably a good thing, the problem here is that this view is read-only. Apparently the PerceptronTagger would like to shuffle its input data in place, which is not allowed – hence the exception.

A quick (but maybe not the most elegant) solution is to provide the tagger with a mutable copy of the data (it must be a list, not a tuple, since random.shuffle mutates its argument in place):

perceptron_tagger.train(list(training_sentences))

You might have to do the same thing with the test data.
https://www.codesd.com/item/nltk-perceptron-tagger-typeerror-the-lazysubsequence-object-does-not-support-element-allocation.html
Auto Login Bot Using Python

Hi everyone, in this post I will tell you how you can make an auto-login bot in Python. This will automatically log in to your account on a specific site when you run this code. We need this because it is very annoying that, after a certain timeout, we have to log in again. So, let's start.

We require the driver for the specific browser, so download the driver for the browser you are using. Python provides us with the module named selenium. Before going through this post, please read about the selenium module and its functions, because without that you will not get this post.

To download the drivers, go to the link below because we need this in our project.

From selenium we require webdriver. This code is written for Mozilla Firefox. If you want another browser, then just change the browser name.

Python program: Auto Login Bot

from selenium import webdriver  # used to drive the browser


def bot(usr, pas):
    br = webdriver.Firefox()  # you can use the driver for any browser, but install geckodriver for that browser
    br.get("Enter your link")
    user = br.find_element_by_css_selector("#Enter the id of the username box, found using inspect")
    user.clear()
    user.send_keys(usr)  # fill the email box with the given username or email id
    pasd = br.find_element_by_css_selector("#Enter the id of the password box, found using inspect")
    pasd.clear()
    pasd.send_keys(pas)  # fill the password field with the given password
    btn = br.find_element_by_css_selector("#Enter the id of the login button, found using inspect")
    btn.click()  # automatically click the button


# main driver code
bot(" enter your id ", " enter your password ")

In user, we have to enter the CSS selector of the username field, found using inspect on the website on which you want to log in. Do the same for the password field. And the rest of the instructions are in the comments in the program itself.
Comment: I entered the required link and id name but it gave an error

selenium.common.exceptions.InvalidSelectorException: Message: Given css selector expression "Sign in" is invalid: InvalidSelectorError: Document.querySelector: 'Sign in' is not a valid selector: "Sign in"

Please HELP me out.
https://www.codespeedy.com/auto-login-bot-using-python/