text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Fri 8 Jun 2012
Python 101: How to submit a web form
Posted by Mike under Python, Web
[7] Comments
Today we’ll spend some time looking at three different ways to make Python submit a web form. In this case, we will be doing a web search with duckduckgo.com searching on the term “python” and saving the result as an HTML file. We will use Python’s included urllib modules and two 3rd party packages: requests and mechanize. We have three small scripts to cover, so let’s get cracking!
Submitting a web form with urllib
We will start with urllib and urllib2 since they are included in Python’s standard library. We’ll also import the webbrowser to open the search results for viewing. Here’s the code:
import urllib import urllib2 import webbrowser data = urllib.urlencode({'q': 'Python'}) url = '' full_url = url + '?' + data response = urllib2.urlopen(full_url) with open("results.html", "w") as f: f.write(response.read()) webbrowser.open("results.html")
The first thing you have to do when you want to submit a web form is figure out what the form is called and what the url is that you will be posting to. If you go to duckduckgo’s website and view the source, you’ll notice that its action is pointing to a relative link, “/html”. So our url is “”. The input field is named “q”, so to pass duckduckgo a search term, we have to concatenate the url to the “q” field.
The mechanize module has lots of fun features for browsing the internet with Python. Sadly it doesn’t support javascript. Anyway, let’s get on with the show!
import mechanize url = "" br = mechanize.Browser() br.set_handle_robots(False) # ignore robots br.open(url) br.select_form(name="x") br["q"] = "python" res = br.submit() content = res.read() with open("mechanize_results.html", "w") as f: f.write(content)
As you can see, mechanize is a little more verbose than the other two methods were. We also need to tell it to ignore the robots.txt directive or it will fail. Of course, if you want to be a good netizen, then you shouldn’t ignore it. Anyway, to start off, you need a Browser object. Then you open the url, select the form (in this case, “x”) and set up a dictionary with the search parameters as before. Note that in each method, the dict setup is a little different. Next you submit the query and read the result. Finally you save the result to disk and you’re done!
Wrapping Up
Of the three, requests was probably the simplest with urllib being a close follow-up. Mechanize is made for doing a lot more then the other two though. It’s made for screen scraping and website testing, so it’s no surprise it’s a little more verbose. You can also do form submission with selenium, but you can read about that in this blog’s archives. I hope you found this article interesting and perhaps inspiring. See you next time!
Further Reading
Source Code
Pingback: Links « Frackmente
Pingback: Scraping with Mechanize and BeautifulSoup | A geek with a hat | http://www.blog.pythonlibrary.org/2012/06/08/python-101-how-to-submit-a-web-form/ | CC-MAIN-2014-41 | refinedweb | 528 | 76.42 |
For.
You know one of the reasons I refuse to live near most cities in the US? Traffic. I’m not talking about the “this road is crowded” type of traffic that lets you zip along at a good speed, or even about the “I can’t get out of my lane because this is so packed” type of traffic that moves slowly along. No, I’ve come to accept and deal with that. It’s the “we might as well get out of the car and enjoy the sunshine because we aren’t moving” type of traffic that I can’t stand. Amazingly enough, it seems to happen around most major cities in the US during rush hour, and sometimes this “rush hour” stretches from 7AM until 8PM. It’s a bad sign when car manufacturers have advertisements telling you how much more comfortable you’ll be in their car when you’re stuck in traffic. Under those conditions it can easily take over an hour just to cover a distance of 3 or 4 miles.
Other than pointing out that you’re much better off walking, cycling, or using public transportation, what’s the point of all this and how does it relate to the physical structure of a program? There are some things that just don’t scale well. They appear to work perfectly fine for a small number of units, but as soon as a certain threshold is reached, things seem to bog down and eventually collapse under their own weight. Just adding more lanes doesn’t appear to solve the problem either, judging by the number of clogged-up 5-lane highways everywhere. Sometimes, you need to take a step back and deal with the problem in a different way. Either that, or buy a nice music system and enjoy your time in traffic.
We might not have much of a say over how traffic should be dealt with where we live, but we certainly have a lot of choices when it comes down to structuring our C++ source code. For small projects, we can blissfully code away without paying any attention to physical structure and we won’t be any worse off for it. However, as a project grows, it reaches a critical point, and compilation times start getting slower and slower, to the point where tiny changes could make you wish you were stuck in traffic instead of staring powerlessly at your monitor. Adding a faster CPU, more memory, or a better hard drive can help make things faster, but is usually not a good long-term solution.
Build Types
We are usually concerned with the time for two types of builds:
- Full builds. In this case we care about the time it takes to build the whole project from scratch, starting from a totally clean build. This situation comes about when we just want to use the result of the build of a project we’re not actively modifying. For example, an automated build machine will most likely be doing full builds of the game, so the turnaround time before a build is ready will depend on the full build time. Another example might be if you need to link your code with a library for which you have the source code.
- Minimal builds. Once we have done a full build on a project, we then make a very small change to its source code and build it again. That’s the time for a minimal build. This is what you really care about when you’re actively working on a project, making modifications and compiling constantly. Ideally, building the project after a small change should require very little time This allows for very fast turnaround time for debugging, or even to get feedback from the compiler on silly syntax errors we just typed.
Improving the physical structure of a program often reduces the time of both types of builds. Unfortunately things don’t always work out so neatly and there are times where some changes will make one type of build faster and the other slower. Understanding what affects each compilation time allows us to optimize our compilation strategy and strike a balance that fits our needs.
Clearly, the time for both types of builds depends on the number of files and the complexity of those files. Both types of builds are also affected by the number of files each file depends on (the number of #include statements in each file). However, as we’ll see in a moment, in the case of a full build there is the chance of caching the includes of some files and reusing them for other files.
There is something very different about minimal builds. Their build time is usually dominated by the number of files that depend on the modified files. In the worst situation, every file will depend on the file that changed and a full build will be triggered. In the ideal case, only the file with the changes itself will be compiled and no other files will have been affected. In one case the build could take less than a second, and in the other it could easily take multiple hours.
The rest of this article will look at different techniques to reduce build times and how they affect each of those two build types.
Counting Includes
It is easy to underestimate how quickly include statements can compound. If file A includes file B, and file B includes file C and D, every time someone includes file A they’re including three other files for the ride. Add a few more levels of inclusion with header files including many other header files, and you have a recipe for disaster (or for really long build times at least).
As an experiment, I added one more feature to the script I wrote last week. The script analyzes a set of source code files and determines how many times each file is included by other files in a recursive way. So, in our trivial example above, file C will be reported as being included twice (once by B directly, and once by A indirectly). I then decided to test it on the source code for a high-level game library (I’m not going to be any more specific since it wasn’t particularly good code and it had a pretty hideous physical structure). I wouldn’t be surprised if it’s not very different from the level of complexity of a lot of game code out there. As a point of reference, the library was composed of 300 cpp files and 312 header files.
Before I ran the script, I tried to guess how many times the most included file in the whole library was included by other files. My guess was around 600 times, just because I knew that the physical structure of that code wasn’t pretty. I figured maybe almost half the files included that one header file, and a few others included it indirectly. Boy was I wrong! Here are the shocking results:
That means that during the course of a full build for those 300 cpp files, one header file could be included over 10,000 times! No wonder this particular library seemed to take a long time to compile. Notice that the other top files quickly drop to being included around 800 times each (which is still even higher than my initial estimate).
As a comparison, I tried running that same script on another, much smaller library, but also one with a much better physical structure and many fewer dependencies between files. This second library was only made up of 33 cpp files and 39 header files. The most included file was only included a total of 23 times (with the second one being included less than 10 times). So, having the number of classes grow by a factor of 10, caused the number of includes to grow by a factor of 1000. Clearly not a very scalable situation.
Things aren’t quite that bad though. Header files typically have a set of include guards in them, to prevent the compiler from adding duplicate symbols if it encounters the same header file multiple times during the compilation of one cpp file. This is what include guards look like:
#ifndef SOMEFILE_H_
#define SOMEFILE_H_
// Normal code goes here, even other #include statements if necessary
#endif
With every header file having include guards around it, I turned on the /showincludes switch in Visual C++ and performed a full build. The total number of includes during the course of building the 300 classes in the library was an astounding 15,264. Better than the worst-case-scenario we calculated earlier, but still tremendously high.
Apparently some C++ compilers try to optimize this situation by automatically caching header files and avoiding hitting the disk to reload them over and over. Unfortunately, there is very little hard data about that, and you’re always at the mercy of your current compiler writer. Was that true for Visual Studio .NET 2003?
Redundant Guards
To test if I could speed up the compilation any, I added redundant include guards to the whole library. Redundant include guards are like the regular include guards, but they are placed around the actual #include statement. I first saw them mentioned in the book Large Scale C++ Software Design by John Lakos (written in 1996), but popular wisdom claims that they are unnecessary with modern compilers. Well, time to test that.
This is what redundant include guards look like:
#ifndef SOMEFILE_H_
#include “SomeFile.h”
#endif
#ifndef SOMEOTHERFILE_H_
#include “SomeOtherFile.h”
#endif
//…
I wrote a quick script to add redundant guards to all the source code and did a full build again. The number of includes reported by the compiler went down to 10,568 (from over 15,000). That means that there were about 5,000 redundant includes in a full build. However, the overall build time didn’t change at all.
Result: Zero. Apparently Visual Studio .NET 2003 (and probably most of the major compilers) does a pretty good job caching those includes by itself.
Recommendation: Stay away from redundant include guards. I never liked having the including files know about the internal define, and if the guard ever changes it can easily break things. Besides, the code looks a lot messier and unreadable. It might have been worth it if we could define #include to expand to a redundant guard automatically, but I don’t think that’s possible with the standard C preprocessor.
#pragma once
Just in case, I decided to test another strategy and see if I obtained similar results. Instead of using redundant include guards, I added the #pragma once preprocessor directive to all header files. Visual C++ will treat files with that directive differently and it’ll make sure that those files are only included once per compilation unit. In other words, it accomplishes the same thing as the external guards, just in a non-portable way. Here’s another really simple script to add #pragma once to all the header files.
Result: No difference. Just as with redundant include guards, it seems that the compiler was smart enough already to optimize that case.
Recommendation: Don’t bother with it. It’s a non-standard construct that doesn’t get any apparent benefit. If you still feel compelled to use it, at least wrap it up with #ifdef checks for the correct version of Visual Studio.
Precompiled Headers
During a full build, every cpp file is treated as a separate compilation unit. For each of those files, all the necessary includes are pulled in, parsed, and compiled. If you look at all the includes during a full build, you’re bound to find a lot of common headers that get included over and over for every compilation unit. Those are usually headers for other libraries that the code relies on, such as STL, boost, or even platform-specific headers like windows.h or DirectX headers. They are usually also particularly expensive headers to include because they tend to include many other header files in turn.
From our findings in the previous two sections, it is clear that some compilers cache the headers encountered for each compilation unit. However, they don’t do anything about duplicated headers found across multiple cpp files, and that’s where precompiled headers come in.
When using precompiled headers, we can flag a set of headers as being part of the precompiled set. The compiler will then process them all at once and save those results. Every compilation unit will then automatically include all the headers that were part of the precompiled set at the very beginning, but at a much lower cost than parsing them from scratch every time.
The catch is that if any of the contents of the precompiled headers changes, a full rebuild is necessary to compile the program again. This means that we should only add headers that are included very often throughout our project but that don’t change frequently. Perfect candidates are the ones we mentioned earlier: STL headers, boost, and any other big external APIs. I always prefer not to include any headers from the project itself, although if you have a header that is included in every file, you might as well include it in the precompiled set (or, even better, change it so it’s not included everywhere and improve the physical structure).
The gains from using precompiled headers are quite dramatic. The game library we mentioned in an earlier section took over 14 minutes to compile without pre-compiled headers, but only 2:30 when using them. Those are huge savings! Minimal rebuilds are also improved because we avoid parsing some of the common headers for one file, but the results aren’t as dramatic as for full builds.
Precompiled headers are not without their downside though. The first problem is that precompiled headers often end up forcing the inclusions of more headers than it is absolutely necessary to compile each individual file. Not every file needs <vector> or <windows.h> included, but since a fair amount of them do and those are considered expensive includes, they’ll invariably end up in the precompiled header section. That means that any compilation unit taking advantage of precompiled headers will be forced to include those as well. Logically, the program is the same, but we have worsened the physical structure of the source code. In effect, we are trading extra physical dependencies between files for a faster compile time.
The second problem is that precompiled headers are not something you can rely on from compiler to compiler and platform to platform. The only compilers I’m aware of that implement them are Microsoft’s Visual C++ and Metrowerks’ CodeWarrior (although I just did a Google search and apparently gcc also supports precompiled headers–that’s great news!). For the rest of us using different compilers, we’re out of luck as far as this technique goes. Considering how important multi-platform development is becoming in the games industry (and elsewhere), this is a big blow against them.
Finally, by far the worst aspect of precompiled headers, is what happens when you combine the first two problems: Take a set of source code developed on a compiler where precompiled headers were available, and try to build it on a different platform. The code will compile since everything is standard C++, but it’ll compile at a glacial pace. That’s because every file is including a massive precompiled header file, and it is being parsed over and over for every compilation unit without taking advantage of any optimizations in the part of the compiler. If the code had been developed without precompiled headers in the first place, each file would only include those headers that it absolutely needs to compile, which would result in much faster compile times.
So earlier, when I said that the game library without precompiled headers took over 14 minutes to build, that’s because it was written with precompiled headers in mind. Otherwise, I estimate it would only take about 5-6 minutes (still much longer than the 2:30 it took with precompiled headers).
Personally, I have never yet worked on a set of code that was developed to be compiled on multiple platforms and where some of them did not support precompiled headers. I suppose the best approach is to use #ifdefs to only include the precompiled headers in one platform and the minimal set of includes for the rest, but it seems like an extremely error-prone approach where programmers are going to be breaking the other platform’s builds all the time. I’d be interested to know how teams working in such an environment deal with it.
Result: Huge gains both for full builds and minimal builds if your compiler supports them. Much worse physical structure.
Recommendation: Definitely use them if you’re only compiling in a platform that supports them. If you need to support multiple platforms, the gain is still too big to pass up. It probably is worth if you manage to separate the includes for precompiled headers with lots of #ifdefs and try to keep the physical structure sane for platforms that don’t support them.
Single Compilation Unit
This is an interesting trick that you won’t find in most books. I first read about it in the sweng-gamedev mailing list about a couple of years ago. Be warned, this is hackish and ugly, but people claimed really good results. I just had to find out for myself how it stacked up against the other techniques to reduce build times.
This technique involves having a single cpp file (compilation unit) that includes all the other cpp files in the project (yes, that’s right, cpp files, not header files). To compile the project we just compile that one cpp file and nothing else. The contents of this file are simply #include statements including all the cpp files we’re interested in. Something along these lines:
#include “MyFile1.cpp”
#include “MyFile2.cpp”
#include “MyFile3.cpp”
//….
As you can imagine by now, I wrote a script to create that file from a directory containing the source code for a project. I just created a file including all the cpp files in that directory, although a better way of doing it would be to parse the make (or project) file and only include those files that are actually part of the project. That way, as I discovered, you avoid including outdated files or files that are in that directory but are not part of the project.
I created this file (everything.cpp), compiled it and… get ready: The build time went down from 2:32 minutes to 43 seconds! That’s a 72% decrease in build time!! Not only that, but the .lib file it created from that library went from 42MB down to 15MB, so it should help with link times down the line. People in the mailing list reported even better results with gcc than with Visual Studio.
What is the reason for such reduction in build times? I can only speculate. I suspect part of it is due to avoiding the overhead of starting and stopping the compiler for every compilation unit. However, the biggest win probably comes from the reduced number of included files. Because everything is one compilation unit, we only include every file once. The second time any other file attempts to include a particular header file, the compiler will have already cached it (and it’ll have include guards so there’s no need to parse anything). To test this theory, I again turned on the /showincludes switch. Indeed, the number of includes during a full build went down from 10,568 to 3,197. That’s a 70% reduction of included files, which is, probably not coincidentally, the same reduction in build time.
One very interesting observation from this experiment is that build times are probably more dependent on the number of actual includes performed by the compiler than I thought at first. All the more reason to keep a really watchful eye on the physical structure of the program. The third part of this article will cover what architectural choices we can make to improve the physical structure and keep the overall number of includes down.
Unfortunately this method also has its share of problems. One of the biggest problems is that there is no such a thing as a minimal build anymore. Any modification to any file will cause a full rebuild. Of course, the full build takes a only a fraction of the time it took before, so this might not be much of an issue.
As with precompiled headers, we’re adding a lot of physical dependencies between files. In this case, files will have a physical dependency with any files that were included before it in the everything.cpp file.
However, the most objectionable of all problems is that we can now run into naming conflicts. Before we assumed each cpp file was a separate compilation unit. Now they’ve all been forcefully added to the same one. Any static variables or functions, or anything on an anonymous namespace will be available to every cpp file that comes after it on the large everything.cpp file. This means that there’s potential for having conflicting symbols, which is one of the things that anonymous namespaces were supposed to solve in the first place. If you have decided to use this technique, you will want to keep everything as part of a separate namespace or part of the class itself and avoid global-scope symbols completely.
Result: Huge improvement in full-build times, but minimal-build times become much worse. Potential for clashing of static and anonymous namespace symbols.
Recommendation: The gains of this technique are simply huge so it would be a shame to ignore it. It is probably no good for regular builds, but you might want to have it as an option when you just care about doing full builds (automated build machine or building someone else’s code). If so, make sure to wrap symbols in namespaces or classes.
analyze_includes.pl
add_external_guards.pl
add_pragma_once.pl
create_everything.pl
The next (and final, I promise) part of this article will look at architectural choices that can greatly influence build times as well as looking briefly at link times and see what we can do about them. | http://gamesfromwithin.com/physical-structure-and-c-part-2-build-times | CC-MAIN-2014-10 | refinedweb | 3,771 | 68.4 |
Installing Caché Classes and Populating the Database
Complete the following steps to use Studio to install Contact and PhoneNumber into a Caché namespace:
Open Studio. Click File —> Change Namespace to connect to the namespace in which you will be working. The USER namespace is a good choice.
Click Tools —> Import Local.
Browse to the directory containing Contacts.xml. Click on Contacts.xml. Click Open and then OK to load and compile the classes.
After the classes are installed and compiled, use the Terminal to populate the database with sample instances of the classes. All classes have been configured for use with the population utility. Here are the steps to follow:
Open the Terminal in the namespace in which you installed the Caché classes. If terminal opens in a different namespace, use the ZN command to switch to the correct namespace.
Execute the Fill method of Provider.Utilities. This method clears all existing Provider.Contact and Provider.PhoneNumber instances from the namespace and then adds 25 Provider.Contact and 100 Provider.PhoneNumber instances.
USER> do ##class(Provider.Utilities).Fill()
To learn how to create a new Caché namespace, read the discussion of “Configuring Namespaces” in the Configuring Caché section of the Caché System Administration Guide.
To learn more about using Studio, read Using Studio.
To learn more about using the Terminal, read Using the Terminal. | https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=TCMP_InstallAndPopulate | CC-MAIN-2021-39 | refinedweb | 224 | 60.61 |
In this series of five articles, I want to help you get started with using SpecFlow in your test automation project. In this chapter, you will get started with using SpecFlow in your test automation project by explaining how to set up and configure SpecFlow, how to write your first SpecFlow specification and how to make it executable.
Tutorial Chapters
- BDD, SpecFlow and The SpecFlow Ecosystem (Chapter 1)
- You’re here →, you’ve seen what SpecFlow is (and what it is not) and how it supports Behaviour Driven Development. In this article, you’ll learn how to get started with SpecFlow in your test automation project.
Setting up your development environment
Before creating a new C# project and add SpecFlow feature files, it’s a good idea to install the SpecFlow Extension for Visual Studio. This Extension enables you to perform common SpecFlow actions from within your IDE, and it also provides syntax highlighting for more efficient writing of features files and scenarios.
Installing the extension from within Visual Studio can be done through the Extensions > Manage Extensions menu option (Visual Studio 2019) or through Tools > Extensions and Updates (earlier Visual Studio versions).
Switch to the Online section, do a search for ‘SpecFlow’ and install the ‘SpecFlow for Visual Studio’ extension.
If you’re using SpecFlow 3 (as we are going to do in this article), you will also need to disable the SpecFlowSingleFileGenerator custom tool. This can be done through Tools > Options > SpecFlow. Locate the Enable SpecFlowSingleFileGenerator CustomTool option and set it to False.
Creating a new project and adding required NuGet packages
To start writing SpecFlow features and add the underlying test automation, we first need to create a new project. In this example, I start with a project of the type ‘Class library (.NET Framework)’, because I am going to write code against the .NET Framework and I want to start with an empty project. The Class1.cs file that is auto-generated when you create this type of project can be removed as well.
After you created the project, add the following packages using the NuGet package manager:
- SpecFlow.NUnit – This installs both SpecFlow itself and the NUnit testing framework, which is the unit testing framework we’ll use in this example. Similar packages are available for other unit testing frameworks.
- NUnit3TestAdapter – This package allows us to run NUnit-based tests from within Visual Studio.
- SpecFlow.Tools.MsBuild.Generation – This package generates code that SpecFlow uses to run feature files (instead of the legacy SpecFlowSingleFileGenerator custom tool we disabled earlier).
Now that you have set up your IDE, created a new project and added the required packages, you’re ready to go and create your first SpecFlow feature.
Creating and running a first SpecFlow feature
To do so, though, you first need an application to write tests for. In the remainder of this article and all follow-up articles, we’ll use Zippopotam.us, a REST API for looking up location data based on country code and zip code combinations (and vice versa, i.e., looking up zip codes for locations) for this.
Let’s take a quick look at how this API works. You can look up the location corresponding to the US zip code 90210 by sending a GET request to. The API will respond with a JSON document (as well as an HTTP status code and some header data) that tells us that this combination of country code and zip code corresponds to Beverly Hills in California:
{ "post code": "90210", "country": "United States", "country abbreviation": "US", "places": [ { "place name": "Beverly Hills", "longitude": "-118.4065", "state": "California", "state abbreviation": "CA", "latitude": "34.0901" } ] }
Now, back to writing our first SpecFlow feature and using it to create an automated acceptance test. A SpecFlow feature is a file with a .feature file extension, describing the intended behaviour of a specific component or feature of the application you are going to write tests for. An example of a feature for our API would be the ability to return the correct location data based on a country code and zip code, or on a more granular level, supporting the ability to return more than one location for a specific country and zip code (this is very useful for use in the UK and Germany, among other countries).
Feature files contain one or more scenarios that describe the specifics of the behaviour for that feature, often expressed in very concrete examples. These examples are often obtained through following a process known as Specification by Example. The process of creating good feature files and scenarios, and techniques on how to improve your skills in this area, are outside the scope of this article.
Here’s an example feature file and three scenarios that describe one of the core features of our API through some examples:
Feature: Returning location data based on country and zip code As a consumer of the Zippopotam.us API I want to receive location data matching the country code and zip code I supply So I can use this data to auto-complete forms on my web site Scenario: An existing country and zip code yields the correct place name Given the country code us and zip code 90210 When I request the locations corresponding to these codes Then the response contains the place name Beverly Hills Scenario: An existing country and zip code yields the right number of results Given the country code us and zip code 90210 When I request the locations corresponding to these codes Then the response contains exactly 1 location Scenario: An existing country and zip code yields the right HTTP status code Given the country code us and zip code 90210 When I request the locations corresponding to these codes Then the response has status code 200
We can add a new feature file to our project by right-clicking the project name and selecting Add > New Item. Since we have installed the SpecFlow extension before, we can now select SpecFlow > SpecFlow Feature File to add a new feature file to our project:
After the feature file has been added to the project, we can edit the specifications in there to reflect the expected behaviour we defined earlier for our zip code API:
The fact that the steps in our scenarios are shown in purple means that there are no step definition methods associated with the steps in the scenarios yet.
You can run the scenarios in this feature and see what happens by:
- Opening the Test Explorer Window using the menu option Test > Windows > Test Explorer.
- Building the project, for example using Ctrl + Shift + B to build all projects in the current solution – this should result in several tests becoming visible in the Test Explorer.
- Right-clicking the tests you would like to run and choosing Run Selected Tests.
Since there is no code to execute, SpecFlow will display an error message:
Fortunately, SpecFlow offers an easy way to generate these step definitions methods for you. Right-click anywhere in the feature file editor window and select Generate Step Definitions, another useful feature that comes with the SpecFlow Visual Studio extension.
This will display a window where you can select the steps for which to generate step definition methods, as well as the step definition style. You will read more about the different styles in the next article, for now we are going to stick with the default Regular expressions in attributes option. Click Generate, select the destination for the step definition file and click Save.
A C# .cs file with the generated step definition methods will now be added to your project. Here’s a snippet from that file:
using System; using TechTalk.SpecFlow; namespace testproject_specflow.StepDefinitions { [Binding] public class ReturningLocationDataBasedOnCountryAndZipCodeSteps { [Given(@"the country code us and zip code (.*)")] public void GivenTheCountryCodeUsAndZipCode(int p0) { ScenarioContext.Current.Pending(); } [When(@"I request the locations corresponding to these codes")] public void WhenIRequestTheLocationsCorrespondingToTheseCodes() { ScenarioContext.Current.Pending(); } [Then(@"the response contains the place name Beverly Hills")] public void ThenTheResponseContainsThePlaceNameBeverlyHills() { ScenarioContext.Current.Pending(); }
When you check the feature file editor window again, you’ll see that all step definitions have changed from purple to white (or black, depending on your IDE color scheme):
This is definitely good news: SpecFlow now knows what code is associated with the steps in the scenarios, and therefore knows what code to run when we run the feature. Let’s do that again from the Test Explorer:
Now, SpecFlow is telling us that it did find step definitions to execute, but that one or more of the methods it ran are not yet implemented. The reason for this is that when you let SpecFlow generate step definitions, it will automatically add the line:
ScenarioContext.Current.Pending();
as the body of the method. Look at it as a friendly reminder that your work isn’t done yet. When this line is executed, the above error will be thrown.
If we remove this line from all of our step definitions methods and run the feature again, we’ll see this:
Success! Of course, you will still need to add the code that actually invokes the API and performs verifications on the response, but since that’s outside of the responsibility of SpecFlow I won’t go into that in this article.
In the next article, we’ll be taking a closer look at the step definition file, different step definitions styles and how to make use of parameters and regular expressions to make our steps more expressive and powerful.
The example project used in this article can be found on GitHub:. This project also contains test code that actually invokes the Zippopotam.us API and performs the checks specified in the feature file we have seen in this article.
Hi Bas,
I have just started with Visual Studio Community 2020 and followed all the steps similar to what you’ve described here as given in. I am running into an error that says:
Severity Code Description Project File Line Suppression State
Error MSB3030 Could not copy the file “C:\Users\Venkataraman_Moncomp\.nuget\packages\specrun.runner\3.2.31\tools\TechTalk.SpecRun.Framework.dll” because it was not found. Example C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\Microsoft.Common.CurrentVersion.targets 4643
This file exists inside of the tools folder but inside each folder – net45, netcoreapp2.0, netcoreapp2.1, netcoreapp2.2, netcoreapp3.0, netcoreapp3.1.
I was trying to copy the dlls over to the tools folder. Then it compiles without error but skips all the behavior tests – feature files and step definitions are skipped.
Can you please help?
Thanks in advance.
Venkat | https://blog.testproject.io/2019/10/15/getting-started-with-specflow/ | CC-MAIN-2022-05 | refinedweb | 1,776 | 57.3 |
Advanced Namespace Tools blog 26 December 2016
ANTS 2.6 Release
The 9front project has released a new update of the installation .iso image, making this a good moment for me to sync up the ANTS code repositories, documentation, and downloads to the latest revision. I have decided that making "release tarballs" with precompiled kernels is probably pointless, although I will probably upload and link a compiled kernel at some point. Compiling from source seems like what Plan 9 users prefer to do. The idea of a mostly non-technical user who wants to use "Plan 9 Advanced Namespace Tools" is probably a complete phantasm. Most people who are interested in Plan 9 already have fairly substantial software development and system administration skills.
New Features Added since 2.1
I haven't been doing regular point releases, so there haven't been any specific version releases between 2.1 in fall 2015 and 2.6 now at the very end of 2016. I just bumped up the version by .5 to indicate that a fair amount of work has been done, but not so much change as would be implied by calling it 3.0. Here is a summary of notable improvements:
- Support for rcpu/rimport and dp9ik auth in the ANTS boot/service namespace
- Support for building amd64 kernel
- Support for TLS boot option in plan9rc boot script
- Ramfossil script for instantiating a temporary fossil from a venti rootscore
- Patched versions of utilities and kernel updated to latest 9front code base
- Bugfixes to hubfs with multiple client readers, grio color selection, and more
Whither Bell Labs support?
This release still includes the Bell Labs version of ANTS in the main directory, with the 9front specific changes in the "frontmods" subdirectory. 9front is my primary Plan 9 environment, but I do keep a Bell Labs qemu VM active. Last I checked (in 2015), the Bell Labs version of ANTS compiles and installs correctly in 9legacy, also.
Time marches on, life is short, and in the absence of any kind of significant active user base for ANTS making feature and support requests, I am intending to drop active support and testing for the Bell Labs version. Since the labs' version of Plan 9 is no longer receiving updates, this release should continue to be useful for anyone who does want to use the original. If I receive any feedback that people are interested in using ANTS with 9legacy, I will probably create an independent repository for a 9legacy, based on the current code for labs.
TL; DR
The and repos have been re-synchronized at revision 427. This represents ANTS release 2.6, which builds vs 9front revision 5641. This is probably the last ANTS release to support Plan 9 from Bell Labs. | http://doc.9gridchan.org/blog/161226.release2.6 | CC-MAIN-2017-22 | refinedweb | 462 | 60.85 |
Exponential sums are a specialized area of math that studies series with terms that are complex exponentials. Estimating such sums is delicate work. General estimation techniques are ham-fisted compared to what is possible with techniques specialized for these particular sums. Exponential sums are closely related to Fourier analysis and number theory.
Exponential sums also make pretty pictures. If you make a scatter plot of the sequence of partial sums you can get surprising shapes. This is related to the trickiness of estimating such sums: the partial sums don’t simply monotonically converge to a limit.
The exponential sum page at UNSW suggests playing around with polynomials with dates in the denominator. If we take that suggestion with today’s date, we get the curve below:
These are the partial sums of exp(2πi f(n)) where f(n) = n/10 + n²/7 + n³/17.
[Update: You can get an image each day for the current day’s date here.]
Here’s the code that produced the image.
import matplotlib.pyplot as plt from numpy import array, pi, exp, log N = 12000 def f(n): return n/10 + n**2/7 + n**3/17 z = array( [exp( 2*pi*1j*f(n) ) for n in range(3, N+3)] ) z = z.cumsum() plt.plot(z.real, z.imag, color='#333399') plt.axes().set_aspect(1) plt.show()
If we use logarithms, we get interesting spirals. Here f(n) = log(n)4.1.
And we can mix polynomials with logarithms. Here f(n) = log(n) + n²/100.
In this last image, I reduced the number of points from 12,000 to 1200. With a large number of points the spiral nature dominates and you don’t see the swirls along the spiral as clearly.
14 thoughts on “Exponential sums make pretty pictures”
I Made An exp Sum with the birthday of my first grandchild yesterday. (8 oct 2017). I put the Pic on a Card with the text : a suiting crown for Céline on her birthday.
In that first example, the shapes remind me of Brusqueness figures. We a bit of work, I bet you could actually fit to a given silhouette.
^^^ oops. Serious autocorrect fail!
… Rubinesque figures. With a bit …
John,
In your sample code, what is 1j ?
The summation at UNSW is e^(2*pi*i*f(n))
Thanks,
Chuck
reducing the expression e^(2 * pi * i * f(n), I note that e^(pi * i) = -1
so I reduce the expression to (-1)^(2 * f(n))
2*f(n) = ( (n/mm) + (n^2/dd) + (n^3/yy)) yields a fraction
thus, I have eliminated the i
correct?
regards,
Chuck
1j is the imaginary unit in Python, i.e. “i” in math notation.
thanks John!
-Chuck
@Chuck
The rule (e^x)^y=e^(xy) holds if both x and y are real, but doesn’t always hold for complex values.
Here’s the code in Mathematica (it runs a bit slow)
m = 2000;
day = 21;
month = 11;
year = 17;
f[n_] = n/day + n^2/month + n^3/year;
g[x_] = Exp[2*Pi*I*f[x]];
z = Array[g, m + 1, {3, m + 3}];
z = Accumulate[z];
p = ListPlot[{Re[#], Im[#]} & /@ z, PlotRange -> Automatic,
ImagePadding -> 40, AspectRatio -> 1,
FrameLabel -> {{Im, None}, {Re, “complex plane”}},
PlotStyle -> Directive[Red, PointSize[.02]], Joined -> True];
Show[p]
what version of python and matplotlib was the python code run under?
regards,
-chuck rothauser
ok, I just answered my own question – the python code works fine under version 3.6 of python – I have a fedora workstation that has versions 2.7 and 3.6 and yup 2.6 was the default LOL
-chuck rothauser
I took the code John posted and added date input function with error checking :-)
-Chuck
import matplotlib.pyplot as plt, time
from numpy import array, pi, exp, log, e
valid = False
while not valid:
date_str = input(“Enter Date(mm/dd/yy):”)
try:
time.strptime(date_str, “%m/%d/%y”)
valid = True
except ValueError:
print(“Invalid Format For Date”)
valid = False
print(date_str)
m = int(date_str[0:2])
d = int(date_str[3:5])
y = int(date_str[6:8])
N = 12000
def f(n):
return n/m + n**2/d + n**3/y
z = array( [exp( 2*pi*1j*f(n) ) for n in range(3, N+3)] )
z = z.cumsum()
plt.plot(z.real, z.imag, color=’#333399′)
plt.axes().set_aspect(1)
plt.show()
looks like 2018 will produce less interesting patterns compared to 2017.. 2017 is prime and 2018 is not!!
correct me if i am wrong
2018 will produce simpler images since lcm(m, d, 18) is typically smaller than lcm(m, d, 17). The images are particularly simple now that m = 1, but some more complex images are coming. | https://www.johndcook.com/blog/2017/10/07/exponential-sums-make-pretty-pictures/ | CC-MAIN-2021-10 | refinedweb | 789 | 73.27 |
-------- SIMULATOR FOR FOR BACnet/WS 2 (Addendum 135-2012am-PPR1) ------------- This is a simulator for the BACnet Web Services (BACnet/WS) protocol proposed in First Public Review of Addendum am to ANSI/ASHRAE Standard 135, Spring 2014. (Not to be confused with the earlier "First *Advisory* Public Review") This is mostly proof-of-concept code, but it tries to be hard-to-crash so it can be used for unattended testing. The emphasis has been on simplicity, rapid development, and ease of debugging, so it is definitely not optimized for a production deployment. It is mostly single threaded and was written with an emphasis on portability to other languages, so it does not leverage many of the fancy features of modern Java (other than generics to avoid a lot of casting). The primary purpose for writing it was to spur development of prototypes to test the protocol. However, it has a BSD license, so you can do whatever you want with it, even for commercial purposes. Initially, this is just an implementation of the server side of the protocol. Future enhancements may include the ability to be a data-driven client simulator, and thus morph into a simple form of an automated testing tool. The speed of development of this has left it with few comments; but hopefully well-named classes, methods, and variables will help make things clear. Even so, an IDE with a "Find Usages" feature will be very helpful! --------------------------- MAKING AND RUNNING -------------------------------- Tester.java contains an example main() that starts and stops the server. Project files for IntelliJ Idea 13 Community Edition (free) are included, but you can also just use the command lines: javac -sourcepath src -d out src/org/ampii/xd/tests/Tester.java java -cp out org.ampii.xd.tests.Tester The main configuration constants are in Application.java, e.g., public static String configFile = "resources/config/config.xml"; public static int serverPort = 8080; public static int serverSSLPort = 4443; public static Locale locale = Locale.US; public static Level generalLogLevel = Level.INFO; public static Level httpLogLevel = Level.FINE; public static boolean autoactivateTLS = true; For testing, Firefox has a add-on called "RESTClient" that is useful for testing. Tip: Keeping multiple tabs open with RESTClient allows for easier switching between messages (e.g. one for PUT/POST and one for GET) rather than trying to switch back and forth using the drop-downs on one tab. --------------------------------- DATA ---------------------------------------- This is entirely static data driven. It loads all of its test data from XML files at startup and does not save anything. It also does not attempt to connect to a real communications back end for "live" data. A good follow-on project would be to hook the /.bacnet tree data to a real BACnet stack using preread() and postwrite() handlers. All the data in the server is loaded via XML files specified by the Application.configFile constant (and changed by commandline arg to Tester.java). Only the root and the "/.definitions" node are created automatically, everything else (including TLS certificate/key) comes from the xml files. If you want to test factory default conditions, set the user/pass to ".", remove the TLS cert/key info, and set Application.autoactivateTLS to false. This will then wait for external writes of that data before attempting to start TLS. There is an example subscription ------------------------------ IMPLEMENTED ----------------------------------- Everything in the PPR1 addendum for servers is implemented except as noted below in TO DO / NOT IMPLEMENTED YET. (this is a loooong list of features, not included in this doc - go read the addendum) This code makes no attempt to allow multiple server instances in one JVM (as the use of static constants in the Application class indicates). The XML implementation is the minimum needed for interoperability. It does not support general namespaces or entities beyond the required ones (" etc). However, it does support a fixed secondary namespace to allow proprietary attributes that this application uses to tag certain data with behavior (for setting rules for preread(), postwrite(), etc.) Most of the code tries to be just a generic-server-of-data without any application-specific "behavior". All the things that are *not* generic are in the "application" package. This includes: AccessHooks: this is where the ties to a back end can be done. HTTPHooks: this traps any special URI paths that are not normal data. XMLHooks: this handles the extra namespace that configures the access hooks. Watcher: the thread that monitors the .subscriptions and .multi records. ------------------------ TO DO / NOT IMPLEMENTED YET -------------------------- It would be useful to make console logs available through the web interface for remote testers. historyPeriodic() is not implemented. errorPrefix and errorText query parameters are not implemented. Can't handle pure write-only data: e.g., /.auth/dev-key-pend is readable while in factory defaults mode and /auth/int/pass is readable, with "auth", when not in factory defaults mode. These seem harmless though, so perhaps we will change the addendum after PR (write-only data is a pain). In general: Client must present sufficient scope(s) to be able to read data before writing is considered (i.e. paths are read before writing). Is this a bug? Or is the idea that you can have permission to write data that you can't also read just dumb and the code is actually correct? Client.java uses HttpURLConnection. It would be helpful for language portability to remove dependency on this library functionality and implement on raw sockets like the server side (started but didn't finish code for this). Some limits/restrictions are not enforced: Metadata like 'maximum' and 'allowedChoices' are not checked. Only basetype is checked for PUT. And XxxxPattern basetypes don't parse their data; they just contain an unparsed string with no error checking: e.g. "2013-fred-14" is OK. Definitions are not checked: For Choice: only one child is allowed but it is not checked against the members of the 'choices' metadata. For Enumeration: any string is accepted, no check of 'namedValues'. For BitString: and bit names are accepted, no check of 'namedBits'. For BitString: and bit names are accepted, no check of 'namedBits'. Root certificates: /.auth/root-cert-pend is not used to validate the /.auth/dev-cert-pend. External authorization servers: Tokens from external authorizations servers are not processed; only the internal auth server is implemented. Group audiences: /.auth/group-uuids is not used for token verification. remote() function: Not implemented - need to figure out a method to determine if a URI refers to this server. Relative paths are easy, but how do I know if a URI hostname resolves to me or not? /.bacnet data: The data in the /.bacnet tree is "dead". It is not tied to any kind of communications back end. There is also no local "behavior". e.g., priority arrays don't function. 'priority' query parameter: the priority query parameter is parsed but nothing is done with it yet since the /.bacnet data has no "behavior". ".required" and ".optional": the 'select' query works but the special values ".required" and ".optional" don't do anything. Reading a range of an OctetString or String: the 'skip' and 'max-results' query parameters do not affect the results of a String or OctetString like Clause XX.16.3 says Filter paths with "*" segments: the 'select' query supports the syntax "*/*/foo" but filter doesn't yet (not actually required by the draft, but should be!) alt=media: the method to access an OctetString as a "media" blob (e.g. returning "Content-Type: application/pdf") is not supported. next: client-driven limiting with 'max-results' query works, but the server currently doesn't do any response limiting on its own, so there's no way to generate a 'next' link. .self: the "/.bacnet/xxx/.self" is not a magic alias for the server's device in scope xxx --------------- THINGS IMPLEMENTED THAT ARE NOT IN SPEC ----------------------- @basetype: useful for plain text clients that otherwise would not have a way to know the base type of the data | http://sourceforge.net/p/ampii/code/ci/master/tree/ | CC-MAIN-2015-48 | refinedweb | 1,332 | 56.86 |
);
}
}
I think you're on the right track since your code in the main method will do what you want it to do.
However your instructions are contradictory:
"SumOfN takes an int n as a parameter and returns the sum ofN"
"Your program should loop, ask for n, if N > 0, Calculate and Print sumofN"
Usually a method will either print something or return it. These instructions are unclear which they really want (or if they want both).
You could do both I suppose. Just stuff your main code in the method you have and that would work. Just don't forget to return the value after you print it.
As it stands you print out the answer, but the sumOfN method just returns the same number it is given. So it kind of halfway works.
The thing is I don't know how to write the code for the method to take the user input, calculate and see if it is 0 and then stop the program or calculate if the answer is 5, get the sum 5 + 4 + 3 + 2 + 1 + 0, could you maybe show me some code that would do this? thanks a bunch.
First of, there is a very quick and very clean way to calculate the SumOfN.
in your example you used 5, which results iun the SumOfN being 1+2+3+4+5=15. This is the hard way of doing things. the fast way (which only works on even numbers, but I'll give a fix for that later) would be:
n=4; // sumOfN: 1+2+3+4=10
sumOfN=(n/2)*(n+1);
This works because:
1+4 = 5 (n+1)
2+3 = 5 (n+1)
This combination can be made n/2 times.
now, to make this work with odd numbers as well is very straightforward:
n=5; // SumOfN: 1+2+3+4+5=15
sumOfN=(((n-1)/2)*n)+n;
this works because:
1+4 = 5 (n)
2+3 = 5 (n)
this combination can be made (n-1)/2 times (twice in this case).
You can check if an int is odd by doing:
if ((i%2) == 1)
{
}
the % gives you the remainder after dividing by 2, which in case of an odd number would be 1.
Now in you case, this calculation would take place in your sumOfN method, which would take theint as parameter and return the calculated value.
But still how do I put it into method SumOfN and then Calculate it in SumOfN and then Pull it out of SUmOfN to be printed? that's what I can't figure out, is how to call the method sumOfN, put all the calculations in sumofN to print it out?
private static int sumofn(int n)
{
int sum = 0;
// do the calculating here;
return sum;
}
System.out.println(sumofn(5));
I still keep getting errors that the main cannot read sumofn, and it's not reading the user input, can anyone find my flaws
import java.util.*;
public class sumOfN {
//Method
private static int sumofn(int n)
{
int count = 0;
int sum = 0;
for (count = n; count > 0; count--) {
sum += count;
if (n <= 0) {
System.out.println("Cannot equal 0");
return sum;
}
}
return sum;
}
public static void main(String[] args) {
Scanner stdin = new Scanner(System.in);
System.out.println("Please enter an integer: ");
sumofn = stdin.nextInt();
System.out.println("SumOfN" + sumofn(n) + " has an integer value of ");
}
I've never done an input this way before myself, so I can't help you there.
The problem why the main can't read the sumOfN method is because it is private and you have put it in a different class then your main method. if you make it public, that should solve the problem.
(This is my fault I see, cause I made it private in my example).
making a static method private makes no sence though, since static is always called from outside the class itself.
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?140642-Help-With-parameter-passing&p=416531 | CC-MAIN-2014-10 | refinedweb | 677 | 76.15 |
A tree is always a Bipartite Graph as we can always break into two disjoint sets with alternate levels. In other words we always color it with two colors such that alternate levels have same color. The task is to compute the maximum no. of edges that can be added to the tree so that it remains Bipartite Graph.
Examples:
Input : Tree edges as vertex pairs 1 2 1 3 Output : 0 Explanation : The only edge we can add is from node 2 to 3. But edge 2, 3 will result in odd cycle, hence violation of Bipartite Graph property. Input : Tree edges as vertex pairs 1 2 1 3 2 4 3 5 Output : 2 Explanation : On colouring the graph, {1, 4, 5} and {2, 3} form two different sets. Since, 1 is connected from both 2 and 3, we are left with edges 4 and 5. Since, 4 is already connected to 2 and 5 to 3, only options remain {4, 3} and {5, 2}.
1) Do a simple DFS (or BFS) traversal of graph and color it with two colors.
2) While coloring also keep track of counts of nodes colored with the two colors. Let the two counts be count_color0 and count_color1.
3) Now we know maximum edges a bipartite graph can have are count_color0 x count_color1.
4) We also know tree with n nodes has n-1 edges.
5) So our answer is count_color0 x count_color1 – (n-1).
Below is the implementation :
C++
Python3
# Python 3 code to print maximum edges such
# that Tree remains a Bipartite graph
def dfs(adj, node, parent, color):
# Increment count of nodes with
# current color
count_color[color] += 1
# Traversing adjacent nodes
for i in range(len(adj[node])):
# Not recurring for the parent node
if (adj[node][i] != parent):
dfs(adj, adj[node][i],
node, not color)
# Finds maximum number of edges that
# can be added without violating
# Bipartite property.
def findMaxEdges(adj, n):
# Do a DFS to count number of
# nodes of each color
dfs(adj, 1, 0, 0)
return (count_color[0] *
count_color[1] – (n – 1))
# Driver code
# To store counts of nodes with
# two colors
count_color = [0, 0]
n = 5
adj = [[] for i in range(n + 1)]
adj[1].append(2)
adj[1].append(3)
adj[2].append(4)
adj[3].append(5)
print(findMaxEdges(adj, n))
# This code is contributed by PranchalK
Output:
2
Time Complexity: O(n)
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. | https://tutorialspoint.dev/data-structure/graph-data-structure/maximum-number-edges-added-tree-stays-bipartite-graph | CC-MAIN-2021-21 | refinedweb | 422 | 59.13 |
Progressive Web App (PWA) using React
Develop a progressive web application using React.
Overview
In this article, let’s develop a progressive web application using React.
From Wikipedia,
A progressive web application (PWA) is a type of application software delivered through the web, built using common web technologies including HTML, CSS and JavaScript. It is intended to work on any platform that uses a standards-compliant browser, including both desktop and mobile devices.
Setup
Let’ start by creating a progressive web application using create-react-app and the PWA template.
npx create-react-app react-pwa-app --template cra-template-pwa
I use the Javascript template. For Typescript, use
cra-template-pwa-typescript.
I am going to leverage the sample code I developed in my previous article to display a dynamic grid with user details and dropdowns of countries and cities.
React Grid with Dynamic Dropdown and Selection
A React grid code example with dynamic dropdown and selection.
alpha2phi.medium.com
The sample application shall look like below. You can find the code in this repository.
Benchmark
Now let’s benchmark the application for PWA. I am going to build the application and serve the build.
yarn build
yarn global add serve
serve -s build
Using Google Chrome or Brave browser, navigate to open the
Developer Tools. I am going to use LightHouse to generate a report.
The application is not yet PWA ready. Let’s change it.
Service Worker
A service worker enables offline work for web applications. If you open
package.json you can see a list of workbox libraries configured as part of the PWA template.
Workbox is a set of libraries that can power a production-ready service worker for your Progressive Web App.
{
"name": "react-pwa-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"@testing-library/jest-dom": "^5.11.4",
"@testing-library/react": "^11.1.0",
"@testing-library/user-event": "^12.1.10",
"ag-grid-community": "^25.1.0",
"ag-grid-react": "^25.1.0",
"faker": "^5.5.3",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"react-scripts": "4.0.3",
"web-vitals": "^0.2.4",
"workbox-background-sync": "^5.1.3",
"workbox-broadcast-update": "^5.1.3",
"workbox-cacheable-response": "^5.1.3",
"workbox-core": "^5.1.3",
"workbox-expiration": "^5.1.3",
"workbox-google-analytics": "^5.1.3",
"workbox-navigation-preload": "^5.1.3",
"workbox-precaching": "^5.1.3",
"workbox-range-requests": "^5.1.3",
"workbox-routing": "^5.1.3",
"workbox-strategies": "^5.1.3",
"workbox-streams": "^5.1.3"
},
To enable the service worker, open
index.js and register the service worker.
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import * as serviceWorkerRegistration from './serviceWorkerRegistration';
import reportWebVitals from './reportWebVitals';ReactDOM.render(
<React.StrictMode>
<App />
</React.StrictMode>,
document.getElementById('root')
);// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers:
serviceWorkerRegistration.register();// If you want to start measuring performance in your app, pass a function
// to log results (for example: reportWebVitals(console.log))
// or send to an analytics endpoint. Learn more:
reportWebVitals();
Maskable Icon
Maskable icons is a new icon format that ensures that your PWA icon looks great on all Android devices. On newer Android devices, PWA icons that don’t follow the maskable icon format are given a white background. When you use a maskable icon, it ensures that the icon takes up all of the space that Android provides for it.
Open
manifest.json and add the maskable icon. I did not create a custom icon for the app. You can use the Maskable.app Editor to create one for your app.
{
"short_name": "React PWA",
"name": "React Progress Web Application",
"icons": [
{
"src": "favicon.ico",
"sizes": "64x64 32x32 24x24 16x16",
"type": "image/x-icon"
},
{
"src": "logo192.png",
"type": "image/png",
"sizes": "192x192"
},
{
"src": "logo512.png",
"type": "image/png",
"sizes": "512x512"
},
{
"src": "logo192.png",
"sizes": "192x192",
"type": "image/png",
"purpose": "maskable"
}
],
"start_url": ".",
"display": "standalone",
"theme_color": "#000000",
"background_color": "#ffffff"
}
Benchmark
Now build and benchmark the application again and it should be PWA ready.
Hosting
To make it simple, I am going to serve the application using GitHub pages. I will also enforce HTTPS as this is part of PWA requirements.
Make the following changes in
package.json.
...
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject",
"predeploy": "yarn build",
"deploy": "gh-pages -d build"
},
Now run
yarn add gh-pages
yarn run deploy
You can access the application here.
The application is installable now.
And here is the PWA report for the hosted application.
You can try turning off Internet access and the application should still be accessible.
Summary
Developing a progressive web application using React is easy. Looking at the advantages it brings, developers should explore and consider if PWA is the right fit for their applications. | https://alpha2phi.medium.com/progressive-web-app-pwa-using-react-b1650b181843?responsesOpen=true&source=---------8---------------------------- | CC-MAIN-2021-25 | refinedweb | 823 | 53.98 |
Warning: The content of this article may be out of date. It was last updated in 2002.
Overview
Handling network and locally retrievable resources is a central part of Necko. Resources are identified by URI "Uniform Resource Identifier" (Taken from RFC 2396):.
In Necko every URI scheme is represented by a protocol handler. Sometimes a protocol handler represents more than one scheme. The protocol handler provides scheme specific information and methods to create new URIs of the schemes it supports. One of the main Necko goals is to provide a "plug able" protocol support. This means that it should be possible to add new protocols to Necko just by implementing nsIProtocolHandler and nsIChannel. It also might be necessary to implement a new urlparser for a new protocol but that might not be necessary because Necko already provides URI implementations that can deal with a number of schemes, by implementing the generic urlparser defined in RFC 2396.
nsIURI and nsIURL
In a strict sense Necko does only know URLs, URIs by the above definition are much too generic to be properly represented inside a library.
There are however two interfaces which loosely relate to the distinction between URI and URL as per the above definition: nsIURI and nsIURL.
nsIURI represents access to a very simple, very generic form of an URL. Simply speaking it's scheme and non-scheme, separated by a colon, like "about". nsIURL inherits from nsIURI and represents access to typical URLs with schemes like "http", "ftp", ...
nsSimpleURI
One implementation of nsIURI is nsSimpleURI which is the basis for protocols like "about". nsSimpleURI contains setters and getters for the URI and the components of an URI: scheme and path (non-scheme). There are no pre written urlparsers for simple URIs, because of it's simple structure.
nsStandardURL
The most important implementation of nsIURL is nsStandardURL which is the basis for protocols like http, ftp, ...
These schemes support a hierarchical naming system, where the hierarchy of the name is denoted by a "/" delimiter separating the components in the path. nsStandardURL also contains the facilities to parse these typ of urls, to break the specification of the URL "spec" down into the most basic segments.
The spec consists of prepath and path. The prepath consists of scheme and authority. The authority consists of prehost, host and port. The prehost consists of username and password. The path consists of directory, filename, param, query and ref. The filename consists of filebasename and fileextension.
If the spec is completly broken down, it consists of: scheme, username, password, host, port, directory, filebasename, fileextension, param, query and ref. Together these segments form the URL spec with the following syntax:
scheme://username:password@host:port/directory/filebasename.fileextension;param?query#ref
For performance reasons the complete spec is stored in escaped form in the nsStandardURL object with pointers (position and length) to each basic segment and for the more global segments like path and prehost for example.
Necko provides pre written urlparsers for schemes based on hierachical naming systems.
Escaping
To be able to parse an URL safely it is sometimes necessary to "escape" certain characters, to hide them from the parser. An escaped character is encoded as a character triplet, consisting of the percent character "%" followed by the two hexadecimal digits representing the octet code. For example, "%20" is the escaped encoding for the US-ASCII space character.
Another quote from RFC 2396: implies that the segments of urls are escaped differently. This is done by NS_EscapeURL which is now part of xpcom, but started as part of Necko. The information how to escape each segment is stored in a matrix.
Also a string should not be escaped more than once. Necko will not escape an already escaped character unless forced by a special mask that can be used if it is known that a string is not escaped.
Parsing URLs
RFC 2396 defines an URL parser that can deal with the syntax that is common to most URL schemes that are currently in existence.
Sometimes scheme specific parsing is required. Also to be somewhat tolerant to syntax errors the parser has to know more about the specific syntax of the URLs for that scheme. To stay almost generic Necko contains three parsers for the main classes of standard URLs. Which one has to be used is defined by the implementation of nsIProtocolhandler for the scheme in question.
The three main classes are:
- Authority
- The URLs have an authority segment, like "http".
- NoAuthority
- These URLs have no or a degenerated authority segment, like the "file" scheme. Also this parser can identify drives if possible depending on the platform.
- Standard
- It is not known if an authority segment exists or not, less syntax correction can be applied in this case.
Noteable Differences
- Necko does not support certain deprecated forms of relative URLs, based on the following part of RFC 2396:.
The decision was made against backwards compatibility. This means that URLs like "http:page.html" or "http:/directory/page.html" are interpreted as absolute urls and "corrected" by the parser.
- Also the handling of query segments is different from the examples given in RFC 2396:
Within an object with a well-defined base URI of;p?q the relative URI would be resolved as follows: ... ?y = ...
Instead
?y =;p?y
was implemented as suggested by the older RFC 1808. This decision is based on an email by Roy T. Fielding, one of the authors of RFC 2396, stating that the given example is wrong. Details can be found at bug 90439.
- Registry-based authoritys Currently Necko's url-objects only support host based authoritys or urls with no authoritys. Registry-based authoritys as defined in RFC 2396
Many URI schemes include a top hierarchical element for a naming authority, such that the namespace defined by the remainder of the URI is governed by that authority. This authority component is typically defined by an Internet-based server or a scheme-specific registry of naming authorities. ... The structure of a registry-based naming authority is specific to the URI scheme, but constrained to the allowed characters for an authority component.
are not supported.
References
Main reference for URIs, URLs and URL-parsing is RFC 2396.
Original Document Information
- Author(s): Andreas Otte
- Last Updated Date: January 2, 2002
- Copyright Information: Portions of this content are © 1998–2007 by individual mozilla.org contributors; content available under a Creative Commons license | Details. | https://developer.mozilla.org/en-US/docs/URIs_and_URLs | CC-MAIN-2015-11 | refinedweb | 1,072 | 55.74 |
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
Suppose, as some commenters noted, to swap left and right:
static Comparison<string> Reverse(Comparison<string> comparison){ return (string x, string y) => comparison(y, x);}
Next time: One more defect to spot.
Couldn't you implement reverse as "return (string x, string y) => comparison(y,x);" instead?
Why don't you change the implementation to:
return (string x, string y) => comparison(y, x);
?
@Brian and @Patrick: You can. It's just that that doesn't *appear* immediately to be any better than returning the result of negating the original call. You basically need to remember the int.MinValue problem. Once you know it, it's easy to remember it... but I suspect that relatively few people who hadn't already come across it would pick that up in code review.
(Oops, hadn't spotted the longer implementation at the end of the post.)
I would suggest that the "comparison(y, x)" implementation works in any *sane* implementation of IComparer<T>. No doubt you could produce weird implementations (e.g. ones that favoured the first argument somehow) but I'd argue that at that point all bets are off anyway. I can't think of any guarantees provided by Sort in terms of which value will be used for the first parameter and which for the second.
If we generalized Comparison to something that took two different data types then I could see why you'd need the longer version. E.g. something like Comparison <int,string> where you compare the value of the int to the length of the string. But I can't think of a great example where such a comparison would be useful.
I wonder why you introduced FirstByLength anyway...? It seems to me that Reverse(bizarre) will have the same unwanted behaviour - or am I missing something?
I'm assuming the next defect to spot isn't the missing '};' in the last code sample.
According to Raymond's article that Eric referenced on Thursday: (blogs.msdn.com/.../55408.aspx)
"...your comparison function needs to follow these rules:
...
•Anti-Symmetry: Compare(a, b) has the opposite sign of Compare(b, a), where 0 is considered to be its own opposite. "
So if we can make that assumption about the comparison function, we should be able to reverse it by reversing arguments. If we can't make that assumption, I agree with Jon, "all bets are off anyway."
This is of course ignoring the fact that Eric's implementation discards any information in the return value of the inner comparison other than the sign. I'll leave it to others to decide whether that's a good thing or a bad thing.
@Tim Goodman: A comparison on two different types is useful when you want to update one sorted list using another sorted list of updates. You need to compare elements from the two lists to do a merge, but update objects won't necessarily be of the same type as the objects being updated.
"I would suggest that the "comparison(y, x)" implementation works in any *sane* implementation of IComparer<T>"
I suspect that a not-uncommon way to implement the Comparison<T> method is, when given input that is either numerical or easily mapped to something numerical, to just return the result of subtracting the second input from the first.
While I would just delegate to the numerical type's own CompareTo() method, a) that might not always be possible (for custom numerical types that are not as fully implemented as the built-in types), and b) I can't say it's technically insane to use subtraction instead (in fact, in a micro-optimization sort of way it could be more efficient in some cases, trading the math for a method call).
And subtraction makes even more sense if the difference calculation somehow just falls out of the data type naturally without an actual project to a numerical data type (not that I can think of any valid examples of that at the moment…but I don't feel comfortable ruling them out at the moment :) ).
At least, this is what I came up with when I starting thinking about why the Comparison<T> (and related comparison APIs) are defined this way, rather than just requiring -1, 0, and 1 as return values in the first place.
@pete.d, consider subtraction and these results. Comparison<int> comparer = (a, b) => a - b; int x = comparer(3, 4); int y = comparer(int.MinValue, int.MaxValue); What's x? What's y?
I don't know if it's a "defect" per se, but composing comparisons like this is going to become particularly unwieldy as soon as you want to sort *first* by *descending* size and *then* in ascending order of some other comparison function. It seems it would be cleaner not to clutter up our length comparison function with comparison composition functionality, and instead write a separate function to compose an arbitrary number of comparisons. (Or if we're really crazy on the functional programming to write a function to compose two comparisons and then apply a fold to it.) Then instead of something nasty like:
var comparison = Reverse(FirstByLength(Reverse(CompareAlphabetic)));
we can write more naturally:
var comparison = Compose(Reverse(CompareByLength), CompareAlphabetic);
I think using extension methods on the Comparison<T> type would probably be a cleaner way to compose, and would more closely match the LINQ syntax people are used to:
var comparison = CompareByLength.Reverse().ThenBy(CompareAlphabetic);
@Anthony P: "int y = comparer(int.MinValue, int.MaxValue); What's x? What's y?"
You are missing the point. I'm not talking about a general purpose comparison function. I'm talking about a scenario where the inputs are known to work fine for subtraction.
Now, wait a sec: "once they are sorted by length (!!!), sort each group that is the same length by some other comparison," isn't that what we ar supposed to do? If so, then why are we sorting by "some other comparison" within the same function that sorts by length, which is clearly before they are sorted by length??? Maybe my math is bad; maybe the result will be the same... but my gut feeling, plus a bit of military background that says "do exactly what you are told to do", are against this approach. Sorry if it does not sound very convincing...
@Denis, he's saying this: sort by length, let another comparison be a tiebreaker for matching lengths. Perhaps it's a straight alphabetic sort, maybe it's by the number of vowels. In SQL, it would be Order By Length, Foo. | http://blogs.msdn.com/b/ericlippert/archive/2011/01/24/spot-the-defect-bad-comparisons-part-two.aspx | CC-MAIN-2013-48 | refinedweb | 1,121 | 52.29 |
Using I2C devices with Raspberry PI Pico and MicroPython
- Raspberry PI
- Using.
No Commentson Using I2C devices with Raspberry PI Pico and MicroPythonPico, Raspberry PIpeppe8oi2c, micropython, raspberry pi pico, rpi picoCheck my RPI articles in Best Raspberry PI projects article or peppe8o.com home page. Or subscribe my newsletter (top right in this page) to be notified when new projects are available! Also interested to start 3D printing with a cheap budget? Visit my cheap 3D printers list0(0)
I2C (Inter-Integrated Circuit) communication allows to reduce wiring difficulty when connecting devices able to support this protocol. It is widely used in electronics industry and also Raspberry PI Pico can use I2C
In this tutorial I’m going to show you how works and how to use I2C with Raspberry PI Pico.
Explaining How I2C Works
I2C (pronounced “I squared C”) is a synchronous communication protocol invented in 1982 by Philips Semiconductors. It supports multi-master, multi-slave, packet switched, single-ended, serial communications between supporting devices. I2C allows to connect lower-speed peripheral to processors and microcontrollers.
It uses a simple 2 wire communication bus, Serial Data Line (SDA) and Serial Clock Line (SCL), pulled up with resistors. Typical voltages used are +5 V or +3.3 V.
Master devices generates the clock, so keeping communication management ownership. All devices share same wires:
Common I2C bus connections can work at 100 kbit/s (also known as standard mode) or at 400 kbit/s (known as Fast mode). You can theoretically use also arbitrarily low clock frequencies. On the other side, faster speeds are allowed on more recent I2C revisions, raising speed up to 1 Mbit/s (Fast mode plus), 3.4 Mbit/s (High Speed mode), and 5 Mbit/s (Ultra Fast-mode).
I2C uses basic transactions to let devices communicate between them. Each transaction begins with a START and ends with a STOP signal:
- Single message – Master writes data to Slave
- Single message – Master reads data from Slave
- Combined format – Master sends at least two reads or writes to one or more Slaves
The I2C uses a 7-bit address space to make each device on bus identified and send requests to correct slave listening on bus.
More info about this communication protocol can be found in Wikipedia I2C page.
Common I2C MicroPython Commands
Micropython includes an I2C class in its “machine” module, which makes it simple using this communication protocol. Common methods are below reported and refer to MicroPython I2C documentation.
- Constructors
- machine.I2C(id, *, scl, sda, freq=400000) – creates a new I2C object
- General Methods
- I2C.init(scl, sda, *, freq=400000) – initialise the I2C bus with the given arguments
- I2C.deinit() – turn off the I2C bus
- I2C.scan() – scan all I2C addresses (between 0x08 and 0x77 inclusive) and return a list of those that respond
- Primitive I2C operations
- I2C.start() – generate a START condition on the bus
- I2C.stop() – generate a STOP condition on the bus
- I2C.readinto(buf, nack=True, /) – reads bytes from the bus and stores them into “buf” variable
- I2C.write(buf) – write the bytes from “buf” variable to the bus
- Standard bus operations
- I2C.readfrom(addr, nbytes, stop=True, /) – read nbytes from the slave specified by addr
- I2C.readfrom_into(addr, buf, stop=True, /) – Read into buf from the slave specified by addr
- I2C.writeto(addr, buf, stop=True, /) – Write the bytes from buf to the slave specified by addr
- I2C.writevto(addr, vector, stop=True, /) – Write the bytes contained in vector to the slave specified by addr (vector should be a tuple or list of objects with the buffer protocol)
- Memory operations
- I2C.readfrom_mem(addr, memaddr, nbytes, *, addrsize=8) – Read nbytes from the slave specified by addr starting from the memory address specified by memaddr (addrsize specifies the address size in bits)
- I2C.readfrom_mem_into(addr, memaddr, buf, *, addrsize=8) – Read into buf from the slave specified by addr starting from the memory address specified by memaddr. The number of bytes read is the length of buf. (addrsize specifies the address size in bits)
- I2C.writeto_mem(addr, memaddr, buf, *, addrsize=8) – Write buf to the slave specified by addr starting from the memory address specified by memaddr (addrsize specifies the address size in bits)
In the following example, I will show you how to use Raspberry PI Pico with I2C, connecting a generic device, and scanning I2C bus to find slave addresses. I will use a generic I2C LCD screen, but this applies to all I2C compatible devices.
What We Need
As usual, I suggest adding from now to your favourite ecommerce shopping cart all needed hardware, so that at the end you will be able to evaluate overall costs and decide if continuing with the project or removing them from shopping cart. So, hardware will be only:
- A common computer (maybe with Windows, Linux or Mac). It can also be a Raspberry PI Computer board
- Raspberry PI Pico microcontroller (with a common micro USB cable)
- a generic I2C device (for example a I2C LCD)
Check hardware prices with following links:
Wiring Diagram
Following picture shows how to wire a generic I2C device to Raspberry PI Pico:
Note that SDA connects to SDA and SCL connects to SCL.
Step-by-Step procedure
Connect RPI Pico to Thonny (you can refer my tutorial about First steps with Raspberry PI Pico). Download my rpi_pico_i2c_scan.py script in your computer and open it with Thonny.
Following paragraphs will describe my code line by line. At the end, you will find scipt expected results.
Required modules are imported:
import machine
SDA and SCL PINs are defined as variables, set to corresponding PIN numbers:
sdaPIN=machine.Pin(0) sclPIN=machine.Pin(1)
An I2C instance is set, associated to variable i2c. This requires the i2c block number (0 – zero, in our wiring). It also requires the SDA and SCL PIN variables and, finally, the bus frequency:
i2c=machine.I2C(0,sda=sdaPIN, scl=sclPIN, freq=400000)
Following lines notify user that scanning is going to start and uses “devices” variable (a list) to store scan results, which return a list of all devices address (in decimal):
print('Scanning i2c bus') devices = i2c.scan()
If no devices have answered, then devices list will not include any number, so its lenght will be 0:
if len(devices) == 0: print("No i2c device !")
On the other hand, if at least one device answered to scan, then device length will be equal to the number of devices found:
else: print('i2c devices found:',len(devices))
Finally, for each device in list, its address will be printed (both in decimal and hexadecimal) with a for loop:
for device in devices: print("Decimal address: ",device," | Hexa address: ",hex(device))
Running the rpi_pico_i2c_scan Script
Run this script on Thonny (F5 key), selecting “This computer” as Save Location (if requested).
This will output following:
Scanning i2c bus i2c devices found: 1 Decimal address: 39 | Hexa address: 0x27
Found address will be useful for your programs to set correct identifier and send data to right slave device.
Enjoy!
Categories: Uncategorized | https://tlfong01.blog/2021/05/31/rpi-pico-i2c-notes/ | CC-MAIN-2021-25 | refinedweb | 1,173 | 51.58 |
This page provides access to lists of corrections and changes for C++ An Introduction to Computing, (2nd Ed., Prentice Hall, 1997) by Joel Adams, Sanford Leestma, and Larry Nyhoff.
We have tried to correct errors with new book printings, but discovered that in early printings, it was also necessary to make other changes due to the changing C++ standard. These changes are also given in the following lists. The most significant ones are in the 3rd and later printings when the C++ standard was finalized.
To determine which printing you have: Look at the descending list of numbers below "Printed in the United States of America" on the page before the preface. The last number in this list is the number of the printing for that book.
#include <iostream> #include <cstdlib> #include <cctype> #include <vector> using namespace std;
We would appreciate your sending us errors as you find them as well as any other comments about the text. | http://cs.calvin.edu/books/c++/intro/2e/Errata/ | crawl-001 | refinedweb | 158 | 68.3 |
Re: Cannot return values of char variable
- From: jt@xxxxxxxxxxx (Jens Thoms Toerring)
- Date: 3 Nov 2006 12:40:31 GMT
Pedro Pinto <kubic62@xxxxxxxxx> wrote:
Ok here it goes:
The problem is, when the function menu starts, i insert the information
CREATE TABLE [tab] col1,col2,col3
that starts the function criaRespCreate and the buf variable, when
exported into the program, before is ok, i print it to the screen and
appears well, but when returned it comes empty.......
I start the client socked, the result of the printf's:
sd075@lab1215-31:~/Desktop/teste/Cliente$ ./clisql 17500
Sintaxe do programa cliente:
clisql -s <endere?o_servidor> <porto_servidor>
-endere?o_servidor: (opcional) IP do servidor
-porto_servidor: porto do servidor
Insira comando:
CREATE TABLE [tab] col1,col2,col3
1 - buffer =
2 - buffer =
passei o primeiro divide com aux[0] = CREATE TABLE
passei o primeiro divide com aux[1] = tab] col1,col2,col3
passei o segundo divide com aux[0] = tab
passei o segundo divide com aux[1] = col1,col2,col3
dpx do memcpy aux[0] = tab
dpx do memcpy aux[1] = col1,col2,col3
buffer =
antes do strcpy - aux[1] =
argv[0] = (null)
No test to search.
Code --------------------------
cliente_aux.c
#include "cli.h"
void syntax() {
printf("Sintaxe do programa cliente:\n");
printf(" clisql -s <endere?o_servidor>
<porto_servidor>\n");
printf(" -endere?o_servidor: (opcional) IP do
servidor\n");
printf(" -porto_servidor: porto do servidor\n");
}
void sintaxe(){
printf("Sintaxe do programa cliente:\n");
printf("CREATE TABLE [table_name] coluna1,coluna2,coluna3...\n");
printf("INSERT INTO [table_name] VALUES
(coluna1_value,coluna2_value,coluna3_value...) \n");
printf("UPDATE [table_name] SET (Coluna) WHERE (expressao)\n");
printf("SELECT [Coluna] FROM (table_name) WHERE (expressao)\n");
}
char menu(){
Since you seem to be trying to return a char pointer (not just a char)
the function must be declared accordingly (and it doesn't take an
argument, so tell the compiler):
char *menu( void )
char buf[BUFFSIZE];
char bufSaida[BUFFSIZE];
char *tmp = buf;
int cod = 0;
int id = random();
int tamanhoMsg;
printf("Insira comando:\n");
// le uma linha de input
if (fgets(buf,sizeof(buf),stdin) == NULL)
perror("fgets");
I gues it would make no sense to continue if fgets() failed to
obtain user input, does it?
char comando[10];
Please remember that defining new variables randomly within a function
is a C99 feature - you may run into trouble with this if your compiler
doesn't support C99.
memset(comando,0,sizeof(comando));
This is not necessary.
if (sscanf(buf,"%9s",comando) < 1)
perror("scanf");
Again, is it a good idea to continue even though sscanf() failed?
But then, what is 'comando' used for at all? It's not used anywhere
below.
if (strncmp(buf,"CREATE",6) == 0){
cod = 11;
tamanhoMsg = criaRespCreate(cod,id, buf,bufSaida);
}
else if(strncmp(buf,"INSERT",6) == 0){
cod = 12;
//buffer =criaRespInsert(cod,id, buf);
}
else if(strncmp(buf,"UPDATE",6) == 0){
cod = 13;
//buffer =criaRespUpdate(cod,id, buf);
}
else if(strncmp(buf,"SELECT",6) == 0){
cod = 14;
//buffer =criaRespSelect(cod,id, buf);
}
else if(strncmp(buf,"QUIT",4) == 0){
exit(0);
}
else { printf("Comando Desconhecido\n");
}
return bufSaida;
Now, here's a very serious problem (beside the one that you defined
menu() to return a char, not a char pointer): you return a pointer
to an array that is only defined while you're within the menu()
function. Once you have left the function you can't use the buffer
anymore! Once the function has ended 'bufSaida' goes out of scope
and doing anything with the return value is wrong and it can crash
your program, it can seem to work flawlessly or it can seem to work
at first but then result in strange results.
}
int criaRespCreate(int cod, int id, char *argv, char *buffer){
I would recommend _not_ to use 'argv' as a variable or argument
name since everybody reading your code will be confused since
'argv' (and 'argc') are the traditional names of the arguments
main() is invoked with.
int tam = 0;
int nRows = 0;
char *aux[strlen(argv)];
Some compilers will complain here since the length of that array
isn't known at compile time (but since you seem to be using a
C99 compiler anyway it's ok).
char str[]=" // ";
int h = 0;
//tamanho do caracter delimitador
int tcd = strlen(str);
// inserir codigo
memcpy(buffer,&cod,LENGTH);
Why would you directly copy an integer to a char buffer? That hardly
makes any sense. You won't have a textual representation of the number
in the buffer but some bytes that don't make any sense when the
content is interpreted as a string. Moreover, you don't have a '\0'
character at the end, so trying to use 'buffer' as it were a string
(as you do below) is a receipe for disaster. Perhaps waht you really
want here is sprintf()?
And if you really want to copy the bytes of an integer than you
should use 'sizeof(int)' or 'sizeif cod' instead of LENGTH (which
is defined as 4, which can be the right size, but that's not
guaranteed).
tam += LENGTH;
printf("1 - buffer = %s\n", buffer);
Here things get badly wrong: 'buffer' doesn't end in a '\0', so
it's not a string, so it can't be used as an argument to printf()
with "%s". And then, having an int copyied into a string doesn't
make it a string representation of that value. I guess you're
assuming that there's some automatic conversion happening but
that's not the case.
// inserir caracter delimitador //
memcpy(buffer+tam,&str,strlen(str));
tam += tcd;
'buffer' still has no '\0' at the end since you didn't copy that
from 'str'. Perhaps the fucntion you are looking for is strcpy()?
printf("2 - buffer = %s\n", buffer);
And thus this is the next point where things go wrong.
// inserir id da mensagem
memcpy(buffer+tam, &id, LENGTH);
tam += LENGTH;
Same problem as above, copying an integer to a char buffer doesn't
make any sense if you expect the char buffer to contain a represen-
tration of the int that would make sense as a string.
// inserir caracter delimitador //
memcpy(buffer+tam,&str,strlen(str));
tam += tcd;
// dividir string e inserir nome tabela
divide(argv,aux,"[");
Unfortunately, there's neither a declaration nor a definition
of divide() in what you posted so it's impossible to say if
the way you call it is correct (and if it does what you seem
to expect) or what effect of 'aux' (which you use in the
following) it has...
printf("passei o primeiro divide com aux[0] = %s\n\n", aux[0]);
printf("passei o primeiro divide com aux[1] = %s\n\n", aux[1]);
memset(argv,0,strlen(aux));
strcpy(argv,aux[1]);
memset(aux,0,strlen(argv));
divide(argv,aux,"]");
printf("passei o segundo divide com aux[0] = %s\n\n", aux[0]);
printf("passei o segundo divide com aux[1] = %s\n\n", aux[1]);
memcpy(buffer+tam,&aux[0],strlen(aux[0]));
Since 'buffer' as a fixed, final size and you don't have any idea
how long the string aux[0] points to is you could easily write
past the end of 'buffer' if the string is too long. Let's
just hope that your divide function at least works in a way
that the strings the elements of 'aux' point to have an '\0' at
the end, or there would be lots errors in the things to come...
tam += strlen(aux[0]);
printf("dpx do memcpy aux[0] = %s\n\n", aux[0]);
printf("dpx do memcpy aux[1] = %s\n\n", aux[1]);
// inserir caracter delimitador
memcpy(buffer+tam,&str,strlen(str));
tam += tcd;
printf("buffer = %s\n", buffer);
Since you still have no '\0' at the end of 'buffer' you still can't
use it as the argument to printf() with "%s".
// inserir numero e nome colunas
memset(argv,0,strlen(aux[1]));
How do you know for sure that the amount of memory 'argv' points
to is large enough to hold as many 0s as auy[1] has characters?
printf("antes do strcpy - aux[1] = %s\n\n", aux[1]);
strcpy(argv,aux[1]);
printf("argv[0] = %s\n", argv[0]);
Since what you defined as 'argv' is a char pointer, argv[0] is
a char, so you can't use it with "%s", you would need "%c".
memset(aux,0,strlen(argv));
'aux' is an array of pointers (as many as 'argv' has characters.
If you want to zero them all, then you would need
memset( aux, 0, strlen(argv) * sizeof *aux );
But then there's also the problem that 0 is not necessarily a NULL
pointer...
divide(argv,aux,",");
printf("passei o terceiro divide com aux[0] = %s\n\n", aux[0]);
printf("passei o terceiro divide com aux[1] = %s\n\n", aux[1]);
printf("passei o terceiro divide com aux[2] = %s\n\n", aux[2]);
printf("str(aux) = %d\n", strlen(aux));
Since 'aux' is not a string but an array of pointers calling strlen()
on it is simply wrong.
// inserir numero de colunas
for(h=0; aux[h]!=NULL;h++){
nRows++;
}
Let's hope either a NULL pointer is represented by all bits 0 or
your divide() function did the right thing...
memcpy(buffer+tam, nRows, LENGTH);
tam += LENGTH;
Again, copying the bits of an int into a char array may not be what
you want...
// inserir caracter delimitador
memcpy(buffer+tam,&str,strlen(str));
tam += tcd;
// inserir colunas
for(h=0; aux[h]!=NULL;h++){
memcpy(buffer+tam,&aux[h],strlen(aux[h]));
tam += tcd;
// inserir caracter delimitador
memcpy(buffer,&str,strlen(str));
tam += tcd;
}
return buffer;
}
-------------------------------------------
clisql.c--------------
#include "cli.h"
/* Funcao Main */
int main (int argc, char *argv[]) {
syntax();
/* Funcaoo que apresenta sintaxe do programa */
Calling a function before you have defined all variables is only
working with a C99 compiler.
int sock;
int broadcast = 1;
struct sockaddr_in server;
struct sockaddr_in cli;
int porto_servidor;
char buffer[BUFFSIZE];
char *tmp = buffer;
tmp = buffer;
char input[1024];
/* Limpa a estrutura */
bzero((char *)&server, sizeof(server));
bzero((char *)&cli, sizeof(cli));
Why not use memset()?
/* Limpar o buffer */
memset(buffer, 0, BUFFSIZE);
input[0] = menu(buffer);
Do you remember? menu() was defined to return a char, but in reality
it did return a pointer to a char buffer that you can't use here
anymore. So whatever is stored in 'input[0]' is rather likely to
be complete garbage. Luckily, you never use 'input' in the following.
But then what's the reason for this assignment?
Even worse, menu() doesn't accept any arguments, but you call it
with one. That should make your compiler complain loudly. And
'buffer' is never going to be set up to anything you seem to
expect, so using it in the following is a bad mistake.
I am not going to comment on the use of non-standard functions like
socket() etc. in the following, these are things better left for groups
like comp.unix.programmer, and you have enough problems with C anyway
that you should deal with first.
/* Criacao da socket */
if ((sock = socket(AF_INET,SOCK_DGRAM,0)) <0) {
perror("socket");
exit(1);
}
else {
printf("Cliente: socket criada \n");
}
/* Verificar ip */
if(strcmp(argv[1], "-s") == 0) {
printf("entrei n o verifica ip, estamos a funcionar com endereco ip
inserido\n");
inet_aton(argv[2], &server.sin_addr);
porto_servidor = atoi(argv[3]);
}
Checking that argc is at least 4 and that the elements of argv are
really strings of the form you expect would be the RIGHT thing to
do before using them.
else {//enviar em broadcast
printf("N?o foi fornecido o endere?o IP, a enviar em broadcast\n");
porto_servidor = atoi(argv[1]);
server.sin_addr.s_addr = htonl(INADDR_BROADCAST);
/* Activar broadcast na socket */
if (setsockopt(sock, SOL_SOCKET, SO_BROADCAST, (char*)&broadcast,
sizeof (int)) <0) {
perror("setsockopt");
exit(1);
}
}
/* verifica se a porta esta entre os valores pretendidos,
* visto o n?mero de grupo ser o 75, port vai variar entre
* 17500 e 17599 */
You already have been told that defining a function within another
function isn't allowed in C.
void checkPort(porto_servidor) {
You need to supply the type of the argument (probably 'int').
if(porto_servidor < 17500 || porto_servidor > 17599) {
printf("Cliente: O porto tem de ser entre 17500 e 17599!\n");
exit(-1);
}
}
/* Definir a familia de protocolo e porto */
server.sin_family = AF_INET;
server.sin_port = htons(porto_servidor);
/* Inicio da comunicacao com o servidor */
/* Enviar a mensagem para o servidor*/
if(sendto(sock, tmp, strlen(buffer), 0, (struct sockaddr *)&server,
sizeof(server)) <0) {
Let's assume 'buffer' was set up in menu() (which didn't hapen, see
above) via the call to criaRespCreate() (but only under certain circum-
stances, in most cases it would just contain garbage), and the way it's
potentially would be set up there does't make sure there's a '\0' at its
end, so using it as the argument to strlen() is not going to work.
perror("sendto\n");
exit(1);
}
else {
printf("Cliente: Mensagem enviada com sucesso\n");
}
while(1){
printf("entrei no while\n");
/* timeout de 10 segundos */
struct timeval timeout;
timeout.tv_sec = 10; //segundos
timeout.tv_usec = 0; //microsegundos
fd_set readfds;
int sel;
// char buf1[64];
/* limpar a fd_set */
FD_ZERO(&readfds);
/* colocar o file descriptor no fd_set */
FD_SET(sock, &readfds);
printf("vou ver agora o select\n");
if((sel = select(sock+1, &readfds, NULL, NULL, &timeout)) <0)
perror("select\n");
else if (sel == 0){
printf("Ocorreu um timeout! Nao foi recebida nenhuma mensagem em
10s.\n");
exit(0);
}
memset(buffer,0,BUFFSIZE);
printf("vou agora ler a resposta do servidor\n");
/* ler a resposta do servidor */
if(recvfrom(sock, buffer, BUFFSIZE, 0, NULL, NULL) < 0) {
printf("entrei no recv por ser < 0\n");
perror("recvfrom\n");
exit(1);
}
else {
printf("Cliente: Mensagem recebida com sucesso\n");
printf("buffer = %s\n", buffer);
What makes you sure that what you received from the server has a '\0'
character at the end, so you can use 'buffer' with "%s" in printf()?
exit(0);
}
}
You will never get here because of the infinite while loop above
without any 'break' that would you get you out of it (actually
the loop will only repeat as long as select() returns a negative
value, i.e. indicates an error happened, in all other cases you
just exit from the program).
/* fecho da socket */
close(sock);
}
----------------------cli.h--------------------------------------
#ifndef CLI_H
#define CLI_H
#include "projecto.h"
/* Definicao das funcoes definidas em cliente_aux.c */
void syntax();
char menu();
int criaRespCreate(int cod, int id, char *argv, char *buffer);
#endif
--------------------------projecto.h----------------------------------------
#ifndef PROJECTO_H
#define PROJECTO_H
#define BUFFSIZE 6804
#define LENGTH 4
/* Bibliotecas necessarias */
#include <sys/socket.h>
#include <sys/types.h>
#include <stdio.h>
#include <errno.h>
#include <sys/time.h>
#include <stdlib.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
#endif
I can only recommend that you start with something more simple, like
some functions for manipulating strings and non-string memory, testing
them carefully, in order to get a better idea what's the difference.
Try to figure out when you have to allocate memory and when not, how
to return memory from functions etc. before you embark on a complex
project involve both dealing with difficult user input, communication
with a server (and getting the protocol right) etc.
Regards, Jens
--
\ Jens Thoms Toerring ___ jt@xxxxxxxxxxx
\__________________________
.
- References:
- Cannot return values of char variable
- From: Pedro Pinto
- Re: Cannot return values of char variable
- From: T.M. Sommers
- Re: Cannot return values of char variable
- From: Pedro Pinto
- Prev by Date: Re: assembly in future C standard
- Next by Date: Re: Out-of-bounds nonsense
- Previous by thread: Re: Cannot return values of char variable
- Next by thread: Re: Cannot return values of char variable
- Index(es):
Relevant Pages
-: socket communication: send & receive doesnt work right
... So I don't want to send a string as bytes. ... public void send_doubles(double
vals, int len) throws ... // send a short acknowledgement to the server ...
char *result; ... (microsoft.public.win32.programmer.networks)
- Re: [PATCH] markers: modpost
... pointers to the name/format string pairs. ... The same can then be done with
modules using the __markers section. ... +static void read_markers(const char *fname)
... int main ... (Linux-Kernel)
- Re: [PATCH] markers: modpost
... This adds some new magic in the MODPOST phase for CONFIG_MARKERS. ... will be
a neighbor of its format string. ... +static void read_markers(const char *fname)
... int main ... (Linux-Kernel) | http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2006-11/msg00558.html | crawl-001 | refinedweb | 2,737 | 50.57 |
Well I have narrowed the problem down to the drop_tty() function on line 85 of /usr/share/virt-manager/virt-manager.py
def drop_tty(): # We fork and setsid so that we drop the controlling # tty. This prevents libvirt's SSH tunnels from prompting # for user input if SSH keys/agent aren't configured. if os.fork() != 0: os._exit(0) os.setsid()
If I comment out the part that calls drop_tty on line 348 it works fine.
Commenting out os.setsid() still freezes.
Commenting out the os.fork and os._exit part causes it to throw out "OSError: [Errno 1] Operation not permitted"
I suspect os.fork() is doing bad things. Does anyone know enough about Python to know what is actually happening?
Hello all,
Virt-manager (allows you to do KVM virtual machines) is displaying very strange behaviour. When I run virt-manager (from menu or console) it pops up an unresponsive window and its process (pyton2 /usr/share/virt-manager/virt-manager.py) it burns 100% of a CPU core. The real strangeness is that when I run it with an argument such as "--debug" or "-- no-fork" that prevents it from forking to background it runs fine as normal. It's very similar to a Redhat bug described here except that it happens immediately when the window appears.
I suspect that, since I use KDE, some odd GTK/Gnome dependency has been overlooked. Does this problem happen to anyone else? This happens on two of my systems that run Arch Linux and I want to exclude my own configuration problems before I file a bug report.]]> | https://bbs.archlinux.org/extern.php?action=feed&tid=161434&type=atom | CC-MAIN-2016-50 | refinedweb | 269 | 67.04 |
On 19/02/2013 09:48, Imsieke, Gerrit, le-tex wrote:
>
> On 19.02.2013 09:43, Michael Kay wrote:
>> On 19/02/2013 00:06, Imsieke, Gerrit, le-tex wrote:
>>> An example in the wild is the Unicode presentation that I gave at
>>> Stuttgart XML User Group:
>>>
>>>
>>> It’s an XPath 2 evaluation form.
>>>
>>> I don’t remember exactly how I did it, but I remember that it wasn’t
>>> trivial.
>> It's possible this was before we added the Javascript API (and may have
>> been one of the examples that motivated us to do it).
> I used the API*, but I found it to be non-trivial nonetheless. Now that
> I revisited the files and understood what I did, I think it’s quite
> straightforward.
>
> But if you look at the comments in
>,
> it becomes evident that I hard a hard time figuring it out. For example,
> I discovered (but didn’t report until now) that I couldn’t define
> functions (in some namespace) in a generated stylesheet because the
> namespace declarations in the generated stylesheet seem to get discarded.
I can believe that, although it would be a bad bug. I think nearly all
our tests are writing to the HTML DOM, rather than returning an XML DOM.
Michael Kay
Saxonica
>
> Gerrit
>
> * For example,
> Saxon.run({
> stylesheet: stylesheet,
> source: document,
> method: "transformToDocument"
> }).getResultDocument();
>
> ------------------------------------------------------------------------------
> Everyone hates slow websites. So do we.
> Make your web apps faster with AppDynamics
> Download AppDynamics Lite for free today:
>
> _______________________________________________
> saxon-help mailing list archived at
> saxon-help@...
>
View entire thread | http://sourceforge.net/p/saxon/mailman/message/30503152/ | CC-MAIN-2015-11 | refinedweb | 258 | 64.91 |
I've been having a linker error in one of my programs and I dont understand. It says there's an undefined reference with one of my static instance variables yet it is clearly listed in both the class header and the implementation of the functions. Here is a sample code that resembles the code I'm using.
The code for foo is seperate because it will be a seperate header implementation file when it is really used.
When I try to compile it I get the following error:When I try to compile it I get the following error:Code:#include <iostream> using namespace std; class foo { private: static int death; public: static void init(); void print(); }; void foo::init() { death = -5; } void foo::print() { cout << death << endl; } int main() { foo myFoo; foo::init(); for(int i = 0; i < 10; i++ ) myFoo.print(); return 0; }
test.o:test.cpp
.text+0x105): undefined reference to `foo::death'.text+0x105): undefined reference to `foo::death'
test.o:test.cpp
.text+0x117): undefined reference to `foo::death'.text+0x117): undefined reference to `foo::death'
Also when I do the following objdump: (objdump -t test.o | grep death)
I find [ 24](sec 0)(fl 0x00)(ty 0)(scl 2) (nx 0) 0x00000000 __ZN3foo5deathE in the symbol table which leads me to believe that the linker is being dumb. Of course it's probably something I did, but from example code on the web of using static instance variables I shouldn't be having this problem. Can anyone help?
Thanks | http://cboard.cprogramming.com/cplusplus-programming/87498-static-instance-variable-linker-error.html | CC-MAIN-2015-48 | refinedweb | 256 | 62.58 |
Infinite Streams in Java 8 and 9
Infinite Streams in Java 8 and 9
Time to put on your functional programming hats. See how Java 8 and 9 handle implementing infinite streams and check out the tools at your disposal.
Join the DZone community and get the full member experience.Join For Free
With the advent of lambdas and streams in Java 8, it's finally possible to write Java algorithms in a more functional style. An important element of functional programming is generated (or "infinite") streams that are cut by suitable conditions (of course, these streams aren't really infinite, but they are computed "lazily", i.e. on demand). Infinite streams do exist in Java, too, as we'll see in the following examples.
We're implementing the Luhn algorithm to calculate a simple checksum. I took the idea of using this example from the talk "JVM Functional Language Battle" by Falk Sippach. A short summary of the Luhn algorithm:
- Given a non-negative, integer number of arbitrary length.
- From right to left, double every second digit. If the resulting number has two digits, take the square sum (so we have one digit again).
- Sum up all the resulting digits.
- If the sum modulo 10 is zero, the original number is valid according to the Luhn algorithm.
How do we implement this algorithm with Java 8 streams? Without variables, state change or conditional branching? Like this, for the example:
import java.util.PrimitiveIterator; import java.util.stream.IntStream; public class Luhn { public static boolean isValid(String number) { PrimitiveIterator.OfInt factor = IntStream.iterate(1, i -> 3 - i).iterator(); int sum = new StringBuilder(number).reverse() .toString().chars() .map(c -> c - '0') .map(i -> i * factor.nextInt()) .reduce(0, (a, b) -> a + b / 10 + b % 10); return (sum % 10) == 0; } }
Let's run
isValid()with parameter string "8763".
IntStream.iterate() generates an endless (but lazily calculated) stream of numbers 1,2,1,2,1,2, ... – this is the factor we'll multiply each digit with.
Now, we wrap the parameter string in a StringBuilder since this class offers a reverse() method. This way we can walk through the chars() stream of digits forward:
'3', '6', '7', '8'
The first
map() converts each digit char to its number value. The second
map() multiplies this number value with the next factor value from the infinite
IntStream (i.e. here, each second digit is doubled):
3, 12, 7, 16
reduce() calculates the square sum for all of these numbers (even for the ones with only one digit). We add the tens-digit (div 10) and the ones-digit (mod 10):
(0 + 3) + (1 + 2) + (0 + 7) + (1 + 6)
That gives 20, which is divisible by 10 without a remainder. So the checksum is valid!
Meanwhile, Java 9 introduced
takeWhile(). We can use this stream operation to get rid of the
StringBuilder and the reversing of the characters. Instead, we create a second infinite stream from the
number parameter and cut the stream with a suitable lambda predicate:
import java.util.PrimitiveIterator; import java.util.stream.IntStream; import java.util.stream.LongStream; public class Luhn9 { public static boolean isValid(long number) { PrimitiveIterator.OfInt factor = IntStream.iterate(1, i -> 3 - i).iterator(); long sum = LongStream.iterate(number, n -> n / 10) .takeWhile(n -> n > 0) .map(n -> n % 10) .map(i -> i * factor.nextInt()) .reduce(0, (a, b) -> a + b / 10 + b % 10); return (sum % 10) == 0; } }
The
chars() stream from the previous Java 8 example stopped after the last character of the parameter string. In this Java 9 example,
LongStream.iterate() divides the parameter by 10 endlessly, e.g.
8763, 876, 87, 8, 0, 0, 0, ...
We have to consider the stream's values only as long as they are greater than zero.
takeWhile() does exactly this and yields the following finite stream:
8763, 876, 87, 8
Now we
map() this stream to a stream of its rightmost digits (mod 10):
3, 6, 7, 8
The final operations (doubling of each second digit and adding the square sums) are the same as in the Java 8 example.
This implementation in Java might not be perfect functional style because we are limited to the existing Stream API and we cannot use composition of arbitrary functions (at least unless we're using additional libraries like Vavr). But hey, using only standard API, these examples show quite a different and more expressive Java already than we used to know!
[This text is also available in German on my personal blog.]
Published at DZone with permission of Thomas Much . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/infinite-streams-in-java-8-and-9?fromrel=true | CC-MAIN-2019-47 | refinedweb | 783 | 58.69 |
IRC log of svg on 2012-03-08
Timestamps are in UTC.
20:02:32 [RRSAgent]
RRSAgent has joined #svg
20:02:32 [RRSAgent]
logging to
20:02:34 [trackbot]
RRSAgent, make logs public
20:02:34 [Zakim]
Zakim has joined #svg
20:02:36 [trackbot]
Zakim, this will be GA_SVGWG
20:02:36 [Zakim]
ok, trackbot, I see GA_SVGWG(SVG1)3:00PM already started
20:02:37 [trackbot]
Meeting: SVG Working Group Teleconference
20:02:37 [trackbot]
Date: 08 March 2012
20:02:50 [krit]
zakim, who is here?
20:02:50 [Zakim]
On the phone I see +1.415.832.aaaa, +61.2.980.5.aabb
20:03:12 [krit]
zakim, aaaa is me
20:03:13 [Zakim]
+krit; got it
20:03:20 [cyril]
zakim, aabb is me
20:03:20 [Zakim]
+cyril; got it
20:03:32 [Zakim]
+ +33.9.53.77.aacc
20:03:56 [Tav]
zakim, +33 is me
20:03:56 [Zakim]
+Tav; got it
20:04:57 [krit]
dschulze@adobe.com
20:05:27 [Zakim]
+??P4
20:05:33 [ed]
Zakim, ??P4 is me
20:05:33 [Zakim]
+ed; got it
20:06:08 [ed]
Agenda:
20:07:05 [krit]
scribe: krit
20:07:18 [krit]
scribenic: krit
20:07:27 [krit]
scribenick: krit
20:07:51 [ed]
chair: ed
20:08:11 [krit]
ed: first topic time changes
20:08:50 [krit]
ed: australia with the latest change?
20:09:23 [krit]
ed: 3 weeks time all transit. Then we review chnages to times
20:09:37 [ChrisL]
ChrisL has joined #svg
20:10:22 [Zakim]
+ChrisL
20:10:46 [krit]
cyril: I think we should switch to morning in europe
20:10:56 [krit]
Tav: doesn't work for the damn americans
20:11:04 [ChrisL]
lol
20:11:09 [krit]
cyril: doesn't work for east coste
20:11:18 [krit]
cabanier: what about west coast
20:11:25 [krit]
cyril: haha
20:11:26 [cabanier_away]
cabanier_away has joined #svg
20:11:35 [shepazu]
shepazu has joined #svg
20:11:51 [krit]
Tav: 1h later in europe one hour later in australia last year
20:12:03 [ed] shows 20.00 UTC May 1, 2012
20:12:42 [krit]
krit: cabanier: 1 hour later is fine for us
20:13:02 [ed]
so, sydney 6am... maybe not great
20:13:03 [krit]
cabanier: 1 earlier is not that good for australia
20:13:16 [krit]
cyril: we should check with cameron
20:13:37 [krit]
krit: stay for now?
20:13:50 [krit]
ed: stay till 3 weeks when all countries switched
20:13:57 [krit]
cyril: 2 more weels
20:14:04 [krit]
s/weels/weeks/
20:14:21 [krit]
ed: keep the time for 2 more weeks
20:15:07 [krit]
… more discussions about time shift
20:15:24 [krit]
ed: no decision now?
20:15:43 [krit]
all: lets stay for the next 2 weeks
20:16:16 [cyril]
zakim, who is noisy?
20:16:29 [Zakim]
cyril, listening for 12 seconds I heard sound from the following: krit (3%)
20:16:55 [krit]
resolution: keep the time for 2 weeks
20:17:17 [krit]
action: erik will send a mail for time shift and time change
20:17:17 [trackbot]
Created ACTION-3245 - Will send a mail for time shift and time change [on Erik Dahlström - due 2012-03-15].
20:17:36 [krit]
next topic: template for tests
20:17:42 [cyril]
zakim, who is noisy?
20:17:44 [krit]
ChrisL: mails on mailing list
20:17:53 [Zakim]
cyril, listening for 10 seconds I heard sound from the following: Tav (44%), ed (5%)
20:18:29 [ed]
20:18:34 [krit]
Tav: cameron said that html tag inside svg and try to add html inside svg you'' break
20:18:56 [krit]
ChrisL: cameron suggested a lot more chnages
20:19:07 [krit]
ChrisL: he seems to wnat to have a whole new layer
20:19:16 [krit]
Tav: link and meta are not part of svg
20:19:27 [krit]
ChrisL: we had a resolution to add them
20:19:34 [krit]
ChrisL: anyway we should ask peter
20:19:45 [krit]
ChrisL: wecould add it to our svg naemspace
20:19:57 [krit]
ed: html5 might break the parsing algorithm there
20:20:05 [cyril]
zakim, who is here?
20:20:05 [Zakim]
On the phone I see krit, cyril, Tav, ed, ChrisL
20:20:56 [krit]
ChrisL: it seems to be a clear resolution that al changes might break html5 parsing. for those test we need something different
20:21:12 [ed]
s/html5 might break the parsing algorithm there/html5's parsing algorithm would break out into html mode if we didn't parent the link and meta elements inside a foreignobject/
20:21:41 [krit]
Tav: cameron suggestions is a bit simplper
20:21:46 [krit]
ChrisL: i didn't like it
20:22:22 [ChrisL]
I think he has oversimplified and it will loose functionality
20:22:35 [krit]
Tav: he is not removing anything
20:22:45 [krit]
Tav: he just implementes in another way
20:23:01 [krit]
Tav: he is just not put html5 into ahead
20:23:02 [krit]
??
20:23:21 [krit]
Tav: th eproblem with meta tag.
20:23:33 [krit]
Tav: he still has a data tag
20:24:21 [krit]
Tachif an html5 parser finds a link in a svg tag
20:24:27 [krit]
ChrisL: what happens?
20:24:39 [krit]
Tav: I don't know
20:25:02 [krit]
ChrisL: html parser don't care about the tags at all
20:26:07 [cabanier95]
cabanier95 has joined #svg
20:26:09 [krit]
ChrisL: a framwork in place to start writing tests
20:26:28 [krit]
ChrisL: otherwise they don't get it written
20:26:36 [krit]
s/it//
20:26:47 [krit]
ed: I prefer simplified structure
20:26:54 [krit]
ed: no namespaces
20:26:58 [krit]
Tav: I agree
20:27:15 [cabanier]
cabanier has joined #svg
20:27:19 [krit]
ChrisL: we had the test criteria … where did it went?
20:27:38 [krit]
Tav: the test harness has to be modified to do that
20:27:46 [krit]
ChrisL: is that in the head and a meta
20:28:11 [krit]
ChrisL: simplicity is goood
20:28:13 [krit]
but
20:28:39 [krit]
Tav: link to the different parts of the spec
20:28:47 [krit]
Tav: and which parts it needs to pass
20:28:50 [Tav]
20:29:16 [krit]
Tav: test assertions:
20:29:29 [krit]
ChrisL: it is all in the content attribute
20:29:35 [krit]
ChrisL: no markup at all
20:29:56 [krit]
Tav: above in the spec links, there is a link that tells you what part of the spec gets tested
20:30:01 [krit]
ChrisL: thats fine
20:30:21 [cyril]
seems broken
20:30:51 [ed]
the tags that break out of "foreign" mode (aka SVG in html):
(link is not among those)
20:31:09 [krit]
ChrisL: ok
20:31:40 [krit]
ed: meta does break out
20:31:45 [krit]
Tav: link is ok
20:31:49 [krit]
ed: yes
20:31:59 [krit]
Tav: you still need sth for the tags
20:32:06 [krit]
Tav: description is ok
20:32:32 [krit]
ed: could we use svg metadata element?
20:33:04 [krit]
20:33:14 [krit]
ed: just sth to put test meta data on
20:33:17 [krit]
ed: should be fine
20:33:24 [krit]
ChrisL: would be fine than
20:33:32 [krit]
ChrisL: peter should know what we want to do
20:34:00 [krit]
ed: metadata should replace meta eleemnt
20:34:20 [krit]
ed: link lists all the tags that break out
20:34:39 [krit]
ed: link element would get unknown svg element
20:34:49 [krit]
ed: needs verification
20:35:37 [krit]
ChrisL: so no other namespace, no head
20:35:42 [krit]
ChrisL: just metadata?
20:36:07 [ed]
20:36:38 [krit]
ed: I dont see a circle in opera and safaro
20:36:45 [krit]
s/safaro/safari
20:37:44 [ChrisL]
<!DOCTYPE html>
20:37:44 [ChrisL]
<svg>
20:37:44 [ChrisL]
<link/>
20:37:44 [ChrisL]
<circle r=200>
20:37:59 [krit]
ChrisL: I see the circle in FF with this example
20:38:04 [krit]
cyril: I don't
20:38:10 [thorton]
thorton has joined #svg
20:38:14 [ChrisL]
ff11beta6
20:38:34 [krit]
s/cyril/Tav/
20:38:42 [krit]
ed: I don't see a circle either
20:39:14 [krit]
Tav: now I see
20:39:44 [thorton]
thorton has joined #svg
20:39:52 [krit]
Tav: I put a space before the slash
20:39:53 [ed]
20:40:25 [krit]
cyril: we should move on
20:40:33 [krit]
ChrisL: we should have a decision
20:40:38 [krit]
ed: link is fine
20:41:13 [krit]
resolution: we will use the link element as proposed. Use metadata instead of head and meta
20:41:34 [ed]
example: instead of <meta name="flags" content="TOKENS" /> we will use <metadata name="flags" content="TOKENS" />
20:42:01 [krit]
Tav: what cameron sugeested with metadat
20:42:02 [krit]
a
20:42:11 [krit]
Tav: basically
20:42:25 [Tav]
20:42:32 [krit]
ed: something else on his proposal
20:42:41 [krit]
Tav: yes, the very last thing
20:42:47 [krit]
Tav: that we diagree too
20:43:12 [krit]
Tav: other issues with copy right… a list that ChrisL sent
20:44:06 [krit]
ed: i'll send a final proposal to the list
20:44:14 [ChrisL]
20:44:15 [krit]
ed: peter should review
20:44:26 [krit]
ChrisL: ed can you edit the wiki?
20:44:28 [krit]
ed: ok
20:44:50 [krit]
s/ed: i'll send a final proposal to the list/ed: Tav'll send a final proposal to the list/
20:45:03 [ChrisL]
s/ ed can you edit the wiki?/I am editing the wiki
20:45:55 [krit]
ChrisL: metadata for flags
20:45:55 [krit]
?
20:46:01 [ed]
<metadata name="flags" content="TOKENS" />
20:46:12 [krit]
Tav: the defs tag for the text description
20:46:30 [krit]
s/defs/desc/
20:46:47 [krit]
ed: should it be long or short descr?
20:46:53 [krit]
Tav: as long as you need/
20:47:31 [krit]
ChrisL: if we agree, than we ask peter
20:47:36 [krit]
Tav: ok
20:48:18 [krit]
Tav: what about copyright
20:48:25 [krit]
ed: use BSD for the suite
20:48:44 [krit]
ChrisL: put the license into one place and link to it
20:48:47 [krit]
Tav: I agree
20:49:14 [krit]
action: ChrisL will edit the wiki page and mention how to add license and copyright
20:49:14 [trackbot]
Created ACTION-3246 - Will edit the wiki page and mention how to add license and copyright [on Chris Lilley - due 2012-03-15].
20:49:26 [krit]
Tav: what about revision?
20:49:53 [krit]
Tav: we have version controll system, so no extra revisions
20:50:11 [krit]
ChrisL: we don't put the number in the file
20:50:18 [krit]
ChrisL: not productive to do that
20:50:28 [krit]
ChrisL: the same for the test frame
20:50:46 [krit]
ed: agree
20:51:21 [krit]
Tav: I tried to add the testing to the rep, but it failed
20:51:58 [krit]
ed: ChrisLdo you edit the wiki and mention what we do about revision?
20:52:05 [krit]
ChrisL: I will
20:53:18 [krit]
… discussion about problems for Tav to submit stuff
20:53:56 [krit]
ed: back to the requirements
20:53:58 [Zakim]
-Tav
20:54:39 [Zakim]
+Tav
20:55:09 [ed]
20:55:21 [krit]
ed: resolve accept the structure of tests
20:55:26 [ed]
agreed to the section: Structure after 8 March 2012 telcon
20:56:04 [krit]
ed: anyone against this template?
20:56:10 [krit]
silence
20:56:32 [krit]
resolution: Accept the testing template
20:56:51 [krit]
topic: Finishing SVG 2 Requirements
20:57:19 [cyril]
20:57:49 [krit]
ChrisL: we should drop xlink on <a>
20:58:09 [krit]
ed: xlink:title might make sense
20:59:01 [ChrisL]
s/xlink/xlink:role, xlink:arcrole and xlink:title/
20:59:22 [krit]
ChrisL: title element is better than xlink:title
20:59:29 [krit]
ChrisL: we should use title element
20:59:39 [krit]
ed: no strong objections
20:59:42 [cyril]
RRSAgent, pointer
20:59:42 [RRSAgent]
See
20:59:45 [krit]
ed: fine for ,e
21:00:18 [krit]
resolution: drop xlink attributes
21:00:42 [krit]
s/resolution: drop xlink attributes/resolution: drop xlink attributes role, arcole, title/
21:01:43 [ChrisL]
resolution: SVG2 will drop xlink attributes role, arcole, title
21:01:45 [krit]
resolution: SVG 2 will drop the xlink attributes role, arcole, title
21:01:59 [ChrisL]
rrsagent, here
21:01:59 [RRSAgent]
See
21:02:11 [ed]
21:02:43 [Zakim]
-ChrisL
21:02:44 [krit]
ChrisL: improve text is fine
21:02:59 [cyril]
21:03:23 [Zakim]
+ChrisL
21:04:01 [krit]
ed: we should keep it, but remove xlink:title from the spec
21:04:36 [krit]
resolution: SVG 2 will include Improved text for the indicating links
21:04:40 [ChrisL]
resolution: svg2 will port the test from svgt1.2
21:04:55 [ChrisL]
s/test/text/
21:05:07 [krit]
ed: next one
21:05:24 [krit]
new scripting functions
21:05:40 [krit]
s/function/features/
21:05:52 [ChrisL]
topic: Improved text for fragment identifiers link traversal
21:06:31 [krit]
ed: svg tiny is more simple
21:06:38 [krit]
ed: on media fragment
21:06:47 [krit]
ed: they are not incompatible
21:07:23 [ChrisL]
resolution: svg2 will merge the svg1.1se text and the svgt12 text on fragment identifiers link traversal
21:07:41 [ChrisL]
oh and add media fragments
21:07:58 [krit]
ChrisL: what about media fragments?
21:08:03 [krit]
ed: we should look at it
21:08:10 [krit]
ed: is part of the same feature
21:08:26 [krit]
ed: more the same thing
21:08:37 [krit]
ed: rephrase resolution?
21:08:46 [ChrisL]
action: chrisl to merge the svg1.1se and svgt1.2 fragment identifier text and consider adding in media fragments for partial images
21:08:46 [trackbot]
Created ACTION-3247 - Merge the svg1.1se and svgt1.2 fragment identifier text and consider adding in media fragments for partial images [on Chris Lilley - due 2012-03-15].
21:09:09 [krit]
ed: procsessing inline scripts
21:09:34 [ed]
21:09:42 [krit]
cyril: i expected that there were differences between svg 1.1 and 1.2 tiny
21:09:57 [krit]
ChrisL: html5 has similar things
21:10:09 [krit]
ChrisL: we should compatible to html5
21:10:18 [krit]
ed: reasonable
21:10:26 [krit]
ChrisL: we should look at it in detail first
21:10:31 [krit]
ChrisL: needs an action
21:11:02 [krit]
cyril: we already have a resolution for async,. sooo
21:11:40 [krit]
ed: the section on tiny is very short
21:11:55 [krit]
ed: you have to look at the type attribute and so on
21:12:03 [krit]
ed: there is nothing similar in SVG 1.1
21:12:53 [krit]
resolution: SVG 2 will define how inline scriptable content will be processed, in compatible way to HTML5
21:13:23 [ed]
21:13:28 [krit]
ed: new scriptiing features
21:13:46 [krit]
ed: we should port it over 'script element text'
21:15:42 [krit]
resolution: SVG 2 will merge SVG1.1SE and SVG 1.2 Tiny on script element text
21:16:19 [krit]
ed: next one change eirk to SVG WG and copy paste resolution on wiki page
21:16:25 [ed]
21:16:31 [krit]
s/eirk/erik/
21:17:34 [cyril]
RESOLUTION: SVG 2 will use the relevant parts from 1.2T and align with the html script element.
21:18:10 [krit]
ed: next is animation
21:18:11 [ed]
21:18:54 [ChrisL]
erik: svgt1.2 defines what happens when there are errors in a begin-value-list
21:19:37 [krit]
ed: tiny is more specific what to do on attribute for wrong content
21:21:16 [krit]
resolution: SVG 2 will apply changes of SVG 1.2 tiny on animation module
21:22:30 [krit]
resolution: SVG 2 will apply changes from SVG 1.2 tiny to the SVG animation section
21:22:58 [krit]
ed: fonts are no modification in tiny
21:23:12 [ed]
21:23:19 [krit]
ed extenisibility we have the xlink:href attribute on foreignObject
21:24:47 [krit]
krit: do we have the same rules like for iframe if we support xlink:href?
21:25:33 [krit]
ed: customers want to use it for customization
21:26:06 [krit]
s/for customization/for some magic things, but basically as a plugin frame/
21:26:43 [krit]
ed: it would be fine to have it. resolution?
21:27:31 [krit]
krit: so. should get an action to check security problems
21:27:43 [krit]
cabanier: most rules of iframe have to apply here as well
21:29:02 [krit]
resolution: SVG 2 will support xlink:href on fo element after security verification
21:29:19 [ed]
s/fo/foreignObject/
21:30:33 [krit]
action: ed will verify that xlink:href won't introduce security issues on foreignObject
21:30:33 [trackbot]
Created ACTION-3248 - Will verify that xlink:href won't introduce security issues on foreignObject [on Erik Dahlström - due 2012-03-15].
21:31:34 [krit]
10 items left + 11 without decision
21:32:24 [Zakim]
-ed
21:32:25 [Zakim]
-krit
21:32:25 [Zakim]
-ChrisL
21:32:29 [Zakim]
-Tav
21:32:39 [Zakim]
-cyril
21:32:40 [Zakim]
GA_SVGWG(SVG1)3:00PM has ended
21:32:40 [Zakim]
Attendees were +1.415.832.aaaa, +61.2.980.5.aabb, krit, cyril, +33.9.53.77.aacc, Tav, ed, ChrisL
21:33:18 [ed]
trackbot, end telcon
21:33:18 [trackbot]
Zakim, list attendees
21:33:18 [Zakim]
sorry, trackbot, I don't know what conference this is
21:33:26 [trackbot]
RRSAgent, please draft minutes
21:33:26 [RRSAgent]
I have made the request to generate
trackbot
21:33:27 [trackbot]
RRSAgent, bye
21:33:27 [RRSAgent]
I see 4 open action items saved in
:
21:33:27 [RRSAgent]
ACTION: erik will send a mail for time shift and time change [1]
21:33:27 [RRSAgent]
recorded in
21:33:27 [RRSAgent]
ACTION: ChrisL will edit the wiki page and mention how to add license and copyright [2]
21:33:27 [RRSAgent]
recorded in
21:33:27 [RRSAgent]
ACTION: chrisl to merge the svg1.1se and svgt1.2 fragment identifier text and consider adding in media fragments for partial images [3]
21:33:27 [RRSAgent]
recorded in
21:33:27 [RRSAgent]
ACTION: ed will verify that xlink:href won't introduce security issues on foreignObject [4]
21:33:27 [RRSAgent]
recorded in | http://www.w3.org/2012/03/08-svg-irc | CC-MAIN-2014-15 | refinedweb | 3,254 | 56.36 |
This is part of a series I started in March 2008 - you may want to go back and look at older parts if you're new to this series.
(I will be at the Brighton Ruby Conference on Monday July 20th if you're attending and follow my series or just want to talk about Ruby in general, I'm happy to have a chat)
First order of business today is to get rid of this abomination from last time:
# FIXME: ScannerString is broken -- need to handle String.new with an argument # - this again depends on handling default arguments s = ScannerString.new r = ch.__get_raw s.__set_raw(r)
Essentially we want to be able to do both
String.new and
String.new "foo",
and then inherit
String with
ScannerString, and use
super to pass a
String
argument straight through to the super class constructory, so we can change
the code above to:
s = ScannerString.new(ch)
This shouldn't be too hard: We pass the argument count to methods, and
def foo arg1, arg2 = "something" end
.. is equivalent to (pseudo code):
def foo args arg1 = args[0] if <number of args> == 2 arg2 = args[1] else if <number of args> == 1 arg2 = "something" else [here we really should raise an exception] end end end
The overheads Ruby imposes for all the nice high level stuff should start to become apparent now...
We don't have support for
raise yet, but we can at least output a message for the
else case too while
we're at it. Of course, the more arguments, the more convoluted the above will get, but it's not hard -
we have %s(numargs) to get at the argument count, and we should be able to do this pretty much just with
high level constructs.
My first attempt was something like this:
def foo arg1, arg2 = "default" %s(if (gt numargs 4) (printf "ERROR: wrong number of arguments (%d for 1)\n" (sub numargs 2)) (if (lt numargs 3) (printf "ERROR: wrong number of arguments (%d for 1)\n" (sub numargs 2)) (if (le numargs 3) (assign arg2 (__get_string "default")))) ) puts arg1 puts arg2 end
Spot the mistake? It is slightly tricky. The problem is not in this function itself, but in
how the arguments get allocated: Since the caller allocates space on the stack for the arguments,
if
arg2 is not being passed, then there's no slot for it, so we can't assign anything to it.
One way out of this is to allocate a local variable, and copy either the default value or
or the passed in argument to the local variable, and then rewrite all references to
arg2 to
local_arg2 in the rest of the function.
Another variation is to instead of doing the rewrite, change the
Function#get_arg method to
allow us to "redirect" requests for the argument afterwards. We can do this similarly to
how we reference
numargs, by treating them as negative offsets
We were getting ahead of ourselves. First lets update the parser. In 7bca9f3 I've added
this to
parse_arglist. Previously we just ignored the result of the call to the
shunting yard parser. Now we keep it, and treat the argument differently if we have a
default value:
default = nil if expect("=") nolfws default = @shunting.parse([","]) end if prefix then args = [[name.to_sym, prefix == "*" ? :rest : :block]] elsif default args = [[name.to_sym, :default, default]] else args = [name.to_sym] end
This change breaks a number of test cases, so there's also a series of updates to
features/parser.feature
To make this work, we also need to slightly change
transform.rb to actually handle
these arrays in the argument list (which brings up the question of whether or not
it might not be cleaner to modify the parser to store them uniformly, but lets leave
that for later).
def rewrite_let_env(exp) exp.depth_first(:defm) do |e| - args = Set[*e[2]] + args = Set[*e[2].collect{|a| a.kind_of?(Array) ? a[0] : a}]
Note that I'm not particularly happy with the approach I'm about to describe. It feels a bit dirty. Not so much the concept, but the implementation. I'd love suggestions for cleanups.
But first, there's a test case in f25101b
Next, we turn our attention to
function.rb.
Function and
Arg objects are instantiated
from the data in the parse tree, and we now need to handle the arguments with
:default
in them.
We start by simply adding a
@default in
Arg (In 1a09e19 ). You'll note I've also
added a
@lvar that is not being set immediately, as this value will depend on the
function object. But as the comment says, this will be used to let us "switch" this
Arg
object from referring to refer to a local variable that will effectively (and intentionally)
alias the original in the case of defaults.
class Arg - attr_reader :name, :rest + attr_reader :name, :rest, :default + + # The local variable offset for this argument, if it + # has a default + attr_accessor :lvar def initialize(name, *modifiers) @name = name # @rest indicates if we have # a variable amount of parameters @rest = modifiers.include?(:rest) + @default = modifiers[0] == :default ? modifiers[1] : nil end
In
Function, we add a
@defaultvars that keeps track of how many extra local variable
slots we need to allocate. In
Function#initialize we change the initalization of the
Arg objects, and once we've created it, we assign a local variable slot for this argument.
+ @defaultvars = 0 @args = args.collect do |a| - arg = Arg.new(*[a].flatten) + arg = Arg.new(*[a].flatten(1)) + if arg.default + arg.lvar = @defaultvars + @defaultvars += 1 + end
We then initialize
@defaults_assigned to false. This is a new instance variable that will
act as a "switch". When it is false, compiling request for an argument with a default will
refer to the argument itself. So we're not yet aliasing. At this stage, we need to be careful -
if we read the argument without checking
numargs, we may be reading random junk from the stack.
Once we switch it to
true, we're telling the
Function object that we wish to redirect requests
for arguments that have defaults to their freshly created local variables. We'll see how this
is done when changing
Compiler shortly.
Let's now look at
get_arg and some new methods, in reverse order of the source,
Function#get_arg
first:
def get_arg(a) # Previously, we made this a constant for non-variadic # functions. The problem is that we need to be able # to trigger exceptions for calls with the wrong number # of arguments, so we need this always. return [:lvar, -1] if a == :numargs
The above is basically a bug fix. It was a short-sighted optimization (I knew there was a reason why I don't like premature optimizations, but every now and again they are just too tempting) which worked great until actually getting the semantics right.
And this is the start of the part I'm not particularly happy with:
r = get_lvar_arg(a) if @defaults_assigned || a[0] == ?# return r if r raise "Expected lvar - #{a} / #{args.inspect}" if a[0] == ?# args.each_with_index do |arg,i| return [arg.type, i] if arg.name == a end return @scope.get_arg(a) end
Basically, if the
@defaults_assigned "switch" has been flipped, or we explicitly request
a "fake argument", we call
get_lvar_arg to try to look it up. If we find it, great. If not,
we check for a normal variable. We feel free to throw an exception for debugging purposes if
we're requesting a "fake argument", as they should only ever be created by the compiler itself,
and should always exist if they've been created. The "fake argument" here is "#argname" if the
arguments name is "argname". I chose to use the "#" prefix as it can't legally occur in an
argument name, and so it's safe. It's still ugly, though.
Next a helper to process the argument list and yield the ones with defaults, and "flip the switch" when done:
def process_defaults self.args.each_with_index do |arg,index| # FIXME: Should check that there are no "gaps" without defaults? if (arg.default) yield(arg,index) end end @defaults_assigned = true end
Last but not least, we do the lookup of our "fake argument":
# For arguments with defaults only, return the [:lvar, arg.lvar] value def get_lvar_arg(a) a = a.to_s[1..-1].to_sym if a[0] == ?# args.each_with_index do |arg,i| if arg.default && (arg.name == a) raise "Expected to have a lvar assigned for #{arg.name}" if !arg.lvar return [:lvar, arg.lvar] end end nil end
in
Compiler#output_functions (still in 1a09e19), we add a bit to actually handle
the default argument:
- @e.func(name, func.rest?, pos, varfreq) do + @e.func(name, pos, varfreq) do + + if func.defaultvars > 0 + @e.with_stack(func.defaultvars) do + func.process_defaults do |arg, index| + @e.comment("Default argument for #{arg.name.to_s} at position #{2 + index}") + @e.comment(arg.default.inspect) + compile_if(fscope, [:lt, :numargs, 1 + index], + [:assign, ("#"+arg.name.to_s).to_sym, arg.default], + [:assign, ("#"+arg.name.to_s).to_sym, arg.name]) + end + end + end + + # FIXME: Also check *minium* and *maximum* number of arguments too. +
This piece basically allocates a new stack frame for local variables, and then iterates
through the arguments with defaults; we then compile code equivalent to
if numargs < [offset]; #arg = [default value]; else #arg = arg; end where "arg" is the name of the current argument.
What should have been utterly trivial, given the above, turned into a couple of annoying debugging sessions, due to bugs that were in fact exposed once I started checking the argument numbers...
But let us start with the actual checks:
+ minargs = func.minargs + + compile_if(fscope, [:lt, :numargs, minargs], + [:sexp,[:call, :printf, + ["ArgumentError: In %s - expected a minimum of %d arguments, got %d\n", + name, minargs - 2, [:sub, :numargs,2]]], [:div,1,0] ]) + + if !func.rest? + maxargs = func.maxargs + compile_if(fscope, [:gt, :numargs, maxargs], + [:sexp,[:call, :printf, + ["ArgumentError: In %s - expected a maximum of %d arguments, got %d\n", + name, maxargs - 2, [:sub, :numargs,2]]], [:div,1,0] ]) + end
These should be reasonably straightforward:
Function#minargsshortly), we
printfan error.
printfanother error - unless there's a "splat" (*arg) in the argument list, in which case the number of arguments is unbounded.
int 1or
int 3, which are intended for debugging, but this is temporary until implementing proper exceptions, and I've not surfaced access to the
intinstruction to the mid-layer of the compiler, so this is a lazy temporary alternative.
In
Function (
function.rb), this is
#minargs and
#maxargs:
+ def minargs + @args.length - (rest? ? 1 : 0) - @defaultvars + end + + def maxargs + rest? ? 99999 : @args.length + end
(We could leave #maxargs undefined for cases where
#rest? returns true, but really, we could
just as well try to set a reasonable maximum - if it gets ridiculously high, it likely indicates
a messed up stack again)
One thing that I wasted quite a bit of time with when adding the min/max argument checks was that this triggered a few lingering bugs/limitations of the current register handling.
The main culprit was the
[:div, 1, 0] part. As it happens, because this forces use of
%edx (if you
remember, this is because on i386,
idivl explicitly depends on
%edx), which is a caller saved
register, our lack of proper handling of caller saved registers caused spurious errors.
The problem is that
Emitter#with_register is only safe as long as nothing triggers forced evictions
of registers within the code it uses. This code is going to need further refactoring to make it safer.
For now I've added a few changes to make the current solution more resilient (at the cost of sometimes
making it generate even less efficient code, but it should long since have been clear that efficiency
is secondary until everything is working).
The most important parts of this change is in
Emitter and
RegAlloc (in 3997a31)
Firstly, we add a
Emitter#caller_save, like this:
def caller_save to_push = @allocator.evict_caller_saved(:will_push) to_push.each do |r| self.pushl(r) @allocator.free!(r) end yield to_push.reverse.each do |r| self.popl(r) @allocator.alloc!(r) end end
This depends on an improvement of
#evict_caller_saved that returns the registers that needs to be
saved on the stack, because they've been allocated "outside" of the variable caching (for cached
variables, we can "just" spill them back into their own storage locations, and reload them later,
if they are referenced again). We then push them onto the stack, and explicitly free them, yield
to let the upper layers do what it wants (such as generate a method call), and pop them back off
the stack.
There's likely lots to be gained from cleaning up the entire method call handling once the dust settles and things are a bit closer to semantically complete - we'll get back to that eventually, I promise.
The new version of
evict_caller_saved looks like this:
def evict_caller_saved(will_push = false) to_push = will_push ? [] : nil @caller_saved.each do |r| evict_by_cache(@by_reg[r]) yield r if block_given? if @allocated_registers.member?(r) if !will_push raise "Allocated via with_register when crossing call boundary: #{r.inspect}" else to_push << r end end end return to_push end
Basically, we iterate through the list of caller-save reigsters, and if we've indicated we intend to push the values on the stack, we return a list of what needs to be pushed. Otherwise we raise an error, as we risk losing/overwriting an allocated register.
You will see
Emitter#caller_save in use in
compiler.rb in
#compile_call and
#compile_callm,
where it simply wraps the code that evaluates the argument list and calls themselves.
Other than that, I've made some minor hacky changes to make
#compile_assign,
#compile_index and
#compile_bindex more resilient against these problems.
The last, missing little piece is that it surfaced a bug in the handling of numargs with splats.
In particular, once I added the min/max checks, the call to a class's
#initialize triggered
the errors, because this is done with
ob.initialize(*args) in
lib/core/class.rb, and it didn't
set numargs correctly.
I'm not going into this change in detail, as it's a few lines of modifications to
splat.rb,
compiler.rb and
emitter.rb. The significant part of the change is that the splat handling
now expects to overwrite
%ebx, as it should, and
Emitter#with_stack will now set
%ebx
outright unless it is called after the splat handling, in which case it will add the number
of "fixed" arguments to the existing "splat argument count" in
%ebx.
(We'll still need to go back and fix the splat handling, as we're still not creating a proper array, and the current support only works for method calls)
While working on this, it struck me that a future optimization to consider is different entry-points
for different variations of the method. While the caller does not really "know" exactly which method
will be called, the caller does know how many arguments it has, and so knows that either there is
an implementation of a method that can take that number of arguments, or it will trigger an
ArgumentError
(or should do when we add exceptions).
Thus, if we allocate vtable slots based on the number of provided arguments too, then static/compile-time lookups will fail for method calls where no known possible combination of method name / number of arguments is known at compile time, and it will fall back to the slow path (which we've not yet implemented).
The question of course is how much it would bloa the vtables. However this approach can also reduce the cost of the code, as we can jump past the numargs checks for the version where all arguments are specified etc.
Another approach is to decide on rules for "padding" the argument space allocated, so that the most common cases, of, say 1-2 default arguments can be supported without resorting to extra local variables. This saves some assignments.
... we'll fix
super. Well, add it, as currently it ends up being treated as a regular method call. | https://hokstad.com/compiler/37-default-arguments | CC-MAIN-2021-21 | refinedweb | 2,690 | 61.67 |
What is ForEach?
"ForEach" is a loop like "For", "While" etc. When you are using foreach loop you do not need to worry about where the collection starts and where it ends. These are taken care of by the foreach loop itself. You might have already of by the foreach. Now, let us move on to how can we implement such a foreach for our collection class.
The Goal
Let us implement the foreach loop construct with the Order Class. The Order class has a collection of Products that participate in the order identified by the product id. Our goal is to provide support for foreach on the Order class so that the Client (User of your class) can iterate through the Products.
IEnumerable and IEnumerator
Both interfaces are exposed by the .Net framework. If you want to support "foreach" then your collection participants should implement these contracts. IEnumerable tells that your class's internal collection (has-a with array, list etc) can be Enumarable by the class, which implement IEnumerator. It might be a little confusing now; come back here and read this paragraph once again when you have finished reading this write-up.
The Product Class
The class is self-explanatory and hence I am leaving it for you to understand yourself.
//001: Product Class.
public class Product
{
private int ProductId;
private string ProductName;
public Product(int id, string Name)
{
ProductId = id;
ProductName = Name;
}
public override string ToString()
return ProductName;
}
The Order class
This is our collection class. From client (calling code) point of view the Order has Collection of Products. If I have an order object and I do have list of products placed on that order. We are going to support foreach construct for this order class. So this Order class should implement the IEnumerable. IEnumerable contract expects to implement a function called: GetEnumerator and an object that implements IEnumerator. Also it expects that create a fresh instance of IEnumerator implemented object and return that back to the caller.
Let us start implementing the Order class. First, as already told Order is collection of Products. So an array of product is declared. The index is just for iterating through the Products collection, which we will see later here.
//002: Order Class is Enumerable to support ForEach loop
public class Order:IEnumerable
private Product[] products;
private int OrderID;
private int index = 0;
Constructor of the order class takes order id and variable number of parameter of type Product. Note down the usage of the keyword params, which allows the client to pass any number product instance. Next these variable numbers of arguments (treated as array inside the constructor/function) are copied to the member of the Order class. Note the usage of foreach when we do the copy. This is working as array implemented it & we are going to the same for our custom class Order. Below is the constructor:
//003: Constructor for Order Class. Initialize the Products
public Order(int ordid, params Product[] col_of_products)
products = new Product[ col_of_products.Length ];
//003_1: The built-in array supports foreach. Our aim is to provide the ability of this
// of Foreach loop for Order Class
foreach (Product prod in col_of_products)
products[index] = prod;
index++;
//003_2: Initialize order. However not required for this example
OrderID = ordid;
As the order class agreed to the contract IEnumerable supplied by the .Net framework, we need to implement the GetEnumerator function. This function should return an object that implements (agreed to the .Net built-in contract IEnumerator) the IEnumerator interface. Note that always create instance and return back. Below is the implementation of the contract function:
//004: Implement the contract expected by IEnumerable.
// Note: Always return a new Instance. Do not confuse with Reset.
// It will be invoked in COM Interops as Per MSDN
public IEnumerator GetEnumerator()
return new OrderIterator(this);
Now we will go ahead and implement the class OrderIterator. Yes, your guess is correct, we will implement IEnumerator. Let us move to OrderIterator Class. I am going to keep this class as inner class as it is absolutely not necessary for the client to know what kind implementation it has. But, still you can go ahead and keep it as a separate class.
Inner class is worth a separate article. But as we are going to use here, below is the short note.
1) Inner class is part of some other class, and it is defined inside containing class. In our example the containing class is Order.
2) Outer class can access inner class members after creating an instance of it.
3) It is not possible to create the Instance of outer class inside the inner class.
4) The only way inner class to access the outer class member is either through inheritance or through by having a reference to it (Passed as parameter).
5) The client can know about outer class, but inner class implementation is hidden.
The OrderIterator inner class
This inner class implements the IEnumerator interface. To fulfill the contract MoveNext(),Reset()function and Current property needs to be implemented. However, the Reset() function is useful in the COM interop, as it is part of the contract an implementation required.
The Order class variable is declared in this class (Do not confuse with point(3), look at point(4)) to access the Products collection. The variable itr_index is used to refer the each element in the Products array. Below is the class with the data members:
//005: Inner Class for the Iterator (Enumerator).
private class OrderIterator: IEnumerator
private Order order;
private int itr_index = -1;
The constructor takes object of Order class as parameter and stores it. Note that we are not creating the Order object here, we are just referring the already created object passed in. The constructor is shown below:
//005_1: Constructor for the Enumerator. Have a reference to the IEnumarable class that requires an Enumerator.
public OrderIterator(Order order)
this.order = order;
In the contract function MoveNext, we will move our index towards the left of the Products collection. That is; just increment the itr_index and after check and return the location is valid or not. Below is the contract function:
//005_2: Contract Expected by IEnumerator. As return type is bool, it is our responsibility to say 'You can not move
// next further'.
public bool MoveNext()
//Move to the First Location.
itr_index++;
if ( itr_index < order.products.Length )
return true;
else
return false;
And, the Reset function reset the itr_index. Simple.
//005_3: Contract Expected by the IEnumerator. Reset. No explanation required. Dot
public void Reset()
itr_index = -1; //Let the Movenext start from the beginning
The last contract that we need to finish-off is implementing the property Current. Note that the property is expected as read-only by the IEnumerator. As our itr_index is in the right position (Because the framework calls the MoveNext before accessing Property Current) the implementation just returns the Product by picking it from Products collection using the itr_index. Below is the implementation for property Current:
//005_4: Contract expected by IEnumerator. Implement the read-only property
public object Current
get
return order.products[itr_index];
}
What we have done so for
1) We created a class Order that maintains a collection of Products. As the Order Collection class implements the IEnumarable it allows iteration of Products possible.
2) The Product class is the single entity that will be collected in the Order class [To have a meaningful order]
3) We created an inner class for Order, which does the processing of iteration on the Products collection. Of course it implements the IEnumerator.
That's all we are done. Take a break and go for coffee before we go on testing the foreach loop construct for our Order Collection class.
The client Code
The client code first creates six products and then places them in the Order class. Then client uses the foreach to iterate through all the Product in a particular order. Below is Client Code:
//Client 001: Create the Products
Product p1 = new Product(1001, "Beer");
Product p2 = new Product(1002, "Soda");
Product p3 = new Product(1003, "Tea");
Product p4 = new Product(1004, "Coffee");
Product p5 = new Product(1005, "Apple");
Product p6 = new Product(1006, "Grapes");
//Client 002: Create the Order Class, that has the collection product that is order placed for these products.
// Look at the Constructor. The Params stands for variable number of arguments collected as
// array
Order new_order = new Order(1001, p1, p2, p3, p4, p5, p6 );
//Client 003: Let us go to the Last step of Testing the ForEach Loops.
foreach(Product ordered_item in new_order)
if (ordered_item != null )
Console.WriteLine("Product in Order: " + ordered_item);
How the foreach works for our collection class
Refer the below picture:
1) The execution first reach at new_order. At this we know that new_order is object of Order class. And order class implemented IEnumerable. So, first we will get the Object that Implemented the IEnumerator using IEnumerable's GetEnumerator() function.
2) Next we know it is foreach and a call to MoveNext takes place.
3) Next, it is time for "ordered_item in" Current property is accessed and we get the Single entity that is; Product in ordered_item.
The numbered notation is shown below to explain the flow in the foreach:
1=>2=>3=>2=>3=>2 until all the items are iterated
Number is the Picture represents the following:
Number 1: Execution reaches to Collection class Instance. That is; Order
Number 2: Execution reads the foreach keyword and makes a call to MoveNext
Number 3: Execution reaches Product order_item in and make a call to Current property, pick-up the value, assigns it to ordered_item.
That's all. See you all in the Next article.
Delegates in C#
Out and Ref Parameters in C# | http://www.c-sharpcorner.com/uploadfile/6897bc/support-foreach-for-your-collection-class/ | CC-MAIN-2014-10 | refinedweb | 1,618 | 55.95 |
Subject: Re: [boost] Checking interest in to_string functions
From: Antony Polukhin (antoshkka_at_[hidden])
Date: 2011-10-13 12:25:06
2011/10/13 Vicente Botet <vicente.botet_at_[hidden]>:
> In particular I want to use them in Boost.Chrono. Of course, in the mean
> time I'm using lexical_cast. Have you already a clear idea of the namespace
> of these functions? boost? boost::string_conversions?
Have no idea about the namespace name. I`ll think about it later.
Function overloads for boost::containers::basic_string will be in
namespace boost::containers (to allow ADL), and will be imported in
the namespace of the to_string library.
> IMO, I would prefer just boost, as these are standard functions.
Some of the boost users write "using namespace boost; using namespace
std;" in their projects, so I think it would be better to place thous
functions in a separate namespace to avoid ambiguity.
Best regards,
Antony Polukhin
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2011/10/186883.php | CC-MAIN-2020-45 | refinedweb | 172 | 68.26 |
Cargotalk JULY 2011 | ANNUAL ISSUE
On the growth trail Industry expects huge demand from domestic and international markets
Vol XI No. 8 Pages 86 Rupees 50 cargotalk.in A DDP Publication
SOUTH ASIA’S LEADING CARGO MONTHLY No.1 in Circulation & Readership
contents July 2011 DEPARTMENTS National News Schenker India organises customer meet 2011 at Coimbatore
8
JBS Academy holds “Certificate Distribution”
8
Image Logistics to 10 open office in Singapore Three Aces Global Logistics 10 represents Conqueror TNT Express starts 12 freighter service between India and Europe Shreeji to continue bonded 14 trucking from IGIA to customs notified areas in India
International News Emirates starts direct 17 Dubai Geneva services
International Airport Abu Dhabi International 20 Airport: Positioning itself as strategic hub in ME, Indian carriers to come
International Airlines Etihad Crystal Cargo grows 24 by 61% in 2010, India remains among top three markets in the world
Family Album Martinair Cargo starts 26 operations from Delhi Calcutta Air Cargo Club 28 celebrates Poila Baishakh A year of revival: 30 Glimpses of year 2010-11
4 cargotalk JUly 2011
Publisher: SanJeet Editor: Rupali Narasimhan Sr. Assistant Editor: Ratan Kumar Paul Assistant Editor: Ipshita Sengupta Nag General Manager: Gunjan Sabikhi Sr. Manager Advertising: Harshal Ashar Sr. Manager Marketing: Rajiv Sharma Asst. Manager Marketing: Roland Dias Marketing Co-ordinator: Gaganpreet Kaur Designer: Parinita Gambhir Advertisement Designer: Vikas Mandotia Production Manager: Anil Kharbanda Circulation Manager: Ashok Rana Durga Das Publications Pvt. Ltd.: P.O. Box 9348, Saif Zone, Sharjah, UAE Tel.: +971 6 5573508 Fax: +971 6 5573509 Email: uae@ddppl.com CARGOTALK is a publication of Durga Das Publications Private Limited. All information in CARGOTALK CARGOTALK. However, we wish to advice our readers that one or more recognized CARGOTALK. CARGOTALK is printed & published by SanJeet on behalf of Durga Das Publications Private Limited. and is printed at Cirrus Graphics Pvt. Ltd., B-62/14, Phase-2, Naraina Industrial Area, New Delhi – 110028 and is published from 72 Todarmal Road, New Delhi – 110001.
contents July 2011
Opinion
UTi Walkathon for relief in 32 Japan and New Zealand
New look to logistics
Shipping & Ports
Albeit cargo and logistics industry in India is in the nascent stage, there is tremendous possibility of positioning itself as a mature and strong sector. The presumption is not prompted by the robust Indian economy and increasing trade volumes, both in domestic and international fronts, but by the emergence of enlightened human resources for this sunrise sector. Of course, there is a huge gap between demand and supply of skilled manpower for the logistics industry at present. Nevertheless, the industry is now able to woo young and dynamic entrepreneurs and professionals, who are capable of giving a new shape to this largely unorganised sector.
Mangalore Port registers 64 handling of more than 10,000 TEUs Traffic handled at major ports 64 during April-March 2011 Mundra Port completes 65 acquisition of Coal Terminal at Australia
Express Cargo Blue Dart expands 78 ONE RETAIL footprints
Interestingly, majority of those entrepreneurs and professionals are coming from different backgrounds and are entering into this segment with a new outlook. Simultaneously the second or third generations who are holding their family business are constantly exploring new possibilities to transform the traditional business into globally competitive business houses.
CoLUMNS young and Emerging Profiles of young entrepreneurs 44 and professionals
The moot question, however, remains unresolved. Will the generation next receive the required support from the present and forthcoming policy makers and executors? The government has been harping on modernisation of the infrastructure and streamlining the processes for quite some time. Unfortunately, nothing has changed so far to do away with the age-old bottlenecks. It would be a great tragedy if the prospective young logistics professionals are dissuaded by the apathy of the authorities concerned. It is crucial to have a collaborative approach and progressive mindset from the government, regulating bodies, custodians and other industry stakeholders. There should be some mechanisms in place so thtat young voice can be heard for the greater interest of the industry and the country.
Infrastructure Update Cargo activities at AAI airports: 56 Impetus on non-metro airports for seamless growth
Study and Survey Six key principles for 68 logistics service providers to grow fast
Guest Column Logistics Industry in India: 80 The sunrise sector requires skilled manpower
CovER SToRy
Rupali Narasimhan Editor
24. 6 cargotalk JUly 2011
National News New Launch
JBS Academy holds “Certificate Distribution” the intricacies of what they execute in their day to day working.
SCheNkeR INDIA oRgANISeS CuSToMeR MeeT 2011 AT CoIMBAToRe Schenker India recently organised Customer Meet 2011 in Coimbatore. The event was attended by representatives from trade, airlines, shipping lines and partners of the company. Present on the occasion, Christian Nebel, managing director, Schenker India, spoke about the journey of the company from 9 offices to 32 offices, 150 employees to 1200 + employees and the turnover of over Rs 10 billion in 2011.
8 cargotalk JUly 2011
“In this region, we have invested in facilities like IT, infrastructure and importantly in human resource, in the last few years and will continue to do so in the future,” he said. According to Nebel, with the enhanced capacity in air and ocean freight and improved service level in contract logistics, DB Schenker Logistics is all set to serve its customers even more efficiently in the region.
Lipika Majumdar Roy Choudhury (third from left) at the JBS Academy
Recently JBS Academy organised three Certificate programmes: Certificate Course in Freight Forwarding, Certificate Course in Custom Clearance Skill sets for a Better you! The course was conducted by various faculties, each being a working professional with more than 7/8 years working experience.A series of tutorials were also held, enabling participants to understand
At the end of the programmes, a written exam was conducted to assess the competence level of the participants. The Certificates to 39 participants were distributed by the chief commissioner of Customs Gujarat Zone, Lipika Majumdar Roy Choudhury at the JBS Academy, Ahmedabad. J B S A c a d e my h a s a l re a dy announced a follow up programme f o r a l l t h e ex i s t i n g c o u r s e s . In the last 8 months, JBS Academy conducted over 28 programmes to train more than 650 participants. It also included training f or port personnel, custodians, exporters, importers, CHAs , steamer agents, freight forwarders, IATA agents etc.
National News New Launch
ThREE ACES GLobAL LoGISTICS REPRESENTS CoNqUERoR Three Aces Global Logistics, has been selected as the exclusive representative, for Conqueror Freight Network, in New Delhi. Three Aces Global Logistics offers comprehensive international logistics services, including CHA, outbound and inbound air freight, outbound and inbound sea freight, door to door shipments and MTO services. Conqueror, which began accepting applications last September and launched operations in January, is choosing one strong forwarder to act as a “virtual branch” in each of-forwarders,” says Antonio Torres, founder of the Madrid-based Conqueror group. The network is now looking for qualified members in Cochin, Kolkata, Vadodara, Coimbatore, Hyderabad, Jaipur and Kanpur.
Image Logistics to open office in Singapore In a bid to strengthen its international network, Image Logistics will open an office in Singapore by July 2011. In the meanwhile, the company has launched its overseas offices in hong Kong, China (Shenzhen and Guangzhou). In India Image Logistics has launched a new branch office in Kolkata, started warehousing and distribution service, self custom clearance in Delhi and put
10 cargotalk JUly 2011
special focus on multi modal transportation. According to Amit Chakraborty, MD, Image Logistics, the company registered a 30 per cent growth in 2010-11 and in 2011-12 it is expecting about 100 per cent growth, as there are lot of new additions in service and infrastructure. “With more impetus on warehousing and distribution, multimodal solutions and increasing client base and consolidation services to/
from hong Kong/Singapore/Germany and The Netherland, we are confident to achieve our target,” said Chakraborty. he also shared that special emphasis would also be on garments exporters and garments market to Europe and USA. “by the year end if everything goes as per plan then we would set up an office in the US. our focus will be on nominations and tenders,” he added.
National News New Launch
TNT Express starts freighter service between India and Europe
T
NT E xp ress ha B767 freighter service has a weekly capacity of 210 tonne. According toAbhik Mitra, managing director, TNT India, the new service will enable the company’s customers to enjoy faster transit times, as well as improved control and visibility over shipments moving between India and Europe. He also informed that shipments depart from New Delhi at the end of each working day with an assurance to arrive in Europe before the start of the next working day. The return flight allows TNT to collect and uplift shipments from Europe on the same day.
12 cargotalk JUly 2011
The service will run five times a week between New Delhi andTNT’s European air hub in belgium, with a stopover in Dubai on the way back to India. CT bureau
Mitra underlined that TNT India recorded year-on-year growth of more than 20 per cent. “The addition of a dedicated TNT freighter from India will enable our customers to become even more competitive due to faster factoryto-market lead times and improved efficiency,” he emphasised. Mitra asserted that with the frequency of TNT’s service, it would now take just one day for shipments
to reach Europe from New Delhi. ,” informed Michael J Drake, regional managing director, TNT Express Asia Pacific.
National News New Launch
ShREEJI To CoNTINUE boNDED TRUCKING FRoM IGIA To CUSToMS NoTIFIED AREAS IN INDIA The leading bonded trucking s e r v i c e p ro v i d e r, S h r e e j i Tr a n s p o r t S e r v i c e s , h a s obtained the renewal of bonded trucking services for air import c a rg o b e t w e e n a i r c a rg o complex at IGI Airport, New Delhi and all other customs notified Indian ACCs, CFSs and Rupesh Shah ICDs. With a notification by The Office of the Commissioner of Customs (Import and General), this permission has been renewed for three years from the date of acceptance of bond. According to Rupesh M Shah, director, Shreeji Transport Services, the renewal would help their customer in a big way to fulfill the requirement of bonded trucking from this gateway airport. Currently, some 16 airlines are utilising Shreeji’s bonded trucking facilities from Delhi IGI Airport.
14 cargotalk JUly 2011
PhDCCI CoNFeReNCe DeMANDS INDuSTRy STATuS To wARehouSINg The National Conference called Smart Supply Chains 2011, which was organised by the PhDCCI on ay 27 in New Delhi, demanded industry status to warehousing industry and recommended warehouse design and development as part of the town planning. By releasing a study paper, which has been prepared by Tata Consultancy Services, the
organisation also urged for liberalisation of FDI in retail, continuation of APMC reforms, collaboration and adoption of common supply chain standards and IT systems and skill development for strengthening the logistics and supply chain industry in India. The conference was attended by several leading logistics companies and corporate shippers.
National News Trade Associations
ACAAI-Western Region organises seminar on air way bill and liability regime
J
The Air Cargo Agents Association of India (ACAAI) Western Region recently organised a seminar on “Air Way bill & Liability Regime,” in Mumbai that was well attended by the cross section of the air cargo community of Mumbai. CT bureau
.Krishnan, President, ACAAI, who inaugurated the seminar said “Regulations related to liability have undergone major changes in the recent past and it is time for the community to know more about it”. He also appreciated the efforts put in by the ACAAI-Western Region in enhancing the knowledge and skills of the community by organising various training and seminars of this nature. “Article 24 of the Montreal Convention 1999 sets out the process for a periodic review and revision as necessary of the limits of liability in order to protect the interests of consumers in international carriage by air and the need for equitable compensation based on the principle of restitution and considering the global inflationary trends, the liability is likely
to witness a steep upward revision. Hence the forwarders while executing air waybills, specially the house air waybills (HAWBs) must be absolutely aware of the
consequences if things do not go correctly,” cautioned B.Govindarajan, chief operating officer, Tirwin Management Services, which is a Chennai based training and consulting firm. Talking on the sidelines after the seminar several ACAAI-WR members expressed the requirement of more such initiatives in the near future. Firdos Fanibanda, chairman and Afzal Malbarwala, secretary, ACAAI-WR, said that the response to the seminar was quite encouraging and that they are planning to have seminars on the same line on various topics of interest and necessity to the air cargo community. Many of the delegates expressed that other regions should follow ACAAI-WR and arrange for this type of seminars that will benefit their colleagues.
16 cargotalk JUly 2011
International News News in Brief
EMIRATES STARTS DIRECT DUbAI-GENEvA SERvICES Emirates has linked two renowned international hubs by launching flights between Dubai and Geneva. The Dubai-Geneva service will be operated by a combination of Boeing 777-200LR and Boeing 777300ERs. EK 089 will leave Dubai each day at 0855hrs and arrive in Geneva at 1345hrs. From Geneva, EK 090 will depart at 1515hrs, arriving in Dubai at 2330hrs. Each 777 operating the Geneva route will be able to carry 15-20 tonne of cargo. Much of the capacity out of Geneva is expected to be used by the luxury sector, one of the top employers in the Geneva region and responsible for 60 per cent of its exports. The cargo manifest for first Geneva-Dubai flight includes fashion items, perfumes, pharmaceuticals, electronics and relief goods. The inaugural flight’s VIP delegation aboard included Tim Clark, president Emirates Airline; Ram Menen, divisional senior vice president, cargo, Emirates and Salem Obaidalla, senior vice president, commercial
An official delegation prepares to depart on the inaugural flight at Dubai Airport
operations, Europe & Russian Federation, Emirates..
cargotalk
JUly 2011
17
International News News in Brief
e-FReIghT CooPeRATIoN BeTweeN SChIPhoL AND INCheoN AIRPoRTS STReNgTheNINg The e-freight collaboration between Amsterdam Airport Schiphol’s cargo division and Incheon Airport continues, with the staging of e-freight workshop at Schiphol. This followed a similar event held recently at Incheon. The two workshops are a direct result of the Mou between Amsterdam Airport Schiphol and Incheon InterSaskia van Pelt national Airport Corporation, signed in March 2011, in which the major cargo gateways pledged to cooperate in the promotion of “e-freight”, as a means of facilitating business between them. The Mou provides for the exchange of e-freight knowledge and expertise between the two hubs, and paves the way for stimulating the use of e-freight between, and from, both Schiphol and Incheon. The work-
18 cargotalk JUly 2011
shop examined ways in which the cargo community at Schiphol can work together to increase the take-up of e-Freight, paperless airfreight between Incheon and Amsterdam. A number of bottlenecks were identified during the session, such as the need to improve the quality of communication. Further opportunities to replace paper documents with electronic versions also emerged. Another workshop will be held in Incheon in August. “The meeting was very successful in connecting all parties in the supply chain, and creating a better understanding of how we can improve the quality of data transmitted. Some of the current obstacles to e-freight have proved very easy to resolve, and we are confident that we will see a steady increase in paperless shipping between Amsterdam and Incheon in the coming months,” said Saskia van Pelt, business development director, Schiphol Cargo. The Schiphol event was attended by representatives from the two airports, along with executives from korean Air, Air France kLM, Dutch Customs, IATA, handling agents and several logistics providers with bases at Schiphol.
International Airport Terminal Update
Abu Dhabi International Airport Positioning itself as strategic hub in ME, Indian carriers to come
A
Abu Dhabi Airport Company (ADAC), which has tremendous potential to be a lucrative transit hub of the Middle East, saw 16 per cent growth in 2010. huraiz Al Mur bin huraiz, chief commercial officer, ADAC, spoke to Cargo Talk about the growing importance of the airport. Ratan Kr Paul
t Abu Dhabi International Airport, ADCC or the Abu Dhabi Cargo Company operates a cargo terminal of over 35,000 sq meters, which includes around 10,000 sq meters of perishable storage facilities. This includes a new 10,000 sq meter extension to the old cargo building which is currently used to handle outbound Etihad freighters and trucks. ADCC is the sole cargo handler and a subsidiary of Abu Dhabi Airports Company (ADAC) tasked with the responsibility of providing cargo warehouse services to all customer airlines operating out of Abu Dhabi International Airport. In 2010, Abu Dhabi International Airport registered 16 per cent increase in cargo, with a total of 438,000 tonne handled. In 2011, this growth trend was carried out throughout the first quarter, with 116,474 tonne handled to achieve 16 per cent growth. At present over 35 airlines operate out of Abu Dhabi International Airport, which include prominent carriers such as Etihad Airways (the national carrier of the UAE), British Airways, Lufthansa, AF/KLM and other leading airlines. In addition to these, China Airlines and Etihad also operate freighter services. 20 cargotalk JUly 2011
Commenting on the special arrangements made for the airlines so that they would be interested in using Abu Dhabi Airport as transit point, Huraiz Al Mur Bin Huraiz said, “Abu Dhabi airport is the hub of Etihad Airways. Accordingly, it is very important for us to provide speed, reliability and efficiency in enabling connections while maintaining accuracy.� In this respect, a whole range of activities have been undertaken, which includes anticipating the manpower requirement based on schedules. The services range
Huraiz Al Mur Bin Huraiz
International Airport Terminal Update
CARGO VILLAGE AND GREEN INITIATIVES ADAC has a Cargo Village measuring approximately 9000 sq meters with an airside access and warehouses for prominent freight forwarders. In addition, Abu Dhabi Airports Company is developing “SkyCity” which will be a logistics park providing one stop services under a single roof. ADAC’s will develop a comprehensive logistics related “green air” policy, part of the preparations to move into a state of the art East Midfield Terminal.
from three months to 48 hrs to departure, together with real time management of resources and information. “In addition, we work closely with both the airlines and the local authorities to ensure that the customer receives a seamless service,” he added. Abu Dhabi Airport offers a 4-day free storage for export and a 7-day free storage for
22 cargotalk JUly 2011
import cargo. The 4-day period also applies to transfer cargo. However, a majority of the transfer cargo connects on the outbound flights within the free storage period. Questioned on the measures taken for safety and security of aircraft, while uploading and flying cargo, Huraiz Al Mur Bin Huraiz informed that the airport
collaborates closely with the local security authorities to ensure that even as all the appropriate measures, are undertaken, the impact on the supply chain are at a minimum. “All our operations and planning is currently also focused on speed, efficiency and accuracy, all of which are hallmarks of an efficient hub. In order to enhance this, we will undertake measures ranging from beefing up our infrastructure and equipment to process reviews and evaluations,” added the CCO. Commenting on the traffic from India and the subcontinent, Huraiz Al Mur Bin Huraiz maintained, cargo from this region is a significant contributor to our overall volumes. Etihad operates dedicated freighters from major metros in India, thus providing connectivity to goods from the Indian markets. In addition, the airport is in touch with a few established and startup freighter operators in India to commence operations to Abu Dhabi.
International Airlines Exclusive Interview
Etihad Crystal Cargo grows by 61% in 2010 India remains among top three markets in the world
C
argo demand and yields (lead measures of industry trends) have recovered robustly. Significantly, Etihad Crystal Cargo’s revenue in 2010 was USD 518 million, an increase of 61 per cent. Crystal Cargo contributed 19 per cent to Etihad’s direct operating revenue for 2010. “The arrival of two Airbus A330200F freighters in 2010, adding to two A300s and two MD11s freighters, was a huge milestone for us. The additional freighter capacity has allowed us to expand frequency in key markets and launch eight new freighter routes (including Hong Kong, Johannesburg, Amsterdam and Beijing), bringing the total freighter network now to 26 stations,” said Roy. He is confident that these aircraft, owned and operated by Etihad Crystal Cargo, would further push the airline’s business forward. “Etihad Crystal Cargo continues to grow impressively. Our belly ATK (Available Tonne Kilometres) cargo capacity grew by 20 per cent (year-on-year) in 2010, outpaced by our freighter ATK capacity growth of 40 per cent. Overall, this led to a 25 per cent year-on-year ATK growth. FTK (Freight Tonne Kilometres) growth kept pace at 26 per cent,” Roy informed. According to him, yield was also a significant driver of cargo revenue growth with year24 cargotalk JUly 2011
It is now evident that the world air cargo market is reviving very fast after witnessing the recession. Roy Kinnear, senior vice president Cargo, Etihad, elaborates on the airline’s strategy to increase its market share in the world cargo market. Ratan Kr Paul on-year growth of 34 per cent. Yield improvement was accomplished through a combination of factors such as stronger demand in 2010, a focus on improving revenue performance on high demand legs, and a drive to improve the cost effectiveness of offline routings. Etihad operates 13 weekly freighter frequencies through five points in India, to complement its eight passenger service destinations. Etihad Crystal Cargo was scheduled to take delivery of a Boeing 777 Freighter in June 2011. “It would offer further opportunities to expand frequency and broaden our footprint in growth markets such as India,” said Roy. Etihad Crystal cargo has been very encouraged by results in India, now a top three global market for the airline, and this has been reflected in its investment in freighter capacity and frequency, which is expected to push ahead as India continues to grow as an exporting and importing market. The nature of the products being freighted by air from India to Abu Dhabi has diversified over the years and is still changing. “Garments and textiles have traditionally been strong on the routes, however, we are seeing more and more diversification in output, and as a result, growth in pharmaceutical and
Roy Kinnear
consumer electronics from India in our shipments,” Roy said. The large UAE expatriate working population from India and the subcontinent are also served through personal effects forwarding. Added to that, Etihad Crystal Cargo recently launched a new premium service, branded Fast-Track, for customers needing guaranteed priority service. Benefits of Fast-Track include expedited airport to airport service, priority access to capacity with later booking, and faster tender times. “The majority of our business from India transits Abu Dhabi into the traditional consumer markets, although we have seen Africa and the Middle East emerge as growing destinations,” he pointed out.
Family Album Airlines
Martinair Cargo
starts operations from Delhi Air France KLM Cargo organised the launch party of Martinair Cargo in Delhi on June 8. Martinair cargo started Delhi-Amsterdam twice weekly flights with B747 ERF aircraft. RenĂŠ Peerboom, director India, Nepal & Bhutan, Air France-KLM Cargo and Martinair Cargo hosted the event.
26 cargotalk JUly 2011
Family Album Club Function
Calcutta Air Cargo Club celebrates poila baishakh
Calcutta Air Cargo Club organised Poila Baisakh ( Bengali New Year) Party at The Golden Park Hotel, Kolkata. On this occasion CACC organised a traditional Bengali music programme followed by delicious food. All members came to attend the event in their traditional attire.
28 cargotalk JUly 2011
Family Album Club Function
Air Cargo Club of Delhi
presents light & laughter moments With an aim to reduce stress from the cargo fraternity, recently the Air Cargo Club of Delhi decided to organise a luncheon meet by presenting a comedy show at Hotel Vasant Continental, New Delhi. The lunch was well-attended by the club members and their guests.
cargotalk
JUly 2011
29
Family Album Glimpses
A year of revival
glimpses of year 2010-11 The year 2010-11 was the year of revival of the cargo and logistics industry in India. There were a number of new launches, agreements and announcements. Also the industry witnessed various events and conferences across the country. In this Annual Issue Cargo Talk glances through the memorable moments.
Pratibha Devisingh Patil, president of India, along with the trade delegation from India to China
Inauguration of ACAAI Convention 2010 in Bengaluru
Inauguration of DACAAI Convention 2010 in New Delhi
Freightstar starts first train service from ICD Loni to Pipavav
CSC Performs Bhoomi Puja to build Greenfield Cargo Terminal at IGI Airport in Delhi
Wilson Sandhu introduces new business partner
Vineet Kumar, chief commissioner customs inaugurating EICI’s Mumbai Terminal
30 cargotalk JUly 2011
VRL Logistics honoured with Apollo CV Award
Safexpress wins Express Logistics and Supply Chain Award
PS Bedi enters into JV with Maman Group to offer record management
FedEx launches flight to Bengaluru
DIAL appoints TCI and Gati to carry cargo from IGI Airport, Delhi
DIAL and Celebi unveils cargo plans at ACCD meet
DP World launches International Transshipment Terminal at Vallarpadam, Cochin
Minister of Shipping inaugurates three major port projects a major ports
Aerologic pioneers freighter with women pilots
At ACCD’s luncheon meet on “Forward Contract” in May 2010 cargotalk
JUly 2011
31
Family Album Corporate Social Responsibilities
uti Walkathon
for relief in Japan anD new ZealanD The staff of UTi India walked for UTi Charity on May 21 to raise funds for benefiting earthquake reconstruction in Japan and New Zealand. The company staff participate in this walk every year.
32 cargotalk JUly 2011
Cargo Performance Export/Import
DELhI INTERNATIoNAL AIRPoRT CARGo DEPARTMENT, IGI AIRPoRT, NEW DELhI
(AIRLINE-WISE IMPoRT/ExPoRT CARGo PERFoRMANCE FoR ThE MoNTh oF MAy 2011)
S. No. Airlines
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 27 28 29 30 32 33 34 35 36 37 38 39 40 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57
Export(MTs)
Export Perishable Cargo (MTs)
Export (with Peri.) (UPL)(MTs)
Import
Total Cargo
All wt. in mt.
% of Total
Jet Airways.................................... 1064 .................336 ................ 1400 ...............1765 .................3165 ............8.89% Cathay Pacific 1220 12 1232 1834 3066 8.61% British Airways 1081 18 1099 1713 2813 7.90% Emirates 828 898 1726 761 2487 6.99% Lufthansa Cargo Airline 543 78 621 1089 1710 4.80% Air India .........................................644 .................258 .................. 901 .................566 .................1467 ............4.12% Kingfisher Airlines Ltd. 552 13 565 815 1380 3.88% Singapore Airlines 533 16 549 710 1259 3.54% Fedex Express Corpation 844 0 844 312 1156 3.25% Qatar Airways 443 215 658 406 1064 2.99% Thai Airways ...................................337 .................. 19 .................. 356 .................695 .................1051 ............2.95% Swiss World Cargo(India) 469 40 509 466 976 2.74% Turkish Airlines 545 35 581 243 823 2.31% Aerologic 521 0 521 295 817 2.29% KLM 442 49 491 304 794 2.23% Virgin Atlantic ................................321 .................... 3 .................. 324 .................423 ...................747 ............2.10% Etihad Airways 327 72 398 299 697 1.96% Air France 280 61 340 355 696 1.95% Malaysian Airline System 309 45 354 341 695 1.95% Uzbekistan 450 17 467 169 636 1.79% Finnair ............................................356 .................. 27 .................. 382 .................176 ...................558 ............1.57% Austrian Airlines 210 0 210 234 444 1.25% China Eastern Airlines 205 2 207 165 372 1.05% Saudia 303 50 353 6 359 1.01% 0.99% Continental Airlines 215 0 215 137 352 American Airlines Cargo 183 0 183 160 343 0.96% Aeroflot Cargo Airlines 228 74 302 31 333 0.94% Blue Dart ........................................164 .................... 3 .................. 167 .................158 ...................325 ............0.91% China Air 113 0 113 204 317 0.89% Japan Airlines 139 1 140 166 306 0.86% Air China 86 0 86 155 242 0.68% Air Mauritius 83 71 154 2 156 0.44% Ariana Afghan Airlines 103 2 105 45 151 0.42% Gulf Air...........................................100 .................. 22 .................. 122 .................... 4 ...................126 ............0.35% Asiana Airlines 27 0 27 83 110 0.31% Royal Jordanian Airlines 59 36 95 2 97 0.27% Aerosvit 71 9 80 1 81 0.23% Eva Air 78 0 78 0 78 0.22% Air Arabia 71 4 75 1 76 0.21% Ethopean Airlines .............................10 .................. 57 .................... 67 .................. 12 .....................69 ............0.19% China Southern Airlines 12 0 12 53 64 0.18% Mahan Air 37 4 41 17 58 0.16% Pakistan International 15 0 16 36 52 0.15% Oman Air 35 11 46 4 51 0.14% Kuwait Airlines .................................. 4 .................. 23 .................... 27 .................. 13 .....................40 ............0.11% Air Astana 28 0 28 0 29 0.08% Turkmenisthan Airlines 28 0 28 0 28 0.08% Sri Lankan Airlines Ltd 11 2 14 9 23 0.06% Jetlite 14 0 14 0 14 0.04% Royal Nepal Airlines 0 0 0 10 10 0.03% Indian Airlines ................................... 6 .................... 0 ...................... 6 .................... 0 ...................... 6 ............0.02% Druk Air 1 0 1 0 1 0.00% Deccan Express Log 0 0 0 0 0 0% MIS 1263 57 1321 1501 2822 7.93%
Total Cargo handled in May‘10 % VARIATION
16014 17708 -10.58%
2628.38 2273 13.52%
18642.67 19981 -7.18%
16948 15764 6.99%
35591 35745 -0.43%
## Cargo Handled at Centre for Perishable Cargo
34 cargotalk JUly 2011
Subscription form
Subscription form
MUMbAI CSI AIRPoRT ExPoRT/IMPoRT CARGo ToNNAGE hANDLED IN MAy 2011
(Including TP Cargo)
WEIGhT IN ToNNES S. No. Airlines 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
Air France Air Cargo Germany Air Arabia Airasia Austrian Airlines British Airways Blue Dart Cathay Pacific Continental Airlines Delta Airlines/KLM EL-AL Airlines Etihad Airways Emirates Ethopian Airlines Federal Express Gulf Air Iran Air Jet Airways Jade Cargo Kenya Airways Kingfisher Airlines Lufthansa Malaysian Airlines NorthWest Airlines Oman Air Pakistan Airways Qantas Qatar Airways Saudi Arabian Airlines Singapore Airlines Swiss Intl. Airlines Srilankan Air Turkish Airlines UPS Air India Air Mauritius Egypt Air Korean Air Kuwait Airways Royal Jordanian Airways South African Airlines Thai Airways Yemenia Airways Charters Others
GRAND TOTAL
Export General
Export Perishable
Total Export
Import
Total Exp+Imp
292.78 1.05 58.99 105.48 0.00 885.45 24.51 1179.61 70.56 230.08 43.95 680.79 1791.73 462.30 608.37 138.95 57.69 1488.68 0.00 307.67 489.76 1334.63 332.20 0.00 8.00 47.63 117.57 497.39 412.76 875.21 503.23 62.58 308.49 97.68 950.95 112.48 31.05 364.09 93.24 7.81 186.18 312.19 30.12 0.00 193.79
28.10 0.00 45.02 0.00 0.00 148.37 0.00 12.60 0.00 0.00 0.09 97.13 1031.06 21.56 1.84 118.40 6.29 1218.72 0.00 7.34 0.00 15.99 10.49 34.32 135.48 32.62 0.00 373.39 88.53 159.30 27.59 13.07 9.22 0.00 2098.82 1.81 0.00 5.95 199.49 0.00 11.88 50.18 6.39 0.00 0.00
320.88 1.05 104.01 105.48 0.00 1033.82 24.51 1192.21 70.56 230.08 44.04 777.92 2822.79 483.86 610.21 257.35 63.98 2707.40 0.00 315.01 489.76 1350.62 342.69 34.32 143.48 80.25 117.57 870.78 501.29 1034.51 530.82 75.65 317.71 97.68 3049.77 114.29 31.05 370.03 292.73 7.81 198.06 362.38 36.51 0.00 193.79
330.00 797.15 0.64 113.61 223.60 547.14 35.14 1917.40 97.65 204.62 54.91 512.78 1358.97 29.23 258.36 2.54 1.66 2439.85 120.72 13.61 1051.64 2015.16 243.95 0.00 5.01 19.29 166.92 493.64 39.87 1020.06 406.23 19.27 286.41 375.09 836.74 0.92 0.73 85.33 32.92 0.05 18.91 550.61 0.30 185.22 25.78
650.88 798.20 104.65 219.09 223.60 1580.95 59.64 3109.61 168.21 434.70 98.95 1290.70 4181.75 513.09 868.57 259.89 65.64 5147.24 120.72 328.62 1541.40 3365.78 586.63 34.32 148.49 99.54 284.49 1364.42 541.16 2054.57 937.05 94.92 604.11 472.77 3886.51 115.21 31.78 455.37 325.65 7.87 216.97 912.98 36.81 185.22 219.56
15797.62
6011.02
21808.64
16939.61
38748.25
ExPoRT/IMPoRT CARGo ToNNAGE hANDLED IN APRIL 2011
Cargo Handled in April’11
15925.70
4824.70
20750.40
16651.14
37401.54
cargotalk
JUly 2011
37
Cover Story Market Trends. Ratan Kr Paul
A
ccording to Kenneth Koval, Vice President – India Operations, FedEx Express, the industry is expected to grow at 12 per cent to 14 per cent owing to the brisk momentum of intra-Asia trade. Now with the world economy recovering, the logistics industry, especially the air express industry, will play a significant role in strengthening the inventory cycle and revitalising economies through its ability to connect.
Kenneth Koval
Raajeev Bhatnagar
38 cargotalk JUly 2011
In his opinion, a trend that is clearly emerging in the express cargo industry is the broad-basing of service and product portfolios of express delivery companies – with more players offering comprehensive portfolios covering an array of services ranging from express services,
Samir Hosangady
ground services, value added services, warehousing, 3PL etc. “The demand for air transportation is ever increasing, particularly from the pharmaceuticals, healthcare, engineering, manufacturing, automotive and gems and jewelry sectors in India,” he pointed out. “Another key trend that has been witnessed over the past year is consolidation within the industry. There is a growing need for global standard logistics services in the country so that Indian enterprises can integrate seamlessly with the global economy. This is fuelling globalisation and consolidation within the industry with the entry of global players as well as mergers and acquisitions in the Indian market,” he underlined. Raajeev Bhatnagar, regional vice president, India subcontinent, UT Worldwide elaborated, “The trend which are visible for the industry are enhanced focus in automotive, pharmaceutical, as large automobile companies have set up their manufacturing units in India.” He assumed that focus is also going to be in-terms of innovative warehousing and distribution. “Our expectation in current financial year is a significant growth, although it appears that the market would be either sluggish or flat. As a result, we all need to work to look at optimising opportunity available and diversification of products and related activities,” added Bhatnagar. The year 2010-11 was “very
by around 15-20 per cent in the current financial year. “The Indian government’s commitment to improve the infrastructure with planned investments of around USD250 billion in the next four years will provide an environment of growth for the logistics industry,” he argued.
Volkar Mueller
good” for UTi India as the growth was well in terms of product, vertical and service offerings. The company’s revenue grew by 40 per cent, so is the profitability of UTi India. “What we are targeting this year is to add additional competency in terms of warehousing, distribution and inventory management. We are looking
Shesh Kulkarni
at growing our products – air export and import, brokerage, ocean freight, and we are targeting a growth of around 15 to 20 per cent,” said Bhatnagar. Samir Hosangady, COO, South Asia region, Damco also observed that the logistics industry in India should grow
He emphasised the overall export out of India which is increasing as key destination markets such as Europe and United States are recovering from the economic downturn. The import volumes are also strong as the domestic Indian economy is doing fairly well with increased consumption and manufacturing activity. He appeared to be very optimistic about the much discussed GST (Goods and Services Tax) regime. Hosangady added one crucial perspective about the present market trends requirements. “With increased focus on supply chain reliability and cost efficiency, technology is increasingly becoming a vital part of today’s logistics industry. Organisations are increasingly becoming aware of the environmental impact of supply chains and are demanding green logistics solutions to reduce the overall carbon foot print of the supply chain,” he pointed out. He expected more consolidation in the Indian logistics market. “The market is largely fragmented and consolidation will definitely help to create size and scale for industry players and thereby create benefits to the customers,” he advocated. Damco grew by over 30 per cent in sea freight, 40 per cent in air freight volume by 40 per cent 60 per cent in landside
Christoph Remund
cargotalk
JUly 2011
39
Cover Story Market Trends
PLANS AhEAD FedEx: FedEx will continue to focus on the key sectors like textiles including fashion, IT, automobile, leather, pharmaceuticals and healthcare, engineering, handicrafts, high value items and electronics industries, in addition to the SME sector. FedEx Import Services and small and medium businesses will be additional key focus areas for FedEx in India. UTi: The company will increase its footprint in warehousing and distribution, create better understanding and visibility of its clients with real time information. The company will work together with the industry to contribute and support the growth of India Damco: Damco is planning to increase its geographical presence in the country by opening new offices in developing industrial zones in India, especially in the North east region. Kuehne + Nagel: Kuehne + Nagel will further increase its footprint in India from 35 locations to 50 locations in 2011. The company plans to open 16 more new offices in India this year. UFM: UFM’s plan for 2011-12 is to strengthen its operations in Delhi and Chennai; open up Mumbai operation soon.
and
DHL: DHL’s focus in the life science and healthcare industry remains strong and it will continue to invest in it. Other key verticals include fashion and apparel, technology and energy.
volume. “We have set ourselves ambitious growth targets and we aim to achieve this through a focused approach of growing in selected industry verticals like chemicals, refrigerated cargo, consumer electronics, aid & relief, retail and automotive,” Hosangady informed. The company is also focusing on developing logistics services such as quality inspection and warehousing for the agricommodities segment. Damco will focus on anchoring its growth on specific trade lanes from India including, Africa, Europe, 40 cargotalk JUly 2011
China and Intra Asia. It sees opportunities to increase business between South Asian countries such as India and Bangladesh. From a product perspective, Damco has significant growth plans to increase our air freight volumes, imports, trucking and warehousing services. Volkmar Mueller, managing director of Kuehne + Nagel was of the opinion that the revival of the world economy was accompanied by a distinct rise in transport
logistics volume, primarily in the Asia Pacific region. “We are confident that our resilient integrated business model will support the continuation of our strong performance,” he stressed. According to Mueller, the key countries of the Asia Pacific region, in particular, India and China, have emerged as the growth engines for Kuehne + Nagel’s freight forwarding business. In sea and airfreight, a record volume growth was achieved in 2010. In Asia-Pacific region, Kuehne + Nagel’s airfreight business grew by 47 per
cent (2010 over 2009); and seafreight 14 per cent (2010 over 2009). In 2010, Kuehne + Nagel Asia Pacific handled 64 per cent of the total seafreight volume of the Kuehne + Nagel Group. The regional oragnisation also contributed 53 per cent of the airfreight volume handled by the Kuehne + Nagel Group. India remained one of the key contributors to the regional growth. The group aims to double its business by 2014, increasing the number of containers moved by sea from 2.5 million in 2009 to more than 5 million in 2014. In airfreight, the Kuehne + Nagel Group intend to increase the cargo volume to 1.3 million tonne. In contract logistics, it is aimed to raise turnover by 50 per cent. The expansion plans for the Kuehne + Nagel Group are underpinned by investments in the development of activities in emerging countries, especially India and China, in the Asia Pacific region. Kuehne + Nagel is rapidly expanding its market presence and product/service portfolio in the Asia Pacific region, particularly in India. In 2011, Kuehne + Nagel will further increase its footprint in India from 35 locations to 50 locations in 2011. The company plans to open 16 more new offices in India this year. Muelle was upbeat about the infrastructure development initiatives of the government, especially for airports and road networks. He believed that it would push the cargo movement in the days to come. Taking cue from Muelle, Shesh Kulkarni, president & CEO, UFM established three major factors behind the growth of logistics industry in India. Firstly, All segments of logistics i.e., airfreight, ocean freight, brokerage, road surface transportation, warehousing and other contract logistics related activities will all see a change, with the advent of GST in a year or two, the complete working will of this industry will be forced to re-group and reorganise. The second factor is the adoption of technology by the logistics players. “Today technology is in use, but very selectively. We foresee this trend is changing, more and more use of technology will push improved visibility and working,” Kulkarni said. The third aspect of the change is people related.
Cyrus Katgara
P.C. Sharma
Ram Tiwari
DP Singh
“We foresee that all people who have been part of this industry has to be capable for adoption of changes. If they are not aligned to changing trends and are unable to adopt best practices, will be isolated,” he emphatically maintained. “We will see a positive growth in the business.But definitely some of the happenings in the country have had their impact on FDIs which, directly or indirectly, influence the growth,” he cautioned. According to Kulkarni, the 8 months old company (UFM) has clocked billing close to $3 million (USD) and has added on over 100 clients. “We are very excited about the prospects for the coming months. We are confident that we will have a story to tell in two years from now. It’s our dream to build the one of its kind Indian multinational company in this sector,” he asserted. UFM has set a target to be a 100 crore company by end of third year of its operation. For the financial year 2011-12 it targeted to generate + 50 crore revenue.
Christoph Remund, CEO - DHL Lemuir Logistics pointed out that strong economic growth and liberalisation have led to considerable increase in domestic and international trade volumes over the past five years. “Our business continues to develop positively as we keep a close eye on global risks. We expect a healthy growth of volumes from our newly opened facility at the Free Trade Zone (FTZ) in Chennai,” Remund said. He, however, made a caution. “We cannot but be mindful, of the fragility of the world economic situation not only by the devastating disaster in Japan, but also by the instability of many countries in the Middle East and northern Africa,” he added. By incorporating a flexible cost structure, DHL Lemuir Logistics is strengthening its ability to withstand economic volatility and expect to sustain a double digit growth figures. Cyrus Katgara, partner, Jeena & Co. analysed the present market trends from a macro level. “Trends in the logistics industry are closely following and shaped the trends in the economy in general. cargotalk
JUly 2011
41
Cover Story Market Trends
Growing urban middle class with ever increasing purchasing power, driving the lifestyles dependent on organised industry and trade in telecom, electronics, automotive and retail is resulting growth
Bharat Thakkar
Trevor Saldana
of organised sector and attracting foreign direct investment. As a result there is a fast growth of 3PL service providers engaged in serving the indigenous industry as well as the growing foreign direct investment manufacturing units who are hungry for the local logistics support,” he explained. In addition, he argued, GST regime has created grounds for growing trend of consolidation of share of logistics sector moving from small unorganised sector to organised sector driven by indigenous corporate houses and multinational logistics corporations. “Another area displaying a dramatic change is increasing share of exports emerging from Special Economic Zones (SEZs) where the share of SEZs has grown form five per cent of total exports in 42 cargotalk JUly 2011
2006-2007 to 30 per cent of total exports in 2010-2011.This trend is likely to get a boost with Free Trade Warehousing Zones (FTWZs) getting attention of the global players who may like to use that to place their inventory closer to the final buyer to reduce the delivery time without locking the funds in duty paid goods,” added Katgara. He also expected that Indian territory could be used as a major trading hub for south Asia with the development of FTWZs. “We expect the volume to grow by 20 per cent in the current financial year,” he maitained. PC Sharma, CEO, TCI XPS, gave an overview about the containerized cargo movement. According to him, proportion of containerised cargo handled at Indian ports has been showing a constant increase driven by various trade liberalisation policies, increasing containerisation in general cargo commodities such as durables, engineering components, machinery, auto components, food products and apparels. Investments in technology by logistics players are also fast increasing. “Overall the TCI Group registered a jump of 20 per cent in net profit in comparison to last year. XPS as a division grew by almost 19 per cent as against industry growth of 16 per cent,” said Sharma. Ram Tiwari, director marketing, Shine Logistics was expecting a better performance as compared to 2010-11. “We are hoping to have about 18-22 per cent growth in 2011-12 than last financial year,” said Tiwari. In 201011 the company registered a 16 per cent growth as compared to 2009-10. “This growth should have gone more if Delhi air cargo terminal had operated in proper way to handle the cargo without any delay,” he informed. DP Singh, GM, CP&MS, Airports Authority of India depicted a promising scenario for the logistics industry in the country. He maintained that up to 201213 Delhi airport will witness 12 per cent in international and 20 per cent in domestic cargo. In Mumbai the growth will be 8 and 15 per cent for domestic and international cargo respectively. He also maintained that the non metro airports are also emerging very fast. He informed that the present
capacity for cargo at the Indian airports 1.95 million tonne for international and 1.42 million tonne for domestic cargo. In the 12th Plan capacity of 1.3 million tonne for international and .95 million tonne for domestic cargo will be added. However, Bharat Thakkar, vice president, ACAAI, aired skepticism. “The question should be are the Indian airport’s ready?” he asked. He observed that Indian forwarders always meet the challenge of increase in exports, in spite of all odds. They work under unfriendly condition and yet are the ones to take all the burden of finance of freight and other charges paid to airlines. “Dwell time in other Asian countries is 24 hours for cargo, while on Indian airports it is 72 to 96 Hours. Cold storage for imports the facility is almost non existence due to its capacity there by food stuff and other items overflow and kept outside,” he highlighted. Thakkar also maintained that though export and import have increased remarkably, over last decade there have been no additional arrangements made at Mumbai airport (for instance). Same is the case at other metro airports. Trevor Saldanha, CEO – international division, Patel Integrated Logistics and a managing committee member – ACAAI supplemented Thakkar. “The major challenges that we currently face in India are inadequate infrastructure, leading to congestion/delays at the air cargo terminals and technology that often does not function as designed, creating bottlenecks in customs clearance procedures. “With the rapid growth of cargo industry, availability of qualified/ trained human resources is also becoming an issue now, since training facilities in our industry have not kept pace with the rapid growth of the business,” he stated. For a healthy growth of the cargo and logistics industry, the government needs to focus on providing adequate infrastructure in terms of developing modern and well equipped cargo complexes, where forwarders can have offices and warehouses with permission to build ULDs themselves to save time and reduce transaction costs.
Young Brigade Entrepreneurs & Professionals
Young & Emerging In the Annual Issue 2010 of Cargo Talk we had presented the ‘who is who’ of the cargo and logistics industry.This year we have zeroed in on young entrepreneurs and professionals with an objective to highlight the changing mind set of the forthcoming logistics majors.The brief interactions with them unveiled that this hitherto unorganised sector will witness a sea change in the near future.The profiles are arranged in alphabetical order. Cargo Talk will continue the interactions in the future issues. Crystal Group
Bullish about growth prospect Akash Agarwal – Director Akash Agarwal, director, Crystal Group was born in a business family involved in the transportation and trading business. He is an MBA. He perceives that logistics business in India has passed through various stages and there is tremendous growth in this sector. He also points out many challenges which include geographical factors, various protocol, unskilled manpower, infrastructure and technology. He feels that the governments should play the most important role to improve the infrastructure and clear the hindrances. Sam Walton and Dhirubhai Ambani are the icons Akash would like to follow to become a successful entrepreneur. He likes reading, watching movies spending time with his kids. “Geographics have to be understood and dealt with experience. Similarly manpower has to be trained and groomed for the needs of the industry.The government’s role is to improve the clear the hindrances and provide us with the needed infrastructure so that economies of scale can be achieved in the logistics industry,” said Akash.
his Favourites Kashmir, Switzerland , Aamir Khan, Madhuri Dixit, Lawn Tennis, Chess, MS Dhoni, Robert Kyosaki , Robin Sharma, Narendra Modi and Vallabhai Patel.
om Logistics
“Something different” was the idea Akash Bansal – Head logistics Akash Bansal, head logistics, Om Logistics, is a post graduate with BE (electronics) and MBA( Materials and Logistics). It was his interest in doing something different that prompted him to opt for logistics.“During my summer training in BE, I felt that there is immense potential in logistics which is not being explored and dedicated professionals are required for redefining the logistics industry,” he explains. He feels that there is a requirement of professionals in Indian logistics industry to change the perception that this industry is for transport companies. According to Bansal, organised logistics industry in India will multiply in manifold within the next decade. He, however, expects the government should accredit logistics as an industry and help the industry improve by simplifying the procedures. “The future of logistics industry of very bright and this is the only industry which will not feel the heat of recession if you are diversifies among different verticals.We need professional in Indian logistics industry to reform the mentality of Indian buyers as a transport company which is also a challenge for a professional managed logistics company in India,” Bansal said.
his Favourites He loves spending time with his family and kids, and travelling.“I am a bit fond of movies but this is just to de-stress and relax post my busy schedule,” he shares. His favourite destinations are Goa and Switzerland.
44 cargotalk JUly 2011
Mituj Marketing
Fond of hard core off-road driving Amit Bajaj – Director A Commerce Graduate from Bhagat Singh Collage, New Delhi, Amit Bajaj, director, Mituj Marketing, has done executive management programmes in leadership, HR etc from various institutes like IIM Lucknow and IMT.While graduating from college, Amit got an opportunity to become an agent for Modiluft Airlines, for cargo and courier. “The business was started with me and my mother and then my father joined in. At present, the business is headed by me and from this year onward my brother Anuj Bajaj has also joined the company,” Amit informed. According to him, the domestic air cargo industry is continuously evolving and maturing. However, the biggest challenge at present is the regular changes in the fuel surcharges by the airlines. There have been instances lately that the aviation fuel prices have gone down but the airline fuel surcharge on cargo has gone up.Amit is a member of Northern India Offroad Club.The club members are into hard core off-road driving on 4x4 vehicles. He also enjoys swimming everyday, cycling and reading.
his Favourites His favourite male and female actors are Rahul Bose,Vinay Pathak and Vidya Balan. His favourite singers include Air Supply, Rod Stewart, Jagjit Singh and Kailash Kher.
Image Logistics
Well qualified and methodical Amit Chakraborty – MD While working with different Indian and MNC companies, Amit Chakraborty, MD, Image Logistics, felt that logistics trade and work needs to be more organised and the working culture should be changed. Hence he decided to open a new venture with an organised work culture. Amit is a B.Sc (Zoology), PGDTM (specialisation with cargo), Master in Foreign Trade (MCT) and Master in Computer Application (MCA). In his opinion, the future of logistics industry in India is very bright. However, he observes that in terms of technology and customs process India is lagging behind as compared to other developing nations. “Our infrastructure does not support us to compete with developed country. So as the volume is increasing, the challenges are also increasing.To cope of with the challenges we need to do lot of innovations, diversify in business and from government we request support on infrastructure and custom policies,” he says emphatically.
his Favourites Tourist Destination : Haridwar, Hong Kong and Germany; Cuisine: Italian (apart from Indian); Actor: Rahul Bose; Singer : Kishore Kumar, Game : Football; and National Leader : Netaji Subhash Chandra Bose.
DTDC
A job seeker positioned suitably Amit Shankhdhar – DGM North Amit Shankhdhar, DGM North, DTDC, who is a BSc, PGDBA and a life member of CILT, started his professional career as sales executive just after completion of graduation. During his job, Amit pursued his PG degree and then got CILT membership.“I came to Delhi to get a job which was a big achievement for me at that time. Daily new challenge, uncertain nature of work kept my interest alive and I continued,” he shares. His joining logistics industry was unplanned. Now he is very bullish and optimistic about the future of this Industry in India. In his opinion, till now society as such may have not given due recognition to logistics industry in India and so far logistics professionals may have not been treated at par with other sectors, but now the situation has changed many fold. Presently logistics professionals are in high demand not only in logistics industry but supply chain functions for other industries like retail, exports, manufacturing, trading-distribution, E-commerce etc. He underlines the fact that this industry is mostly being controlled and covered by unorganised sector players, and therefore bringing about a change is a big challenge. Significantly, a lot of initiatives have already started.The industry itself coming forward to set up education and training wings; for example, DTDC has set up DTDC Institute of Supply Chain Management.
his Favourites His areas of interest include reading and watching political news; regular visit to native village and meeting old friends and travel with Star Cruise Singapore.
cargotalk
JUly 2011
45
Young Brigade Entrepreneurs & Professionals
Patel Integrated Logistics
Strengthening the family business Areef Patel – Executive Vice Chairman Born in a family that was already well entrenched in the freight business, Areef Patel’s association with Patel Integrated Logistics began when he formally joined the “House of Patels” as a director in 1992. He is the second generation to the business, started by his father Asgar Patel in 1959. He completed his 10th grade from Choate Rosemary Hall, USA, in 1989, thereafter graduated in 1993 from the University of Bombay with a major in Economics. According to him, logistics industry in India is now coming of age and is growing at a steady and consistent pace. Several viable opportunities are now available for the customers to select from, depending on the nature of the product, delivery time required and cost affordability. He maintains that there are several challenges before the cargo / logistics industry in India, the major one being lack of adequate infrastructure.While we now boast of Modern Airport Terminal Buildings in the major cities in India, unfortunately the focus has been only on handling of passenger traffic.”We are yet to see modern cargo hubs/integrated cargo villages in India. In the Domestic Sector, issues like Octroi, Sales Tax, etc., continue to create impediments.The government should take a serious look at the cargo and logistics industry and involve all the stake holders, to ensure that proper steps are taken for improvement in efficiency levels and reduction in transaction costs.
his Favourites For a healthy growth of this Industry, infrastructure needs to be improved, processes need to be streamlined and made user friendly, transparent and quick, electronic transactions/interface must be increased to reduce human interaction.
SA Consultants
Response to father’s call Ashish Asaf – Director After completion of BA (Hon) English (DU) Ashish Asaf, director, SA Consultants has been associated with cargo and logistics business for the past 10 years. He was inspired by his father, Asaf Ali ( a well known cargo trade practitioner), to enter into this trade. Ashish finds this industry very promising, though it has miles to go to attain industry status. “Biggest challenge faced by our industry is debt control.We need a CIBIL (Credit Control Bureau India Limited) like body for our trade,” he feels. He also urges for simplified and effective policies, and improved infrastructure—internal as well as external. LN Mittal is Ashish’s role model for professional success.
his Favourites He likes spending time with his family and watching movies.His favourite tourist destinations are Munnar and Scotland.Chinese is his favourite cuisine.Ashish is an admirer of Aamir Khan,Julia Roberts,Rahat Fateh Ali Khan,MS Dhoni,John Grisham,MF Hussain,Atal BihariVajpayee.
Shreeji Transport Service
Be positive in approach Dileepa BM – CEO Dileepa BM, CEO, Bonded Trucking, Shreeji Transport Service, is a B.Com graduate and an MBA in Finance.Venturing into logistics business was not his original chosen profession.“During my MBA days I did a project on Entrepreneurship. My project was selected as the best project. It encouraged me to do something different and challenging. I got an offer from a logistics company in the year 2000 and in 2002 launched Shreeji Bonded Trucking in 2002, with some of my friends,” he shared. Now Dileepa is satisfied and bullish about the future of logistics industry in India.“I believe young aspirants would be keen to take logistics services as their careers and dream projects,” he said. Initially, Dileepa had faced lot of challenges to run the show due to external factors. Many of them still continue.“Meeting our clients (airlines) is a big challenge in India. Poor road infrastructure and airport terminals, particularly at the major airports have been a matter of serious concern. However, I think we should be very positive in overcoming those challenges,” Dileepa maintained. According to Dileepa, struggle, hard work, proper education, dedication and consistency, lead to success.“Since logistics is a 24x7 business sector, I hardly get time to think beyond business.”
his Favourites I love to meet my friends outside Bengaluru in social gatherings. Also I try to find time to watch my favourite actor, Dr.Rajkumar and Rajnikant’s movies,” he added. Dileepa was also a university level football player. Sachin Tendulkar is his favourite cricketer.
46 cargotalk JUly 2011
Indo Arya Central Tpt Ltd
Searches for new friends Dushyant Arya – Director Dushyant Arya, director, Indo Arya Central Tpt Ltd is a commerce graduate. Presently he is running the family business. In his opinion, more and more “single window global solution providers,” will emerge and high level of automation will rapidly happen. “We have lots of barriers to pass and as a result the industry has to bear heavy transaction costs, which serves a custodian of goods without helping the government,” he underlines. According to Dushyant, infrastructure is shaping up quiet nicely, but the system mars the progress to a great level. Corruption at all levels also hinders the growth rate.
his Favourites Listening to classical music and making new friends are his areas of interest.“France has enthralled me like no other destinations in the world,” he maintains.
Modern Cargo Services
Seeks industry status Firdos Fanibanda – MD A Bachelor of Commerce, Firdos Fanibanda is the managing director, Modern Cargo Services and chairman – ACAAI,WR. After his education, Firdos was introduced to the freight forwarding industry. He found it challenging enough to make a career out of it. “No doubt there has been substantial improvement within the industry. However, if you compare the increase in trade vis-a-vis the infrastructural development in our country, there is a huge deficiency. Infrastructure, or rather the lack of it, is by far the most important challenge this industry faces. The poor condition of the roads, along with the limited space availability at airports and seaports has resulted in higher operational costs,” Firdos observes. He opines that the government must recognise the important role being played by the freight forwarding community and grant it an industry status. It should also be more actively involved and considered as a body when formulating policies.
high optimism “The logistics industry will continue to boom in India as long as the world economies continue to set up their manufacturing bases in India. It is economical for advanced economies to shift their production bases to the Asian continent and India.”
Majha Transport
Nostalgia for transport business Gagandeep Klaire – Director Gagandeep Klaire, Majha Transport, did his schooling from Welham Boy’s School Dehradun and then his B.Com Hons from Delhi University. His grandfather has been a self made transporter since 1930s. Gagandeep’s father too has been in transport business for more then 30 years and later decided to shift to Vehicle Finance. He set up an NBFC in 1985. “When we started in 2004, we had no one in our family then doing direct transport business. But my brothers and I had always wanted to do something in it. It is the thing we all have grown up with since childhood. I remember my father taking us to the workshop and I would just play around the trucks the whole day. It is just the love and respect we have for trucks and drivers,” he says emotionally. He feels that India is a growing and booming market.“Things and perceptions are changing everywhere. And transport being the blood of the nation development is just becoming thicker and thicker,” he says.
his Favourites Gagandeep loves to swim to relax.“Because when you are in water, you never think about anything else,” he explains. He also likes to travel and indulge in adventures sports. His favourite destination is Zanskar, which is above Kargil (J&K).
cargotalk
JUly 2011
47
Young Brigade Entrepreneurs & Professionals
ocean King Shipping
Freight forwarding: A natural choice GS Chawla – MD GS Chawla, MD, Ocean King Shipping, is a B.com (Hons) from Delhi University and MBA (Marketing - Customer Satisfaction) from University of Liverpool, England, UK. He did Cargo Basic course with Air India, DGR cargo courses, and passed Rule 20 of Customs House. He has been awarded DIPLOMA for participating in the conference and exhibition by The National Association of Latin American and Caribbean International Cargo Agents Chawla is the second generation in his family business of freight forwarding.“Freight forwarding came naturally to me as I grew up following the footsteps of my father, as my true idol. I didn’t really have to choose this. I was truly just born for it,” he expresses. In his opinion, the logistics industry is all geared up and bound to grow manifold in all dimensions in the times ahead. From the old conventional ways of clearing and forwarding, cargo and logistics industry is truly becoming very high tech and gadget oriented.
his Favourites Helping the needy and feeding the hungry is his mantra. Chawla’s other activities are swimming, gym, reading and listening Kirtan. He loves to visit Udaipur in winters and Scotland in summers. He is the follower of the works of Osho, Mother Teresa and Mahatma Gandhi.
Sun Logistics
Always prepared for challenges Haresh S. Lalwani – Joint Managing Director Haresh S. Lalwani, joint managing director, Sun Logistics, was very intrigued by his family owned freight forwarding business. Under the guidance of his father the Late Sunder Lalwani, who was the founder of the parent company Sundersons, Haresh learned the business from his experiences and knowledge. “My wife Dipika, who is my pillar of strength, has always encouraged and supported me at all times and my brother Bharat who stands shoulder to shoulder in our business,” he acknowledges as the secret behind the rise of his company. His idea is to live with challenges to gain success. “It the best way is to be prepared for expected challenges and expect unexpected challenges. As long as you are ready for changes, no challenge is too big.What this industry needs is a proper controlling body which ensures that the companies offering these services are trained, qualified and at the same time financially sound.”
his Favourites Haresh is keen on photography and he finds the destinations Kashmir and Greece fascinating. He loves Indian and Chinese foods. He is a fan of Amitabh Bachchan, Madhuri Dixit, Salma Hayek, AR Rahman and John Grisham.
Interarch
A trained photographer in logistics business Ishaan Suri – Head of Corporate Marketing Ishaan Suri, Head of Corporate Marketing, Interarch did his school education from Modern School Vasant Vihar and graduated from the University of Rochester, New York, USA. He also holds a degree in Economics & Management from the London School of Economics. He is the second generation of management team here at Interarch and has been working with the company since 2005. “Our company, Interarch, is one of the leading turnkey pre-engineered steel building solution providers in India, Interarch has been deeply involved in contributing high speed and high quality steel building solutions to the industry. I am very bullish and positive about the potential business opportunities provided in the cargo industry,” he justifies his decision to enter into PEB business. Proficiency in PEB apart, Ishaan is also a trained photographer and have studied photography for many years.“I spend most of my free time working on photography and creating art out of my photos. I have also studied music most of my young life and pursue other hobbies such as radio controlled aero modeling, he shares.
his Favourites Ishaan is an avid traveller, and most of Europe and East Europe in particular are his preferred places to visit.“I try to spend a few weeks travelling and driving through Europe on quest for adventure and photography,” he expresses.
48 cargotalk JUly 2011
Seagull Maritime Agencies
Comes with a new sight Nitin Agrawal – Director Nitin Agrawal, director, Seagull Maritime Agencies is a graduate. He finds logistics the most challenging and promising industry in India. “The scope of growth and development is endless, though this is the most unorganised sector in modern India as of now. Companies like us coming up with new sight to look into the logistics vertical,” he says. According to him, government should take all major steps to create efficient infrastructure in fracture in terms of road, ports, freight corridors, logistics hubs and awareness. Spending time with close ones, travelling and sports are Nitin’s favourite pastime. His favourite tourist destination in India is Goa and likes to explore Kerala whenever he gets an opportunity.
his Favourites As far as international destination is concerned, for Nitin, London has been good to spend time. He likes Indian cuisine and shooting. He admires Mahatama Ghandhi, Rahat Fateh Ali Khan and Sachin Tendulkar for their indomitable contributions in the respective arena.
Uniworld
Balancing work and family life perfectly Nihar Parida – CEO (logistics & marketing) Nihar Parida, CEO (logistics & marketing), Uniworld is a science graduate with Zoology honours from Utkal University. He did special course on logistics from IIMA.“Way back in 1995, I started reading about this word called logistics. With the Indian economy opening up, logistics seemed the future opportunity as a profession.“I didn’t ask my family before jumping into it. But after marriage it was difficult for my wife to support me because of the long hours that I had to put in. But finally she also saw the growth and the opportunity and understood why I had to really work hard to balance the times,” he adds. One interview of Clyde Cooper, ex- MD, Blue Dart, which was published in a business magazine, inspired Nihar to enter into logistics business.“His foresight and his focus on operation in a service industry still inspire me,“ he maintains.
his Favourites He loves cooking and photography. The favourite subject in photography is human emotions. He loves holidaying in Kerala and Greece. Sean Connery, Susan Sarandon, Amir Khan, Priyanka Chopra, Sonu Nigam and Shreya Ghoshal top his list of beloved ones.
Rahat Continental
Expertise from UK Rahat Sachdeva – VP, sales and operations Rahat Sachdeva,VP, sales and operations Rahat Continental, is a B.Com from Delhi University and pursued a degree in International Business Management from Oxford Brookes University, UK. He did Internship with SDV, UK for 6 months. He also did IATA Basic/ DGR Rahat is a CHA License holder at the age of 24.“While working with SDV I closely observed their organised system of working and always wanted to put that in place in my own family business,” he says. For him, sky is the limit ! He believes that the international logistics market in India is poised for a fast growth in the next ten years which will truly give logistics service providers massive growth and expansion opportunities. However, he felt government should offer better infrastructure for smoother operations which is right now hindering our growth prospect to a great extent. I wish to have the senior industry people guiding us always. Also I look forward to a strong degree of unanimity within the whole industry.
his Favourites Business icon: Cyrus Katgara from Jeena & Co.; Pass time:Travelling around the globe, working out in the gym, listening to music & partying with friends;Tourist Destinations: Goa and Scottish Highlands; Cuisine – Authentic Schezwan Chinese.
cargotalk
JUly 2011
49
Young Brigade Entrepreneurs & Professionals
Jet Freight and Logistics
Keen on exploring new business Richard Theknath – Director Richard Theknath, director, Jet Freight and Logistics has done his graduation in commerce and then an MBA. He joined his family business started by his father and uncle. “I ventured out to business as I found the concept very interesting to do. I really enjoy the thrill of being part of this business and the excitement of achievement,” he says. He observes that despite tremendous scope before logistics companies, there are plenty of challenges like poor infrastructure (both hard and soft), superfluous government policies and complicated systems. “I would like to see the serious involvement of the government implementing some new policies that would benefit the industry. It should also introduce schemes for encouraging Indians to export more,” he urges. Richard is a follower of J Paul Getty as an entrepreneur and Arnold Schwarzenegger, builder and governor of California.
his Favourites Swimming and watching Movies are his favourite pastime. He enjoys Turkish food. Salman Khan, Katrina Kaif, Lenny Kravitz, Lionel Messi, Laila Ali), Robert Kiyosaki, and Van Gogh are his favourite person.
New venture
Supplying manpower for logistics RK Tiwari – CEO RK Tiwari-CEO, New Venture, is a specialised recruitment/manpower solution provider which includes recruitment, manpower outsourcing and payroll/compliance management services to the logistics/SCM industry. Dearth of skilled manpower prompted Tiwari to start services for logistics industry. He is a graduate in English and did his Masters in marketing management. He loves networking with trade professionals, attending conferences to get more knowledge about logistics/SCM industry.
his Favourites Tiwari’s favourite tourist destination in India is all the holy places. He is an admirer of actor Amitabh Bachchan and cricketer Sachin Tendulkar.
Gandhi Automations
Automation is the prime concern Samir Gandhi – Executive Director A Bachelor of Chemical Engineering, Samir Gandhi, executive director, Gandhi Automations, has 21 years of experience in sales, operations and business management of which 17 years are spent in the entrance automations and loading bay equipment industry. During his stint at Gharda Chemicals, as an engineer, he pondered on how the handling of hazardous chemicals can be automated.“It was my dream to have everything automated in the industry. My family was in the business of fabrication and engineering. Hence I quit my job and joined in to support and put wings to my dream,” he unveils. He established Gandhi Automations in 1996 to implement his ideas for the benefit of the logistics industry. Based on his huge experience, he feels that infrastructure of warehouse, road, railway and port requires to be developed on a fast-track basis. His source of inspiration is from reading about the works of Mahatma Gandhi, Jack Welch, Benjamin Franklin, etc.
his Favourites Samir’s favourite tourist destinations are Leh-Ladakh (India) and Lake Tahoe. He likes Indian cuisine. His favourite game is tennis and the personalities he admires include Mahatma Gandhi, Anna Hazare, Aamir Khan, Madhubala and Talat Mahmood.
50 cargotalk JUly 2011
Aaargus Global Logistics
Right choice for fashion logistics Saakshi Trikha – Head of import division Saakshi Trikha, head of import division, Aaargus Global Logistics has completed her school from Doon International School, Dehradun and done her Higher Secondary from Tagore International School,Vasant Vihar, New Delhi. She has acquired BA degree from Pearl Academy of Fashion, New Delhi and is into Fashion Merchandising and Production Management. Saakshi is a DGR qualified professional. “I always wanted to work in a platform giving exposure to international business and accordingly I did my degree course in Fashion and Merchandising.When I graduated, there was a severe recession which hit the fashion industry and it was in crisis. At that time our family business was looking for exploring new opportunities in import cargo forwarding business.The company’s board was searching for a suitable candidate and an offer was made to me, which I reluctantly accepted, as I had no idea about our family business of freight forwarding and logistics,” confesses Saakshi. “Every shipment is a challenge here as there are too many authorities involved to handle one shipment of air or ocean,” she opines. However, she is proficiently performing her job and has carved a niche in the international market.
her Favourites Saakshi’s favourite pastime is watching movies and reading. Goa, Germany and Austria are her favourite destinations. She loves Indian and Chinese cuisine.
AFL Dachser
From legal profession to logistics business Satish Lakkaraju – National Sales Manager Satish Lakkaraju, national sales manager, AFL Dachser, is a B.Sc, LL.B, MBA with a Post Graduate Diploma in Industrial Relations & Personnel Management. Before his present assignment, Satish was in HR and a legal professional and more focused on courier and cargo industry. In his family, he is the first to enter the logistics industry. According to him, though there is huge potential, India is still far behind many countries in cargo and Logistics industry, and the country needs to look into very basic things as today most of the employees are not qualified in the trade. The industry is facing acute shortage of manpower.The government and industry need to stand united on critical issues. He also urges for a common technology platform, because as of now the cargo industry is separately working on its own software. Cyrus Guzder , chairman and managing director of AFL Dachser has been a role model for Satish.
his Favourites Satish loves to travel on a regular basis both for work and holidays. Reading management and motivation books and cooking are his other areas of interest. He has been a sports person from child hood and has played basketball for the University and state.
Kale Logistics Solutions
In favour of more government support Sumeet Nadkar – CEO Sumeet Nadkar, CEO, Kale Logistics Solutions is a CA with over 20 years of experience. His prior experience spans M&A deals, strategic planning, fund raising and financial management. His immediately previous role was as CFO of Kale Consultants. Kale Logistics Solutions is focused on making IT solutions available to small and big players alike, in the logistics and aviation industry. “Professionally, I have always been a keen adopter of IT in the areas of my function and have had the opportunity, on various occasions during my career, to witness the benefits of IT adoption in a number of business areas,” says Sumeet. In his opinion, apart from existing physical infrastructure bottlenecks, low technology adoption is the big challenge before the logistics industry in India. Industry requires increasing of IT investment to bring speed to the operations and communication in the supply chain.“The need of the hour is to remain focused on practical solutions,” he observes. He also maintains that the industry requires greater support from government.The implementation of Value Added Tax (VAT) in 2006 has played a role in reducing logistics costs.The proposed implementation of Goods and Service Tax (GST) could lower logistics costs further.
Core Area “I have been fortunate to gain a broad experience working in leading roles with businesses across verticals like –Polyester, Financial Services, and Pharmaceuticals amongst others. Also, I have experience in design and implementation of several business software applications.”
cargotalk
JUly 2011
51
Young Brigade Entrepreneurs & Professionals
Monopoly Carriers
Transformation from rail to air cargo Suraj Agarwal – Director A B.Com graduate from Hislop College, Nagpur, Suraj Agarwal, director, Monopoly Carriers and general secretary, DACAAI, joined domestic air cargo business to respond to the changing business scenario. He started train cargo transportation services between Delhi and Nagpur in 1990.“By 1995 we were the biggest carriers for the transportation of lottery ticket on domestic network. By 2003 paper lottery was replaced by on line tickets. Accordingly, movement of goods was reduced.That was the time we started focus on air cargo,” shares Suraj. Interestingly, the current trend is the diversion of domestic cargo from air to train because of increase in rates by airlines and hassles at the airport terminals.To reduce stress, Agarwal likes to watch movies and cricket. He also regularly travels for holidays with his family. His favourite destination in India is Nanital, while overseas destination is Dubai.
his Favourites He is the fan of Mohammad Rafi for his versatile singing quality and cricket maestro Sachin Tendulkar.
Indigo
Playing the vital role S.Hari – General Manager, Cargo – India S.Hari, general manager, cargo – India, Indigo, is a post graduate diploma holder in Business Management. He feels that logistics is the industry where someone can perform and excel, especially in these days of growth . Each and every industry needs this support system to achieve its goals. Being airlines service providers, is playing a critical function in the supply chain system. Currently, Hari is performing the vital role of maintaining a healthy relationship with the freight forwarders and logistics service providers. Hari feels that this industry would grow with the support of government and economies like India will witness major growth since government policies so far are very supportive. However, the country needs infrastructure to grow to compete with other countries He also expects support from government especially for tax rebate.“There should be more private public partnership projects and use of technology to improve efficiency,” he recommends. Narayan Murthy, the founder-chairman of Infosys Technologies is the role model of Hari.
his Favourites Hari loves to spend time with his family after his hectic business schedule.His favorite tourist destination is Kerala in India and Singapore in the overseas. He prefers Chinese food. He is the fan of Amitabh Bachchan, Shankar Mahadevan, Sachin Tendulkar and Chetan Bhagat.
Perfect Cargo Movers
Rise from grass root Umesh Tiwari – MD Umesh Tiwari, managing director, Perfect Cargo Movers is started his professional career with freight forwarding company in accounts section. At present he has 20 years’ industry experience and owning a company called Perfect Cargo Movers. After academic qualification Umesh has done several professional courses from IATA and Lufthansa, which helped him to shift from accounts section to sales and marketing division of Trade Wings, a leading freight forwarding company in India. He completed his company’s first year with a humble turnover of Rs 12 crore in the financial year 2008-09, which was followed by commendable Rs 87 crore in 2009-10 and Rs 126 crore in 2010-11. In the current financial year Umesh has a target to touch Rs 150 crore turn over. Perfect Cargo is planning for huge investment in warehousing in this year. According to Umesh, there is no dearth of cargo and freight forwarders can grow significantly. However, existing infrastructure is not conducive for the development of logistics industry in the country.
his Favourites Capt Gopinath, pioneer of low cost airlines model in India is the role model of Umesh. He is also the admirer of Mother Teresa, Anna Hazare, Lata Mangeshkar and Sachin Tendulkar. He loves to visit Rishikesh, Mauritius, Lebanon and Austria.
52 cargotalk JUly 2011
Continental Carriers
Taking the family business to a new high Vaibhav Vohra – Director Vaibhav Vohra, director, Continental Carriers and director, DPD Continental, completed his graduation in Entrepreneurship and Finance from Babson College, Boston, USA in 2007. He returned home to enter the family business established by his grandfather, the Late TN Vohra. “My qualifications, at the time I joined the company, was simply the genetic inheritance,” admits Vaibhav. And, during the course of more than three years now, he has come a long way professionally. During this time, he has won the International Intellectual Achievements Award for Young Entrepreneurs and the International Kohinoor Award for Individual Achievement and International Integration He perceives that the logistics industry has just started to grow in India and has a long way to go.“Growth of a venture is always infested by inherent problems, drawbacks and challenges. So it is with the cargo/logistics Industry. Lack of an efficient infrastructure, erratic pricing factor based on frequent global oil price fluctuations, customs clearance procedures, safety and security, are the serious concerns that require to be addressed immediately,” feels Vaibhav. He likes to follow the footprints of his father, Vipin Vohra (MD, Continental Carriers), who remains a strong voice for the greater interest of the industry
his Favourites “I do enjoy the quality time I spend with my family. My business travels do take me to various, new exotic countries, but I usually take the opportunity to spend a little while in Goa whenever it is possible,” he informs.
Indicon Logistics
From manufacturing to logistics: a new face Varun Thapar – Director Varun Thapar, director, Indicon Logistics did his schooling from the Modern School in Delhi. After that he went to Brown University in Rhode Island, USA. Here he did his double major in European History and Economics. The parent company of Indicon Logistics i.e. Karam Chand Thapar & Bros. (Coal Sales), where Varun is currently an executive director, has been in the bulk logistics business since its inception in 1943. KCT is engaged in the transportation of coal and is the largest logistics service provider in this field.“We have made a recent entry into the Container Logistics industry. We have entered into this space indirectly through the establishment of a world-class facility to manufacture containers and allied products. In that sense, the entry is more into manufacturing than it is into traditional logistics.” informs Varun. He finds the future of the container logistics industry in India very bright.“India as a country is ideally suited towards the multimodal logistics model,” he observes.
his Favourites Varun is quite passionate about music – listening to it and playing it. His favourite tourist destination in India is Goa and abroad it is Italy. He also has a deep affection for the US. He enjoys all kinds of cuisine.
Seahorse Ship Agencies
An expert of rail logistics Vanish Ahluwalia – Regional Manager (North India) Vanish Ahluwalia, regional manager (North India), Seahorse Ship Agencies is a graduate in science, post graduate in Materials Management and Computer Science and Exports Management from IIFT. He started his professional career with CONCOR.“It was early 90s, when I visited a trade fair in Pragati Maidan and noticed EXIM operations which handled containers at its backyard called ICD, Pragati Maidan.This visit introduced me to the world of containerisation. I was the lucky one to be selected by CONCOR 1992, without any Railway background,” he says. He has been instrumental in the implementation of various job functions of Railway Terminal at the then newly built Inland Container Depot at Tughlakabad (ICD TKD), a Greenfield project, now India’s largest dry port and shifting all operations from ICD at Pragati Maidan, New Delhi in order to provide better infrastructure to growing needs of the Trade. His role model is NR Narayana Murthy, the founder-ex chairman of Infosys Technologies, because of his clear vision and goals at the age when India was an emerging IT market.“His philosophy was to have ’Bigger Dream’ and make shorter achievable plans, which drives you towards those Dream.This is required in our industry too, which surely has long way to go and is equally promising,” he underlines.
his Favourites To take a break Vanish loves to travel to any unexplored destination.
cargotalk
JUly 2011
53
Young Brigade Entrepreneurs & Professionals
viksun Carriers
“I love my India” Vineet Chadha – Director Vineet Chadha, director,Viksun Carriers, did his HSC from Hans Raj Model School in 1994 and B.Com from Delhi University. He holds a diploma in sales & marketing. “The only reason for me to join cargo/logistics industry was my uncle, the Late Anil Khosla and founder of our company,Viksun Carriers. He was a dynamic personality who brought and inspired me to remain in this business,”Vineet says.“With his direction I later formed Combined Logistics Solutions in 2000, to tap sea freight business,” he adds.Vineet has been in this business for over 17 years now and expects more robust growth in future than it was possible anytime before.“Apart from the remarkable growth, new sectors are opening up for export / import,” he points out. In his opinion, India’s cargo industry is poised to grow significantly and to achieve this growth the country needs better infrastructure, better policies, trained and organised work force. He advocates for strong Indian companies.“Don’t sell your companies to MNCs and make them strong.We need to work hard and give them tough competition,” he appeals.
his Favourites He loves Indian destinations, though he has visited many countries because of the nature of his business. His favourite places in India are Goa, Kashmir, Rajasthan and Kerala. Apart from Indian food, he also likes Italian and Mexican food.
Master Group of Companies
Urging for strict rules Xerxes P. Master – MD Xerxes P. Master, managing director, Master Group of Companies has done his M.Sc in International Shipping from the University of Plymouth, UK. His family has been active in the shipping business since 1983. “Hence it was only natural to progress up the value chain to include cargo and logistics in our sphere of activities,” he maintains. According to him, the cargo/logistics industry in India is still at a very early stage. The scope and growth is immense considering the fact that India is a growing economy. However, the logistics infrastructure is vastly under-developed. “The cargo and logistics is only about challenges and nothing else. The market is over crowded with many fly by night operators. Our industry is highly de-regularised and hence it results in a lot of under cutting resulting in reputed and genuine players are under tremendous pressure on their bottom line. Today anyone and everyone can become a logistics player!,” he says. In his opinion, this will harm the industry in the long run.This can only be resolved by the government laying down strict guidelines and qualifications to become a logistics player. This will ensure quality players in the industry.
his Favourites Follower of Ratan Tata Xerxes believes that life is not only about business but enhancing ones personality. He enjoys sailing, playing squash, working out in the gym, spending time with his family and watching movies.
Skyways Group
A bold and dynamic professional Yashpal Sharma – Director Yashpal Sharma, director, Skyways completed his school education from Air Force Bal Bharti, New Delhi. He entered Shaheed Bhagat Singh College, New Delhi, to pursue his graduation. He did diploma in Quality Assurance from Pearl Academy of Fashion and diploma in Advance Computers from NIIT.“My father, Mr SK Sharma, has been associated with cargo industry since 1968. He started our company, Skyways Air Services (P) Ltd., in the year 1983. I started to come to office for a couple of hours in my holidays from school and college.This raised my interest in the industry. Later, after completion of my graduation I joined the family business,” he shares. The young, dynamic logistics practitioner appears exuberant about the immense scope before the Indian companies. At the same time, he voiced the serious concerns about the exiting shortcomings, especially visible at the infrastructural front.“Poor physical and communications infrastructure are major deterrents to the logistics sector. Road transportation accounts for more than 60 per cent of inland transportation. Slow movement of cargo due to bad road conditions, multiple documentation requirements, congestion at sea and airports due to inadequate infrastructure and bureaucracy make it difficult for exporters to meet the deadlines for their international customers,” he highlights.
his Favourites He is a big fan of M S Dhoni because of his leadership skills. He is very fond of driving. His favourite destinations are Rajasthan and the South of France. He is the fan of Kishore Kumar, Ranbir Kapoor and Deepika Padukone.
54 cargotalk JUly 2011
Infrastructure Update Airports Authority of India
Cargo activities at AAI airports Impetus on non-metro airports for seamless growth
T
he development and management of international/ domestic cargo terminals is one of the main functions of AAI. The role of airports has become vital to the country and has become an essential and inescapable part of the growth story. Air cargo terminals, being the nuclei of economic activity, assume a significant role in national economy.
AAI Managed Air Cargo Initiatives Establishment of integrated / interim cargo terminal for international cargo handling The pioneering efforts of AAI led to the designing, construction and managing of metro airports viz. Delhi and Mumbai (now under JVCs), Chennai and Kolkata, followed by an interim cargo terminal at Nagpur (1997), which was handed over to the state government enterprise called MIHAN from August, 2009. AAI has also launched cargo terminals at Guwahati (1999), Lucknow (2000), Coimbatore (2001), Indore (2007) and Amritsar (2007) to handle international 56 cargotalk JUly 2011
The recent activities and achievements of the Airports Authority of India (AAI) pertaining to air cargo movements within and from/to India are portraying the new face of the public sector undertakings. Gp Capt DC Mehta, advisor (MR), chairman secretariat, AAI, highlights some of the infrastructure related initiatives. cargo and courier consignments. AAI also facilitated the operationalisation of the Air Cargo Complex at Port Blair airport, for domestic cargo operations on October 1, 2010, to boost the export of marine products and agriculture / floriculture related products from Port Blair.
Establishment of interim domestic cargo / courier terminal An interim domestic cargo / courier terminal was commissioned at Kolkata Airport from September 10, 2008. The domestic carriers were allotted space on common user basis to carry out their domestic cargo/courier business.
Projects in Hand Launching of Ground Handling Agency at AAI airports After leasing the Mumbai and Delhi airports for a period of 30 years, extendable for another 30, AAI invited tenders for grant of separate license to Ground Handling Agent (GHA), to provide comprehensive ground handling services in 2009 at the airports in India located in the Northern,
VP Agrawal, chairman, AAI
Southern and Western regions, which have both international and domestic flights. AAI has therefore awarded the comprehensive ground handling responsibility at certain airports for a period of 10 year, extendable for 5 years, which would be commencing their operations shortly. The airports are Amritsar, Jaipur, Lucknow, Srinagar and Varanasi in Northern region, Trivandrum, Calicut, Coimbatore, Mangalore and Trichy in the Southern region and Ahmedabad, Goa and Pune in the Western region. The appointment of GHAs is expected to bring more professionalism
Infrastructure Update Airports Authority of India
CARGO TONNAGE PROFILE AT AAI MANAGED AIRPORTS INTERNATIONAL CARGO TONNAGE (WT. IN MT.) AIRPORTS
FY: 2010-11 (APR-MAR)
%
FY: 2009-10 (APR-MAR)
CHANGE
Export
Import
Total
Export
Import
Total
Chennai
119741.64
1135771.21
255512.85
92553.33
119973.97
212527.30
20.23
Kolkata
25863.58
19585.66
45449.24
23141.01
16994.20
40135.21
13.24
Coimbatore
3050.52
127.25
3177.77
2460.53
126.45
2586.98
22.84
Guwahati
0.00
0.63
0.63
0.62
1.53
2.15
-70.47
Lucknow
393.84
205.38
599.21
255.99
158.07
414.07
44.71
Amritsar
6828.70
65.81
6894.51
255.99
73.72
3161.21
118.10
Total
155878
155756
311634
121499
137328
258827
20.40
in airport operations, with a better security environment, including cargo handling with more yield to AAI in the process.
Cargo Potential Studies at non-metro airports The cargo potential study for Pune, Srinagar, Surat, Trichy and Guwahati airports has been carried out and is being evaluated for further decision making. The Cargo Potential Study is under process for Mangalore, Varanasi and Raipur airports while revised terms of the MoU at Calicut airport for cargo operations will be undertaken in the near future. The introduction/creation of Bonded cargo facility at Amritsar is also on the anvil, and also at airports which have limited cargo upliftment capacity and lack proper air connectivity.
Major Improvements at Chennai In Chennai, the AAI has added Phase-II in the Cargo Complex creating storage / 58 cargotalk JUly 2011
processing area of 15600 sqm. The existing cargo capacity for import and export is 363712 MTs., against the demand of 219562 MTs. AAI is going to add PhaseIII, admeasuring an area of 26000 sqm
thus increasing the capacity by 481000 MTs. According to AAI’s CP&MS Report on Capacity Measurement Survey, this capacity will be sufficient to handle the cargo up to 2019-20.
Infrastructure Update Airports Authority of India
Phase-III of cargo complex at Chennai will have Automated Storage Retrieval System (ASRS) with 8000 storage bins, with a capacity of 1.3 MTs. each. This facility will not only provide better inventory control but add to the efficiency, accuracy and zero un-traceability and less human interventions; complete automation in storage and handling.
DEVELOPMENTS AT NON-METRO AIRPORTS
The fully mechanised Elevated Transfer Vehicle (ETV) is also under the process of being installed, with 198 positions for ULD storage, thus enhancing the total capacity by 990 MT. The development works entail an expenditure of Rs.147 crore by AAI. In Kolkata Airport, the existing cargo storage / processing area in Phase-I is 21906 sqm. with a capacity of 125000 MTs., against the present demand of 41000 MTs. The current facility will suffice up to 2019-20. There is no requirement / proposal to add any further infrastructure at this stage. The old cargo terminal, with an area of 4000 sqm., was converted into a separate exclusive domestic / courier terminal in the last quarter of 2008. The airport has introduced ASRS at its Cargo complex with 1944 storage bins having capacity of 1.0 MTs each. The airport has also launched ETV at 70 positions enhancing the storage capacity to the tune of 476 MT. The developments plans have cost the AAI Rs.52 crore. AAI has successfully implemented e-trade/EDI connectivity, including the launch of ICES 1.5 version at Kolkata and Chennai airports in September 2010 and January 2011, respectively, while the implementation at Amritsar and Coimbatore airports was scheduled to be completed by June, 2011.
Cargo Airport and Cargo Hub AAI has conducted a site survey to set up of an international cargo airport in the NCR Region (by providing land and through PPP mode) on behalf of Government of Haryana. The site was also found suitable for a Cargo Village, in future, near Delhi airport. The 60 cargotalk JUly 2011
Cargo being handled departmentally at Amritsar, Indore, Lucknow, Guwahati, Coimbatore & Port Blair. The area available at the existing cargo terminals is sufficient to meet the existing / future demands for next five years. The developmental plan for Srinagar and Trichy Air Cargo Complex is at the planning stage at present, while construction of the cargo facility at Surat airport is linked with the air connectivity issue ex-Surat airport which is under consideration. The concept of Bonded Trucking between metro/non-metro airports has been gaining the attention of Policy Makers, which is being actively considered as an alternative where air connectivity or payload problems are found. AAI has already taken the modernisation of 35 non-metro airports, which will cost around Rs. 5000 crore. Among the 35 non-metro airports, modernisation of 20 has been completed, with 10 in different stages of project activity, and the balance 5 in the planning stage
formal approval from MoCA is awaited on the project. AAI is also emphasising on the Regional Airport concept, whereby the capital and commercial cities in the states, where infrastructure is now improved, would be utilised under the Regional Airport concept to provide for Hub & Node types of airport operations. In this direction, AAI has decided to undertake domestic cargo handling at metro/non-metro airports in a phased manner (including by modifying the old/ redundant passenger terminals at nonmetro airports, so as to improve the existing infrastructure by introducing Common User Domestic Cargo Terminal and to facilitate the creation of Cargo Hubs at major airports in the process. In pursuance of MoCA directives recently, AAI has prepared a ‘concept paper’, for undertaking the cargo related activities with a view to diversify and maximise the cargo revenue as a forward integration process. Major activities will include: Development of Common user Domestic Cargo, Bonded Trucking, X-ray screening / certification by AAI, Door to door delivery of processed cargo, Air Cargo Community System, courier / express cargo handling, Cargo Hub development, development of Cargo Freight City and setting up of Free Trade Zone. Meanwhile, the Economic Advisory Council of MoCA has set up a working group on air cargo and express cargo service industry, to give recommendation on creation of infrastructure, bench marking and standardisation of service parameters, simplification of cargo clearance procedures, creation of cargo hub and cargo village in India and establishment of AFS/ICDs It is now expected that the initiatives taken by MoCa and AAI will have far reaching effects in reducing the congestions and pressure on two major airports, viz Delhi and Mumbai airports. The AAI initiatives are highly commendable in respect of the development of the country’s economy—both in domestic and international sectors.
Shipping & Ports Statistics
MANgALoRe PoRT RegISTeRS hANDLINg oF MoRe ThAN 10,000 TeuS According to P Tamilvanan, chairman of NMPT, the beginning of current financial year shows encouraging trend for the growth of container traffic at the port. Meanwhile, the container handling at New Mangalore Port Trust (NMPT) crossed 10,000 Teus (twentyfoot equivalent units) mark in the first 75 days of the current financial year. In the same period last year the performance was 8,917 Teus thuus recording a growth of 13 per cent.The port handled 40,158 Teus of containers during 2010-11. of the total handling of 10,076 Teus at the port, nearly 13.15 per cent of the cargo was contributed by the mainline vessels calling at the port during the period. Four mainline container vessels from east and west Africa called at New Mangalore till June 14, contributing more than 1,300 Teus to the total container traffic of the port.The port handled one mainline container vessel in April and two in May. The fourth mainline container vessel of the current financial year – m.v. konard Schulte – brought 399 Teus of raw cashew cargo from Port of Contonou in Benin of west Africa.
64 cargotalk JUly 2011
TRAFFIC hANDLED AT MAJoR PoRTS (DURING APRIL To MARCh, 2011* vIS-A-vIS APRIL To MARCh, 2010) (In ‘000 tonnes)
Ports 1
April to March Traffic 2011* 2010 2 3
KOLKATA Kolkata Dock System Haldia Dock Complex TOTAL: KOLKATA
12540 34892 47432
13045 33378 46423
-3.87 4.54 2.17
PARADIP VISAKHAPATNAM ENNORE CHENNAI TUTICORIN COCHIN NEW MANGALORE MORMUGAO MUMBAI JNPT KANDLA
56030 68041 11009 61460 25727 17873 31550 50022 54585 64299 81880
57011 65501 10703 61057 23787 17429 35528 48847 54541 60763 79500
-1.72 3.88 2.86 0.66 8.16 2.55 -11.20 2.41 0.08 5.82 2.99
569908
561090
1.57
TOTAL:
% Variation Against Prev. Year Traffic 4
Mundra Port completes acquisition of Coal Terminal in Australia Mundra Port had announced on 3rd May, 2011 the signing of a Sale and PurchaseAgreement in respect of Abbot Point x 50 Coal Terminal (APCT) following theinternational competitive bidding process conducted by the State of queensland inAustralia. The company was working on completion of the acquisition by signing the various lease agreement and other transfer documents. The company has announced that with the signing of the Long Term Lease Agreement and with the transfer of shares it has completed the acquisition of APCT. Gautam S. Adani, chairman & managing director, announcing the completion of the acquisition said that the entire deal from selection of bidder to completion was completed in record 28 days. With the execution of various documents with officials of the Government of queensland, Mundra Port has become the owner of APCT. The name of the Company has been changed to Adani Abbot Point Terminal Pvt Ltd. The management team from Mundra is in place and has taken over ownership and oversight of the operations of APCT effective from June 1, 2011.
SovEREIGN ShIPPING JoINS CoNqUERoR GLobAL FoRWARDING GRoUP The freight forwarding company Sovereign Shipping has been selected as the exclusive representative of the Conqueror Freight Network in Ludhiana.forwarders,” says Antonio Torres, founder of the Madridbased Conqueror group. “Strong players like Sovereign Shipping will ensure our success as a global group,”Torres added. The network is now seeking qualified members in Cochin, Kolkata, Vadodara, Coimbatore, Hyderabad, Jaipur and Kanpur.
IHC Merwede launches trailing suction hopper dredger
The naming and launch ceremony for the 11,650m³ trailing suction hopper dredger, Breughel, recently took place at the IHC Merwede shipyard in Krimpenaan den IJssel, The Netherlands. IHC Merwede is building the ship for the DEME (Dredging, Environmental & Marine Engineering) Group. The contract for the design, construction and delivery of the vessel was signed between DEME and IHC Dredgers in June 2010, and the keel was laid on 15 November 2010. The vessel will be delivered by November 2011. The design of the Breughel is based on the trailing suction hopper dredgers Brabo and Breydel, delivered by IHC Merwede in 2007 and 2008 respectively. The limited draught, combined with large width, ensures that the Breughel can be used in conditions where other ships of this class would be restricted. The vessel’s one-man operated bridge is equipped with the latest state-of-the-art console, which combines both the dredging and sailing functions.
cargotalk
JUly 2011
65
National News New Launch
ALL ThE WAy ExPRESS LAUNChED To oFFER NvoCC SERvICES With an objective to offer NVOCC services a new company called All The Way Express (ATW) has recently been for med in NewDelhi. ATW offers a unique solution to the freight forwarders across the world with multi modal solutions. Sea-air and air-sea consolidation services will be the primary focus of the company. Both services are available for export as well as import cargo. The service network includes Delhi, Mumbai, Chennai, Bengaluru, Los Angeles, San Francisco, Dubai, Hong Kong and Singapore. The company also offers online tracking and warehousing facilities at all the transit points. Add to these, warehousing and distribution across the world, special pallet rates for retail shipments, ULD management, project cargo movement, special cargo handling, exhibition cargo, DGR cargo and consultancy services will also be offered by the new company.
66 cargotalk JUly 2011
INTERMEC LAUNChES oFFICES IN DELhI, MUMbAI AND ChENNAI Intermec, a leading provider of rugged mobile computing and data collection systems, bar code printers, label media, and RFID solutions recently announced that it has completed a significant investment in India by opening direct liaison offices in Mumbai, New Delhi and Chennai to service the growing demand from customers and partners for its market leading solutions. These offices expand Intermecs ability to offer local advice, consultation and support to its fast growing base of business partners and their customers in this dynamic country. Corporations in India have been strong adopters of Intermec solutions, particularly in the logistics, transportation and consumer packaged goods market where reliability, quality, and performance of systems are critical. Commenting on the new launch Julian Sperring-Toy, regional director and general manager for Intermec India said, “By investing in significant on the ground resources, we are well positioned to help and assist our local or global customers and business partners in deploying our market leading mobility and data collection solutions.” According to him, India and the South Asian markets are the core elements of the company’s growth strategy for the coming years and Intermec has a long term investment plan to support it.
Study and Survey Supply Chain
Six key principles for logistics service providers to grow fast The winners in this era of rapid change will be manufacturers, retailers and logistics service providers who re-engineer the long, thin supply chains they have evolved over the last decade, says Sundar Swaminathan, senior director– industry strategy and marketing, oracle.
A
t the beginning of the year, 2011 seemed to be the year in which the global economy would reach pre-financial crisis levels as economies around the world continued to gain momentum, leading to increased manufacturing output, retail sales, and global freight flows. Now, the world watches with a mixture of anxiety and hope as people revolutions sweep across the Middle East and North Africa. In the short term, oil prices will spike, driving up transportation costs; but in the long term, this could usher in an era of growth to regions that have been suppressed for years, driving up consumer demand for a range of goods and offering new opportunities for logistics. The winners in this era of rapid change will be manufacturers and retailers who 68 cargotalk JUly 2011
re-engineer the long, thin supply chains they have evolved over the last decade and move to a model in which they sell where they build and build where they sell. The shorter supply chains will be less vulnerable to transportation cost spikes, carry lower inventory, and be more demand sensitive. Third-party logistics providers have a key role to play in this transformation as shippers — manufacturers and retailers — outsource a greater portion of their logistics planning and execution. The 2010 Annual Third Party Logistics (3PL) Study, conducted by Capgemini Consulting, the Georgia Institute of Technology and Panalpina, showed that companies spend 11 percent of their sales revenue on logistics, and outsource about 42 percent of logistics spend. The logistics service provider (LSP) industry has been growing steadily over the last two
Sundar Swaminathan
decades with revenues over US$500 billion globally. Shippers outsource a range of services, from domestic and international transportation to warehousing, forwarding, customs brokerage, and supply chain redesign. LSPs offer anywhere from one to 16 separate services, with most LSPs offering eight or more services.
IT Capability Gap The 3PL Study also showed that LSPs still face an IT capability gap — only 54 percent of the shippers surveyed were satisfied with the IT capabilities their LSPs offered. LSPs have been bridging the IT capability gap over the last three years, but there is still a long way to go. Two decades of strong growth through mergers and acquisitions, decentralized IT, and a huge growth in geographical footprint, clients, industries, and services offered have resulted in a complex IT environment with legacy platforms, point-point integrations, multiple data silos, and custom applications that are difficult to maintain, difficult to modify, and expensive. LSPs often have operating platforms for each service they offer, and an instance of the system for each client, leading to hundreds of warehouse management system (WMS) instances and transportation management system (TMS) instances that need to be maintained to support clients. IT is the bottleneck for a large number of LSPs, limiting the agility and functionality LSPs need to bring innovative new services to market quickly, deliver existing services reliably at the lowest cost, and meet ever-increasing regulatory requirements. LSPs have used short-term fixes for years to overcome these challenges, using manual processes to meet client needs, but this is not sustainable. LSPs have to simplify and modernize their information technology environments, standardize and automate business processes, and transition to logistics platforms that allow them to provide multiple services to multiple clients on a single configurable platform.
Six Principles of Transformation Leading logistics providers have realized that IT is a strategic differentiator and that investments in IT are critical to long-term success. As a result, they are making the investments to transition their information technology environments to this state. In working with shippers, LSPs and carriers
over the last two decades, the Oracle Transportation industry team has identified six key principles that characterize class information technology platform.
Automate to scale the business and improve profitability: Standalone systems, disconnected and labor intensive processes, and the lack of decision support for complex tasks such as quoting, routing, and shipment consolidation have resulted in LSPs adding manpower to support growth, high cost of service, and low margins. Logistics leaders will automate routine and complex tasks and orchestrate enterprise workflows, allowing the workforce to focus on the customer and on exceptions that require human expertise.
Leverage technology to speed time-to-market: Multiple systems for logistics planning and execution and financials, and the lack of standard processes, have increased the time LSPs need to introduce new services and made it difficult to deliver them consistently. Logistics leaders will adopt global quote-to-cash platforms that allow new services to be introduced quickly and delivered reliably by ensuring adherence to contracts, automated workflows, and pro-active monitoring.
Configure solutions without customizing processes and IT: Inflexible legacy platforms for transportation and warehouse management have forced LSPs to add a new instance of the system for each client, resulting in hundreds of TMS and WMS instances that needed to be setup, configured, and maintained. LSPs have also had to rely on workforce training to ensure adherence to customer-specific requirements. Logistics leaders will adopt TMS and WMS platforms that can support multiple services for multiple clients, reducing the time and costs to onboard each client. Logistics leaders will also automate client business rules, ensuring service reliability and customer satisfaction.
Collaborate with customers, partners, employees: Collaboration with customers using spreadsheets, complex EDI integrations, high error rates, and long lead times for setting up electronic communications with shippers and carriers have limited the realtime collaboration abilities of LSPs. Leaders in logistics will use new collaboration platforms that support data exchange in a variety of formats, integration using web services, and automated workflows. This will reduce the time and costs for integration, increase reliability, and promote proactive workflow monitoring.
Measure to drive improvement: KPI data collection and computation tends to be manual and delayed at most LSPs, making proactive process improvement difficult. Leaders in logistics will improve decision-making through performance dashboards that allow visibility to KPIs and provide the insights needed to identify root causes and improve processes. Leaders in logistics will invest in robust data collection systems for KPIs, automate KPI computations and make continuous process improvement a priority.
Deliver one version of the truth Most LSPs have a real challenge closing their books quickly because revenue and expense data is split across multiple systems. Customer data, shipment data, and other enterprise master data is stored in multiple systems as well, making it difficult for LSPs to get a consolidated view of customers and operations. Leaders in logistics will centralize customer, shipment, and enterprise master data, lowering IT costs and accelerating decision making. They will also invest in tools for financial consolidation. The LSP Imperative: LSPs that continue to do business as usual will not survive in this of rapid change. LSPs that realize IT is a strategic differentiator and adopt modern leadership principles will become the industry leaders of tomorrow. (This article was originally published on Oracle’s Profit Online - oracle.com/profilt) cargotalk
JUly 2011
69
Cargo Performance Traffic Update
TRAFFIC STATISTICS
DoMESTIC FREIGhT Freight (in Tonnes) For the Month S. No. Airport
March 2011
March 2010
For the period April to March % Change
2010-11
200910
% Change
(A) 11 International Airports 1 2 3 4 5 6 7 8 9 10 11
Chennai 8245 7832 5.3 93336 71246 31.0 Kolkata ......................................... 7198 ...............6366 ................. 13.1 .............84861 ............... 70168 ............... 20.9 Ahmedabad 1284 916 40.2 15060 11018 36.7 Goa 437 321 36.1 4247 3460 22.7 Trivandrum ....................................121 .................110 ................. 10.0 ...............1540 .................1442 .................6.8 Calicut 17 44 -61.4 282 368 -23.4 Guwahati 705 548 28.6 8520 5276 61.5 Jaipur 636 609 4.4 8177 5763 41.9 Srinagar..........................................181 .................134 ................. 35.1 ...............2016 .................1815 ............... 11.1 Amritsar 5 12 -58.3 161 329 -51.1 Portblair 202 203 -0.5 2299 2290 0.4 Total 19031 17095 11.3 220499 173175 27.3
(B) 6 JV International Airports 12 13 14 15 16 17
Delhi (Dial) 17551 16495 6.4 209113 163913 27.6 Mumbai (Mial) 17988 15473 16.3 199831 174184 14.7 Bangalore (Bial) ........................... 7357 ..............6918 .................. 6.3 ............87515 .............. 71893 .............. 21.7 Hyderabad (Ghial) 3053 2746 11.2 36390 30164 20.6 Cochin (Cial) 755 667 13.2 8610 7857 9.6 Nagpur (Mipl) 449 707 -36.5 9145 4717 93.9 Total 47153 43006 9.6 550604 452728 21.6
(C) 9 Custom Airports 18 19 20 21 22 23 24 25 26
Pune 2327 2578 -9.7 27828 17845 55.9 Lucknow ........................................384 ................299 ................ 28.4 ..............3492 ................3407 ................2.5 Coimbatore 624 740 -15.7 6637 6285 5.6 Mangalore 28 30 -6.7 305 382 -20.2 Trichy 0 0 0 25 -100.0 Patna ..............................................260 ................221 ................ 17.6 ..............3279 ................1928 .............. 70.1 Bagdogra 94 88 6.8 1114 869 28.2 Varanasi 31 23 34.8 422 363 16.3 Gaya 0 0 0 0 Total 3748 3979 -5.8 43077 31104 38.5
(D) 20 Domestic Airports 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46
Bhubaneswar 271 Indore 471 Visakhapatnam ..............................284 Jammu 106 Vadodara 166 Agaratala 595 Chandigarh 53 Raipur ............................................217 Imphal 592 Madurai 56 Udaipur 0 Ranchi ............................................148 Bhopal 82 Leh 131 Aurangabad 115 Rajkot 68 Dibrugarh 41 Tirupati ............................................. 0 Silchar 48 Juhu 27 Total 3471
(E)
Other Airports Grand Total (A+B+C+D+E)
70 cargotalk JUly 2011
83 73486
197 408 ................. 89 114 180 602 26 ................149 355 44 0 ................. 72 68 112 152 43 29 ................... 2 27 28 2697
37.6 15.4 .............. 219.1 -7.0 -7.8 -1.2 103.8 ................ 45.6 66.8 27.3 .............. 105.6 20.6 17.0 -24.3 58.1 41.4 ............. -100.0 77.8 -3.6 28.7
2667 5380 ..............1107 1371 2099 7105 549 ..............2356 6002 580 0 ..............1306 1175 1426 1841 933 322 ................. 13 480 311 37023
1998 5301 ..................938 1157 1745 6755 219 ................1593 4719 574 0 ..................677 924 1368 1247 635 331 ....................23 342 383 30929
33.5 1.5 .............. 18.0 18.5 20.3 5.2 150.7 .............. 47.9 27.2 1.0 .............. 92.9 27.2 4.2 47.6 46.9 -2.7 .............-43.5 40.4 -18.8 19.7
73 66850
13.7 9.9
995 852198
1057 688993
-5.9 23.7
Terminal News Air Cargo
CSC TIES UP WITh NIIT TEChNoLoGIES AND SIEMENS CSC India has announced a strategic partnership with NIIT Technologies and Siemens for strengthening its technology and material handling function at the Greenfield Cargo Terminal at IGI Airport, New Delhi. Addressing a press conference in New Delhi, Radharamanan Panicker, Group CEO, CSC, said that the partnership would create a world class cargo handling facility for the integrated cargo complex at Delhi International Airport which is funded by IDBI. The new terminal will be able to handle 1 million tonne cargo per year when it would be fully operational, in phased manner. “With the introduction of NIIT Technologies’ solution as the core cargo handling system, CSC will derive immense benefit in faster processing of cargo, process improvement, better monitoring and control, visibility across the operational process and reduced process redundancies which will result in enhance customer services,” said Panicker. He also emphasiesd that the tie up would enable seamless connectivity with CSC’s customer through EDI and web online interface. Elaborating on the Siemens services for the Green field Cargo Terminal in Delhi Airport, Panicker maintained
72 cargotalk JUly 2011
Radharamanan Panicker addressing the press conference
that Siemens’ Mobility division will outfit the main terminal (T1) with a full air cargo handling solution (MHS) which will include Automatic Storage and Retrieval system, cargo workstation, pallet and container handling storage and handling system, all managed and controlled by high level IT systems. The press conference was also attended by Arvind Mehrotra, EVP, NIIT Technologies; Tilakraj Seth, VP and head mobility division, Siemens Limited and Tushar Jani, chaiman, CSC India.
Cargo Performance Traffic Update
TRAFFIC STATISTICS
INTERNATIoNAL FREIGhT Freight (in Tonnes) For the Month S. No. Airport
March 2011
March 2010
For the period April to March % Change
2010-11
2009-10
% Change
(A) 11 International Airports 1
Chennai
2
Kolkata ........................................ 4109 ............. 3602 ................14.1 ............. 45096 .............. 40088 .............. 12.5
27326
26114
4.6
295497
249522
18.4
3
Ahmedabad
923
1348
-31.5
12980
11657
11.3
4
Goa
420
223
88.3
2535
917
176.4
5
Trivandrum
3582
3638
-1.5
37795
31708
19.2
6
Calicut ......................................... 2246 ............. 2114 ..................6.2 ............. 21964 .............. 17132 .............. 28.2
7
Guwahati
8
Jaipur
9
Srinagar .............................................0 ................... 0 ..................... - ......................0 ...................... 0 ................... -
10
Amritsar
471
471
0.0
5834
2784
109.6
11
Portblair
0
0
-
0
0
-
39091
37551
4.1
422099
354254
19.2
Total
0
0
-
0
0
-
14
41
-65.9
398
446
-10.8
(B)
6 JV International Airports
12
Delhi (Dial)
36204
34673
4.4
390932
333473
17.2
13
Mumbai (Mial)
43983
41825
5.2
470402
408452
15.2
14
Bangalore (Bial) ......................... 13317 ............11786 ................13.0 ............135263 .............102751 .............. 31.6
15
Hyderabad (Ghial)
4024
4004
0.5
42097
36295
16.0
16
Cochin (Cial)
3449
3211
7.4
32198
32779
-1.8
17
Nagpur (Mipl)
31
33
-6.1
346
279
24.0
101008
95532
5.7
1071238
914029
17.2
Total
(C) 9 Custom Airports 18
Pune
19
Lucknow
0
0
-
0
0
-
73
62
17.7
586
378
55.0
20
Coimbatore .....................................39 ................. 62 ...............-37.1 ................. 390 .................. 702 .............-44.4
21
Mangalore
22
Trichy
23
Patna
0
0
-
0
0
-
154
112
37.5
1775
1349
31.6
0
0
-
0
0
-
24
Bagdogra ...........................................0 ................... 0 ..................... - ......................0 ...................... 0 ................... -
25
Varanasi
0
0
-
0
0
-
26
Gaya
0
0
-
0
0
-
Total
266
236
12.7
2751
2429
13.3
0
0
-
76
0
-
20 Domestic Airport (D) 20 Domestic Airports (E) Other Airports Grand Total (A+B+C+D+E)
74 cargotalk JUly 2011
0
0
-
0
0
-
140365
133319
5.3
1496164
1270712
17.7
Study & Survey Statistics
IATA Cargo eChartbook Q-2, 2011
Changes in US sourcing patterns
Changes in US sourcing patterns US cell phone air imports 2010 Tons (K) China
47
S. Korea
’05-’10 CAGR +19%
-5%
16
US salmon air imports Tons (K) 120 100 Other Faroe Islands
80
Taiwan
6
+22%
60
Mexico
5
+54%
40 82%
Malaysia 3
-10%
UK 29%
20
Chile Norway
0 2000 2002 2004 2006 2008 2010
Although from a low base, Mexico is greatly expanding its cell phone exports to the US
Due to the infectious salmon anemia virus in Chile, the US started importing from other countries
Source: Seabury Global Trade Database
Key measures air cargo market
Key measures air cargo market Air traffic & cargo capacity
Int. load factor development
The market environment has deteriorated significantly since the previous issue of the IATA e-chartbook, at the end of the first quarter. Shocks have hit the demand side in Japan and MENA and the cost side with a further surge of fuel prices. The squeeze on profit margins is becoming evident as asset utilisation declines. However, world trade has continued to increase at a robust pace. Economic growth is slowing but it is expected to remain more robust than it did following the 2008 oil price spike, which suggests that cargo markets and cargo profitability will be squeesed further but should be able to stay in profit this year.
Key Points: Cargo profitability has already been reduced by rising fuel costs and falling asset utilization However, economic growth and world trade continues to provide a supportive environment for air cargo
Late 2010, growth in capacity exceeded demand again for the first time since the recovery of the air cargo markets started
Recently ocean freight has benefited from world trade growth but a further upward leg for air freight is expected
Source: IATA Monthly Traffic Results, IATA Cargo E-chartbook; KLM website; US Energy Information Administration; Seabury Surcharge Model; Seabury analysis
India is shifting away from fashion goods export
India is shifting away from fashion goods exports US air imports from India 2010 Tons (K)
Fashion goods
34 24
Chemicals (pharma)
19
Machinery parts
’05-’10 CAGR
Europe air imports of fashion goods Tons (K) 250
-7% +25% +11%
Vehicles & parts
9
+20%
Ind. consumables
8
+6%
Household goods
7
-8%
High tech
6
+8%
China +7%
200
150 100
-1%
India
50 0 2005
2006
2007
2008
2009
2010
Rising fuel costs and some cost increases elsewhere, such as labor, together with falling utilization are putting pressure on profitability With twin-aisle capacity increasing but air cargo volumes slipping, load factors are falling and freighter utilization has declined 8 per cent Translating further growth of markets and revenue into profit will be much harder
Source: Seabury Global Trade Database
76 cargotalk JUly 2011
Express Cargo New Launch
Blue Dart expands ONE RETAIL footprints for domestic & international customers
T
In a bid to create a simplified customer experience through the blue Dart - DhL oNE RETAIL stores, blue Dart has recently added 20 new oNE RETAIL stores thereby taking the count to 422 retail outlets in India. CT bureau
he ONE RETAIL initiative by Blue Dart and DHL leverages the leadership position enjoyed by both the brands. It offers customers the freedom to avail of domestic as well as international products in both Blue Dart and DHL outlets. Both Blue Dart and DHL are well established brands, with strong brand recall and loyalty. They draw on each others’ strengths through collaboration and sharing of knowledge and best practices that make
Blue Dart and DHL have formed an X-BU (cross Business Units) initiative that aims to ‘collaborate and simplify’ customers’ lives it a formidable unified team in the eyes of the customer. The ONE RETAIL store is yet another initiative where the customers stand to gain from Blue Dart’s domestic network and DHL’s global reach. “Blue Dart and DHL have always focused on customer requirements and have constantly innovated to ensure customer delight. Our customers’ trust 78 cargotalk JUly 2011
and loyalty have driven us to design distinctive and customized services like t h e ONE RE T AI L . W e h ave a s t ro n g team and an extremely well thoughtout strategic focus till 2015. As part of this strategy, Blue Dart and DHL have formed an X-BU (cross Business Units) initiative that aims to ‘collaborate and s i m p l i f y ’ c u s t o m e r s ’ l i v e s . This also includes increasing the count of ONE RETAIL stores from 422 to 1000 across
Anil Khanna
India by 2015,” elaborated Anil Khanna, managing director, Blue Dart Express.
Guest Column Opportunities
Logistics Industry in India The sunrise sector requires skilled manpower
T
hough logistics industry in India is currently plagued with poor infrastructure, high costs and government regulations, with rising investments in infrastructure and the central government’s decision to cut excise duty rates for manufacturing goods, the industry is set to grow at a healthy pace.It is estimated to grow by 15-20 per cent by the year 2015, from the present growth of 8-9 per cent. In fact, the market share of the Indian logistics industry that is currently at 6 per cent is expected to reach up to 12 per cent by 2015. In order to cater to this increasing demand, infrastructure development is necessary, coupled with trained professionals with a new vision. We have seen very high (unexpected) growth in volumes both for exports as well as imports from and to various regions in India, in the past two decades, but infrastructure has not developed at the same pace. Ports/ ICDs/terminals and Rail operators being less in numbers, has not only encouraged monopolisation but also kept the cost very high, making us uncompetitive in the international market. In view of the fast growing economy in India and demand from international market, supply chain and logistics industry has a pivotal role to play. Companies will need to either re-model or modify their supply chains in order to keep pace with the country’s economy. And, for that they would require skilled man-power in logistics to support the market demand. 80 cargotalk JUly 2011
The global logistics industry is estimated to be worth over US$ 300 billion.The sector currently employs about 40 million people in the world, a number that is expected to rise rapidly with growth expectations in the sector. vanish Ahluwalia
Vanish Ahluwalia
Growth Areas The domestic logistics industry, which is expected to generate business worth $125 billion in the next two years, would need over four lakh trained professionals in the senior management category in the next two to three years. In order to meet the growing demand, more trained professionals would be required for this industry.The main areas that would require skilled manpower are – professional logistics companies specialising in providing transport, warehousing and other logistics support to other companies; manufacturers and major retail chains. The sectors that would demand trained manpower include FMCG, retail, pharma, aviation, IT/ITES, etc. In addition, I think the professional who has spent longer period in this trade and learnt through their personal experiences, must come forward and
take responsibility of training Gen-X. The gap between upcoming institutions dedicated to logistics and the trade could be reduced by presence and participation of such professionals, who could play a vital role and help develop professionals and future entrepreneurs with a new vision.As globalisation continues, the gravity of international trade is shifting towards Asia. Large MNCs in manufacturing and retail are setting shop in India hence there is a pressing need to produce logistics professionals with right skills and orientation, to benefit from the enormous opportunities. Technological changes in the logistics industry demand a trained workforce in all areas of the sector, which could meet international standards.
Ray of Hope Though the industry, so far, has been employing intelligent graduates with no formal training in logistics, various institutes now offer professional courses and candidates with professional qualifications definitely have an edge.Students who are interested in this challenging field should opt for a professional course in logistics. Various institutes and management schools now offer degree and diploma programmes in logistics, and hopefully the number of logistics institutions would also increase. (Vanish Ahluwalia is regional manager (North India), Seahorse Ship Agencies)
Calendar of Events International Events
Compack Chennai 2011
August 4, 2011 Chennai Trade Centre Chennai, India Contact: Smart Expos, Chennai, India Website:
Air Cargo handling Conference September 19-21, 2011 Sheraton Amsterdam Airport Hotel & Conference Centre Amsterdam, The Netherlands Telephone: +44 (20) 8660 9116 E-mail: parveen@evaint.com
China International Logistics and Transportation Forum october 12-14, 2011 Shenzhen Convention and Exhibition Center Telephones: +001-213-628-9888 E-mail: calvincheng@shenzhenoffice.orgevents.com
5th Sustainable Supply Chain Summit october 2011 San Francisco, USA Telephone: +44 (0)207 375 7167 E-mail: mmuir@eft.com
6th Thai Ports and Shipping 2011
November 25-27 Imperial Qeen’s Park Hotel Bangkok, Thailand Transport Events Management Limited Telephone: +(60)-(3)-80235352 Website: thai-ports-shipping
The building a Resilient Supply Chain Summit December 9-10, 2011 New Orlieans International, USA Telephone: +44 (0) 207 375 7167 E-mail: mmuir@eft.com
82 cargotalk JUly 2011
Postal Registration No.: DL (ND)-11/6002/2010-11-12, Licensed to Post without Pre-payment No.: U(C)-272/2010-12 for posting on 25th-26th of advance month at New Delhi P.S.O., RNI No.: DELENG/2003/10642. | https://issuu.com/ddppl/docs/july11 | CC-MAIN-2017-34 | refinedweb | 22,048 | 51.28 |
Hello David,
if you download and include commons-lang3.jar in your classpath Eclipse
will recognize ArrayUtils and allow you to import
org.apache.commons.lang3.
Here is the Javadoc for it:
Greetings
Bernd
BTW: Commons Developers: I do wonder if this would be a good feature for
dbutils. It has currently a RowProcessor, but that works either in
Object[] or needs to map to beans. Returning a simple type array for a
single column might be usefull?
Am Tue, 26 Aug 2014
11:37:12 -0400 schrieb "Kulpanowski, David" <DKulpanowski@leegov.com>:
> Messrs. Worden and Eckenfels:
>
> Thank you both for your kind assistance.
>
> Mr. Worden:
> your solution works perfectly. This is exactly what I am looking for.
>
> Mr. Eckenfels:
> Please excuse my lack of java coding skills. I am working on it by
> taking on projects at my job. I think your solution will work and I
> want to use it in my code because I am now going to use Apache
> Commons Math for more sophisticated statistics such as regression and
> hypothesis testing. For example, is the mean average ambulance
> response time in Cape Coral the statistically significantly different
> from the mean average response time in Fort Myers. I anticipate
> needing your code so I need to ask for additional help:
>
> In the final line of code Eclipse is putting a red underline under
> ArrayUtils.
>
> ArrayList<Double> times = new ArrayList<>();
> while (rset.next())
> {
> times.add(Double.valueOf(rset.getDouble("M_SecondsAtStatus")));
> }
> double timesArray[] = ArrayUtils.toPrimitive(times.toArray());
>
> My mouse hovers over it and the message is: "ArrayUtils cannot be
> resolved". Eclipse offers nine quick fixes:
> 1.) create class ArrayUtils.
> 2.) create constant ArrayUtils
> 3.) create local variable ArrayUtils
> 4.) change to ArgUtils
> 5.) change to Array
> 6.) change to Arrays
> 7.) create field ArrayUtils
> 8.) create parameter ArrayUtils
> 9.) fix project set up
>
> Which one should I use to output my data in a format Apache Commons
> Math will utilize in its functions?
>
>
> -----Original Message-----
> From: Brent Worden [mailto:brent.worden@gmail.com]
> Sent: Tuesday, August 26, 2014 11:00 AM
> To: Commons Users List
> Subject: Re: [math] JDBC output to generate statistical results.
>
> Another alternative is to use a
> org.apache.commons.math3.stat.descriptive.DescriptiveStatistics
> object to collect all the data and then use it to compute the summary
> statistics you need. Using it alleviates the need for doing all
> explicit type casting and conversion:
>
> DescriptiveStatistics ds = new DescriptiveStatistics();
> while(rset.next()) {
> int observation = rset.getInt("M_SecondsAtStatus");
> ds.addValue(observation);
> }
>
> System.out.println("min: " + ds.getMin());
> System.out.println("max: " + ds.getMax()); ...
>
> HTH,
>
> Brent
>
>
> On Tue, Aug 26, 2014 at 9:41 AM, Bernd Eckenfels
> <ecki@zusammenkunft.net> wrote:
>
> > Hello,
> >
> > First of all: Your DBMS might have SQL methods to calculate typical
> > aggregates. This is not only easier to program, but also most
> > likely faster and less resource intensive than doing it in an extra
> > application.
> >
> > But since this is the commons list: If You want to use the Commons
> > Math functions you have to present the set of values (in your case
> > as an array). And since there is no adapter for result sets (I
> > think) building the array would be done inside the loop. The most
> > natural thing is to use an ArrayList to append the values in the
> > loop, but then you have to convert the resulting Double[] into
> > double[]. The ArrayUtils in Apache Commons Lang could do that (but
> > if you need to process millions of numbers it is not the most
> > efficient way to do it).
> >
> > untested:
> >
> > ArrayList<Double> times = new ArrayList<>();
> > while(rset.next()) {
> > times.add(Double.valueOf(rset.getDouble(T));
> > }
> > double timesArray[] = ArrayUtils.toPrimitive(times.toArray());
> >
> > And then you can use this array for the Math statistics.
> >
> > Gruss
> > bernd
> >
> >
> > --
> >
> >
> > ----- Ursprüngliche Nachricht -----
> > Von: "Kulpanowski, David" <DKulpanowski@leegov.com>
> > Gesendet: 26.08.2014 15:55
> > An: "Commons Users List" <user@commons.apache.org>
> > Betreff: RE: [math] JDBC output to generate statistical results.
> >
> > Thank you Mr. Ritter:
> >
> > Two issues:
> > 1.) I am attempting to obtain univariate statistics from thousands
> > of ambulance responses. For example, ambulance responses (in
> > seconds) 534, 678, 943, 194 would be a mean of 587 seconds. Not by
> > row, but rather as summary statistics.
> > 2.) It appears that Apache Commons Math is needing a Double value.
> > So I change it as shown below.
> > Note on 2) Even though I am needing summary statistics I move the
> > lines of code into the loop just to see what would happen.I just
> > want to get it to work because it appears the problem is the type
> > of variable (int, double, array).
> >
> > while (rset.next())
> > {
> > double values =
> > rset.getDouble("M_SecondsAtStatus");
> > System.out.println(values);
> > System.out.println("min: " +
> > StatUtils.min(values));
> > System.out.println("max: " +
> > StatUtils.max(values));
> > System.out.println("mean: " +
> > StatUtils.mean(values));
> > System.out.println("product: " +
> > StatUtils.product(values));
> > System.out.println("sum: " +
> > StatUtils.sum(values));
> > System.out.println("variance: " +
> > StatUtils.variance(values));
> > }
> >
> > A red underline in Eclipse shows up and my mouse hovers over it.
> > The error message is the following:
> >
> > "The method min(double[]) in the type StatUtils is not applicable
> > for the arguments (double)"
> >
> > I then change the values variable to double[] as shown below:
> >
> > "double[] values = rset.getDouble("M_SecondsAtStatus");"
> >
> > java doesn't like this either. It gives a red underlined error
> > message: "Type mismatch: cannot convert from double to double[]"
> >
> >
> > I guess this boils down to two questions:
> > 1.) How do I output a double[] array from database output?
> > 2.) How do I output this double[] into a variable that Apache
> > Commons Math will accept?
> > ok, maybe three questions:
> > 3.) Other people are using Apache Commons Math to understand their
> > database data better. How are they doing it? A lot of guys have
> > massive mainframe databases filled with health care data etc. They
> > are doing sophisticated math with their data. How are they doing it?
> >
> > -----Original Message-----
> > From: Benedikt Ritter [mailto:britter@apache.org]
> > Sent: Tuesday, August 26, 2014 9:15 AM
> > To: Commons Users List
> > Subject: Re: [math] JDBC output to generate statistical results.
> >
> > > In you're code the variable values is defined within the scope of
> > > the
> > while loop.
> >
> > D'oh worst of typos... should be "in your code" of corse ;-)
> >
> > 2014-08-26 15:13 GMT+02:00 Benedikt Ritter <britter@apache.org>:
> >
> > > Hello David,
> > >
> > > the problem you're encountering is a problem with scopes. A
> > > variable is only available in the scope it was defined. In you're
> > > code the variable values is defined within the scope of the while
> > > loop. This means, that the variable is only defined between the
> > > curly brackets of
> > the while loop.
> > >
> > > Your System.out statements try to access the values variable,
> > > which is no longer accessible, since the flow of control has
> > > already left the scope it was definied in (by finishing the
> > > iteration over the ResultSet).
> > >
> > > What you need to do is move the other System.out statements into
> > > the loop like so:
> > >
> > >
> > >));
> > > }
> > >
> > >
> > > This way statistics will be printed for each row in the result
> > > set.
> > >
> > > Regards,
> > > Benedikt
> > >
> > > P.S.: Jakarta is an old name, that is not used any more. The name
> > > of the project now is simple Apache Commons and you're using
> > > Apache Commons
> > Math.
> > >
> > >
> > > 2014-08-26 15:03 GMT+02:00 Kulpanowski, David
> > > <DKulpanowski@leegov.com>:
> > >
> > > Using jdbc I am querying my database of ambulance response times.
> > > My goal
> > >> is to take the output and process it into statistics using
> > >> Jakarta Commons Math library. So far I am successful in querying
> > >> my database and outputting the response times to the console. My
> > >> next step is to process this output statistically, such as mean,
> > >> medians, mode, etc.
> > This is where I am stuck.
> > >> What I can't figure out is how to get my database output into a
> > >> format for Commons Math to generate a statistical analysis. In
> > >> other words, I have
> > >> 100,000 ambulance responses, now I want to do more advanced
> > >> statistical analysis with this data.
> > >> Shown below is my code.
> > >>
> > >> package javaDatabase;
> > >>
> > >> import java.sql.*;
> > >> import org.apache.commons.math3.stat.StatUtils;
> > >>
> > >> public class javaConnect4
> > >> {
> > >> public static void main(String[] args)
> > >> {
> > >> Connection conn = null;
> > >> Statement stmt = null;
> > >> try
> > >> {
> > >> conn = DriverManager
> > >>
> > >>
> > .getConnection("jdbc:sqlserver://myServerAddress;database=myDatabase;i
> > ntegratedsecurity=false;user=myUser;password=myPassword");
> > >> stmt = conn.createStatement();
> > >> String strSelect = "SELECT M_SecondsAtStatus
> > >> FROM MManpower WHERE M_tTime > 'august 25, 2014' AND M_Code =
> > >> 'USAR'";
> > >>
> > >> ResultSet rset =
> > >> stmt.executeQuery(strSelect);
> > >>
> > >>));
> > >>
> > >> } catch (SQLException ex)
> > >> {
> > >> ex.printStackTrace();
> > >> } finally
> > >> {
> > >> try
> > >> {
> > >> if (stmt != null)
> > >> stmt.close();
> > >> if (conn != null)
> > >> conn.close();
> > >> } catch (SQLException ex)
> > >> {
> > >> ex.printStackTrace();
> > >> }
> > >> }
> > >> }
> > >> }
> > >>
> > >>
> > >> An error message pops up in Eclipse and the variable "values" is
> > >> red underlined; "values cannot be resolved to a variable".
> > >> I am not sure how to get this to work.
> > >> I don't understand how to output my ambulance response times
> > >> from the database into something Apache Commons math will
> > >> understand. How can I get Apache Commons math to take the output
> > >> from my database and generate a statistical result?.
> > >>
> > >>
> > >> NOTES:
> > >> 1.) I have cross-posted this question on StackOverflow.com but
> > >> have not resolved the issue.
> > >> 2.) I have verified that Apache Commons Math is registered in my
> > >> project by hand coding a small array and using Commons Math to
> > >> generate
> > statistics.
> > >> So Apache Math works and my database output goes to the console
> > >> window, so it works also. But how do you get them to work
> > >> together? 3.) I am a geographer, not a computer programmer.
> > >> Believe me, you cannot make it simple enough. Please be explicit
> > >> in your answers.
> > >>
> > >> David Kulpanowski
> > >> Database Analyst
> > >> Lee County EMS
> > >> PO Box 398
> > >> Fort Myers, FL 33902-0398
> > >> 239-533-3962
> > >> DKulpanowski@Leegov.com
> > >> Longitude: -81.861486
> > >> Latitude: 26.528843
> > >>
> > >>
> > >> ________________________________
> > >> | http://mail-archives.apache.org/mod_mbox/commons-user/201408.mbox/%3C20140826183657.000010d2.ecki@zusammenkunft.net%3E | CC-MAIN-2017-47 | refinedweb | 1,600 | 58.99 |
- What Is a Syntax?
- A Basic Example of Making a Custom Syntax
- Syntax Customization
- Syntax return types
What Is a Syntax?
A syntax is essentially a macro(a single instruction that expands automatically into a set of instructions to perform a particular task).
During compile-time, anywhere the syntax is used will be swapped with the syntax's return value. But rather than getting a return that you use in code (an expression) like a normal method or function(a.k.a. an expression), you get a return that becomes the code (a syntax). The syntax code won't be re-evaluated again until a change is compiled.
A Basic Example of Making a Custom Syntax
use cm.syntax; /** * Run code. */ { synRun(); } /** * Run example syntax. */ public void synRun() { exSyn } /** * Call this from inside the syntax. */ public void callFromSyntax() { pln("Call from syntax"); } /** * Example syntax. */ public statement exSyn { pln("Compile Syntax..."); SStatement b = block { callFromSyntax(); pln("I'm in a Statement"); }; pln("Block of statement to run: ", b); return b; }
Compiled output:
Compile Syntax... Block of statement to run: { (callFromSyntax()); pln("I\'m in a Statement"); }
Executed output:
Call from syntax I'm in a Statement
Notice that there are two different outputs here. The compiled output is the output shown after the example code is compiled(Ctrl-Alt-U). Whereas, the code execution output is shown when the code is executed(Ctrl-Alt-P). This demonstrates that the compiler saves the syntax keyword (
exSyn) during compilation into the compiler's memory, so when you execute it,
exSyn will return the statement(SStatement), which is the block of code you want to execute.
The syntax can also be specified on what interface it can be used, i.e. public, package, private. This is specified at the beginning of the syntax definition.
Resulting in essentially the
synRun function be executed as:
/** * Run example syntax. */ public void synRun() { callFromSyntax(); pln("I'm in a Statement"); }
In this example, having
exSyn as a custom syntax seems off, especially without the semicolon
;. So keep in mind to try making custom syntaxes as readable and intuitive as possible by using syntax customization.
Syntax Customization
We can further customize the custom syntax by adding combinations of signature(s) and argument(s).
Signatures
Other than the keyword of the syntax, signatures are character(s) that form the other parts of the custom syntax to be recognized by the compiler.
use cm.syntax; /** * Run code. */ { synRun(); } /** * Run example syntax. */ public void synRun() { exSyn !s.s; exSyn ! s.s ; exSyn ! s.s
; } /** * Example syntax. */ public statement exSyn '!' "s.s" ';' { SStatement b = block { pln("Executing block of code..."); }; return b; }
Output:
Executing block of code... Executing block of code... Executing block of code...
Focusing on the custom syntax creation first, you can observe all the three signatures
!,
Y, and
s.s are placed AFTER the keyword,
exSyn, and must be placed as such.
When the custom syntax is in use, the space(s) and new-line(s) between the keyword and signatures are arbitrary. As long as the keyword itself and each signature(s) match, the compiler will recognize this as one custom syntax. Meaning this would not be recognized as the custom syntax:
exSyn ! s. s;
Arguments
Just like functions, arguments can be passed in for determining the compilation behavior and/or execution behavior of a syntax. The arguments must be a syntax type.
/** * Run code. */ { synRun(); } /** * Run example syntax. */ public void synRun() { SynClass classy(); argDemo classy int myInt = 68; argDemo myInt argDemo "a String" } /** * Argument demo syntax. */ public statement argDemo @e=expr { pln("Compile: e = ", e; e.type; e.literalStr); SStatement b = block { pln("Execute: e = ", @e); }; return b; } /** * Example class. */ public class SynClass { public constructor() { } }
Compiled output:
Compile: e = classy, SynClass, null Compile: e = myInt, int, null Compile: e = "a String", str, a String
Executed output:
Execute: e = SynClass(8) Execute: e = 68 Execute: e = a String
Similar to function arguments, while passing in an argument, you have to specify the argument's name and the argument's data type. In this example,
@e is the argument name, and
expr is the argument's syntax type. The argument name must have
@ in front.
expr is the shorthand for the
SExpr syntax type which is a subclass of the
Syntax class.
Usage of the argument during compilation is expressed as a syntax type and must be used without
@. Which is why it can use methods in the
SExpr class(type, literalStr).
During execution, the argument is passed into a statement and will transform the argument to it's the actual data type and must be used with
@.
68 turns into an integer,
a String turns to a string, classes will turn to its class object, etc.
Here is a list of syntax types that can be passed in as a syntax argument:
- actualArg (SActualArg)
The exact same as
SExprbut it can also hold and process keywords of expressions. A sequence member of
SActualArgList.
- actualArgList (SActualArgList)
A sequence of
SActualArgs. Already includes the close brackets and semicolon(args); as signatures.
- export (SExport)
To indicate the interface(public, package, private) of said member(methods and fields of a class) or definition(class) to return. Does not need to specify the syntax type, but must be placed in between the return type and syntax keyword.
Example:
public member @visibility exampleSyntax { return member{}; }
- expr (SExpr)
Syntactic entity that can be evaluated to determine it's value. Which are primitive data types like integer, boolean, etc. or composite types like sets, maps, classes, etc.
- formalArgList (SFormalArgList)
Just like
SActualArgsbut can evaluate argument data types as
STypesand can retrieve it's
SId.
- id (SId)
Usually used as the argument to pass a name/identifier to an expression you want to return.
- src (SSrc)
This can be thought of as the source code itself.
- statement (SStatement)
Line(s) of executable code.
- type (SType)
Used to pass in a data type of an expression. Can be primary or composite data types.
There are also special operators you can use in combination with arguments:
- [ syntax type ]? Optional argument
This gives the syntax the option to not need this argument passed in, any combinations of signatures can also be added into the square brackets.
/** * Run example syntax. */ public void synRun() { exSyn; exSyn(true); } /** * Example syntax. */ public statement exSyn @expr=['(' expr ')']? ';' { return statement { }; }
- list[ syntax type, signature ]+ Sequence of arguments
This gives the syntax the ability to append multiple arguments of the same syntax type into one parameter to use in the syntax. A signature must be placed after the syntax type to indicate what signature is used to separate arguments. The
+can also be swapped with a
*, and have no difference.
/** * Run example syntax. */ public void synRun() { exSyn true with true with 1 } /** * Example syntax. */ public statement exSyn @expr=list[expr, "with"]+ { return statement { }; }
Syntax return types
There are only a few return types a syntax can return, and all of these are also related to the
Syntax class. These are expressions, statements, member, definition, and syntax.
Expression (SExpr class)
An expression is a syntactic entity which we can evaluate data from. Which is basically what usual functions and methods return.
/** * Run code. */ { synRun(); } /** * Run example syntax. */ public void synRun() { int syntaxValue = exprDemo; pln(syntaxValue); } /** * Expression return syntax demo. */ public expr exprDemo { return expr { int asdf = 98 }; }
Output:
98
Statement (SStatement class)
A statement is a line of code that expresses some action to be carried out. As a syntax, it means you are returning line(s) of statements that carry out the action.
/** * Run code. */ { synRun(); } /** * Run example syntax. */ public void synRun() { statementDemo } /** * Statement return syntax demo. */ public statement statementDemo { return statement { pln("Do statement stuffs."); pln("and then...."); }; }
Output:
Do statement stuffs.
and then....
Member (SMember class)
A member is defined here as a field or method that is part of a class. The interface/export(
SExport) name must be placed between the return type and syntax keyword.
/** * Example empty class. */ public class ExpClass { /** * Blank constructor. */ public constructor() { } /** * Blank method. */ extend public void someMethod() { } } /** * Overriden example class. */ public class OverriddenExpClass extends ExpClass { /** * Use the member syntax. */ public memberDemo } /** * Statement return syntax demo. */ public member @visibility memberDemo { return member { /** * Syntax appended integer field. */ @visibility int syntaxInteger = 68; /** * Syntax overriden method. */ @visibility void someMethod() { pln("syntax has overriden this method"); } }; } /** * Run code. */ { OverriddenExpClass oExpClass(); pln(oExpClass.syntaxInteger); oExpClass.someMethod; }
Output:
68 syntax has overriden this method
Definition (SDefinition class)
The definition of a class itself.
/** * Example class. */ public class ExpClass { /** * Blank constructor. */ public constructor() { } /** * Blank method. */ extend public void someMethod() { } } /** * Use definitionDemo syntax. */ definitionDemo /** * Statement return syntax demo. */ public definition definitionDemo { definition { public class OverriddenExpClass extends ExpClass { /** * Syntax appended integer field. */ public int syntaxInteger = 68; /** * Syntax overriden method. */ public void someMethod() { pln("syntax has overriden this method"); } } }; } /** * Run code. */ { OverriddenExpClass oExpClass(); pln(oExpClass.syntaxInteger); oExpClass.someMethod; }
Output:
68 syntax has overriden this method
Syntax (Syntax class)
This can only be used as a syntax definition's argument. Its return type can be any syntax type or composite data type(ie. sets, class).
/** * Run code. */ { synRun(); } /** * Run example syntax. */ public void synRun() { pln(doubleOrNothing 12 x 2); pln(doubleOrNothing 4.25 x 2); pln(doubleOrNothing); } /** * Double or nothing syntax */ public expr doubleOrNothing @num=multX2 { if (num.type in {int, double}) { return expr { @num }; } return expr { "NOTHING" }; } /** * Double or nothing syntax arugument. */ public syntax SExpr multX2 { @expr=expr 'x' '2' { return expr { @expr*2 }; } empty { return expr { null }; } }
Output:
24
8.5
NOTHING
In this example, we create a special syntax argument called
multX2. If an expression is passed in with the character
x and then
2, it will multiply the expression by 2 and return the integer as a
SExpr. However, if nothing is passed in, then it'll return an empty
SExpr.
Then in the
doubleOrNothing syntax, if the return's type from
multX2 is an integer or a double, it'll return the number, else it'll return a "
NOTHING" string.
Please sign in to leave a comment. | https://support.configura.com/hc/en-us/articles/360049747554-How-to-Create-Custom-Syntaxes | CC-MAIN-2021-39 | refinedweb | 1,660 | 50.33 |
I want a generator that cycle infinitely through a list of values.
Here is my solution, but I may be missing a more obvious one.
The ingredients: a generator function that flatten an infinitely nested list, and a list appendant to itself
def ge(x):
for it in x:
if isinstance(it, list):
yield from ge(it)
else:
yield(it)
def infinitecyclegenerator(l):
x = l[:]
x.append(x)
yield from ge(x)
g = infinitecyclegenerator([1,2,3])
next(g) #1
next(g) #2
next(g) #3
next(g) #1
next(g) #2
next(g) #3
next(g) #1
...
You can use
itertools.cycle to achieve the same result
Make an iterator returning elements from the iterable and saving a copy of each. When the iterable is exhausted, return elements from the saved copy.
Emphasis mine. Your only concern about memory would be saving a copy of each item returned by the iterator.
>>> from itertools import cycle >>> c = cycle([1,2,3]) >>> next(c) 1 >>> next(c) 2 >>> next(c) 3 >>> next(c) 1 >>> next(c) 2 >>> next(c) 3 | https://codedump.io/share/ccILKmu4WkIh/1/a-neat-way-to-create-an-infinite-cyclic-generator | CC-MAIN-2017-13 | refinedweb | 179 | 62.88 |
[I don't have to go out on a limb to acknowledge that is the worst article I've ever written. I wrote it in the wee hours one morning in a rotten mood and it shows. There are far too many absolutes that should have been qualified and the writing style is too aggressive for no good reason. I'm not taking it down because there are worthy comments, and I refuse to try to pretend it never happened. But I absolutely regret writing this article in this way. If you choose to read this, use a large sanity filter and look at some of the comments and the follow-up for qualifications to help see what I'm getting at.]
[This article is in response to an earlier posting which you can find here].
[Some less than charitable and totally unnecessary text removed. I blame myself for writing this at 2:30am. It was supposed to be humorous but it wasn't.]
There is an argument to be made here, but there is also a great deal of ignoring of the real issue going on here.
Let’s actually go about doing the job of properly attacking my position the way I think it should be attacked, shall we?
I start with some incontrovertible facts. Don’t waste your time trying to refute them, you can’t refute facts. You can have your own opinion, but can’t have your own facts.
The relevant facts are these:
-the same algorithm coded in 64-bits is bigger than it would be coded in 32-bits
-the same data coded for 64-bits is bigger than it would be coded in 32-bits
-when you run the same code, but bigger encoding, over the same data, but bigger encoding, on the same processor, things go slower
-any work I can possibly do has an opportunity cost which will mean there is some other work I can’t do
All righty, it’s hard to argue with those.
Now let’s talk about the basis I use for evaluation.
-I get points for creating a great customer experience
-I get no points for using technology X, only for the experience, using fewer technologies for the same experience is better than using more
-I get no points for using more memory, not even enabling the use of more memory, only for the experience, using less memory for the same experience is better than using more
OK, so in short, I begin with “64-bits gets no free inherent value, it has to justify itself with Actual Benefits like everything else.”
We cannot make a compelling argument with fallacies like “32 bits was better than 16 therefore 64 must be better than 32”, nor will we get anywhere with “you’re obviously a short-sighted moron.”
But maybe there is something to learn from the past, and what’s happened over the last 6 years since I first started writing about this.
For Visual Studio in particular, it has been the case since ~2008 that you could create VS extensions that were 64-bits and integrate them into VS such that your extension could use as much memory as it wanted to (Multi-process, hybrid-process VS has been a thing for a long time). You would think that would silence any objections right there -- anyone who benefits from 64-bits can be 64-bits and anyone who doesn’t need 64-bits can stay 32-bits. It’s perfect, right?
Well, actually things are subtler than that.
I could try to make the case that the fact that there are so few 64-bit extensions to VS is proof positive that they just aren’t needed. After all, it’s been nearly 8 years, there should be an abundance of them. There isn’t an abundance, so, obviously, they’re not that good, because capitalism.
Well, actually, I think that argument has it exactly backwards, and leads to the undoing of the points I made in the first place.
The argument is that perhaps it’s just too darn hard to write the hybrid extensions. And likewise, perhaps it’s too darn hard to write “good” extensions in 32-bits that use memory smartly and page mostly from the disk. Or maybe not even hard but let’s say inefficient –from either an opportunity cost perspective or from a processor efficiency perspective; and here an analogy to the 16-bit to 32-bit transition might prove useful.
It was certainly the case that with a big disk and swappable memory sections any program you could write in 32-bit addressing could have been created in 16-bit (especially that crazy x86 segment stuff). But would you get good code if you did so? crapola.
Do we have a dearth of 64-bit extensions because it’s too hard to write them in the hybrid model?
Would we actually gain performance because we wouldn’t have to waste time writing tricky algorithms to squeeze every byte into our 4G address space?
I don’t have the answer to those questions. In 2009 my thinking was that for the foreseeable future, the opportunity cost of going to 64-bits was too high compared to the inherent benefits. Now it’s 2016, not quite 7 years since I first came to that conclusion. Is that still the case?
Even in 2009 I wanted to start investing in creating a portable 64-bit shell* for VS because I figured the costs would tip at some point.
I don’t work on Visual Studio now, I don’t know what they’re thinking about all this.
If there’s a reason to make the change now, I think I’ve outlined it above.
What I can say is that even in 2016, the choice doesn’t look obvious to me. The case for economy is still strong. And few extensions are doing unnatural things because of their instruction set – smart/economical use of memory is not unnatural. It’s just smart.
*the "Shell" is the name we give to the core of VS (what you get with no extensions, which is nearly nothing, plus those few extensions that are so indispensable that you can't even call it VS if you don't have them, like solutions support -- that's an extension]
The reason to change at least something is to let VS be able to appropriately work with 64-bit software developed inside VS, such as the Designer (WPF), which is a major problem for us so we have to have 32-bit version as well as 64-bit even though we only use 64-bit in production for other reasons.
Would it still be the same code though? Doesn't x64 give you more registers? Possibly offering better optimizations?
Damien that's really the point. Sometimes it isn't the same code.. But then the encode length of the 64 bit instructions with the registers is also worse…
So, ya, YMMV, but mostly those registers don't help big applications nearly so much as they help computation engines.
Back in VS2010 days, I would have thought that 64-bits was really necessary, as my IDE was crashing every 2 hours or so. Every time you installed a new extension, you'd have to ponder whether the productivity gain provided by the extension outbalanced the reduced time-to-crash. For instance, removing Resharper improved stability great deal, but restarting VS every 5 hours instead of every 2 hours wasn't worth losing all the nice features.
Since VS2012, things have improved great deal, and I've got to say that I rarely ever see my VS2015 crashing. So, whatever it was, I guess 32/64 bits wasn't the real issue after all.
"because capitalism."
I think you mean "because of the free market". Capitalism isn't the same thing, and it's quite possible to have either of the two without the other.
I thought I meant capitalism… Meaning if was good people would use it to make a buck.
You know I've been thinking about my 2nd article since I wrote it a few hours ago. And maybe I shouldn't be writing things at like 3am but anyway. I think I can net it out pretty much like this:
If you find yourself running out of space you are going to be in one of two situations:
1 If you stop doing some stupid thing you will fit fine into 32 bits of address space.
OR
2 If you start doing some stupid thing you will fit fine into 32 bits of address space.
In 2009, the situation in VS was definitely #1.
The question is, is that still the case in 2016? Because if it isn't then #2 really shouldn't be countenanced.
I have removed an early paragraph because rather than being funny it was actively detracting from the article and it was, to quote a redditor, "Not very charitable."
I'd suspect that progressing technologies that have made the development process more data-intensive like Roslyn keep longer term copies of the AST around, and IntelliTrace keeping sizable debugging history have made the case for greater address space size a lot stronger since you last visited the topic, but I don't know that I'm personally convinced the tipping point for needing a 64 bit VS has been reached *yet*.
But if you ask me, moving toward a multi-process model makes a lot more sense for VS than simply just going 64 bit. Maybe not if you look at it through a solely perf-based lens, but for reliability, extensibility, and testability reasons continuing to put the application's eggs all in one basket of a process makes less sense over time as complexity invariably increases. It'd also indirectly address one of the needs for VS to even be 64 bit by relaxing the stress on a single address space; and if some theoretical future feature needed 64 bitness for whatever reason, it alone could make that decision without having to justify dragging the rest of the product along with it.
That was certainly how I felt in 2009. I'm not sure in 2016. But then I'm not on the team so how would I even know anymore 🙂
I do think hybrid has a lot of merits but it's not right for everything and the need to have more and more of the AST loaded seems to be only growing. Though frankly I wish we could do this with less of that tree actually resident.
One commenter on the other article mentioned Resharper. Is the issue just that the Resharper library wasn't done in the best way?
One thing to note about your 16->32 analogy is that back in the 16 bit days, all but the most trivial code (or, in some cases, really optimized code) usually kept 32-bit pointers. You could write code using 16-bit pointers, but there's a reason that "lp" was probably the most used Hungarian prefix.
Then again, I mostly went from 8-bit embedded Z-80s to 32-bit Windows without doing much in the way of 16-bit at all. When I did write 16-bit code, I did not spend a lot of effort tweaking it for performance (trying to keep it readable was a larger goal).
There's one big aspect of the discussion that's missing so far in these posts, I think. You profess to be unaware of what Visual Studio is doing since you left, but I think you know about Roslyn, yes? Rewriting the C# compiler entirely in C#, using a C# object model for the entire program representation?
That's as pointer-rific and "wooden" (to cite your "wood vs marble" analogy from blogs.msdn.com/…/coding-in-marble.aspx ) as you can get. BUT, it is also a very productive way to program — you have the full power of the idiomatic language, your debugger natively understands all object relationships, the GC is handling all your memory management, etc.
You know and I know that our work in Midori to facilitate the "marble" pattern had a lot of complex code to facilitate interop between the "wooden" world of pointerful objects on the heap (natively supported by C#), and the "marble" world of tabular, dense object representations (requiring a bunch of custom C# libraries and carefully structured types). The "marble" world was a lot of work to build and was fairly rocket-science-ful.
So, to bring this back to the 64-bit analogy: one big advantage of pointer-ful OOPy code is that it's very well supported by the language. But if your object graph grows larger than 4GB, then currently 64-bit is your only hope. Of course you could easily wind up dead anyway if you are traversing huge regions of that graph… but maybe you're not; maybe you're pointer chasing in a relatively structured way, and you're not thrashing your cache to death.
I think if we really believe that denser object representations are a good thing, we need to do a lot more to make them easier for programmers to use. In the limit, I would like to have a language that let me almost entirely abstract away from object representations — I can just write "Foo foo = new Foo();" regardless of whether Foo is made of wood or marble. And I would like to be able to say "Bar bar = new Bar(foo);" in the same way, with seamless interop from the programmer's perspective.
Until we do this, I think the "wood or marble" choice is almost always going to wind up being wood for most programmers, because that's what the language facilitates.
There are some interesting experiments in transforming object representations "under the hood" of the language — for example, infoscience.epfl.ch/…/ildl.pdf and…/MattisHenningReinHirschfeldAppeltauer_2015_ColumnarObjectsImprovingThePerformanceOfAnalyticalApplications_AuthorsVersion.pdf — I don't believe either of these are production-ready by any means, but they point towards what I think is needed to really make marble popular 🙂
In other words: if marble were easier, people would want 64 bits a lot less!
Addendum: looks like the "wood to marble" post wasn't making the point I remembered it making. Your "LINQ to Memory" idea is more like what I meant: blogs.msdn.com/…/quot-linq-to-memory-quot-another-old-idea.aspx — imagine that this was a real facility of the language, and that the language enabled you to put [Dense] attributes on classes that you wanted to store in a columnar representation….
@TFries
A couple of years ago I would have wanted VS to move to a multi-process module for reliability but it seems to crash far less often. I don't know if the plugins have improved or they're just less able to do damage when something goes wrong.
As an actual developer of VS extensions (and addins, remember these?) and a guy who routinely deals with data which cannot fit into the ram of the computer, let alone in the less than 1 GB heap of the 32-bit VS I strongly object to the "facts" you mention.
1. the same data coded for 64-bits is bigger than it would be coded in 32-bits – except when the data is text. One of my tasks some years ago was to find a needle in a haystack, where the haystack was in the form of 160 GB text file. Recompiling my program for 64bit gave me almost 10% speed boost. So as all performance related tasks, giving blanket statements like "64bit code and data are larger, therefore, slower" is wrong. The right approach is "measure for your case and decide".
2. The actual issue with VS is not how much memory it has, but the heap fragmentation. Considering everything loaded in-proc, VS has around 900 MB of heap and after that it goes into crash-happy mode. Granted, in recent versions things have changed and I've seen it with 1.1-1.2 GB working set.
3. Nowadays projects are a bit more complex than in 89. You said: " In 1989 the source browser database for Excel was about 24M. The in-memory store for it was 12k." Well, in 2016, my solution may consist of a project written in TypeScript and node.js (the server backend), Java (android client), objective-c (iOS client), and WPF – the desktop client. it may also have C++ bits for speed-critical code. Every project in my hypothetical solution has their intellisense, debugger, profiler etc. This is before I start the SQL tools because, well, I need to design my data. So, the requirements have changed and whatever worked in 89 is no longer applicable.
4. process boundaries are very expensive to cross in Windows API. Every cross-process communication is very complex to design and implement and slow when executed. That's why even VS cheats. For example, the WPF designer is completely stand-alone program, which just creates a Win32 window as a child of the win32 window created for a WPF-based tool-window inside VS and call it a day. A neat hack, which is well-supported by the underlying Win32 API, completely unsupported by WPF and gets the job done.
Continuing my rant:
5. "I could try to make the case that the fact that there are so few 64-bit extensions to VS is proof positive that they just aren’t needed." No. People has to deal with the bugginess and API changes of VS. For example, we – the VS extension developers, have to ship addins for older versions of VS (we actually have paying customers so the MS excuse "we do not support these versions" does not apply). We have to supply extensions. We want the code between these to be shared. We want our code not to crash randomly. The most peculiar bug of the VS extensibility API I've found so far is that when asking for the color of the text in the text editor through the documented APIs used to result in a hard crash of VS in like 5% of the calls. We had to code around instabilities like this for decades. And no sane person would build upon a flashy new multi-process APIs knowing well that even way simpler VS API are unstable or crash VS right away and they are unsupported for other version of VS one need to support. So, we put everything in-proc, and hope that the next SP of VS does not break our code AGAIN (happened more than once).
Additional obstacles were that the documentation for the VS SDK was poor and insufficient and hard to access. Also, the engagement of the VS team with the extension writes can be described as "non-existent". Note: around VS 2012 I stopped actively developing for VS but keep talking to people who does. Their opinion is that things have changed for better.
To wrap up my rant, the fact that there are little 64-bit VS extensions does not mean that they are not needed. It means there are legitimate technical and business obstacles for writing them.
@teo: In general I agree (though I didn't have an experience like your number 1 yet); but I think number 3 is not fair and incorrect. "So, the requirements have changed and whatever worked in 89 is no longer applicable." Things haven't changed that much, because all of the mentioned code need not run in a single process. As Rico said, pretty much always there's a big bunch of data that you need to have (24M was huge in '89, like say ~16TB today), but you only need a comparatively small index for many/most of the relevant things the user does regularly.
Most important in engineering is to find the balance and bucket your data accordingly.
Your point about 2009 vs 2016 is super important. I'm following your blog since then (at least), and I was ok with VS staying 32b in 2009, I mean I understood it perfectly. But now is the time for things to change. VS has become this bloated memory eater. VS has to force out-of-process model at least for some extension categories, just like Windows Explorer did with shell extensions. And this is really a VS remark, not a general 64b vs 32b remark – maybe you picked the *wrong* example 🙂
I can back up a lot of what @teo said.
I've been working on VS plug-ins since VC6 days, and keeping up with breaks in the VS interfaces is tricky enough that having a 64 bit version of our VS plug-in to support is quite simply not something we'd want to do unless it was necessary or our customers really needed it. Note that the code itself is already 64 bit ready for the most part as we already have a 64 bit version of the same plug-in for Eclipse by necessity (Eclipse has both 32 and 64 bit versions).
Historically at least every second version of Visual Studio seems to bring painful surprises somewhere in the plug-in interfaces. For example, we had to make rather major changes to integrate into VS2005 (broken command bar and VCProjectEngine interfaces), VS2010 (broken command bar interfaces – again!), VS2012 (theming) and VS2015 (removal of add-in support). Some of those changes were unannounced or were sprung on the wider community by the VS team with little warning (I'm looking at you, "Introducing the New Developer Experience" – blogs.msdn.com/…/introducing-the-new-developer-experience.aspx) so when a new version of VS arrives, the first question we have to answer is always "how big a headache will this one cause us?".
Against that background quite frankly adding a 64 bit build to the mix would be a distraction we could do without, but we'll do it if we have to.
2 points are missing – how will you develop and debug a 64 bit WPF application using 64 Bit native Dll's or by not using CPU Any compilation with a 32 bit Visual Studio version?
Additionally, while it's true that commonly memory isn't an issue, some type of applications need not the physically memory itself, but they need the larger address space of virtual memory.
Besides your statement "the same data coded for 64-bits is bigger than it would be coded in 32-bits" is IMHO not quite correct – why should a data structure formerly compiled to 32 bit now be larger under 64 bit? Or which kind of data do you mean?
Having a program that runs in 64-bit mode does not mean that it automatically uses more memory, simply by virtue of being 64-bit. Even when building a 64-bit program, an int, the most commonly used data type, is still only 32 bits on Visual C++. The only thing that implicitly gets bigger is pointers.
All your points are irrelevant in 2016. The same code and data are bigger in 64-bit… so what? This is the era where most home users use at most 10% of their massive 6Tb hard drives, and it's typically with media. The same code and data but encoded bigger run slower on the same processor, doesn't hold water either since modern CPUs are designed and optimized for 64-bit code execution. That's 64-wires across the CPU's die hitting 64 transistors in parallel in 64-bit solid state logic circuits. In fact, most cases prove the opposite to be true… 32-bit code across a 64-bit processor runs slower because the processor considers it a special condition. Here's a question… walk through your office and see how many PCs are running a 64-bit operating system. Why slow down my programming by piping it through yet another layer of conversion to go from 32-bit code to run on my modern 2016 64-bit PC or server? Wouldn't the "fastest" code be going through as few useless platforms, scripting engines, VMs and conversion layers as possible?
I used to long for a 64 bit version on Visual Studio, because I had to install two Oracle clients, 32 and 64 bit. The 32 bit Oracle client was there just to run debug in VS, the 64 bit for Toad and production apps. Oracle has since released a fully managed client that is bit depth agnostic and we jumped onto that so fast your head would spin. So now, I really don't care.
Having said that, the point is that being 64 bit across the entire stack offers advantages and less problems. I still have 99 problems but getting the bitness right ain't one of them. And I used to have to go around and fix it for my people as well. Overall the cost of maintenance is higher with mixed 32 and 64 bits.
As I am not an internals specialist as many of the people responding here but an application developer, I still feel as if I understand much of what is being said in these responses.
However, we should note that with each successive increase in power of our micro-processors, coding becomes sloppier since many of the restrictions that imposed tighter coding such as memory constraints are now relaxed, making development increasingly complex for many developers. So many tools and paradigms have become available to application developers that keeping abreast of them all for career security has become an impossible endeavor. This is why I have always advocated a return to the basics with which quality applications can be written as easily as with the more currently available complex tool-sets.
With increases in power there is always a downside, namely that things that couldn't be done before can now be done more easily. The question is, is any of it necessary when we got along fine without the new tools? Not really, since we develop applications to the same foundations as previously. Younger professionals today have simply opted for more comp0lexity believing they are creating better applications in a purer fashion. They are not but the perceptions are there.
If a 32bit Visual Studio will work just as well as a 64bit one in developing quality applications I can see the reticence on Microsoft's behalf in creating a fully compliant 64bit version. In some ways the 64bit version will run faster and in some ways it won't. As always, it depends on what is being done internally as the author of the article suggests. So in the end we have to consider what actual value would a 64bit version of Visual Studio give us? If not that much on the current available architectures than the question is rather moot. However, if and when the underlying architectures change then maybe more concrete arguments can be provided for such a development.
That being said, whether Visual Studio should have a 64bit version or not I leave to the better experts on this thread. However, using a 654bit OS does have its advantages today since we do have more access to memory naturally without the workarounds of years ago. And this does in fact allow us to do good things such as running VMs more efficiently to test out different ideas on different environments or run a variety of OSs while maintaining our preferred one as the host system.
For me it is very simple:
VS running as a 32 bit application has made the development of pure 64 bit applications more difficult for me.
Therefore, a 64 bit VS would be great to have. It just makes things cleaner and requires less workarounds.
This is why I very much hope that Microsoft will offer a 64 bit Visual Studio soon.
Best wishes
Klaus
64 Bit VS would also allow deployment of VS into places where 32 bit code can't run; such as Windows Server without the Wow64 subsystem installed.
I couldn't agree more… unless the amount of the data that needs to be processed exceeds the size that can be handled by 32-bits without requiring endless swapping. 32-bit MS word, for example, chokes on 400 page documents.
Having worked with segmented 16-bit, then 32-bit, then 64-bit, always under the Windows OS, I'm firmly convinced that the introduction of the 64-bit Windows OS, and the requisite support for it in VS, was superbly timed.
Initially, Intel came up with an absolutely *horrible* implementation of 64-bit, then AMD came up with an absolutely *beautiful* implementation of it, Then Intel, to their credit, swallowed their pride and admitted that AMD's implementation was *fantastic*, essentially trashed what they had come up with, and adopted AMD's standard verbatim.
Now, perhaps for the first time in history, we have a simply gorgeous development environment. In 64-bit, if your C++ or assembly functions have four parameters or less, ALL of them are passed in registers. No ifs, ands, or butts. This is a HUGE bonus to efficiency, regardless of how big your memory cache is. I mean, let's face it, an average program has hundreds, if not thousands, of functions, each of which may be called a huge number of times. For four parameters, that's eight writes to the stack, and who knows how many reads from the stack, every time any one of hundreds of functions is called. That's all gone now. Poof. For all intents and purposes, all of the incalculably huge number of (initial) data transactions between every caller and every callee in a 64-bit program is now done INSIDE THE CPU – no memory required. So excuse me for saying so, but I think that's HUGE.
Anyway, I have more to say on the matter, but I'm not even sure I'll be successful posting this, as I'm not a member, and as a general rule, not a joiner either. So here's me attempting to post this..
Well, much to my surprise, my previous post succeeded – kudos..
So anyway – to me, the biggest advantage of using 64-bit is that – how should I put this? – it completely obliterates the restriction that I've always had to live with, and that I've always *hated* having to live with – that I couldn't write a program that could exploit ALL of the memory in my computer.. I mean, what's the point of shelling out hard-earned money for 64 gigabytes of RAM, if I can only actually *use* two or three gigabytes of it at a time? To make the Windows OS look good? To impress that cute girl who lives down the hall?
Thing is, computers were invented to do math, which they do extremely well. And the math is getting really, really interesting. Neural networks, deep learning, genetic algorithms, rule-based emergence, symbolic processing – all of these emerging technologies and advancements, and much, much more, are all fairly accessible on the internet, and can be investigated further on a modern laptop (thanks largely to Moore's law and, oh yeah, the Internet) !!
Oh, but not in 32-bit. Sorry, Charlie. Now get back to stocking the shelves will you?
That's how I felt whenever I wanted to, oh, say, genetically evolve a neural network topology to perform a particular task. Or investigate the properties of a non-trivial two dimensional rule-based automata.
Sorry, Charlie. "You're just a puny program, not an operating system. You can't have access to all of your memory. That belongs to Microsoft. Buh-bye."
With 64-bit, I feel like I own my computer again. It doesn't belong to Microsoft anymore. It belongs to me. If I have 64 gigabytes of RAM, well heck, I'll use it ALL if I need it. And the only thing I need Windows to do is shut up until I'm finished (which it's actually not especially good at doing, BTW, but that's another story).
<continued due to word count limit>..
<continued from previous post>
Of course, I realize that not everyone is interested in exploiting gobs of memory. But they can still use 64-bit. The pointers are twice the size? Well, yes, they are, but if that's a concern at all (in, say, a large linked list for example), you can simply use an integer offset into a pre-allocated (64-bit) memory block instead. Not a big deal. It wouldn't even slow down the processor much, because it would simply translate to an indexed access instead of a non-indexed access, the difference between which at the processor level is trivial compared to the amount of time spent actually retrieving the data..
But even with pointers being twice the size, is that such a high price to pay for *never* having to worry about memory ceilings again?
Or to put it another way, to finally relegate the entire issue of memory availability *out* of the software realm entirely, and finally be able to put it back to where it rightfully belongs, and has always belonged, in the first place – as a simple hardware issue?
Personally, I think that tradeoff is a bargain, even without all the additional benefits that 64-bit programming has to offer. Benefits like always passing parameters on the stack, a non-trivial increase in the number of CPU registers, 128-bit division, etc., etc..
So there it is – my rant. For what it's worth..
Should you disagree in any way, please send all comments to I_Really@Do_Not_Care.com. You will be promptly handled.
Just kidding 🙂 Really. That was a joke.. 🙂
That wasn't all rant at all, thanks writing!
So, what are we, developers/troubleshooters supposed to do when a client sends a 3gb process dump? Because there is a 100% chance of a Out Of Memory Exception poping up and no debbugging happening…
Small test, just for fun:
#include <iostream>
#include <Windows.h>
void main()
{
const int size = 1000000;
const int count = 1000;
int* a = new int[size];
long sum = 0;
unsigned long long tickStart1 = GetTickCount64();
for (int c = 0; c < count; ++c)
{
for (int i = 0; i < size; ++i)
{
a[i] = i % 10;
}
}
unsigned long long tickStart2 = GetTickCount64();
for (int c = 0; c < count; ++c)
{
for (int i = 0; i < size; ++i)
{
sum += a[i];
}
}
unsigned long long tickStart3 = GetTickCount64();
delete[] a;
std::cout << "sum " << sum << "n";
std::cout << "1= " << (tickStart2 – tickStart1) << "n";
std::cout << "2= " << (tickStart3 – tickStart2) << "n";
}
Compiled for 32bit:
3 runs:
1= 1482,1529,1528
2= 281,281,281
Compiled for 64bit
3 runs:
1= 406,343,327
2= 234,188,188
"You can have your own opinion, but can’t have your own facts." So please don't say that 64bit compilation is always slower.
Sadly you're wrong about your "relevant facts". The correct answer is it all depends. A 64-bit processor and application is better (faster, smaller code) for processing 64-bit data elements or larger than an 8, 16 or 32 bit processor because it takes fewer instructions and steps. When using extended instruction sets a 64 bit processor may be able to process more 8, 16 or 32 data elements in parallel than a smaller processor. Running a 32 app on a 64 bit processor and OS such as windows requires switching modes that a 64 bit app wouldn't.
Well, one thing for sure is that with more memory VS won't crash when it approaches 3GB of RAM (as observed in task manager). Clearly a big win.
It's always the case that answer depends but this case we're focusing on the relevant phenomena for large interactive applications. These are going to have different characteristics than computational engines.
If you look back at my article in perf quiz #14 I did experiments to simulate the consequences of pointer growth.
The microbenchmark above illustrates that in the absence of pointers you can do ok. It's one of the dimensions in quiz 14.
Keep in mind I made these assertions for the purpose of illustrating the foundation of the con argument and I then proceed to shoot them down.
However, you won't notice extra memory use if you stay small enough to stay in cache. | https://blogs.msdn.microsoft.com/ricom/2016/01/04/64-bit-visual-studio-the-pro-64-argument/ | CC-MAIN-2017-47 | refinedweb | 6,111 | 67.79 |
Although I don't use Visual Studio daily anymore, I still fire it up now and again to try something out or help with a project. It's a great IDE that's had a few hiccups along the way, but has steadily gotten better. It's a powerful tool for .NET developers, and given that I currently use Atom to develop Erlang I'd give anything for an equivalent IDE. But since I use a Mac primarily, using Visual Studio means firing up Windows in a VM, which is a phenomenal way to turn a Mac into a personal space heater.
So when Microsoft announced the release of Visual Studio for Mac a couple weeks ago, I had to try it out. "Visual Studio" and "Mac" were two things I never thought I'd hear together. (Note that this is separate from Visual Studio Code, an editor more similar to Atom or Sublime than a full IDE.)
A Brief History
When Microsoft introduced the .NET Framework in 2000, they built it against a standard they helped develop called the Common Language Infrastructure (CLI). When some developers wanted something similar to .NET for other systems, they built a new project called Mono to the CLI standard as well.
Years later, after Mono was acquired by another company, the same dev who started it (Miguel de Icaza) created a company called Xamarin that went on to produce a bunch of cross-platform tools for developing mobile apps and stuff. Funnily enough, the company that bought Mono didn't seem to do much with it and he ended up supporting it again. Xamarin got bought out by Microsoft, who hired Miguel and rolled the various Xamarin apps into Visual Studio.
Built on Mono and Xamarin Studio, Visual Studio for Mac was previewed in Nov 2016 and officially released in May 2017. It uses the Roslyn compiler for intellisense and refactoring, and the MSBuild build engine.
Getting Started
Check out the minimum system requirements and then go to the download page.
The installation was pretty straight-forward except that it required I download Xcode too. I imagine a lot of developers using a Mac might already have it, but I didn't. I've found it to be buggy in the past... I can't remember the exact issue I was having but it was something weird about the license and popup reminders and having to run stuff at the command line that kinda fixed it but not really, so I had just uninstalled it. We'll see if it behaves this time.
There are some separate SDKs you can install to support other types of projects too, but you don't need those right away.
First Impressions
So far, from what I've read, two things are true about Visual Studio for Mac. First, it's built as a native Mac app from the ground up, so it shouldn't look like a Windows app shoehorned into macOS. That would have been just ugly. Second, it's built with VS (for Windows) in mind, so the experience should be very similar for devs moving from one to the other.
The latter seems pretty important to me. A familiar interface will drive adoption from traditional Windows users who now use Mac. Since I've used VS for nearly a decade on Windows, but I've been using a Mac nearly exclusively for the last 18 months, I think I'll have a good feel for both.
The Welcome Screen
Firing up VS for Mac for the first time, it definitely feels like a native Mac app. It also feels very crisp and responds quickly. The Mac I'm using is an i7 with 16 GB RAM so ymmv, but it certainly seems to respond well.
Here's a view from Windows and then macOS.
There's a lot to compare even here.
- They both have a "Get Started" section with links to tutorials and documentation.
- They both have a list of recent projects to choose from.
- There's a panel with recent developer news.
- In the upper-right corner, you can see they implemented the global search in the Mac too that makes it easy to find commands, menu items, etc without having to dig through everything. Nice!
- The menu is integrated in the normal macOS way, pinned to the top of the screen and not in the application, but the layout looks very similar to the Windows version. So far so good.
- Organization of available project types is different.
- The Windows version organizes first by language, then by project type.
- The Mac version organizes by project type, then provides a small drop-down that lets you choose the language. The available languages for now are C# and (limited to certain projects) F#, IL and VB.NET.
Let's Try a Console App
Creating a "Hello World" console app is about as simple as it gets in Visual Studio, so we'll try that in both versions and compare the experience.
Windows
Go to
File / New / Project / Visual C# / Console App (.NET Framework) and you'll get the following template:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace vs4win_console1_netframework { class Program { static void Main(string[] args) { } } }
Mac
Go to
File / New Solution / Other .NET / Console Project and you'll get a very similar template.
using System; namespace vs4mac_console1 { class MainClass { public static void Main(string[] args) { Console.WriteLine("Hello World!"); } } }
There's a slight difference in behavior that's interesting. The Mac console displays the above line of code, then "Press any key to continue..." and waits for the user.
In a Windows console app you have to explicitly write that line:
Console.WriteLine("Hello World!"); Console.WriteLine("\r\nPress any key to continue..."); Console.ReadLine();
When I opened the code from macOS in Windows, it ran just fine. When I opened the code from Windows in macOS, it wouldn't run. No error, the app just flickered like it was trying but then.. didn't. I assume it's because the app I created in Windows wasn't targeting .NET Core, which is a special version of the .NET Framework that should be able to run across platforms.
So I created a .NET Core Console App in Windows and tried opening that in the Mac. It made me download the .NET Core SDK (and OpenSSL too, though I'm unsure why). I tried to run it again but got a prompt "An admin user name and password is required to enter Developer Mode." which seems to be an Xcode thing. After I did, Mac ran the .NET Core app created on Windows, so that's awesome.
Now Let's Try a Library
Next I created a library on the Mac to see if the DLL assembly it produced could be shared with Windows as long as we're using .NET Core.
On the Mac:
File / New Solution / .NET Core / Library / .NET Standard Library
Here's a ridiculous method courtesy of xkcd.
namespace vs4mac_dotnet_standard_library { public static class Magicmatics { public static int GimmeRandomNumber() { return 4; // chosen by fair dice roll. // guaranteed to be random. } } }
Now I can open the previous console app on the Mac and try using the DLL produced by building the above library. Once I added a reference to the DLL (right-click "References", select "Edit References", and find the DLL in that project's bin/debug folder on disk), I was able to use it but the IDE was underlining my code and telling me I needed to include a reference to System.Runtime, which doesn't seem to exist by itself in the "references" window.
To fix that issue, I had to add a package (right-click Packages and select "Add Packages"), then search for and add "NETStandard.Library" which is the same package in the "library" project. That added a whole lot of core references like System.Runtime, System.Linq, System.Collections, etc. Now it works and I'm able to do this:
using System; using vs4mac_dotnet_standard_library; namespace vs4mac_console1 { class MainClass { public static void Main(string[] args) { var randomNumber = Magicmatics.GimmeRandomNumber(); Console.WriteLine($"Hello World! Your random number is: {randomNumber}"); } } }
The only other thing I'm curious about is whether this library can be used directly from VS Windows too, without having to rebuild the project. I copied the DLL to the Windows machine and tried referencing it from a Windows app. It worked straight away!
Features
Okay, time to bring this home but let's check out a couple popular features first.
Intellisense
It feels a little bit different, but intellisense appears to be present and accounted for. The interface is clean and easy to read, and you can use the arrow keys to scroll through a method's available overloads.
Static Analysis
As I was typing these short examples, the IDE was underlining code it wanted to point out for various reasons. It usually provided options for me to choose from too, like adding the
static modifier to the class.
Something I really like is how it shows you what the code will look like. Visual Studio has, for a few years now, been more focused on presenting things to the developer inline, instead of making you click around and get popups and leave the current screen. So it's a nice touch that even before I applied the suggested fix (seen below) it was showing me how my code will be modified.
A Few Issues...
There were a few snags while I was playing around, which isn't surprising considering this just came out a couple weeks ago, but maybe listing them here will save someone else's time.
Where's the Code Folding?
It's nice to be able to fold a block of code and get it out of the way. It didn't appear to be in Visual Studio for Mac though! That seemed like a strange omission to me, but it turns out the option is just disabled by default. Go into the "preferences" panel to enable it. Maybe they're not confident about this feature yet? Or do they figure Mac users wouldn't use it?
Crashing Templates
There's a nice toolbox on the side, at least with the project types I tried out, that provides some templates you can drop in. When I tried to use the Exception template that provides the barebones class for you to derive your own custom Exception, the app just crashed. When I reopened, VS recovered the document and the template was there, but the initial adding of it crashes the app every time. I'll have to see if anyone's reported this issue yet.
(update: I ended up reporting it. The team was quick to respond and verified they could recreate the issue, so hopefully that'll get fixed in a future version. 👍)
One Solution at a Time
You can't open multiple solutions in separate windows. The Mac loads an app in a single instance, an incredibly annoying experience because even when an app allows for multiple "windows", like Google Chrome, you can't just command-tab through the windows - it just picks the last one that was active.
Instead you have to jump through a few hoops to load more than one solution:
Open your first solution. Then go to
File / Open and select the second, but don't open it yet! Click the Options button to get a few more selections, one of which is a checkbox to "Close current workspace". Uncheck that.
Then you'll see multiple solutions in the same window:
But none of these issues are major ones... just a few kinks to work out and some UX to get used to (for someone used to using Visual Studio in a Windows environment).
Highlights
There are all kinds of other things to try out and so far I've just scratched the surface, but this post is long enough already.
Here are some other things I'd like to check out in the future:
- Experiment with cross-platform Xamarin.Forms UI library
- Libraries available via NuGet
- Testing, try out the XUnit project type, test out NUnit too
- Try existing extensions like NDepend or ReSharper
- Try more C# 7 features, and F# too
- Play around with tying an app into Azure for the backend
Learn More
For more reading, here's a list of articles I found:
- Introducing Visual Studio for Mac - (Nov 16 2016)
- Connect(); 2016 Keynote - (Nov 16 2016) (skip to 2:16:40 - 2:23:47 Miguel de Icaza, brief tour of IDE)
- Visual Studio 2017 for Mac - (May 10, 2017)
- Introducing Visual Studio for Mac - April 14, 2017
Even More
- Visual Studio for Mac UserVoice - Suggestions
- How to Report a Problem with Visual Studio 2017
***(the help menu within the app has "Report a Problem" and "Provide a Suggestion")***
- Platform Targeting and Compatibility
- Build better apps with Visual Studio for Mac (Xamarin site)
Videos
- Xamarin Evolve 2016: Become a Xamarin Studio Expert - May 3, 2016
- Connect(); // 2016: Introducing Visual Studio for Mac - Nov 16, 2016
- VS2017 Launch: Introducing Visual Studio for Mac - Mar 6, 2017
- Build 2017: Visual Studio for Mac - May 8, 2017
- Build 2017: Visual Studio for Mac and Xamarin Live Player - May 8, 2017
- Build 2017: .NET Core and Visual Studio for Mac - May 10, 2017
- Build 2017: Get started with Unity and Visual Studio for Mac - May 10, 2017
- Visual Studio for Mac - May 17, 2017
- ... other Channel9 videos
If you want to see any of the code snippets I wrote above, they're also available on GitHub. | https://grantwinney.com/a-brief-tour-of-visual-studio-for-mac/ | CC-MAIN-2019-18 | refinedweb | 2,279 | 70.73 |
Well-Formed XML
HTML 4.0 has nearly 100 different elements. Most of these elements have a dozen or more possible attributes for several thousand different possible variations. Since XML is more powerful than HTML, you might think that you need to learn even more elements, but you don't. XML gets its power through simplicity and extensibility, not through a plethora of elements.
In fact, XML predefines no elements at all. Instead XML allows you to define your own elements as needed. However, these elements and the documents built from them are not completely arbitrary. Instead, they have to follow a specific set of rules elaborated in this chapter. A document that follows these rules is said to be well-formed. Well-formedness is the minimum criteria necessary for XML processors and browsers to read files. This article examines the rules for well-formed documents. It explores the different constructs that make up an XML documenttags, text, attributes, elements, and so onand discusses the primary rules each of these must follow. Particular attention is paid to how XML differs from HTML. Along the way I introduce several new XML constructs, including comments, processing instructions, entity references, and CDATA sections. This article isn't an exhaustive discussion of well-formedness rules. Some of the rules I present here must be adjusted slightly for documents that have a document type definition (DTD), and there are additional rules for well-formedness that define the relationship between the document and its DTD.
Well-Formedness Rules
The XML specification strictly prohibits XML parsers from trying to fix and understand malformed documents. All a conforming parser is allowed to do is report the error. It may not fix the error. It may not make a best-faith effort to render what the author intended. It may not ignore the offending malformed markup. All it can do is report the error and exit.
Note: The objective here is to avoid the bug-for-bug compatibilitywars that have hindered HTML, and that have made writing HTML parsers andrenderers so difficult. Because Web browsers allow malformed HTML, Web-pagedesigners don't make the extra effort to ensure that their HTML iscorrect. In fact, they even rely on bugs in individual browsers to achievespecial effects. In order to properly display the huge installed base of HTMLpages, every new Web browser must support every quirk of all the Web browsersthat have come before. The marketplace would ignore any browser that strictlyadhered to the HTML standard. It is to avoid tthis sorry state that XMLprocessors are explicitly required to only accept well-formedXML.
To be well-formed, an XML document must follow more than 100 different rules. However, most of these rules simply forbid things that you're not very likely to do anyway if you follow the examples given in this book. For instance, one rule is that the name of the element must immediately follow the < of the element's start tag. For example, <triangle> is a legal start tag but < triangle> isn't. On the other hand, the same rule says that it is OK to have extra space before the tag's closing angle bracket. That is, both <triangle> and <triangle > are well-formed start tags. Another rule says that element names must have at least one character; that is, <> is not a legal start tag, and </> is not a legal end tag. Chances are it never would have occurred to you to create an element with a zero-length name, but computers are dumber than human beings, and need to have constraints like this spelled out for them very formally. XML's well-formedness rules are designed to be understood by software rather than human beings, so quite a few of them are a little technical and won't present much of a problem in practice. The only source for the complete list of rules is the XML specification itself. However, if you follow the rules given here, and check your work with an XML parser such as Xerces before distributing your documents, they should be fine.
Cross Reference: The XML specification itself is found in Appendix C. The formalsyntax the XML specification uses is called the Backus-Naur-Form, or BNF forshort. BNF grammars are an outgrowth of compiler theory that very formallydefines what is and is not a syntactically correct program or, in the case ofXML, a syntactically correct document. A parser can compare any document to theXML BNF grammar character by character and determine definitively whether or notit satisfies the rules of XML. There are no borderline cases. BNF grammars,properly written, leave no room for interpretation. The advantage of this shouldbe obvious to anyone who's had to struggle with HTML documents thatdisplay in one browser but not in another.
As well as matching the BNF grammar, a well-formed XML document must alsomeet various well-formedness constraints that specify conditions thatcan't be easily described in the BNF syntax. Well-formedness is theminimum level that a document must achieve to be parsed. Appendix B provides anannotated description of the complete XML 1.0 BNF grammar as well as all of thewell-formedness tconstraints.
XML Documents
An XML document is made up of text that's divided between markup and character data. It is a sequence of characters with a fixed length that adheres to certain constraints. It may or may not be a file. For instance, an XML document may be:
- A CLOB field in an Oracle database
- The result of a query against a database that combines several records from different tables
- A data structure created in memory by a Java program
- A data stream created on the fly by a CGI program written in Perl
- Some combination of several different files, each of which is embedded in another
- One part of a larger file containing several XMLdocuments
However, nothing essential is lost if you think of an XML document as a file, as long as you keep in the back of your mind that it might not really be a file on a hard drive.
XML documents are made up of storage units called entities. Each entity contains either text or binary data, never both. Text data is comprised of characters. Binary data is used for images and applets and the like.
Note: To use a concrete example, a raw HTML file that includes<IMG> tags is an entity but not a document. An HTML file plus allthe pictures embedded in it with <IMG> tags is a completedocument.
The XML declaration
In this and the next several chapters, I treat only simple XML documents that are made up of a single entity, the document itself. Furthermore, these documents only contain text data, not binary data such as images or applets. Such documents can be understood completely on their own without reading any other files. In other words, they stand alone. Such a document normally contains a standalone pseudo-attribute in its XML declaration with the value yes, similar to this one.
<?xml version="1.0" standalone="yes"?>
Note: I call this a pseudo-attribute because technically onlyelements can have attributes. The XML declaration is not an element. Thereforestandalone is not an attribute even if it looks likeone.
External entities and entity references can be used to combine multiple files and other data sources to create a single XML document. These documents can-tnot be parsed without reference to other files. Therefore, they normally have a stand-alone pseudo-attribute with the value no.
<?xml version="1.0" standalone="no"?>
If a document does not have an XML declaration, or if a document has an XML declaration but that XML declaration does not have a standalone pseudo-attribute, then the value no is assumed. That is, the document is assumed to be incapable of standing on its own, and the parser will prepare itself to read external pieces as necessary. If the document can, in fact, stand on its own, nothing is lost by the parser being ready to read an extra piece.
XML documents do not have to include XML declarations, although they should unless you've got a specific reason not to include them. If an XML document does include an XML declaration, then this declaration must be the first thing in the file (except possibly for an invisible Unicode byte order mark). XML processors determine which character set is being used (UTF-8, big-endian Unicode, or little-endian Unicode) by reading the first several bytes of a file and comparing those bytes against various encodings of the string <?xml . Nothing should come before this, including white space. For instance, this line is not an acceptable way to start an XML file because of the extra spaces at the front of the line.
<?xml version="1.0" standalone="yes"?>
A document must have exactly one root element that completely containsall other elements.
An XML document has a root element that completely contains all other elements of the document. This is also sometimes called the document element, although this element does not have to have the name document or root. Root elements are delimited by a start tag and an end tag, just like any other element. For instance, consider Listing 1.
Listing 1: -greeting.xml
<?xml version="1.0" standalone="yes"?><GREETING>Hello XML!</greeting>
In this document, the root element is GREETING. The XML declaration is not an element. Therefore, it does not have to be included inside the root element. Similarly, other nonelement data in an XML document, such as an xml-stylesheet processing instruction, a DOCTYPE declaration, or comments, do not have to be inside the root element. But all other elements (other than the root itself) and all raw character data must be contained in the root element.
Text in XML
An XML document is made up of text. Text is made up of characters. A character is a letter, a digit, a punctuation mark, a space or tab, or some similar thing. XML uses the Unicode character set which not only includes the usual letters and symbols from English and other Western European alphabets, but also the Cyrillic, Greek, Hebrew, Arabic, and Devanagari alphabets, as well as the most common Han ideographs for Chinese, Japanese, and Korean Hangul syllables. For now, I'll stick to the English language, the Roman script, and the ASCII character set; but I'll introduce many alternatives in the next chapter.
A document's text is divided into character data and markup. To a first approximation, markup describes a document's logical structure, while character data provides the basic information of the document. For example, in Listing 1, <?xml version="1.0" standalone="yes"?>, <greeting>, and </greeting> are markup. Hello XML!, along with its surrounding white space, is the character data. A big advantage of XML over other formats is that it clearly separates the actual data of a document from its markup.
To be more precise, markup includes all tags, processing instructions, DTDs, entity references, character references, comments, CDATA section delimiters, and the XML declaration. Everything else is character data. However, this is tricky because when a document is processed some of the markup turns into character data. For example, the markup > is turned into the greater than sign character (>). The character data that's left after the document is processed, and after all markup that refers to character data has been replaced by the actual character data, is called parsed character data, or PCDATA for short.
Elements and Tags
An XML document is a singly rooted hierarchical structure of elements. Each element is delimited by a start tag (also known as an opening tag) and an end tag (also known as a closing tag) or is represented by a single, empty element tag. An XML tag has the same form as an HTML tag. That is, start tags begin with a < followed by the name of the element the tags start, and they end with the first > after the opening < (for example, <GREETING>). End tags begin with a </ followed by the name of the element the tag finishes and are terminated by a > (for example, </GREETING>). Empty element tags begin with a < followed by the name of the element and are terminated with a /> (for example, <GREETING/>).
Element names
Every element has a name made up of one or more characters. This is the name included in the element's start and end tags. Element names begin with a letter such as y or A or an underscore _. Subsequent characters in the name may include letters, digits, underscores, hyphens, and periods. They cannot include other punctuation marks such as %, ^, or &. They cannot include white space. (The underscore often substitutes for white space.) Both lower- and uppercase letters may be used in XML names. In this book, I mostly follow the convention of making my names uppercase, mainly because this makes them stand out better in the text. However, when I'm using a tag set that was developed by other people it is necessary to adopt their case convention. For example, the following are legal XML start tags with legal XML names:
<HELP><Book><volume><heading1><section.paragraph><Mary_Smith><_8ball>
Note: Colons are also technically legal in tag names. However, theseare reserved for use with namespaces. Namespaces allow you to mix and match XMLapplications that may use the same tag names. Chapter 13 introduces namespaces.Until then, you should not use colons in your tagnames.
The following are not legal start tags because they don't contain legal XML names:
<Book%7><volume control><3heading><Mary Smith><.employee.salary>
Note: The rules for element names actually apply to names of manyother things as well. The same rules are used for attribute names, ID attributevalues, entity names, and a number of other constructs you'll encounterover the next several chapters.
Every start tag must havea corresponding end tag
Web browsers are relatively forgiving if you forget to close an HTML tag. For instance, if you include a <B> tag in your document but no corresponding </B> tag, the entire document after the <B> tag will be made bold. However, the document will still be displayed.
XML is not so forgiving. Every nonempty tagthat is, tags that do not end with />must be closed with the corresponding end tag. If a document fails to close an element with the right end tag, the browser or renderer reports an error message and does not display any of the document's content in any form.
End tags have the same name as the corresponding start tag, but are prefixed with a / after the initial angle bracket. For example, if the start tag is <FOO> the end tag is </FOO>. These are the end tags for the previous set of legal start tags.
</HELP></Book></volume></heading1></section.paragraph></Mary_Smith></_8ball>
XML names are case sensitive. This is different from HTML in which <P> and <p> are the same tag, and a </p> can close a <P> tag. The following are not end tags for the set of legal start tags we've been discussing:
</help></book></Volume></HEADING1></Section.Paragraph></MARY_SMITH></_8BALL>
Empty element tags
Many HTML elements do not have closing tags. For example, there are no </LI>, </IMG>, </HR>, or </BR> tags in HTML. Some page authors do include </LI> tags after their list items, and some HTML tools also use </LI>. However, the HTML 4.0 standard specifically denies that this is required. Like all unrecognized tags in HTML, the presence of an unnecessary </LI> has no effect on the rendered output.
This is not the case in XML. The whole point of XML is to allow new elements and their corresponding tags to be discovered as a document is parsed. Thus, unrecognized tags may not be ignored. Furthermore, an XML processor must be capable of determining on the fly whether a tag it has never seen before does or does not have an end tag. It does this by looking for special empty-element tags that end in />.
Elements that are represented by a single tag without a closing tag are called empty elements because they have no content. Tags that represent empty elements are called empty-element tags. These empty element tags are closed with a slash and a closing angle bracket (/>); for example, <BR/> or <HR/>. From the perspective of XML, these are the same as the equivalent syntax using both start and end tags with nothing in between themfor example, <BR></BR> and <HR></HR>.
However, empty element tags can only be used when the element is truly empty, not when the end tag is simply omitted. For example, in HTML you might write an unordered list like this:
<UL><LI>I've a Feeling We're Not in Kansas Anymore<LI>Buddies<LI>Everybody Loves You</UL>
In XML, you cannot simply replace the <LI> tags with <LI/> because the elements are not truly empty. Instead they contain text. In normal HTML the closing </LI> tag is omitted by the editor and filled in by the parser. This is not the same thing as the element itself being empty. The first LI element above contains the content I've a Feeling We're Not in Kansas Anymore. In XML, you must close these tags like this:
<UL><LI>I've a Feeling We're Not in Kansas Anymore</LI><LI>Buddies</LI><LI>Everybody Loves You</LI></UL>
On the other hand, a BR or HR or IMG element really is empty. It doesn't contain any text or child elements. Thus, in XML, you have two choices for these elements. You can either write them with a start and an end tag in which the end tag immediately follows the start tagfor example, <HR></HR>or you can write them with an empty element tag as in <HR/>.
Note: Current Web browsers deal inconsistently with empty elementtags. For instance, some browsers will insert a line break when they see a<HR/> tag and some won't. Furthermore, the problem mayarise even without empty element tags. Some browsers insert two horizontal lineswhen they see <HR></HR> and some insert one horizontalline. The most generally compatible scheme is to use an extra attribute beforethe closing />. The class attribute is often a goodchoicefor example, <HR CLASS="empty"/>. XSLToffers a few more ways to maintain compatibility with legacy browsers. Chapter17 discusses these methods.
Elements may nest but may notoverlap
Elements may contain (and indeed often do contain) other elements. However, elements may not overlap. Practically, this means that if an element contains a start tag for an element, it must also contain the corresponding end tag. Conversely, an element may not contain an end tag without its matching start tag. For example, this is legal XML.
<H1><CITE>What the Butler Saw</CITE></H1>
However, the following is not legal XML because the closing </CITE> tag comes before the closing </H1> tag:
<H1><CITE>What the Butler Saw</H1></CITE>
Most HTML browsers can handle this case with ease. However, XML browsers are required to report an error for this construct.
Empty element tags may appear anywhere, of course. For example,
<PLAYWRIGHTS>Oscar Wilde<HR/>Joe Orton</PLAYWRIGHTS>
This implies that for all nonroot elements, there is exactly one other element that contains the element, but which does not contain any other element containing the element. This immediate container is called the parent of the element. The contained element is called the child of the parent element. Thus each nonroot element always has exactly one parent, but a single element may have an indefinite number of children or no children at all.
Consider Listing 2. The root element is the PLAYS element. This contains two PLAY children. Each PLAY element contains three children: TITLE, AUTHOR, and YEAR. Each of these contains only character data, not more children.
Listing 2: -Parents and Children
<?xml version="1.0" standalone="yes"?><PLAYS> <PLAY> <TITLE>What the Butler Saw</TITLE> <AUTHOR>Joe Orton</AUTHOR> <YEAR>1969</YEAR> </PLAY> <PLAY> <TITLE>The Ideal Husband</TITLE> <AUTHOR>Oscar Wilde</AUTHOR> <YEAR>1895</YEAR> </PLAY></PLAYS>
In programmer terms, this means that XML documents form a tree. It starts from the root and gradually bushes out to the leaves on the ends. Trees have a number of nice properties that make them congenial to programmatic traversal, although this doesn't matter so much to you as the author of the document.
Note: Trees are more commonly drawn from the top down. That is, theroot of the tree is shown at the top of the picture rather than the bottom.While this looks less like a real tree, it doesn't affect the topology ofthe data structure in the least.
Attributes
Elements may optionally have attributes. Each attribute of an element is encoded in the start tag of the element as a name-value pair separated by an equals sign (=) and, optionally, some extra white space. The attribute value is enclosed in single or double quotes. For example,
<GREETING LANGUAGE="English"> Hello XML! <MOVIE SRC = WavingHand.mov'/></GREETING>
Here the GREETING element has a LANGUAGE attribute that has the value English. The MOVIE element has an SRC attribute with the value WavingHand.mov.
Attribute names
Attribute names are strings that follow the same rules as element names. That is, attribute names must contain one or more characters, and the first character must be a letter or the underscore (_). Subsequent characters in the name may include letters, digits, underscores, hyphens, and periods. They may not include white space or other punctuation marks.
The same element may not have two attributes with the same name. For example, this is illegal:
<RECTANGLE SIDE="8" SIDE="10"/>
Attribute names are case sensitive. The SIDE attribute is not the same as the side or the Side attribute. Therefore, the following is legal:
<BOX SIDE="8" side="10" Side="31"/>
However, this is extremely confusing, and I strongly urge you not to write markup that depends on case.
Attribute values
Attributes values are strings. Even when the string shows a number, as in the LENGTH attribute below, that number is the two characters 7 and 2, not the binary number 72.
<RULE LENGTH="72"/>
If you're writing a program to process XML, you'll need to convert the string to a number before performing arithmetic on it.
Unlike attribute names, there are few limits on the content of an attribute value. Attribute values may contain white space, begin with a number, or contain any punctuation characters (except, sometimes, for single and double quotes). The only characters an attribute value may not contain are the angle brackets < and >, though these can be included using the < and > entity references (discussed soon).
XML attribute values are delimited by quote marks. Unlike HTML attribute values, XML attribute values must be enclosed in quotes whether or not the attribute value includes spaces. For example,
<A HREF="">IBiblio</A>
Most people choose double quotes. However, you can also use single quotes, which is useful if the attribute value itself contains a double quote. For example,
<IMG SRC="sistinechapel.jpg" ALT='And God said, Let there be light," and there was light'/>
If the attribute value contains both single and double quotes, then the one that's not used to delimit the string must be replaced with the proper entity reference. I generally just go ahead and replace both, which is always legal. For example,
<RECTANGLE LENGTH='8'7"' WIDTH="10'6""/>
If an attribute value includes both single and double quotes, you may use the entity reference ' for a single quote (an apostrophe) and " for a double quote. For example,
<PARAM NAME="joke" VALUE="The diner said, "Waiter, There's a fly in my soup!"">
Entity References
You're probably familiar with a number of entity references from HTML. For example, © inserts the copyright symbol ) and ® inserts the registered trademark symbol . XML predefines the five entity references listed in Table 1. These predefined entity references are used in XML documents in place of specific characters that would otherwise be interpreted as part of markup. For instance, the entity reference < stands for the less than sign (<), which would otherwise be interpreted as beginning a tag.
Table 1
XML Predefined Entity references
Caution: In XML, unlike HTML, entity references must end with asemicolon. > is a correct entity reference; > isnot.
XML assumes that the opening angle bracket always starts a tag, and that the ampersand always starts an entity reference. (This is often true of HTML as well, but most browsers are more forgiving.) For example, consider this line,
<H1>A Homage to Ben & Jerry's New York Super Fudge Chunk Ice Cream</H1>
Web browsers that treat this as HTML will probably display it correctly. However, XML parsers will reject it. You should escape the ampersand with & like this:
<H1>A Homage to Ben & Jerry's New York Super Fudge Chunk Ice Cream</H1>
The open angle bracket (<) is similar. Consider this common Java code embedded in HTML:
<CODE> for (int i = 0; i <= args.length; i++ ) { </CODE>
Both XML and HTML consider the less than sign in <= to be the start of a tag. The tag continues until the next >. Thus a Web browser treating this fragment as HTML will render this line as
for (int i = 0; i
rather than
for (int i = 0; i <= args.length; i++ ) {
The = args.length; i++ ) { is interpreted as part of an unrecognized tag. Again, an XML parser will reject this line completely because it's malformed.
The less than sign can be included in text in both XML and HTML by writing it as <. For example,
<CODE> for (int i = 0; i <= args.length; i++ ) { </CODE>
Raw less than signs and ampersands in normal XML text are always interpreted as starting tags and entity references respectively. (The abnormal text is CDATA sections, described below.) Therefore, less than signs and ampersands that are text rather than markup must always be encoded as < and & respectively. Attribute values are text, too, and as you already saw, entity references may be used inside attribute values.
Greater than signs, double quotes, and apostrophes must be encoded when they would otherwise be interpreted as part of markup. However, it's easier just to get in the habit of encoding all of them rather than trying to figure out whether a particular use would or would not be interpreted as markup.
Other than the five entity references already discussed, you can only use an entity reference if you define it in a DTD first. Since you don't know about DTDs yet, if the ampersand character & appears anywhere in your document, it must be immediately followed by amp;, lt;, gt;, apos;, or quot;. All other uses violate well-formedness.
Cross Reference: Chapter 10 teaches you how to define new entity references forother characters and longer strings of text usingDTDs.
XML comments are almost exactly like HTML comments. They begin with <!-- and end with --> . All data between the <!-- and --> is ignored by the XML processor. It's as if it weren't there. This can be used to make notes to yourself or your coauthors, or to temporarily comment out sections of the document that aren't ready, as Listing 3 demonstrates.
Listing 3: -An XML document that contains a comment
<?xml version="1.0" standalone="yes"?><!-- This is Listing 6-3 from The XML Bible --><GREETING>Hello XML!<!--Goodbye XML--></GREETING>
Since comments aren't elements, they may be placed before or after the root element. However, comments may not come before the XML declaration, which must be the very first thing in the document. For example, this is not a well-formed XML document:
<!-- This is Listing 6-3 from The XML Bible --><?xml version="1.0" standalone="yes"?><GREETING>Hello XML!<!--Goodbye XML--></GREETING>
Comments may not be placed inside a tag. For example, this is also illegal:
<?xml version="1.0" standalone="yes"?><GREETING>Hello XML!</GREETING <!--Goodbye--> >
However comments may surround and hide tags. In Listing 4, the t<antigreeting> tag and all its children are commented out. They are not tshown when the document is rendered. It's as if they don't exist.
Listing 4: -A comment that comments out an element
<?xml version="1.0" standalone="yes"?><DOCUMENT> <GREETING> Hello XML! </GREETING> <!-- <ANTIGREETING> Goodbye XML! </ANTIGREETING> --></DOCUMENT>
Because comments effectively delete sections of text, you must take care to ensure that the remaining text is still a well-formed XML document. For example, be careful not to comment out essential tags, as in this malformed document:
<?xml version="1.0" standalone="yes"?><GREETING>Hello XML!<!--</GREETING>-->
Once the commented text is removed what remains is
<?xml version="1.0" standalone="yes"?><GREETING>Hello XML!
Because the <greeting> tag is no longer matched by a closing </greeting> tag, this is no longer a well-formed XML document.
There is one final constraint on comments. The two-hyphen string -- may not occur inside a comment except as part of its opening or closing tag. For example, this is an illegal comment:
<!-- The red door--that is, the second one--was left open -->
This means, among other things, that you cannot nest comments like this:
<?xml version="1.0" standalone="yes"?><DOCUMENT> <GREETING> Hello XML! </GREETING> <!-- <ANTIGREETING> <!--Goodbye XML!--> </ANTIGREETING> --></DOCUMENT>
It also means that you may run into trouble if you're commenting out a lot of C, Java, or JavaScript source code that's full of expressions such as i-- or numberLeft--. Generally, it's not too hard to work around this problem once you recognize it.
Processing Instructions
Processing instructions are like comments that are intended for computer programs reading the document rather than people reading the document. However, XML parsers are required to pass along the contents of processing instructions to the application on whose behalf they're parsing, unlike comments, which a parser is allowed to silently discard. The application that receives the information is free to ignore any processing instruction it doesn't understand.
Processing instructions begin with <? and end with ?>. The starting <? is followed by an XML name called the target, which identifies the program that the instruction is intended for, followed by data for that program. For example, you saw this processing instruction in the last chapter.
<?xml-stylesheet type="text/xml" href="5-2.xsl"?>
The target of this processing instruction is xml-stylesheet, a standard name that means the data in this processing instruction is intended for any Web browser that can apply a style sheet to the document. type="text/xml" href="5-2.xsl" is the processing instruction data that will be passed to the application reading the document. If that application happens to be a Web browser that understands XSLT, then it will apply the style sheet 5-2.xsl to the document and render the result. If that application is anything other than a Web browser, it will simply ignore the processing instruction.
Note: Appearances to the contrary, the XML declaration is technicallynot a processing instruction. The difference is academic unless you'rewriting a program to read an XML document using an XML parser. In that case, theparser's API will provide different methods to get the contents ofprocessing instructions and the contents of the XMLdeclaration.
xml-stylesheet processing instructions are always placed in the document's prolog between the XML declaration and the root element start tag. Other processing instructions may also be placed in the prolog, or at almost any other convenient location in the XML document, either before, after, or inside the root element. For example, PHP processing instructions generally appear wherever you want the PHP processor to place its output. The only place a processing instruction may not appear is inside a tag or before the XML declaration.
The target of a processing instruction may be the name of the program it is intended for or it may be a generic identifier such as xml-stylesheet that many different programs recognize. The target name xml (or XML, Xml, xMl, or any other variation) is reserved for use by the World Wide Web Consortium. However, you're free to use any other convenient name for processing instruction targets. Different applications support different processing instructions. Most applications simply ignore any processing instruction whose target they don't recognize.
The xml-stylesheet processing instruction uses a very common format for processing instructions in which the data is divided into pseudo-attributes; that is, the data is passed as name-value pairs, and the values are delimited by quotes. However, as with the XML declaration, these are not true attributes because a processing instruction is not a tag. Furthermore, this format is optional. Some processing instructions will use this style; others won't. The only limit on the content of processing instruction data is that it may not contain the two-character sequence ?> that signals the end of a processing instruction. Otherwise, it's free to contain any legal characters that may appear in XML documents. For example, this is a legal processing instruction.
<?html-signature Copyright 2001 <a href=> Elliotte Rusty Harold</a><br> <a href=mailto:elharo@metalab.unc.edu> elharo@metalab.unc.edu</a><br> Last Modified May 3, 2001?>
In this example, the target is html-signature. The rest of the processing instruction is data and contains a lot of malformed HTML that would otherwise be illegal in an XML document. Some programs might read this, recognize the html-signature target, and copy the data into the signature of an HTML page. Other programs that don't recognize the html-signature target will simply ignore it.
CDATA Sections
Suppose your document contains one or more large blocks of text that have a lot of <, >, &, or characters but no markup. This would be true for a Java or HTML tutorial, for example. It would be inconvenient to have to replace each instance of one of these characters with the equivalent entity reference. Instead, you can include the block of text in a CDATA section.
CDATA sections begin with <![CDATA[ and end with ]]>. For example:
<![CDATA[System.out.print(<");if (x <= args.length && y > z) { System.out.println(args[x - y]);}System.out.println(>");]]>
The only text that's not allowed within a CDATA section is the closing CDATA tag ]]>. Comments may appear in CDATA sections, but do not act as comments. That is, both the comment tags and all the text they contain will be displayed.
Most of the time anything inside a pair of <> angle brackets is markup, and anything that's not is character data. However, in CDATA sections, all text is pure character data. Anything that looks like a tag or an entity reference is really just the text of the tag or the entity reference. The XML processor does not try to interpret it in any way. CDATA sections are used when you want all text to be interpreted as pure character data rather than as markup.
CDATA sections are extremely useful if you're trying to write about HTML or XML in XML. For example, this book contains many small blocks of XML code. The word processor I'm using doesn't care about that. But if I were to convert this book to XML, I'd have to painstakingly replace all the less than signs with < and all the ampersands with & like this:
<?xml version="1.0" standalone="yes"?><greeting>Hello XML!</greeting>
To avoid having to do this, I can instead use a CDATA section to indicate that a block of text is to be presented as is with no translation. For example:
<![CDATA[<?xml version="1.0" standalone="yes"?><GREETING>Hello XML!</GREETING>]]>
Note: Because ]]> may not appear in a CDATA section, CDATAsections cannot nest. This makes it relatively difficult to write about CDATAsections in XML. If you need to do this, you just have to bite the bullet anduse the < and &escapes.
CDATA sections aren't needed that often, but when they are needed, they're needed badly.
Tools
It is not particularly difficult to write well-formed XML documents that follow the rules described in this article. However, XML browsers are less forgiving of poor syntax than are HTML browsers, so you do need to be careful.
If you violate any well-formedness constraints, XML parsers and browsers will report a syntax error. Thus, the process of writing XML can be a little like the tprocess of writing code in a real programming language. You write it; then you compile it; then when the compilation fails, you note the errors reported and fix them. In the case of XML you parse the document rather than compile it, but the pattern is the same.
Generally, this is an iterative process in which you go through several edit-parse cycles before you get your first look at the finished document. Despite this, there's no question that writing XML is a lot easier than writing C or Java source code. With a little practice, you'll get to the point where you have relatively few errors and can write XML almost as quickly as you can type.
There are several tools that will help you clean up your pages, most notably RUWF (Are You Well Formed?) from XML.COM and Tidy from Dave Raggett of the W3C.
RUWF
Any tool that can check XML documents for well-formedness can test well-formed HTML documents as well. One of the easiest to use is the RUWF well-formedness checker from XML.COM at . Simply type in the URL of the page that you want to check, and RUWF returns the first several dozen errors on the page.
Here's the first batch of errors RUWF found on the White House home page. Most of these errors are malformed XML, but legal (if not necessarily well styled) HTML. However, at least one error (Line 55, column 30: Encountered </FONT> with no start-tag.") is a problem for both HTML and XML.
Line 28, column 7: Encountered </HEAD> expected </META>...assumed </META> ...assumed </META> ...assumed </META>...assumed </META>Line 36, column 12, character '0': after AttrName= in start-tagLine 37, column 12, character '0': after AttrName= in start-tagLine 38, column 12, character '0': after AttrName= in start-tagLine 40, column 12, character '0': after AttrName= in start-tagLine 41, column 10, character 'A': after AttrName= in start-tagLine 42, column 12, character '0': after AttrName= in start-tagLine 43, column 14: Encountered </CENTER> expected </br>...assumed </br> ...assumed </br>Line 51, column 11, character '+': after AttrName= in start-tagLine 52, column 51, character '0': after AttrName= in start-tagLine 54, column 57: after &Line 55, column 30: Encountered </FONT> with no start-tag.Line 57, column 10, character 'A': after AttrName= in start-tagLine 59, column 15, character '+': after AttrName= in start-tag
Tidy
After you've identified the problems, you'll want to fix them. Many common problemsfor instance, putting quote marks around attribute valuescan be fixed automatically. The most convenient tool for doing this is Dave Raggett's command line program HTML Tidy. Tidy is a character mode program written in ANSI C tthat can be compiled and run on most platforms, including Windows, UNIX, BeOS, and Mac.
Tidy cleans up HTML files in several ways, not all of which are relevant to XML well-formedness. In fact, in its default mode Tidy tends to remove unnecessary (for HTML, but not for XML) end tags such as </LI>, and to make other modifications that break well-formedness. However, you can use the -asxml switch to specify that you want well-formed XML output. For example, to convert the file index.html to well-formed XML, you would type this command from a DOS window or shell prompt:
C:> tidy -m -asxml index.html
The -m flag tells Tidy to convert the file in place. The -asxml flag tells Tidy to format the output as XML.
Summary
In this article, you learned about XML's well-formedness rules. In particular, you learned:
- XML documents are sequences of characters that meet certain well-formedness criteria.
- The text of an XML document is divided into character data and markup.
- An XML document is a tree structure made up of elements.
- Tags delimit elements.
- Start tags and empty tags may contain attributes, which describe elements.
- Entity references allow you to include <, >,&, , and ' in your document.
- CDATA sections are useful for embedding text that contains a lot of<, >, and & characters.
- Comments can document your code for other people who read it, but parsers may ignore them. Comments can also hide sections of the document that aren't ready.
- Processing instructions allow you to pass application-specific informationto particular applications..
| http://www.developer.com/xml/article.php/784621/Well-Formed-XML.htm | CC-MAIN-2016-07 | refinedweb | 6,812 | 63.7 |
Re: Database connection issue using SQL schema user account
- From: "Mary Chipman [MSFT]" <mchip@xxxxxxxxxxxxxxxxxxxx>
- Date: Fri, 11 Jan 2008 11:25:10 -0500
User-schema separation has introduced a whole new level of confusion
that didn't exist before SQLS 2005. For example:
--Schemas are intended to be used for grouping objects, much like a
namespace. They can simplify permissions insofar as being able to have
new objects created inside of a schema inherit permissions assigned to
the schema, but no permissions are inherited from a schema by users;
they are only inherited by the database objects inside the schema.
--The dbo user account is not the same thing as the dbo default
schema. The dbo user maps to db_owner/sysadmin. The dbo schema in 2005
serves the purpose of providing a default schema that is backwards
compatible with earlier versions.
--Users who are assigned the dbo schema do not inherit the permissions
of the dbo user account. There is no reason why you need to have
schemas owned by different users who have restricted privileges -- dbo
can work just fine because each schema can have its own set of
permissions that objects inherit that are independent of the dbo user
account.
It looks to me like you may have created users with restricted
permissions by granting them only db_datareader and db_datawriter
roles. That means they don't have permission to do anything else but
read and write data in one particular database. So I think the issue
may be one of permissions rather than schemas, although it's hard to
tell without knowing more about it. I'd recommend taking a look at
SQLS BOL,
ADO.NET and SQL
Server MVP Erland Sommarskog's web site. This will help you get a
handle on designing an appropriate security model that will work for
you.
--Mary
On Wed, 9 Jan 2008 03:57:20 -0800 (PST), mcotter@xxxxxxxxxxxxxxx
wrote:
Thanks for your responds William..
I believe the initial catalog parameter is not required because I set
the default database setting when creating the user login account. I
tried this parameter anyways but still received the same error.
I am still in the development phase of this project so the deployment
setup is not an option. I am still puzzled why you must be a local
administrator or a dbo to attach a database. This requirement
eliminates the powerful use of database schemas. Is this an oversight
by Microsoft or am I missing something
- References:
- Database connection issue using SQL schema user account
- From: mcotter
- Re: Database connection issue using SQL schema user account
- From: William Vaughn
- Re: Database connection issue using SQL schema user account
- From: mcotter
- Prev by Date: Memory footprint of expression columns
- Next by Date: LIMIT 1,10 versus Sql Server: TOP
- Previous by thread: Re: Database connection issue using SQL schema user account
- Next by thread: Index Questions
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.adonet/2008-01/msg00102.html | crawl-002 | refinedweb | 485 | 57.3 |
Accessing .NET Serial Ports from C#
WEBINAR: On-demand webcast
How to Boost Database Development Productivity on Linux, Docker, and Kubernetes with Microsoft SQL Server 2017 REGISTER >
Every PC I've ever used has had at least one. In fact, in times gone by it wasn't rare to see two, sometimes even three of them. What am I talking about?
The humble RS232 Serial port.
Some of the younger readers of this article might at this point be scratching their heads wondering what on earth this old duffer is talking about. Look on the back of your PC and look for anything that looks like this:
Figure 1: Identifying the RS232 port
Back in the mists of time (way before USB), these ports, known as RS232 Serial ports, were the primary means of connecting extra peripherals to your PC. Everything from mice to touch screens, modems, and even network-like connections to other computers went through these things at some time or another.
These days, however, serial ports are becoming a rare occurrence, with many newer machines omitting them entirely. In all honesty, I can't remember when I last saw one on a laptop, and the only machines that I now possess that still have physical ports on the back are the PC I'm typing this post on, and the rack of servers I have in the server room upstairs.
What's All This Got to Do with .NET?
Ever since its very beginning, .NET has had great support for any and all serial ports available on the hardware on which it's running. This comes in the form of the 'System.IO.Ports.SerialPort' object. The serial port object makes it amazingly easy to attach your serial-enabled PC to all sorts of equipment that still uses serial connections, which, granted is quite old, but still useable. Given that statement, however, I can still hear many of you mumbling and shaking your heads.
"What's the point if many PCs no longer have any ports on them?"
I can understand why many of you might be thinking and wondering why on earth I chose to do an article on an arcane, dying technology that appears to be of little use, but there's a little secret that I've yet to tell you. A huge amount of all these new-fangled USB gadgets that many people plug into their PCs these days actually appear to the host machine as something called a "COM Port".
These "COM Ports" are actually how serial ports are addressed in your PC. If you look in your Device Manager, for example:
Figure 2: The COM port listed in Device Manager
You might see something like this. In my case, this is actually a GPS device plugged into one of my USB ports with a serial port identifier of "COM45"
Now I actually do happen to know already what the settings for me to program this device are, so after starting up a console mode project in Visual Studio, I add the following code to Program.cs:
using System; using System.IO.Ports; namespace SerialPorts { class Program { static void Main(string[] args) { SerialPort myPort = new SerialPort(); myPort.DataReceived += MyPortDataReceived; myPort.PortName = "COM45"; myPort.BaudRate = 4800; myPort.DataBits = 8; myPort.Parity = Parity.None; myPort.StopBits = StopBits.One; myPort.Open(); Console.ReadLine(); myPort.Close(); } static void MyPortDataReceived(object sender, SerialDataReceivedEventArgs e) { var myPort = sender as SerialPort; Console.WriteLine(myPort.ReadLine()); } } }
That code gives us the following output:
Figure 3: GPS output
As you can see from the screen grab, what I actually have here is the actual GPS data being produced by the attached GPS device.
As far as Windows and .NET are concerned, you're simply just talking to a standard serial port; the mechanics of it being a USB or even any other type of physical connection make no difference at all. In my collection of gadgets, I have barcode scanners, temperature detectors, and all sorts of stuff.
Here's the best part, though; many mobile phones also show up as a GSM Modem and likewise most 3G USB modem sticks do too. This is interesting for lots of reasons, because with the correct commands you can send instructions to the modem and perform tasks like send SMS messages and make voice and data calls from within .NET code.
Describing the Sample Code
Working through the code, this is about the simplest example that can be put together. First off, you need to make sure you reference 'System.IO.Ports' We then proceed to 'new up' a serial port object that we call 'myPort'.
The first thing we add to this port is an event handler to handle any incoming data from the attached device. The event handler uses the .NET "as syntax" to cast the sender object to the correct type. This means you don't have to make your actual source port a global in your application.
We then simply just use ReadLine to get the next line of data available. I've done this for simplicity, but you could just as easy use one of the many other methods to get all the bytes available, one byte at a time or multiple strings. Different types of data will require different strategies, depending on how you need to process the data you receive. Often, this step is the hardest step involved in serial port programming.
If you want to send data back to the device, all the read methods have corresponding write methods, too. I could, for example, say:
myPort.WriteLine("My Serial Device Command");
and that would, as you might expect, send the line, followed by a carriage return to the device via its open port.
In the case of the demo here, this would have no effect because the GPS device I'm using only outputs data and ignores any input given to it. In the case of a mobile phone, however, there are all sorts of commands in the standard GSM Modem command set that you could send to it to perform all kinds of tasks.
The remaining lines of code all set up the initial parameters to the serial port, so that Windows knows exactly what to open and how. The "COM45" port name, if you remember, came from the Device Manager. The rest, as I said previously, came from the programming manual I received when I bought the GPS device.
The 'BaudRate' property is the speed at which the serial connection communicates with the PC. In our demo, 4800 represents 4800 bits per second. The data bit size of 8 tells Windows that our data comes in 8 bit chunks and that we have no parity checking and one stop bit.
If we do the math on this, we have 8 bits per item of data + 1 stop bit. That means we potentially have 9 bits for each single byte received. 4800 times 9 gives us approx 533 bytes per second, or just over half a K of data. A full explanation of how parity and stop bits work is beyond the scope of this article, but if you choose to peruse it further, there's plenty of material out there on the Internet. For our purposes, the faster the baud rate, the faster the device can send data to the host PC, and ultimately the faster you'll need to process it.
If you don't know the settings for the device you're programming, don't worry too much. You often can try different baud rates and other settings until the data you start to receive makes sense. In fact, many bits of software that scan for modems and other serial devices do just this. Be careful about sending data to the device, however, until you know the settings. Even incorrect settings will allow data to reach the device, but if that data is corrupt, it may cause the device to malfunction.
This was just a brief view at what's available under the hood. Many devices that you might not even realise are serial in nature appear in your Device Manager as COM ports. On top of that, you also can get USB/Serial cables that allow you to connect directly to other machines, such as server lights out boards and routers. With these kinds of cables, it's easily possible to write remote control software, or even MVC sites that communicate with gear on the server side, but allow an operator to simply use a browser on another machine to control that device.
Finally, the implementation of the serial port object in mono is also fully complete, which means your .NET code to control a serial device will run everywhere that mono does. I use C# on a Raspberry PI to control a radio-controlled car.
If you have any suggestions or ideas for articles on specific topics of interest.
Loan TipsPosted by Loanemu on 10/14/2015 03:30am
Major thankies for the post. Will read on...Reply
10 bits per bytePosted by Mark Ward on 11/16/2014 07:33am
Your math is only slightly off... While there are 8 data bits, 1stop bit, and no parity bit, there is ALWAYS a START bit, so (in your data example, there are 10 bites per byte.Reply
biometric system based electronic voting machine using ARM9 microcontrollerPosted by b.divya soundarya sai on 10/14/2014 10:54pm
my project is finger-print based electonic voting machine using arm9 microcontroller.so how can i transfer finger print data from central server,i.e pc into arm9 board using vb.net.Reply
Lots of devices out therePosted by scott purcell on 10/13/2014 07:48am
I work in the industrial automation field and I can tell you there are still millions and millions of devices talking RS232. I have used machine mounted barcode scanners, scales, measuring instruments and a variety of data acquisition units all on RS232 or RS485 connections. Thanks for your article.
RE: Lots of devices out therePosted by Peter Shaw on 10/15/2014 12:26pm
Hi Scott, I'm glad you found things useful. Yes I can well Believe there are tons of device out there still using serial. I work with a lot of telemetry kit, custom devices in vehicles, industrial GPS and such like so know exactly where your coming from.Reply
IT ManagerPosted by David Boccabellla on 10/13/2014 06:20am
Oh thanks you.. Yes - Serial is an old technology however in process control that are thousands of devices that still happily chat at 9600 baud. Just because some executives thinks that serial is no-longer 'cool' does not mean at a legacy of equipment is going to change.
RE: IT ManagerPosted by Peter Shaw on 10/15/2014 12:30pm
Indeed David, I still have more than a few gadgets that happily chat away at 9600 baud. IN fact my server alert service in case my Broadband goes down, is an old SPV C600 with a serial port adapter soldered to the now broken USB port on it. I soldered a USB cable direct onto the motherboard, then soldered a serial UART onto the end of that, which via a traditional Serial port on the monitoring server allows me to send SMS.Reply | https://www.codeguru.com/columns/dotnet/accessing-.net-serial-ports-from-c.html | CC-MAIN-2017-47 | refinedweb | 1,891 | 60.45 |
installation of QT creator 5.6.0 with gstreamer on Windows 10
- shashi prasad
Hi All,
i installed QT creator 5.6.0(qt-opensource-windows-x86-msvc2015_64-5.6.0) on 64 bit windows machine.
tested small program,its working fine.
now i want to play a video using gstreamer.
so i installed gstreamer (gstreamer-1.0-x86_64-1.12.1) and serached gstreamer-1.0 lib but could not found.
again downloaded some other gstreamer for 64 bits but gstreamer-1.0 lib but could not found.
downloaded gstreamer for 32 bit machine (gstreamer-1.0-devel-x86-1.12.1) and installed it on this 64 bit machine.
serached gstreamer-1.0 lib, this lib is there in a path.
why this library is not there when installing 64 bit gstreamer and is it ok to install 32 bit gstreamer on 64 bit QT creator?
please suggest me.
now i have written code for playing video using gstremer.
in mainWindow.cpp
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include<stdio.h>
#include <gst.h>
#include <glib.h>
gst.h and glib.h is related to gstreamer.
so i added gstreamer library in .profile as
INCLUDEPATH += E:/gstreamer/1.0/x86/include
E:/gstreamer/1.0/x86/include/gstreamer-1.0/gst
E:/gstreamer/1.0/x86/include/glib-2.0
E:/gstreamer/1.0/x86/include/glib-2.0/glib
E:/gstreamer/1.0/x86/lib/glib-2.0/include
E:/gstreamer/1.0/x86/include/gstreamer-1.0/gst
LIBS += -LE:/gstreamer/1.0/x86/lib
-LE:/gstreamer/1.0/x86/lib -gstreamer-1.0
CONFIG += E:/gstreamer/1.0/x86/lib/pkgconfig
while compiling i am getting error as
can not open included file gst.h .
please suggest me where i missed in .profile ?
is it possible to integrate gstreamer with QT in window10 ?
is there any pluggins require for QT creator for using gstreamer?
if Yes then please suggest me how to install gstreamer pluggins for QT.
Thanks in advance.
@shashi-prasad said in installation of QT creator 5.6.0 with gstreamer on Windows 10:
gstreamer-1.0-x86_64-1.12.1
This is not a devel package so .lib files are not likely to be included. I would suggest using gstreamer-1.0-devel-x86_64-1.12.1.msi instead. | https://forum.qt.io/topic/81318/installation-of-qt-creator-5-6-0-with-gstreamer-on-windows-10 | CC-MAIN-2018-13 | refinedweb | 387 | 72.73 |
Recently,> … {
…
void sort() import static java.util.Collections.sort;
…
}
Here is my proposed syntax, allow to declare static method
in interfaces (in the same time, we close
one the top 25 RFE Bug 4093687)
and add some sugar in the compiler.
package java.util;
public interface List<E> … {
…
@ExtensionMethod
static <T extends Comparable<? super T>> void sort(List<T> list) {
java.util.Collections.sort(list);
}
…
}
Like other proposed syntax, the compiler
is modified to transform an instance call
to a static call. The following code
List<String> list=...
list.sort();
List<String> list=...
List.sort(list);
@ExtensionMethod (borrowed from
Stephen blog)
like @Override is not
mandatory and tells the compiler to verify
that the first parameter of the static
method is the same type that the declaring interface.
I wait your comments,
Rémi
I suppose it depends on your screen size. Maybe with a tall screen you could see the imported extension methods. Your proposed syntax is not easy to read (you can understand it if you look hard) and might not even fit on the screen horizontally... There are too many different symbols, inheritance, and the like.
I'm not keen on extension methods in Java (okay for scripting languages, as you know what you're getting into) mainly because I don't like the potential confusion for an imported extension method "hiding" a declared method with the same name. Or the opposite? Difficult to remember. Again, resolvable, but reducing code clarity.
Your first criticism seems to be the ambiguity caused by not being able to see the relevant import statement. How is that different from confusion caused when importing java.util.* and java.sql.* and referring to Date? Or java.util.* and java.awt.* and List? I don't see you recommending use of fully-qualified names and nothing else, so the line between clarity and concise code seems to be subjective.
In the same way, I was originally for a real property syntax in Java 7, as opposed to the "get", "set", and "is" convention, similar to C#: concise declaration, concise use, and (for no extra cost) a clear differentiation of a property and some arbitrary "get" method. Current property proposals go way beyond that, with all sorts of syntax (instead of just ".") to do some advanced stuff with another set of concepts and syntax (the opposite of simplification). If you really want a much more powerful language, instead of aiming for clarity-enhancing features, consider working on a scripting language. For that matter, Groovy has almost all of what people seem to be asking for...
- Chris
Posted by: chris_e_brown on November 29, 2007 at 03:50 AM
Use-site extension methods allow people in the Java community at large to create useful, targeted APIs for interfaces which are designed by someone else, and which de facto can't be changed. The List API was not designed by you or me and cannot be changed by us. Use-site extension methods allow us to "extend" the API in ways that make our intentions more clear. Declaration-site extension methods allow the designer of an API some flexibility in extending their interfaces after an earlier version is already in widespread distribution. I think both use cases are valuable.
If you're talking about List, some people believe list.first() is more clear than list.get(0) and list.last() is more clear than list.get(list.size() - 1). There was a long discussion about this re: Ruby versus Java APIs some months back, under the heading IIRC "humane APIs".
As long as there are clear rules for what code gets invoked in the presence of static extension method imports, then I don't see a problem. When working with any class, for methods defined in some superclass I don't immediately know where that method is defined, overridden, or what it does, exactly. My IDE or other tools help me figure that out. What's important is that method selection is rule-based, statically constrained, and well-understood.
And for the record, I know exactly what aString.size() means :).
Regards
Patrick
Posted by: pdoubleya on November 29, 2007 at 04:12 AM
I like the syntax with static interface methods...
Posted by: zero on November 29, 2007 at 04:42 AM
As others have mentioned user-site extension methods provide the library user with the ability to enrich the interface based on the current contract without requiring a new version from the library provider. As to the static import issue, I think IDEs could help here, in the same vein as the ability to recognize fields with syntax highlighting rather than writing 'this.' when referencing fields.
Posted by: khalilb on November 29, 2007 at 06:55 AM
I understand why you have a problem with the Neal's extension methods, I hadn't looked at it like that. The problem of course being that the import statement does not say WHICH classes it affects! So if you have some code with a lot of those static imports you would have to check ALL of them to figure out where that strange new method comes from.
But for the same reason as given by others before I don't like declaration-site ext.methods because it does not give ME, as a programmer, any "power".
I wonder if your problem with use-site ext.methods could not be lessened by adding a more explicit definition, something that will show which classes will be affected by the import? Something like:
import static org.test.utils.Strings.* for String;
Posted by: quintesse on November 29, 2007 at 08:01 AM
I'm not sure I like this. It's just not clear enough where the method is coming from. It also has a rather large scope, it effects everything defined in the entire file. How about a variation on casting so it oblivious that you no longer have a standard list.
List list=...
([ListExt]list).sort();
One might wish to keep the extended interface around.
ListExt exList = [ListExt]list;
exList.sort();
exList.print();
exList.shuffle();
exList.print();
One might define the ListExt class as such:
public interface ListExt static extends List{
...
@ExtensionMethod
static <T extends Comparable<? super T>> void sort(List<T> list) {
java.util.Collections.sort(list);
}
...
}
The "static extends" would mean I have extension methods to add to this interface. Then anyone could implement there own extensions, to anything and apply them to objects who's creation they do not control . Also its very explicit about whats going on. What do you think?
Posted by: aberrant on November 29, 2007 at 10:23 AM
Nicely put why I hate this extension proposal as well. You can't even trust reading the code if you know the API anymore. Don't even think about understand code if there will be multiple extensions defining size()
I think it was Google how submitted this proposal as a feature for Java 7, they seem to be pretty sure that it will be accepted as their collections API will profit from it to become eventually usable.
In most codebases you can do without it by using replacements like common-collections or simple write one yourself (like we did @work). I feel it is a useless additional layer of complexity to the language that only promotes viral ideas
Posted by: csar on November 29, 2007 at 03:18 PM
I agree with you. Creating the illusion of a new instance method that is really a static method is a very bad idea in my opinion. It makes the code much harder to read.
Posted by: swpalmer on November 30, 2007 at 08:12 AM
A mistake was made in the code. It should be:
for(int i=0;i<text.size();i++) {
System.out.println(text.charAt(i));
}
Posted by: ossaert on December 03, 2007 at 05:05 AM
Re. use-site extensions, I am not totally convinced about the
extension mechanism as proposed - maybe I need to know more about it.
My particular concerns are:
They look like they do dynamic dispatch, but they don't.
They have a limited use case - statically imported static methods
Perhaps -> could be used and that this notation is for the first
argument of *any* method regardless of how its name is made available
and this first argument *includes* the hidden this of instance
methods. Also you can pass this argument to multiple methods in a
block, like a with clause (introducing with as a keyword would also be
an option).. Similar to the builder
proposal; the value of the second statement above is
list1.addAll( list3 ). Different than the builder proposal; the
intermediate values, e.g. list1.addAll( list2 ), are always discarded.
Stephen Coleborne has suggested something similar but proposed .do.
instead of -> and in Stephen's proposal there is no concept of passing
to a block of statements and his proposal only works for statically
imported static methods.
Posted by: hlovatt on December 05, 2007 at 06:37 PM
RE Declaration-site extensions (but mention of use-site also)
I think your concerns can re. multiple inheritance can be addressed
using Traits. Traits are interfaces with method bodies, no
constructors, and no fields. If a conflict arises due to multiple
inheritance you must resolve the conflict.
Consider:
interface X { int foo() { return 1; } }
interface Y { int foo() { return 2; } }
class XY implements X, Y { // Conflict - two foos
public int foo() { return X.foo(); } // Conflict resolved by writing a foo - in this case it call X.foo
}
class XX implements X {} // No conflict - therefore no need to write a foo
Class loader, not compiler, adds:
public int foo() { return X.foo(); }
To class XX.
Method foo must be added to XX by the class loader so that new methods
can be added to an interface without breaking old code (provided that
they have implementations).
Posted by: hlovatt on December 05, 2007 at 06:46 PM
I have blogged about an alternative, with clauses:
Typical usages are:
list -> synchronizedList() -> sort();
list -> { add( 1, "A" ); add( 2, "B" ); };
Home home = new Builder() -> { setWindows( windows ); setDoors( doors ); makeHome(); };
Posted by: hlovatt on December 13, 2007 at 02:29 PM | http://weblogs.java.net/blog/forax/archive/2007/11/java_7_extensio.html | crawl-002 | refinedweb | 1,699 | 62.78 |
Hi all,
Ok i've started doing a mini project about a CD collection database.
i got the program working ata stage where you can enter a single entry and the record is printed to the screen, the problem im having is i can not seem to modify the program to store an array of CD 'objects' . Each entry in the array needs to be a single object describing a CD.
I also need to add a menu of options like:
1: Input New Entry 2: Print All Entries 3: Quit
if anyone can help i would really apreciate the help.
below is my program of where i have got to so far::
If you copy and paste this you will see it works.
import javax.swing.*; public class CdStorage { public static void main (String[] args) { int menu_choice; CdRecord one = new CdRecord(); one.artist_name = JOptionPane.showInputDialog("Enter artist name."); one.album_name = JOptionPane.showInputDialog("Enter album name."); one.no_of_tracks =Integer.parseInt(JOptionPane.showInputDialog("Enter the number of tracks on the album")); one.printCdRecord(); }//end main } // end class CdStorage class CdRecord { public String artist_name; public String album_name; public int no_of_tracks; public CdRecord (String artist, String album, int tracks, int year) { artist_name = artist; //stores first argument passed into artist_name album_name = album; //stores second argument passed into album_name no_of_tracks = tracks; //stores third argument passed into no_of_tracks } public CdRecord() { artist_name = "A"; album_name = "B"; no_of_tracks = 0; } public void printCdRecord () { String o = "Artist Name: " + artist_name + "\nAlbum Name: " +album_name+"\nNo. Of Tracks: " + no_of_tracks;; System.out.println(o); } }//end class cdstorage | https://www.daniweb.com/programming/software-development/threads/20680/need-big-help | CC-MAIN-2017-43 | refinedweb | 252 | 51.68 |
Ensuring that resources are used correctly between threads is easy in Java. Usually, it just takes the use of the synchronized keyword before a method. Because Java makes it seem so easy and painless to coordinate thread access to resources, the synchronized keyword tends to get used liberally. Up to and including Java 1.1, this was the approach taken even by Sun. You can still see in the earlier defined classes (e.g., java.util.Vector) that all methods that update instance variables are synchronized. From JDK 1.2, the engineers at Sun became more aware of performance and are now careful to avoid synchronizing willy-nilly. Instead, many classes are built unsynchronized but are provided with synchronized wrappers (see the later section Section 10.4.1).
Synchronizing methods liberally may seem like good safe programming, but it is a sure recipe for reducing performance at best, and creating deadlocks at worst. The following Deadlock class illustrates the simplest form of a race condition leading to deadlock. Here, the class Deadlock is Runnable. The run( ) method just has a short half-second delay and then calls hello( ) on another Deadlock object. The problem comes from the combination of the following three factors:
Both run( ) and hello( ) are synchronized
There is more than one thread
The sequence of execution does not guarantee that monitors are locked and unlocked in correct order
The main( ) method accepts one optional parameter to set the delay in milliseconds between starting the two threads. With a parameter of 1000 (one second), there should be no deadlock. Table 10-1 summarizes what happens when the program runs without deadlock.
With a parameter of 0 (no delay between starting threads), there should be deadlock on all but the most heavily loaded systems. The calling sequence is shown in Table 10-2; Figure 10-2 summarizes the difference between the two cases. The critical difference between the deadlocked and nondeadlocked cases is whether d1Thread can acquire a lock on the d2 monitor before d2Thread manages to acquire a lock on d2 monitor.
A heavily loaded system can delay the startup of d2Thread enough that the behavior executes in the same way as the first sequence. This illustrates an important issue when dealing with threads: different system loads can expose problems in the application and also generate different performance profiles. The situation is typically the reverse of this example, with a race condition not showing deadlocks on lightly loaded systems, while a heavily loaded system alters the application behavior sufficiently to change thread interaction and cause deadlock. Bugs like this are extremely difficult to track down.
The Deadlock class is defined as follows:
package tuning.threads; public class Deadlock implements Runnable { String me; Deadlock other; public synchronized void hello( ) { //print out hello from this thread then sleep one second. System.out.println(me + " says hello"); try {Thread.sleep(1000);} catch (InterruptedException e) { } } public void init(String name, Deadlock friend) { //We have a name, and a reference to the other Deadlock object //so that we can call each other me = name; other = friend; } public static void main(String args[ ]) { //wait as long as the argument suggests (or use 20 ms as default) int wait = args.length = = 0 ? 20 : Integer.parseInt(args[0]); Deadlock d1 = new Deadlock( ); Deadlock d2 = new Deadlock( ); //make sure the Deadlock objects know each other d1.init("d1", d2); d2.init("d2", d1); Thread d1Thread = new Thread(d1); Thread d2Thread = new Thread(d2); //Start the first thread, then wait as long as //instructed before starting the other d1Thread.start( ); try {Thread.sleep(wait);} catch (InterruptedException e) { } d2Thread.start( ); } public synchronized void run( ) { //We say we're starting, then sleep half a second. System.out.println("Starting thread " + me); try {Thread.sleep(500);} catch (InterruptedException e) { } //Then we say we're calling the other guy's hello( ), and do so System.out.println("Calling hello from " + me + " to " + other.me); other.hello( ); System.out.println("Ending thread " + me); } } | http://etutorials.org/Programming/Java+performance+tuning/Chapter+10.+Threading/10.3+Deadlocks/ | CC-MAIN-2018-09 | refinedweb | 660 | 55.13 |
Metrics¶
New in buildbot 0.8.4 is support for tracking various performance metrics inside the buildbot master process. Currently these are logged periodically according to the log_interval configuration setting of the @ref{Metrics Options} configuration.
If WebStatus()._slaves looks ok MetricAlarmEvent.log('num_slaves', level=ALARM_OK)
Metric Handlers¶
MetricsHandler objects are responsble for collecting MetricEvents of a specific type and keeping track of their values for future reporting. There are MetricsHandler classes corresponding to each of the MetricEvent types.
Metric Watchers¶
Watcher objects can be added to MetricsHandlers to be called when metric events of a certain type are received. Watchers are generally used to record alarm events in response to count or time events.
Metric Helpers¶
- countMethod(name)
A function decorator that counts how many times the function is called.
from buildbot.process.metrics import countMethod @countMethod('foo_called') def foo(): return "foo!"
- Timer(name)
Timer objects can be used to make timing events easier. When Timer.stop() is called, a MetricTimeEvent is logged with the elapsed time since timer.start() was called.
from buildbot.process.metrics import Timer def foo(): t = Timer('time_foo') t.start() try: for i in range(1000): calc(i) return "foo!" finally: t.stop()
Timer objects also provide a pair of decorators, startTimer/stopTimer to is better in this case.
from buildbot.process.metrics import timeMethod @timeMethod('time_foo') def foo(): for i in range(1000): calc(i) return "foo!" | http://docs.buildbot.net/0.8.6/developer/metrics.html | CC-MAIN-2014-35 | refinedweb | 234 | 50.63 |
CS 305j - Programming Assignment 2
Problem Decomposition
This is an individual assignment - you cannot work with anyone else. All work must be done on your own. Ask the instructional staff for help.
Goals:
Learn to remove redundancy from your programs through the use of static methods.
Practice using the programming conventions that we have discussed in class.
Write a Java program that produces as output the words of "The Twelve Days of Christmas", as seen
here
. Your program, when executed, must produce all verses (not just the three you see below), with a single blank line between one verse and the next.
On the first day of Christmas,
My true love sent to me
A partridge in a pear tree.
On the second day of Christmas,
my true love sent to me
Two turtle-doves and
A partridge in a pear tree.
...
On the twelfth day of Christmas,
My true love sent to me
Twelve drummers drummings,
Eleven pipers piping,
Ten lords a-leaping,
Nine ladies dancing,
Eight maids a-milking,
Seven swans a-swimming,
Six geese a-laying,
Five golden rings.
Four calling birds,
Three French hens,
Two turtle-doves and
A partridge in a pear tree.
You must
exactly
reproduce the format of this output, and the wording from the indicated webpage.
You must use static methods to avoid simple redundancy. You must use only one println statement for each distinct line of the song. For example, the line:
A partridge in a pear tree.
appears many times in the output. You may only have one println statement in your program that displays this line. You may have static methods in your program that only contain a single println.
You must use a separate static method for each verse of the song.
You are also required to use static methods to reduce structural redundancy. For example, verse six and verse seven are very similar/redundant. If you have code segments - several lines of code - that appear multiple times in your program, these repeated segments should be eliminated by structuring your static methods differently.
You are not allowed to use any more advanced Java features for this assignment. Only Java features that we have covered in chapter 1 of the textbook are allowed in project 2.
The header for your file (for this project and subsequent projects) should look like this, with the required information filled in.
/**
* author: <Your Name Here>
* date: <Submission Date>
* CS 305j Assignment 2
* On my honor, <Your Name>, this programming assignment is my own work.
*
* EID: <Your EID>
* Section: <Unique Number>, <Tuesday discussion time>
*
* <Brief Description - what does the program do?>
*
* Slip Days I am using on this project: <Your Slip Days>
* Slip Days I have used this semester: <Your Total, including for this project>
*/
public class Song {
<Your program code here>
}
Submit your program in a file called Song.java via the turnin program by the due date at 11 pm.
Did you remember to:
review the assignment requirements?
work on this assignment individually?
check for compile errors and runtime errors in your program?
turn in your Java source code, your file Song.java to your cs 305j folder via the turnin program?
submit your program by the due date at 11 pm? | http://www.cs.utexas.edu/~eberlein/cs305j/project2.html | CC-MAIN-2014-52 | refinedweb | 540 | 72.97 |
Hi,
Could anyone tell me difference between Assembly and Namespace in .NET
Assembly is a piece of code (IL) that can be executed by .NET Framework.
Namespaces are designed to solve naming problems. You can have, for example, two different classes with the same name. Placeing them in different namespaces can solve the problem.
hi Mr. Kaushik.
Hope this helps!!!!!
Cheers.......
Last edited by Pawan.Singh; September 15th, 2006 at 12:08 PM.
I want to try too
Assembly: A managed dll or exe (built by .NET)
Namespace: A group of managed types (classes, enums, structs, etc).
One assembly can contain several namespaces.
One namespace can contain types from different assemblies.
The name of the namespace is not necessarily the name of the assembly.
I will also try :d
Namespace
.NET Assemblies
Forum Rules | http://forums.codeguru.com/showthread.php?306705-Difference-between-Assembly-and-Namespace | CC-MAIN-2015-48 | refinedweb | 134 | 71 |
From: David B. Held (dheld_at_[hidden])
Date: 2004-06-18 11:50:32
"Gennadiy Rozental" <gennadiy.rozental_at_[hidden]> wrote in message
news:catrpi$dqu$1_at_sea.gmane.org...
> > /usr/local/boost/libs/test/src/unit_test_log.cpp:383: undefined
reference
> to
> >
>
`boost::unit_test::ut_detail::xml_log_formatter::xml_log_formatter[in-charge
> > ](boost::unit_test::unit_test_log const&)'
> > collect2: ld returned 1 exit status
> [...]
> Try clean rebuild. It maybe caused by new detail namespace name,
> which I did change recently.
Actually, I had to add supplied_log_formatters.cpp to my build, which I
never had to do before. I also don't see it listed as one of the dependent
files here:
I also don't see it in the CVS docs. You might want to mention that
somewhere.
Dave
--- Outgoing mail is certified Virus Free. Checked by AVG anti-virus system (). Version: 6.0.701 / Virus Database: 458 - Release Date: 6/7/2004
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/06/66798.php | CC-MAIN-2021-39 | refinedweb | 167 | 55.1 |
It helps get into the natural "red, green, refactor" rhythm.
I'm currently totally immersed in Django, and greatly miss the lack of colour
support within the "test" management command. A simple workaround for this is
to use Fabric with a few modified color commands. Your fabric file should
include the following:
from fabric.colors import _wrap_with green_bg = _wrap_with('42') red_bg = _wrap_with('41') # Set the list of apps to test env.test_apps = "app1 app2" def test(): with settings(warn_only=True): result = local('./manage.py test %(test_apps)s --settings=settings_test -v 2 --failfast' % env, capture=False) if result.failed: print red_bg("Some tests failed") else: print print green_bg("All tests passed - have a banana!")
You can choose your own success and failure messages.
Now we have lovely colors while doing TDD in Django:
Source:
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/all-tests-passed-have-banana?mz=55985-python | CC-MAIN-2016-07 | refinedweb | 145 | 58.99 |
Contents
OverviewEdit
AROS uses several custom development tools in its build-system to aid developers by providing an easy means to generate custom makefiles for amigaos like components.
The most important ones are:
- MetaMake: A make supervisor program. It can keep track of targets available in makefiles available in subdirectories a certain root directory. A more in depth explanation is given below.
- GenMF: (generate makefile) A macro language for makefiles. It allows to combine several make rules into one macro which can simplify writing makefiles.
- Several AROS specific tools that will be explained more when appropriate during the rest of this documentation.
MetaMakeEdit
IntroductionEdit
MetaMake is a special version of make which allows the build-system to recursively build "targets" in the various directories of a project, or even another project.
The name of the makefile's used is defined in the MetaMake config file and defaults to mmakefile for AROS - so we shall use this name to donate MetaMake Makefiles from here on in.
MetaMake searches directory tree's for mmakefiles - and, for each it finds, process's the metatargets.
You can also specify a program which converts "source" mmakefiles (aptly named mmakefile.src) into proper mmakefile's before MetaMake will be invoked on the created mmakefile.
MetaTargetsEdit
MetaMake uses normal makefile syntax but gives a special meaning to a comment line that start with #MM. This line is used to define so called metatargets.
There exist three ways of defining a metatarget in a makefile:
Real MetaTargetsEdit
#MM metatarget : metaprerequisites This defines a metatarget with its metaprerquisites: When a user asks to build this metatarget, first the metaprerequisites will be build as metatargets, and afterwards the given metatarget. This form also indicates that in this makefile also a makefile target is present with the same name. #MM metatarget : prerequisites This form indicates that the make target on the next line is also a metatarget but the prerequisites are not metaprerequisites: The line for the definition of a metatarget can be spread over several lines if one ends every line with the character and starts the next line with #MM.
Virtual MetaTargetsEdit
#MM- metatarget : metaprerequisites This is the same definition as for Real MetaTarget's - only now no "normal" make target is present in the makefile with the same name as the metatarget:
How MetaMake worksEdit
MetaMake is run with a metatarget to be built specified on the command line.
MetaMake will first build up a tree of all the mmakefiles present in a directory and all subdirectories (typically from the aros source base directory) - and autogenerate them where applicable. While doing this it will process the mmakefiles and build a tree of all the defined metatargets and their dependencies.
Next it will build all the dependencies (metaprerequisites) needed for the specified metatarget - and finally the metatarget itself.
metaprerequisite are metatarget's in their own rite - and are processed in the same fashion so that dependancies they have are also fulfilled.
For each metatarget, a walk through of all the directories is done - and in every mmakefile where Real MetaTarget's are defined, make is called with the name of the target as a "make target".
Exported variablesEdit
When MetaMake calls normal make, it also defines two variables...
$(TOP) contains the value of the rootdirectory. $(CURDIR) contains the path relative to $(TOP).
Autogenerating mmakefile'sEdit
Another feature of MetaMake is automatic generation of mmakefile's from a source mmakefile's.
When the directory tree is scanned for mmakefiles, ones with a .src suffix that are newer then any present mmakefile are processed using a specified script that regenerate's the mmakefile from the source mmakefile. The called script is defined in the configuration file.
ExamplesEdit
The next few examples are taken from the AROS project.
Example 1: normal dependenciesEdit
#MM contrib-regina-module : setup linklibs includes contrib-regina-includes
This example says that in this makefile a contrib-regina-module is present that has to be build but the before building this metatarget first the metatargets setup, linklibs, ... has to be build; e.g. that the includes linklibs etc. have to be present before that this module can be build.
Example 2: metatarget consisting of submetatargetsEdit
#MM- contrib-freetype : contrib-freetype-linklib \ #MM contrib-freetype-graph \ #MM contrib-freetype-fonts \ #MM contrib-freetype-demos
Here actually is said that the contrib-freetype metatarget consists of building linklib, graph, fonts and demos of freetype. If some extra work needs to be done in the makefile where this metatarget the definition can start with '#MM ' and a normal make target 'contrib-freetype' has to be present in the makefile.
Also the use of the line continuation for the metatarget definition is shown here.
Example 3: Quick building of a targetEdit
#MM workbench-utilities : includes linklibs setup-clock-catalogs #MM workbench-utilities-quick : workbench-utilities
When a user executes MetaMake with workbench-utilities as an argument, make will be called in all the directories where the metaprerequisites are present in the makefile. This can become quite annoying when debugging programs. When now the second metatarget workbench-utilities-quick is defined as shown above only that target will be build in this directory. Of course the user has then to be sure that the metatargets on which workbench-utilities depend are up-to-date.
Usage and configuration filesEdit
Usage:
mmake [options] [metatargets]
To build mmake, just compile
mmake.c. It doesn't need any other files.
mmake looks for a config file
mmake.config or .mmake.config in the current directory for a file in the environment variable
$MMAKE_CONFIG or a file
.mmake.config in the directory
This file can contain the following things:
- #
- This must be the first character in a line and begins a comment.
- Comments are completely ignored my mmake (as are empty lines).
- text="[<name>]"
- This begins a config section for the project name. You can build
- targets for this project by saying name.target.
- maketool <tool options...>
- Specifies the name of the tool to build a target. The default is
- make "TOP=$(TOP)" "CURDIR=$(CURDIR)".
- top <dir>
- Specifies the root directory for a project. You will later find
- this config option in the variable $(TOP). The default is the
- current directory.
- defaultmakefilename <filename>
- Specifies the basename for makefiles in your project. Basename means
- that mmake will consider other files which have this stem and an
- extension, too. See the items to generate makefiles for details.
- The default is Makefile.
- defaulttarget <target>
- The name of the default target which mmake will try to make if you
- call it with the name of the project alone. The default is all.
- genmakefilescript <cmdline...>
- mmake will check for files with the basename as specified in
- defaultmakefilename with the extension .src. If such a file is found,
- the following conditions are checked: Whether this file is newer than
- the makefile, whether the makefile doesn't exist and whether the file
- genmakefiledeps is newer than the makefile. If any of these is true,
- mmake will call this script the name of the source file as an extra
- option and the stdout of this script will be redirected to
- defaultmakefilename. If this is missing, mmake will not try to
- regenerate makefiles.
- genmakefiledeps <path>
- This is the name of a file which is considered when mmake tries to
- decide whether a makefile must be regenerated. Currently, only one
- such file can be specified.
- globalvarfile <path>
- This is a file which contains more variables in the normal make(1)
- syntax. mmake doesn't know about any special things like line
- continuation, so be careful not to use such variables later (but
- they don't do any harm if they exist in the file. You should just
- not use them anywhere in mmake).
- add <path>
- Adds a nonstandard makefile to the list of makefiles for this
- project. mmake will apply the standard rules to it as if the
- defaultmakefilename was like this filename.
- ignoredir <path>
- Will tell mmake to ignore directories with this name. Try ignore
- CVS if you use CVS to manage your projects' sources.
- Any option which is not recognised will be added to the list of known variables (ie. foo bar will create a variable $(foo) which is expanded to bar).
Example
Here is an example:
# This is a comment # Options before the first [name] are defaults. Use them for global # defaults defaultoption value # Special options for the project name. You can build targets for this # project with "mmake name.target" [AROS] # The root dir of the project. This can be accessed as $(TOP) in every # makefile or when you have to specify a path in mmake. The default is # the current directory top /home/digulla/AROS # This is the default name for Makefiles. The default is "Makefile" defaultmakefilename makefile # If you just say "mmake AROS", then mmake will go for this target defaulttarget AROS # mmake allows to generate makefiles with a script. The makefile # will be regenerated if it doesn't exist, if the source file is # newer or if the file specified with genmakefiledeps is newer. # The name of the source file is generated by concatenating # defaultmakefilename and ".src" genmakefilescript gawk -f $(TOP)/scripts/genmf.gawk --assign "TOP=$(TOP)" # If this file is newer than the makefile, the script # genmakefilescript will be executed. genmakefiledeps $(TOP)/scripts/genmf.gawk # mmake will read this file and every variable in this file will # be available everywhere where you can use a variable. globalvarfile $(TOP)/config/host.cfg # Some makefiles must have a different name than # defaultmakefilename. You can add them manually here. #add compiler/include/makefile #add makefile
A metatarget look like so: project.target. Example: AROS.setup. If nothing is specified, mmake will make the default target of the first project in the config file. If the project is specified but no target, mmake will make the default target of this project.
GenMFEdit
IntroductionEdit
Genmf uses two files for generating a makefile. First is the macro definition file and finally the source makefile where these macro's can be used.
* This syntax example assumes you have AROS' sources (either from SVN or downloaded from the homesite). Assuming 'genmf.py' is found in your $PATH and that $AROSDIR points to location of AROS' sources root (e.g. /home/projects/AROS or alike). [user@localhost]# genmf.py $AROSDIR/config/make.tmpl mmakefile.src mmakefile This creates a mmakefile from the mmakefile.src in the current directory.
In general the % character is used as the special character for genmf source makefiles.
After ./configure i run the make command and that halts with an error from within the genmf.py script that is cannot find some file. the files that are fed to the genmf.py script seem to be lines in the /tmp/genmfxxxx file. the problem is that the lines are not created right. so when the lines are fed to the genmf.py script it cannot handle it.
Metamake creates tmpfiles:
./cache.c: strcpy(tmpname, "/tmp/genmfXXXXXX");
Metamake actually calls genmf.py to generate the genmf file. It is located in bin/$(arch)-$(cpu)/tools
MetaMake uses time stamps to find out if a mmakefile has changed and needs to be reparsed. For mmakefiles with dynamic targets we would have to avoid that time stamp comparison.
This is I think only the case if the metarules would change depending on an external config file without that the mmakefile itself changes.
But this reminds me another feature I had in mind for mmake. I would make it possible to have real files as prerequisites of metatargets. This is to avoid that make is called unnecessary in directories. I would introduce a special character to indicate if a metatarget depends on a file, let's take @ and have the following rule
__MM__ :: echo bar : @foo
This would indicate that for this mmakefile metatarget 'bar' only has to be build if file foo changes. So if mmake wants to build metatarget 'bar' if would only call make if file foo in the same directory as the mmakefile has changed.
This feature would also be able to indicate if the metarules have to be rebuild, I would allocate the special __MM__ metatarget for it. By default always the implicit metarule would be there:
__MM__ :: echo __MM__ : @mmakefile
But people could add config files is needed:
__MM__ :: echo __MM__ : @mmconffile
Does MetaMake really do variable substitution? Yes, have a look in the var.c file.
The generated mmakefile for Demos/Galaxy still has #MM- demo-galaxy : demo-galaxy-$(AROS_TARGET_CPU) and I think the substitution is done later by Gnu/Make.
No, for gmake it is just a comment line; it does not know anything about mmake. And it also the opposite case; mmake does not know anything about gmake it just all the lines starting with #MM. So the next thing does not what you think it does in a gmake file:
ifeq ($(target), ) #MM includes : includes-here else #MM $(target) : includes-here endif
mmake will see both lines as just ignores the if statement ! It will complain if it does not know target. That is one of the main reasons I proposed the above feature.
The main feature of mmake is that is allows for modular directory structure you can add or delete directories in the build tree and metamake will automatically update the metarules and the build itself to the new situation. For example it would allow to checkout only a few subdirectories of the ports directory if one wants to work on one of the programs there.
Macro definitionEdit
A macro definition has the following syntax:
%define macroname option1[=[default][\A][\M]] option2[=[default][\A][\M]] ... ... %end
macroname is the name of the macro. option1, option2, ... are the arguments for the macro. These options can be used in the body of this template by typing %(option1). This will be replaced be the value of option1.
The macro can be followed by a default value. If no default value is specified an empty string is taken. Normally no space are allowed in the default value of an argument. If this is needed this can be done by surrounding the value with double quotes (").
Also two switches can be given:
\A Is the switch to always need a value for this. When the macro is instantiated always a value need to be assigned to this argument. \M Is the switch to turn on multi words. This means that all the words following this argument will be assigned to this argument. This also means that after the use of such an argument no other argument can be present because it will become part of this argument.
Macro instantiationEdit
The instantiation of the macro is done by using the '%' character followed by the name of the macro to instantiate (without a round brackets around it):
%macro_name [option1=]value [option2=]value
Two ways are possible to specify value for arguments to a macro:
value This will assign the value to the argument defined as the first argument to this macro. The time this format is used it will be assigned to the second argument and so on. option1=value This will assign the given value to the option with the specified name.
When giving values to arguments also double quotes need to be used if one wants to include spaces in the values of the arguments.
Macro instantiation may be used inside the body of a macro, even macro's that will only be defined later on in the macro definition file. Examples
FIXME (whole rules to be shown as well as action to be used in make rules)
AROS Build-System usageEdit
AROS Build-System configurationEdit
Before the build-system can be invoked via make - you will need to run "./configure" to set up the environment for your chosen target platform
i.e.
./configure—target=pc-i386
This causes the configure script to perform the following operations ...
AROS MetaMake configuration fileEdit
[add the default settings for mmake]
Default AROS MetaMake MetaTargetsEdit
AROS uses a set of base metatargets to perform all the steps needed to build the tools and components not only used to compile aros but also that make up aros itself
AROS Build MetaMake MetaTargetsEdit
AROS.AROS AROS.contrib AROS.development AROS.bootiso
[list standard metatargets used during the build process]
Special AROS MetaMake MetaTargetsEdit
************ denotes a Real MetaTarget ************-setup ************-includes
Default AROS mmakefile VariablesEdit
The following variables are defined for use in mmakefile's.
//System related variables $(ARCH) $(AROS_HOST_ARCH) $(AROS_HOST_CPU) $(AROS_TARGET_ARCH) $(AROS_TARGET_CPU) $(AROS_TARGET_SUFFIX) / $(AROS_TARGET_VARIANT)
//Arch specific variables $(AROS_TARGET_BOOTLOADER)
//Directory related variables $(TOP) $(CURDIR) $(HOSTDIR) $(TOOLDIR) $(PORTSDIR) $(TARGETDIR) $(GENDIR) $(OBJDIR) $(BINDIR) $(EXEDIR) $(LIBDIR) $(OSGENDIR) $(KOBJSDIR) $(AROSDIR) $(AROS_C) $(AROS_CLASSES) $(AROS_DATATYPES) $(AROS_GADGETS) $(AROS_DEVS) $(AROS_FS) $(AROS_RESOURCES) $(AROS_DRIVERS) $(AROS_LIBS) $(AROS_LOCALE) $(AROS_CATALOGS) $(AROS_HELP) $(AROS_PREFS) $(AROS_ENVARC) $(AROS_S) $(AROS_SYSTEM) $(AROS_TOOLS) $(AROS_UTILITIES) $(CONTRIBDIR)
AROS mmakefile.src High-Level MacrosEdit
Note : In the definition of the genmf rules sometimes mmake variables are used as default variables for an argument (e.g. dflags=%(cflags)). This is not really possible in the definition file but is done by using text that has the same effect.
Building programs
There are two macro's for building programs. One macro %build_progs that will compile every input file to a separate executable and one macro %build_prog that will compile and link all the input files into one executable.
%build_progsEdit
This macro will compile and link every input file to a separate executable and has the following definition:
%define build_progs mmake=/A files=/A \ objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \ cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \ uselibs= usehostlibs= usestartup=yes detach=no
With the following arguments:
- mmake=/A
- This is the name of the metatarget that will build the programs.
- files=/A
- The basenames of the C source files that will be compiled and
- linked to executables. For every name present in this list an
- executable with the same name will be generated.
-.
- ldflags=$(LDFLAGS)
- The flags to use when linking the executables. By default the
- standard AROS link flags will be used.
- uselibs=
- A list of static libraries to add when linking the executables.
- This is the name of the library without the lib prefix or the .a
- suffix and without the -l prefix for the use in the flags
- for the C compiler.
- By default no libraries are used when linking the executables.
- usehostlibs=
- A list of static libraries of the host to add when linking the
- executables. This is the name of the library without the lib prefix
- or the .a suffix and without the -l prefix for the use in the flags
- for the C compiler.
- By default no libraries are used when linking the executables.
- usestartup=yes
- Use the standard startup code for the executables. By default this
- is yes and this is also what one wants most of the time. Only disable
- this if you know what you are doing.
- detach=no
- Wether the executables will run detached. Defaults to no.
%build_progEdit
seems that the %build_prog macros is currently alway producing stripped binaries, even in debug build. To workaround this problem, I need to define TARGET_STRIP in the following way: TARGET_STRIP := $(STRIP) %build_prog mmake="egltest" progname="egltest" files="$(EGL_SOURCES) peglgears" uselibs="GL galliumauxiliary" Can someone with enough knowledge please fix the macro so that it produces unstripped binaries for debug builds again
This macro will compile and link the input files to an executable and has the following definition:
%define build_prog mmake=/A progname=/A files=%(progname) asmfiles= \ objdir=$(GENDIR)/$(CURDIR) targetdir=$(AROSDIR)/$(CURDIR) \ cflags=$(CFLAGS) dflags=$(BD_CFLAGS$(BDID)) ldflags=$(LDFLAGS) \ aflags=$(AFLAFS) uselibs= usehostlibs= usestartup=yes detach=no
With the following arguments:
mmake=/A This is the name of the metatarget that will build the program. progname=/A The name of the executable. files= The basenames of the C source files that will be compiled and linked into the executable. By default just the name of the executable is taken. asmfiles= The assembler files to assemble and include in the executable. By default no asm files are included in the executable. variable). ldflags=$(LDFLAGS) The flags to use when linking the executable. By default the standard AROS link flags will be used. uselibs= A list of static libraries to add when linking the executable. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the executable. usehostlibs= A list of static libraries of the host to add when linking the executable. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the executable. usestartup=yes Use the standard startup code for the executables. By default this is yes and this is also what one wants most of the time. Only disable this if you know what you are doing. detach=no Wether the executable will run detached. Defaults to no.
%build_linklibEdit
Building static linklibraries
Building link libraries is straight forward. A list of files will be compiled or assembled and collected in a link library into a specified target directory.
The definition of the macro is as follows:
%define build_linklib mmake=/A libname=/A files="$(basename $(wildcard *.c)) \ asmfiles= cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) \ objdir=$(OBJDIR) libdir=$(LIBDIR)
With the meaning of the arguments as follows:
mmake=/A This is the name of the metatarget that will build the linklib. libname=/A The base name of the library to generate. The file that will be generated will be called lib%(libname).a files=$(basename $(wildcard *.c)) The C files to compile and include in the library. By default all the files ending in .c in the source directory will be used. asmfiles= The assembler files to assemble and include in the library. By default no asm files are included in the library.. objdir=$(OBJDIR) The directory where to generate all the intermediate files. The default value is $(OBJDIR) which in itself is by default equal to $(GENDIR)/$(CURDIR). libdir=$(LIBDIR) The directory to put the library in. By default the standard lib directory $(LIBDIR) will be used.
%build_moduleEdit
Building modules consists of two parts. First is a macro to use in mmakefile.src files. Another is a configuration file that describes the contents of the module.
The mmakefile.src macroEdit
This is the definition header of the build_module macro:
%define build_module mmake=/A modname=/A modtype=/A \ conffile=%(modname).conf files="$(basename $(wildcard *.c))" \ cflags=$(CFLAGS) dflags=%(cflags) objdir=$(OBJDIR) \ linklibname=%(modname) uselibs=
Here is a list of the arguments for this macro:
mmake=/A This is the name of the metatarget that will build the module. Also a %(mmake)-quick and %(mmake)-clean metatarget will be defined. modname=/A This is the name of the module without the suffix. modtype=/A This is the type of the module and corresponds with the suffix of the module. At the moment only library, mcc, mui and mcp are supported. Support for other modules is planned in the future. conffile=%(modname).conf The name of the configuration file. Default is modname.conf. files="$(basename $(wildcard *.c))" A list of all the C source files without the .c suffix that contain the code for this module. By default all the .c files in the current directory will be taken. functions.
The module configuration fileEdit
The module configuration file is subdived in several sections. A section is defined with the following lines:
## begin sectionname ... ## end sectionname
The interpretation of the lines between the ##begin and ##end statement is different for every section. The following sections are defined:
* config The lines in this section have all the same format: optionname string with the string starting from the first non white space after optionname to the last non white space character on that line. A list of all the options available: basename Followed by the base name for this module. This will be used as a prefix for a lot of symbols. By default the modname specified in the makefile is taken with the first letter capitalized. libbase The name of the variable to the library base in. By default the basename will be taken with Base added to the end. libbasetype The type to use for the libbase for use internally for the library code. E.g. the sizeof operator applied to this type has to yield the real size of the object. Be aware that it may not be specified as a pointer. By default 'struct LibHeader' is taken. libbasetypeextern The type to use for the libbase for code using the library externally. By default 'struct Library' is taken. version The version to compile into the module. This has to be specified as major.minor. By default 0.0 will be used. date The date that this library was made. This has to have the format of DD.MM.YYYY. As a default 00.00.0000 is taken. libcall The argument passing mechanism used for the functions in this module. It can be either 'stack' or 'register'. By default 'stack' will be used. forcebase This will force the use of a certain base variable in the static link library for auto opening the module. Thus it is only valid for module that support auto opening. This option can be present more than once in the config section and then all these base will be in the link library. By default no base variable will be present in the link library. * cdef In this section all the C code has to be written that will declare all the type of the arguments of the function listed in the function. All valid C code is possible including the use of #include. * functionlist In this section all the functions externally accessible by programs. For stack based argument passing only a list of the functions has to be given. For register based argument passing the names of the register have to be given between rounded brackets. If you have function foo with the first argument in D0 and the second argument in A0 it gives the following line in in the list: foo(D0,A0)
%build_module_macroEdit
Building modules (the legacy way)
Before the %build_module macro was developed already a lot of code was written. There a mixture of macro's was usedin the mmakefile and they were quite complicated. To clean up these mmakefiles without needing to rewrite too much of the code itself a second genmf macro was created to build modules that were written using the older methodology. This macro is called build_module_macro. For writing new modules people should consider this macro as deprecated and only use this macro when the %build_module doesn't support the module yet they want to create.
The mmakefile.src macroEdit
This is the definition header of the build_module_macro macro:
%define build_module_macro mmake=/A modname=/A modtype=/A \ conffile=%(modname).conf initfile=%(modname)_init \ funcs= files= linklibfiles= cflags=$(CFLAGS) dflags=%(cflags) \ objdir=$(OBJDIR) linklibname=%(modname) uselibs= usehostlibs= \ genfunctable= genincludes= compiler=target
Here is a list of the arguments for this macro:
mmake=/A This is the name of the metatarget that will build the module. It will define that metatarget but won't include any metaprerequisites. If you need these you can add by yourself with an extra #MM metatargets : ... line. Also a %(mmake)-quick and %(mmake)-clean metatarget will be defined. modname=/A This is the name of the module without the suffix. modtype=/A This is the type of the module and corresponds with the suffix of the module. It can be one of the following : library gadget datatype handler device resource mui mcc hidd. conffile=%(modname).conf The name of the configuration file. Default is modname.conf. funcs= A list of all the source files with the .c suffix that contain the code for the function of a module. Only one function per C file is allowed and the function has to be defined using the AROS_LHA macro's. files= A list of all the extra files with the .c suffix that contain the extra code for this module. initfile=%(modname)_init The file with the init code function. usehostlibs= A list of static libraries of the host to add when linking the module. This is the name of the library without the lib prefix or the .a suffix and without the -l prefix for the use in the flags for the C compiler. By default no libraries are used when linking the module. genfunctable= Bool that has to have a value of yes or no or left empty. This indicates if the functable needs to be generated. If empty the functable will only be generated when funcs is not empty. genincludes= Bool that has to have a value of yes or no or left empty. This indicates if the includes needs to be generated. If empty the includes will only be generated for a library, a gadget or a device. compiler=target Indicates which compiler to use during compilation. Can be either target or host to use the target compiler or the host compiler. By default the target compiler is used.
The module configuration fileEdit
For the build_module_macro two files are used. First is the module configuration file (modname.conf or lib.conf) and second is the headers.tmpl file.
The modules config file is file with a number of lines with the following syntax:
name <string> Init the various fields with reasonable defaults. If <string> is XXX, then this is the result: libname xxx basename Xxx libbase XxxBase libbasetype XxxBase libbasetypeptr XxxBase * Variables will only be changed if they have not yet been specified. libname <string> Set libname to <string>. This is the name of the library (i.e. you can open it with <string>.library). It will show up in the version string, too. basename <string> Set basename to <string>. The basename is used in the AROS-LHx macros in the location part (last parameter) and to specify defaults for libbase and libbasetype in case they have no value yet. If <string> is xXx, then libbase will become xXxBase and libbasetype will become xXxBase. libbase <string> Defines the name of the library base (i.e. SysBase, DOSBase, IconBase, etc.). If libbasetype is not set, then it is set to <string>, too. libbasetype <string> The type of libbase (with struct), i.e. struct ExecBase, struct DosLibrary, struct IconBase, etc.). libbasetypeptr <string> Type of a pointer to the libbase. (e.g. struct ExecBase *). version <version>.<revision> Specifies the version and revision of the library. 41.0103 means version 41 and revision 103. copyright <string> Copyright string. define <string> The define to use to protect the resulting file against double inclusion (i.e. #ifndef <string>...). The default is _LIBDEFS_H. type <string> What kind of library is this ? Valid values for <string> are: device, library, resource and hidd. option <string>... Specify an option. Valid values for <string> are: o noexpunge Once the lib/dev is loaded, it can't be removed from memory. Be careful with this option. o rom For ROM based libraries. Implies noexpunge and unique. o unique Generate unique names for all external symbols. o nolibheader We don't want to use the LibHeader prefixed functions in the function table. o hasrt This library has resource tracking. You can specify more than one option in a config file and more than one option per option line. Separate options by space.
The header.tmpl fileEdit
Contrary to the %build_module macro for %build_module_macro the C header information is not included in the configuration file but an additional files is used with the name headers.tmpl. This file has different section where each of the sections will be copied in a certain include file that is generated when the module is build. A section has a structure as follows:
##begin sectionname ... ##end sectionname
With sectionname one of the following choices:
* defines * clib * proto
%build_archspecificEdit
Compiling arch and/or CPU specific files
In the previous paragraph the method was explained how a module can be build with the AROS genmf macro's. Sometimes one wants to replace certain files in a module with an implementation only valid for a certain arch or a certain CPU. The macro definition
Arch specific files are handled by the macro called %build_archspecific and it has the following header:
%define build_archspecific mainmmake=/A maindir=/A arch=/A files= asmfiles= \ cflags=$(CFLAGS) dflags=%(cflags) aflags=$(AFLAGS) compiler=target
And the explanation of the argument to this macro:
mainmmake=/A The mmake of the module from which one wants to replace files or to wich to add additional files. maindir=/A The directory where the object files of the main module are stored. The is only the path relative to $(GENDIR). Most of the time this is the directory where the source files of the module are stored. arch=/A The architecture for which these files needs to be build. It can have three different forms ARCH-CPU, ARCH or CPU. For example when linux-i386 is specified these files will only be build for the linux port on i386. With ppc it will be build for all ppc processors and with linux it will be build for all linux ports. files= The basenames of the C source files to replace add to the module. asmfiles= The basenames of the asm source files to replace or add assembling the asm files. By default the standard AROS cflags (the $(AFLAGS) make variable) are taken. This also means that some flags can be added by assigning these to the SPECIAL_AFLAGS make variable before using this macro. compiler=target Indicates which compiler to use during compiling C source files. Can be either target or host to use the target compiler or the host compiler. By default the target compiler is used.
%rule_archaliasEdit
Code shared by different ports
A second macro called %rule_archalias allows to create a virtual architecture. And code for that virtual architecture is shared between several architectures. Most likely this is used for code that uses an API that is shared between several architecture but not all of them.
The macro has the following header:
- %define rule_archalias mainmmake=/A arch=/A alias=/A
With the following arguments
- mainmmake=/A
- The mmake of the module from which one wants to replace files, or
- which to add additional files to.
- arch=/A
- The arch one wants to make alias from.
- alias=/A
- The arch one wants to alias to.
Examples
1. This is an extract from the file config/linex/exec/mmakefile.src that replaces the main init.c file from exec with a linux specialized one:
%build_archspecific \ mainmmake=kernel-exec maindir=rom/exec arch=linux \ files=init compiler=host
2. For the dos.library some arch specific files are grouped together in the unix arch. The following lines are present in the several mmakefiles to make this possible
In config/linux/mmakefile.src:
%rule_archalias mainmmake=kernel-dos arch=linux alias=unix
In config/freebsd/mmakefile.src:
%rule_archalias mainmmake=kernel-dos arch=freebsd alias=unix
And finally in config/unix/dos/mmakefile.src:
%build_archspecific \ mainmmake=kernel-dos maindir=rom/dos \ arch=unix \ files=boot \ compiler=host
AROS mmakefile.src Low-Level MacrosEdit
LibrariesEdit
A simple library that uses a custom suffix (.wxt), and returns TRUE in its init function, however the Open code never gets called - and openlibrary fails? (the init function does get called though..) With a conf file with no ##functionlist section I get the error: In readref: Could not open (null)
Genmodule tries to read a ref file when no ##functionlist section is available. After adding a dummy function to the conf file it worked for me. Take care: haven't added any flags which avoids creating of header files and such. How to deal with library base pointers in plug-ins when you call library functions.
use only one function -> called to make the "plugin" register all its hooks with wanderer. Iterate through the plugin directory, and for each file ending ".wxt", create an internal plugin structure in which i store the pointer to the libbase of the OpenLibrary'd plugin. After enumerating the plugins, iterate the list of plugin structs and call the single library function which causes them to all register with wanderer. had been using some of the struct library fields (lib_Node.ln_Name was the culprit).
We should remove the dos.c, intuition.c, etc. files with hardcoded version numbers from autoinit and replace them with -ldos -lintuition inside gcc specs file. This would avoid starting programs on older versions of libraries. If an older version suffice some __xxx_version global can be defined in the program code to enable this. We could also provide based on the info you described below exec_v33 exec_v45 link libraries that would also make sure no function of a newer version is used. A very clean solution to get the desired effect.
-noarosc mentions checking the spec file to find out about it but there is nothing in the specs file related. This was added to disabled automatic linking of arosc to all libraries. It was used in the build_library macro - check V0. Automatic linking of arosc.library which had per task context to other libraries which had global context was a very bad thing. "C standard library" objects belonging to global context library were allocated on opening task context. When the task exited and global context library not, global context library was using "freed" memory.
A note to any of you wanting to upgrade to Ubuntu 12.10, or any distribution that uses gcc 4.7. There is an issue (bug? misfeature?) in gcc 4.7 where the '-specs /path/to/spec/override' is processed *after* gcc checks that it has been passed valid arguments. This causes gcc to fail with the error:
gcc-4.7: error: unrecognized command line option "-noarosc"
when it is used to link programs for the x86 and x86_64 targets if you are using the native host's compiler (for example, when compiling for linux-x86_64 hosted). Please use gcc-4.6 ("export CC=gcc-4.6") for hosted builds until further notice (still valid as of March 2013).
Per taskEdit
There are other things for which arosc.library needs to be per task based: autoclosing of open files and autofreeing of malloced memory when a programs exits; a per task errno and environ variable that can be changed by calling library functions.
regina.library does also do that by linking with arosc_rel. It needs some more documentation to make it usable by other people. You can grep aroscbase inside the regina source code to see where it is used. regina.library and arosc.library are per task libraries. Each time regina.library is opened it also opens arosc.library and it then gets the same libbase as the program that uses regina.library.
By linking with arosc_rel and defining aroscbase_offset arosc.library functions called from regina.library will be called with the arosc libbase stored in it's own libbase, and the latter is different for each task that has opened regina.library.
The AROS_IMPORT_ASM_SYM of aroscbase in the startup section of regina.conf assures that the arosc.library init functions are called even if the programs that uses regina.library does not use an arosc.library function itself and normally would not autoopen it.
Problem is that both bz2 and z library use stdio functions. The arosc.library uses the POSIX file descriptors which are of type int to refer to files. The same file descriptor will point to different files in different tasks. That's why arosc.library is a pertask library. FILE * pointer internally have a file descriptor stored that then links to the file.
Now bz2 and z are using also stdio functions and thus also they need a different view for the file descriptors depending in which program the functions are called from. That's why bz2 and z become also pertask libraries.
it breaks POSIX compatibility to use a type other than int for file descriptors. Would a better solution be to assign a globally unique int to each file descriptor, and thus avoid the need to make arosc.library a per-task library? far simpler solution - all DOS FileHandles and FileLocks are allocated from MEMF_31BIT. Then, we can be assured that their BPTRs fit into an int.
int open(const char *path, int flags, int mode) { BPTR ret; ULONG rw = ((flags & O_READ) && !(flags & O_WRITE)) ? MODE_OLDFILE : MODE_NEWFILE; ret = Open(path, rw); if (ret == BNULL) { IoErr_to_errno(IoErr()); return -1; } return (int)(uintptr_t)ret; } void close(int fd) { Close((BPTR)(uintptr_t)fd); } static inline BPTR ftob(int fd) { return (fd == 0) ? Input() : (fd == 1) ? Output() : (fd == 2) ? ErrorOutput() : (fd < 0) ? BNULL : ((BPTR)(uintptr_t)fd); } int read(int fd, void *buff, size_t len) { int ret; ret = Read(ftob(fd), buff, len); if (ret < 0) IoErr_to_errno(IoErr()); return ret; }
you will most likely kill the 64-bit darwin hosted target. AFAIR it has 0 (zero) bytes of memf_31bit memory available.
Must modules which are using pertask libraries be implemented itself as pertask library? Is it a bug or feature that I get now the error about missing symbolsets handling. You will now see more verbose errors for missing symbol sets, for example:
Undefined symbol: __LIBS__symbol_set_handler_missing Undefined symbol: __CTORS__symbol_set_handler_missing
By linking with jpeg and arosc, instead of jpeg_rel and arosc_rel, it was pulling in the PROGRAM_ENTRIES symbolset for arosc initialization. Since jpeg.datatype is a library, not a program, the PROGRAM_ENTRIES was not being called, and some expected initialization was therefore missing.
It is the ctype changes that is causing the problem. This code now uses ADD2INIT macro to add something to initialization of the library. As you don't handle these init set in your code it gives an error. You can for now use -larosc.static -larosc or implement init set handling yourself.
The move to ctype handling is that in the future we may want to have locale handling in the C library so toupper/tolower may be different for different locales. This was not possible with the ctype stuff in the link lib. Ideally in the source code sqlite3-aros.c whould be replaced with sqlite3.conf and genmodule would be called from makefile-new
- strongly* recommend that you *not* use %build_module_simple for pertask/peropener libraries for now. There is a PILE of crap that genmodule needs to do *just*exactly*right* to get them to work, and that pile is still in flux at the moment.
Use %build_module, and add additional initialization with the ADD2*() family of macros.
If you insist on %build_module_simple, you will need to link explicitly with libautoinit.
To handle per-task stuff manually:
LibInit: you will need to call AllocTaskStorageSlot() to get a task storage slot, save that in your global base LibExpunge: FreeTaskStorageSlot() the slot LibOpen: and use SetTaskStorageSlot() to put your task-specific data in the task's slot LibClose: Set the task's storage slot to NULL
You can get the task-specific data in one of your routines, using GetTaskStorageSlot().
if you're not using the stackcall API, that's the general gist of it.
would recommend that you use the static libraries until the pertask/peropener features have stabilized a bit more. You can always go back to dynamic linking to pertask/peropen libs later.
You should be able to use arosc.library without needing to be pertask. Things gets more complicated if code in library uses file handles, malloc, errno or similar things.
Is the PROGRAM_ENTRIES symbolset correct for arosc initialization then, or should it be in the INIT set? If so move the arosc_startup.c to the INIT set.
Think about datatypes. Zune (muimaster.library) caches datatype objects. Task A may be the one triggering NewDtObject(). Task B may be the one triggering DisposeDTObject().
NewDTObject() does OpenLibrary of say png.datatype. DisposeDTObjec() does CloseLibrary of say png.datatype.
If png.datatype usees some pertask z.library that's a problem, isn't it? As png.datatype is not per opener and is linked with arosc there should only be a problem when png.datatype is expunged from memory not when opened or closed. It will also use the arosc.library context from the task that calls the Init function vector of png.datatype and it will only be closed when the Expunge vector is called.
relbaseEdit
stackcall/peropener
- library.conf: relbase FooBase -> rellib foo
- rellib working for standard and peropener/pertask libraries
- <proto/foo.h> automatically will use <proto/foo_rel.h> if 'rellib foo' is used in the libraries .conf
- "uselibs" doesn't need to manually specify rellib libraries
arosc_rel.a is meant to be used from shared libraries not from normal programs. Auto-opening of it is also not finished, manual work is needed ATM.
z_au, png_au, bz2_au, jpeg_au, and expat_au now use the relbase subsytem. The manual init-aros.c stub is no longer needed. Currently, to use relative libraries in your module, you must:
- Enable 'options pertaskbase' in your library's .conf
- Add 'relbase FooBase' to your library's .conf for each relative library you need.
- Make sure to use the '<proto/foo_rel.h>' headers instead of '<proto/foo.h>'
- Link with 'uselibs=foo_rel'
can't find a valid way to implement peropener libraries with 'stack' functions without a real ELF dynamic linker (ie ld-linux.so). The inherent problem is determining the where the 'current' library base is when a stack function is called.
(In the following examples, assume stack.library is a peropener library, and StackFunc() is a stack style function in that library. plugin.library uses stack.library)
Example 1 - Other libraries doing weird things behind your back
extern struct Library *StackBase; /* set up by autoinit */ void foo(void) { struct Library *PluginBase; StackFunc(); // Called with expected global StackBase PluginBase = OpenLibrary("plugin.library",0); /* Note that plugin.library did an OpenLibrary() of "stack.library". * In the current implementation, this now sets the taskslot of * stack.library to the new per-opener stack.library base. */ StackFunc(); // Unexpectly called with the new stack.library base!!! CloseLibrary(PluginBase); /* The CloseLibrary() reset the taskslot to the old base */ StackFunc(); // Back to the old base }
Ok, to fix that issue, let's suppose we use a stub wrapper that sets the taskslot to the (global) library base. This was no problem with old implementation. StackBase was passed in the scratch register StackFunc() each time it was called. This base was then used.
Example 2 - Local vs Global bases
extern struct Library *StackBase; /* set up by autoinit */ void bar(void) { StackBase(); // Called as expected { struct Library *StackBase = OpenLibrary("stack.library"); StackBase(); // WTF! linklib wrapper used *global* StackBase, not local one! CloseLibrary(StackBase); } StackBase(); // Works as expected }
Hmm. Ok, that behavior is going to be a little weird to explain to developers. I don't see the need to support local bases.
Example 3 - Callback handlers
extern struct Library *StackBase; /* set up by autoinit */ const struct handler { void (*dostack)(void); } handler = { .dostack = StackFunc }; void bar(void) { /* Who knows what base this is called with? * It depends on a number of things, could be the global StackBase, * could be the most recently OpenLibrary()'d stack.library base. */ handler->dostack(); }
Function pointers to functions in a peropener library may be a problem but is it needed ?
All in all, until we have either
- a *real* ELF shared library subsystem
or
- real use case for peropener libraries. C split arosstdc.library is peropener base. Reasoning is that you
may sometimes want to do malloc in a library that is not freed when the Task that happened to call function is exiting. Let's say picture caching library that uses ported code which internally uses malloc. If you have a pertask library the malloc will allocate memory on the Task that currently is calling the library and this memory will disappear when this task quits (should do free() prior exit). Before your change the caching library could just link with libarosstdc.a (and not libarosstdc_rel.a) and it worked.
idea could be to either link(*) or unlink(**) the malloc to a given task, depending on from where it is called (within library or not). No, the whole point is to have the malloced memory _not_ to be attached to a task so a cached image can be used from different tasks even if the first task already died.
staticEdit
call the static link library for the static version for the few shared libraries that need it different, like libz_static.a Ideally all code should just work with the shared library version.
Module Link Library uselibs= ---------- ------------ -------------- Foo.datatype => libfoo.datatype.a => foo.datatype Foo.library => libfoo.a => foo foo (static) => libfoo.static.a => foo.static And the 'misc static libs' (libamiga.a libarossupport.a, etc) libmisc.a => libmisc.a => misc
usestartup=no and the '-noarosc' LDFLAG both imply arosc.static (it doesn't hurt to link it, and if you really want arosc.library, that will preempt arosc.static)
Again this will make -lz not link with the shared library stubs. IMO uselibs=z should use shared library by default.
'uselibs="utility jpeg.datatype z.static arossupport"' method.
if there's a dynamic version of a library, it should always be used: static linking of libraries should be discouraged for all the usual reasons, e.g. the danger of embedding old bugs (not just security holes), bloat etc. Don't see the need for a -static option (or any other way to choose between static and dynamic libraries).
MakedependEdit
AROS build system generates for each .c file a .d file where the includes are listed. The .c is recompiled when any of the includes changes. Remember that AROS is an OS in development so we often do/did changes to the core header files. If this makedepend was not done programs would not be rebuilt if changes are made to AROS libraries or other core code. OK, so it's basically creating the dependencies of the .o
mmakefileEdit
We do get an error from it, so something is in fact going wrong. But what is?
Probably a hacky mmakefile so that include file is not found during makedepend but is found during compilation or maybe a wrong dependency so it is not guaranteed that the include file is there during makedepend. And I do think it would be better if the build would stop when such an error occurs.
configuration filesEdit
We are talking about configuration files for modules like this: rom/graphics/graphics.conf.
I have been thinking about similar things, but first I would like to convert our proprietary .conf format to xml. Manually writing file parsings is so passe :)
Uhh.. I have no objection to using a 'standard' parser, but I have to vote no on XML *in specific*.
JSON or YAML (summaries of both are on Wikipedia) are available would be better choices, since they are much more human readable, but semantically equivalent to XML.
I agree that xml is not the easiest format to edit in a text editor and is quite bloated. From the other side it has ubiquituous in scripting and programming language and in text editors and IDEs. I also like that the validity of a xml file can be checked through a schema file and that it also can be a guide for the editor. There are also tools to easily convert xml files based on this schema etc. It does not matter what format it is in but it should take as much coding away from the (genmodule) programmer.
Another improvement over XML could be the inclusion of literal code. Currently some literal code snippets are included in .conf file and in XML they would need some character encoding. How is this for JSON or YAML ?
YAML supports UniCode internally. I don't know how well that could be ported to AROS though since it seems AROS doesn't have UniCode support yet. JSON is based on JavaScript notation and YAML 1.2 can import JSON files as it implemented itself as a complete super-set of JSON. YAML's only 1.2 implementation is in C++ using CMake as a build script creator. If we use the C implementation of libYaml, it's only YAML 1.1 compliant and loses the backward compatibility to JSON.
Any data languages can be checked against a scheme; it's mostly a matter of writing out the schemes to check against. You can but my questions if the tools exists. From the second link you provided: "There are a couple of downsides to YAML: there are not a lot of tools available for it and it’s also not very easy to validate (I am not aware of anything similar to a DTD or a schema)". I find validation/syntax checking as important as human readability. Syntax checking is in the parsing in all four cases. The validation the XML can do is whether it conforms to the parsing and whether it conforms to a specific scheme. YAML and JSON are specifically intended for structured data, en I guess my example is too, so the equivalent XML scheme would check whether the content was correctly structured for structured data. The other three don't need that as anything they parse is by definition structured data.
All four have the same solution: They are all essentially tree builders, and you can walk the tree to see if each node conforms to your content scheme. The object is to use a defined schema/DTD for the files that are describing a library. Text editors that understand schemas can then let you only add fields that are valid by the schema. So this schema let everyone validate a XML if it is a valid XML library description file; they can use standard tools for that.
AFAICS JSON and YAML parsers only validate if the input file is a valid JSON/YAML file, not that it is a valid JSON/YAML library description file. AFAICS no such tools exist for these file formats.
ETask Task StorageEdit
__GM_* functions
__GM_BaseSlot: externally visible slot ID (for the AROS_RELLIBFUNCSTUB() assembly routines) __GM_SetBase_Safe: Set (and reallocate) task storage Static function, only called in a library's InitLib() and OpenLib() code. __GM_GetBase_Safe: Get task storage slot This is the 'slow' version of __GM_GetBase(), which calls Exec/GetTaskStorageSlot(). Returns NULL if the slot does not exist or is unallocated. __GM_GetBase: Get task storage slot (unsafe) This is the 'fast' version of __GM_GetBase(), which does not need to perform any checking. This function is provided by the CPU specific AROS_GM_GETBASE() macro (if defined). The fallback is the same implementation as __GM_GetBase_Safe __AROS_GM_GETBASE: Fast assembly 'stub' for getting the relbase Designed to be used in the AROS_RELLIBFUNCSTUB() implementation. Does not do any sanity checking. Guaranteed to be run only if (a) InitLibrary() or OpenLibrary() has already been called in this ETask context or (b) this ETask is a child of a parent who has opened the slot's library. I can generate implementations of this for arm, m68k, and i386, but I want the location of TaskStorage to be agreed upon before I do that work and testing. AROS_GM_GETBASE(): Generates a C function wrapper around the fast stub.
Genmodule no longer has to have internal understanding of where the TaskStorage resides. All of that knowledge is now in exec.library and the arch/*-all/include/aros/cpu.h headers.
Location of the TaskStorage slots
It was important to me that the address of the ETask does not change. For example, it would be pretty bad if code like this broke:
struct ETask *et = FindTask(NULL)->tc_UnionETask.tc_ETask; ... UnzipFile("foo.zip"); <= opens z_au.library, slots reallocated .. if (et->Parent) { <= ARGH! et was freed! ....
Also, I wanted to minimize the number of places that need to be modified if the TaskStorage location needed to be moved (again).
et_TaskStorage is automatically resized by Exec/SetTaskStorageSlot() as needed, and a new ETask's et_TaskStorage is cloned from its parent, if the parent was also an ETask with et_TaskStorage. What I wanted to say here is that some overhead may be acceptable for SetTaskStorageSlot() if properly documented. E.g. to not call the function in time critical paths.
You clone Parent TaskStorage when creating a subtask as before. This may be acceptable if it is documented that a slot allocated in the parent may not be valid in child if it is allocated in parent after the child has been created. For other use cases think it acceptable to have to be sure that in a Task first a SetStorageSlot() has to be done before getting the value.
Auto generation of oop.libraryEdit
updated genmodule to be capable of generating interface headers from the foo.conf of a root class, and have tested it by updating graphics.hidd to use the autogenerated headers.
Hopefully this will encourage more people to use the oop.library subsystem, by making it easier to create the necessary headers and stubs for an oop.library class interface.
Note that this is still *completely optional*, but is encouraged.
Plans to extend this to generating Objective C interfaces in the future, as well as autoinit and relbase functionality.
This allows a class interface to be defined, and will create a header file in $(AROS_INCLUDES)/interface/My_Foo.h, where 'My_Foo' is the interface's "interfacename". In the future, this could be extended to generate C++ pure virtual class headers, or Objective C protocol headers.
The header comes complete with aMy_Foo_* attribute enums, pMy_Foo_* messages, moMy_Foo method offsets, and the full assortment of interface stubs.
To define a class interface, add to the .conf file of your base class:
##begin interface ##begin config interfaceid my.foo interfacename My_Foo methodstub myFoo # Optional, default to interfacename methodbase MyFooBase attributebase MyFooAttrBase ##end config ##begin attributelist ULONG FooType # [ISG] Type of the Foo BOOL IsBar # [..G] Is this a Bar also? <- comments are preserved! ##end attributelist ##begin methodlist VOID Start(ULONG numfoos) # This comment will appear in the header BOOL Running() .skip 1 # BOOL IsStopped() Disabled obsolete function VOID KillAll(struct TagItem *attrList) ##end methodlist ##end interface
DocumentationEdit
It would be nice if we could just upload to diff (maybe as zip file) and then the patching is automatically done.
If you have a local copy of the whole website, you can update only the file(s) that are changed with a rsync-type script (maybe rsync itself works for the purpose).
MiscEdit
# Your c++ files CXX_FILES := main.cpp debug.cpp subdir/module.cpp # subdir slashes are replaced by three underscores CXX_OBJS := $(addprefix $(GENDIR)/$(CURDIR)/, $(addsuffix .o, $(subst /,___,$(CXX_FILES)) ) ) CXX_FLAGS := -W -Wall -Wno-long-long -fbounds-check CXX_CC = $(TOOLDIR)/crosstools/$(AROS_TARGET_CPU)-aros-g++ CXX_DEPS := $(patsubst %.o,%.d,$(CXX_OBJS)) $(CXX_DEPS): @echo Makedepend $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@)))... @$(CXX_CC) $(CXX_FLAGS) -MM -MT $(patsubst %.d,%.o,$@) -o $@ $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@))) @echo $@: $(patsubst %.d,%.cpp,$(subst ___,/,$(notdir $@))) >>$@ -include $(CXX_DEPS) $(CXX_OBJS): %compile_q \ cmd=$(CXX_CC) \ opt=$(CXX_FLAGS) \ from="$(patsubst %.o,%.cpp,$(subst ___,/,$(notdir $@)))" \ to=$@
- Make sure your target depends on both deps and objs emumiga-library: $(CXX_DEPS) $(CXX_OBJS)
The AROS build system.
Even if it's not specific to a particular platform, the code in arch/common is hardware dependent, whereas the code in rom/ and workbench/ is supposed to be non-hardware-specific. This has been discussed before when you moved other components (e.g. ata.device) from arch/common to rom/devs. IIRC you accepted that that move was inappropriate in retrospect (but didn't undo it).
Having said that, arch/all-pc might be a good place for components shared between i386-pc and x86_64-pc such as the timer HIDD. On further inspection it seems that most drivers are already in workbench/hidds.
IntroductionEdit
AROS build system is based around GNU toolchain. This means we use gcc as our compiler, and the build system needs a POSIX environment to run.
Currently AROS has been successfully build using the following environments:
- Linux, various architectures and distributions. This has been, for a long time, a primary development platform. Most of our nightly builds are running under Linux.
- MacOS X (more technically known as Darwin).
- Cygwin, running Windows.
- MinGW/MSYS, running Windows (both 32-bit and 64-bit version of MinGW was tested).
From these two Windows environments, MinGW is the preferred one, because of significantly faster (compared to Cygwin) operation. There's, however, a known problem: if you want to build native port, GRUB2 can't be built. Its own build system fails is currently incompatible with MinGW, and will fail. You can work around it if you use—with-bootloader=none argument when configuring AROS. This will disable building the primary bootloader. You can perfectly live with that if you already have GRUB installed.
Running on a host whose binary format is different from ELF (i. e. Darwin and Windows), requires you to use native AROS-targeted crosstoolchain. It can be built together with AROS, however using a standalone preinstalled toolchain significantly shortens the build time and saves up your drive space. A pretty good set of prebuilt toolchains for Darwin and Windows can be obtained from AROS Archives.
Cross-compiling a hosted version of AROS requires you, additionally, to have the second crosstoolchain, targeted to what will be your host. For example if you're building Windows-hosted AROS under Linux, you'll need Windows-targeted crosstoolchain. Because of this, building a hosted version is best to be done on the same system it will run on.
In the past, configure found e.g. i386-elf-gcc etc. on the path during a cross-compile without passing special options. I'd like to retain that capability. That *should* still work if you pass in—disable-crosstools.
Remember, --enable-crosstools is the default now - and it would be silly to use the external crosstools is AROS is just going to build its own anyway.
For the kernel tools though, yes, I definitely agree. Let me know if you have a system where the kernel tool type isn't detected properly.
Making use of threaded builds (make -j X)? if not it might be worth using. Please don't; vps is a virtual machine also running some web sites. I don't want to fully starve the rest that is running on that machine. I appreciate what you are saying - but without info on the virtualised hardware cant really comment. How many "cores" does the vm have? if it has >2, don't see why adding an additional thread (make -j 2) should cause any noticeable difference to the web services it also hosts?
26 February 2012, configure has been restructured, to generate three sets of *_*_{cc,as,objdump,...} definitions.
If we are building crosstools:
orig_target_* - AROS built toolchain (in bin/{host}/tools/crosstools/....)
aros_kernel_* - External toolchain, if—with-kernel-tool-prefix, or the architecture configure it as such
(ie hosted archs) Otherwise, it points to the orig_target_* tools
aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*
If we are *not* building crosstools: (--disable-crosstools, or—with-crosstools=...)
aros_kernel_* - External toolchain (required, and configure should be checking for it!)
orig_target_* - Points to aros_kernel_*
aros_target_* - AROS target tools (in bin/{host}/tools/${target_tool_prefix}-*
modified collect-aros to mark ABIv1 ELF files with EI_OSABI of 15 (AROS) instead of 0 (generic Unix). For now, I'm going to hold off on the change to refuse to load ABIv0 files (with EI_OSABI of 0) until I can get some more testing done (since dos/internalloadseg_elf.c is reused in a few places).
A separate change to have ABIv0 refuse to load ABIv1 applications will need to be made. The patch to have ABIv1 refuse to load ABIv0 applications will come in the near future.
Custom toolsEdit
../$srcdir/configure" --target=linux-i386—enable-debug=all—with-portssources="$curdir/$portsdir
Always use the 'tools/crosstools' compiler to build contrib/gnu/gcc. AFAIK this was the previous solution, using TARGET_CC override in mmakefile...
the host toolchain should only be used for compiling the tools (genmodule, elf2hunk, etc), and for the bootstrap (ie AROSBootstrap on linux-hosted and grub2 for pc-*). To be exact, the 'kernel' compiler is used for compiling GRUB2, and probably AROSBootstrap too. This is important when cross-compiling.
How about we invert the sense of --enable-crosstools? We make it '--disable-crosstools', and crosstools=yes is on by default? That way we can support new arch bringup (if we don't have working crosstools yet), but 'most people' won have to deal with the issues of, say, having C compiled with (host) gcc 4.6.1, but C++ compiled with (crosstools) gcc 4.2
add-symbol-file boot/aros-bsp-linux 0xf7b14000 add-symbol-file boot/aros-base 0xf7b6a910
There's "loadkick" gdb command now which does this auomatically. Btw, don't use add-symbol-file. Use "loadseg <address>".
You need to run as you have a stale config.
$ ./config.status—recheck && ./config.status
In the end I would like to get rid of the mmakefile parsing by mmake. What I would like to put in place is that mmake calls the command: 'make -f mmakefile __MM__' and it parses the output of that command. The mmakefile would the be full of statements like:
__MM__ :: echo metatarget : prerequisite1 prerequisite2
This could be generated by genmf macros or gmake functions.
I think this approach would give some advantages:
The parsing code in mmake would become simpler: * No need to discard non-#MM lines or at least reduce it significantly * No need for line continuation handling * No need for variable substitution Rule generation in mmakefile would become more flexible. To generate the output one could use all facilities provided by gmake: if statements, functions, complex variable substitutions. For example: providing arch specific or configuration dependent rules would become much easier. This architecture would be much easier to extend to other make(-like) tools like cmake, scons, ... This would for example allow to gradually convert out genmf+gmake build system to a scons based one. External code coudld choose their prefered method: the AROS SDK would support several systems.
Would like to express the following 'build all libraries I depend on' concept in Metamake:
MODULE=testmod USELIBS=dos graphics utility $(MODULE)-linklib: core-linklibs $(addsuffix -includes,$(USELIBS)) $(addsuffix -linklib,$(USELIBS))
At the moment it not possible as mmake is a static scanner and does not support loops or function like $(addsuffix ...). Look in the AROS dev maillist for a thread called 'mmake RFC' (in Aug 2010) describing my idea. If you look at the svn log of tools/MetaMake there is r34165 'Started to write a function which calls the _MM_ target in a mmakefile. ...'
Can see this breaking because it wont know which "parent" metatarget(s) to invoke to build the prerequisites based on the object files / binaries alone, unless you add a dependancy on the (relevant) metatarget for every binary produced. i.e it would be like doing "make <prerequisites-metatarget>-quick" for the prerequisite. Yes, each module target would get an extra linklib-modulename target. (not linklib-kernel-dos, just linklib-dos, for example).
mmake at the moment only knows about metatargets and metadependencies. It does not handle real files or knows when something is old or new. Therefore it always has to try all metadependencies and make will find out if it is up to date or needs to be rebuilt. This can be changed to also let mmake dependencies on real files (e.g. the .c files for a shared library); remember when something was last build and check if files have changed. But this won't be a small change. IS there some way we can pass info about the file types in the "files=" parameter, so that the macros can automatically pass the files to the necessary utility macros?
CFILES := example1 CPPFILES := example2 ASMFILES := example3 %build_prog mmake=foo-bar \ progname=Example files="c'$(CFILES)',cpp'$(CPPFILES)',asm'$(ASMFILES)'" targetdir=$(AROS_TESTS) \ uselibs="amiga arosc"
IMO uselibs= should only be needed when non-standard libraries are used. In my ABI V1 I even made a patch to remove all standard libs from the uselibs= statement. I do plan to submit this again sometime in the future. And there should not be a need to add these libs to uselibs=. linklibs that are standardly linked should be build by the linklibs-core metatarget. %build_module takes care of the linklinbs-core dependency. Currently a lot of linklibs are not dependent of this metatarger because a lot of the standard libs autoopened by libautoinit.a. TBH, I also find it a bit weird. Standard libraries don't need -lXXX, because they "link" via proto files, right?
They are (currently) only used for the linklibs-<foo> dependency autogeneration. Was under the impression you wanted to move all the per-library autoinit code back to the specific libraries? Yes to avoid the current mismatch between versions in libautoinit and libxxx.a.
for %build_prog and some others so it might seem logical to add another cppfiles But then we might need to add dfiles, modfiles or pfiles for the D language, Modula-2 and Pascal as well in the future so your idea about adding it all to the files parameter in one way or another seems to be more future proof to me.
Personally, I'd prefer to let make.tmpl figure it all out from the extensions, even though it'd be a large changeset to fix all the FILES=lines.
FILES = foobar.c \ qux.cpp \ bar.S \ xyyzy.mod %build_prog mmake=foo-bar progname=Example files="$(FILES)" \ targetdir=$(AROS_TESTS) uselibs="frobozz"
By the way: what are the 'standard libraries'? That is to be discussed. I would include almost all libs in our workbench/libs and rom/ directories unless there is a good reason not to use it as a standard linklib. mesa will always require -lGL to be passed because AROSMesaGetProcAddress is only present in linklib. Also nobody will write code #include <proto/mesa.h>. All code will have #include <GL/gl.h>
working on minimal-version autoopening, to enhance binary compatibility with m68k and PPC AOS flavors. To be clear I like the feature you are implementing, I don't like it that programmers have to specify a long list of libs to uselibs= all the time.
Does this give the programmer a way to specify that he'll need more than the minimum for a function? For example, one aspect of a function may have been buggy/unimplemented in the first version. If that aspect is used, a version is needed that supports it properly.
Yes, in the library.conf file, you would use:
foo.conf
... .version 33 ULONG FooUpdate(struct Foo *foo) ULONG FooVersion() # NOTE: The version 33 FooSet() didn't work at all! # It was fixed in version 34. .version 34 ULONG FooSet(struct Foo *foo, ULONG key, ULONG val) .version 33 ULONG FooGet(struct Foo *foo, ULONG key) ...
Then, if you use FooSet(), you'll get version 34 of the library, but if your code never calls FooSet(), you'll only OpenLibrary() version 33.
OpenLibrary requiring version 34 in one case and 37 in the other, depending on whether I needed that specific NULL-handling aspect of FooSet(). How will this work with otherwise automatic determination of minimum versions?
Uh... You'll have the handle library loading yourself, then:
APTR Foo; if (IAmOnABrokenA1000()) { Foo = OpenLibrary("foo.library",34); } else if (TheA3000ReallyNeedsVersion37()) { Foo = OpenLibrary("foo.library",37); } else { /* Put your hands in the air like you just don't care! */ Alert(AT_DeadEnd); }
Syntax of the makefileEdit
Where do I need to make the changes to add 'contrib' to the amiga-m68k build process? You need to study scripts in /AROS/scripts/nightly/pkg and get some knowledge from them. Neil can probably give you better explanation. Contrib | https://en.m.wikibooks.org/wiki/Aros/Developer/BuildSystem | CC-MAIN-2018-43 | refinedweb | 11,978 | 65.42 |
#include <nrt/Core/Blackboard/details/ModulePortHelpers.H>
Module objects which derive from MessageChecker will be allowed to post Message objects.
Definition at line 58 of file Blackboard.H.
Return true if we currently have split-sub-checkers.
Currently, this is always false, no support yet for split checkers
Implements nrt::MessageCheckerCoreBase.
Definition at line 705 of file ModulePortImpl.H.
Check for a message on the Blackboard.
Note that the declaration of check() is a bit obfuscated by the SFINAE paradigm used to allow multiple checkers in one Module; the simplified declaration would look essentially like this:
Note that an exception will be thrown if you try to check() from a MessageChecker that is not attached to a Module, or if you try to check() while the system in not in the running() state. Have a look at Component.H and Manager.H for some explanations about the running() state.
Definition at line 716 of file ModulePortImpl.H.
References nrt::async(), BBTHROW, BBTHROWX, nrt::Singleton< T >::instance(), NRT_BBDEBUG, and NRT_WARNING.
Create a connector for this port's Checking type on the master Blackboard.
Beware of the order of the topic and topicfilter arguments as they differ between the poster checker and subscriber version of make_connector. Think of the first one as directly connected to the physical port, and the other one as the other end of the connector.
Definition at line 797 of file ModulePortImpl.H.
References nrt::Checker, and nrt::Singleton< T >::instance(). | http://nrtkit.net/documentation/classnrt_1_1MessageCheckerCore.html | CC-MAIN-2018-51 | refinedweb | 243 | 58.08 |
A fundamental geospatial operation is checking to see if a point is inside a polygon. This one operation is the atomic building block of many, many different types of spatial queries. This operation seems deceptively simple because it's so easy to see and comprehend visually. But doing this check computationally gets quite complex.
At first glance there are dozens of algorithms addressing this challenge. However they all have special cases where they fail. The failures come from the infinite number of ways polygons can form which ultimately foil any sort of systematic check. For a programmer the choice comes down to a compromise between computational efficiency (i.e. speed in this case) and thoroughness (i.e. how rare the exceptions are).
The best solution to this issue I've found is the "Ray Casting Method". The idea is you start drawing an imaginary line from the point in question and stop drawing it when the line leaves the polygon bounding box. Along the way you count the number of times you crossed the polygon's boundary. If the count is an odd number the point must be inside. If it's an even number the point is outside the polygon. So in summary, odd=in, even=out - got it?
This algorithm is fast and is accurate. In fact, pretty much the only way you can stump it is if the point is ridiculously close to the polygon boundary where a rounding error would merge the point with the boundary. In that case you can just blame your programming language and switch to Python.
I had no intention of implementing this algorithm myself so I googled several options, tried them out, and found a winner. It's interesting but not surprising that most of the spatial algorithms I find and use come from computer graphics sites, usually gaming sites or computer vision sites, as opposed to geospatial sites. My favorite ray casting point-in-polygon sample came from the "Simple Machine Forum" at "PSE Entertainment Corp". It was posted by their anonymous webmaster.
# Determine if a point is inside a given polygon or not # Polygon is a list of (x,y) pairs. This function # returns True or False. The algorithm is called # the "Ray Casting Method". def point_in_poly ## Test polygon = [(0,10),(10,10),(10,0),(0,0)] point_x = 5 point_y = 5 ## Call the function with the points and the polygon print point_in_poly(point_x,point_y,polygon)
Easy to read, easy to use. In a previous post on creating a dot density profile, I used the "contains" method in OGR to check randomly-generated points representing population counts against US Census Bureau tracts. That script created a point shapefile which could then be added as a layer. It worked great but it wasn't pure python because of OGR. The other problem with that recipe is creating a shapefile is overkill as dot density maps are just a visualization.
I decided to build on some other posts to combine this ray casting method, PNGCanvas, and the Python Shapefile Library to create a lightweight, pure Python dot density map implementation. The following code reads in a shapefile of census tracts, looks at the population value for that tract, then randomly draws a dot within that census tract for every 50 people. The census tract boundaries are also added to the resulting PNG image. The conventional wisdom, especially in the geospatial world, states if you need to do a large number of costly calculations it's worth using C because Python will be much slower. To my surprise the pure Python version was just about as quick as the OGR version. I figured the point-in-polygon calculation would be the most costly part. The results are close enough to warrant further detailed profiling which I'll do at some point. But regardless this operation is much, much quicker in pure Python than I expected.
import random import shapefile import pngcanvas def pip # Source shapefile - can be any polygon r = shapefile.Reader("GIS_CensusTract_poly.shp") # pixel to coordinate info xdist = r.bbox[2] - r.bbox[0] ydist = r.bbox[3] - r.bbox[1] iwidth = 600 iheight = 500 xratio = iwidth/xdist yratio = iheight/ydist c = pngcanvas.PNGCanvas(iwidth,iheight,color=[255,255,255,0xff]) # background color c.filledRectangle(0,0,iwidth,iheight) # Pen color c.color = [139,137,137,0xff] # Draw the polygons for shape in r.shapes(): pixels = [] for x,y in shape.points: px = int(iwidth - ((r.bbox[2] - x) * xratio)) py = int((r.bbox[3] - y) * yratio) pixels.append([px,py]) c.polyline(pixels) rnum = 0 trnum = len(r.shapeRecords()) for sr in r.shapeRecords(): rnum += 1 #print rnum, " of ", trnum density = sr.record[20] total = int(density / 50) count = 0 minx, miny, maxx, maxy = sr.shape.bbox while count < total: x = random.uniform(minx,maxx) y = random.uniform(miny,maxy) if pip(x,y,sr.shape.points): count += 1 #print " ", count, " of ", total px = int(iwidth - ((r.bbox[2] - x) * xratio)) py = int((r.bbox[3] - y) * yratio) c.point(px,py,color=[255,0,0,0xff]) f = file("density_pure.png", "wb") f.write(c.dump()) f.close()
The shapefile used above can be found here.
You can download PNGCanvas here.
And the Python Shapefile Library is here. | http://geospatialpython.com/2011_01_01_archive.html | CC-MAIN-2014-42 | refinedweb | 877 | 67.96 |
Re: How to fix broken security in Windows 2000?
From: Shannon Jacobs (shanen_at_my-deja.com)
Date: 02/07/05
- ]
Date: Tue, 8 Feb 2005 05:48:34 +0900
I really am curious why you (Karl Levinson, mvp) persist in blath^H^H^H^H^H
commenting about a technical topic you know so little about. The only
explanation I can come up with is that you get some kind of Microsoft
brownie points for doing it. Your claim of trying to be helpful does not
sound very convincing at this point. Irrespective of your mysterious goal or
motivation, what you actually do is cause my newsreader to show the thread
is active, causing me to hope that someone who actually understands the
situation has shown up. A few years ago, that someone probably would have
been an MVP who actually understood the technology involved, and the
question would have been satisfactorily resolved within two or three
exchanges. At least that was my most common experience in those
days--whereas this exchange is pretty typical of the new situation.
If you actually go and look "in the trenches", you will see that there are
LOTS of security certificates and LOTS of files. Before resorting to the
newsgroups, I had already spent quite a bit of time trying to do it the
"Microsoft way", and found out that I was apparently wasting my time. To
make progress by that path, there would need to be some way to establish a
relationship between a file and the security certificate it requires. I can
definitely say that the specific security certificates listed in that
article (and in several others) are already present and therefore do NOT
solve the problems on at least one machine. Perhaps you'd like to suggest
that I just try to collect all the security certificates in the world and
import all of them? (Actually, I suspect that approach would actually fail
unless they were imported in the proper order.)
I did manage to test a number of additional machines, and so far the only
interesting pattern seems unchanged. Every Windows 2000 box is broken, and
every Windows XP machine is okay. I even managed to stumble across a
researcher with an English W2K machine, and it seemed to be even more badly
afflicted than most of the Japanese machines. One of the Japanese W2K
machines actually took a while to come up with a missing certificate, but
some of the delay was probably due to another process that was running at
the same time. Still, I do have the impression that the problem is not
absolutely uniform, but that some machines are missing more certificates
than others. Some of this might be because Microsoft's security certificate
upgrades have typically not been included on the primary patch list, but in
the second group, and some people may have skipped those. However, I can
certainly say that for the machines I personally control all of those
security certificate upgrades have been installed--to no avail.
Karl Levinson, mvp <levinson_k@despammed.com> wrote:
> "Shannon Jacobs" <shanen@my-deja.com> wrote in message
> news:ezTj1EMDFHA.1188@tk2msftngp13.phx.gbl...
>>
>
> The article lists the certificates used to verify the crypto
> signatures on files from updated Microsoft service packs and
> patches. So, this article certainly answers this question at least
> to those files. I would be very surprised if files from the
> original Windows install CD were not signed either with those same
> certificates, or using other older certificates with the same name
> from the same root authority. It appears to be the closest answer
> you're going to find on the Internet [a google search turned up
> nothing else as far as I could find] and is absolutely worth a try.
>
>> case). Part of the final section would be relevant (though I
>> already know this is not the most convenient way to do it) *IF*
>> there was some way to explicitly identify the missing certificates
>> using SFC or some other tool.
>
> The article does identify the missing certificates, or at least the
> three or so required certificates. It's just three certificates,
> so why not open your GUI and compare what you've got to a working
> or newly installed / imaged Windows 2000 computer? How long could
> that possibly take, a few minutes? If you confirm that no
> certificates are missing, the other sections of that article then
> become relevant, by telling you the other possible dependencies. I
> don't see any reason to delay checking all of the dependencies in
> the article, to confirm these are not the problem. For example,
> you haven't told us whether the crypto service is starting on your
> computers [one of the troubleshooting steps mentioned in the
> article], unregistering and re-registering the DLLs in question,
> etc. I had a similar problem and ran through most of the steps in
> an hour or less, much less time than we've spent arguing about
> whether or not that article is the answer to your question. I
> really can't figure out what your aversion is to you or someone
> else on the IT staff there trying out all the steps in the article.
>
>> It makes me wonder if perhaps the real reason Microsoft has so far
>> avoided answering the question is because they no longer support
>> Windows 2000 to that degree.
>
> As far as tech support goes, Windows 2000 is every bit as supported
> as it was on the first day of its release, unless you're asking for
> new functionality to be programmed.
>
>> Imaginary (but sadly plausible) Microsoftian dialog:
>
> Very imaginary.
>
>> found the problem on any WXP machine). That means it would be
>> fundamentally impossible to know whether or not a W2K machine has
>> valid system files, unless you use the CD to restore the original
>> system files.
>
> Or you use a computer that isn't having the problem, or a freshly
> installed computer.
>
>> Of course that
>> cure would be worse than the disease, since you would almost
>> surely be *undoing* various security patches.
>
> Not in Windows 2000 and newer, it tracks and replaces updated files
> for you. I wouldn't be using the install CD here though, it's
> unnecessary.
>
>> Note that if all W2K machines are
>> missing certain security certificates, then the frequently
>> appearing suggestion (in many of Microsoft's "support" Web pages)
>> of copying them (via export) from another W2K machine is not going
>> to work, either.
>
> That's why you copy them from a known working Windows 2000
> computer, or at least compare them with a known working computer,
> in the default settings that havent been touched by your IT staff.
> Because you refuse to look at the certificates and compare them, we
> really have no idea whether the problem is really missing
> certificates or not.
>
>> Mr. Dilley's rudeness was rather amusing (or even hypocritical) in
>> a post that apparently accused someone else of rudeness. (Hard to
>> be sure what his intended points were, since they were so badly
>> expressed.)]
>
> I understood them. His point is that you are very rude and yet you
> need and demand assistance from the people you are insulting.
> Also, your IT staff should be the primary ones troubleshooting
> this, not you.
- ] | http://www.tech-archive.net/Archive/Win2000/microsoft.public.win2000.windows_update/2005-02/0046.html | crawl-002 | refinedweb | 1,206 | 58.01 |
Hide Forgot
We have found that the timers in the OS are getting into a loop and are not
getting updated.
If we hammer the system with network I/O, the system enventually goes to the
weeds. With more debugging, we have found a bug in a timer in the kernel. We
have fixed the bug and rerunning the test.
----------
Action by: Bryan.Leopard
Issue Registered
----------
Action by: ltroan
ltroan assigned to issue for HP-ProLiant.
Category set to: Kernel
Status set to: Waiting on Client
----------
Action by: Bryan.Leopard
Larry,
We talked with Sue about this yesterday. We were trying to get this into the
e.12 errata before it went out the door... Here is the information from our
engineers and attached is the patch that we feel fixes it. L Woodman has it
already via email.
Bryan
This has been reproduced on several SMP systems here, such as the DL580 G2 (4
processors), with 4 Gbit NICs running either the tg3 or bcm5700 driver. The ftp
test copies files between the test systems. It usually fails in 2 � 8 hours but
may take as long as 16 hours.
The patch has run on two systems for over 6 hours and is still running while on
another system it ran for just under 2 hours before the system locked up. On
this last system, we didn�t have the BUG check code. We�re adding the bug check
code in __run_timers so that we�re sure we are seeing the same problem.
File uploaded: timer.c.patch
----------
Action by: Bryan.Leopard
Status set to: Waiting on Tech
----------
Action by: ltroan
This will be fixed in AS2.1Q2 errata (April-beta, May-ship).
----------
Action by: arjanv
this looks like a nice workaround for a buggy use of timers, on first sight.
ISSUE TRACKER 15751 opened as sev 1
Created attachment 90135 [details]
timer.c.patch
NEEDED FOR AS 2.1 Q2 ERRATA.
lwoodman says will be fixed in Q2 errata.
FROM ISSUE TRACKER...
Event posted 04-10-2003 02:53pm by Bryan.Leopard with duration of 0.00
If the error is associated with the Broadcom driver as indicated above, Red Hat
asks that you reproduce the error with the tg3 driver in the errata -- it
supports all bcm chip sets except the 5705. If the problem can not be reproduced
with the open sourced tg3 driver, Red Hat will not be able to address this problem.
Created attachment 91257 [details]
timer_fix.patch (later patch)
FROM ISSUE TRACKER>>>>>>>>>>
Event posted 04-16-2003 01:40pm by brian.b with duration of 0.00
timer_fix.patch
I have asked QA to tell me whether this is broken with the tg3 driver. While
waiting on that, here is the patch that should be used (not the one earlier in
the thread). The do_IRQ error is a separate issue and should probably be
ignored for this issue's purposes.
File uploaded: timer_fix.patch
----------------------
Event posted 04-16-2003 02:35pm by brian.b with duration of 0.00
This issue occurs also with the tg3 driver. Please use the patch above to
correct the problem.
----------------------
Event posted 04-22-2003 04:03pm by brian.b with duration of 0.00
Please ignore the do_IRQ problem. This is not a Red Hat problem and is separate
from this issue. The system does fail without the timer_fix.patch regardless of
whether you use tg3 or broadcom. This patch needs to be included in the QU2
release.
Comment on attachment 90135 [details]
timer.c.patch
superceded by April patch
THIS IS ALSO ISSUE TRACKER 19520.... (PRESALES ISSUE FROM BRYSTOL WEST).
Created attachment 91362 [details]
rsana.1518.tar.gz
Created attachment 91371 [details]
timer.txt (sysrq info)
The attached patch basically modifies add_timer to act like mod_timer....this
shouldn't be necessary. It seems that we do not fully understand the root cause
of the issue. 2.5 has similar timer code and doesn't need modify semantics for
add. it feel to me like a driver might not be using the semantics that the
timer.c code expects. for example, a drv might call add_timer, then set the
timer list fields to NULL, and then do a mod_timer.
Additional Information from Issue Tracker......
Event posted 05-02-2003 02:57pm by brian.b with duration of 0.00
As for the new kernel image with the additional sysrq support, we can't get to
porkchop...
It will be interesting to see the additional processor's states but I don't
think we'll get any new information. When we originally debugged this problem we
were using an ITP. After the system locked up, we could stop and examine the
state of all the CPUs. The primary hang was in __run_timers() always followed by
a different CPU attempting to del_timer_sync() on that same timer. The rest of
the CPUs weren't doing anything interesting. By the time the kernel falls into
the __run_timer() hole, the damage had been done long before.
------------
Event posted 05-02-2003 03:21pm by ltroan with duration of 0.00
Sorry for the confusion here. Porkchop is an internal user.The discussion this
morning was to get Jeff to push the image to a people.pageat redhat where you
could access/download the image. Will pursue this.
--------------------
Event posted 05-02-2003 03:27pm by jneedle with duration of 0.00
Send Notifications<filename>
Larry, I e-mailed you the username and password for access to that directory.
------------------
Event posted 05-02-2003 04:11pm by ltroan with duration of 0.50
The -e18.3 kernel is for internal HP use ONLY !!!
And not to be given to any customer including Bristol West FOR ANY REASON !!!
You can download it as follows......
---------------------------------------------
Jeff has put the following files in
people.redhat.com:/jneedle/.private/.hp/
This is not a browseable directory. (edited)
They will need to tack on the exact file name.
The directory is ID and password protected. You can obtain these from me (Larry
Troan). Do not append them to this Issue Tracker or to the HP tracking tool.
Status set to: Waiting on Client
--------------------
Event posted 05-02-2003 05:04pm by tom.rhodes with duration of 0.00
We have downloaded the -e18.3 kernel and will run it over the weekend.
--------------------------
Event posted 05-02-2003 05:43pm by tom.rhodes with duration of 0.00
newsysrqinfo
New sysrq output from -e18 kernel attached showing more CPU status:
CPU 0-3 and 6-7 are idle
CPU 4 is in __run_timers or send_sig_info (called from __run_timers)
CPU 5 was never displayed and presumably is stuck in del_timer_sync trying to
delete the timer running on CPU 4
WRT comments from jbaron above, this doesn't look like a driver issue. In both
traces we've captured so far, the timer in question was an itimer being used by
the X server. In both traces the X server process is stuck in del_timer_sync
trying to delete the timer that __run_timers is stuck on.
File uploaded: newsysrqinfo
----------
Event posted 05-02-2003 05:47pm by tom.rhodes with duration of 0.00
Status set to: Waiting on Tech
----------
Event posted 05-05-2003 06:57am by arjanv with duration of 0.00
Send Notifications
For this reason, add_timer has no protection against another
add_timer/del_timer/del_timer_sync/mod_timer running in parallel. By adding the
same locking sequence that is used in mod_timer(), this problem is avoided.
exactly. And it doesn't need to! If the code doesn't provide that protection
itself it should use mod_timer not add_timer.
Created attachment 91498 [details]
newsysrqinfo
Created attachment 91515 [details]
sysrqw_may02
FROM ISSUE TRACKER............
Event posted 05-05-2003 10:58am by tom.rhodes with duration of 0.00
Notes : Attached results from 18.3 kernel and alt-sysrq-w output. Disappointing
results: only cpu 4 and 7 were reported, cpu4 was stuck in run_timers and cpu7
was idle. The alt-sysrq-t output shows that the X server process is stuck in
del_timer_sync.
File uploaded: sysrqw_may02
COMMENT FROM FEATUREZILLA 90549
------- Additional Comment #1 From Arjan van de Ven on 2003-05-09 11:45 -------
Action by: Bryan.Leopard
we don't care. that driver is borken.
Please try to reproduce this without adding *ANY* drivers we don't ship.
FROM ISSUE TRACKER ....
Event posted 05-15-2003 12:10pm by brian.b with duration of 0.00
This also occurs with the tg3 driver as noted in the following events:
Event posted 02-14-2003 09:40am by Bryan.Leopard
Event posted 04-16-2003 02:35pm by brian.b
Event posted 04-22-2003 04:03pm by brian.b
Status set to: Waiting on Tech
COMMENT FROM FEATUREZILLA 90549....
------- Additional Comment #7 From Larry Troan on 2003-07-11 09:05 -------
While agreeing that this is likely a bug, we are waiting for Laurie at HP to
find out if this is a customer bug other than Bristol West. It was originally
reported as a Bristol West Sev 1 but I've been working with BW and HP, and gave
the customer QU2 which has fixed all but an HP PSP (system monitor) problem that
apparently causes a kernel panic. HP L3 is investigating the PSP problem.
From BW/HP/RH teleconference minutes: RHES 2.1 now a supported operating system.
PSP v 6.40 is available for download from the web. URL sent in email during
conference call.
See Issue Tracker 21355 for complete Bristol West details.
FROM ISSUE TRACKER....
Event posted 07-15-2003 07:43am by ltroan with duration of 0.20
FROM BRIAN BAKER..........
Hi guys,
I have confirmed with Laurie that there is not a known customer with the
add_timer issue.
Thanks, Brian.
Brian, this is being carried as a sev 1 in Issue Tracker. Suggest we lower its
severity to a sev 2. I've dropped the priority in Bugzilla from HIGH to NORMAL
and would like to drop the severity as well.
FROM ISSUE TRACKER...
Event posted 07-31-2003 09:17am by brian.b with duration of 0.00
A little more info from our engineers:
This add_timer issue has shown up on the kernel mailing list:
Andrea says below what we have been saying all along: "it's del_timer_sync
against add_timer". See my note on 4/29/03. Specifically del_timer_sync (due to
setitimer from the X server) running in parallel with an add_timer.
> On Wed, 30 Jul 2003, Andrea Arcangeli wrote:
> > > The thing triggered simply by running setitimer in one function, while
> > the it_real_fn was running in the other cpu. I don't see how 2.6 can
> > have fixed this, it_real_fun can still trivially call add_timer while
> > you run inside do_setitimer in 2.6 too. [...]
>
> This is not a race that can happen. itimer does this:
>
> del_timer_sync();
> add_timer();
>
> how can the add_timer() still happen while it_real_fn is still running on
> another CPU?
it's not add_timer against add_timer in this case, it's del_timer_sync against
add_timer.
cpu0 cpu1
------------ --------------------
do_setitimer
it_real_fn
del_timer_sync add_timer -> crash
Andrea
-------------------------------------------------------------------
Event posted 07-31-2003 10:51am by jneedle with duration of 0.00
Yes, there are active discussions on this in LKML. Ingo and Andrea continue to
try and hash out the very complex interactions of the various timer routines and
a patch has been created for 2.4-based kernels that we are testing. The entire
thread can be found here:
Hopefully this will be resolved shortly. It is a showstopper for QU3.
i've committed Ingo's solution...performance testing is needed
FROM ISSUE TRACKER
Event posted 09-01-2003 08:56pm by brian.b with duration of 0.00
Can we get an early look at the solution?
Test Case.....run against Jason's people page code of 9/05 with failure in 1-2
hours...
JASON, IS QA CODE BEYOND THIS FAILURE ???
--------------------
System locking up on SMP box by the setitimer() with invalid argument.
This has been reproduced on several SMP systems. It usually fails in 1
- 6 hours but may take as long as 12 hours.
It is possible to make reproductions. The details are as follows.
1. gcc test.c
2. while : ; do ./a.out ; done
3. netstat -c (execution on the another console)
system locking up after 1 - 6 hours.
--------------- test.c ------------------
#include <stdio.h>
#include <sys/time.h>
#include <signal.h>
#include <unistd.h>
#define LOOP 100000000
#define DELAY_SEC -1
#define DELAY_USEC -1
volatile int count = 0;
struct timeval tv[LOOP];
struct timezone tz;
void sig_action()
{
gettimeofday(tv + count, &tz);
count ++;
}
int main()
{
struct itimerval value, ovalue;
int i;
for(i=273; i<LOOP; i++){
value.it_value.tv_usec = DELAY_USEC;
value.it_value.tv_sec = DELAY_SEC;
value.it_interval.tv_usec = DELAY_USEC;
value.it_interval.tv_sec = DELAY_SEC;
setitimer(ITIMER_REAL, &value, &ovalue);
printf("%3d : \n", i);
}
return 0;
}
--------------- test.c ------------------
unfortunately, the latest timer fixes do not address this test case yet.
When we have a fix that accommodates the above test case, HP would like to test
it in parallel with us. Can we get them a kernel when it's available?
agreed, state changing to assigned, and yes we will make the kernel available as
soon as we track this down.
*** Bug 104297 has been marked as a duplicate of this bug. ***
90549 technically is a dup of this. 90549 is a featurezilla and this one is a
bugzilla and I believe 90549 was put in because of the procedures that TAMs had
in place back when QU2 was being created. 90549 seems superfluous at this point.
HP requests early code on this if possible so they can begin testing
our latest fix. This is their #1 RHEL2.1 MUSTFIX bug.
FROM ISSUE TRACKER
Event posted 11-20-2003 05:40pm by brian.b with duration of 0.00
Re-tested with "2.4.9-e.27.28.test" kernel. (Timer tests) . System
lockup observed within 5 min after stating the tests. Lock-up
reproduced three times with this kernel.
There were numerous timer.c related fixes incorporated into U3, which
address the majority of related issues. This particular bugzilla
entry refers to a corner case which was not addressed. Substantial
effort was expended address this case, but it was concluded that a
non-intrusive resolution is not viable within the bounds of
compatibility constraints. We do have some good ideas to address the
issue because, but that would require us to break kernel
compatibility. The outcome of our analysis is that this particular
bug represents an unlikely corner case which does not justify breaking
kernel compatibility over.
HP-ProLiant requesting this be nominated to the Update4 MUSTFIX list.
However, RH Engineering feels initial bug is fixed in U3 and that the
timer.c program is non-realistic edge case. RH has asked that HP
recreate original problem documented in Issue Tracker 15751.
I am therefore refraining from from nominating this issue to the U4
list until it is recreated via the original test scenario.. | https://bugzilla.redhat.com/show_bug.cgi?id=84452 | CC-MAIN-2020-50 | refinedweb | 2,494 | 68.87 |
Trade data in its raw form is tick by tick data, where each tick represents a trade. It is very useful, but it has far too much noise. In this blog, we convert this tick by tick (TBT) data into an OHLC (Open, High, Low, and Close) format using the resample function of the Pandas library.
We cover:
- What is Tick by Tick Data?
- What is OHLC Data?
- Converting tick by tick data to OHLC data
- Step 1 – Import pandas package
- Step 2 – Load the data
- Step 3 – Resample the data
What is Tick by Tick Data?
Before we go any further let’s understand what tick by tick data is. A single transaction between a buyer and a seller is represented by a tick.
In other words, when a buyer and a seller do the transection of the number of stocks on an agreed-upon price, it is represented by a tick. Multiple transactions of this type can happen under a second and each of them would be represented by a tick.
The tick by tick data looks like this.
This data was downloaded from First Rate Data
The first column of the data is the date and time at which the trade occurred. The second column is the last traded price (LTP) and the third column is the number of bitcoins traded in that particular transaction that is the last traded quantity (LTQ).
And the last column is the exchange code. For this tutorial, we will use the bitcoin data of the fourth of February 2021. This data was downloaded from FirstRate Data.
We can add a section of a couple of suggested reads about the existing blogs on data, preprocessing, etc.
We can also add one paragraph about Pandas here and link it to our blog.
What is OHLC Data?
As we now know, the tick by tick data is very high-frequency data. What if we want to quickly check the moment of the price in 1 min, 10 mins or 1 day?
We would have to check each tick manually to check the price moment. This sounds burdensome, but it can actually be done very quickly if we summarise the data into open, high, low, and close prices.
Now we will walk through the whole process of converting the tick by tick data into OHLC format using the resample function from the pandas library.
Converting tick by tick data to OHLC data
One can convert tick by tick data to OHLC data using the following steps:
Step 1 – Import pandas package
Pandas is a popular Python package that is most widely used to handle tabular data. Pandas is used for important functions such as data wrangling, data manipulation, data analyses etc.
import pandas as pd
import_pandas.py hosted with ❤ by GitHub
Step 2 – Load the data
Data is stored in my working directory with the name ‘BTC_2021-02-04.csv’. We are setting the Date time column as the index. As we saw earlier, the data is without a header. Hence we would add the header to the data while importing it. Thus importing and adding header takes place in the same line of code.
# Reading the data data = pd.read_csv('BTC_2021-02-04.csv', names=['Date_Time', 'LTP', 'LTQ', 'Exchage Code'], index_col=0) # Convert the index to datetime data.index = pd.to_datetime(data.index, format='%Y-%m-%d %H:%M:%S:%f') # Print the last 5 rows data.head()
read_tick_data_csv_file.py hosted with ❤ by GitHub
This is what the data frame looks like:-
Step 3 – Resample the data
We will now use the resample method of the pandas library. The resample method allows us to convert tick by tick data into OHLC format. We shall resample the data every 15 minutes and then divide it into OHLC format.
If you want to resample the data into smaller timeframes (milliseconds/microseconds/seconds), use L for milliseconds, U for microseconds, and S for seconds.
# Resample LTP column to 15 mins bars using resample function from pandas resample_LTP = data['LTP'].resample('15Min').ohlc(_method='ohlc') # Resample LTQ column to 15 mins bars using resample function from pandas resample_LTQ = data['LTQ'].resample('15Min').sum()
resample_LTP.py hosted with ❤ by GitHub
A snapshot of tick-by-tick data converted into OHLC format can be viewed with the following line of code:-
# Print the last 5 rows of resampled LTP resample_LTP.head()
LTP_head.py hosted with ❤ by GitHub
# Print the last 5 rows of resampled LTQ resample_LTQ.head()
LTQ_head.py hosted with ❤ by GitHub
You may concatenate ask and bid price to have a combined data frame.
# Concatenate resampled data resample_data = pd.concat([resample_LTP, resample_LTQ], axis=1,) # Print the last 5 rows resample_data.head()
resample_concat.py hosted with ❤ by GitHub
Conclusion
This blog described a quick way of computing the OHLC using TBT data. This can be applied across different assets and one can devise different strategies based on the OHLC data.
We can also plot charts based on OHLC and generate trade signals. Some other ways the data can be used are to build technical indicators in python or compute risk-adjusted returns.
Want to learn about algorithmic trading? Check out Quantra’s learning track on Algorithmic Trading for Everyone which is a set of 7 courses and covers a wide variety of topics such as Day Trading, Machine Learning, etc. Be sure to check it out!
For additional insight on this topic, see the full article here:.
Disclaimer: All investments and trading in the stock market involve risk. Any decision. | https://www.tradersinsight.news/ibkr-quant-news/pandas-ohlc-convert-tick-by-tick-data-to-ohlc-data/ | CC-MAIN-2022-27 | refinedweb | 921 | 63.9 |
Working with Pandas DataFrame can make life easy, especially if you need to do it quickly.
import arcpy import pandas as pd import sys #-------------------------------------------------------------------------- def trace(): """ trace finds the line, the filename and error message and returns it to the user """ import traceback tb = sys.exc_info()[2] tbinfo = traceback.format_tb(tb)[0] # script name + line number line = tbinfo.split(", ")[1] # Get Python syntax error # synerror = traceback.format_exc().splitlines()[-1] return line, __file__, synerror with arcpy.da.SearchCursor(r"d:\temp\scratch.gdb\INCIDENTS_points", ["OBJECTID", "SHAPE@X", "SHAPE@Y"]) as rows: try: df = pd.DataFrame.from_records(data=rows, index=None, exclude=None, columns=rows.fields, coerce_float=True) print ((df.columns[1], df.columns[2])) print ((df[df.columns[1]].mean(), df[df.columns[2]].mean())) except: print trace()
Like normal, you create an arcpy.da cursor, then pass that generator into the DataFrame's from_records(). Once the data is loaded, like in my example, you can perform operations on the frame itself. For example let's say you needed the mean location of points. This can be quickly done by loading in all the location XY columns (SHAPE@X and SHAPE@Y) and performing a mean call on each column.
With this method you can't control the chunksize when loading the data, so be careful of your memory.
With this method you can't control the chunksize when loading the data, so be careful of your memory. | https://anothergisblog.blogspot.in/2016/07/ | CC-MAIN-2018-09 | refinedweb | 237 | 60.11 |
Configuration basics
This section explains how to quickly configure basic settings that the Amazon Command Line Interface (Amazon CLI) uses to interact with Amazon. These include your security credentials, the default output format, and the default Amazon Region.
Amazon requires that all incoming requests are cryptographically signed. The Amazon CLI does this for you. The "signature" includes a date/time stamp. Therefore, you must ensure that your computer's date and time are set correctly. If you don't, and the date/time in the signature is too far off of the date/time recognized by the Amazon service, Amazon rejects the request.
Topics
Quick configuration with
aws
configure
For general use, the
aws configure command is the fastest way to set up
your Amazon CLI installation. When you enter this command, the Amazon CLI prompts you for four
pieces of information:
The Amazon CLI stores this information in a profile (a
collection of settings) named
default in the
credentials file. By default, the information in this profile
is used when you run an Amazon CLI command that doesn't explicitly specify a profile to use.
For more information on the
credentials file, see Configuration and credential file settings
The following example shows sample values. Replace them with your own values as described in the following sections.
$
json
Access key ID and secret access key
Access keys use an access key ID and secret access key that you use to sign programmatic requests to Amazon.
Creating a key pair
Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests that you make to Amazon. If you don't have access keys, you can create them from the Amazon Web Services Management Console. As a best practice, do not use the Amazon Web Services Amazon Web Services Amazon Web Services account and never email them. Do not share them outside your organization, even if an inquiry appears to come from Amazon
Amazon security credentials in Amazon General Reference
Importing a key pair via .CSV file
Instead of using
aws configure to enter in a key pair, you can import
the
.csv file you downloaded after you created your key pair.
The
.csv file must contain the following headers.
User Name
Access key ID
Secret access key
During initial key pair creation, once you close the Download .csv
file dialog box, you cannot access your secret access key after
you close the dialog box. If you need a
.csv file, you'll
need to create one yourself with the required headers and your stored key pair
information. If you do not have access to your key pair information, you need to
create a new key pair.
To import the
.csv file, use the
aws configure
import command with the
--csv option as follows:
$
aws configure import --csv
For more information, see
aws_configure_import.
Region
The
Default region name identifies the Amazon.
You must specify an Amazon Region when using the Amazon CLI, either explicitly or by setting a default Region. For a list of the available Regions, see Regions and Endpoints. The Region designators used by the Amazon CLI are the same names that you see in Amazon Web Services Management Console URLs and service endpoints.
Output format
The
Default output format specifies how the results are formatted. The
value can be any of the values in the following list. If you don large data types.
text – The output is formatted as multiple lines of tab-separated string values. This can be useful to pass the output to a text processor, like
grep,
sed, or
awk.
table – The output is formatted as a table using the characters +|- to form the cell borders. It typically presents the information in a "human-friendly" format that is much easier to read than the others, but not as programmatically useful.
Profiles
A collection of settings is called a profile. By default, the Amazon CLI uses the
default profile. You can create and use additional named profiles with
varying credentials and settings by specifying the
--profile option and
assigning a name.
The following example creates a profile named
produser.
$
aws configure --profile produser
You can then specify a
--profile
and use the credentials and settings stored under that name.
profilename
$
aws s3 ls --profile
produser
To update these settings, run
aws configure again (with or without the
--profile parameter, depending on which profile you want to update) and
enter new values as appropriate. The next sections contain more information about the
files that
aws configure creates, additional settings, and named
profiles.
For more information on named profiles, see Named profiles for the Amazon CLI.
Configuration settings and precedence
The Amazon CLI uses credentials and configuration settings located in multiple places, such as the system or user environment variables, local Amazon configuration files, or explicitly declared on the command line as a parameter. Certain locations take precedence over others. The Amazon CLI credentials and configuration settings take precedence in the following order:
Command line options – Overrides settings in any other location. You can specify
--region,
--output, and
--profileas parameters on the command line.
Environment variables – You can store values in your system's environment variables.
CLI credentials file – The
credentialsand
configfile are updated when you run the command
aws configure. The
credentialsfile is located at
~/.aws/credentialson Linux or macOS, or at
C:\Users\on Windows. This file can contain the credential details for the
USERNAME\.aws\credentials
defaultprofile and any named profiles.
CLI configuration file – The
credentialsand
configfile are updated when you run the command
aws configure. The
configfile is located at
~/.aws/configon Linux or macOS, or at
C:\Users\on Windows. This file contains the configuration settings for the default profile and any named profiles.
USERNAME\.aws\config
Container credentials – You can associate an IAM role with each of your Amazon Elastic Container Service (Amazon ECS) task definitions. Temporary credentials for that role are then available to that task's containers. For more information, see IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
Instance profile credentials – You can associate an IAM role with each of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Temporary credentials for that role are then available to code running in the instance. The credentials are delivered through the Amazon EC2 metadata service. For more information, see IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Linux Instances and Using Instance Profiles in the IAM User Guide. | https://docs.amazonaws.cn/en_us/cli/latest/userguide/cli-configure-quickstart.html | CC-MAIN-2022-21 | refinedweb | 1,081 | 55.44 |
The QIMEvent class provides parameters for input method events. More...
#include <qevent.h>
Inherits QEvent.
List of all member functions. cancelled).
See also Event Classes.
Constructs a new QIMEvent with the accept flag set to FALSE. type can be one of QEvent::IMStartEvent, QEvent::IMComposeEvent or QEvent::IMEndEvent. text contains the current compostion string and cursorPosition the current position of the cursor inside text.
Sets the accept flag of the input method event object.
Setting the accept parameter indicates that the receiver of the event processed the input method event.
The accept flag is not set by default.
See also ignore().
Returns the current cursor position inside the composition string. Will return 0 for IMStartEvent and IMEndEvent.
Clears the accept flag parameter of the input method event object.
Clearing the accept parameter indicates that the event receiver does not want the input method event.
The accept flag is cleared by default.
See also accept()..
This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.1/qimevent.html | crawl-001 | refinedweb | 169 | 71.61 |
Assert statement in python is a way to check for unrecoverable conditions before proceeding further in a program. It prevents runtime errors by evaluating causes that could certainly raise an error after performing a few operations. It is similar to a self-checking mechanism for a program, a bug-free program should not be affected by asserts. An assert is equivalent to:
if not condition: raise AssertionError("my-message")
When a python program is run in an optimized mode (where __debug__ is False), as shown below, assert is ignored.
python -O main.py
The assert statement needs an expression to evaluate and an optional message, the expression should result in a Boolean value, True or False. If the result is False an AssertionError is raised with provided message.
Syntax:
assert <expression> [, "message"]
Following are a few sample usages of assert
Check if a number is even
assert num % 2 == 0, "Number is not even"
Checking for membership in a list
assert 2 in [1, 3, 4], "2 is not in the list"
Leap year detection
assert (year % 400 == 0 and year % 100 == 0) or (year % 4 ==0 and year % 100 != 0), f"{year} is not a leap year"
Usage in a function
def apply_discount(price: float, discount: float) -> float: assert 0 <= discount <= 100 # discount is in percentage discounted_price = price - (price * discount / 100) return discounted_price | https://www.python-engineer.com/posts/assert/ | CC-MAIN-2022-21 | refinedweb | 224 | 55.47 |
Step 21: Using PureData or other Software to Control the Saiko5
OSC Packet Format
Basic functionality can be easily accessed through the use of the liblo library, available for python, C, PureData, and other languages. Example code of basic python scripts for controlling lights can be found in the saiko5 archive available in the Downloads section, in the subfolder /saiko5/software/scripts/
Basic Example Code
import liblo
address = liblo.Address("192.168.1.222", "2222")
liblo.send(address, '/light/color/set', ('f', 0.8), ('f', 0.0), ('f', 0.2))
In this code example, which is all that is required to control a Saiko5 fixture, there are three lines.
1. import liblo
This imports the liblo library. You will need to have python-liblo installed for this functionality to be activated.
2. address = liblo.Address("192.168.1.222", "2222")
This creates a liblo address object, which can be used to send packets over OSC to the light at the given IP address. The lights are programmed by default to listen on port 2222, and to the computer at 192.168.1.2. However, these settings can be changed easily by uploading a modified version of the firmware. Note that if you have multiple lights, you will need to send the commands to all lights to do updates.
3. liblo.send(address, '/light/color/set', ('f', 0.8), ('f', 0.0), ('f', 0.2))
This creates and sends an OSC packet. The specification that the lights are expecting is the path '/light/color/set', followed by three floats, which correspond to the RGB brightness between 0 and 1.
Note that this is configured to send commands out as RGB data. This is not the preferred colorspace for dealing with LED lighting, and we highly recommend the use of HSI (Hue, Saturation, Intensity) for the local representation of light color. Code is available for converting from HSI to RGB in python in the saiko5 software repository at /saiko5/software/puredata/server/HSI2RGB.py
Hue is an intuitive way to think about "color", with values ranging between 0 and 360 degrees, where 0 degrees and 360 degrees are red, and as the hue increases, it passes first through yellow, then green, then cyan, then blue, then magenta before returning to red. This representation allows for the straightforward coding of steady color changes.
Saturation is an intuitive way to think about the "whiteness" of a color, with a saturation of 1.0 meaning as pure as possible of a color, and a saturation of 0.0 meaning white regardless of the hue. The use of saturation values less than 1.0 allows for the easy display of pastel colors.
Intensity is the natural representation for the brightness of a LED light fixture. Intensity is defined here as the sum of the red, green, and blue channels, between 0.0 and 1.0. This is different than the "value" used in HSV, where the value is defined as the maximum value between the red, green, and blue channels. Although the use of intensity instead of value limits the maximum brightness of the light to the brightness of a single channel alone, we feel that this is a more natural way to use a color changing light fixture. For example, in HSI colorspace, a HSI value, if intensity is constant, and the hue is changed, the total power being put out by the light fixture remains constant. However, in HSV colorspace, if the value is constant, and the hue is changed, the power being put out by the light fixture changes.
This is especially evident in the example of going from red to yellow (or any primary to secondary color). In the case of red, a HSV value of (0, 1, 1) is equivalent to a RGB value of (1, 0, 0). This is the same result that would come from using the HSI value (0, 1, 1). However, for yellow, the HSV value of (60, 1, 1) would result in an RGB value of (1, 1, 0) while a HSI value of (60, 1, 1) would result in an RGB value of (0.5, 0.5, 0). In the case of constant value, the amount of light being put out is higher for yellow than red by a factor of two, while in teh case of constant intensity, the amount of light being put out is unchanged.
The Saiko5 Software
The Saiko5 Software was primarily developed on Ubuntu 10.04. We highly recommend that to be certain that your setup will function you download and use our LiveCD image which will provide a basic Ubuntu environment with our software preinstalled and set up. However, we understand that this isn't an ideal situation for all users, and so software is also provided on our Downloads page. The saiko5 repository is split into two main folders, /saiko5/firmware/ and /saiko5/software/. This documentation will discuss the software contained in the /saiko5/software/ folder.
gui
The gui folder contains a graphical user interface written in python using wxpython for widgets. This software allows for basic control over the light fixtures using a point and click interface, including choosing a HSI color, picking IP addresses to send the commands to, and doing basic color cycling modes. To use this software, open the saikoControlGui.py in python.
scripts
The scripts folder contains basic example code for controlling the light fixtures using liblo in python. Note that using these will require manual configuration of the IP addresses of the lights you are attempting to control.
* colorcycle.py -- This is a basic color cycling program using 100% saturation and a constant rate of change.
* setpurple.py -- This is the very basic script shown above in three lines which will set the color of the chosen light to our favorite shade of purple.
* udpstresstest.py -- This script will stress test your network and the lights by sending udp packets to the lights. You can use this to determine how stable your wireless network is.
puredata
The puredata folder contains source code for doing audio analysis using puredata with an accompanying python server for actual communication with the Saiko5 light fixtures. This is the software that is used to make the videos of music response on the page. The screenshot shows the PureData + Python Server being used actively to control the Saiko5 Fixture.
* analyzer.pd -- This is the puredata analyzer. It requires pd-extended, as well as pd-aubio (the aubio music analysis library) to function.
* server/HSI2RGB.py -- This python script converts HSI values to RGB.
* server/lightconfiguration.py -- This python script contains the configuration information for the Saiko5 fixtures being controlled.
* server/lightevents.py -- This python script has the basic code for the python server. This is the code that will generally be modified to add new functionality to the server.
* server/lightserver.py -- This is the overall python script that must be running for the analyzer.pd software to function. It is designed to listen for OSC packets on port 2223, and those packets are provided by the puredata software.
| http://www.instructables.com/id/Ultra-bright-LED-Color-Changing-Spotlight-using-Op/step21/Using-PureData-or-other-Software-to-Control-the-Sa/ | CC-MAIN-2014-41 | refinedweb | 1,181 | 62.98 |
More than half the transactions under four of eight categories tracked under a scheme to reduce tax evasion lack a "valid" identification number issued by the Income-tax (I-T) Department.
This has reduced the department's ability to use the information collected to identify tax evaders. In 2007-08, the department captured around 1.2 million transactions amounting to Rs 384,185 crore (Rs 3,841.85 billion) in these four categories.
The data, captured under the annual information return (AIR), transactions ranging from cash deposits more than Rs 10 lakh (Rs 1 million) to sale or purchase of immovable property worth more than Rs 30 lakh (Rs 3 million). The I-T department now mandates people to declare these transactions in the annual tax return together with the Permanent Account Number (PAN) that it issues.
"There may be assessees who do not quote the PAN to evade tax scrutiny. There are also cases like land transactions by farmers who fail to quote a PAN because they do not have one," said a Central Board Direct Taxes (CBDT) official.
The latest data for assessment year 2007-08 show that only 27 per cent of transactions reported under property sales valued at Rs 30 lakh and above have valid PAN. The second lowest is cash deposits. The other two categories that are under 50 per cent mark are credit card payments and immovable property purchases worth more than Rs 30 lakh and above.
However, the percentage of valid PANs under these four categories have steadily increased since assessment year 2005-06. For AY 2005-06, only 12 per cent of transactions captured under credit card payments are valid. In two years, this number has more than tripled. Similarly, less than 20 per cent of transactions for house purchase and sales had a valid PAN.
But, even in AY 2007-08, four other categories -- mutual fund purchases worth Rs 2 lakh (Rs 200,000) and more, bonds/debentures purchases worth Rs 5 lakh (Rs 500,000) and more, payment of Rs 1 lakh (Rs 00,000) or more for acquiring shares and payment of Rs 5 lakh or more for purchasing RBI bonds -- have a 90 per cent rate of compliance.
The value of transactions has also increased manifold -- from Rs 16,79,234 crore (Rs 16,792.34 billion) worth of transactions in AY 2005-06 to Rs 54,15,035 crore (Rs 54,150.35 billion) in AY 2007-08. It is not known how many of these transactions have enabled the I-T department to pin down tax evaders and increase tax collections along with penalty and penal interest.
To enforce PAN compliance, the CBDT official said the I-T department investigates many cases every year, but is constrained by lack of manpower to look into all such cases.
Moneywiz Live!
this
Users
Comment
article | http://inhome.rediff.com/money/2008/dec/09tax-most-transactions-made-without-pan.htm | crawl-003 | refinedweb | 477 | 60.14 |
1 //XRadar2 //Copyright (c) 2004, 2005, Kristoffer Kvam3 //All rights reserved.4 //5 //Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:6 //7 //Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.8 //Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.9 //Neither the name of Kristoffer Kvam nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.10 //11 /.12 //13 //See licence.txt for dependancies to other open projects.14 15 16 package org.xradar.test.b;17 18 19 import sun.io.ByteToCharASCII;20 21 /**22 * Class in the XRadar test application. This class should not reference anything else23 * than external classes.24 * 25 * @author Kristoffer Kvam26 */27 public class B1 {28 private String test;29 30 public B1(String test) {31 this.test = test;32 new ByteToCharASCII();33 }34 35 public String getTest() {36 return test;37 }38 39 public void setTest(String test) {40 this.test = test;41 }42 43 }44
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/xradar/test/b/B1.java.htm | CC-MAIN-2017-30 | refinedweb | 227 | 56.86 |
Using Boost Libraries with Mex function in MATLAB
28 views (last 30 days)
Show older comments
Edited: John Kelly on 27 Feb 2015
I am trying to write a Mex function and compile it in the MATLAB command window using
>> mex Cluster.cpp
I have already done mex -setup and written other Mex functions that run without error.
I am using the boost libraries for C++ in my program. This is one of the headers that I am using:
#include <boost/random/uniform_int.hpp>
It gives me the following error message when I try to compile:
fatal error C1083: Cannot open include file: 'boost/random /uniform_int.hpp': No such file or directory
I then included the full path name:
#include <C:/boost_1_46_1/boost/random/uniform_int.hpp>
and get a new error message:
C:\boost_1_46_1\boost\random\uniform_int.hpp(22) : fatal error C1083: Cannot open include file: 'boost/config.hpp': No such file or directory.
This indicates that it was able to open the header file that it previously couldn't. But inside the uniform_int.hpp file there is a #include boost/config.hpp and it can't open that header file.
I don't really feel like messing with the boost header files to try to get it to work. The boost libraries also work fine when I use them in Visual C++. Is there any way I can get this to work by doing something in the MATLAB editor or at the command window prompt? I know in Visual C++ I can just add additional "include" directories. Is there a similar way of doing this through MATLAB?
Accepted Answer
Kaustubha Govind on 23 May 2011
You should just need to add additional include directories using the -I argument:
mex Cluster.cpp -IC:\boost_1_46_1
More Answers (3)
Yup. Find your mexopts file (mexopts.bat, since you seem to be on Windows, which you'll find at something like c:\users\craig\appdata\roaming\mathworks\RXXXXx\mexopts.bat I think). At the top you'll find it defines the INCLUDE variable - just add the path to your boost files there, for example:
INCLUDE=C:\boost_1_46_1;C:\My\Other\Path;%MATLAB%\include;
1 Comment
Chirag Gupta on 23 May 2011
Edited: John Kelly on 27 Feb 2015
Another solution might be to create your mex files directly from Visual Studio:
See Also
Categories
Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!Start Hunting! | https://nl.mathworks.com/matlabcentral/answers/7955-using-boost-libraries-with-mex-function-in-matlab?s_tid=prof_contriblnk | CC-MAIN-2022-27 | refinedweb | 410 | 56.15 |
Hello,
I wish to use System.Threading on my monogame project. I have it working on Windows and Android, but on Windows Store apps i'm getting stuck in dependencies.
Visual Studio Says:
The type or namespace name 'Thread' could not be found (are you missing a using directive or an assembly reference?)
If I try and add a reference to System.Threading. I get the following messagebox:
The type the compiler complains about is just Thread.
Does any knows how I can use Threading on windows store apps?
Thanks!
EDIT: It seems that Thread does not exist for store apps: will try this solution from stackoverflow:
Apparently Windows Store apps don't support low level threading provided by System.Threading.Thread. They do support Either Task or BackgroundWorker. I choose BackgroundWorker because its fairly easy to set up and for basic "processing data while showing a progress indicator" scenarios it has you covered.! I can also confirm that it will work on: Windows, Windows Store, Android, Mac and IOS.
I'm shocked to find out uwp doesn't support threading. I guess I better research backgroundworkers.
I was surprised too. But technically it support "threading" but just not the low level Thread class. It only support managed types. I personally like BackgroundWorkder the best! Also its from .net 2.0. So low version dependency! Task (factory.startnew()) is from .net 4.0. Also task seems very abstract and just a complicated way of using the ThreadPool. Where BackgroundWorker just gives me what I want.. That's running a function in a separate thread and an easy API for handeling progress.
You can always just convert the desktop version too...
But then the only way of getting it in the Windows Store is thru Microsoft Desktop Brige.! Seems like an unnecessary action when you can just change to BackgrounWorker or Task.
I love nudging people to self discovery | http://community.monogame.net/t/threading-on-a-windows-store-app/10676/7 | CC-MAIN-2019-30 | refinedweb | 318 | 68.47 |
skia
/
third_party
/
libpng
/
0e13545712dc39db5689452ff3299992fc0a8377
/
.
/
Updated and distributed by Glenn Randers-Pehrson
libpng 1.0 beta 6 - version 0.96 - May 28, 1997
Updated and distributed by Andreas Dilger
libpng 1.0 beta 2 - version 0.88 - January 26, 1996
For conditions of distribution and use, see copyright
notice in png.h. Copyright (c) 1995, 1996 Guy Eric
Schalnat, Group 42, Inc.
Updated/rewritten per request in the libpng FAQ.
Libpng was written as a companion to the PNG specification, as a way
of reducing the amount of time and effort it takes to support the PNG
file format in application programs.
The PNG specification (second edition), November 2003, is available as
a W3C Recommendation and as an ISO Standard (ISO/IEC 15948:2004 (E)) at
<>.
The W3C and ISO documents have identical technical content.
The PNG-1.2 specification is available at
<>.
It is technically equivalent
to the PNG specification (second edition) but has some additional material.
The PNG-1.0 specification is available as RFC 2083 at
<> and as a
W3C Recommendation at <>.. Both are internal structures that are no longer exposed
in the libpng interface (as of libpng 1.5.0)., and direct access to the png_info fields was
deprecated..
The png_struct structure is the object used by the library to decode a
single image. As of 1.5.0 this structure is also not exposed.
Almost all libpng APIs require a pointer to a png_struct as the first argument.
Many (in particular the png_set and png_get APIs) also require a pointer
to png_info as the second argument. Some application visible macros
defined in png.h designed for basic data access (reading and writing
integers in the PNG format) don't take a png_info pointer, but it's almost
always safe to assume that a (png_struct*) has to be passed to call an API
function.
You can have more than one png_info structure associated with an image,
as illustrated in pngtest.c, one for information valid prior to the
IDAT chunks and another (called "end_info" below) for things after them.
The png.h header file is an invaluable reference for programming with libpng.
And while I'm on the topic, make sure you include the libpng header file:
#include <png.h>
and also (as of libpng-1.5.0) the zlib header file, if you need it:
#include <zlib.h>
Types
The png.h header file defines a number of integral types used by the
APIs. Most of these are fairly obvious; for example types corresponding
to integers of particular sizes and types for passing color values.
One exception is how non-integral numbers are handled. For application
convenience most APIs that take such numbers have C (double) arguments;
however, internally PNG, and libpng, use 32 bit signed integers and encode
the value by multiplying by 100,000. As of libpng 1.5.0 a convenience
macro PNG_FP_1 is defined in png.h along with a type (png_fixed_point)
which is simply (png_int_32).
All APIs that take (double) arguments also have a matching API that
takes the corresponding fixed point integer arguments. The fixed point
API has the same name as the floating point one with "_fixed" appended.
The actual range of values permitted in the APIs is frequently less than
the full range of (png_fixed_point) (-21474 to +21474). When APIs require
a non-negative argument the type is recorded as png_uint_32 above. Consult
the header file and the text below for more information.
Special care must be take with sCAL chunk handling because the chunk itself
uses non-integral values encoded as strings containing decimal floating point
numbers. See the comments in the header file.
Configuration
The main header file function declarations are frequently protected by C
preprocessing directives of the form:
#ifdef PNG_feature_SUPPORTED
declare-function
#endif
...
#ifdef PNG_feature_SUPPORTED
use-function
#endif
The library can be built without support for these APIs, although a
standard build will have all implemented APIs. Application programs
should check the feature macros before using an API for maximum
portability. From libpng 1.5.0 the feature macros set during the build
of libpng are recorded in the header file "pnglibconf.h" and this file
is always included by png.h.
If you don't need to change the library configuration from the default, skip to
the next section ("Reading").
Notice that some of the makefiles in the 'scripts' directory and (in 1.5.0) all
of the build project files in the 'projects' directory simply copy
scripts/pnglibconf.h.prebuilt to pnglibconf.h. This means that these build
systems do not permit easy auto-configuration of the library - they only
support the default configuration.
The easiest way to make minor changes to the libpng configuration when
auto-configuration is supported is to add definitions to the command line
using (typically) CPPFLAGS. For example:
CPPFLAGS=-DPNG_NO_FLOATING_ARITHMETIC
will change the internal libpng math implementation for gamma correction and
other arithmetic calculations to fixed point, avoiding the need for fast
floating point support. The result can be seen in the generated pnglibconf.h -
make sure it contains the changed feature macro setting.
If you need to make more extensive configuration changes - more than one or two
feature macro settings - you can either add -DPNG_USER_CONFIG to the build
command line and put a list of feature macro settings in pngusr.h or you can set
DFA_XTRA (a makefile variable) to a file containing the same information in the
form of 'option' settings.
A. Changing pnglibconf.h
A variety of methods exist to build libpng. Not all of these support
reconfiguration of pnglibconf.h. To reconfigure pnglibconf.h it must either be
rebuilt from scripts/pnglibconf.dfa using awk or it must be edited by hand.
Hand editing is achieved by copying scripts/pnglibconf.h.prebuilt to
pnglibconf.h and changing the lines defining the supported features, paying
very close attention to the 'option' information in scripts/pnglibconf.dfa
that describes those features and their requirements. This is easy to get
wrong.
B. Configuration using DFA_XTRA
Rebuilding from pnglibconf.dfa is easy if a functioning 'awk', or a later
variant such as 'nawk' or 'gawk', is available. The configure build will
automatically find an appropriate awk and build pnglibconf.h.
The scripts/pnglibconf.mak file contains a set of make rules for doing the
same thing if configure is not used, and many of the makefiles in the scripts
directory use this approach.
When rebuilding simply write a new file containing changed options and set
DFA_XTRA to the name of this file. This causes the build to append the new file
to the end of scripts/pnglibconf.dfa. The pngusr.dfa file should contain lines
of the following forms:
everything = off
This turns all optional features off. Include it at the start of pngusr.dfa to
make it easier to build a minimal configuration. You will need to turn at least
some features on afterward to enable either reading or writing code, or both.
option feature on
option feature off
Enable or disable a single feature. This will automatically enable other
features required by a feature that is turned on or disable other features that
require a feature which is turned off. Conflicting settings will cause an error
message to be emitted by awk.
setting feature default value
Changes the default value of setting 'feature' to 'value'. There are a small
number of settings listed at the top of pnglibconf.h, they are documented in the
source code. Most of these values have performance implications for the library
but most of them have no visible effect on the API. Some can also be overridden
from the API.
This method of building a customized pnglibconf.h is illustrated in
contrib/pngminim/*. See the "$(PNGCONF):" target in the makefile and
pngusr.dfa in these directories.
C. Configuration using PNG.
Apart from the global setting "everything = off" all the options listed above
can be set using macros in pngusr.h:
#define PNG_feature_SUPPORTED
is equivalent to:
option feature on
#define PNG_NO_feature
is equivalent to:
option feature off
#define PNG_feature value
is equivalent to:
setting feature default value
Notice that in both cases, pngusr.dfa and pngusr.h, the contents of the
pngusr file you supply override the contents of scripts/pnglibconf.dfa
If confusing or incomprehensible behavior results it is possible to
examine the intermediate file pnglibconf.dfn to find the full set of
dependency information for each setting and option. Simply locate the
feature in the file and read the C comments that precede it.
This method is also illustrated in the contrib/pngminim/* makefiles and
pngusr.h.
III. Reading.
Setup (false) if the bytes match the
corresponding bytes of the PNG signature, or nonzero (true);
}
if (fread(header, 1, number, fp) != number)
{
return ERROR;
};
}
If you want to use your own memory allocation routines,
use a libpng that was built with PNG_USER_MEM_SUPPORTED defined, longjmp buffer;
}
Pass (png_infopp)NULL instead of &end_info if you didn't create
an end_info structure. change the zlib compression buffer size to be used while
reading compressed data with
png_set_compression_buffer_size(png_ptr, buffer_size);
where the default size is 8192 bytes. Note that the buffer size
is changed immediately and the buffer is reallocated immediately,
instead of setting a flag to be acted upon later.
If you want CRC errors to be handled in a different manner than
the default, use
png_set_crc_action(png_ptr, crit_action, ancil_action);
The values for png_set_crc_action() say how libpng is to handle CRC errors in
ancillary and critical chunks, and whether to use the data contained
therein. Starting with libpng-1.6.26, this also governs how an ADLER32 error
is handled while reading the IDAT chunk. Note that it is impossible to
"discard" data in a critical chunk.
Choices for (int) crit_action are
PNG_CRC_DEFAULT 0 error/quit
PNG_CRC_ERROR_QUIT 1 error/quit
PNG_CRC_WARN_USE 3 warn/use data
PNG_CRC_QUIET_USE 4 quiet/use data
PNG_CRC_NO_CHANGE 5 use the current value
Choices for (int) ancil_action are
PNG_CRC_DEFAULT 0 error/quit
PNG_CRC_ERROR_QUIT 1 error/quit
PNG_CRC_WARN_DISCARD 2 warn/discard data
PNG_CRC_WARN_USE 3 warn/use data
PNG_CRC_QUIET_USE 4 quiet/use data
PNG_CRC_NO_CHANGE 5 use the current value */
}
(You can give it another name that you like instead of "read_row_callback")
To inform libpng about your function, use
png_set_read_status_fn(png_ptr, read_row_callback);
When this function is called the row has already been completely processed.
Unknown-chunk handling
Now you get to set the way the library processes unknown chunks in the
input PNG stream. Both known and unknown chunks will be read. Normal
behavior is that known chunks will be parsed into information in
various info_ptr members while unknown chunks will be discarded. This
behavior can be wasteful if your application will never use some known
chunk types. To change this, you can call:
png_set_keep_unknown_chunks(png_ptr, keep,
chunk_list, num_chunks);
keep - 0: default unknown chunk handling
1: ignore; do not keep
2: keep only if safe-to-copy
3: keep even if unsafe-to-copy
You can use these definitions:
PNG_HANDLE_CHUNK_AS_DEFAULT 0
PNG_HANDLE_CHUNK_NEVER 1
PNG_HANDLE_CHUNK_IF_SAFE 2
PNG_HANDLE_CHUNK_ALWAYS 3
chunk_list - list of chunks affected (a byte string,
five bytes per chunk, NULL or '. The IHDR and IEND chunks should not be named in
chunk_list; if they are, libpng will process them normally anyway.
If you know that your application will never make use of some particular
chunks, use PNG_HANDLE_CHUNK_NEVER (or 1) as demonstrated below.
Here is an example of the usage of png_set_keep_unknown_chunks(),
where the private "vpAg" chunk will later be processed by a user chunk, 2, NULL, 0);
/* except for vpAg: */
png_set_keep_unknown_chunks(read_ptr, 2, vpAg, 1);
/* also ignore unused known chunks: */
png_set_keep_unknown_chunks(read_ptr, 1, unused_chunks,
(int)(sizeof unused_chunks)/5);
#endif images
anyway because of potential buffer overflow conditions).
You should put this statement after you create the PNG structure and
before calling png_read_info(), png_read_png(), or png_process_data().
When writing a PNG datastream, put this statement before calling
png_write_info() or png_write_png().
If you need to retrieve the limits that are being applied, use
width_max = png_get_user_width_max(png_ptr);
height_max = png_get_user_height_max(png_ptr);
The PNG specification sets no limit on the number of ancillary chunks
allowed in a PNG datast);
and you can retrieve the limit with
chunk_malloc_max = png_get_chunk_malloc_max(png_ptr);
Any chunks that would cause either of these limits to be exceeded will
be ignored.
Information about your system
If you intend to display the PNG or to incorporate it in other image data you
need to tell libpng information about your display or drawing surface so that
libpng can convert the values in the image to match the display.
From libpng-1.5.4 this information can be set before reading the PNG file
header. In earlier versions png_set_gamma() existed but behaved incorrectly if
called before the PNG file header had been read and png_set_alpha_mode() did not
exist.
If you need to support versions prior to libpng-1.5.4 test the version number
as illustrated below using "PNG_LIBPNG_VER >= 10504" and follow the procedures
described in the appropriate manual page.
You give libpng the encoding expected by your system expressed as a 'gamma'
value. You can also specify a default encoding for the PNG file in
case the required information is missing from the file. By default libpng
assumes that the PNG data matches your system, to keep this default call:
png_set_gamma(png_ptr, screen_gamma, output_gamma);
or you can use the fixed point equivalent:
png_set_gamma_fixed(png_ptr, PNG_FP_1*screen_gamma,
PNG_FP_1*output_gamma);
If you don't know the gamma for your system it is probably 2.2 - a good
approximation to the IEC standard for display systems (sRGB). If images are
too contrasty or washed out you got the value wrong - check your system
documentation!
Many systems permit the system gamma to be changed via a lookup table in the
display driver, a few systems, including older Macs, change the response by
default. As of 1.5.4 three special values are available to handle common
situations:
PNG_DEFAULT_sRGB: Indicates that the system conforms to the
IEC 61966-2-1 standard. This matches almost
all systems.
PNG_GAMMA_MAC_18: Indicates that the system is an older
(pre Mac OS 10.6) Apple Macintosh system with
the default settings.
PNG_GAMMA_LINEAR: Just the fixed point value for 1.0 - indicates
that the system expects data with no gamma
encoding.
You would use the linear (unencoded) value if you need to process the pixel
values further because this avoids the need to decode and re;
see below). Otherwise you must do the composition yourself and, in this case,
you may need to call png_set_alpha_mode:
#if PNG_LIBPNG_VER >= 10504
png_set_alpha_mode(png_ptr, mode, screen_gamma);
#else
png_set_gamma(png_ptr, screen_gamma, 1.0/screen_gamma);
#endif
The screen_gamma value is the same as the argument to png_set_gamma; however,
how it affects the output depends on the mode. png_set_alpha_mode() sets the
file gamma default to 1/screen_gamma, so normally you don't need to call
png_set_gamma. If you need different defaults call png_set_gamma() before
png_set_alpha_mode() - if you call it after it will override the settings made
by png_set_alpha_mode().
The mode is as follows:
PNG_ALPHA_PNG: The data is encoded according to the PNG on the color values; most, maybe all, color
correction software has no handling for the alpha channel and,
anyway, the math to handle pre-multiplied component values is
unnecessarily complex.
Before you do any arithmetic on the component values you need
to remove the gamma encoding and multiply out the alpha
channel. See the PNG specification for more detail. It is
important to note that when an image with an alpha channel is
scaled, linear encoded, pre-multiplied component values must
be used!
The remaining modes assume you don't need to do any further color gamma before the image is displayed.
If your system doesn't do that, yet still seems to
perform arithmetic on the pixels without decoding them,
it is broken - check out the modes below.
With PNG_ALPHA_STANDARD libpng always produces linear
component values, whatever screen_gamma you supply. The
screen_gamma value is, however, used as a default for
the file gamma if the PNG file has no gamma information.
If you call png_set_gamma() after png_set_alpha_mode() you
will override the linear encoding. Instead the
pre-multiplied pixel values will be gamma encoded but
the alpha channel will still be linear. This may
actually match the requirements of some broken software,
but it is unlikely.
While linear 8-bit data is often used it has
insufficient precision for any image with a reasonable
dynamic range. To avoid problems, and if your software
supports it, use png_set_expand_16() to force all
components to 16 bits.
PNG_ALPHA_OPTIMIZED: This mode is the same as PNG_ALPHA_STANDARD
except that completely opaque pixels are gamma encoded according to
the screen_gamma value. Pixels with alpha less than 1.0
will still have linear components.
Use this format if you have control over your
compositing software and so don't do other arithmetic
(such as scaling) on the data you get from libpng. Your
compositing software can simply copy opaque pixels to
the output but still has linear values for the
non-opaque pixels.
In normal compositing, where the alpha channel encodes
partial pixel coverage (as opposed to broad area
translucency), the inaccuracies of the 8-bit
representation of non-opaque pixels are irrelevant.
You can also try this format if your software is broken;
it might look better.
PNG_ALPHA_BROKEN: This is PNG_ALPHA_STANDARD; however, all manifests as a subtle halo around composited parts of the
image. You may not even perceive this as a halo; the composited part of
the image may simply appear separate from the background, as though it had
been cut out of paper and pasted on afterward.
If you don't have to deal with bugs in software or hardware, or if you can fix
them, there are three recommended ways of using png_set_alpha_mode():
png_set_alpha_mode(png_ptr, PNG_ALPHA_PNG,
screen_gamma);
You can do color correction on the result (libpng does not currently
support color correction internally). When you handle the alpha channel
you need to undo the gamma encoding and multiply out the alpha.
png_set_alpha_mode(png_ptr, PNG_ALPHA_STANDARD,
screen_gamma);
png_set_expand_16(png_ptr);
If you are using the high level interface, don't call png_set_expand_16();
instead pass PNG_TRANSFORM_EXPAND_16 to the.
If you don't need, or can't handle, the alpha channel you can call
png_set_background() to remove it by compositing against a fixed color. Don't
call png_set_strip_alpha() to do this - it will leave spurious pixel values in
transparent parts of this image.
png_set_background(png_ptr, &background_color,
PNG_BACKGROUND_GAMMA_SCREEN, 0, 1);
The background_color is an RGB or grayscale value according to the data format
libpng will produce for you. Because you don't yet know the format of the PNG
file, if you call png_set_background at this point you must arrange for the
format produced by libpng to always have 8-bit or 16-bit components and then
store the color as an 8-bit or 16-bit color as appropriate. The color contains
separate gray and RGB component values, so you can let libpng produce gray or
RGB output according to the input format, but low bit depth grayscale images
must always be converted to at least 8-bit format. (Even though low bit depth
grayscale images can't have an alpha channel they can have a transparent
color!)
You set the transforms you need later, either as flags to the high level
interface or libpng API calls for the low level interface. For reference the
settings and API calls required are:
8-bit values:
PNG_TRANSFORM_SCALE_16 | PNG_EXPAND
png_set_expand(png_ptr); png_set_scale_16(png_ptr);
If you must get exactly the same inaccurate results
produced by default in versions prior to libpng-1.5.4,
use PNG_TRANSFORM_STRIP_16 and png_set_strip_16(png_ptr)
instead.
16-bit values:
PNG_TRANSFORM_EXPAND_16
png_set_expand_16(png_ptr);
In either case palette image data will be expanded to RGB. If you just want
color data you can add PNG_TRANSFORM_GRAY_TO_RGB or png_set_gray_to_rgb(png_ptr)
to the list.
Calling png_set_background before the PNG file header is read will not work
prior to libpng-1.5.4. Because the failure may result in unexpected warnings or
errors it is therefore much safer to call png_set_background after the head has
been read. Unfortunately this means that prior to libpng-1.5.4 it cannot be
used with the high level interface.
The high-level read interface_SCALE_16 Strip 16-bit samples to
8-bit accurately
PNG_TRANSFORM_STRIP_16 Chop 16-bit samples to
8-bit less accurately_GRAY_TO_RGB Expand grayscale samples
to RGB (or GA to RGBA)
PNG_TRANSFORM_EXPAND_16 Expand samples to 16 bits
(This excludes setting a background color, doing gamma transformation,
quantizing, and setting filler.) If this is the case, simply do this:
png_read_png(png_ptr, info_ptr, png_transforms, NULL)
where png_transforms is an integer containing the bitwise.)
You must use png_transforms and not call any png_set_transform() functions
when you use png_read_png().
if (height > PNG_UINT_32_MAX/(sizeof (png_byte)))
png_error (png_ptr,
"Image is too tall to process in memory");
if (width > PNG_UINT_32_MAX/pixel_size)
png_error (png_ptr,
"Image is too wide to process in memory");
row_pointers = png_malloc(png_ptr,
height*
do it, and it'll be free'ed by libpng when you call png_destroy_*().
The low-level read interface.
This also copies some of the data from the PNG file into the decode structure
for use in later transformations. Important information copied in is:
1) The PNG file gamma from the gAMA chunk. This overwrites the default value
provided by an earlier call to png_set_gamma or png_set_alpha_mode.
2) Prior to libpng-1.5.4 the background color from a bKGd chunk. This
damages the information provided by an earlier call to png_set_background
resulting in unexpected behavior. Libpng-1.5.4 no longer does this.
3) The number of significant bits in each component value. Libpng uses this to
optimize gamma handling by reducing the internal lookup table sizes.
4) The transparent color information from a tRNS chunk. This can be modified by
a later call to png_set_tRNS.
Querying the info structure
variables. In such situations, the
png_get_image_width() and png_get_image_height()
functions described below are safer.
width = png_get_image_width(png_ptr,
info_ptr);
height = png_get_image_height(png_ptr,
info_ptr);
bit_depth = png_get_bit_depth(png_ptr,
info_ptr);
color_type = png_get_color_type(png_ptr,
info_ptr);
interlace_type = png_get_interlace_type(png_ptr,
info_ptr);
compression_type = png_get_compression_type(png_ptr,
info_ptr);
filter_method = png_get_filter_type(png_ptr,
info_ptr);
png_get_gAMA(png_ptr, info_ptr, &file_gamma);
png_get_gAMA_fixed(png_ptr, info_ptr, &int_file_gamma);
file_gamma - the gamma at which the file is
written (PNG_INFO_gAMA)
int_file_gamma - 100,000 times the gamma at which the
file is written
png_get_cHRM(png_ptr, info_ptr, &white_x, &white_y, &red_x,
&red_y, &green_x, &green_y, &blue_x, &blue_y)
png_get_cHRM_XYZ(png_ptr, info_ptr, &red_X, &red_Y, &red_Z,
&green_X, &green_Y, &green_Z, &blue_X, &blue_Y,
&blue_Z)
png_get_cHRM_fixed(png_ptr, info_ptr, &int_white_x,
&int_white_y, &int_red_x, &int_red_y,
&int_green_x, &int_green_y, &int_blue_x,
&int_blue_y)
png_get. (PNG_INFO_cHRM)
_INFO_cHRM)_alpha,
&num_trans, &trans_color);
trans_alpha - array of alpha (transparency)
entries for palette (PNG_INFO_tRNS)
num_trans - number of transparent entries
(PNG_INFO_tRNS)
trans_color - graylevel or color sample values of
the single transparent color for
non-paletted images (PNG_INFO_tRNS)
png_get_eXIf_1(png_ptr, info_ptr, &num_exif, &exif);
(PNG_INFO_eXIf)
exif - Exif profile (array of png_byte) (of type
png_color_16p) (PNG_VALID_bKGD)
valid 16-bit red, green and blue
values, regardless of color_type
num_comments = png_get_text(png_ptr, info_ptr,
&text_ptr, &num_text);
num_comments - number of comments);
num_spalettes - number of sPLT chunks read.
palette_ptr - array of palette structures holding
contents of one or more sPLT chunks
read.
png_get_oFFs(png_ptr, info_ptr, &offset_x, &offset_y,
&unit_type);
offset_x - positive offset from the left edge
of the screen (can be negative)
offset_y - positive offset from the top edge
of the screen (can be negative)
(expressed as a string) value of "location" is a bitwise "or" of
PNG_HAVE_IHDR (0x01)
PNG_HAVE_PLTE (0x02)
PNG_AFTER_IDAT (0x08)
Note that because of the way the resolutions are
stored internally, the inch conversions won't
come out to exactly even number. For example,
72 dpi is stored as 0.28346 pixels/meter, and
when this is retrieved it is 71.9988 dpi, so
be sure to round the returned value appropriately
if you want to display a reasonable-looking result.. The
remark about inexact inch conversions applies here
as well, because a value in inches can't always be
converted to microns and back without some loss
of precision.
For more information, see().
Input transformations.
Transformations you request are ignored if they don't have any meaning for a
particular input data format. However some transformations can have an effect
as a result of a previous transformation. If you specify a contradictory set of
transformations, for example both adding and removing the alpha channel, you
cannot predict the final result.
The color used for the transparency values should be supplied in the same
format/depth as the current image data. It is stored in the same format/depth
as the image data in a tRNS chunk, so this is what libpng expects for this data.
The color used for the background value depends on the need_expand argument as
described() or png_set_add_alpha()
is called to insert filler bytes, either before or after each RGB triplet.
16-bit RGB data will be returned RRGGBB RRGGBB, with the most significant
byte of the color value first, unless png_set
be modified with png_set_filler(), png_set_add_alpha(), png_set_strip_16(),
or png_set_scale (png_get_valid(png_ptr, info_ptr,
PNG_INFO_tRNS)) png_set_tRNS_to_alpha(png_ptr);
if (color_type == PNG_COLOR_TYPE_GRAY &&
bit_depth < 8) png_set_expand_gray_1_2_4_to_8(png_ptr);
The first two functions are actually aliases for png_set_expand(), added
in libpng version 1.0.4, with the function names expanded to improve code
readability. In some future version they may actually do different
things.
As of libpng version 1.2.9, png_set_expand_gray_1_2_4_to_8() was
added. It expands the sample depth without changing tRNS to alpha.
As of libpng version 1.5.2, png_set_expand_16() was added. It behaves as
png_set_expand(); however, the resultant channels have 16 bits rather than 8.
Use this when the output color or gray channels are made linear to avoid fairly
severe accuracy loss.
if (bit_depth < 16)
png_set_expand_16(png_ptr);
PNG can have files with 16 bits per channel. If you only can handle
8 bits per channel, this will strip the pixels down to 8-bit.
if (bit_depth == 16)
#if PNG_LIBPNG_VER >= 10504
png_set_scale_16(png_ptr);
#else
png_set_strip_16(png_ptr);
#endif
(The more accurate "png_set_scale_16()" API became available in libpng version
1.5.4).
If you need to process the alpha channel on the image separately from the image
data (for example if you convert it to a bitmap mask) it is possible to have
libpng strip the channel leaving just RGB or gray data:
if (color_type & PNG_COLOR_MASK_ALPHA)
png_set_strip_alpha(png_ptr);
If you strip the alpha channel you need to find some other way of dealing with
the information. If, instead, you want to convert the image to an opaque
version with no alpha channel use png_set_background; see below.
As of libpng version 1.5.2, almost all useful expansions are supported, the
major ommissions are conversion of grayscale to indexed images (which can be
done trivially in the application) and conversion of indexed to grayscale (which
can be done by a trivial manipulation of the palette.)
In the following table, the 01 means grayscale with depth<8, 31 means
indexed with depth<8, other numerals represent the color type, "T" means
the tRNS chunk is present, A means an alpha channel is present, and O
means tRNS or alpha is present but all pixels in the image are opaque.
FROM 01 31 0 0T 0O 2 2T 2O 3 3T 3O 4A 4O 6A 6O
TO
01 - [G] - - - - - - - - - - - - -
31 [Q] Q [Q] [Q] [Q] Q Q Q Q Q Q [Q] [Q] Q Q
0 1 G + . . G G G G G G B B GB GB
0T lt Gt t + . Gt G G Gt G G Bt Bt GBt GBt
0O lt Gt t . + Gt Gt G Gt Gt G Bt Bt GBt GBt
2 C P C C C + . . C - - CB CB B B
2T Ct - Ct C C t + t - - - CBt CBt Bt Bt
2O Ct - Ct C C t t + - - - CBt CBt Bt Bt
3 [Q] p [Q] [Q] [Q] Q Q Q + . . [Q] [Q] Q Q
3T [Qt] p [Qt][Q] [Q] Qt Qt Qt t + t [Qt][Qt] Qt Qt
3O [Qt] p [Qt][Q] [Q] Qt Qt Qt t t + [Qt][Qt] Qt Qt
4A lA G A T T GA GT GT GA GT GT + BA G GBA
4O lA GBA A T T GA GT GT GA GT GT BA + GBA G
6A CA PA CA C C A T tT PA P P C CBA + BA
6O CA PBA CA C C A tT T PA P P CBA C BA +
Within the matrix,
"+" identifies entries where 'from' and 'to' are the same.
"-" means the transformation is not supported.
"." means nothing is necessary (a tRNS chunk can just be ignored).
"t" means the transformation is obtained by png_set_tRNS.
"A" means the transformation is obtained by png_set_add_alpha().
"X" means the transformation is obtained by png_set_expand().
"1" means the transformation is obtained by
png_set_expand_gray_1_2_4_to_8() (and by png_set_expand()
if there is no transparency in the original or the final
format).
"C" means the transformation is obtained by png_set_gray_to_rgb().
"G" means the transformation is obtained by png_set_rgb_to_gray().
"P" means the transformation is obtained by
png_set_expand_palette_to_rgb().
"p" means the transformation is obtained by png_set_packing().
"Q" means the transformation is obtained by png_set_quantize().
"T" means the transformation is obtained by
png_set_tRNS_to_alpha().
"B" means the transformation is obtained by
png_set_background(), or png_strip_alpha().
When an entry has multiple transforms listed all are required to cause the
right overall transformation. When two transforms are separated by a comma
either will do the job. When transforms are enclosed in [] the transform should
do the job but this is currently unimplemented - a different format will result
if the suggested transformations are used.=0xffff and
PNG_FILLER_AFTER which will generate RGBA pixels.
Note that png_set_filler() does not change the color type. If you want
to do that, you can add a true alpha channel with
if (color_type == PNG_COLOR_TYPE_RGB ||
color_type == PNG_COLOR_TYPE_GRAY)
png_set_add_alpha(png_ptr, filler, PNG_FILLER_AFTER);
where "filler" contains the alpha value to assign to each pixel.
The png_set_add_alpha() function was added in libpng-1.2.7.(png_ptr, error_action,
double red_weight, double
green_weight: weight of green component
If either weight is negative, default
weights are used.
In the corresponding fixed point API the red_weight and green_weight values are
simply scaled by 100,000:
png_set_rgb_to_gray(png_ptr, error_action,
png_fixed_point red_weight,
png_fixed_point green_weight);. Background and sBIT data
will be silently converted to grayscale, using the green channel
data for sBIT, regardless of the error_action setting.
The default values come from the PNG file cHRM chunk if present; otherwise, the
defaults correspond to the ITU-R recommendation 709, and also the sRGB color
space, as recommended in the Charles Poynton's Colour FAQ,
<>
can be determined.
The png_set_background() function has been described already; it tells libpng to
composite images with alpha or simple transparency against the supplied
background color. For compatibility with versions of libpng earlier than
libpng-1.5.4 it is recommended that you call the function after reading the file
header, even if you don't want to use the color in a bKGD chunk, if one exists.
If the PNG file contains a bKGD chunk (PNG_INFO_bKGD valid),
you may use this color, or supply another color more suitable for
the current display (e.g., the background color from a web page). You
need to tell libpng how the color is represented, both the format of the
component values in the color (the number of bits) and the gamma encoding of the
color. The function takes two arguments, background_gamma_mode and need_expand
to convey this information; however, only two combinations are likely to be
useful:
png_color_16 my_background;
png_color_16p image_background;
if (png_get_bKGD(png_ptr, info_ptr, &image_background))
png_set_background(png_ptr, image_background,
PNG_BACKGROUND_GAMMA_FILE, 1/*needs to be expanded*/, 1);
else
png_set_background(png_ptr, &my_background,
PNG_BACKGROUND_GAMMA_SCREEN, 0/*do not expand*/, 1);
The second call was described above - my_background is in the format of the
final, display, output produced by libpng. Because you now know the format of
the PNG it is possible to avoid the need to choose either 8-bit or 16-bit
output and to retain palette images (the palette colors will be modified
appropriately and the tRNS chunk removed.) However, if you are doing this,
take great care not to ask for transformations without checking first that
they apply!
In the first call the background color has the original bit depth and color type
of the PNG file. So, for palette images the color is supplied as a palette
index and for low bit greyscale images the color is a reduced bit value in
image_background->gray.
If you didn't call png_set_gamma() before reading the file header, for example
if you need your code to remain compatible with older versions of libpng prior
to libpng-1.5.4, this is the place to call it.
Do not call it if you called png_set_alpha_mode(); doing so will damage the
settings put in place by png_set_alpha_mode(). (If png_set_alpha_mode() is
supported then you can certainly do png_set_gamma() before reading the PNG
header.)
This API unconditionally sets the screen and file gamma values, so it will
override the value in the PNG file unless it is called before the PNG file
reading starts. For this reason you must always call it with the PNG file
value when you call it in this position:
if (png_get_gAMA(png_ptr, info_ptr, &file_gamma))
png_set_gamma(png_ptr, screen_gamma, file_gamma);
else
png_set_gamma(png_ptr, screen_gamma, 0.45455);
If you need to reduce an RGB file to a paletted file, or if a paletted
file has more entries
maximum_colors. If there is a histogram, libpng_quantize(png_ptr, palette, num_palette,
max_screen_colors, histogram, 1);
}
else
{
png_color std_color_cube[MAX_SCREEN_COLORS] =
{ ... colors ... };
png_set_quantize_structp png_ptr, png_row_infop
row_info, png_bytep data)
See pngtest.c for a working example. Your function will be called
after all of the other transformations have been processed. Take care with
interlaced images if you do the interlace yourself - the width of the row is the
width in 'row_info', not the overall image width.
If supported, libpng provides two information routines that you can use to find
where you are in processing the image:
png_get_current_pass_number(png_structp png_ptr);
png_get_current_row_number(png_structp png_ptr);
Don't try using these outside a transform callback - firstly they are only
supported if user transforms are supported, secondly they may well return
unexpected results unless the row is actually being processed at the moment they
are called..
png_read_update_info(png_ptr, info_ptr);
This is most useful to update the info structure's rowbytes
field so you can use it to allocate your image memory. This function
will also update your palette with the correct screen_gamma and
background if these have been given with the calls above. You may
only call png_read_update_info() once with a particular. return the values corresponding to the original PNG image.
After you call png_read_update_info the values refer to the image
that libpng will output. Consequently you must call all the png_set_
functions before you call png_read_update_info(). This is particularly
important for png_set_interlace_handling() - if you are going to call
png_read_update_info() you must call png_set_interlace_handling() before
it unless you want to receive interlaced output.
Reading image data() (unless you call
png_read_update_info()));".
It is almost always better to have libpng handle the interlacing for you.
If you want the images, as is likely,.
You then need to read the whole image 'number_of_passes' times. Each time
will distribute the pixels from the current pass to the correct place in
the output image, so you need to supply the same rows to png_read_rows in);
If you don't want libpng to handle the interlacing details, just call
png_read_rows() PNG_INTERLACE_ADAM7_PASSES times to read in all the images.
Each of the images is a valid image by itself; however, you will almost
certainly need to distribute the pixels from each sub-image to the
correct place. This is where everything gets very tricky.
If you want to retrieve the separate images you must pass the correct
number of rows to each successive call of png_read_rows(). The calculation
gets pretty complicated for small images, where some sub-images may
not even exist because either their width or height ends up zero.
libpng provides two macros to help you in 1.5 and later versions:
png_uint_32 width = PNG_PASS_COLS(image_width, pass_number);
png_uint_32 height = PNG_PASS_ROWS(image_height, pass_number);
Respectively these tell you the width and height of the sub-image
corresponding to the numbered pass. 'pass' is in in the range 0 to 6 -
this can be confusing because the specification refers to the same passes
as 1 to 7! Be careful, you must check both the width and height before
calling png_read_rows() and not call it for that pass if either is zero.
You can, of course, read each sub-image row by row. If you want to
produce optimal code to make a pixel-by-pixel transformation of an
interlaced image this is the best approach; read each row of each pass,
transform it, and write it out to a new interlaced image.
If you want to de-interlace the image yourself libpng provides further
macros to help that tell you where to place the pixels in the output image.
Because the interlacing scheme is rectangular - sub-image pixels are always
arranged on a rectangular grid - all you need to know for each pass is the
starting column and row in the output image of the first pixel plus the
spacing between each pixel. As of libpng 1.5 there are four macros to
retrieve this information:
png_uint_32 x = PNG_PASS_START_COL(pass);
png_uint_32 y = PNG_PASS_START_ROW(pass);
png_uint_32 xStep = 1U << PNG_PASS_COL_SHIFT(pass);
png_uint_32 yStep = 1U << PNG_PASS_ROW_SHIFT(pass);
These allow you to write the obvious loop:
png_uint_32 input_y = 0;
png_uint_32 output_y = PNG_PASS_START_ROW(pass);
while (output_y < output_image_height)
{
png_uint_32 input_x = 0;
png_uint_32 output_x = PNG_PASS_START_COL(pass);
while (output_x < output_image_width)
{
image[output_y][output_x] =
subimage[pass][input_y][input_x++];
output_x += xStep;
}
++input_y;
output_y += yStep;
}
Notice that the steps between successive output rows and columns are
returned as shifts. This is possible because the pixels in the subimages
are always a power of 2 apart - 1, 2, 4 or 8 pixels - in the original
image. In practice you may need to directly calculate the output coordinate
given an input coordinate. libpng provides two further macros for this
purpose:
png_uint_32 output_x = PNG_COL_FROM_PASS_COL(input_x, pass);
png_uint_32 output_y = PNG_ROW_FROM_PASS_ROW(input_y, pass);
Finally a pair of macros are provided to tell you if a particular image
row or column appears in a given pass:
int col_in_pass = PNG_COL_IN_INTERLACE_PASS(output_x, pass);
int row_in_pass = PNG_ROW_IN_INTERLACE_PASS(output_y, pass);
Bear in mind that you will probably also need to check the width and height
of the pass in addition to the above to be sure the pass even exists!
With any luck you are convinced by now that you don't want to do your own
interlace handling. In reality normally the only good reason for doing this
is if you are processing PNG files on a pixel-by-pixel basis and don't want
to load the whole file into memory when it is interlaced.
libpng includes a test program, pngvalid, that illustrates reading and
writing of interlaced images. If you can't get interlacing to work in your
code and don't want to leave it to libpng (the recommended approach), see
how pngvalid.c does it.
Finishing a sequential read
After you are finished reading the image through the
low-level interface, you can finish reading the file.
If you
separate.
png_infop end_info = png_create_info_struct(png_ptr);
if (!end_info)
{
png_destroy_read_struct(&png_ptr, &info_ptr,
(png_infopp)NULL);
return ERROR;
}
png_read_end(png_ptr, end_info);
If you are not interested, you should still call png_read_end()
but you can pass NULL, avoiding the need to create an end_info structure.
If you do this, libpng will not process any chunks after IDAT other than
skipping over them and perhaps (depending on whether you have called
png_set_crc_action) checking their CRCs while looking for the IEND chunk.
png_read_end(png_ptr, (png_infop)NULL);
If you don't call png_read_end(), then your file pointer will be
left pointing to the first chunk after the last IDAT, which is probably
not what you want if you expect to read something beyond the end of
the PNG datastream.
When you are done, you can free all memory allocated by libpng like this:
png_destroy_read_struct(&png_ptr, &info_ptr,
&end_info);
or, if you didn't create an end_info structure,
png_destroy_read_struct(&png_ptr, &info_ptr,
(png_infopp)NULL);
It is also possible to individually free the info_ptr members that
point to libpng-allocated storage with the following function:
png_free_data(png_ptr, info_ptr, mask, seq)
mask - identifies data to be freed, a mask
containing the bitwise_calloc() and passed in via a png_set_*() function, with
png_data_freer(png_ptr, info_ptr, freer, mask)
freer - one of
PNG_DESTROY_WILL_FREE_DATA
PNG_SET_WILL_FREE_DATA
PNG_USER_WILL_FREE_DATA
mask - which data elements are affected
same choices as in png_calloc() bitwise OR of one or
more of
PNG_INFO_gAMA, PNG_INFO_sBIT,
PNG_INFO_cHRM, PNG_INFO_PLTE,
PNG_INFO_tRNS, PNG_INFO_bKGD,
PNG_INFO_eXIf,.
Reading PNG files progressively
The progressive reader is slightly different from than 256 bytes
yet). When this function returns, you may
want to display any rows that were generated
in the row callback if you don't already do
so there.
*/
png_process_data(png_ptr, info_ptr, buffer, length);
/* At this point you can call png_process_data_skip if
you want to handle data the library will skip yourself;
it simply returns the number of bytes to skip (and stops
libpng skipping that number of bytes on the next
png_process_data call). is where you turn on interlace handling,
assuming you don't want to do it yourself.
If you need to you can stop the processing of
your original input data at this point by calling
png_process_data_pause. This returns the number
of unprocessed bytes from the last png_process_data
call - it is up to you to ensure that the next call
sees these bytes again. If you don't want to bother
with this you can get libpng to cache the unread
bytes by setting the 'save' parameter (see png.h) but
then libpng will have to copy the data internally.
*/
}
/*.
If you did not turn on interlace handling then
the callback is called for each row of each
sub-image when the image is interlaced. In this
case 'row_num' is the row in the sub-image, not
the row in the output image as it is in all other
cases.
For the non-NULL rows of interlaced images when
you have switched on libpng interlace handling, if you switch on interlace handling;
*/
png_progressive_combine_row(png_ptr, old_row,
new_row);
/* where old_row is what was displayed.
You can also call png_process_data_pause in this
callback - see above.
*/
}.
Setup.
Checking for invalid palette index on write was added at libpng
1.5.10. If a pixel contains an invalid (out-of-range) index libpng issues
a benign error. This is enabled by default because this condition is an
error according to the PNG specification, Clause 11.3.2, but the error can
be ignored in each png_ptr with
png_set_check_for_invalid_index(png_ptr, */
}
(You can give it another name that you like instead of "write_row_callback")
To inform libpng about your function, use
png_set_write_status_fn(png_ptr, write_row_callback);
When this function is called the row has already been completely processed and
it has also been written bitwise OR of one
or more PNG_FILTER_NAME masks.
*/
png_set_filter(png_ptr, 0,
PNG_FILTER_NONE | PNG_FILTER_VALUE_NONE |
PNG_FILTER_SUB | PNG_FILTER_VALUE_SUB |
PNG_FILTER_UP | PNG_FILTER_VALUE_UP |
PNG_FILTER_AVG | PNG_FILTER_VALUE_AVG |
PNG_FILTER_PAETH | PNG_FILTER_VALUE_PAETH|
PNG_ALL_FILTERS | PNG_FAST.
#include zlib.h
/* Set the zlib compression level */
png_set_compression_level(png_ptr,
Z_BEST_COMPRESSION);
/* Set other zlib parameters for compressing IDAT */)
/* Set zlib parameters for text compression
* If you don't call these, the parameters
* fall back on those defined for IDAT chunks
*/
png_set_text_compression_mem_level(png_ptr, 8);
png_set_text_compression_strategy(png_ptr,
Z_DEFAULT_STRATEGY);
png_set_text_compression_window_bits(png_ptr, 15);
png_set_text_compression_method(png_ptr, 8);
Setting the contents of info for output)
If you call png_set_IHDR(), the call must appear before any of the
other png_set_*() functions, because they might require access to some of
the IHDR settings. The remaining png_set_*() functions can be called
in any order.
If you wish, you can reset the compression_type, interlace_type, or
filter_method later by calling png_set_IHDR() again; if you do this, the
width, height, bit_depth, and color_type must be the same in each call.
png_set_PLTE(png_ptr, info_ptr, palette,
num_palette);
palette - the palette for the file
(array of png_color)
num_palette - number of entries in the palette
png_set_gAMA(png_ptr, info_ptr, file_gamma);
png_set_gAMA_fixed(png_ptr, info_ptr, int_file_gamma);
file_gamma - the gamma at which the image was
created (PNG_INFO_gAMA)
int_file_gamma - 100,000 times the gamma at which
the image was created
png_set_cHRM(png_ptr, info_ptr, white_x, white_y, red_x, red_y,
green_x, green_y, blue_x, blue_y)
png_set_cHRM_XYZ(png_ptr, info_ptr, red_X, red_Y, red_Z, green_X,
green_Y, green_Z, blue_X, blue_Y, blue_Z)
png_set_cHRM_fixed(png_ptr, info_ptr, int_white_x, int_white_y,
int_red_x, int_red_y, int_green_x, int_green_y,
int_blue_x, int_blue_y)
png_set_alpha,
num_trans, trans_color);
trans_alpha - array of alpha (transparency)
entries for palette (PNG_INFO_tRNS)
num_trans - number of transparent entries
(PNG_INFO_tRNS)
trans_color - graylevel or color sample values
(in order red, green, blue) of the
single transparent color for
non-paletted images (PNG_INFO_tRNS)) (PNG_VALID_bKGD)
png_set_text(png_ptr, info_ptr, text_ptr, num_text);
expressed as a string a few hundred.)
You must use png_transforms and not call any png_set_transform() functions
when you use png_write_png().); | https://skia.googlesource.com/third_party/libpng/+/0e13545712dc39db5689452ff3299992fc0a8377/libpng-manual.txt | CC-MAIN-2021-43 | refinedweb | 7,377 | 51.38 |
Registered 2018-12-19 by Tim Van Steenburgh
The Kubernetes Worker (data plane) source charm. This project is for bug tracking only. Code lives in https:/
Project information
- Part of:
- Charmed Distribution of Kubernetes
- Driver:
- Tim Van Steenburgh
- Licence:
- Apache Licence
View full history Series and milestones
trunk series is the current focus of development.
All bugs Latest bugs reported
- Bug #1816494: Short names are used instead of FQDNs for subjAltNames in certificates
Reported on 2019-02-18
- Bug #1815685: increase registry memory limit
Reported on 2019-02-12
- Bug #1814356: can not deploy flannel - regression of snapd
Reported on 2019-02-03
- Bug #1812894: Can't bring up GPU worker, docker daemon fail.
Reported on 2019-01-22
- Bug #1812382: registry action creating configmap in the wrong namespace
Reported on 2019-01-18
More contributors Top contributors
- Alexandre Gomes 17 points
- yen 9 points
- Alexander Turek 9 points
- Dmitrii Shcherbakov 9 points
- Paul Collins 9 points | https://launchpad.net/charm-kubernetes-worker | CC-MAIN-2019-13 | refinedweb | 157 | 50.7 |
Hey, i am trying to write a program that will evaluate n!/(k!(n-k)!)
right now i am just trying to get the basic formula without the factorials to work ( i can just do a function call later and get the actual values) but no luck. all the variables have values but im still outputting 0....
here is my code
#include <iostream> using namespace std; int factorial (int); //function prototype or declaration int main() { int n1, n2, result, counter, n(0), k(0), partialanswer, partialdenom, numerator, denominator, answer; cout<<"Please enter two positive integer values less than 15. "<<endl; cin>>n1; cin>>n2; /* while (n1 < 1 || n1 >15) { cout<<"You entered an incorrect integer value!"<<endl; cout<<"Please enter a positive integer value less than 15. "<<endl; cin>>n1; cin>>n2; } //result = factorial(Q); ///result = partialanswer; //cout<< "The factorial of "<<n1 <<" write out the rest "<<result<<"."<<endl; */ partialdenom = n1-n2; denominator = n2 * partialdenom; answer = n1 / denominator; //factorial = (n / (k(n-k))); // result = answer; cout<< "The factorial of "<<n1 <<"!/"<< n2 <<"!("<< n1 << "!-" << n2 << "!) is "<<answer<< "."<<endl; cout << denominator << endl; cout << n1 << endl; cout << n2 << endl; cout << answer << endl; return 0; }
any ideas on why this is happening?
thanks | https://www.daniweb.com/programming/software-development/threads/230776/output-of-0-when-i-know-everything-has-a-value | CC-MAIN-2017-34 | refinedweb | 197 | 51.28 |
A quick way to make Python classes automatically memoize (a.k.a. cache) their instances based on the arguments with which they are instantiated (i.e. args to their __init__).
It's a simple way to avoid repetitively creating expensive-to-create objects, and to make sure objects that have a natural 'identity' are created only once. If you want to be fancy, mementos implements the Multiton software pattern.
Usage
Say you have a class Thing that requires expensive computation to create, or that should be created only once. Easy peasy:
from mementos import mementos class Thing(mementos): def __init__(self, name): self.name = name ...
Then Thing objects will be memoized:
t1 = Thing("one") t2 = Thing("one") assert t1 is t2 # same instantiation args => same object
Under the Hood
When you define a class class Thing(mementos), it looks like you're subclassing the mementos class. Not really. mementos is a metaclass, not a superclass. The full expression is equivalent to class Thing(with_metaclass(MementoMetaclass, object)), where with_metaclass and MementoMetaclass are also provided by the mementos module. Metaclasses are not normal superclasses; instead they define how a class is constructed. In effect, they define the mysterious __new__ method that most classes don't bother defining. In this case, mementos says in effect, "hey, look in the cache for this object before you create another one."
If you like, you can use the longer invocation with the full with_metaclass spec, but it's not necessary unless you define your own memoizing functions. More on that below.
Python 2 vs. Python 3
Python 2 and 3 have different forms for specifying metaclasses. In Python 2:
from mementos import MementoMetaclass class Thing(object): __metaclass__ = MementoMetaclass # now I'm memoized! ...
Whereas Python 3 uses:
class Thing3(object, metaclass=MementoMetaclass): ...
mementos supports either of these. But Python 2 and Python 3 don't recognize each other's syntax for metaclass specification, so straightforward code for one won't even compile for the other. The with_metaclass() function shown above is the way to go for cross-version compatibility. It's very similar to that found in the six cross-version compatibility module.
Careful with Call Signatures
MementoMetaclass caches on call signature, which can vary greatly in Python, even for logically identical calls. This is especially true if kwargs are used. E.g. def func(a, b=2): pass can be called func(1), func(1, 2), func(a=1), func(1, b=2), or func(a=2, b=2). All of these resolve to the same logical call--and this is just for two parameters! If there is more than one keyword, they can be arbitrarily ordered, creating many logically identical permutations. instantiated with a limited number of parameters, and you can take care that you instantiate them with parallel call signatures. Since this works 99% of the time and has a simple implementation, it's worth the price of this inelegance.
Partial Signatures
If you want only part of the initialization-time call signature (i.e. arguments to __init__) to define an object's identity/cache key, there are two approaches. One is to use MementoMetaclass and design __init__ without superflu identity might come in
- See CHANGES.rst for the extended Change Log.
-, pytest-cov, coverage and tox. Continuous integration testing with Travis-CI. Packaging linting with pyroma.
- The author, Jonathan Eunice or @jeunice on Twitter welcomes your comments and suggestions.
Installation
To install or upgrade to the latest version:
pip install -U mementos.
Testing
To run the module tests, use one of these commands:
tox # normal run - speed optimized tox -e py27 # run for a specific version only (e.g. py27, py34) tox -c toxcov.ini # run full coverage tests | https://bitbucket.org/jeunice/mementos/src | CC-MAIN-2017-34 | refinedweb | 616 | 57.77 |
Opened 10 years ago
Closed 8 years ago
#4339 closed (duplicate)
Override an existing file, using Model.save_FIELD_file method,
Description
When we use save_FIELD_file, If the filename already exists, Django keep adding an underscore to the name of
the file until the filename doesn't exist.
But if I want to override it ?
I think it would be handy to use save_FILED_file(..., override=True) (and keep it False by default)
Attachments (4)
Change History (32)
comment:1 Changed 10 years ago by
comment:2 Changed 10 years ago by
comment:3 Changed 10 years ago by
Maybe the thing should be called "overwrite" not "override"
Also, documentation about that should be available somewhere ;)
comment:4 Changed 10 years ago by
Umm... I'm not sure this is an automatic "accepted", since we've had hesitation in the past about automatically overriding files (it has security and annoyance implications, for a start). Moving to "design decision needed" until we've had time to think about it.
comment:5 Changed 10 years ago by
I marked as accepted because this patch does not automatically overwrite a file, you need to override a method on your model, in which case you are supposed to know what you are doing and pass the right parameter to the super() method.
On the other side, not having this means that anybody that wants to overwrite files needs to write his/her own method checking for file existence, etc. Which doesn't seem very DRY.
Just my 0.02, well 0.002 :P
comment:6 Changed 10 years ago by
Now I can use
class UserProfile(models.Model): avatar = models.ImageField(blank=True, upload_to='users/avatars/', overwrite=True)
the other use is to set explicitly overwrite to True,
userprofile.save_avatar_file(filename, content, overwrite=True)
comment:7 follow-up:?
comment:8 Changed 10 years ago by
when some of core developers have time to decide whether it's a good idea or not (see Malcolm's comments above). If you want to hurry it up, then feel free to post a message to django-developers pointing out the good points of this.
comment?
There are currently over three hundred tickets at the stage of "design decision needed", and that stage, by its nature, requires a fair amount of attention from multiple developers. As a result, it can take quite some time for a more or less "normal" process of discussion to get around to any particular ticket unless someone steps up -- as Chris suggested -- and gets the ball rolling. The place to do that is the django-developers list (and please note that a short "this ticket needs discussion" or "I need this feature, why don't you add it already" message is generally frowned upon -- to get discussion going, mention the ticket and provide an argument for or against the proposed solution).
comment:10 Changed 10 years ago by
I was really surprised when discovered that the is no simple way to overwrite an uploaded file in Django. So I'm already using this patch and hope it will be merged soon.
comment:11 Changed 10 years ago by
I agree that this patch should be merged (and I started a topic in django-devel), but I disagree that there is no simple way to do it in Django.
Check this:
def _save_FIELD_file(self, field, filename, raw_contents, save=True): fullpath_filename = os.path.join(settings.MEDIA_ROOT, field.get_filename(filename)) if os.path.exists(fullpath_filename): os.remove(fullpath_filename) super(MyModel, self)._save_FIELD_file(field, filename, raw_contents, save)
Changed 10 years ago by
Patch which deletes a file when a new one is upload, but only if we're the only record which references that file.
comment:12 Changed 10 years ago by
I added a patch which takes a different approach on this subject. The reasoning is simple. The current code defines the following behavior: when an instance of a model is deleted, the file will also be deleted if the deleted model instance was the only instance which referenced this file. Hence, the curent code is already destructive. To me, it follows logically that the code should do the same thing when a new file is uploaded. If the new file upload is named the same as the old one, it'll be an effective overwrite. If not, it's just remove a file from the filesystem that would otherwise be unreferenceable and not manageable from the admin. Additionally, if there are other records that reference the file, the original one will still be preserved.
comment:13 Changed 10 years ago by
comment:14 Changed 9 years ago by
comment:15 Changed 9 years ago by
comment:16 Changed 9 years ago by
Since this is not regarded a bug, there is no reason to give this to milestone 1.0 beta.
comment:17 Changed 9 years ago by:18 Changed 9 years ago by
comment:19 Changed 9 years ago by
I'm re-opening this because I respectfully disagree with Marty. Don't get me wrong, I much prefer the new FileStorage backend interface, but according to the current docs - 1 the file storage object is never given any context with which to decide whether or not to overwrite the file.
Here is my dilemna, I have an Image model, that allows customers to upload images. Inevitably, the customers want to replace a specific image with a new one, so they re-upload a new version of the file and it gets a new name. So things break, like CSS background image urls. etc. In the old monkey-patch, I could query the models to find out if the current model-instance was the only row that referred to the image, and if so it would overwrite it. If not, it would rename the file as normal.
With the current system, my custom storage instance has no access to the necessary request specific context, like model-instance in order to decide whether or not to overwrite the file. So it must generally make a decision of "always overwrite files even when they have the same name" or "always generate a new name, even if the old filename will be orphaned (no more references to it in the database.)"
comment:20 Changed 9 years ago by
comment:21 Changed 9 years ago by
Actually, with the default backend you've got a DoS entry if you allow your users to upload a profile picture with an
ImageField (even if you check the size of the stuff they upload) - since it will leave the orphaned images behind. The attacker just needs to reupload files to fill up the available disk space which may be scarce on shared hosting.
In any case, the current behaviour doesn't really make a lot of sense when you override upload_to to set a filename (e.g. using the db id) instead of relying on the name from the browser.
I think it should work this way: when you reupload a file, it should be the same as first deleting the old file and then writing the new one (maybe in reverse order, with a bit of code to handle the case where the names are identical). What do you think?
BTW, there's a snippet here with a custom backend that always overwrites:
comment:22 Changed 9 years ago by
With Django's new file access API there is more flexibity to deal with such problems.
I think it's not a good idea to override get_available_name since it's a public methode
that shouldn't have a side effect (I suppose) due to his name
{{{class OverwriteStorage(FileSystemStorage):
def _save(self, name, content):
if self.exists(name):
self.delete(name)
return super(OverwriteStorage, self)._save(name, content)
}}}
Thank you
comment:23 Changed 9 years ago by
Without having tried it, I don't think the code given my elaatifi (overriding _save) will work, since at this point get_available_name
comment:24 Changed 9 years ago by
Apparently I screwed up that last comment pretty badly - sorry. Not sure why the status changed.
comment:25 Changed 8 years ago by
Overriding both _save and get_available_name seems to do the trick for me. It *seems* that it should be unnecessary to override both, but overwriting _save wasn't enough, and only overriding get_available_name resulted in an infinite loop since _save demands that the file not exist (i.e. it won't do an actual overwrite -- but rather a delete, then a write).
from django.core.files.storage import FileSystemStorage class OverwriteStorage(FileSystemStorage): def _save(self, name, content): if self.exists(name): self.delete(name) return super(OverwriteStorage, self)._save(name, content) def get_available_name(self, name): return name
I disclaim any copyright to the above code. (I'm guessing it would be considered too trivial to be copyrighted anyway, but just in case)
Also, while this is maybe a bit more likely to cause race conditions, nothing particularly harmful happens as a result. The thread will just have to try again until it succeeds.
comment:26 Changed 8 years ago by
Just a quick thought from a UI standpoint in the Admin, it sure would be nice when you add an image for there to be a pop-up that showed you the image and proposed three options. A similar pop-up for an image could post some meta-data.
Would you like to:
Overwrite the existing image with the new one? (sitewide)
Save the new image under a different name? (affects only this instance)
Use the existing image? (No need to create a new one with an _ in the name)
comment:27 Changed 8 years ago by
rizumu: This bug is not particular to the admin. Also what you're talking about is the situation where the user is supposed to control the filename. This bug is about the case where the programmer is supposed to control the filename (where currently there's no simple knob in Django to say "this filename, and I mean it, no underscores").
Undo what I presume is an accidental title change, since it makes no sense. | https://code.djangoproject.com/ticket/4339 | CC-MAIN-2017-43 | refinedweb | 1,688 | 58.42 |
CirclingBot
From RoboWiki
Hi, I mixed two sample bots and then edited it to my liking.
Please let me know what you think, and give me some programming tips! This is my first robot.
It is heavily commented, so you don't need much time to understand what it is doing, and beginners might be able to understand it to (even I do, so they must!)
Background Information
- Bot Name
- Circlingbot
- Extends
- AdvancedRobot
- What's special about it?
- This bot finds an enemy, and then starts spiraling around it.
- Great, I want to try it. Where can I download it?
- Source code is below, compile it yourself
- How competitive is it?
- not at all.
Strategy
- How does it fire?
- Straight fire, bullet size based on distance.
- How does it dodge bullets?
- It rides almost parallel to the selected enemy. Too bad in melee other bots can still attack it easily
- What does it save between rounds and matches?
- Nothing
Additional Information
- Where did you get the name?
- Ehm, it is a circling bot?
- Can I use your code?
- Sure thing
- What's next for your robot?
- + If the scanner comes across a robot that is closer, I want it to select that one. I don't know how I am going to do that yet..
- + There is a little glitch that when my robot comes too close it will stop firing because it can not aim properly. If I take some time I might be able to fix that.
- + It could use a way to predict where to shoot, instead of just firing directly at its current location. However, I think I am going to save that for a next bot.
- Does it have any White Whales?
- Every serious bot? However, in melee it does win from all the sample bots, and from all the super sample bots! both in 600x800 as in 1000x1000!
- What other robot(s) is it based on?
- It started originally by editing sample.Crazy and adding code from sample.TrackFire into it.
Code
package hapiel; import robocode.*; import static robocode.util.Utils.normalRelativeAngleDegrees; import java.awt.*; /** * CirclingBot 1.0 - an edit origionally on sample.Crazy * * By Hapiel. Please do point out improvement points and problems on the wiki page * I am eager to learn! * robowiki.net/wiki/CirclingBot */ public class CirclingBot extends AdvancedRobot { boolean movingForward; // Is set to true when setAhead is called, set to false on setBack boolean inWall; // Is true when robot is near the wall. public void run() { // Set colors setBodyColor(new Color(221, 175, 19)); setGunColor(new Color(11,77,113)); setRadarColor(new Color(99,228,199)); setBulletColor(new Color(255,238,0)); setScanColor(new Color(255,241,46)); // Every part of the robot moves freely from the others. setAdjustRadarForRobotTurn(true); setAdjustGunForRobotTurn(true); setAdjustRadarForGunTurn(true); // Check if the robot is closer than 50px from the wall. if (getX() <= 50 || getY() <= 50 || getBattleFieldWidth() - getX() <= 50 || getBattleFieldHeight() - getY() <= 50) { inWall = true; } else { inWall = false; } setAhead(40000); // go ahead until you get commanded to do differently setTurnRadarRight(360); // scan until you find your first enemy movingForward = true; // we called setAhead, so movingForward is true while (true) { /** * Check if we are near the wall, and check if we have noticed (inWall boolean) yet. * If we have noticed yet, do nothing * If we have not noticed yet, reverseDirection and set inWall to true * If we are out of the wall, reset inWall */ if (getX() > 50 && getY() > 50 && getBattleFieldWidth() - getX() > 50 && getBattleFieldHeight() - getY() > 50 && inWall == true) { inWall = false; } if (getX() <= 50 || getY() <= 50 || getBattleFieldWidth() - getX() <= 50 || getBattleFieldHeight() - getY() <= 50 ) { if ( inWall == false){ reverseDirection(); inWall = true; } } // If the radar stopped turning, take a scan of the whole field until we find a new enemy if (getRadarTurnRemaining() == 0.0){ setTurnRadarRight(360); } execute(); // execute all actions set. } } /** * onHitWall: There is a small chance the robot will still hit a wall */ public void onHitWall(HitWallEvent e) { // Bounce off! reverseDirection(); } /** * reverseDirection: Switch from ahead to back & vice versa */ public void reverseDirection() { if (movingForward) { setBack(40000); movingForward = false; } else { setAhead(40000); movingForward = true; } } public void onScannedRobot(ScannedRobotEvent e) { // Calculate exact location of the robot double absoluteBearing = getHeading() + e.getBearing(); double bearingFromGun = normalRelativeAngleDegrees(absoluteBearing - getGunHeading()); double bearingFromRadar = normalRelativeAngleDegrees(absoluteBearing - getRadarHeading()); //Spiral around our enemy. 90 degrees would be circling it (parallel at all times) // 80 and 100 make that we move a bit closer every turn. if (movingForward){ setTurnRight(normalRelativeAngleDegrees(e.getBearing() + 80)); } else { setTurnRight(normalRelativeAngleDegrees(e.getBearing() + 100)); } // If it's close enough, fire! if (Math.abs(bearingFromGun) <= 4) { setTurnGunRight(bearingFromGun); setTurnRadarRight(bearingFromRadar); // keep the radar focussed on the enemy // We check gun heat here, because calling fire() // uses a turn, which could cause us to lose track // of the other robot. // The close the enmy robot, the bigger the bullet. // The more precisely aimed, the bigger the bullet. // Don't fire us into disability, always save .1 if (getGunHeat() == 0 && getEnergy() > .2) { fire(Math.min(4.5 - Math.abs(bearingFromGun) / 2 - e.getDistance() / 250, getEnergy() - .1)); } } // otherwise just set the gun to turn. // else { setTurnGunRight(bearingFromGun); setTurnRadarRight(bearingFromRadar); } // Generates another scan event if we see a robot. // We only need to call this if the radar // is not turning. Otherwise, scan is called automatically. if (bearingFromGun == 0) { scan(); } } /** * onHitRobot: Back up! */ public void onHitRobot(HitRobotEvent e) { // If we're moving the other robot, reverse! if (e.isMyFault()) { reverseDirection(); } } } | http://robowiki.net/wiki/CirclingBot | CC-MAIN-2017-26 | refinedweb | 886 | 58.79 |
gnutls_x509_privkey_export_pkcs8 — API function
#include <gnutls/x509.h>
Holds the key
the format of output params. One of PEM or DER.
the password that will be used to encrypt the key.
an ORed sequence of gnutls_pkcs_encrypt_flags_t
will contain a private key PEM or DER encoded
holds the size of output_data (and will be replaced by the actual size of parameters)−8 in the default PBES2
encryption schemas, or ASCII for the PKCS12 schemas.
If the buffer provided is not long enough to hold the output, then *output_data_size is updated and GNUTLS_E_SHORT_MEMORY_BUFFER will be returned.
If the structure is PEM encoded, it will have a header of "BEGIN ENCRYPTED PRIVATE KEY" or "BEGIN PRIVATE KEY" if encryption is not: | https://man.linuxexplore.com/htmlman3/gnutls_x509_privkey_export_pkcs8.3.html | CC-MAIN-2021-31 | refinedweb | 117 | 63.39 |
you wanted, and create a new project from within Sublime. It actually became fairly popular, with over 6000 installs.
But about four years ago, I stopped using Sublime Text. So I stopped working on the plugin. Issues and PRs and feature requests piled up. Recently Ben Felder asked if the project was still alive and if he could help out getting it into shape. He’s been doing a great job cleaning things up and has brought the project back to life again, which I’m really excited about, and very grateful for.
But, personally, I’m still not using Sublime. So this got me thinking about recreating the basic functionality as a standalone project. Last Friday I sat down and wrote up a list of features I wanted in such a tool. And one week later, yesterday, I released v1.0 of a project codenamed “tinpig”.
The concept is pretty simple. You have templates, which are folders of files, potentially with replaceable tokens in them. You choose a template, specify a location and it copies the files over, replacing the tokens with values you define. Not rocket science.
There are other tools out there that are probably way more powerful. I think one of tinpig’s key features is its simplicity. The whole reason you would use a tool like this is to quickly spin up projects of different types. For prototyping, testing stuff, great for creative coding, etc. You don’t want to have to learn a whole new tool or templating language to do that. You can just create a project the way you like it, copy the files over into your templates directory and add a json file with a name and description, and you have a new template that lets you recreate that project anywhere you want within about 20 seconds.
But if you want, you can go in and add tokens and a few other power features to make the project even more powerful.
Just thinking… I should make a tinpig template template. A meta template that makes setting up a template even easier… Sorry, got sidetracked.
Anyway, try it out. If you want. Let me know if it’s useful. Tell me how it could be better.
Oh, if you come up with a useful template, feel free to do a PR in the tinpig-templates repo. Or just chuck it over to me and I’ll put it in there.
Pingback:Introducing “tinpig” – Javascript World
What do you use now instead of Sublime Text
I used Webstorm for a while. Then switched over to Vim in the last year.
I’ll just add you to the list of developers I respect who use Vim. Someday I’ll make the switch or admit I’m not capable.
vim isn’t for everybody. It’s a very different way of working and it takes a while to get used to. It’s a long term investment and you don’t really know if it’s going to pay off until you’ve put a lot of time into it. For me, it has and I love it. I wouldn’t try to force it on anybody, but if you’re interested in giving it a go, feel free to ask me any questions.
Hi Keith, Thanks for making tinpig! I’ve been wanting something like it for a while. I had a thought for a feature that would be nice for my primary use case.
I have a boilerplate python Google App Engine project I often manually copy and then update a whole bunch of paths. Because Python module namespaces match their corresponding directory structure, most of the time updating is spent updating module namespace imports to match wherever the particular project was copied to. An example:
Say the boilerplate project lives at `corp/experimental/boilerplate/googleappengine/`. Several of the files will have include lines like `from corp.experimental.boilerplate.googleappengine import appconstants`
I _could_ create a token prompt called “namespaced project destination” and during tinpig project creation manually type the namespaced version of the destination. But since I already tell tinpig the destination path to which I’d like to copy the template project, it’d be great if tinpig could do this path -> namespace translation for me. Perhaps there could be a couple special path-related tokens tinpig is aware of for use in templates:
1. One that represent the specified destination to which the template is copied. Something like “${TIN_PIG_DESTINATION}”
2. A dot-delimited, namespace version of the specified destination to which the template is copied. Something like “${TIN_PIG_DESTINATION_AS_NAMESPACE}”.
So then in the main.py template you could have something like:
`from ${TIN_PIG_DESTINATION_AS_NAMESPACE} import constants`
and it will translate `~/path/that/was/entered` to `path.that.was.entered`
Thoughts?
PS: I first read ‘tinpig’ as ‘tinyping’.
I was already planning on adding TINPIG_PROJECT_DIR, TINPIG_PROJECT_PATH, TINPIG_FILE_NAME, TINPIG_FILE_PATH. Maybe TINPIG_FILE_DIR too. But these would be absolute paths.
From there, I could subtract the project dir from the file path to get the path from the root of the project. Then turn that into dot-separated for a namespace.
The only problem is that that assumes the name space root is the same as the project root. What if a project was set up as project/src/com/foo/thing. The namespace would wind up as “src.com.foo.thing”
So I think I’d have to add another template property, something like TINPIG_NAMESPACE_ROOT, which in the example would be set to “src”. If left blank, it would assume the project root.
So, all doable. I just have to evaluate the extra complexity along with how many people would find this useful.
But… I keep going back and forth on whether or not I actually fully understand your use case. 😐 I may be going down the wrong path entirely here.
Nice, I have been looking for something like this for a while.
I’m not sure if you are looking for features/help (didn’t want to just start making issues in the repo) but if there was a way to display a description of each template and if the project path could auto-complete or auto-suggest, I know that there are some inquirer modules that facilitate this.
I could probably help with the above features if you are looking for contributors
Definitely open to any suggestions and will consider any pull requests. My main concern is keeping the tool simple and avoiding feature bloat.
Pingback:tinpig 1.4.0 and sales pitch – BIT-101 | https://www.bit-101.com/blog/2018/03/introducing-tinpig/ | CC-MAIN-2019-18 | refinedweb | 1,092 | 74.59 |
»
Certification
»
Certification Results
Author
Need estimation for SCDJWS
Manash Das
Greenhorn
Joined: May 19, 2003
Posts: 14
posted
Dec 03, 2007 15:57:00
0
Hi Ranchers,
If anybody can estimation which part to stress more?
I mean if somebode can divide the course in %wise or question #wise?
Below is the Objective of the exam:-
Section 1: XML Web Service Standards
1.1 Given XML documents, schemas, and fragments determine whether their syntax and form are correct (according to W3C schema) and whether they conform to the WS-I Basic Profile 1.0a.
1.2 Describe the use of XML schema in
J2EE
Web services.
1.3 Describe the use of namespaces in an XML document.. J2EE.
Section 9: Developing Web Services
9.1.Given
[ December 03, 2007: Message edited by: Manash Das ]
Regards,<br />Manash Das
I agree. Here's the link:
subject: Need estimation for SCDJWS
Similar Threads
SCJWSD Beta Exam
Web Services Certification Material
SCDJWS Beta!!!!!!
Any feedback for SCDJWS Beta?
FREE CERTIFICATION BETA: SCDJWS 5
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/142391/sr/certification/estimation-SCDJWS | CC-MAIN-2013-20 | refinedweb | 187 | 55.44 |
Simple secret diary based on JSON and AES encryption
Project description
Secret Diary
My own project to manage my daily diary, works with only Python 3
How it works
The entire diary is based only on JSON text, everything is done with JSON. Diary has 'public' infos and 'secret' infos, the secret part is encrypted with AES algorithm.
The public part is called 'summary', this will be usefull to help you to know what that page of diary talk about or specific infos.
No connection to internet required, new diary will be created if username is not recognized (obviously with a prompt to allow or deny).
Once you wrote you can't edit text (both public and private) but you can write without limits.
Password of your diary is stored in the same JSON file, the name is yout username setted at first start.
How to install
This code can be installed with pip, if you have pip installed execute the pip command to install new package
pip install json_secret_diary
Example of usage
You have 2 ways to use my package from command-line, simply write
json_secret_diary and creation start!
You can also use with your python project, here an example:
import json_secret_diary my_username = 'example' my_password = 'example_pw' # Create Diary object my_diary = json_secret_diary.Diary(my_username, my_password) # Access to diary, if exist # if not exist create new one if authorized my_diary.access()
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/json-secret-diary/ | CC-MAIN-2020-10 | refinedweb | 253 | 57.1 |
:
Changes
API cleanup
Autotest 0.14.0 brings major new API changes over the previous 0.13.X series.
For a long time, we wanted to clean up the API namespace of unneeded _lib
prefixes to the autotest libraries. The main changes made were:
* No more autotest/client/bin dir, so no more autotest_lib.client.bin namespace.
Now all the main libs and entry points are at the top level client.
* API: autotest_lib -> autotest
* API: autotest.client.common_lib -> autotest.client.shared
As an example, here’s how the imports on client/bin/job.py used to be on pre-0.14:
from autotest_lib.client.bin import client_logging_config from autotest_lib.client.bin import utils, parallel, kernel, xen from autotest_lib.client.bin import profilers, boottool, harness from autotest_lib.client.bin import config, sysinfo, test, local_host from autotest_lib.client.bin import partition as partition_lib from autotest_lib.client.common_lib import base_job from autotest_lib.client.common_lib import error, barrier, log, logging_manager from autotest_lib.client.common_lib import base_packages, packages from autotest_lib.client.common_lib import global_config from autotest_lib.client.tools import html_report
And that’s how it looks on 0.14.0:
from autotest.client import client_logging_config from autotest.client import utils, parallel, kernel, xen from autotest.client import profilers, boottool, harness from autotest.client import config, sysinfo, test, local_host from autotest.client import partition as partition_lib from autotest.client.shared import base_job from autotest.client.shared import error, barrier, log, logging_manager from autotest.client.shared import base_packages, packages from autotest.client.shared import global_config from autotest.client.tools import html_report
The conversion should be mostly automatic for most users.
Entry names change
In order to ease discoverability on a system wide install, some widely used
entry point names changed names. By entry point, we mean the autotest high
level utility programs. Check out the name changes:
1. client/bin/autotest -> client/autotest-local
2. server/autoserv -> server/autotest-remote
3. cli/atest -> cli/autotest-rpc-client
4. scheduler/monitor_db.py -> scheduler/autotest-scheduler
5. scheduler/monitor_db_babysitter.py -> scheduler/autotest-scheduler-watcher
Boottool
Autotest used a perl based tool to manage bootloader entries, but the
tool did not support grub2. So when Fedora started to use grub2, we took
the opportunity to rewrite the whole thing and ditch the perl based boot
tool. As a side effect, now it supports installing kernels on newer
Ubuntu too (and well, Opensuse and what else). It is based on the open source
project grubby, the tool used by anaconda and therefore, Fedora and RHEL distros
to manipulate boot entries.
Git history rewrite
The git repo history was rewritten to properly follow git authorship
conventions. This effort reveals over 200 unique authors that did contribute
to autotest during the last 6 years. May we have many years more with an active
and healthy development community!
What’s next?
With the series of cleanups and improvements, here are the next steps:
– Port of the rpc server application to Django 1.4
– Documentation improvements and API documentation generation.
– Further cobbler-autotest integration
– Separation of tests and core autotest code
– Make the autotest test runner to be smarter, more tight in terms of output
You can also help, please join the mailing lists and happy hacking! | https://mybravenewworld.wordpress.com/2012/05/07/autotest-0-14-0-released/ | CC-MAIN-2020-40 | refinedweb | 532 | 53.27 |
Blocks in WordPress are great. Drop some into the page, arrange them how you like, and you’ve got a pretty sweet landing page with little effort. But what if the default blocks in WordPress need a little tweaking? Like, what if we could remove the alignment options in the Cover block settings? Or how about control sizing for the Button block?
There are plenty of options when it comes to extending the functionality of core blocks in WordPress. We can add a custom CSS class to a block in the editor, add a custom style, or create a block variation. But even those might not be enough to get what you need, and you find yourself needing to filter the core block to add or remove features, or building an entirely new block from scratch.
I’ll show you how to extend core blocks with filters and also touch on when it’s best to build a custom block instead of extending a core one.
A quick note on these examplesA quick note on these examples
Before we dive in, I’d like to note that code snippets in this article are taken out of context on purpose to focus on filters rather than build tools and file structure. If I included the full code for the filters, the article would be hard to follow. With that said, I understand that it’s not obvious for someone who is just starting out where to put the snippets or how to run build scripts and make the whole thing work.
To make things easier for you, I made a WordPress plugin with examples from this article available on my GitHub. Feel free to download it and explore the file structure, dependencies and build scripts. There is a README that will help you get started.
Block filters in an nutshellBlock filters in an nutshell
The concept of filters is not new to WordPress. Most of us are familiar with the
add_filter() function in PHP. It allows developers to modify various types of data using hooks.
A simple example of a PHP filter could look something like this:
function filter_post_title( $title ){ return '<strong>' . $title . '</strong>'; }; add_filter( 'the_title', 'filter_post_title' );
In this snippet, we create a function that receives a string representing a post title, then wrap it in a
<strong> tag and return a modified title. We then use
add_filter() to tell WordPress to use that function on a post title.
JavaScript filters work in a similar way. There is a JavaScript function called
addFilter() that lives in the
wp.hooks package and works almost like its PHP sibling. In its simplest form, a JavaScript filter looks something like this:
function filterSomething(something) { // Code for modifying something goes here. return something; } wp.hooks.addFilter( 'hookName', 'namespace', filterSomething );
Looks pretty similar, right? One notable difference is
addFilter() has a
namespace as a second argument. As per the WordPress Handbook, “Namespace uniquely identifies a callback in the the form
vendor/plugin/function.” However, examples in the handbook follow different patterns:
plugin/what-filter-does or
plugin/component-name/what-filter-does. I usually follow the latter because it keeps the handles unique throughout the project.
What makes JavaScript filters challenging to understand and use is the different nature of what they can filter. Some filter strings, some filter JavaScript objects, and others filter React components and require understanding the concept of Higher Order Components.
On top of that, you’ll most likely need to use JSX which means you can’t just drop the code into your theme or plugin and expect it to work. You need to transpile it to vanilla JavaScript that browsers understand. All that can be intimidating at the beginning, especially if you are coming from a PHP background and have limited knowledge of ES6, JSX, and React.
But fear not! We have two examples that cover the basics of block filters to help you grasp the idea and feel comfortable working with JavaScript filters in WordPress. As a reminder, if writing this code for the Block Editor is new to you, explore the plugin with examples from this article.
Without any further ado, let’s take a look at the first example.
Removing the Cover block’s alignment optionsRemoving the Cover block’s alignment options
We’re going to filter the core Cover block and remove the Left, Center, Right, and Wide alignment options from its block settings. This may be useful on projects where the Cover block is only used as a page hero, or a banner of some sort and does not need to be left- or right-aligned.
We’ll use the
blocks.registerBlockType filter. It receives the settings of the block and its name and must return a filtered settings object. Filtering settings allows us to update the
supports object that contains the array of available alignments. Let’s do it step-by-step.
We’ll start by adding the filter that just logs the settings and the name of the block to the console, to see what we are working with:
const { addFilter } = wp.hooks; function filterCoverBlockAlignments(settings, name) { console.log({ settings, name }); return settings; } addFilter( 'blocks.registerBlockType', 'intro-to-filters/cover-block/alignment-settings', filterCoverBlockAlignments, );
Let’s break it down. The first line is a basic destructuring of the
wp.hooks object. It allows us to write
addFilter() in the rest of the file, instead of
wp.hooks.addFilter(). This may seem redundant in this case, but it is useful when using multiple filters in the same file (as we’ll get to in the next example).
Next, we defined the
filterCoverBlockAlignments() function that does the filtering. For now, it only logs the settings object and the name of the block to the console and returns the settings as is.
All filter functions receive data, and must return filtered data. Otherwise, the editor will break.
And, lastly, we initiated the filter with
addFilter() function. We provided it with the name of the hook we are going to use, the filter namespace, and a function that does the filtering.
If we’ve done everything right, we should see a lot of messages in the console. But note that not all of them refer to the Cover block.
This is correct because the filter is applied to all blocks rather than the specific one we want. To fix that, we need to make sure that we apply the filter only to the
core/cover block:
function filterCoverBlockAlignments(settings, name) { if (name === 'core/cover') { console.log({ settings, name }); } return settings; }
With that in place, we should see something like this now in the console:
Don’t worry if you see more log statements than Cover blocks on the page. I have yet to figure out why that’s the case. If you happen to know why, please share in the comments!
And here comes the fun part: the actual filtering. If you have built blocks from scratch before, then you know that alignment options are defined with Supports API. Let me quickly remind you how it works — we can either set it to
true to allow all alignments, like this:
supports: { align: true }
…or provide an array of alignments to support. The snippet below does the same thing, as the one above:
supports: { align: [ 'left', 'right', 'center', 'wide', 'full' ] }
Now let’s take a closer look at the
settings object from one of the console messages we have and see what we are dealing with:
All we need to do is replace
align: true with
align: ['full'] inside the
supports property. Here’s how we can do it:
function filterCoverBlockAlignments(settings, name) { if (name === 'core/cover') { return assign({}, settings, { supports: merge(settings.supports, { align: ['full'], }), }); } return settings; }
I’d like to pause here to draw your attention to the
assign and
merge lodash methods. We use those to create and return a brand new object and make sure that the original
settings object remains intact. The filter will still work if we do something like this:
/* 👎 WRONG APPROACH! DO NOT COPY & PASTE! */ settings.supports.align = ['full']; return settings;
…but that is an object mutation, which is considered a bad practice and should be avoided unless you know what you are doing. Zell Liew discusses why mutation can be scary over at A List Apart.
Going back to our example, there should now only be one alignment option in the block toolbar:
I removed the “center” alignment option because the alignment toolbar allows you to toggle the alignment “on” and “off.” This means that Cover blocks now have default and “Full width” states.
And here’s the full snippet:
const { addFilter } = wp.hooks; const { assign, merge } = lodash; function filterCoverBlockAlignments(settings, name) { if (name === 'core/cover') { return assign({}, settings, { supports: merge(settings.supports, { align: ['full'], }), }); } return settings; } addFilter( 'blocks.registerBlockType', 'intro-to-filters/cover-block/alignment-settings', filterCoverBlockAlignments, );
This wasn’t hard at all, right? You are now equipped with a basic understanding of how filters work with blocks. Let’s level it up and take a look at a slightly more advanced example.
Adding a size control to the Button blockAdding a size control to the Button block
Now let’s add a size control to the core Button block. It will be a bit more advanced as we will need to make a few filters work together. The plan is to add a control that will allow the user to choose from three sizes for a button: Small, Regular, and Large.
It may seem complicated, but once we break it down, you’ll see that it’s actually pretty straightforward.
1. Add a size attribute to the Button block1. Add a size attribute to the Button block
First thing we need to do is add an additional attribute that stores the size of the button. We’ll use the already familiar
blocks.registerBlockType filter from the previous example:
/** * Add Size attribute to Button block * * @param {Object} settings Original block settings * @param {string} name Block name * @return {Object} Filtered block settings */ function addAttributes(settings, name) { if (name === 'core/button') { return assign({}, settings, { attributes: merge(settings.attributes, { size: { type: 'string', default: '', }, }), }); } return settings; } addFilter( 'blocks.registerBlockType', 'intro-to-filters/button-block/add-attributes', addAttributes, );
The difference between what we’re doing here versus what we did earlier is that we’re filtering
attributes rather than the
supports object. This snippet alone doesn’t do much and you won’t notice any difference in the editor, but having an attribute for the size is essential for the whole thing to work.
2. Add the size control to the Button block2. Add the size control to the Button block
We’re working with a new filter,
editor.BlockEdit. It allows us to modify the Inspector Controls panel (i.e. the settings panel on the right of the Block editor).
/** * Add Size control to Button block */ const addInspectorControl = createHigherOrderComponent((BlockEdit) => { return (props) => { const { attributes: { size }, setAttributes, name, } = props; if (name !== 'core/button') { return <BlockEdit {...props} />; } return ( <Fragment> <BlockEdit {...props} /> > </Fragment> ); }; }, 'withInspectorControl'); addFilter( 'editor.BlockEdit', 'intro-to-filters/button-block/add-inspector-controls', addInspectorControl, );
This may look like a lot, but we’ll break it down and see how straightforward it actually is.
The first thing you may have noticed is the
createHigherOrderComponent construct. Unlike other filters in this example,
editor.BlockEdit receives a component and must return a component. That’s why we need to use a Higher Order Component pattern derived from React.
In its purest form, the filter for adding controls looks something like this:
const addInspectorControl = createHigherOrderComponent((BlockEdit) => { return (props) => { // Logic happens here. return <BlockEdit {...props} />; }; }, 'withInspectorControl');
This will do nothing but allow you to inspect the
<BlockEdit /> component and its
props in the console. Hopefully the construct itself makes sense now, and we can keep breaking down the filter.
The next part is destructuring the props:
const { attributes: { size }, setAttributes, name, } = props;
This is done so we can use
name,
setAttributes, and
size in the scope of the filter, where:
sizeis the attribute of the block that we’ve added in step 1.
setAttributesis a function that lets us update the block’s attribute values.
nameis a name of the block. which is
core/buttonin our case.
Next, we avoid inadvertantly adding controls to other blocks:
if (name !== 'core/button') { return <BlockEdit {...props} />; }
And if we are dealing with a Button block, we wrap the settings panel in a
<Fragment /> (a component that renders its children without a wrapping element) and add an additional control for picking the button size:
return ( <Fragment> <BlockEdit {...props} /> {/* Additional controls go here */} </Fragment> );
Finally, additional controls are created like this:
>
Again, if you have built blocks before, you may already be familiar with this part. If not, I encourage you to study the library of components that WordPress comes with.
At this point we should see an additional section in the inspector controls for each Button block:
We are also able to save the size, but that won’t reflect in the editor or on the front end. Let’s fix that.
3. Add a size class to the block in the editor3. Add a size class to the block in the editor
As the title suggests, the plan for this step is to add a CSS class to the Button block so that the selected size is reflected in the editor itself.
We’ll use the
editor.BlockListBlock filter. It is similar to
editor.BlockEdit in the sense that it receives the component and must return the component; but instead of filtering the block inspector panel, if filters the block component that is displayed in the editor.
import classnames from 'classnames'; const { addFilter } = wp.hooks; const { createHigherOrderComponent } = wp.compose; /** * Add size class to the block in the editor */ const addSizeClass = createHigherOrderComponent((BlockListBlock) => { return (props) => { const { attributes: { size }, className, name, } = props; if (name !== 'core/button') { return <BlockListBlock {...props} />; } return ( <BlockListBlock {...props} className={classnames(className, size ? `has-size-${size}` : '')} /> ); }; }, 'withClientIdClassName'); addFilter( 'editor.BlockListBlock', 'intro-to-filters/button-block/add-editor-class', addSizeClass );
You may have noticed a similar structure already:
- We extract the
size,
className, and
namevariables from
props.
- Next, we check if we are working with
core/buttonblock, and return an unmodified
<BlockListBlock>if we aren’t.
- Then we add a class to a block based on selected button size.
I’d like to pause on this line as it may look confusing from the first glance:
className={classnames(className, size ? `has-size-${size}` : '')}
I’m using the classnames utility here, and it’s not a requirement — I just find using it a bit cleaner than doing manual concatenations. It prevents me from worrying about forgetting to add a space in front of a class, or dealing with double spaces.
4. Add the size class to the block on the front end4. Add the size class to the block on the front end
All we have done up to this point is related to the Block Editor view, which is sort of like a preview of what we might expect on the front end. If we change the button size, save the post and check the button markup on the front end, notice that button class is not being applied to the block.
To fix this, we need to make sure we are actually saving the changes and adding the class to the block on the front end. We do it with
blocks.getSaveContent.extraProps filter, which hooks into the block’s
save() function and allows us to modify the saved properties. This filter receives block props, the type of the block, and block attributes, and must return modified block props.
import classnames from 'classnames'; const { assign } = lodash; const { addFilter } = wp.hooks; /** * Add size class to the block on the front end * * @param {Object} props Additional props applied to save element. * @param {Object} block Block type. * @param {Object} attributes Current block attributes. * @return {Object} Filtered props applied to save element. */ function addSizeClassFrontEnd(props, block, attributes) { if (block.name !== 'core/button') { return props; } const { className } = props; const { size } = attributes; return assign({}, props, { className: classnames(className, size ? `has-size-${size}` : ''), }); } addFilter( 'blocks.getSaveContent.extraProps', 'intro-to-filters/button-block/add-front-end-class', addSizeClassFrontEnd, );
In the snippet above we do three things:
- Check if we are working with a
core/buttonblock and do a quick return if we are not.
- Extract the
classNameand
sizevariables from
propsand
attributesobjects respectively.
- Create a new
propsobject with an updated
classNameproperty that includes a size class if necessary.
Here’s what we should expect to see in the markup, complete with our size class:
<div class="wp-block-button has-size-large"> <a class="wp-block-button__link" href="#">Click Me</a> </div>
5. Add CSS for the custom button sizes5. Add CSS for the custom button sizes
One more little thing before we’re done! The idea is to make sure that large and small buttons have corresponding CSS styles.
Here are the styles I came up with:
.wp-block-button.has-size-large .wp-block-button__link { padding: 1.5rem 3rem; } .wp-block-button.has-size-small .wp-block-button__link { padding: 0.25rem 1rem; }
If you are building a custom theme, you can include these front-end styles in the theme’s stylesheet. I created a plugin for the default Twenty Twenty One theme, so, in my case, I had to create a separate stylesheet and include it using
wp_enqueue_style(). You could just as easily work directly in
functions.php if that’s where you manage functions.
function frontend_assets() { wp_enqueue_style( 'intro-to-block-filters-frontend-style', plugin_dir_url( __FILE__ ) . 'assets/frontend.css', [], '0.1.0' ); } add_action( 'wp_enqueue_scripts', 'frontend_assets' );
Similar to the front end, we need to make sure that buttons are properly styled in the editor. We can include the same styles using the
enqueue_block_editor_assets action:
function editor_assets() { wp_enqueue_style( 'intro-to-block-filters-editor-style', plugin_dir_url( __FILE__ ) . 'assets/editor.css', [], '0.1.0' ); } add_action( 'enqueue_block_editor_assets', 'editor_assets' );
We should now should have styles for large and small buttons on the front end and in the editor!
As I mentioned earlier, these examples are available in as a WordPress plugin I created just for this article. So, if you want to see how all these pieces work together, download it over at GitHub and hack away. And if something isn’t clear, feel free to ask in the comments.
Use filters or create a new block?Use filters or create a new block?
This is a tricky question to answer without knowing the context. But there’s one tip I can offer.
Have you ever seen an error like this?
It usually occurs when the markup of the block on the page is different from the markup that is generated by the block’s
save() function. What I’m getting at is it’s very easy to trigger this error when messing around with the markup of a block with filters.
So, if you need to significantly change the markup of a block beyond adding a class, I would consider writing a custom block instead of filtering an existing one. That is, unless you are fine with keeping the markup consistent for the editor and only changing the front-end markup. In that case, you can use PHP filter.
Speaking of which…
Bonus tip:
render_block()
This article would not be complete without mentioning the
render_block hook. It filters block markup before it’s rendered. It comes in handy when you need to update the markup of the block beyond adding a new class.
The big upside of this approach is that it won’t cause any validation errors in the editor. That said, the downside is that it only works on the front end. If I were to rewrite the button size example using this approach, I would first need to remove the code we wrote in the fourth step, and add this:
/** * Add button size class. * * @param string $block_content Block content to be rendered. * @param array $block Block attributes. * @return string */ function add_button_size_class( $block_content = '', $block = [] ) { if ( isset( $block['blockName'] ) && 'core/button' === $block['blockName'] ) { $defaults = ['size' => 'regular']; $args = wp_parse_args( $block['attrs'], $defaults ); $html = str_replace( '<div class="wp-block-button', '<div class="wp-block-button has-size-' . esc_attr( $args['size']) . ' ', $block_content ); return $html; } return $block_content; } add_filter( 'render_block', 'add_button_size_class', 10, 2 );
This isn’t the cleanest approach because we are injecting a CSS class using
str_replace() — but that’s sometimes the only option. A classic example might be working with a third-party block where we need to add a
<div> with a class around it for styling.
Wrapping upWrapping up
WordPress block filters are powerful. I like how it allows you to disable a lot of unused block options, like we did with the Cover block in the first example. This can reduce the amount of CSS you need to write which, in turn, means a leaner stylesheet and less maintenance — and less cognitive overhead for anyone using the block settings.
But as I mentioned before, using block filters for heavy modifications can become tricky because you need to keep block validation in mind.
That said, I usually reach for block filters if I need to:
- disable certain block features,
- add an option to a block and can’t/don’t want to do it with custom style (and that option must not modify the markup of the block beyond adding/removing a custom class), or
- modify the markup only on the front end (using a PHP filter).
I also usually end up writing custom blocks when core blocks require heavy markup adjustments both on the front end and in the editor.
If you have worked with block filters and have other thoughts, questions, or comments, let me know!
Development builds using
<React.Strictmode>will run functional components twice per render to aid in detection of certain errors. I’m not certain that’s the cause of your extra logs but it’s my prime suspect.
I run into the issue where i am getting the console error “extendBlocks.js?ver=5.8.2:5 Uncaught ReferenceError: assign is not defined”. i’m guessing this is because there is a npm package i dont have installed. I know you have it installed on the plugin but i am wanting to incorporate this into my themes current package.json file. Admittedly i am pretty green when it comes to front-end development
I have been struggling to understand the propper way to extend core blocks.
I have been reading dozens and dozens of pages and tutorials – none of them help me.
i have only started reading the first section of this article and it really helped me understand the best practice for extending core blocks.
Thank you for publishing this great content on you site, it really helped me when I was totally stock
i would like to archive a similar functionality and change the “description” of core image to my own custom text.
where can i find a full list for the namespace i need to include in the addFilter function?
what is the change i need to make in the filter function in order for it to work with the image block?
“intro-to-filters/cover-block/alignment-settings”
const { addFilter } = wp.hooks;
// import { assign, merge } from ‘lodash’;
import { assign } from ‘lodash’;
function filterCoverBlockAlignments(settings, name) {
});
}
}
if (name === ‘core/image’) {
console.log(settings, name);
return assign({}, settings, {
description: ‘my own text baby!’,
return settings;
addFilter(
);
‘blocks.registerBlockType’,
‘intro-to-filters/cover-block/alignment-settings’,
filterCoverBlockAlignments, | https://css-tricks.com/a-crash-course-in-wordpress-block-filters/ | CC-MAIN-2022-21 | refinedweb | 3,913 | 63.39 |
XMonad.Prompt.Shell
Description
A shell prompt for XMonad
Synopsis
- data Shell = Shell
- shellPrompt :: XPConfig -> X ()
- prompt :: FilePath -> XPConfig -> X ()
- safePrompt :: FilePath -> XPConfig -> X ()
- unsafePrompt :: FilePath -> XPConfig -> X ()
- getCommands :: IO [String]
- getBrowser :: IO String
- getEditor :: IO String
- getShellCompl :: [String] -> Predicate -> String -> IO [String]
- split :: Eq a => a -> [a] -> [[a]]
Usage
- In your
~/.xmonad/xmonad.hs:
import XMonad.Prompt import XMonad.Prompt.Shell
- In your keybindings add something like:
, ((modm .|. controlMask, xK_x), shellPrompt def)
For detailed instruction on editing the key binding see XMonad.Doc.Extending.
shellPrompt :: XPConfig -> X () Source #
Variations on shellPrompt
See safe and unsafeSpawn in XMonad.Util.Run. prompt is an alias for unsafePrompt; safePrompt and unsafePrompt work on the same principles, but will use XPrompt to interactively query the user for input; the appearance is set by passing an XPConfig as the second argument. The first argument is the program to be run with the interactive input. You would use these like this:
, ((modm, xK_b), safePrompt "firefox" greenXPConfig) , ((modm .|. shiftMask, xK_c), prompt ("xterm" ++ " -e") greenXPConfig)
Note that you want to use safePrompt for Firefox input, as Firefox
wants URLs, and unsafePrompt for the XTerm example because this allows
you to easily start a terminal executing an arbitrary command, like
top.
Utility functions
getCommands :: IO [String] Source #
getBrowser :: IO String Source # String Source #
Like
getBrowser, but should be of a text editor. This gets the $EDITOR variable, defaulting to "emacs". | https://hackage.haskell.org/package/xmonad-contrib-0.15/docs/XMonad-Prompt-Shell.html | CC-MAIN-2019-04 | refinedweb | 232 | 52.8 |
When I wrote the article Writing a Multiplayer Game (in WPF), I used a remoting architecture (also known as multi-tier architecture) that I considered to be like a remotable MVVM. In fact, when I published the article I wasn't using real MVVM, only some of its concepts and even if I did use MVVM in the latest version of the game, I never published it.
Independently on how well the game or the previous architecture worked, I didn't consider it ideal. The real problem was that I made the game components bound to the game framework so, if I wanted a different kind of applications, I was getting lots of unused classes and I was needing to do work-arounds over some specific parts.
So, I decided to create a new architecture that I consider to be a real evolution over MVVM to make it work in a "distributable" manner (that is, capable of making different layers work in different computers if needed), being capable of creating normal applications and games.
DOP means Distributable Observable POCO, and the use of DOPs is the heart of the new architecture. We can say that a DOP is any class that exposes only public properties, implements the INotifyPropertyChanged correctly (that's what makes it observable) and is equivalent if created in two different applications by using the default constructor and setting the same properties. This means it can't use client specific data on its constructor (like setting a property with the current date and time), it can't use weird code on property gets, it can't have methods or events (aside from the property changed) and it can't validate values on property sets. Methods, events and property-set validations must be done by another class.
INotifyPropertyChanged
By implementing the INotifyPropertyChanged correctly I mean: They should only notify property changes when they do actually happen. Setting a property to the value it already possesses should not trigger the event. This can look silly, but it is primordial to avoid "circular updates" that end-up causing stack-overflows.
I was initially unsure about using Distributable Data-Transfer Object (so, DDTO) or DOP. Yet, DTOs can't have anything special in their property sets. They are only objects full of properties that can be get and set directly, and providing notifications will mean changing the meaning of a DTO. But making a POCO Distributable and Observable doesn't hurt the basic definition, it only adds some extra constraints to it.
Actually, POCO is a very problematic definition. POCO means Plain Old CLR Object (many times said as Plain Old C# Object). That "plain old" has the purpose of saying that it doesn't hold any framework-specific traits, so it should not have any framework-specific data-types, attributes and should not inherit from a framework specific class or implement a framework-specific interface. Yet the POCO definition is very problematic as apparently it was first used to describe database objects that were "persistance ignorant" and they became limited to database, when they should not be. Also, some frameworks (like Entity Framework) may require the properties to be declared as virtual, as the frameowrk will inherit the class to add some behavior but, well, the term is Plain Old CLR Object, not Plain Old CLR Class. We should be able to create the instances, that don't inherit from anything, and hand them to the framework.
So, to make a conclusion, DTO is too restrictive. POCO is too open. So I decided to create a more restricted POCO that's not as limited as DTO. In fact, the spirit of a DOP is that the code that's responsible to change it will always be able to do it and anyone interested in its current state will be notified about the changes.
The DOPs by themselves don't depent on any framework (I don't consider implementing the INotifyPropertyChanged as depending on a framework). If we want to visualize a DOP, we can simply write a Data-Template for it and everything will work, after all a DOP must have all the information needed to be presented visually.
Yet we usually want to "manipulate" those DOPs, changing their contents, executing methods and applying validations, which must be done elsewhere. So, without any architecture, we can simply create methods on another class that receive a DOP as input, possibly other parameters and do whatever is needed, like doing validations, saving things to the database, generating exceptions and changing the DOP. Then we have the following problem: How do we guarantee that this other class will be used to manipulate the DOP and that users of the DOPs will not change them directly, putting invalid values on them and bypassing the needed validations?
Well, this is where we have both a new pattern and frameworks to support it.
The greatest difference from MVVM to DVMH is that the Model is split into DOP (the properties) and messages (that represent both the events and the methods). The messages are actually the "requests" to invoke a method or to generate an event. To execute those messages, we need the handlers.
Details of each letter
string
[Serializable]
Click
To make the DVMH pattern work we need at least a way to send messages and to process those messages. I decided to put this minimum architecture in a framework.
Well, I built the interfaces of that framework first as I don't want to bind users to my implementation of the framework, only to give the "architecture" to be able to use the DOPs and the DVMH pattern correctly.
Even if we use a local framework, the idea is to have a good isolation between data-presentation and data-manipulation, which usually fits very well with the remote architecture of client and server. So, the client presents the DOPs and the server manipulates them.
Considering things as client and server, we have:
A client "participates" in a server "room". As a participant, a client receives notifications about all the added, removed and modified objects and, if necessary, updates the local copy of those DOPs, so they can be correctly presented on the screen.
A server must have at least one room and must accept participant connections, choosing in which room to place those participants. It is important to note that the server doesn't receive a real participant object, only an object to communicate to that participant.
The server is free to add and remove components to a room at any moment and to change participants from one room to the other. As soon as the participants are "linked to a room", they will receive information of all the existing components and will be able to observe the changes that happen to those components.
And, of course, the server can send messages to the clients and the clients can send messages to the server.
More precisely, the participants may send messages to the room to which they are connected, while the server may see all the active "communication objects" bound to a room and use those objects to send messages to the participants.
To put names to the objects involved, in the client we have:
And in the server we have:
As I already said, I started the frameworks by the interfaces. Those interfaces present the IDopParticipant, IDopRoom and IDopCommunicationToParticipant.
IDopParticipant
IDopRoom
IDopCommunicationToParticipant
In fact, there are some more interfaces, as I decided to put the methods that deal with the room components in another dedicated interface (one for the server side, capable of adding and removing components, and one on the client side, restricted to enumerating and getting information from those components).
Then, I did two implementations of those interfaces. One is local, being the support of loosely-coupled DVMH without the serialization and remoting overhead and the other is, well, remote, allowing to create the rooms in one computer and to access those rooms and components from another computer.
The remote framework requires a message port to send messages and it creates copies of the server DOPs on the client. Such copy actually helps enforce the DVMH pattern as changes made to the client DOPs will not affect the original DOPs, which will naturally force the developers to use the messages instead of changing the DOPs directly and will avoid clients from putting invalid states without going through the required validations. Considering that benefit, I decided to create a local message port so the real DVMH architecture can be enforced, yet there's no TCP/IP (or similar) communication involved.
The need for a message port is again a situation to write a "framework". The remote library doesn't come with any message port, only with the interface for the message ports, but I am also giving another library (the minimal implementation) that implements both the local message port and a TCP/IP message port that uses the default .NET binary serialization to send messages. For professional distributed applications, I strongly recommend implementing another message port.
So, to make things clear, let's see which libraries exists and what's contained in them.
There are two things that I consider that many developers will want: Dynamic DOPs and some kind of DOP generator, to avoid having to write the DOP classes by hand. After all, even if the classes are simple, having to write the property sets with the right validations + the notification can be boring and error-prone.
So, in the Pfz.DistributableObservablePoco.Common library you will find the classes DynamicDop and AbstractDopCreator.
DynamicDop
AbstractDopCreator
ExpandoObject
dynamic
GetValue
SetValue
EventArgs
PropertyChanged
I started by saying that the DOPs must be observable, which is achieved by implementing the INotifyPropertyChanged interface. Actually the DOP frameworks only depend on the fact that DOPs are observable, they don't depend on the INotifyPropertyChanged itself. The frameworks use a DOP Manipulator per DOP Type, which is found using a delegate, so you can give a different manipulator and, if the object is observable without implementing the INotifyPropertyChanged, it is enough that such manipulator knows how to observe the property changes.
I would personally love to make my DOPs don't implement the INotifyPropertyChanged but considering that WPF uses such interface, I still implement it in my DOPs. But I don't know... maybe you use a different framework that doesn't depend on the INotifyPropertyChanged either, so you will be free to ignore such interface as long as you create a DOP Manipulator capable of observing those different DOPs.
The sample is a small game (we can say that's a very simplified version of Shoot'em Up .NET) that uses the DVMH pattern. It is not very complete and even if it follows DVMH it is not the best example of good programming practices as I used some static variables. But, well, it is only a small sample, not a killer application built with the pattern.
I put all the important code inside libraries for the client and the server. The client part is effectively reduced to putting DOPs on the screeen, having the data-templates and processing the arrow keys + space (and setting the "signals" object). The game itself runs in the server.
I used libraries for the client and server so I could built the standalone application and the client/server applications reusing the same code. Even if the StandAlone application has both the client and server inside it, the client code doesn't know anything about the server code nor the server code knows anything about the client code.
I know that I created many assemblies, but I consider extremely important to try to follow the pattern for the assemblies too. Put all the message objects, DOPs and maybe some resources in a common assembly (I put the images in the common assembly, as the client presents them and the server uses them for the collision detection). Then, put all the code that's related to the "View" in a client library. All the code that actually does the work in a server library. And, if you don't want a client/server application, only the DVMH pattern, create an executable that uses all those libraries. Even if in the end everything is in the same executable and application domain, the code will be loosely-coupled as client classes don't know about the server classes (or vice-versa).
Up to this moment I tried not to focus on details. I am providing a framework with a minimal implementation and it already has limitations and, even if some of them can be easily solved by different implementations, I prefer to keep them there to guarantee that an application written using one framework can be used by another framework with ease.
So, let's see some of those details:
The types of the DOP's properties must be simple data, not the type of any object that can contains its own modifiable properties.
Also, considering possible serialization restrictions (which we can say to be another implementation detail) we should try to use only primitive types, strings or immutable serializable types in DOP properties. So, the DOPs are modifiable, but the contents of an object put into a DOP property can't be.
Considering a DOP has modifiable properties, how can we have a property in a DOP that references another DOP?
The answer to this question is to use IDs. Each component added to a room is given an ID, which can be used to find the component again inside the that room. It is not expected that an ID work across different rooms and the type of the ID itself is implementation dependent, so it can be an string, an int, a GUID etc.
int
GUID
As the local implementation doesn't use serialization, the ID generated by the local rooms is not a serializable object (at least not by the .NET binary formatter) yet the room and participant objects are capable of finding the right component by such key.
So, when one DOP references another DOP or when sending messages that should reference a DOP, use the ID of the DOP, not the DOP itself.
The DOP frameworks themselves can only send messages from a participant to a room or from a room to a participant. They can't send message to specific components.
This was done on purpose as it simplifies the DOP frameworks, but considering a message can have the ID of a component, we can always send a message to a room telling to which component it should be "dispatched".
Actually it is possible to build another framework dedicated to handle the message dispatching to components. I didn't do that yet, but I consider it would be really important to have a configurable handler for bigger applications. Thankfully, any of the DOP frameworks will be able to use a message dispatching framework by delegating the MessageReceived method to such a framework.
MessageReceived
Messages sent through the PostMessage method and property changes aren't guaranteed to be sent immediately to the remote side. A call to Flush() must be done.
PostMessage
Flush()
As such call is not necessary in the local implementation, it is possible that a local application doesn't work as expected if it is later compiled to be remote. This is why I consider it very important to always start by using the remote framework and, if needed or wanted, use the local framework later to make things faster.
I decided not to do automatic flushes for performance reasons.
In my opinion the disposal of objects in general is still a problematic topic. .NET has a Garbage Collector which mainly solves the problems related to memory leaks and accessing already deleted objects (or, putting it differently, it solves the problem related to the order of the deletes when we aren't sure how many references still exist for such an object, which usually caused either memory leaks or deleted objects while there were still references for them). Yet, there are inumerous situations in which the Garbage Collector is not enough. This is usually presented by things like files and network connections, but it goes further, as any object put into a room will not be available for collection while the room is alive, even if it is not needed or used anymore.
For the objects in the rooms, there's not much to say. It is up to the developer to remove the objects as soon as they aren't needed anymore. DOPs should not have destructors o any specific disposal logic, so it is enough to make them available for collection (removing them from active lists, like the room's components) and .NET will deal with them.
But we still have the Rooms, the Participants and the Message Ports. The disposal support on those objects should exist, be observable and thread-safe. I know, many people consider that the Dispose() method should not be called by multiple threads and that there's an error in the architecture if a Dispose() must be thread-safe but this is very common in duplex communication scenarios. The connection may be closed by the remote side at any moment... or even by a physical cable disconnection. Such a connection lost, on message ports that require an active connection, means that the message port may be disposed at any moment (so thread-safety is needed). Also, a Participant that uses a disposed Message Port is useless, so it should be disposed together with the message port, which means it should be informed of such disposal and may be disposed at random moments too.
Dispose()
To support this, I created the IDopDisposable interface. It is an IDisposable that has the following extra members:
IDopDisposable
IDisposable
And, to make things completely thread-safe, it is important to note that registering to the Disposed event should invoke the delegate immediately if the object is already disposed, as users can't have a guarantee that the object will not be disposed just after checking that the object is still alive.
Disposed
Why the server is the only one that can create new DOPs?
Security. The server may keep data in its memory for the created objects, so the developer of the server application must decide when to create objects. If clients could simply create objects that live on the server, then it will be very easy to attack.
Why only the server can change the DOPs properties?
Security again and to enforce the use of messages. One client should not be able to change the data that another client may be using. It could be even worse if two or more clients believe they can change the same DOP at the same time.
Yet, it is possible for a client to change the properties of a "private DOP". Private DOPs are DOPs created on the server to be visible by a single participant, so that participant is free to change them.
Why DOPs don't have methods?
Because if that was supported, the client implementation should redirect to the server, while the server implementation should execute the action. It would be possible to put abstract methods inside the DOP classes and implement them differently on the client and on the server, but this will end-up creating different DOPs to represent the client and the server version, and I really wanted to use the same DOP classes on both sides, at least for the framework.
abstract
Also, normal methods usually have a synchronous signature and for distributed programming making things asynchronous is better. Yet, if you really want to have methods it is possible to implement such a support on top of the remote framework, as both the client and the server are free to provide their DOP creator delegates.
Why the messages don't have a result?
Because of the synchronous/asynchronous problem and because the idea is to give simple interfaces to implement a DOP framework. It is possible to build a messaging framework capable of sending messages and awaiting for results that depend only on the interfaces of the DOP frameworks, effectively giving such power to any of the possible framework implementations.
await
Why DOPs can't have events?
Because events have two sides: producer and consumer. When we declare an event in a class, other classes can only consume the event, not produce it (try invoking an event declared on another class, you can't, you must use the += or the -= to register your handler for the event). Considering DOPs don't have methods, they will never produce the event. So, we could create a method to allow other classes to produce the event, which will return to the problem of having methods or we can avoid such complication by using messages.
+=
-=
Can the clients connect to more than one room at a time?
Yes, but many Participant/MessagePort objects will be needed, one of each per room. Actually it is not mandatory that each MessagePort holds a connection on its own, so it is possible to have many different message-ports that use a single connection to do their jobs. Yet, the minimal implementation doesn't deal with such a situation and we can consider this to be a concern of the Message Ports involved, not of the DOP framework by itself.
Can I add the same component to two or more rooms?
Yes, but the provided frameworks will not know that such DOP is shared. So, any good practice concerning a shared resource should be used. If the rooms have their own dedicated threads to deal with the objects, locks will be needed when accessing the shared component. Also, the rooms should not expect to apply different rules to the components, or they can see "invalid states" coming from another room that actually considers that as a valid state.
Aren't messages the same as Commands in WPF?
They do a very similar job, yet WPF didn't create Commands with the idea of remote support and WPF controls can use both events and commands. I think that limiting things to only messages helps by avoiding "mixed" APIs and by reducing the amount of code required to create a new DOP framework.
By looking the DVMH pattern you will probably miss the ViewModel.
So, did I forget it, is it missing or somewhat hidden?
And my answer is: I didn't put the ViewModel as a requirement in the pattern. It is still possible to use it, but I actually think there's a better solution.
When the Model is already observable the ViewModel is many times obsolete. It is still useful to "convert" some property data-types or even to add View specific properties, but in most cases it ends-up filled with properties that simply redirect to the Model (or the users break the MVVM design pattern and access the Model directly from the View).
Well, I don't like code repetition, so I prefer to say that we should not have a ViewModel. Then, how do we convert values? How do we add View specific properties without putting View specific properties in the "Model" (the DOP)?
The value conversion is a real issue. I personally don't like to put converters in the Bindings, as this is putting "code stuff" in the View. Yet, when we have a View that wants to present the Model differently (like presenting centimeter values as inches), we can say that it is still a View concern to do the conversion (but some people will put a property that already does the conversion to inches in the ViewModel).
Binding
Another solution is to use the remote framework, which actually creates copies of the server components on the client, to be able to create different objects on the client and on the server (even if the remote framework actually uses a local message port). The common DOPs should still exist, but the client can inherit the common DOPs and add client-specific properties, while the server can inherit those components to put server-specific information. This is possible because the server delegates the job of getting a "type ID" and the client delegates the job of creating an object by such an ID. So, it is possible to send the type-names (for example) but find them in different namespaces to have different client and server objects. This is pretty similar to what I did in the Writing a Multiplayer Game (in WPF) article, as the server had specific properties to control the animations that the client simply wasn't aware of.
In the end we get the benefits of the ViewModel, like having View specific properties, without having to write code to redirect all the properties that we simply don't want to change. But, if you really want to keep the ViewModel and don't want to have different objects for the client and the server, feel free to use a ViewModel, as the DVMH doesn't forbid its use.
The D in DOP means distributed and when I started the development of the framework I was more focused on that distributed part, which allows N-Tier application, than on the DVMH pattern that can be used locally. So, how well do the DOPs really work in multi-tier environments? How scalable are they? What about the performance?
And the answer can't be more vague. It depends. Really, it depends on a lot of factors. The framework implementations I am giving with this article keep the objects in memory, so we can say these frameworks aren't very scalable. Yet, even if the HTTP protocol is stateless, ASP.NET allows us to keep states in memory by using sessions. In that situation the problem is how many objects we keep in the sessions.
So, comparing to DOPs, how many DOPs are we keeping in memory? We actually don't need to be observing a room to be in a "login page". That login page can exist exclusively on the client side and a login message can be sent to a shared room.
Also, the DOPs are extremely simple. They can be stored in the database easily, so a different implementation of the frameworks could do that. Of course, it will be good if such a framework also keeps recently used DOPs cached to avoid excessive database accesses, but that's a normal problem that everyone that writes scalable sites must deal.
We must not forget about the expected use. Do we want to create full client applications that know what they should do and communicate to the server only when they need "data", or do we want to create client applications that are simply "visualizers" for an application that exists elsewhere (maybe in another library loaded in the same process)? The DVMH actually fits the second category. That's a good thing as we are making an MVVM-like architecture that's even more loosely coupled. So, the DVMH architecture is not expected to be fully scalable, but the DOPs are. They can be used in the DVMH architecture, or inside a different architecture that's more scalable.
So, I can only say that DOPs are really scalable while the DVMH pattern is expected to in a Local Area Network (LAN) or completely local. So, it is possible to combine DVMH with another architecture and use DOPs in that other architecture for really multi-tier and scalable applications and. | http://www.codeproject.com/Articles/781738/DOP-and-DVMH?msg=4835005 | CC-MAIN-2014-41 | refinedweb | 4,600 | 57.5 |
Trying to develop a simple outline for a game engine.. This program is simply supposed to populate a window with a bunch of skeleton icons. This is my first time trying to include resources into my program so I'm not sure if I'm doing it right.
My Project:
GameEngine.h
GameEngine.cpp
GameEngineDriver.cpp
Skeleton.h <----Suspected error here.. but it's just like out the book
Skeleton.cpp
Resource.h
Resource.rc
Skeleton.h
Code:#pragma once #include <windows.h> #include "Resource.h" #include "GameEngine.h" //Globals GameEngine* _pGame;
Dev-Cpp errors:
Compiler: Default compiler
Building Makefile: "F:\Dev-Cpp\Makefile.win"
Executing make clean
rm -f GameEngineDriver.o GameEngine.o Skeleton.o Skeleton_private.res Skeleton.exe
g++.exe -c GameEngineDriver.cpp -o GameEngineDriver.o -I"F:/Dev-Cpp/lib/gcc/mingw32/3.4.2/include" -I"F:/Dev-Cpp/include/c++/3.4.2/backward" -I"F:/Dev-Cpp/include/c++/3.4.2/mingw32" -I"F:/Dev-Cpp/include/c++/3.4.2" -I"F:/Dev-Cpp/include"
GameEngineDriver.cpp: In function `int WinMain(HINSTANCE__*, HINSTANCE__*, CHAR*, int)':
GameEngineDriver.cpp:13: error: `GameInitialize' undeclared (first use this function)
GameEngineDriver.cpp:13: error: (Each undeclared identifier is reported only once for each function it appears in.)
GameEngineDriver.cpp:43: error: 'class GameEngine' has no member named 'GetFrameDelay'
GameEngineDriver.cpp:44: error: `GameCycle' undeclared (first use this function)
GameEngineDriver.cpp:54: error: `GameEnd' undeclared (first use this function)
make.exe: *** [GameEngineDriver.o] Error 1
Execution terminated
It seems like my project is not responding to the skeleton class.. which is where all the above functions are defined. Can anyone else get this code to compile..?? I think that if I can get a program like this to work.. I will have an easier time in getting future windows projects to compile.
I'm also not clear about GameEngineDriver.cpp (GameEngineDriver.cpp is my own addition to the program to facilitate the need for a WinMain( )) In the book, HandleEvent( ) is functionally defined, but never called. This is where I think it should go:
GameEngineDriver.cpp: excerpts
Code://I think this: return (int)msg.wParam; //should be this: GameEngine::HandleEvent(hWindow, msg, wParam, lParam) | https://cboard.cprogramming.com/windows-programming/66566-compiling-project-problemos.html | CC-MAIN-2017-39 | refinedweb | 366 | 62.14 |
I'm very near to regaining my 24.3 Gnus send mail functionality of being able to dynamically send from any of my multiple addresses and having the server information change accordingly. Using the function below it does indeed change the server info at send-time, and there is a matching entry in .authinfo. But I receive the (mostly successful) output: Sending via mail... Failed to match ... Failed to match ... Failed to match ... Failed to match ... Setting SMTP server to `smtp.gmail2.com:587' for user <CORRECT-MATCH> Opening TLS connection to `smtp.gmail2.com'... Opening TLS connection with `gnutls-cli --insecure -p 587 smtp.gmail2.com'...failed Opening TLS connection with `gnutls-cli --insecure -p 587 smtp.gmail2.com --protocols ssl3'...failed Opening TLS connection with `openssl s_client -connect smtp.gmail2.com:587 -no_ssl2 -ign_eof'...done And, at that, it freezes with the final line in my message buffer until I C-g to safety. Any ideas why it freezes at this point, having done the dynamic setting correctly? Note that I've successfully sent with this account today before I started working on the dynamic switching, so I know the settings are otherwise correct. Below are the functions I'm using: --8<---------------cut here---------------start------------->8--- (defvar smtp-accounts '( ("address@hidden" "mail.mine.com" 26);; Personal ("address@hidden" "mail.mine3.com" 26);; Professional ("address@hidden" "mail.mine2.com" 26) ;; Web development ("address@hidden" "smtp.gmail.com" 587) ;; Public )) (defun set-smtp (server port user) "Set related SMTP variables for supplied parameters. String `user' will not be evaluated." (setq smtpmail-smtp-server server smtpmail-smtp-service port) (message "Setting SMTP server to `%s:%s' for user `%s'." server port user)) (defun change-smtp () "Change the SMTP server according to the current from line." (save-excursion (cl-loop with from = (save-restriction (message-narrow-to-headers) (message-fetch-field "from")) for (address server port) in smtp-accounts do (if (string-match address from) (return (funcall 'set-smtp server port address)) (message "Failed to match %s with %s" address from)) finally (error "Cannot infer SMTP information.")))) (add-hook 'message-send-mail-hook 'change-smtp) --8<---------------cut here---------------end--------------->8--- | https://lists.gnu.org/archive/html/help-gnu-emacs/2015-02/msg00164.html | CC-MAIN-2019-30 | refinedweb | 358 | 52.46 |
Barcode Software
reportviewer barcode font
Risk Analysis in Software
Integrate barcode 39 in Software Risk Analysis
Downloaded from Digital Engineering Library @ McGraw-Hill () Copyright 2004 The McGraw-Hill Companies. All rights reserved. Any use is subject to the Terms of Use as given at the website.
using barcode generator for .net windows forms control to generate, create bar code image in .net windows forms applications. recognise
BusinessRefinery.com/ bar code
asp.net barcode label printing
using logic asp.net web pages to display bar code with asp.net web,windows application
BusinessRefinery.com/ barcodes
1.5M DS-1
how to use barcode reader in asp.net c#
Using Barcode decoder for right .net framework Control to read, scan read, scan image in .net framework applications.
BusinessRefinery.com/ bar code
create barcode generator c#
use .net barcode drawer to attach barcode on c# trial
BusinessRefinery.com/ bar code
Static Routes
using barcode creation for visual studio .net crystal report control to generate, create barcode image in visual studio .net crystal report applications. dlls
BusinessRefinery.com/ barcodes
birt barcode plugin
use birt reports barcode development to assign barcodes on java input
BusinessRefinery.com/ bar code
CorelDRAW X4: The Official Guide
to print qr code iso/iec18004 and qr code iso/iec18004 data, size, image with .net barcode sdk compatible
BusinessRefinery.com/QR-Code
using barcode implement for excel spreadsheets control to generate, create qr codes image in excel spreadsheets applications. packages
BusinessRefinery.com/qr barcode
For a given total rise h for angle q0 and any two of the b angles given one can solve for all angles and all values of the derivative curves.
to deploy qrcode and qr code jis x 0510 data, size, image with excel barcode sdk files
BusinessRefinery.com/Denso QR Bar Code
qr-codes data correct in .net
BusinessRefinery.com/qrcode
Securing the System
crystal reports qr code
use .net vs 2010 qr-code integrating to generate qr-codes in .net formula
BusinessRefinery.com/qr bidimensional barcode
to display qr codes and qr data, size, image with vb barcode sdk button
BusinessRefinery.com/Quick Response Code
Temperature Conversions
using barcode integrating for office excel control to generate, create pdf417 image in office excel applications. preview
BusinessRefinery.com/PDF417
ssrs code 39
using image sql server to add barcode 39 for asp.net web,windows application
BusinessRefinery.com/Code39
BellCore developed transaction protocols to handle the call processing, using an 1129protocol specification. The 1129 transaction is triggered when a call comes in. It is initiated with the delivery of a message from the IP to the SCP . Thereafter, the transaction continues with queries issued by the SCP and synchronous responses to these queries returned by the IP, reporting results of the requested action. The BellCore recommendations call for multiple intelligent peripherals within the network. Each is capable of working with multiple independent service control points via communication across multiple data paths. Each IP operates as though it is distributed across multiple host platforms interconnected by multiple LANs. Introducing IP into an AIN environment is a major expense, requiring significant investment in hardware, networking, and supporting software. The BellCore philosophy is to provide redundant components and data paths eliminating single points of failure wherever possible. However, many situations exist whereby an IP or SN provides a service, yet the service does not warrant redundant infrastructure. Therefore, a solution is required for the IP or SN to provide suitable reliability inherently.
.net code 128 reader
Using Barcode recognizer for custom .net framework Control to read, scan read, scan image in .net framework applications.
BusinessRefinery.com/code 128b
crystal reports data matrix
use .net crystal report datamatrix 2d barcode encoder to draw datamatrix in .net jpg
BusinessRefinery.com/Data Matrix 2d barcode
Test case selection. The last step before testing can begin is test case selection. The operator must tell the ETS which cases should be run during the current test campaign. Some test cases might not be selectable, such as those where the IUT has not implemented an optional feature. Among those test cases that are selectable, the operator might opt to run only those that concentrate on particular aspects of the IUT. Figure 6.8 shows a Test Suite Browser screen for the selection of test cases. This notion of selectability is clearly defined in conformance testing. For each test case, there is a boolean expression (called a test case selection expression) that must produce the value True before the test case can be selected. Each selection expression is based on one or more test suite parameters. Consider the test case selection example shown in Figure 6.9. As you can see, test case Ver_VC_Only_F4 can be selected only if the IUT supports VC service but not VP service. In addition, most ETS packages allow the operator to disregard the test case selection expressions and to select any test case desired (such as for testing abnormal situations). Running the test. As soon as one or more test cases have been selected, the test suite can be run. During execution, the tester sends PDUs to the IUT, analyzes its reaction (the contents of the cells sent by the IUT, time when these PDUs are sent, etc.), compares the expected and the observed behavior of the IUT, assigns a verdict to the test case, displays and records on disk all protocol data exchanged, and proceeds to the next test case. This type of testing is called stimulus-and-response testing because the tester expects a response from the IUT each time a stimulus is sent.
ssrs fixed data matrix
using barcode creation for sql database control to generate, create gs1 datamatrix barcode image in sql database applications. rotation
BusinessRefinery.com/barcode data matrix
using barcode implement for microsoft excel control to generate, create 3 of 9 image in microsoft excel applications. activity
BusinessRefinery.com/bar code 39
A network includes all of the hardware and software components used to
generate code 39 barcode in c#
generate, create 3 of 9 barcode classes none for .net c# projects
BusinessRefinery.com/Code-39
using byte excel microsoft to encode ecc200 with asp.net web,windows application
BusinessRefinery.com/Data Matrix ECC200
HISTORY
1. Simplify these logarithmic expressions. a3 b 2 (a) ln 5 c d log2 (a3 b) (b) log3 (ab2 ) (c) ln[e 2x z3 w 2 ] (d) log10 [1000w 100] 2. Solve each of these equations for x. (a) 2x 3 x = 2x e 2 2x (b) x 2x = 10x 10 3 5 (c) 22x 33x 44x = 6 5 2 (d) 2x 3x = x x 3 2 3 e 3. Calculate each of these derivatives. d (a) ln[cos(x 2 )] dx x3 d ln (b) dx x 1 d cos(e x ) (c) e dx d cos(ln x) (d) dx 4. Calculate each of these integrals. (a) (b) (c)
Standard ACLs can filter only on the source IP address. If you omit the
using namespace std;
The C# Language
Station 6 Station 5
This program introduces several new concepts. First, the statement
Figure 10-1. An application with multiple connections
Standards
More Code 39 on None
how to generate barcode in rdlc report: Threat Forecasting Data Is Sparse in Software Access ANSI/AIM Code 39 in Software Threat Forecasting Data Is Sparse
how to use barcode in rdlc report: Risk Transfer in Software Access 39 barcode in Software Risk Transfer
how to use barcode in rdlc report: Transfers and Reassignments in Software Render USS Code 39 in Software Transfers and Reassignments
how to use barcode in rdlc report: 2: IT Governance and Risk Management in Software Insert 3 of 9 barcode in Software 2: IT Governance and Risk Management
how to use barcode in rdlc report: ISO 20000 in Software Generator barcode 3 of 9 in Software ISO 20000
rdlc barcode font: 2: IT Governance and Risk Management in Software Drawer Code 39 Full ASCII in Software 2: IT Governance and Risk Management
word 2010 code 39 barcode: Reviewing Documentation and Records in Software Encoding 3 of 9 barcode in Software Reviewing Documentation and Records
rdlc barcode font: Notes in Software Draw barcode code39 in Software Notes
rdlc barcode font: Resource Planning in Software Receive 3 of 9 barcode in Software Resource Planning
rdlc barcode font: CISA Certified Information Systems Auditor All-in-One Exam Guide in Software Develop 39 barcode in Software CISA Certified Information Systems Auditor All-in-One Exam Guide
rdlc barcode font: G14, Application Systems Review in Software Assign ANSI/AIM Code 39 in Software G14, Application Systems Review
rdlc barcode font: P6, Firewalls in Software Writer 3 of 9 in Software P6, Firewalls
rdlc barcode font: 3: The Audit Process in Software Implement Code-39 in Software 3: The Audit Process
how to print barcode in rdlc report: 3: The Audit Process in Software Develop Code 3 of 9 in Software 3: The Audit Process
how to print barcode in rdlc report: 3: The Audit Process in Software Produce 3 of 9 barcode in Software 3: The Audit Process
how to print barcode in rdlc report: Audit Risk and Materiality in Software Integration Code-39 in Software Audit Risk and Materiality
how to print barcode in rdlc report: 3: The Audit Process in Software Attach Code 3 of 9 in Software 3: The Audit Process
how to print barcode in rdlc report: What s in a Title in Software Access 3 of 9 in Software What s in a Title
how to print barcode in rdlc report: Project Roles and Responsibilities in Software Get Code 39 in Software Project Roles and Responsibilities
how to print barcode in rdlc report: CISA Certified Information Systems Auditor All-in-One Exam Guide in Software Render ANSI/AIM Code 39 in Software CISA Certified Information Systems Auditor All-in-One Exam Guide
Articles you may be interested
.net barcode sdk open source: C H A P T E R in Software Deploy barcode code 128 in Software C H A P T E R
progress bar code in vb.net 2008: This program produces the following output. in Java Render PDF-417 2d barcode in Java This program produces the following output.
creating qrcodes in excel: Final Exam in Software Attach QR-Code in Software Final Exam
ssrs export to pdf barcode font: How Arguments Are Passed in .net C# Development QR-Code in .net C# How Arguments Are Passed
code 128 java free: Number of BDAV/BDMV elements in Software Creation qr codes in Software Number of BDAV/BDMV elements
create qr codes from excel file: CAM DESIGN HANDBOOK in Software Implement DataMatrix in Software CAM DESIGN HANDBOOK
how to create barcode in vb net 2008: Skills and Careers in the Game Industry in Software Generating code 128 barcode in Software Skills and Careers in the Game Industry
barcode recognition vb.net: PART I PART I PART I in Software Encoding QR-Code in Software PART I PART I PART I
print barcode rdlc report: Requiring Address Translation in Software Generating Data Matrix barcode in Software Requiring Address Translation
generate bar code in vb.net: Protocol Analysis 548 Basic Telecommunications Technologies in Software Writer USS Code 128 in Software Protocol Analysis 548 Basic Telecommunications Technologies
barcode font vb.net: in .NET Use data matrix barcodes in .NET
barcode label printing in vb.net: What is the prevalence of stress incontinence in Android Drawer QRCode in Android What is the prevalence of stress incontinence
create barcodes in vb.net: Installing the Universal Printer in Software Integrate QR-Code in Software Installing the Universal Printer
print barcode labels in vb.net: Fiber Optic Network Elements Fiber Optic Network Elements 485 in Software Integration Code 128 in Software Fiber Optic Network Elements Fiber Optic Network Elements 485
generate barcode image vb.net: Manufacturing test elements in Software Connect Code-128 in Software Manufacturing test elements
make code 39 barcodes excel: CHANGING THE SETTINGS in Software Implementation QR Code JIS X 0510 in Software CHANGING THE SETTINGS
java barcode api open source: Build Your Own Combat Robot in Software Develop EAN-13 in Software Build Your Own Combat Robot
rdlc barcode c#: Tunnel Limits in Software Deploy Data Matrix 2d barcode in Software Tunnel Limits
how to create barcodes in visual basic .net: Citrix XenApp Platinum Edition for Windows: The Official Guide in Software Creator Quick Response Code in Software Citrix XenApp Platinum Edition for Windows: The Official Guide
free download barcode scanner for java mobile: Not Better Enough in Software Integration QR Code JIS X 0510 in Software Not Better Enough | http://www.businessrefinery.com/yc3/476/28/ | CC-MAIN-2022-40 | refinedweb | 2,092 | 51.38 |
async_subprocess is a simple wrapper around Python's subprocess.Popen class. You use it just like you would use subprocess.Popen; there are only two major differences:
- You can only pass None or PIPE as values for stdout, stdin, stderr.
- asyncomm() returns immediately with whatever data is available, rather than waiting for EOF and process termination. As such, you can now call asyncomm() many times on the same object.
async_subprocess is beta software, so it might still be a bit buggy. It has been tested on the following configurations:
- Linux (Fedora 15), Python 2.7.1
- Linux (Fedora 15), Python 3.2
- Windows Vista 32, Python 2.7.3
Example usage:
from async_subprocess import AsyncPopen, PIPE args = ("echo", "Hello World!") proc = AsyncPopen(args, stdout=PIPE) stdoutdata, stderrdata = proc.asyncomm() print stdoutdata # should print "Hello World!"
What's New
Version 0.5 * (techtonik) Implement standard communicate() using non-blocking layer.
Version 0.4 * (techtonik) Non-blocking communicate() is renamed to asyncomm() to allow
making AsyncPopen() class a drop-in Popen() replacement that doesn't break existing codebase.
Version 0.3 * (techtonik) Change communicate() to return empty strings if pipes are
alive and empty, and None if they are dead or closed.
- (techtonik) Wrap Popen.stdin to make sure that programs closing stdin directly do this in a threadsafe manner.
Version 0.2.3 * (techtonik) Fixed wrong lock being set in communicate() for stdout pipe. * (techtonik) Added tests.
Version 0.2 * Got rid of the stray debug print statement that was accidentally left in
version 0.1. Sorry about that, it's gone now, and 0.2 has been checked for other stray debug statements.
- Support for Python 3 added. | https://bitbucket.org/techtonik/absub/src | CC-MAIN-2015-22 | refinedweb | 278 | 60.82 |
Content-type: text/html
curs_outopts, clearok, idcok, idlok, immedok, leaveok, nl, nonl, setscrreg, wsetscrreg, scrollok - Routines for controlling output options for a Curses terminal
#include <curses.h>
int clearok(
WINDOW *win,
bool bf
);
void idcok(
WINDOW *win,
bool bf
);
int idlok(
WINDOW *win,
bool bf
);
void immedok(
WINDOW *win,
bool bf
);
int leaveok(
WINDOW *win,
bool bf
);
int nl(
void
);
int nonl(
void
);
int setscrreg(
int top,
int bot
);
int wsetscrreg(
WINDOW *win,
int top,
int bot
);
int scrollok(
WINDOW *win,
bool bf
);
Curses Library (libcurses)
Interfaces documented on this reference page conform to industry standards as follows:
clearok, idlok, leaveok, nl, nonl, setscrreg, wsetscrreg: XPG4, XPG4-UNIX
idcok, immedok: XPG4-UNIX
Refer to the
standards(5)
reference page for more information
about industry standards and associated tags.
These routines set options that deal with output within Curses. Unless stated otherwise, all options are initially FALSE. It is not necessary to turn these options off before calling endwin.
The clearok routine enables and disables the clearok option. If this option is enabled (bf is set to TRUE), the next call to wrefresh with this window clears the screen completely and redraws.
The idlok routine enables and disables the idlok option. If this option is enabled (bf is set to TRUE), Curses considers using the hardware insert/delete line feature if the terminal is so equipped. If the idlok is disabled (bf is set to FALSE), Curses very seldom uses this hardware feature. (The insert/delete character feature is always considered.) This option should be enabled only if the application needs the insert/delete line feature, for example, for a screen editor. The option is disabled by default because insert/delete line tends to be visually annoying when used in applications where it is not needed. If the insert/delete line feature cannot be used, Curses redraws the changed portions of all lines.
The idcok routine enables and disables the idcok option. If this option is enabled (bf is set to TRUE), Curses considers using the hardware insert/delete character feature if the terminal is so equipped. The idcok option is enabled by default.
The immedok routine enables and disables the immedok option. If this option is enabled (bf is set to TRUE), any change in the window image, such as the ones caused by waddch, wclrtobot, wscrl, and similar routines, automatically causes a call to wrefresh. Enabling the immedok option may degrade performance considerably due to repeated calls to wrefresh. The option is disabled by default.
The leaveok routine enables and disables the leaveok option. Usually, the hardware cursor is left at the location of the cursor in the window being refreshed. When the leaveok option is enabled, the cursor is left wherever the update happens to leave it. Because this option reduces the need for cursor motions, it is useful for applications that do not use the cursor. If possible, Curses makes the cursor invisible when leaveok is enabled.
The setscrreg and wsetscrreg routines set a software scrolling region in a window. The top and bot parameters are the line numbers of the top and bottom margin of the scrolling region. (Line 0 is the top line of the window.) If a scrolling region is set and scrollok is enabled, an attempt to move off the bottom margin line causes all lines in the scrolling region to scroll up one line. Only the text of the window is scrolled. (Note that this has nothing to do with the use of a physical scrolling region in a terminal, such as the VT100. If the idlok option is enabled and the terminal has either a scrolling region or an insert/delete line capability, then the output routines will probably use one of these.)
The scrollok routine enables and disables the scrollok option. This option controls what happens when the window cursor is moved off the edge of the window or scrolling region, either as a result of a newline action on the bottom line or typing the last character of the last line. If the scrollok option is disabled, (bf is set to FALSE), the cursor is left on the bottom line. If the option is enabled, (bf is set to TRUE), Curses calls wrefresh on the window, and the physical terminal and window are scrolled up one line. (Note that in order to get the physical scrolling effect on the terminal, applications must also call idlok.)
The
nl
and
nonl
routines control
whether the newline character is translated into carriage return and linefeed
on output, and whether carriage return is translated into newline on input.
By default, the translations do occur. When the application disables these
translations by using
nonl,
curses
is
able to make better use of the linefeed capability, resulting in faster cursor
motion.
The header file <curses.h> automatically includes the header file <stdio.h>.
Note that nl, nonl and setscrreg may be macros.
The
immedok
routine is useful for windows that are
created as terminal emulators.
The
nl,
nonl,
setscrreg, and
wsetscrreg
routines return
OK
upon success and
ERR
upon failure.
All other routines that return an integer always return
OK.
Functions: curses(3), curs_addch(3), curs_clear(3), curs_initscr(3), curs_refresh(3), curs_scroll(3)
Others: standards(5) | http://backdrift.org/man/tru64/man3/wsetscrreg.3.html | CC-MAIN-2017-22 | refinedweb | 871 | 61.97 |
Read this: sublimetext.com/blog/article ... ith-pythonand then: viewtopic.php?f=6&t=4979&start=0#p22442
# `inspect.getfile`
def getfile(object):
"""Work out which source or compiled file an object was defined in."""
if ismodule(object):
if hasattr(object, '__file__'):
return object.__file__
raise TypeError('{!r} is a built-in module'.format(object))
if isclass(object):
object = sys.modules.get(object.__module__)
if hasattr(object, '__file__'):
return object.__file__
raise TypeError('{!r} is a built-in class'.format(object))
if ismethod(object):
object = object.im_func
if isfunction(object):
object = object.func_code
if istraceback(object):
object = object.tb_frame
if isframe(object):
object = object.f_code
if iscode(object):
return object.co_filename
raise TypeError('{!r} is not a module, class, method, '
'function, traceback, frame, or code object'.format(object)).
I don't.. I don't know what's going on here...
castles is having a moment...
@castles, you might want to start explaining what the hell you're doing/talking about...
Also, you might want to get your head checked out
I'm so confused
J.C. Redish's law of useability performance states:
What the hell does that even mean?
People are lazy. Imagine you are working your arse off and in the middle ofsomething you think, Jeez, I wish I had a plugin, or could tweak an existingplugin, to help me with this task.
Do you
Reach for you handy command source teleportation plugin, make some tweaks to a plugin, smash out the work, quicker than it would have taken you otherwise and have an enhanced arsenal for the next time you face that situation?
Just tough it out and do shit manually to Get Shit Done By the time you have navigated to the source (Which file was it in again? FFFF) you going to get distracted anyway right. Stop fucking around and nail it up and get the money.
Get Shit Done
How the hell does any of this relate to unicode in sys.path I imagine?Everything is connected. A tiny detail can have big implications.
sys.path
What is sys.path? It's basically an array of directories which Python willsearch, in turn, for modules. It can take relative (to the working directory) orabsolute paths.
If you'd read the 'not all beer and sunshine' post you'll know that Sublimeemploys some import chicanery to preemptively work around possible unicodecharacters in sys.path which aren't supported on windows. Before loading eachplugin, sublime makes sure to change the directory to the folder containing theplugin module.
What are the implications of this? Normally it's possible to take a Pythonclass/module and use introspection to determine the file it was declared in.However, when you use relative paths on sys.path, namely '.' as Sublime does,what's determined as a file for an object is a relative path.
If you been paying attention, you'll recall that Sublime changes directorybefore loading each and every plugin. Therefore, the rug has been swept out fromunderneath all the relative paths and they float without anchor.
Let's try and make sense of the above:
"todays (psychotic) episode was brought to you by the letter f (__file___)"
You can see it's a QuickPanel containing a bunch of event handlers, commandsgrouped by command/event type (window|application|text|on_.*)
Upon quickpanel selection it jumps to the source of PackageControlsInstallPackageCommand
InstallPackageCommand
How does it do that? You guessed it. It uses the introspection capabilitiesexposed in the std lib inspect module.
inspect
Suddenly the following code makes sense:
Can you see the object.__file__ mentioned above? When a module is imported,rooted from a relative path on sys.path, at least on windows, you'll get arelative path like say .\Package Control.py
object.__file__
.\Package Control.py
That's not really enough information to navigate to the source of a plugin isit?
For the navigation to work in the image above a quick patch was applied tosublime_plugin.py to monkeypatch imported modules with the __file__ attribute.
__file__
Ok, so if that's all you need to do, why not just do that then?
Recall J.C. Redish's, "Something is only useful if it's readily useable". Nowimagine you were working again, and you'd just successfully teleported to thecommand to edit it and after some tinkering you got a half useful exceptionmessage.
Traceback (most recent call last):
File ".\sublime_plugin.py", line 345, in run_
File ".\sublimezen.py", line 118, in wrapper
File ".\sublimezenplugin.py", line 208, in run
File ".\zencoding\__init__.py", line 75, in run_action
File ".\zencoding\actions\basic.py", line 96, in match_pair
ZeroDivisionError: integer division or modulo by zero
You thought that monkeypatch would have taken care of it yeah?
>>> import sublimezenplugin
>>> sublimezenplugin.__file__
u'C:\\Users\\nick\\AppData\\Roaming\\Sublime Text 2\\Packages\\ZenCoding\\sublimezenplugin.py'
Hrmm, the file is being set but something is amiss. Rememberobject.co_filename from above? Wonder if he's the culprit?
object.co_filename
Your coworker is getting ancy. Fuck.
Now if the paths on sys.path were absolute paths that traceback would containa lot more useful information. You could just copy paste the full paths to thefiles. But, I mean, fuck that.
Newsflash, it's 2012, and Python has had sys.excepthook for ages
sys.excepthook
sys.excepthook.
Now recall a relevant section in inspect.getfile and pay attention to traceback
inspect.getfile
traceback
if istraceback(object):
object = object.tb_frame
if isframe(object):
object = object.f_code
if iscode(object):
return object.co_filename
So what if you had an sys.excepthook handler that opened an output panel (thinkTools -> Build ) and allowed you to navigate the traceback with f4/shift+f4bindings? What if it automatically navigated to the most recent call last?
Tools -> Build
most recent call last
So what the fuck is the Count trying to say? And where does he pull thesenumbers from anyway? I'd guess he values numbers over intuition. Hes' called the Count after all
Count
Bert is saying in the remote case that someone actually IS on windows (roughlyhalf of sublime users are on OSX) and does have unicode in their sys.path andDOES have short path names disabled they can always use the portableinstallation.
He let it go without saying that if you can manage JSON for configuration thenyou'll be likely be comfortable using a portable installation.
Now, that's something only tangentially related. If you look at @wbondscommunity package index you'll see that there's currently at least 188 packages available.
Captain obvious would probably say "It's only early days." How many packageswill there be at the end of the year?
You could point out that namespacing was invented for a reason.
>>> import this
The Zen of Python, by Tim Peters
...
Namespaces are one honking great idea -- let's do more of those!
@castles:I could see why you decided not to explain all that in the beginning. It makes so much sense from just looking at sesame street characters.
haha But seriously, shit's so much easier when it's easier
From what I can tell you added code to zencoding that reports statistics about users and their sublime.packages_path() to see if it is feasible for Jon to change the package system to use python packages instead of importing files from the current directory.
Yep, inspired by Package Control, I wonder what it itself would have to say.
import sublime
WINDOWS = sublime.platform() == 'windows'
if WINDOWS: from ctypes import windll, create_unicode_buffer
def importable_path(unicode_file_name):
try:
if WINDOWS: unicode_file_name.encode('ascii')
return unicode_file_name
except UnicodeEncodeError:
buf = create_unicode_buffer(512)
return( buf.value if (
windll.kernel32
.GetShortPathNameW(unicode_file_name, buf, len(buf)) )
else False ) | https://forum.sublimetext.com/t/unicode-sys-path/4165/8 | CC-MAIN-2016-44 | refinedweb | 1,270 | 52.36 |
Python 3.11 pre release has been released. The update log mentions:
Python 3.11 is up to 10–60% faster than Python 3.10. On average, we measured a 1.25x speedup on the standard benchmark suite. See Faster CPython for details. — Python 3.11 Changelog.
The speed of Python in the production system has always been compared and roast by novices., Because there is really no block, in order to solve the performance problem, we always need to use Python or Tuplex to convert key code.
Python 3.11 has specially enhanced this optimization. We can actually verify whether there is an official average improvement of 1.25 times?
As a data science, I'm more looking forward to seeing if it has any improvement in Pandas processing DF.
First, let's try some Fibonacci sequences.
Install Python 3.11 pre release
For windows, you can download the installation file from the official, and ubuntu can be installed with apt command
sudo apt install Python3.11
We can't use 3.11 directly in our work. Therefore, you need to create a separate virtual environment to save two Python versions.
$ virtualenv env10 --python=3.10 $ virtualenv env11 --python=3.11 # To activate v11 you can run, $ source env11/bin/activate
How fast is Python 3.11 compared to Python 3.10?
I created a small function to generate some Fibonacci numbers.
def fib(n: int) -> int: return n if n < 2 else fib(n - 1) + fib(n - 2)
Use Timeit to run the Fibonacci number generator above to determine the execution time. The following command repeats the build process ten times and displays the best execution time.
# To generate the (n)th Fibonacci number python -m timeit -n 10 "from fib import fib;fib(n)"
Here are the results on Python 3.10 and Python 3.11
Python 3.11 outperforms Python 3.10 in every run. The execution time is about half that of version 3.11.
I actually want to confirm its performance on the Pandas mission. Unfortunately, so far, Numpy and Pandas do not support Python 3.11.
Bubble sorting
Since it is impossible to benchmark Pandas, let's try the performance comparison of common calculations to measure the time spent sorting one million numbers. Sorting is the most commonly used operation in daily use. I believe its results can provide us with a good reference.
import random from timeit import timeit from typing import List def bubble_sort(items: List[int]) -> List[int]: n = len(items) for i in range(n - 1): for j in range(0, n - i - 1): if items[j] > items[j + 1]: items[j], items[j + 1] = items[j + 1], items[j] numbers = [random.randint(1, 10000) for i in range(1000000)] print(timeit(lambda:bubble_sort(numbers),number=5))
The above code generates a million random numbers. The timeit function is set to measure only the duration of the bubble sort function execution.
give the result as follows
Python 3.11 takes only 21 seconds to sort, while 3.10 takes 39 seconds.
Are there performance differences in I/O operations?
Is there a difference in the speed of reading and writing information between the two versions of the disk. When pandas reads df and deep learning reads data, I/O performance is very important.
Two programs are prepared here. The first one writes one million files to disk.
from timeit import timeit statement = """ for i in range(100000): with open(f"./data/a{i}.txt", "w") as f: f.write('a') """ print(timeit(statement, number=10))
We use the timeit function to print the duration. You can repeat the task multiple times and take the average value by setting the number parameter.
The second program also uses the timeit function. But it only reads a million files.
from glob import glob from timeit import timeit file_paths = glob("./data/*.txt") statement = f""" for path in {file_paths}: with open(path, "r") as f: f.read() """ print(timeit(statement, number=10))
Here is the output of the two versions we run.
Although Python 3.10 seems to have an advantage over Python 3.11, it doesn't matter. Because running this experiment many times will draw different conclusions, but it is certain that I/O has not improved.
summary
Python 3.11 is still a pre release version. But it seems to be a great version in Python history. It is 60% faster than the previous version, which is still OK. Some of our experiments above have also proved that Python 3.11 is indeed faster.
Translator's note: the previous project was upgraded to 3.6 a few days ago, and the new projects were developed with 3.9. Now 3.11 will be released soon, and the performance has been greatly improved. What are you going to do, uncle tortoise 😂
Author: Thuwarakesh Murallie | https://algorithm.zone/blogs/using-bubble-sorting-and-recursive-function-comparison-test.html | CC-MAIN-2022-27 | refinedweb | 812 | 69.28 |
Unified Data Access for .NETBy Philip Miseldine
Nearly all of today’s Web applications use some sort of database to store persistent data. .NET applications often use SQL Server, PHP applications mostly use MySQL, and so on. When deploying an application to clients, however, there are many occasions on which they may wish to use a different database than that which your application has implemented. They might use Oracle throughout their enterprise, for example, and simply will not use your system as it stands without support for it. It is also far better practice to give the end-user choice rather than tying your system to a single third party database.
Normally, this means a great deal of recoding to make your application talk to different DBMSs (Database Management Systems). The following article will show how, with just a little planning, you can make your applications support almost every professional DBMS made today, out of the box.
ADO.NET: Almost There
ADO.NET has certainly made matters easier for the developer. DataReader and DataSet offer types that can be manipulated and queried without our having to worry about the underlying access method. Even so, to fill either structure, traditionally we need to use different types to handle different databases. ADO.NET gives us access to SQL Server, OLE DB, and ODBC, with many other database providers available (Oracle and MySQL, for instance).
using System.Data.SqlClient; // SQL Server 7.0/2000
using System.Data.Odbc; // ODBC
SqlConnection sqlConn = new SqlConnection(connectionString);
OdbcConnection odbcConn = new OdbcConnection(connectionString);
Imagine we’d written an application that connected to SQL Server, and a new client wanted us to use their proprietary DBMS that connects through ODBC. To do this, we’d have to convert types throughout our application from those contained in System.Data.SqlClient to System.Data.Odbc, or place conditional statements throughout our code, leaving behind a lot of essentially repeated code. Neither option is very appetising for programmers.
What is interesting, however, is that each provider inherits the same set of interfaces that the framework provides for, and, as such, they can be handled in a uniform manner; all connection objects (like SqlConnection, and OdbcConnection) inherit from the IDbConnection interface, for example.
What we really want is a class that we can tell, “I want to use this type of database” only once, then ask it to “run this query and return a DataSet or DataReader with the results” without worrying about the underlying connection. To be more specific, we want a class that will create an instance of an object based on criteria we give it, with an unknown and unseen implementation behind it. This is known as a factory pattern.
The Factory Pattern
Erich Gamma defines the Factory pattern as a “method [that] lets a class defer instantiation to subclasses.” Let’s have a look at a real-world example to understand what a factory pattern can mimic, and where it can be used.
Imagine a biscuit factory. This factory can produce a wide variety of different biscuits, each with their own recipe; the way in which a chocolate chip cookie is made, for example, differs from the way a ginger snap is made. The factory management doesn’t need to know how to make the biscuits, they just need to be able to package them up and ship them to their clients’ shops. The workers (be they machines or people) do require the recipe of the biscuit. They follow the instructions set out in the recipe, and produce a biscuit for the manager. If the manager needs to make a different biscuit, he tells the workers to use a different recipe, and a different type of biscuit is produced.
So, to create different types of biscuit, all that needs to change is the recipe (and of course, making sure we have all the ingredients to hand!). In this way, the manager defers creation of the biscuits to their workers. Moving higher up the supply chain, we can consider the purchasing clients (the shops and businesses who purchase the biscuits), and those who consume the biscuits (the public). Again, they do not need to know the biscuit recipe (indeed it is advantageous that we keep the recipe secret so as to protect our business). They simply ask for the creation of the biscuits.
So now we have a structure for the factory. We want a biscuit form to which all other biscuits will have to conform. This allows our management to handle biscuits in the same manner (to the manager, a ginger snap and a cookie are essentially the same). Specialised biscuits, like our cookies, then inherit all the properties of our standard biscuit, but have different behaviour (the cookie might be more chewy than a ginger snap, or have a different taste). This can be represented in UML as the following:
The Factory class defers creation to the Biscuit class (the makeBiscuit method), but specifies which type of biscuit it wants (the biscuitType parameter). The biscuit class then creates the appropriate biscuit (createGingerSnap, or createCookie methods), and returns its creation.
As both GingerSnap and Cookie are of type Biscuit, the Factory does not need to worry what type of biscuit it has, as they all can be treated as biscuits, rather than ginger snaps or cookies.
From Biscuits to Databases
Enough about biscuits! Back to our original problem: how to unify data access. I hope the example we’ve talked about so far has given you some idea of how to do this. Replacing our general biscuit with a general database connection we wish to achieve, and our different types of biscuit (ginger snap and cookie) as different types of databases (SQL Server, ODBC), we can call on the factory to produce a database connection for us with the functionality we require, without having to know the underlying specifics we’re using (the recipe). All connection objects inherit from the IDbConnection interface already, and therefore different connections can share the same code; all we need to do is tell the factory which type of connection we wish to use.
We can tell our factory which type to use through a parameter passed to a makeConnection method, which will return an IDbConnection to represent our database connection. We can make this static, as there’s no need for the factory class itself to be created manually: we need only one factory, which never changes itself:
public class ConnectionFactory
{
public enum connectionTypes {SQLServer, ODBC};
public static IDbConnection makeConnection
(int connectionType, string connectionString)
{
switch (connectionType)
{
case ((int)connectionTypes.SQLServer):
return (IDbConnection)new SqlConnection(connectionString);
case ((int)connectionTypes.ODBC):
return (IDbConnection)new OdbcConnection(connectionString);
}
//no match
return null;
}
}
Notice the use of the
enum or enumerator. An enumerator is a nice way of giving integers a friendly name. In the code above,
connectionTypes.SQLServer equates to 0,
connectionTypes.ODBC equates to 1 and so forth. This way, a user doesn’t need to remember a specific string or an anonymous integer, which helps with readability and validation.
The factory queries the value
connectionType passed to it and creates the appropriate connection object. As each of these connection objects is of a type IDbConnection, we can cast the created object to this type so that we can handle all of the connection objects in the same way, no matter what their implementation… our ultimate goal!
Open For Business
Now, let’s see how we can use this in our application. Before, if we wished to connect to SQL Server, we’d have created a SqlConnection. Now, we can now use our factory:
private void Form1_Load(object sender, System.EventArgs e)
{
string connectionString = "";
IDbConnection conn = ConnectionFactory.makeConnection((int)ConnectionFactory.connectionTypes.SQLServer, connectionString);
IDbCommand comm = conn.CreateCommand();
comm.CommandText = "select * from mytable";
conn.Open();
IDataReader dr = comm.ExecuteReader();
//do what we want to do with the datareader
//finally close the connection.
conn.Close();
}
Notice the only line that’s now dependent on the database type we wish to connect to is as follows:
IDbConnection conn = ConnectionFactory.makeConnection((int)ConnectionFactory.connectionTypes.SQLServer, connectionString);
Here, our specification of database type is made through a flexible parameter.
All other calls to our database are now abstracted from the actual database type we’re connecting to. Our factory returns to us an object that will use the “recipe” for SQL Server, yet we can easily instruct it to return us an object based on the “recipe” for ODBC connections, and reuse the same code. A simple if statement can be used to select the which database type we wish to use:
string connectionString = "";
IDbConnection conn;
if (database == "SQL Server")
{
conn = ConnectionFactory.makeConnection((int)ConnectionFactory.connectionTypes.SQLServer, connectionString);
}
else
{
//if not SQL Server, then ODBC
conn = ConnectionFactory.makeConnection((int)ConnectionFactory.connectionTypes.ODBC, connectionString);
}
IDbCommand comm = conn.CreateCommand();
comm.CommandText = "select * from mytable";
conn.Open();
IDataReader dr = comm.ExecuteReader();
conn.Close();
As you can see, once the conn object is created, we never have to think about what database we’re using, nor alter any code again: all code for database queries in our applications is now database independent, using the built-in interfaces provided by ADO.NET.
Notes
So why not write applications to communicate through ODBC? Most databases support it, after all. Take the SQL Server ADO.NET components. These allow applications to communicate with SQL Server through a TDS (Tabular Data Stream, which is the native SQL Server data format). This provides an estimated 30-40% speed increase over calls made through ODBC [Sack. J. (2003) SQL Server 2000 Fast Answers for DBAs and Developers. Curlingstone Publishing.]. Similar enhancements are available for many dedicated ADO.NET component sets. These sorts of performance gains are simply too big to ignore, when a unified data access framework, as outlined in this article is so easy to achieve.
There is plenty of room for improvement. For a more robust and advanced data layer, we should include factories for the IDbDataAdapter, IDataReader and IDbCommand. We could also use the Reflection classes to see exactly which types of database we can connect to. Indeed, you could write your own set of interfaces which expose only the functionality you need in your applications, and create them through a factory.
Summary
To me, the greatest benefit ASP.NET brings to a project is the power of excellent object orientation support. I hope this article has shown how easy it is to harness this power to make your applications extendible and reusable, with very little effort needed on your part. | https://www.sitepoint.com/unified-data-access-net/ | CC-MAIN-2017-09 | refinedweb | 1,754 | 53.41 |
Have you ever tried to multiply two NumPy arrays together and got a result you didn’t expect? NumPy’s multiplication functions can be confusing. In this article, we’ll explain everything you need to know about matrix multiplication in NumPy.
Watch the video where I go over the article in detail:
To perform matrix multiplication between 2 NumPy arrays, there are three methods. All of them have simple syntax. Let’s quickly go through them the order of best to worst. First, we have the
@ operator
# Python >= 3.5 # 2x2 arrays where each value is 1.0 >>> A = np.ones((2, 2)) >>> B = np.ones((2, 2)) >>> A @ B array([[2., 2.], [2., 2.]])
np.matmul()
>>> np.matmul(A, B) array([[2., 2.], [2., 2.]])
And finally
np.dot()
>>> np.dot(A, B) array([[2., 2.], [2., 2.]])
Why are there so many choices? And which should you choose? Before we answer those questions, let’s have a refresher on matrix multiplication and NumPy’s default behavior.
What is Matrix Multiplication?
If you don’t know what matrix multiplication is, or why it’s useful, check out this short article.
Matrices and arrays are the basis of almost every area of research. This includes machine learning, computer vision and neuroscience to name a few. If you are working with numbers, you will use matrices, arrays and matrix multiplication at some point.
Now you know why it’s so important, let’s get to the code.
numpy.array — Default Behavior
The default behavior for any mathematical function in NumPy is element-wise operations. This is one advantage NumPy arrays have over standard Python lists.
Let’s say we have a Python list and want to add 5 to every element. To do this we’d have to either write a for loop or a list comprehension.
# For loop - complicated and slow >>> a = [1, 1, 1, 1] >>> b = [] >>> for x in a: b.append(x + 5) >>> b [6, 6, 6, 6] # List comprehension - nicer but still slow >>> a = [1, 1, 1, 1] >>> b = [x + 5 for x in a] >>> b [6, 6, 6, 6]
Both of these are slow and cumbersome.
Instead, if
A is a NumPy array it’s much simpler
>>> A = np.array([1, 1, 1, 1]) >>> B = A + 5 >>> B array([6, 6, 6, 6])
And much much much faster
# Using a list of length 1,000,000 for demonstration purposes In [1]: a = list(range(100000)) In [2]: b = [] In [3]: %timeit for x in a: b.append(x + 5) 28.5 ms ± 5.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [4]: b = [] In [5]: %timeit b = [x+5 for x in a] 8.18 ms ± 235 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [6]: A = np.array(a) In [7]: %timeit B = A + 5 81.2 µs ± 2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Using arrays is 100x faster than list comprehensions and almost 350x faster than for loops.
If we want to multiply every element by 5 we do the same
>>> C = A * 5 array([5, 5, 5, 5])
The same applies for subtraction and division.
Every mathematical operation acts element wise by default. So if you multiply two NumPy arrays together, NumPy assumes you want to do element-wise multiplication.
>>> np.ones((2, 2)) * np.array([[1, 2], [3, 4]]) array([[1., 2.], [3., 4.]])
A core feature of matrix multiplication is that a matrix with dimension
(m x n) can be multiplied by another with dimension
(n x p) for some integers
m,
n and
p. If you try this with
*, it’s a
ValueError
# This would work for matrix multiplication >>> np.ones((3, 2)) * np.ones((2, 4)) ValueError: operands could not be broadcast together with shapes (3,2) (2,4)
This happens because NumPy is trying to do element wise multiplication, not matrix multiplication. It can’t do element wise operations because the first matrix has 6 elements and the second has 8.
Element wise operations is an incredibly useful feature.You will make use of it many times in your career. But you will also want to do matrix multiplication at some point.
Perhaps the answer lies in using the
numpy.matrix class?
Numpy.matrix
There is a subclass of NumPy array called
numpy.matrix. This operates similarly to matrices we know from the mathematical world. If you create some
numpy.matrix instances and call
*, you will perform matrix multiplication
# Element wise multiplication because they are arrays >>> np.array([[1, 1], [1, 1]]) * np.array([[1, 2], [3, 4]]) array([[1, 2], [3, 4]]) # Matrix multiplication because they are matrices >>> np.matrix([[1, 1], [1, 1]]) * np.matrix([[1, 2], [3, 4]]) matrix([[4, 6], [4, 6]])
But this causes some issues.
For example, if you have 20 matrices in your code and 20 arrays, it will get very confusing very quickly. You may multiply two together expecting one result but get another. The
* operator is overloaded. This results in code that is hard to read full of bugs.
We feel that this is one reason why the Numpy docs v1.17 now say:
It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future.
You may see this recommended in other places around the internet. But, as NumPy no longer recommends it, we will not discuss it further.
Now let’s look at some other methods.
Other Methods of Matrix Multiplication
There are 2 methods of matrix multiplication that involve function calls.
Let’s start with the one we don’t recommend
numpy.dot
As the name suggests, this computes the dot product of two vectors. It takes two arguments – the arrays you would like to perform the dot product on. There is a third optional argument that is used to enhance performance which we will not cover.
>>> vec1 = np.array([1, 2, 3]) >>> vec2 = np.array([3, 2, 1]) # Dot product is (1*3) + (2*2) + (3*1) = 3 + 4 + 3 = 10 >>> np.dot(vec1, vec2) 10
If you use this function with a pair of 2D vectors, it does matrix multiplication.
>>> three_by_two = np.ones((3, 2)) >>> two_by_four = np.ones((2, 4)) >>> output = np.dot(three_by_two, two_by_four) # We expect shape (3,2) x (2,4) = shape (3,4) >>> output.shape (3, 4) # Output as expected from matrix multiplication >>> output array([[2., 2., 2., 2.], [2., 2., 2., 2.], [2., 2., 2., 2.]])
This method works but is not recommended by us or NumPy. One reason is because in maths, the ‘dot product’ has a specific meaning. It is very different from multiplication. It is confusing to these mathematicians to see
np.dot() returning values expected from multiplication.
There are times when you can, and should, use this function (e.g. if you want to calculate the dot product) but, for brevity, we refer you to the official docs.
So you should not use this function for matrix multiplication, what about the other one?
Numpy.matmul
This is the NumPy MATrix MULtiplication function. Calling it with two matrices as the first and second arguments will return the matrix product.
>>> three_by_two = np.ones((3, 2)) >>> two_by_four = np.ones((2, 4)) >>> output = np.matmul(three_by_two, two_by_four) # Shape as expected from matrix multiplication >>> output.shape (3, 4) # Output as expected from matrix multiplication >>> output array([[2., 2., 2., 2.], [2., 2., 2., 2.], [2., 2., 2., 2.]])
The function name is clear and it is quite easy to read. This is a vast improvement over
np.dot(). There even are some advanced features you can use with this function. But for 90% of cases, this should be all you need. Check the docs for more info.
So is this the method we should use whenever we want to do NumPy matrix multiplication? No. We’ve saved the best ‘till last.
Python @ Operator
The
@ operator was introduced to Python’s core syntax from 3.5 onwards thanks to PEP 465. Its only goal is to solve the problem of matrix multiplication. It even comes with a nice mnemonic –
@ is * for mATrices.
One of the main reasons for introducing this was because there was no consensus in the community for how to properly write matrix multiplication. The asterisk
* symbol was competing for two operations:
- element wise multiplication, and
- matrix multiplication.
The solutions were function calls which worked but aren’t very unreadable and are hard for beginners to understand. Plus research suggested that matrix multiplication was more common than
// (floor) division. Yet this has its own syntax.
It is unusual that
@ was added to the core Python language when it’s only used with certain libraries. Fortunately, the only other time we use
@ is for decorator functions. So you are unlikely to get confused.
It works exactly as you expect matrix multiplication to, so we don’t feel much explanation is necessary.
# Python >= 3.5 # 2x2 arrays where each value is 1.0 >>> A = np.ones((2, 2)) >>> B = np.ones((2, 2)) >>> A @ B array([[2., 2.], [2., 2.]])
One thing to note is that, unlike in maths, matrix multiplication using
@ is left associative.
If you are used to seeing
AZx
Where A and Z are matrices and x is a vector, you expect the operation to be performed in a right associative manner i.e.
A(Zx)
So you perform
Zx first and then
A(Zx). But all of Python’s mathematical operations are left associative.
>>> a + b + c = (a + b) + c >>> a / b / c = (a / b) / c >>> a * b - c = (a * b) - c
A numerical example
# Right associative >>> 2 * (3 - 4) -2 # Left associative >>> (2 * 3) - 4 2 # Python is left associative by default >>> 2 * 3 - 4 2
There was no consensus as to which was better. Since everything else in Python is left associative, the community decided to make
@ left associative too.
So should you use
@ whenever you want to do NumPy matrix multiplication?
Which Should You Choose?
There is some debate in the community as to which method is best. However, we believe that you should always use the
@ operator. It was introduced to the language to solve the exact problem of matrix multiplication. There are many reasons detailed in PEP 465 as to why @ is the best choice.
The main reason we favour it, is that it’s much easier to read when multiplying two or more matrices together. Let’s say we want to calculate
ABCD. We have two options
# Very hard to read >>> np.matmul(np.matmul(np.matmul(A, B), C), D) # vs # Very easy to read >>> A @ B @ C @ D
This short example demonstrates the power of the
@ operator. The mathematical symbols directly translate to your code, there are less characters to type and it’s much easier to read.
Unfortunately, if you use an old version of Python, you’ll have to stick with
np.matmul().
Summary
You now know how to multiply two matrices together and why this is so important for your Python journey.
If in doubt, remember that
@ is for mATrix multiplication.
Where To Go From Here?
There are several other NumPy functions that deal with matrix, array and tensor multiplication. If you are doing Machine Learning, you’ll need to learn the difference between them all.
A good place to get a thorough NumPy education is the comprehensive Finxter NumPy tutorial on this blog and our new book Coffee Break NumPy.
- np.vdot – complex-conjugating dot product
- np.tensordot – sum products over arbitrary axes
- np.einsum – Einstein summation convention
REFERENCES
-
-
-
-
-
-
-
-
Do you want to become a NumPy master? Check out our interactive puzzle book Coffee Break NumPy and boost your data science skills! (Amazon link opens in new tab.)
Daily Data Science Puzzle
[python]
import numpy as np
# graphics data
a = [[1, 1],
[1, 0]]
a = np.array(a)
# stretch vectors
b = [[2, 0],
[0, 2]]
b = np.array(b)
c = a @ b
d = np.matmul(a,b)
print((c == d)[0,0])
[/python]
What is the output of this puzzle? transformation matrix that transforms the input data. In our setting, the transformation matrix simply stretches the column vectors.
More precisely, the two column vectors
(1,1) and
(1,0) are stretched by factor 2 to
(2,2) and
(2,0). The resulting matrix is therefore
[[2,2],[2,0]]. We access the first row and second column.
We use matrix multiplication to apply this transformation. Numpy allows two ways for matrix multiplication: the matmul function and the @ operator.
Comparing two equal-sized numpy arrays results in a new array with boolean values. As both matrices c and d contain the same data, the result is a matrix with only True values.
Are you a master coder?
Test your skills now!
Related Video
Solution
2
> | https://blog.finxter.com/numpy-matmul-operator/ | CC-MAIN-2021-43 | refinedweb | 2,149 | 67.96 |
The Ember team has does an excellent job giving proper names to most their components, tools and libraries. For example, the rendering engine is called Glimmer, while it uses HTMLBars as the template language. Singletons in Ember applications are called Services. The build tool is called Ember-CLI, and a external application module is called an Addon, generally stored in NPM with the prefix
ember-cli-[addon-name]. Having recognizable names makes talking about them a lot easier.
This is very intentional for the community. There are specific terms for developers to discuss and an easy way to market changes to the framework. The latest is Engines, or Ember Engines.
The Ember Engines RFC started in October 2014 and was merged in April 2016. The goal was to allow large Ember applications to be split into consumable Addons allowing development teams to build logically-grouped pieces of an application in separate repositories and mount each micro application as a route or container in a parent application. In the table below are links to resources for EmberEngines for more history and details:
Engines and the flow to setup Engines were added to Ember fairly early in the Engines-RFC process. The most recent feature to be added, and what I think to be a crucial piece, is lazy-loading. This allows the core application to load with the initial page request while mounted engine sections of the application will be requested as needed in separate
.js files.
For applications that have sections with different business concerns, engines provide a structure for scaling without the threat of exponential file size growth. From the image above, the admin section of the site will only be used by a select group of individuals maintaining the site content. Allowing these user to load a separate file will shed weight from the readers using the blog app. The benefit lies in the main app maintaining the session, styles and global components.
To achieve the separation of concerns with engines, there are two ways to create the sandbox needed for mounting engines: in-repo-engine and external repo Addon. In this post, we’ll walk through building a basic application that uses an
in-repo-engine and
lazy-loading. In a the next post in the series, you’ll learn about making an Ember Addon an engine in an external git repository. In the final post of the series, we’ll bring it all together with shared services and links.
ember new large-company-site cd large-company-site ember install ember-engines
This assumes you have ember-cli install (
npm install -g ember-cli). Also, it assumes you are running Ember@2.10
These commands will setup your umbrella application with the appropriate addon to mount engines. The next step is creating an engine to mount. We will start with the in-app engine. While in the
large-company-site directory add an engine with
ember generate:
ember g in-repo-engine in-app-blog
This has added a directory named “lib” and an app addon directory structure named for “in-app-blog”.
Using the blueprint
in-repo-engine, ember-cli has added all the appropriate files to create a new app structure with its own routes. Open
lib/in-app-blog/addon/routes.js to add new routes:
import buildRoutes from 'ember-engines/routes'; export default buildRoutes(function() { - // Define your engine's route map here + this.route('new'); + this.route('posts'); }); });
Once the routes are added in the engine’s
addon/routes.js file, it is time to create route and template files for each. For this simple example, add the route and template files for
new and
posts in the
addon/routes and
addon/templates directories.
The next step is to add some content to see the routes working between the parent app and engine. In the following code examples you’ll add simple headlines to each
.hbs file. The file name will be in italics above the code block.
lib/in-app-blog/addon/templates/application.hbs
<h1>Blog Engine App</h1> {{outlet}}
lib/in-app-blog/addon/templates/new.hbs
<h1>New Form</h1> <form> {{input type="text" value=title}} {{textarea value=post}} <form>
lib/in-app-blog/addon/templates/posts.hbs
<h1>All Posts</h1> <ul> <!-- will insert items here --> </ul>
Now, add an application template to the umbrella application:
app/templates/application.hbs
<h1>Large Company Umbrella</h1> {{outlet}}
Finally, add a path to the mount route for the engine in
app/route.js:
import Ember from 'ember'; import config from './config/environment'; const Router = Ember.Router.extend({ location: config.locationType, rootURL: config.rootURL }); Router.map(function() { - this.mount('in-app-blog'); + this.mount('in-app-blog', {path: "blog"}); }); export default Router;
At this point, the structure is in place to create new routes, templates, controllers, and components for the blog engine application. The last change you need to make is in the
lib/in-app-blog/index.js file. You will change the application to lazy-load the blog engine. Add the following to the
index.js file in the
lib/in-app-blog:
/\\\*jshint node:true\\\*/ var EngineAddon = require('ember-engines/lib/engine-addon'); module.exports = EngineAddon.extend({ name: 'in-app-blog', + lazyLoading: true, isDevelopingAddon: function() { return true; } });
In the terminal, run
ember s, and open your browser to the location
localhost:4200/blog/posts.
Using the Chrome browser and the Developer Tools, you can open the Network tab to see the network traffic. What you’ll see is multiple
.js files being loaded.
Highlighted in the developer console is
engine.js which is a separate file from
large-company-site.js. This is it, this is what we’ve been waiting for. You can now built the largest Ember site ever and separate concerns with engines and know your users will get script and style resources efficiently. The benefit to you and your team is huge—you don’t have to spend all of your time configuring the build process. That’s Ember’s gift to you.
This example will be on Github at. As the series continues, the GitHub repo will include tags/commits for each blog post.
In the next post of the series, you will create an external addon as an engine and link it to your project. The final post will add shared services and links to tie the application together. | https://www.bignerdranch.com/blog/is-your-ember-app-too-big-split-it-up-with-ember-engines/?utm_source=javascriptweekly&utm_medium=email | CC-MAIN-2017-13 | refinedweb | 1,068 | 56.55 |
dont think it is possible to access the global variable since local variable will take precedence
Lets see if some has something to talk about this.
From within main() there is no way to directly address the value of the global variable *global_var*.
If you find yourself coded into a corner and discover that a function has a local variable with the same name as a global variable and you need the value of the global variable you have two real options.
1. Rename the local variable. In a large function this can often be a nuisance and unless you use the editor's *replace global* function you run the risk of missing some.
2. Write another function that returns the value of a global flag.
int GlobalVar;
main()
{
int GlobalVar;
GlobalVar = FetchGlobalVar ();
}
int FetchGlobalVar ()
{
return (GlovalVar);
}
Kent
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial
but I feel pointers would be more elegant (and time saving) ... Shaj, why do you wish to avoid pointers ?
int global_var=123;
int main()
{
int a=global_var;
int global_var;
global_var = 5;
printf("%d\n",global_var);
printf("%d\n",a);
return 0;
}
Output is
5
123
Paul
Hi Sunnycoder,
That's actually done quite a bit in some of the codes that I've seen. If the compiler supports the *inline* directive, the function will execute at the same speed as any other memory access.
Hi Zebada,
Many of the older compilers won't let you do that. (Nice trick, though.) The compilers actually evaluate *int a=global_var;* as if it were two lines, *int a; a=global_var;*. At which point the next line *int global_var;* generates an error because the compiler has already encountered executable code.
Of course, the real solution is not to paint yourself into this corner. :)
Kent
You could have declared: int GlobalVar; int * PtrToGlobalVar=GlobalVar;
then inside main: #define GimmeOuterGlobalVar (*PtrToGlobalVar)
In other words, rather than asking how to access the global, the better solution is to get rid of it entirely. If necessary, create a separate module with access functions and the variable declared static at file scope.
Gary
Global variables are sometimes a necessary evil.
Some languages (like COBOL, of which more than 80% of all financial applications are STILL written in) don't have local variables, except for externally written functions.
Other languages deal exclusively with local variables.
Then there is C and its derivatives which liberally support both global and local variables. Some global variables are unlikely to ever go away (stdin, stdout, stderr, etc.). But it should be noted that one of the most important global variables is being deprecated. *errno* is a global and externally linkable integer than contains the last error code issued by the C runtime library. GetLastError() is now the preferred way to get the error code.
For my applications, global variables are just that. Something that beyond a doubt is required throughout the program like file handles and streams. Note that all streams aren't necessarily global, but if the result of executing a program is that it produces a particular output file then the stream associated with that file is likely global in scope even if it declared local to main() and passed to all of the subfunctions.
Kent
pointer.
In C++ it is possible using the global scope operator but C doesn't have such an operator.
As a general rule you should avoid global variables, use them only when you have to.
In C++ they can largely be avoided since you can declare static class variables instead which serves much the same purpose but which do not lay themselves open for modification by anyone.
In C you cannot do that - although I have seen that C99 appear to have some support fo nnamespaces I am not sure exactly how much support they have for them and in particular I don't know if they have introduced the scope operator to C as it is used in C++. Without the scope operator you can't access a global variable which is hidden
by a local variable unless you have a pointer that is already pointing to it.
Hope this explains.
Alf
In ANSI C? I thought GetLastError was a Microsoft invention, non-standard, and not part of C.
errno is actually a macro on many platforms, because the original implementations weren't thread safe.
Gary
Yeah, I jumped the gun a bit here. GetLastError() is a MS invention (abortion?) that seems to be gaining more and more favor. It's not ANSI and not directly supported by Borland (not that they're able to influence the standards much anymore).
But because MS does it, everyone will have to before long.
sigh...
'errno' is more often than not a macro which expands to a function call which returns the address of a per thread errno variable.
Something like this:
#define errno (*errno_function())
The 'errno_function' might even have arguments such as current thread id although most systems have system calls to get that so the function can do it on its own.
It should be noted that in ANSI C the errno variable is still the preferred ANSI way to get error code. MS want you to use GetLastError() partly due to the thread problem (which can be circumvented as I described above) but mostly because adhering to standards is against their company policy. For example GetLastError will return more detailed error codes as well as it can support error codes from other components. For example if DirectX or COM detect an error they can set GetLastError to values without conflict with the system since the value returned from GetLastError is a 32 bit value which is separated into several different bit fields, one of those fields identify the 'system' that generated or detected the error, Windows is only one such system (with identification 0) but you can easily find several others if you look around in header files ;)
Use of GetLastError does not make portable code though. If you write MFC or other MS specitic code then GetLastError is indeed the preferred way since you can't port such code anyway (and if you can it's because you use a package which emulates windows on the other OS, that system would probably also emulate GetLastError()).
For portable programs you would use errno exactly. Even in threaded programs where it most likely isn't a simple variable even if it appears to be ;) (preprocessor is a 'wonderful' thing).
Alf
I know that using global variable is harmful. But this is a common interview question so i thought of finding some other good methods.
THANKSSSSSSSSSSSSSSS
hi zebada,
>>This lets you "read" the global var but not change it:
Well this trick will alllow me only to read the initial value. Once the value of the global variable is changed u cannot use it.
So i thnk it won't be helpful.
hi GaryFx,
>>paper by Drs. William Wulf
Can u give me more details. I will give u points.
In c++.
int global; <-----------first
int main()
{
int global;
{
int global;
::global; <---------------Will this still access the access the first global variable.
}
return 0;
}
As stanz pointed out, the "::" causes the compiler to NOT search up the variable stack for the closest reference, but to go directly to the global block for the variable.
Kent
infix :: uses the scope specified by the left hand side and looks for the name on the right hand side in that scope.
so ::bar is a global bar.
foo::bar is a bar defined in the foo scope.
bar is a global bar or local bar depending on which is nearest when you look at it from innermost scope to outermost scope (global scope). The first bar it finds is the bar that is chosen.
Alf
If I were asking it as an interview question, the answer I'd hope to get is "Don't."
Gary
If anyone around me program like that I will personally scold him ;) If I have any say in the matter he will have to clean up ihis act fast or get the boot ;)
rule 1. Use global variables as little as possible and only when all other options are out.
rule 2. When you use global variables, declare them all at a proper place in the source code so they are easily found and easily seen. If some global variables work together you might consider grouping them together in the declaration. However, a better idea would be to put them in a class in C++ or in a module in C and let that module be the only module to access them, so they are declared in a single module together which has little else other than these global variables and the functions to access them.
rule 3. Name all global variables with a proper prefix to mark them as global (this is for C, in C++ you would place global variables as static variables in a class and give them S_ or s_ prefix). In C you should give them G_ or g_ prefix so that they are clearly marked as global variables. Any local variable should NEVER have M_ m_ G_ g_ S_ or s_ prefix so
there are never any danger of name clash between global and local variables.
Alf
Thanks for those words of advice. | https://www.experts-exchange.com/questions/20722542/Accessing-global-variable.html | CC-MAIN-2019-35 | refinedweb | 1,589 | 59.94 |
If.
You need Node.js, since the libraries required for Vue are downloaded using node package manager (npm). Refer to to install Node.js.
<div id=”app”></div>.All the components are loaded within this div with id app.
new Vue({ render: h => h(App)}).$mount(‘#app’).This snippet is telling that App Component needs to be loaded into an element with id app (which is the div element).
npm run build. When this command is run, the code is minified and bundled and stored in the dist folder. The code from this folder is generally deployed to production.
Create a file called as First.vue inside src/components. This file will have HTML, JavaScript, and CSS. Let’s add them one by one. All the code under this section belongs to First.vue file. First.vue will be our Vue Component
<style scoped> .demo { background-color: cyan; } </style>
This is basic CSS. The parameter scoped in
<style scoped> means that the CSS applies to this component only.
<script> export default { name: 'First', props: { msg: String } } </script>
name parameter indicates the name of the component which is First.
props parameter indicates the input to this component. Here we will have one input called as msg which is of type String.
<template> <div class="demo"> <h1>{{ msg }}</h1> </div> </template>
{{msg}} is the way in which the input parameter to the Vue Component can be accessed in the HTML code.>
import First from ‘./components/First.vue’
components: {First}
<First msg=”First Component”/>
Now run the application using
npm run serve and you will see the below output:
Click Here to get the code shown here from the GitHub repository. The GitHub repo has the instructions on cloning and running the code.
Click Here to see how the application looks. It has been deployed using Github pages.
Now you have successfully built your first Vue.js App. You’ve also learned how to create a very simple component. In my next post on Vue.js, I will cover more concepts. Stay tuned!
Vue.js:
Vue CLI:
I love technology and follow the advancements in technology. I also like helping others with any knowledge I have in the technology space.
Feel free to connect with me on my LinkedIn account
You can also follow me on twitter
My Website:
☞ Turn ASP.NET Core Application Into Single-Page Application
☞ Vue and Laravel To-Do App - Restful API Project
☞ Vue.js Fast Crash Course
☞ Learn ECMAScript 2015 - ES6 Best Course
☞ Web Development Tutorial - JavaScript, HTML, CSS
☞ Javascript Project Tutorial: Budget App
☞ JavaScript Programming Tutorial Full Course for Beginners
☞ E-Commerce JavaScript Tutorial - Shopping Cart from Scratch
☞ Web Development Trends 2020
☞ Learn JavaScript - Become a Zero to Hero | https://school.geekwall.in/p/rJV6Yt7RQ | CC-MAIN-2021-10 | refinedweb | 452 | 67.65 |
Hey all,
I'm trying to store the fuse byte for ATtiny20 in my .elf file to be used for production programming. Currently, I'm using Atmel Studio 7 for the production GUI but am open to other options.
I'm running into a couple issues:
First, the avr/io.h file for the ATtiny20 (iotn20.h) defines the fuse memory size as 0:
#define FUSE_MEMORY_SIZE 0
and therefore the standard method of defining FUSES after including <avr/fuse.h> do not work.
So, is this a bug in avr-libc?
Second, trying to declare and define my own __fuse value, I'm not getting the production programmer tool in Atmel Studio 7 to recognize the fuse section in the elf. If I load up the elf file, it un-disables the "Flash" checkbox but not the "Fuses" checkbox. This tells me Atmel Studio is not properly detecting the .fuse section. However, according to avr-objdump, it seems to be there.
#include <avr/io.h> #include <avr/fuse.h> unsigned char __fuse FUSEMEM = 0xDF;
> avr-objdump -s -j .fuse out/production.elf out/production.elf: file format elf32-avr Contents of section .fuse: 800042 df .
Of note, I'm using my own build scripts on Windows 10.
Any idea how Atmel Studio parses .elf files for Fuses section?
Hi cinderblock,
You can look into,......
-Partha
Top
- Log in or register to post comments | https://www.avrfreaks.net/forum/atmel-studio-7-reading-fuses-elf-attiny20 | CC-MAIN-2020-34 | refinedweb | 233 | 69.79 |
Bean is a small, fast, cross-platform, framework-agnostic event manager designed for desktop, mobile, and touch-based browsers. In its simplest form - it works like this:
bean.on(element, 'click', function (e) { console.log('hello'); });
Bean is included in Ender's starter pack,
The Jeesh. More details on the Ender interface below.
Bean has five main methods, each packing quite a punch.
on()
one()
off()
clone()
fire()
bean.on() lets you attach event listeners to both elements and objects.
Arguments
Optionally, event types and handlers can be passed in an object of the form
{ 'eventType': handler } as the second argument.
Examples
// simple bean.on(element, 'click', handler); // optional arguments passed to handler bean.on(element, 'click', function(e, o1, o2) { console.log(o1, o2); }, 'fat', 'ded'); // multiple events bean.on(element, 'keydown keyup', handler); // multiple handlers bean.on(element, { click: function (e) {}, mouseover: function (e) {}, 'focus blur': function (e) {} }); bean.on(element, 'click', '.content p', handler); // Alternatively, you can pass an array of elements. // This cuts down on selector engine work, and is a more performant means of // delegation if you know your DOM won't be changing: bean.on(element, 'click', [el, el2, el3], handler); bean.on(element, 'click', $('.myClass'), handler);.on(element, 'click.fat.foo', fn); // 1 bean.on(element, 'click.ded', fn); // 2 bean.on(element, 'click', fn); // 3 // later: bean.fire(element, 'click.ded'); // trigger 2 bean.fire(element, 'click.fat'); // trigger 1 bean.off(element, 'click'); // remove 1, 2 & 3 // fire() & off() match multiple namespaces with AND, not OR: bean.fire(element, 'click.fat.foo'); // trigger 1 bean.off(element, 'click.fat.ded'); // remove nothing
Notes
fire()and
remove()calls using OR rather than AND.
bean.one() is an alias for
bean.on() except that the handler will only be executed once and then removed for the event type(s).
Notes
one()used the same argument ordering as
add()(see note above), it now uses the new
on()ordering.
bean.off() is how you get rid of handlers once you no longer want them active. It's also a good idea to call off on elements before you remove them from your DOM; this gives Bean a chance to clean up some things and prevents memory leaks.
Arguments
Optionally, event types and handlers can be passed in an object of the form
{ 'eventType': handler } as the second argument, just like
on().
Examples
// remove a single event handlers bean.off(element, 'click', handler); // remove all click handlers bean.off(element, 'click'); // remove handler for all events bean.off(element, handler); // remove multiple events bean.off(element, 'mousedown mouseup'); // remove all events bean.off(element); // remove handlers for events using object literal bean.off(element, { click: clickHandler, keyup: keyupHandler })
Notes
remove()was the primary removal interface. This is retained as an alias for backward compatibility but may eventually be removed.
bean.clone() is a method for cloning events from one DOM element or object to another.
Examples
// clone all events at once by doing this: bean.clone(toElement, fromElement); // clone events of a specific type bean.clone(toElement, fromElement, 'click');
bean.fire() gives you the ability to trigger events.
Examples
// fire a single event on an element bean.fire(element, 'click'); // fire multiple types bean.fire(element, 'mousedown mouseup');
Notes
fire()which will in turn be passed to the event handlers. Handlers will be triggered manually, outside of the DOM, even if you're trying to fire standard DOM events..setSelectorEngine(qwery);
Notes
querySelectorAll()is used as the default selector engine, this is available on most modern platforms such as mobile WebKit. To support event delegation on older browsers you will need to install a selector engine.
Eventobject
Bean implements a variant of the standard DOM
Event object, supplied as the argument to your DOM event handler functions. Bean wraps and fixes the native
Event object where required, providing a consistent interface across browsers.
// prevent default behavior and propagation (even works on old IE) bean.on(el, 'click', function (event) { event.preventDefault(); event.stopPropagation(); }); // a simple shortcut version of the above code bean.on(el, 'click', function (event) { event.stop(); }); // prevent all subsequent handlers from being triggered for this particular event bean.on(el, 'click', function (event) { event.stopImmediatePropagation(); });
Notes
Eventmethods (
preventDefaultetc.) may vary with delegated events as the events are not intercepted at the element in question..on(element, 'partytime', handler); bean.fire(element, 'partytime');
Bean provides you with two custom DOM events, 'mouseenter' and 'mouseleave'. They are essentially just helpers for making your mouseover / mouseout lives a bit easier.
Examples
bean.on(element, 'mouseenter', enterHandler); bean.on(element, 'mouseleave', leaveHandler);
Everything you can do in Bean with an element, you can also do with an object. This is particularly useful for working with classes or plugins.
var inst = new Klass(); bean.on(inst, 'complete', handler); //later on... bean.fire(inst, 'complete');
If you use Bean with Ender its API is greatly extended through its bridge file. This extension aims to give Bean the look and feel of jQuery.
Add events
$(element).on('click', fn);
$(element).addListener('click', fn);
$(element).bind('click', fn);
$(element).listen('click', fn);
Remove events
$(element).off('click');
$(element).unbind('click');
$(element).unlisten('click');
$(element).removeListener('click');
Delegate events
$(element).on('click', '.foo', fn);
$(element).delegate('.foo', 'click', fn);
$(element).undelegate('.foo', 'click');
Clone events
$(element).cloneEvents('.foo', fn);
Custom events
$(element).trigger('click')
Special events
$(element).hover(enterfn, leavefn);
$(element).blur(fn);
$(element).change(fn);
$(element).click(fn);
$(element).dblclick(fn);
$(element).focusin(fn);
$(element).focusout(fn);
$(element).keydown(fn);
$(element).keypress(fn);
$(element).keyup(fn);
$(element).mousedown(fn);
$(element).mouseenter(fn);
$(element).mouseleave(fn);
$(element).mouseout(fn);
$(element).mouseover(fn);
$(element).mouseup(fn);
$(element).mousemove(fn);
$(element).resize(fn);
$(element).scroll(fn);
$(element).select(fn);
$(element).submit(fn);
$(element).unload(fn);
Bean passes our tests in all the following browsers. If you've found bugs in these browsers or others please let us know by submitting an issue on GitHub!!
Special thanks to:
Bean is copyright © 2011-2012 Jacob Thornton and licenced under the MIT licence. All rights not explicitly granted in the MIT license are reserved. See the included LICENSE file for more details. | https://recordnotfound.com/bean-fat-62124 | CC-MAIN-2019-26 | refinedweb | 1,026 | 53.27 |
Simple password validation
I am writing a sort of web-based admin tool for a client, and I had this problem: How do you validate a system user from a script?
Well, this is how:
def validPass(name,password): p=os.popen('/usr/bin/checkpassword-pam -s login -- /bin/true 3<&0','w') s='%s\000%s\000xxx\000'%(name,password) p.write(s) r=p.close() if r==None: #Success return True else: return False
Just get checkpassword-pam from somewhere.
Or, if you use some other sort of authentication scheme, some other checkpassword. They are meant for qmail, but they are very handy :-)
what is 'os' object ?
it's python's os module. Provides stuff like system/popen/exec, among many others.
It's in the standard library.
Hmmm... you don't have something like C's system() in Java? I can see how that would be non-portable :-)
You could hack it with Java-PAM, though:
Looks like a hack, but it's pretty simple. I'd like to have something like that in java. :) | https://ralsina.me/weblog/posts/P293.html | CC-MAIN-2021-17 | refinedweb | 178 | 76.42 |
Introduction To Developing Fireworks Extensions (They’re Just JavaScript!).
Further Reading on SmashingMag:
- Developing A Design Workflow In Adobe Fireworks
- Switching From Adobe Fireworks To Sketch: Ten Tips And Tricks
- The Present And Future Of Adobe Fireworks
- The Power of Adobe Fireworks: What Can You Achieve With It?
I learned to develop Fireworks extensions by writing the Specctr plugin. While working on Specctr, I’ve witnessed Fireworks’ passionate community actively support the app — an app that has been widely overlooked by Adobe. (Sadly, Fireworks CS6 is the last major release, Aaron Beall and Matt Stow, among others, who have written many indispensable extensions, such as SVG Import and SVG Export (which add full-featured SVG support to Fireworks), Generate Web Assets, CSS Professionalzr (which extends the features of the CSS Properties.0), and through extensions, new features and panels can be added. This article is aimed at those interested in developing extensions for Fireworks. We’ll introduce the JavaScript underpinnings of Fireworks and, in the process, write a few JavaScript examples to get you started.
-.
Do You Speak JavaScript? Fireworks Does!Fireworks speaks JavaScript. It exposes a JavaScript application programming interface (API) via Fireworks’ document object model (DOM), which represents its constituent parts and functions. That’s a long way of saying that you can write JavaScript to tell Fireworks what to do. Fireworks lets you run the JavaScript in two basic ways: commands and command panels.
CommandsThe first option is to execute JavaScript as commands. Commands are simple text files that contain JavaScript and that are saved with a
.jsfextension. To make them available from the “Commands” menu in Fireworks, you must save them in the
<Fireworks>/Configuration/Commands/directory (where
<Fireworks>is a stand-in for the installation directory of Adobe Fireworks on your computer — see “A Note on Locations” below).
Command PanelsThe second option is to build a command panel. Command panels are Flash panels powered by ActionScript,Below are the exact locations of the
Commandsand
Command Panelsfolders on both Mac and Windows.
Mac OS X
/Applications/Adobe Fireworks CS6/Configuration/Commands/
/Users/<USERNAME>/Library/Application Support/Adobe/Fireworks CS6/Commands/
/Applications/Adobe Fireworks CS6/Configuration/Command Panels/
/Users/<USERNAME>/Library/Application Support/Adobe/Fireworks CS6/Command Panels/
WindowsWindowsWhen should you write a command, and when should you write a command panel? Generally, a command is useful for automating some action that requires no or very little user input, such as pasting elements into an existing group or quickly collapsing all layers. A command is also easier to build and test.
History PanelThe<<
We can save this code to Fireworks’ “Commands” menu by clicking on the “Save steps as a command” button in the bottom-right corner of the History panel. Once this simple command has been saved to theWe can save this code to Fireworks’ “Commands” menu by clicking on the “Save steps as a command” button in the bottom-right corner of the History panel. Once this simple command has been saved to the
fw.getDocumentDOM().moveSelectionBy({x:2, y:46}, false, false);
Commandsfolder,and
yvalues in the
.jsffile that Fireworks saved at the location described earlier in this article. This was a very simple example, but it shows that developing a Fireworks extension is not that hard — at least not in the beginning!
Fireworks ConsoleLet<<
Console DebuggingWhile building the Specctr panel, I used the JavaScript
alertfunction to check the output of my code at various places in its execution.
myCode.jsf
… // Check the value of myVariable: alert("my variable:", myVariable); …
consoleinto Fireworks’ global namespace. This means that we can use the
consoleobject’s
logfunction to log messages out to the Console panel’s output pane, as we’ll see now.
myCode.jsf
This doesn’t interrupt the code from executing. Because Fireworks does not provide any way for you to set breakpoints in the code, logging to the console is the method that I would recommend when debugging extensions.This doesn’t interrupt the code from executing. Because Fireworks does not provide any way for you to set breakpoints in the code, logging to the console is the method that I would recommend when debugging extensions.
… console.log("myProgramVariable", myVariable); …
Fireworks DOMJust as the
consoleobject is a JavaScript representation of Fireworks’ Console panel, the different concepts and functionality that make up Fireworks have JavaScript representations. This organization of JavaScript objects that models Fireworks’ behavior is called the Fireworks DOM.
fw ObjectWe can see the DOM being accessed by our “Move” JavaScript code from earlier:
TheThe
fw.getDocumentDOM().moveSelectionBy({x:2, y:46}, false, false);
fwobject is a JavaScript object that models or represents Fireworks itself. It contains properties that describe Fireworks’ current state. For example
fw.selectionis an array that represents all of the currently selected elements on the canvas. We can see this by selecting the text element that we’ve been working with and, in the Console panel, typing
fw.selection, then clicking the “Eval” button. Here is the Console panel’s output:
In the output window, you should see a JSON representation of theIn the output window, you should see a JSON representation of the
[{ … alignment: "justify", face: "GillSans", fontsize: "34pt", … }]
fw.selectionarray containing objects that symbolize each of the selected design elements on the canvas. (JSON is just a human-readable representation of JavaScript objects — in our case, the text element that we selected.)
Viewing the DOMWhen the formatting of the Console’s output gets too long, it leaves something to be desired. So, to see the properties and values of objects (object methods are not shown) in the Fireworks DOM, I use Aaron Beall’s DOM Inspector panel, another indispensable companion in my journey of developing extensions.
fw.selection. You should see an expanded
[object Text]in the Inspector, along with all of its properties and values. From the drop-down menu, I can select between viewing the contents of four objects:
fw.selectionAn array of currently selected elements on the canvas
fwThe Fireworks object
domThe DOM of the currently active document (which we will discuss next)
dom.pngTextA property of the currently active document (available for us to write to so that we can save data to the current document and retrieve it even after restarting Fireworks)
Document DOMIn the DOM Inspector panel, we can switch to the
documentDOMand explore its state. We can also access the
documentDOMvia JavaScript with the
getDocumentDOM()method, as we did with the “Move” command:
TheThe
fw.getDocumentDOM().moveSelectionBy({x:10, y:10}, false, false);Acting on the current selection is a common pattern when developing Fireworks extensions. It mirrors the way that the user selects elements on the canvas with the mouse, before performing some action on that selection.
The document DOM’s moveSelectionBy() function takes a JavaScript object as a parameter:The document DOM’s moveSelectionBy() function takes a JavaScript object as a parameter:
fw.getDocumentDOM().moveSelectionBy({x:10, y:10}, false, false);
Given an origin in the top-left corner, this tells Fireworks to move the selected object byGiven an origin in the top-left corner, this tells Fireworks to move the selected object by
{x:10, y:10}
xpixels to the right and by
ypixels down. The other two boolean parameters (
false,
false) indicate to
move(as opposed to
copy) the selection and to move the
entire element(as opposed to a
sub-selection, if any exists). Like the
moveSelectionBy()method, many other Document DOM methods act on the current selection (
cloneSelection()and
flattenSelection(), to name two).
Expand Your Horizons (And The Canvas)Using what we have learned so far, let’s write a simple command that will expand the size of our canvas.
Canvas SizeTo increase the size of the canvas, we need to know the current size. Our panel can call the JavaScript below to access the canvas’ current dimensions:
Now, let’s see how to change these dimensions.Now, let’s see how to change these dimensions.
var = canvasWidth = fw.getDocumentDOM().width; var = canvasHeight = fw.getDocumentDOM().height;
Setting the Canvas’ SizeTo set the canvas’ size, we call the
setDocumentCanvasSize()method of the Document DOM.
The method takes a “bounding rectangle” as a parameter:The method takes a “bounding rectangle” as a parameter:
fw.getDocumentDOM().setDocumentCanvasSize({left:0, top:0, right:200, bottom:200});
The size of the rectangle will determine the new size of the canvas:The size of the rectangle will determine the new size of the canvas:
{left:0, top:0, right:200, bottom:200}
Here, the rectangle is bounded by the object; therefore, the canvas is 200 × 200 pixels.Here, the rectangle is bounded by the object; therefore, the canvas is 200 × 200 pixels.
right - left = 200 bottom - top = 200
Increasing the Canvas’ Size: A Simple CommandLet’s create a simple command that will double the canvas’ size automatically. Instead of going through the
Modify → Canvas → Canvas Sizemenu and then figuring out a width and height to input and then pressing “OK” whenever we want to increase the canvas’ size, we can combine the two code samples from above to create a quick shortcut to double the canvas’ size. The code might look something like this:
I’m working on a Mac, so to make this command available from the “Commands” menu in Fireworks, I could save the code above asI’m working on a Mac, so to make this command available from the “Commands” menu in Fireworks, I could save the code above as
// Double Canvas Size command, v.0.1 :) var newWidth = fw.getDocumentDOM().width * 2; var newHeight = fw.getDocumentDOM().height * 2; fw.getDocumentDOM().setDocumentCanvasSize({left:0, top:0, right: newWidth, bottom: newHeight});
double_size.jsfin the following location:
(Check the beginning of the article to see where to save your(Check the beginning of the article to see where to save your
/Users/<MYUSERNAME>/Library/Application Support/Adobe/Fireworks CS6/Commands/double_size.jsf
.jsfcommands if you are on a different OS.) I leave it as an exercise for you to write and save a simple command that cuts the canvas’ size in half.
ConclusionWe.
Further Reading
- “Extending Fireworks,” Adobe This is the official guide to developing extensions for Fireworks CS5 and CS6 (including the “Fireworks Object Model” documentation).
- FireworksGuru Forum Want to ask John, Aaron or Matt a question? You’ll probably find them here.
- “Adobe Fireworks JavaScript Engine Errata,” John Dunning Dunning breaks down the quirks of the JavaScript interpreter that ships with Fireworks. Something not working as it should? Check it here. The list is pretty extensive!
- Fireworks Console, John Dunning This is a must-have if you write Fireworks extensions!
- DOM Inspector (panel), Aaron Beall
- “Creating Fireworks Panels, Part 1: Introduction to Custom Panels,” Trevor McCauley This was one of the first tutorials that I read to learn how to develop extensions for Fireworks. McCauley has written many cool extensions for Fireworks, and this article is an excellent read!
| https://www.smashingmagazine.com/2014/06/introduction-to-developing-fireworks-extensions-theyre-just-javascript/ | CC-MAIN-2020-40 | refinedweb | 1,811 | 52.39 |
pymakr repl dump output
hi, is it possibile to get a file dump of the repl consolle?
@robert-hh that should do the work. thanks!
@ozeta Then try to use a simple terminal emulator like picocom (linux) or putty(Windows). they allow logging the output.
@robert-hh nope, i'm trying to "redirect" the output of the repl consolle to a file. because for debugging purposes it's difficult to copy-paste directly from the pymakr consolle
@ozeta If you ask for just displaying the content of a file on the console:
There is a set of small unix like tools in, the script upysh.py
Once you have started in on the console with
from upysh import *
you have commands like ls, cat("filename"), head("filename"), etc..
just try the command
manfor a list of commands. .
I import these in my main.py, such that they are always there. If you build you image yourself, you can put it in frozen bytecode. | https://forum.pycom.io/topic/1757/pymakr-repl-dump-output | CC-MAIN-2018-17 | refinedweb | 163 | 74.29 |
NAME
Alt - Alternate Module Implementations
SYNOPSIS
cpanm Alt::IO::All::crackfueled.
GUIDELINES
This idea is new, and the details should be sorted out through proper discussions. Pull requests welcome.
Here are the basic guidelines for using the Alt namespace:
- Name Creation
Names for alternate modules should be minted like this:
"Alt-$Original_Dist_Name-$phrase"
For instance, if MSTROUT wants to make an alternate IO-All distribution to make it even more crack fueled, he might call it:
Alt-IO-All-crackfueled.
- Module for CPAN Indexing
You will need to provide a module like
Alt::IO::All::MSTROUTso that CPAN will index something that can cause your distribution to get installed by people:
cpanm Alt::IO::All::MSTROUT
Since you are adding this module, you should add some doc to it explaining your Alternate version's improvements.
- Versioning
The VERSION of the module you are providing an alternate version of should be the same as the original module at the time you release the alternate. This will make it play well with others.
To use the IO::All example, if MSTROUT releases Alt-IO-All-MSTROUT when IO::All is at version '0.46', his IO::All module should have VERSION = '0.46', but his Alt::IO::All::MSTROUT could be VERSION '0.000005'. This should make the dist be Alt-IO-All-MSTROUT-0.000005.
If another module wants his version of IO::All, it should list Alt::IO::All::MSTROUT 0.000005 as a prereq, and then
use IO::All 0.46;in the code.
- no_index
It is important to use the
no_indexdirective on the modules you are providing an alternative to. This is especially important if you are the author of the original, as PAUSE will reindex CPAN to your Alt- version which defeats the purpose. Even if you are not the same author, it will make your index reports not show failures.
- Other Concerns
If you have em, I(ngy) would like to know them. Discuss on #toolchain on irc.perl.org for now.
AUTHOR
Ingy döt Net <ingy@cpan.org>
See | https://metacpan.org/pod/release/INGY/Alt-0.04/lib/Alt.pm | CC-MAIN-2017-30 | refinedweb | 344 | 65.52 |
Vol. 14, Issue 6, 2543-2558, June 2003
Department of Biological Chemistry and Molecular Pharmacology, Harvard Medical School, Boston, Massachusetts 02115
Submitted October 30, 2002;
Revised January 21, 2003;
Accepted February 5, 2003
Monitoring Editor: Howard Riezman
A major function of Ste5 is to facilitate the activation of Ste11 by Ste20,
a p21-activated kinase that is anchored to the plasma membrane through a
Rho-type G protein Cdc42 (reviewed in
Moskow et al., 2000
;
Lamson et al., 2002
).
Work from a variety of laboratories has led to the hypothesis that Ste5
activates Ste11 by binding to a heterotrimeric G protein at the plasma
membrane and recruiting Ste11 to a pool of active Ste20 that is bound to
Cdc42. On pheromone stimulation, the receptor activates a G protein by
dissociating the G
(Gpa1) subunit from the G
dimer
(Ste4/Ste18) (reviewed in Gustin et
al., 1998
). The G
subunit (Ste4) is then thought to
bind to Ste5 (Whiteway et al.,
1995
; Inouye et al.,
1997a
, Feng et al.,
1998
) and to Ste20 (Leeuw
et al., 1998
). This recruitment event is thought to allow
Ste20 to directly phosphorylate Ste11 in the Ste5 scaffold complex
(Feng et al., 1998
;
Pryciak and Huntress, 1998
;
van Drogen et al.,
2000
).
The association between Ste5 and Ste4 is pheromone-dependent and tightly
regulated. During vegetative growth, Ste5 shuttles continuously between the
cytoplasm and nucleus, with a nuclear pool accumulating in G1 phase cells
(Mahanty et al.,
1999
). In the presence of pheromone, Ste5 undergoes enhanced
export from the nucleus to the cytoplasm, and the nuclear pool is recruited to
plasma membrane. Ste5 rapidly accumulates at restricted cortical sites in G1
phase cells and at the projection tip at later times. A variety of experiments
suggest that under physiological conditions, Ste5 must shuttle through the
nucleus to be recruited to the plasma membrane and activate Fus3
(Mahanty et al.,
1999
). Fus3 is predominantly nuclear, whereas Ste7 and Ste11 are
cytoplasmic (Mahanty et al.,
1999
; van Drogen et
al., 2001
), raising the possibility that nuclear shuttling
helps Ste5 to assemble a signaling complex. Interestingly, the kinase
suppressor of Ras scaffold also links MAPK cascade kinases (Raf, extracellular
signal-regulated kinase kinase, extracellular signal-regulated kinase) to a
membrane-linked anchor (Ras) (Nguyen
et al., 2002
; Roy
et al., 2002
) and shuttles through the nucleus
(Brennan et al.,
2002
), suggesting that multiple aspects of Ste5 localization are
conserved.
Ste5 forms dimers or higher order oligomers based on interallelic
complementation, two-hybrid analysis, and coprecipitation
(Yablonski et al.,
1996
; Feng et al.,
1998
). This property is shared by the JIP family of mammalian
scaffolds (Yasuda et al.,
1999
) and may therefore have a conserved role in scaffold
function. Genetic evidence argues that oligomerization is essential for Ste5
function and occurs through two domains, the RING-H2 domain and a central
leucine-rich domain that spans a potential leucine zipper
(Figure 1A;
Yablonski et al.,
1996
). A demonstration of a direct role for either the RING-H2
domain or the leucine-rich domain in oligomerization is lacking. In addition,
it is not known how oligomerization affects the ability of Ste5 to activate
the MAPK cascade.
The relationship between oligomerization and recruitment is also unclear.
It has been proposed that the association of Ste5 with Ste4 through the
RING-H2 domain is a prerequisite for Ste5-Ste5 association and signaling
(Inouye et al.,
1997a
). However, biochemical and two-hybrid analysis suggests that
Ste5 oligomerization could occur constitutively
(Feng et al., 1998
).
More recent analysis suggests that Ste5 undergoes an activating conformational
change of interactions between the N- and C-terminal halves of the protein
that regulates the associated kinases by allostery or alignment
(Sette et al., 2000
).
Conformational changes could also regulate oligomerization and association
with signaling components (Elion,
2001
).
In this report, we analyzed how the oligomerization status of Ste5 affects nuclear shuttling, recruitment, association with signaling components, and ability to activate the MAPK cascade. Our results suggest that oligomerization of Ste5 positively regulates all of these events and that the activation of Ste5 involves a conformational switch from an inactive monomer to an active dimer.
Pheromone Assays
Halo assays were carried out as described previously (Elion et al., 1990
)
by using 5 µl of 100 µM
factor for bar1 strains and 5
µl of 2 mM
factor for BAR1 strains.
factor was
synthesized by Dr. C. Dahl (Biopolymer Facility, Harvard Medical School).
FUS1-lacZ expression was assayed as described previously
(Farley et al., 1999
)
after inducing cells for 90 min in 50 nM
factor. Patch mating assays
were done as described previously (Elion
et al., 1990
).
Indirect Immunofluorescence
Ste5 localization was monitored by indirect immunofluorescence by using derivatives tagged with either the Myc epitope or GST. A Ste5-Myc9 construct containing nine tandem copies of the Myc epitope at the C terminus of Ste5 was primarily used because previous work has established that it is nearly 100% functional when expressed at native levels from its own promoter (Feng et al., 1998
;
Mahanty et al.,
1999
). Ste5-Myc9 is more functional than N- and C-terminally
tagged green fluorescent protein (GFP) derivatives of Ste5 and N-terminally
tagged hemagglutinin (HA) and Myc derivatives of Ste5 and has been found to be
a sensitive tool for the detection of plasma membrane recruitment after short
exposure to
factor (Mahanty et
al., 1999
). Myc3-Ste5 and Myc3-Ste5-GST derivatives were also
immunolocalized; however, both proteins were significantly more difficult to
detect than Ste5-Myc9 because of fewer copies of the Myc epitope. Cells were
grown to mid-logarithmic phase (A600 of
0.6) in selective SC
medium with or without exposure to 50 nM
factor for 15 min and then
fixed in 5% formaldehyde for 1 h at room temperature. Indirect
immunofluorescence was performed essentially as described previously
(Mahanty et al.,
1999
), except that the primary antibody was used at higher
dilution (1:5001000). This greatly improved the ability to detect Ste5
at the plasma membrane. Cells were incubated in primary antibody (mouse
anti-Myc monoclonal 9E10 ascites or anti-GST polyclonal antibody) for 1.5 h at
room temperature and then incubated in secondary antibody at a dilution of
1:300 (Cy3- or fluorescein isothiocyanate-conjugated antibody) for 1 h at room
temperature.
Quantitation of Localization of Ste5
Nuclear accumulation was defined as the ability to detect a stronger signal in the nucleus compared with the cytoplasm. Nuclear exclusion was defined as a reduced amount of staining in the nucleus compared with the cytoplasm. Rim staining was defined as an enriched signal at the cortex of the cell. The percentage of total cells in the population that exhibited a particular localization pattern was determined by tallying 400700 cells from two or more transformants in at least two experiments. Standard deviations were found to be an average of 2.4% for values in the range of
2097%
and 0.62% for values in the range of 117%. Minor variations in the
numbers seem to reflect variations in strain backgrounds and growth
conditions. For example, greater nuclear accumulation and rim staining of
Ste5-Myc9 is detected in BAR1 cells compared with
bar1
cells, in STE5 cells compared with
ste5
cells, and in S288C strains compared with W303 strains.
Greater nuclear accumulation was also detected in cells that only express the
STE5-MYC9 plasmid without a second plasmid and when cells are grown
in galactose medium. Note that the msn5
mutation decreased the
intensity of rim staining relative to that of wild type, although it was more
readily detected due to a reduced cytoplasmic pool. The rsl1-4
mutation did not completely block nuclear import of either Ste5-Myc9 or
Ste5-GST, because individual 2µ transformants did not always exhibit
nuclear exclusion of the Ste5 fusions. The block in recruitment of Ste5-GST
was most evident in cells that also displayed an obvious block in nuclear
import of Ste5-GST, as indicated by partial nuclear exclusion of Ste5-GST.
Ste5-GST expression was heterogeneous in nsp1ts cells, so only the
most brightly staining cells were tallied. Cells were visualized with an
Axioskop 2 microscope (Carl Zeiss, Thornwood, NY) linked to a digital camera
(C4742-95; Hamamatsu, Bridgewater, NJ).
Coimmunoprecipitation
Whole cell extracts were made as described previously (Elion et al., 1993
)
by using modified H buffer containing 200 mM NaCl. Coimmunoprecipitations and
immunoblots were carried out as described previously
(Elion et al., 1990
;
Feng et al., 1998
) by
using 250 µg to 2 mg of whole cell protein extract, 35 µg of
12CA5 (
-HA) or 12 µg of 9E10 (
-Myc) monoclonal
antibody, and 30 µl of protein A agarose beads (Sigma-Aldrich, St. Louis,
MO). Immunoblots were probed with 12CA5 and 9E10 or with
-Myc and
-HA polyclonal antibodies (Santa Cruz Biotechnology, Santa Cruz, CA).
Immunoreactivity was detected with horseradish peroxidase-conjugated secondary
antibody (enhanced chemiluminescence; Amersham Biosciences, Piscataway, NJ).
Quantitation of enhanced chemiluminescence-detected bands was done using the
Scion Image 1.62c densitometry program of the public domain software NIH image
(available
at//rsb.info.nih.gov/nih-image/).
All immunoprecipitations were done a minimum of three times using two
transformants and were reproducible.
A drawback of the prior biochemical analysis of oligomerization was the use
of GST-tagged derivatives of Ste5, because GST forms stable dimers and can
influence the oligomerization properties of a protein
(McTigue et al.,
1995
; Maru et al.,
1996
; Tudyka and Skerra,
1997
; Inouye et al.,
2000
). We therefore reexamined Ste5 oligomerization by using
derivatives that had been tagged with epitopes. We first compared the ability
of wild-type Ste5 and a Ste5
RING-H2 mutant (i.e., Ste5
177-229)
to oligomerize with wild-type Ste5 and were unable to readily detect Ste5
x Ste5
RING hetero-oligomers in a coimmunoprecipitation assay,
suggesting that the RING-H2 domain is essential for oligomerization (our
unpublished data). We next used as a partner a deletion mutant
(Ste5
474-638) that lacks almost half of the leucine-rich domain,
accumulates to higher steady-state levels than wild-type Ste5, and
homo-oligomerizes very efficiently (Figure
1B, lane 2). Ste5
474-638 could efficiently
hetero-oligomerize with wild-type Ste5, but not with Ste5
RING
(Figure 1B, compare lanes 3 and
4). Ste5
RING oligomerized equally poorly with both wild-type Ste5 and
Ste5
474-638 (our unpublished data), indicating that the
474638 deletion does not block the ability of the leucine-rich
domain to oligomerize. Thus, the RING-H2 domain is the major determinant for
oligomerization of full-length Ste5 and the leucine-rich domain plays a less
critical role. Furthermore, only a portion of the previously defined
leucine-rich domain may be directly involved in oligomerization.
Mutations in the Leucine-rich Domain Block Nuclear Accumulation and
Recruitment
Our prior analysis showed that the recruitment of Ste5 to the plasma membrane was dependent on a putative nuclear localization signal (NLS; overlapping residues 4966) and a RING-H2 domain (residues 177229) (Figure 1A; Mahanty et al., 1999
). To further understand how Ste5 localizes to the plasma
membrane, we identified new mutations in Ste5 that block its ability to
accumulate in nuclei during vegetative growth and be recruited to the plasma
membrane in the presence of
factor mating pheromone. Two of these
mutations, Ste5
474-487 and Ste5L482AL485A (L482AL485A is hereafter
referred to as L482/485A), overlapped the leucine-rich domain and a region
implicated in Ste11 binding. Indirect immunofluorescence showed that
Ste5
474-487-Myc9 failed to accumulate in any nuclei, whereas
Ste5L482/485A-Myc9 accumulated in only a few nuclei
(Figure 2A and
Table 2, lines 13).
Ste5
474-487-Myc9 and Ste5L482/485A-Myc9 were also excluded from a much
higher percentage of nuclei compared with Ste5-Myc9
(Figure 2A, % N.E.). Consistent
with previous findings, wild-type Ste5-Myc9 accumulated in nuclei of 24% of
the cells, of which
85% were unbudded
(Figure 2A and
Table 2, line 1)
(Mahanty et al.,
1999
).
Ste5
474-487-Myc9 and Ste5L482/485A-Myc9 were also severely defective
in their ability to be recruited to the plasma membrane. In the presence of
factor, Ste5
474-487-Myc9 failed to be recruited in any of the
cells and Ste5L482/485A-Myc9 was weakly recruited in only a few cells
(Table 2, lines 23). The
correlation between the amount of nuclear accumulation and the amount of
plasma membrane recruitment was in agreement with previous work
(Mahanty et al.,
1999
). However, both localization defects were unexpected, because
this region of Ste5 is distal to sequences involved in nuclear import or
recruitment and did not support either nuclear import or export of
heterologous proteins (our unpublished data).
Yeast strains expressing Ste5
474-487 and Ste5L482/485A in place of
wild-type Ste5 were severely defective in mating pathway functions. Neither
mutant could efficiently induce mating-specific transcription (as monitored
with a FUS1-lacZ reporter gene), cell cycle arrest in G1 phase (as
monitored in a growth inhibition halo assay), or form a significant number of
diploids (as monitored in a patch mating assay)
(Figure 2B). Ste5
474-487
was more defective than Ste5L482/485A in all of the assays, consistent with
the relative severity of the two mutations.
Two lines of evidence indicated that the functional defects of
Ste5
474-487 and Ste5L482/485A were linked to a reduced ability to
activate Ste11. First, Ste5
474-487 and Ste5L482/485A were unable to
efficiently associate with Ste11 in coprecipitation tests. In initial
experiments, no interaction was detected between Ste11-Myc and HA3-tagged
derivatives of either Ste5
474-487 or Ste5L482/485A (our unpublished
data). Further analysis with GST-tagged derivatives of the mutants showed that
Ste5
474-487 was more defective: Ste11-Myc failed to associate with
Ste5
474-487-GST but could associate with a reduced amount of
Ste5L482/485A-GST (Figure 2C).
Second, neither Ste5
474-487 nor Ste5L482/485A could positively regulate
a constitutively active form of Ste11, STE11-4
(Stevenson et al.,
1992
) that requires Ste5 for most of its basal activity
(Figure 2D). Thus, residues
474487 of Ste5 are critical for Ste11 association and activation.
The simplest interpretation of these findings was that the localization
defects of Ste5
474-487 and Ste5L482/485A were a secondary consequence
of their inability to bind to Ste11. However, two lines of evidence suggested
that this was not the case. First, a ste11
null mutation did
not block nuclear accumulation of wild-type Ste5
(Table 2, lines 7). Second,
previous work suggests that Ste11 is not required for plasma membrane
recruitment of Ste5 (Pryciak and Huntress,
1998
). Collectively, these findings suggest that amino acids
474487 define all or part of a novel domain that is required for
nuclear accumulation and recruitment in addition to Ste11 binding.
Ste5
474-487 and Ste5L482/485A Have Enhanced Ability to
Oligomerize and Associate with Ste4 and Ste7
Because the
474-487 and L482/485A mutations overlap the leucine-rich
domain, we determined the ability of Ste5
474-487 and Ste5L482/485A to
oligomerize. Myc- and HAtagged derivatives of each mutant were coexpressed in
yeast and tested for their ability to coimmunoprecipitate from whole cell
extracts. Both mutants formed abnormally high levels of homo-oligomers
(Figure 3A), although they
formed normal levels of hetero-oligomers with wild-type Ste5
(Figure 3B, only
Ste5
474-487 is shown). Ste5
474-487 formed more homo-oligomers
than Ste5L482/485 (Figure 3A,
bottom two
-HA panels), suggesting that the more severe mutation had a
stronger effect on oligomerization. The increase in oligomerization was
unlikely to be the result of poor binding to Ste11, because equivalent levels
of wild-type oligomers could be detected in wild-type and
ste11
cells (our unpublished data). Thus, both mutants have an
enhanced ability to oligomerize as long as both partners have the
mutation.
The oligomerization results suggested that the
474487
mutation defined a region in Ste5 that normally limited the ability of the
RING-H2 domain to oligomerize. To determine whether the increase in
oligomerization was dependent on the RING-H2 domain, we deleted the RING-H2
domain from Ste5
474-487. However, the double mutant was unstable. Next,
we tested whether Ste5
474-487 and Ste5L482/485A could oligomerize with
a partner that lacked the RING-H2 domain and found that they oligomerized at
barely detectable levels, like wild-type Ste5 (our unpublished data, but see
discussion of Figure 8C). This
finding, taken together with the strong dependence on the RING-H2 domain for
oligomerization of Ste5
474638
(Figure 1B), strongly suggested
that the enhanced oligomerization of Ste5
474-487 and Ste5L482/485A was
the result of greater homo-oligomerization of the intact RING-H2 domain.
To further explore whether the RING-H2 domain was more accessible in
Ste5
474-487 and Ste5L482/485A, we determined whether Ste5
474-487
and Ste5L482/485A had an increased ability to associate with Ste4. Strikingly,
Ste5
474-487 and Ste5L482/485A both associated with more Ste4 than did
wild-type Ste5, and Ste5
474-487 associated with more Ste4 than did
Ste5L482/485A (Figure 3C).
Similar experiments were done with Fus3, which binds next to the RING-H2
domain, and with Ste7, which binds the C terminus of Ste5
(Figure 1A). Ste5
474-487
and Ste5L482/485A also had an increased ability to associate with Ste7, with
Ste5
474-487 exhibiting the larger increase
(Figure 3D). In contrast, both
mutants associated with wild-type levels of Fus3
(Figure 3E), indicating that
the mutations specifically affected interactions with the N- and C-terminal
binding partners. Thus, Ste5
474-487 and Ste5L482/485A seem to have an
altered conformation that makes the RING-H2 domain more accessible for
oligomerization and binding to Ste4 and the C termini more accessible to
Ste7.
A msn5
Mutation Restores Efficient Nuclear Accumulation to
Ste
474-487 and Ste5L482/485A
We next determined why Ste5
474-487 and Ste5L482/485A failed to
accumulate in nuclei. Decreased nuclear accumulation can either be the result
of a block in nuclear import or an increase in nuclear export. Two pieces of
evidence suggested that Ste5
474-487 and Ste5L482/485A could be imported
into nuclei. First, a Ste5(1242) fragment that overlaps the NLS and the
RING-H2 domain accumulated in as many nuclei as did full-length Ste5 (our
unpublished data). Second, Ste5C180A-Myc9, which lacks a functional RING-H2
domain, accumulated in
10% more nuclei than Ste5-Myc9 in side-by-side
comparisons. Thus, all of the information required for nuclear import of Ste5
resides in the first 242 amino acids of the protein and is not dependent on
the status of either the leucine-rich domain or the RING-H2 domain.
These findings suggested that Ste5
474-487 and Ste5L482/485A failed
to accumulate in nuclei because they were more efficiently exported. To test
this hypothesis, we looked at their localization in a msn5
strain that lacks the major exportin required for nuclear export of Ste5
(Mahanty et al.,
1999
). Lack of Msn5 causes Ste5 to accumulate in
90% of
nuclei as long as it has a functional NLS. If Ste5
474-487 and
Ste5L482/485A were defective in nuclear import, then they should not
accumulate in msn5
nuclei. In contrast, if they were more
efficiently exported, then they should accumulate in msn5
nuclei. Strikingly, Ste5
474-487 and Ste5L482/485A accumulated in a high
percentage of msn5
nuclei
(Figure 4, A and B), confirming
our prediction. Thus, Ste5
474-487 and Ste5L482/485A are more
efficiently exported than wild-type Ste5, and the
474487
mutation defines a region in Ste5 that controls its accessibility to the
nuclear export machinery in addition to Ste4 and Ste7.
Wild-Type Ste5 Suppresses the Nuclear Accumulation Defects of
Ste5
474-487 and Ste5L482/485A
We wondered whether enhanced nuclear export was linked to the formation of homo-oligomers with an altered conformation. If this was the case, then Ste5
474-487 and Ste5L482/485A might be poorer substrates for export if
they hetero-oligomerized with wild-type Ste5. This possibility was supported
by the observation that the mutants formed fewer oligomers with a wild-type
partner than they did with themselves
(Figure 3, A and B). We tested
whether formation of mutant x wild-type hetero-oligomers correlated with
less nuclear export, by comparing the localization of Ste5
474-487-Myc9
and Ste5L482/485A-Myc9 in ste5
and STE5 strains. The
presence of wild-type Ste5 partially suppressed the nuclear accumulation
defect of both mutants, leading to more nuclear accumulation, in addition to
better recruitment of Ste5L482/485A (Table
2, lines 2, 3, 5, and 6). Thus, the enhanced nuclear export of
Ste5
474-487 and Ste5L482/485A could be linked to the formation of
homo-oligomers that have an altered conformation.
TAgNLS-Ste5 Drives Ste5-Myc9 to the Nucleus and Plasma Membrane
To further explore the possibility that Ste5 oligomers are recruited from a nuclear pool, we determined whether a nuclear localized form of Ste5, TAgNLS-Ste5, would stimulate nuclear accumulation and plasma membrane recruitment of wild-type Ste5, which is predominantly cytoplasmic (Mahanty et al., 1999
). TAgNLS-Ste5 shuttles continuously through the nucleus but
is predominantly nuclear in the absence and presence of mating pheromone due
to greater reimport from the additional strong NLS. As a consequence, it is
poorly recruited to the plasma membrane. We coexpressed untagged TAgNLS-Ste5
with Ste5-Myc9 and monitored nuclear accumulation and plasma membrane
recruitment of Ste5-Myc9 by indirect immunofluorescence. TAgNLS-Ste5 was
expressed at low levels from the STE5 promoter and at high levels
from the GAL1 promoter (Table
2, lines 812). Remarkably, a low level of expression of
TAgNLS-Ste5 was sufficient to increase the residency of Ste5-Myc9 in the
nucleus during vegetative growth (-
factor) and cause a much larger
increase in its recruitment in the presence of
factor. Greater
expression of TAgNLS-Ste5 caused even greater nuclear accumulation of
Ste5-Myc9 and a high level of recruitment. Moreover, more Ste5-Myc9 remained
in the nucleus in the presence of
factor. Thus, the size of the
nuclear pool of Ste5 is a rate-limiting factor in the amount that is recruited
to the plasma membrane.
Three control experiments confirmed that the changes in Ste5-Myc9
localization were dependent on coexpression of nuclear-localized Ste5 and were
not secondary consequences of sequestration of importins or exportins that
normally regulate Ste5. First, the expression of another TAgNLS-tagged nuclear
protein (TAgNLS-GFP-GFP) did not increase nuclear accumulation or recruitment
of Ste5-Myc9 (Table 2, lines 9
and 11). Second, a msn5
mutation in the major exportin for
Ste5 decreased Ste5-Myc9 recruitment (our unpublished data), suggesting that
sequestration of Ste5 exportins is more likely to interfere with recruitment
rather than enhance it. Third, cooverexpression of other Msn5 cargo (i.e.,
Far1 and Cdc24) did not increase nuclear accumulation or recruitment of
Ste5-Myc9 (our unpublished data). Collectively, these findings support the
possibility that TAgNLS-Ste5 x Ste5 hetero-oligomers are imported into
the nucleus and subsequently recruited to the plasma membrane.
Fusing GST to Ste5 Greatly Increases the Pool of Oligomers and
Stimulates Ste5 Activity and Recruitment
Our results raised the interesting possibility that Ste5 oligomers are more efficiently exported from the nucleus and recruited to the plasma membrane than are Ste5 monomers. We therefore tested whether artificially increasing the level of Ste5 oligomers would increase the pool of Ste5 that is exported and recruited. To make our analysis directly comparable with previous studies (Choi et al., 1994
;
Yablonski et al.,
1996
; Inouye et al.,
1997a
; Feng et al.,
1998
), we fused Ste5 to GST. We first determined whether GST
enhances the formation of Ste5 oligomers in yeast whole cell extracts, by
using HA- and Myc-tagged derivatives of Ste5 and Ste5-GST. Significantly more
Myc3-Ste5-GST coprecipitated with HA3-Ste5-GST compared with the amount of
Myc3-Ste5 that coprecipitated with HA3-Ste5
(Figure 5A, compare lanes 3 and
4). Densitometric analysis of duplicate samples revealed a 137-fold increase
in the level of Ste5-GST oligomers compared with Ste5 oligomers. In contrast,
HA3-Ste5-GST X Ste5-Myc9 hetero-oligomers were not more abundant than HA3-Ste5
x Ste5-Myc9 homo-oligomers (Figure
5B), demonstrating that the increase in Ste5-GST
homo-oligomerization was due to interactions between the GST moieties.
Therefore, the GST tag increases the level of Ste5 homo-oligomers to a point
were they constitute most of the total pool.
The functional competency of Ste5-GST was determined by expressing it at
native levels from its own promoter in a ste5
null strain and
measuring various outputs of the pheromone response pathway. Ste5-GST was very
hyperactive and constitutively activated the mating MAPK cascade, as shown by
a 40-fold increase in
-galactosidase activity from the
FUS1-lacZ reporter gene (Figure
5C, units Fus1-lacZ -
F) and slower vegetative growth
compared with wild-type Ste5 (Figure
5C). Ste5-GST was also hyperactive in the presence of
factor and induced more FUS1-lacZ expression
(Figure 5C, units Fus1-lacZ +
F), growth inhibition in a halo assay
(Figure 5C) and diploid
formation (Figure 6C). The
enhanced pathway activation was not due to stabilization of Ste5 by the GST
moiety, because the steady-state levels of HA3-Ste5-GST and Myc3-Ste5-GST were
no greater than that of HA3-Ste5 and Myc3-Ste5
(Figure 5A, whole cell extract
[WCE] panel compare lanes 3 and 4). Thus, Ste5-GST oligomers are much more
active than wild-type Ste5 and do not require
factor to induce the
mating pathway.
Ste5-GST was also more efficiently recruited to the plasma membrane
compared with Ste5-Myc9, both during vegetative growth and in the presence of
factor (Figure 5D).
Weak constitutive recruitment of Ste5-GST to the cell cortex could be detected
in
14% of vegetatively dividing cells, compared with no detectable
recruitment for Ste5-Myc9 (Figure
6A, % rim staining -
F). This basal recruitment was
asymmetric and nearly always restricted to one side of the cell, as found for
Ste5-Myc9 after brief
factor induction
(Mahanty et al.,
1999
). Ste5-GST also underwent significantly enhanced recruitment
to the cell cortex after brief (15-min)
factor induction, resulting in
strongly detectable recruitment of Ste5-GST in 43% of cells. In contrast,
Ste5-Myc9 could be detected at the plasma membrane of only 13% of cells. The
Ste5-GST cells were also more enlarged and shmoo-like than the Ste5 cells
(Figures 5D and
6A), suggesting that the
enhanced recruitment induces more polarized growth.
To verify that the apparent increase in the recruitment of Ste5-GST
compared with Ste5-Myc9 was not a secondary consequence of using different
primary antibodies, we compared the ability of Myc3-Ste5 and Myc3-Ste5-GST to
be recruited using the same
-Myc antibody. The Myc3-tagged proteins
were much more difficult to detect than Ste5-Myc9 as a result of six fewer
copies of the Myc epitope. Nevertheless, this direct comparison demonstrated
that Myc3-Ste5-GST is more efficiently recruited to the plasma membrane than
Myc3-Ste5 (Figures 5E and
6A). Thus, the GST tag
simultaneously increases the pool of Ste5 that is oligomerized and stably
recruited to the plasma membrane.
Ste5-GST Must Shuttle through the Nucleus and Be Recruited to Ste4 to
Activate the Pathway
Prior genetic analysis led to the conclusion that oligomerization of Ste5 occurs as a consequence of binding to G
dimers and that
artificial oligomerization bypasses the need for binding to the Ste4 G
subunit (Inouye et al.,
1997a
). This conclusion was based on the ability of a Ste5-GST
mutant derivative [Ste5(C177AC180A)-GST] to restore mating to a
ste4
ste5
strain when it was overexpressed
from the GAL1 promoter. However, nuclear shuttling and recruitment to
G
are critical for pathway activation when Ste5 is expressed at
native levels (Mahanty et al.,
1999
). In addition, the GAL pathway is known to induce
the expression of mating specific genes
(Dolan and Gatlin, 1995
;
Ideker et al.,
2001
).
We determined whether the enhanced activity of Ste5-GST was dependent on
recruitment, by testing for a functional dependence on the RING-H2 domain and
Ste4. The C180A RING-H2 domain mutation, which blocks the association of Ste5
with Ste4 (Feng et al.,
1998
), completely abrogated the recruitment of Ste5-GST to the
plasma membrane both in the absence and presence of mating pheromone
(Figure 6A) and completely
disrupted its function (Figure 6, B and
C). In addition, a ste4
mutation in the G
subunit completely blocked the function of Ste5-GST, even when it was
overexpressed from a multicopy 2µ plasmid
(Figure 6, B and C). Thus, the
RING-H2 domain and Ste4 are both absolutely required for Ste5-GST to activate
the MAPK cascade, and oligomerization does not bypass the need for binding to
Ste4. These results argue that pathway activation is dependent on recruitment
of preformed Ste5 oligomers to G
.
As a control, we also determined whether Ste5-GST must shuttle through the
nucleus to be recruited to the plasma membrane, because this is the case for
wild-type Ste5. The localization of Ste5-GST was dependent on amino acid
residues required for nuclear localization of wild-type Ste5. A
Ste5
49-66-GST derivative was unable to accumulate in nuclei (our
unpublished data) or be recruited (Figure
6A) and was devoid of function
(Figure 6, B and C). Furthermore, the recruitment of Ste5-GST to the plasma membrane was also
dependent on the
-importin Rsl1/Kap95 and the nucleoporin Nsp1, which
regulate nuclear import of wild-type Ste5
(Mahanty et al.,
1999
). Temperature sensitive rsl1-4 and
nsp1ts mutations
(Nehrbass et al.,
1993
; Seedorf and Silver,
1997
) blocked the recruitment of Ste5-GST during vegetative growth
at both permissive (25°C) and nonpermissive (37°C) temperatures, and
greatly reduced recruitment at nonpermissive temperature in the presence of
factor (Figure 6D; shown is rsl1-4; see MATERIALS AND METHODS for details on
nsp1ts). Consistent with this, Ste5-GST was also unable to
induce morphological changes at either temperature in either mutant. Thus,
recruitment of Ste5-GST is dependent on the same nuclear import machinery that
regulates wild-type Ste5.
Fusing GST to TAgNLS-Ste5 Induces Its Export from the Nucleus
We next determined whether increasing the pool of Ste5 oligomers increases the pool of Ste5 that is exported from the nucleus. We used the predominantly nuclear TAgNLS-Ste5 derivative to determine whether fusion of GST to Ste5 increases its export from the nucleus. TAgNLS-Ste5 was predominantly nuclear both in the absence and presence of
factor
(Figure 7A) and unable to
efficiently activate the mating MAPK cascade
(Figure 7B), presumably as a
result of its reimport into the nucleus. Similarly, TAgNLS-Ste5-GST was also
predominantly nuclear during vegetative growth
(Figure 7A) and did not
constitutively activate the pathway (Figure
7B), indicating that it is readily imported into the nucleus. In
contrast, during
factor stimulation, a much greater pool of
TAgNLS-Ste5-GST was in the cytoplasm of budded cells and at the plasma
membrane of unbudded cells compared with TAgNLS-Ste5
(Figure 7A). The better
recruitment of TAgNLS-Ste5-GST correlated with much greater pathway activation
and projection formation (Figure 7, A and
B). Thus, fusion of GST to TAgNLS-Ste5 promotes its export from
the nucleus and recruitment to the plasma membrane in the presence of
factor, suggesting that Ste5 oligomers are more efficiently exported in
addition to being more efficiently recruited.
Ste5-GST Is More Efficiently Exported and Is Retained by Ste11 in the
Cytoplasm
The enhanced nuclear export of TAgNLS-Ste5-GST strongly suggested that Ste5-GST was also more efficiently exported from the nucleus. Interestingly, Ste5-GST accumulated in only 2% of total nuclei in ste5
cells,
suggesting that it may be more efficiently exported, as found for
Ste5
474-487 and Ste5L482/485A. We therefore compared the ability of
Ste5-GST to accumulate in the nuclei of MSN5 and msn5
cells (Figure 7C). The
msn5
mutation increased nuclear accumulation of Ste5-GST by
more than eightfold compared with a less than threefold increase for
Ste5-Myc9. The greater fold-increase in nuclear accumulation for Ste5-GST
compared with Ste5 indicates that Ste5-GST is more efficiently exported than
Ste5. Ste5-GST accumulated in fewer nuclei than wild-type Ste5 in the
msn5
strain, suggesting that it is also more efficiently
exported by Msn5-independent export pathways that operate in the absence of
Msn5 (Mahanty et al.,
1999
) and involve multiple exportins (Wang and Elion, unpublished
data). These findings suggest that Ste5-GST is more efficiently exported to
the cytoplasm than wild-type Ste5 both in the absence and presence of mating
pheromone.
The fact that Ste5-GST constitutively activates the mating MAPK cascade
raised the possibility that it mimics the effects of
factor and
induces its own export. We therefore determined whether mutations that block
MAPK cascade activation would increase nuclear accumulation of Ste5-GST. A
ste4
mutation did not increase nuclear accumulation of
Ste5-GST, indicating that Ste5-GST does not induce its own export in the
absence of
factor (Figure
7C). Interestingly, however, a ste11
mutation
caused a sixfold increase in nuclear accumulation of Ste5-GST, with no effect
on nuclear accumulation of Ste5-Myc9
(Figure 7C). Ste11 is
cytoplasmic and largely excluded from nuclei
(Mahanty et al.,
1999
), suggesting that it retains Ste5-GST in the cytoplasm. Thus,
the low level of nuclear accumulation of Ste5-GST compared with Ste5-Myc9 is
the combined effect of more efficient export from the nucleus and better
retention in the cytoplasm by Ste11.
Ste5-GST Associates with More Ste11 and Has a More Accessible N
Terminus
A prediction from the localization results is that Ste5-GST should associate with more Ste11 than wild-type Ste5. We compared the ability of Ste11-Myc to associate with HA3-Ste5 and HA3-Ste5-GST in coprecipitation tests. Ste11-Myc was expressed from the GAL1 promoter whereas the Ste5 derivatives were constitutively expressed from the STE5 promoter, allowing formation of a steady-state pool of Ste5 oligomers before the expression of Ste11. It was not possible to express HA3-Ste5-GST to as high a level as HA3-Ste5 in cells that also expressed Ste11-Myc, because of hyperactivation of the MAPK cascade. Therefore, the abundance of HA3-Ste5-GST in the whole cell extracts was much lower than that of HA3-Ste5 (Figure 8A, WCE). In sharp contrast, equivalent amounts of HA3-Ste5-GST and HA3-Ste5 coprecipitated with Ste11-Myc (Figure 8A,
Myc IP), indicating that a much greater percentage of the total pool of
HA3-Ste5-GST was associated with Ste11 compared with that of HA3-Ste5. Thus,
Ste5-GST oligomers have an enhanced ability to associate with Ste11.
The large increase in the association of Ste5-GST with Ste11 suggested that
the Ste11 binding domain was more accessible, raising the possibility that the
Ste5-GST oligomers have a more open conformation as suggested for
Ste5
474-487 and Ste5L482/485A. To test this possibility, we determined
whether the N terminus of the Ste5-GST fusion was more accessible than that of
wild-type Ste5, by comparing the ability of the 12CA5 antibody to
immunoprecipitate HA3-Ste5 and HA3-Ste5-GST. Remarkably, HA3-Ste5-GST was more
efficiently immunoprecipitated than HA3-Ste5
(Figure 8B). Similar results
were found with Myc3-Ste5 and Myc3-Ste5-GST when they were expressed
individually (our unpublished data) or together
(Figure 5A,
-Myc panels,
lanes 3 and 4). Thus, the N terminus of Ste5-GST is more accessible to the
antibody, suggesting that it has a more open conformation.
Only a Minor Fraction of the Total Pool of Ste5 Is Oligomerized in
Diluted Whole Cell Extracts
Our analysis argued that the formation of a Ste5 oligomer is a key rate-limiting step in determining the ability of Ste5 to be recruited to the plasma membrane and activate the pathway. We therefore estimated the fraction of total Ste5 that is oligomerized in yeast whole cell extracts, by assessing the relative ability of functional epitope-tagged derivatives of Ste5 to associate in the coimmunoprecipitation assay. Figure 8C shows that roughly <1% of the total input of HA3-Ste5 coprecipitated with either Myc3-Ste5 or Ste5-Myc9, suggesting that the majority of HA3-Ste5 is monomeric. Similar results were obtained when the immunoprecipitation was performed in the opposite direction, and
factor treatment did not affect the total
amount of oligomerization detected (our unpublished data). Thus, the majority
of Ste5 is likely to be monomeric, suggesting that oligomerization is a
tightly regulated event.
The Major Form of Ste5 in Cells Is an Inactive Monomer
We found that only a very minor fraction of the total pool of Ste5 forms oligomers in coimmunoprecipitation experiments (Figure 8C). Thus, the majority of Ste5 is monomeric under our coimmunoprecipitation conditions. The low level of Ste5 oligomers suggests that oligomerization is tightly regulated and could require stabilizing factors that are diluted in our extracts. Two lines of evidence lead us to favor the possibility that the major form of Ste5 is an autoinhibited monomer in which contacts between the N- and C-terminal halves of the protein decrease the accessibility of the RING-H2 domain and Ste11 binding site (Figure 8D). First, previous work indicates that the N- and C-terminal halves of Ste5 can associate (Sette et al., 2000
). Second, we find that the major pool of Ste5 does not
efficiently associate with Ste4 and Ste11 unless Ste5 has an increased
capacity to oligomerize. The potential existence of a monomer that protects
the RING-H2 domain from oligomerization makes biological sense given the
propensity of these domains to form higher order oligomers
(Borden, 2000
;
Kentsis et al., 2002
)
and the potential for inappropriate recruitment and pathway activation. The
fact that Ste5-GST constitutively hyperactivates the MAPK cascade
(Figure 5) suggests that too
high a pool of oligomers would cause inappropriate pathway activation and
provides a rationale for why the steady state level of oligomers needs to be
kept low.
The Availability of the RING-H2 Domain Determines the Amount of
Oligomerization
We find that oligomerization of full-length Ste5 is largely controlled by the RING-H2 domain, with the leucine-rich domain playing a less critical role (Figure 1B). Interestingly, a good correlation was found between oligomerization and accessibility of Ste5 to proteins that bind to different regions of the protein, including the N terminus (epitope antibody), RING-H2 domain (Ste4), leucine-rich domain (Ste11), and C termini (Ste7) (Figures 3 and 8, A and B). Thus, the oligomer may have a more accessible RING-H2 domain and Ste11 and Ste7 binding sites compared with the monomer (Figure 8D). This possibility is supported by preliminary results suggesting that Ste5-GST has an enhanced ability to bind Ste4 (our unpublished data).
Ste5 may oligomerize before associating with either Ste11 or Ste4, because
Ste5 does not require either Ste4 or Ste11 to oligomerize
(Yablonski et al.,
1996
; our unpublished data), associates with more Ste11 and is
better recruited as a Ste5-GST fusion
(Figure 8A). This
interpretation is consistent with the observation that the same RING-H2 domain
mutation blocks association with Ste4 as well as oligomerization of the
RING-H2 domain (Feng et al.,
1998
). It is tempting to speculate that Ste5 first undergoes a
conformational change to allow oligomerization of the RING-H2 domain before
binding to Ste4 and Ste11.
Our interpretation contrasts the view of Sette et al.
(2000
) who have proposed that
the active form of Ste5 undergoes stronger interactions between the N and C
halves of the protein, either through intramolecular contacts in a closed
monomer or intermolecular contacts in an antiparallel dimer. In their study,
hyperactivating mutations in either the N terminus (Ste5 P44L) or C termini
(Ste5 S770K) caused better coprecipitation of the N-terminal and C-terminal
halves of the protein with no effect on oligomerization. However, the use of
GST fusions in this analysis could have interfered with the detection of
potential effects of the mutations on oligomerization. In light of our
findings, the P44L and S770K mutations would be predicted to increase the
accessibility of nearby RING-H2 and leucine-rich domains for oligomerization.
This prediction is supported by findings in Sette et al.
(2000
): 1)
GST-Ste5P44L(1518) associates more readily with Ste5P44L(1518)
than it does with wild-type Ste5(1518); and 2) GST-Ste5 (1518)
and GST-Ste5P44L(1518) both associate more readily with full-length
Ste5P44L than with full-length Ste5. Furthermore, our model does not rule out
the possibility that interactions between the N and C termini of the protein
could be involved in activation of Ste5; for example, N to C interactions
could occur between dimers or within a dimer (see
Elion, 2001
for a
discussion).
The Leucine-rich Domain Negatively Regulates the Accessibility of the
RING-H2 Domain and Vice Versa
The finding that mutations in the leucine-rich domain increase the ability of Ste5 to oligomerize and to associate with Ste4 suggests that the leucine-rich domain restricts the availability of the RING-H2 domain (Figure 3). Conversely, the fact that a C180A RING-H2 domain mutant oligomerizes more efficiently and has an enhanced capacity to be suppressed by overexpression of Ste11 (Feng et al., 1998
),
suggests that the RING-H2 domain restricts the availability of the
leucine-rich domain and the Ste11 binding site. Although multiple
interpretations are possible, the simplest is that the Ste5
474-487 and
Ste5L482/485 mutations define a domain that makes intramolecular contacts with
the RING-H2 domain or a part of the protein that influences its accessibility.
An attractive possibility is that these intramolecular interactions maintain
Ste5 in an autoinhibitory conformation that limits the accessibility of both
the RING-H2 domain and the Ste11 binding site
(Figure 8D).
Oligomerization Does Not Bypass a Requirement for Recruitment of Ste5
to Ste4
Previous work led to the conclusion that oligomerization of Ste5 bypasses the need for Ste4 for pathway activation (Inouye et al., 1997a
). This conclusion was based on the ability of a
Ste5(C177AC180A)-GST RING-H2 domain mutant to suppress the mating defect of a
ste4
mutant when it was overexpressed from the GAL1
promoter. However, we find that when Ste5-GST or Ste5C180A-GST are expressed
from the STE5 promoter at native or overexpressed levels, the RING-H2
domain and Ste4 are both essential for basal and induced signaling. Thus, Ste5
signaling capacity is tightly linked to its recruitment to G
,
presumably as a preformed dimer. The previous results can be reconciled by
postulating that the need for regulated recruitment to the plasma membrane is
bypassed if the amount of Ste5 in the cytoplasm is high enough to allow
interactions with Ste20 at the cell cortex. This interpretation is supported
by the fact that overexpressed Ste5(C177AC180A)-GST still requires Ste20 to
activate the pathway (Sette et
al., 2000
). It is also supported by the observation that
signal transduction can be restored to a Ste5
49-66 mutant that fails to
shuttle through the nucleus when it is overexpressed
(Elion, 2002
).
Conformationally Distinct Forms of Ste5 May Localize Differently in
the Cell
Our analysis reveals a strong link between oligomerization and nuclear export and recruitment. The major monomeric form of Ste5 shuttles through the nucleus, but is efficiently retained in G1 phase nuclei. In contrast, Ste5 derivatives that oligomerize more readily are more efficiently exported from the nucleus, better retained in the cytoplasm by Ste11 and more efficiently recruited to the plasma membrane. These observations raise the intriguing possibility that oligomerization could be regulated at the level of localization. For example, Ste5 oligomers might accumulate in subcellular compartments with a higher concentration of Ste5 or other proteins that stabilize the oligomer. Negative regulatory events could also prevent the accumulation of oligomers; for example, an inhibitor could bind monomers and block oligomerization or degrade oligomers at the end of a cycle of signaling. A rate-limiting step in recruitment is the amount of Ste5 that is in the nucleus. Thus, it is possible that oligomerization occurs in the nucleus or during nuclear shuttling. Interestingly, key binding partners of Ste5 are differentially distributed in the nucleus (Msn5, Fus3), cytoplasm (Ste11, Ste7), and at the plasma membrane (Ste4), suggesting that Ste5 conformation could be regulated by sequential binding to these proteins. The most obvious potential regulator is Fus3, which phosphorylates a domain near the RING-H2 domain (Kranz, Satterberg, and Elion, unpublished data). However, the available evidence suggests that
factor does not increase
oligomerization and Fus3 is not required for nuclear accumulation or
recruitment (Wang and Elion, unpublished data), arguing against this
possibility.
A second interesting possibility is that Ste5 oligomers form as a result of
binding of the nuclear export machinery. This is strongly supported by the
tight link between oligomerization, nuclear export and recruitment revealed by
comparative analysis of Ste5 mutants (i.e., wild-type Ste5,
Ste5
474-487, Ste5L282/485A, Ste5-GST, and TAgNLS-Ste5-GST). Further
support comes from the observation that amino acid residues in Ste5 that
mediate nuclear export of a heterologous protein also influence the
accessibility of the RING-H2 domain (Wang and Elion, unpublished data). For
example, nuclear exportins might bind to inactive monomers in the nucleus,
induce a conformational change that increases the accessibility of the RING-H2
domain and promotes the formation of oligomers, which are exported and
subsequently stabilized by binding to Ste11 in the cytoplasm and Ste4 at the
plasma membrane. Because Ste11 seems to preferentially bind to Ste5 oligomers,
it has the potential to increase the cytoplasmic pool of oligomers by
preventing intermolecular interactions between the N- and C-terminal halves of
the protein. The binding of Ste4 to the RING-H2 domain has a similar
potential. A prediction of this model is that Ste11 should be required for
efficient recruitment of Ste5. Remarkably, the behavior of Ste5
474-487
fulfills this prediction because it is selectively defective in binding to
Ste11 and cannot be recruited even when it's nuclear accumulation defect is
suppressed (Table 2).
* Corresponding author. E-mail address: elaine_elion{at}hms.harvard.edu.
Borden, K.L.B. (2000). RING domains: master builders of molecular scaffolds? J. Mol. Biol. 295, 1103-1112..
Burack, W.R., and Shaw, A.S. (2000). Signal transduction: hanging on a scaffold. Curr. Opin. Cell Biol. 12, 211-216.[CrossRef][Medline]
Choi, K.-Y., Kranz, J.A., Mahanty, S.K., and Elion, E.A.
(1999). Characterization of Fus3 localization, Active Fus3
localizes in complexes of varying size and specific activity. Mol.
Biol. Cell 10,
1553-1568.
Choi, K.-Y., Satterberg, B., Lyons, D.M., and Elion, E.A. (1994). Ste5 tethers multiple protein kinases in the MAP kinase cascade required for mating in S. cerevisiae. Cell 78, 499-512.[CrossRef][Medline]
Dolan, J.W., and Gatlin, J.E. (1995). A role for the Gal11 protein in pheromone-induced transcription in Saccharomyces cerevisiae. Biochem. Biophys. Res. Commun. 212, 854-860.[CrossRef][Medline]
Elion, E.A. (1995). Ste5, a meeting place for MAP kinases and their associates. Trends Cell Biol. 5, 322-327.[CrossRef][Medline]
Elion, E.A. (2001). The Ste5 Scaffold. J. Cell Sci. 114, 3967-3978.
Elion, E.A. (2002). How to monitor nuclear shuttling. In: Methods in Enzymology, Guide to Yeast Genetics and Molecular Biology, ed. G. Fink and C. Guthrie, New York: Academic Press.]
Errede, B., Gartner, A., Zhou, Z., Nasmyth, K., and Ammerer, G. (1993). MAP kinase-related FUS3 from S. cerevisiae is activated by STE7 in vitro. Nature 362, 261-264.[CrossRef][Medline]
Farley, F.W., Satterberg, B., Goldsmith, E.J., and Elion, E.A.
(1999). Relative dependence of different outputs of the
Saccharomyces cerevisiae pheromone response pathway on the MAP kinase
Fus3p. Genetics 151,
1425-1444.
Feng, Y., Song, L., Kincaid, E., Mahanty, S.K., and Elion, E.A.
(1998). Functional binding between G
and the LIM domain of
Ste5 is required to activate the MEKK Ste11. Curr. Biol.
8,
267-278.[CrossRef][Medline]
Gustin, M.C., Albertyn, J., Alexander, M.R., and Davenport, K.
(1998). MAP kinase pathways in the yeast Saccharomyces
cerevisiae. Microbiol. Rev.
62,
1264-1300.29-934.
Inouye, C., Dhillon, N., and Thorner, J. (1997b). Mutational analysis of STE5 in the yeast Saccharomyces cerevisiae: application of a differential interaction trap assay for examining protein-protein interactions. Genetics 147, 479-492.[Abstract]
Inouye, C., Dhillon, N., and Thorner, J. (1997a). Ste5
RING-H2 domain-Role in Ste4-promoted oligomerization for yeast pheromone
signaling. Science 278,
103-106.
Inouye, K., Mizutani, S., Koide, H., and Kaziro, Y.
(2000). Formation of the Ras dimer is essential for Raf-1
activation. J. Biol. Chem. 275,
3737-3740.
Kaplan, W., Husler, P., Klump, H., Erhardt, J., Sluiscremer, N., and Dirr, H. (1997). Conformational stability of pGEX-expressed Schistosoma japonicum glutathione S-transferase, a detoxification enzyme and fusion protein affinity tag. Protein Sci. 6, 399-406.[Medline]
Kentsis, A., Gordon, R.E., and Borden, K.L.B. (2002).
Self-assembly properties of a model RING domain. Proc. Natl. Acad. Sci.
USA 99,
667-672.
Kranz, J., Satterberg, B., and Elion, E.A. (1994). The
MAP kinase Fus3 associates with and phosphorylates the upstream signaling
component Ste5. Genes Dev. 8,
313-327.
Lamson, R.E., Winters, M.J., and Pryciak, P.M. (2002). Cdc42 regulation of kinase activity and signaling by the yeast p21-activated kinase Ste20. Mol. Cell. Biol. 22, 2932-2951.
Lee, B., and Elion, E.A. (1999). The MAPKKK Ste11
regulates vegetative growth through a kinase cascade of shared signaling
components. Proc. Natl. Acad. Sci. USA
96,
12679-12684.
Leeuw, T., Wu, C.L., Schrag, J.D., Whiteway, M., Thomas, D.Y., and
Leberer, E. (1998). Interaction of a G-protein
-subunit
with a conserved sequence in Ste20/PAK family protein kinases.
Nature 391,
191-195.[CrossRef].
McTigue, M.A., Williams, D.R., and Tainer, J.A. (1995). Crystal structures of a schistosomal drug and vaccine target: glutathione S-transferase from Schistosoma japonicum and its complex with the leading antischistosomal drug praziquantel. J. Mol. Biol. 246, 21-27.[CrossRef][Medline].
Nehrbass, U., Fabre, E., Dihlmann, S., Herth, W., and Hurt, E.C. (1993). Analysis of nucleocytoplasmic transport in a thermosensitive mutant of nuclear pore protein NSP1. Eur. J. Cell Biol. 62, 1-12.guyen, A., et al. (2002). Kinase suppressor of Ras
(KSR) is a scaffold which facilitates mitogen-activated protein kinase
activation in vivo. Mol. Cell. Biol.
22,
3035-3045.
Pawson, T., and Scott, J.D. (1997). Signaling through
scaffold, anchoring, and adaptor proteins. Science
278,
2075-2080.
Pryciak, P.M., and Huntress, F.A. (1998). Membrane
recruitment of the kinase cascade scaffold protein Ste5 by the G-
complex underlies activation of the yeast pheromone response pathway.
Genes Dev. 12,
2684-2697.
Roy, F., Laberge, G., Douziech, M., Ferland-McCollough, D., and
Therrien, M. (2002). KSR is a scaffold required for activation of
the ERK/MAPK module. Genes Dev.
16,
427-38.
Seedorf, M., and Silver, P.A. (1997).
Importin/karyopherin protein family members required for mRNA export from the
nucleus. Proc. Natl. Acad. Sci. USA
94,
8590-8595.
Sette, C., Inouye, C.J., Stroschein, S.L., Iaquinta, P.J., and
Thorner, J. (2000). Mutational analysis suggests that activation
of the yeast pheromone response MAPK pathway involves conformational changes
in the Ste5 scaffold protein. Mol. Biol. Cell
11,
4033-4049.udyka, T., and Skerra, A. (1997). Glutathione S-transferase can be used as an enzymatically active dimerization module for a recombinant protease inhibitor, and functionally secreted into the periplasm of Escherichia coli. Protein Sci. 6, 2180-2187. Drogen, F., Stucke, V.M., Jorritsma, G., and Peter, M. (2001). MAP kinase dynamics in response to pheromones in budding yeast. Nat. Cell Biol. 3, 1051-1059.[CrossRef][Medline]
Whiteway, M.S., Wu, C., Leeuw, T., Clark, K., Fourest-Lieuvin, A.,
Thomas, D.Y., and Leberer, E. (1995). Association of the yeast
pheromone response G protein
subunits with the MAP kinase
scaffold Ste5p. Science 269,
1572-1575.ablonski, D., Marbach, I., and Levitzki, A. (1996).
Dimerization of Ste5, a mitogen-activated protein kinase cascade scaffold
protein, is required for signal transduction. Proc. Natl. Acad. Sci.
USA 93,
13864-13869.
Yasuda, J., Whitmarsh, A.J., Cavanagh, J., Sharma, M., and Davis,
R.J. (1999). The JIP group of mitogen-activated protein kinase
scaffold proteins, Mol. Cell. Biol.
19,
7245-7254.
This article has been cited by other articles: | http://www.molbiolcell.org/cgi/content/full/14/6/2543 | crawl-002 | refinedweb | 8,727 | 50.97 |
Python Certification Training for Data Scienc ...
- 48k Enrolled Learners
- Weekend/Weekday
- Live Class
Python for Data Science is a must learn for professionals in the Data Analytics domain. With the growth in the IT industry, there is a booming demand for skilled Data Scientists and Python has evolved as the most preferred programming language. Through this article, you will learn the basics, how to analyze data and then create some beautiful visualizations using Python.
Before we begin, let me just list out the topics I’ll be covering through the course of this article.
You can go through this Python for data science video lecture where our Python Training expert is discussing each & every nitty-gritty of the technology.
Python is no-doubt the best-suited language for a Data Scientist. I have listed down a few points which will help you understand why people go with Python for Data Science:
And do you know the best part? Data Scientist is one of the highest paid jobs who earn around $130,621 per year as per Indeed.com.
Python was created by Guido Van Rossum in 1989. It is an interpreted language with dynamic semantics. It is free to access and run on all platforms. Python is:
1) Object Oriented
2) High-Level Language
3) Easy to Learn
4) Procedure Oriented
Let me guide you through the process of installing Jupyter on your system. Just follow the below steps:
Step 1: Go to the link:
Step 2: You can either click on “Try in your browser” or “Install the Notebook”.
Well, I would recommend you to install Python and Jupyter using Anaconda distribution. Once you have installed Jupyter, it will open on your default browser by typing “Jupyter Notebook” in command prompt. Let us now perform a basic program on Jupyter.
name=input("Enter your Name:") print("Hello", name)
Now to run this, press “Shift+Enter” and view the output. Refer to the below screenshot:
data-src=
In case you are facing any issues with the installation or Jupyter basics, you can go through the below video. It will also take you to various fundamentals of Python, along with a practical demonstrating the various libraries such as Numpy, Pandas, Matplotlib and Seaborn. Hope you like it! :)
Now is the time when you get your hands dirty in Python programming. But for that, you should have a basic understanding of the following topics:
Variables: Variables refers to & save some time.
For more information and practical implementations, you can refer to this blog: Python Tutorial.
This is the part where the actual power of Python with data science comes into the picture. Python comes with numerous libraries for scientific computing, analysis, visualization etc. Some of them are listed below:
Problem Statement: You are given a dataset which comprises of comprehensive statistics on a range of aspects like distribution & nature of prison institutions, overcrowding in prisons, type of prison inmates etc. You have to use this dataset to perform descriptive statistics and derive useful insights out of the data. Below are few tasks:
For data loading, write the below code:
import pandas as pd import matplotlib.pyplot as plot %matplotlib inline file_name = "prisoners.csv" prisoners = pd.read_csv(file_name) prisoners
data-src=
Now to use describe method in Pandas, just type the below statement:
prisoners.describe()
data-src=
Next in Python with data science article, let us perform data manipulation.
prisoners["total_benefited"]=prisoners.sum(axis=1) prisoners.head()
data-src=
And finally, let us perform some visualization in Python for data science article. Refer')
Output –
data-src=
I hope my blog on “Python for data science” was relevant for you. To get in-depth knowledge, check out our interactive, live-online Edureka Python Data Science Certification Training here, that comes with 24*7 support to guide you throughout your learning period.
Got a question for us? Please mention it in the comments section of this “Python for data science” article and we will get back to you as soon as possible. | https://www.edureka.co/blog/learn-python-for-data-science/ | CC-MAIN-2019-39 | refinedweb | 667 | 62.98 |
In Java code, class and variable and method and constructor declarations can have “access specifiers”, that is one of:
private,
protected,
public. (or none.)
The purpose of access specifiers is to declare which entity can not be accessed from where. Its effect is different when used on any of: {class, class variable, class method, class's constructor}.
Below is a table showing the effects of access specifiers for class members (i.e. class variable and class methods).
◆ = Can Access. ◇ = No Access.
For example, if a variable is declared “protected”, then the class itself can access it, its subclass can access it, and any class in the same package can also access it, but otherwise a class cannot access it.
If a class memeber doesn't have any access specifier (the “(none)” row in above), its access level is sometimes known as “package”.
Here's a example.
class P { int x = 7; } public class A { public static void main(String[] args) { P p = new P(); System.out.println(p.x); } }
The code compiles and runs. But, if you add “private” in front of
int x, then you'll get a compiler error: “x has private access in P”. This is because when a member variable is private, it can only be accessed within that class.
Constructors can have the same access specifiers used for variables and methods. Their meaning is the same. For example, when a constructor has “private” declared, then, only the class itself can create a instance of it (kind of like self-reference). Other class in the same package can not create a instance of that class. Nor any subclass of that class. Nor any other class outside of this package.
(Note: constructors in Java are treated differently than methods. Class members are made of 2 things: ① class's variables. ② class's methods. Constructors are NOT considerd a class member.)
Here is a sample code.
class Q { public int x; private Q (int n) { x=n; System.out.println("i'm born!"); } } public class A1 { public static void main(String[] args) { Q q = new Q(3); System.out.println(q.x); } }
In the above code, it won't compile because Q's contructor is “private” but it is being created outside of itself. If you delete the “private” keyword in front of Q's constructor, then it compiles.
Remember that a class can have more than one constructors, each with different parameters. Each constructor can have different access specifier.
In the following example, the class Q has two constructors, A2 { public static void main(String[] args) { Q q1 = new Q(3); //Q q2 = new Q(3.3); } }
The fact that there can be constructors with different access specifiers means that in Java, the ability to create a object also depends on which constructor is called to create the object.
For classes, only the “public” access specifier can be used on
classes. Basically, Java has this “One Class Per File” paradigm. That
is, in every java source code file, only one class in the file is
public accessible, and that class must have the same name as the
file. (For Example, if the file is
xyz.java, then there must be a
class named “xyz” in it, and that is the class that's public.)
Optionally, the class can be declared with “public” keyword.
(Note: By convention, classes names should start with capital letter. So, a class named “xyz” really should be “Xyz”, with the file named “Xyz.java”.)
If you use any other access specifier on classes, or declare more than one class “public” in a file, the compiler will complain. For detail, see Packages in Java.
The rise of the elaborate access specifiers and their different consequences in Java entities is a complexity out of OOP and its machinery. What are OOP's Jargons &. | http://xahlee.info/java-a-day/access_specifiers.html | CC-MAIN-2014-15 | refinedweb | 639 | 65.73 |
But on processig for Android it is possible that no one has ever used a slider code and an LED to drive Arduino via bluetooth it seems very strange to me
care to document context and what is the real question?
When i move slider led on arduino go to on
I have 3 led when slider reached Char led set it on
It would help us hep you if you were to:
- Describe your equipment
- Describe the setup (how are things wired, powered)
- Post your code that is not doing what you expect
What’s that?
connectedThread
How did you pair the android device with the HC-06?
How did you connect the HC-06 with your arduino (some only support 3.3v on Rx)?
The way you read Serial is weird (you have two serial reads in the loop)
Hi @r0x15, I think you are trying to fix too many things at the same time. First develop and test your Ard program connected directly to the PC. The 2nd serial read, as @jay_m says, looks wrong, suggest remove. You are using Serial.print to see what’s happening, but in the final arrangement they will arrive in the Processing sketch. Is that what you want? As you are using a Mega you could use one of the other Serial ports for your progress messages. You need an adapter, and putty. If you’ve never done that, ask.
When the Ard program is reliable, then connect the Processing program. When that is working well, change the connection to via Bluetooth.
Hello,
Please format your code:
You formatted it so I can work with it now…
:)
This was a very quick exploration using Processing on PC and Arduino MEGA 2560 R3 connected with a USB cable.
Arduino
int incomingData; char data = 0; const byte LED9 = 9; const byte LED11 = 11; //BACK const byte LED12 = 12; //STOP const byte LED13 = 13; //FWD void setup() { pinMode(LED9, OUTPUT); //set pin4 as output pinMode(11, OUTPUT); pinMode(12, OUTPUT); pinMode(13, OUTPUT); Serial.begin(9600); } void loop() { //if(Serial.available()) if (Serial.available() > 0) { analogWrite(9, incomingData); data = Serial.read(); //Serial.print("\n"); //Serial.print(data); //Print Value of data in Serial monitor //Serial.print(" : "); incomingData = Serial.read(); Serial.println(data); if (data == '!') //stop_(); digitalWrite(LED13, HIGH); else if (data == '~') digitalWrite(LED13, LOW); //forward_(); // else if (data == 'b') // backward_(); } } void stop_() { Serial.println("STOP"); digitalWrite(LED12, HIGH); digitalWrite(LED11, LOW); digitalWrite(LED13, LOW); } void forward_() { Serial.println("FORWARD"); digitalWrite(LED13, HIGH); digitalWrite(LED12, LOW); digitalWrite(LED11, LOW); } void backward_() { Serial.println("BACKWARD"); digitalWrite(LED11, HIGH); digitalWrite(LED12, LOW); digitalWrite(LED13, LOW); }
Processing
import processing.serial.*; Serial myPort; // Create object from Serial class int val; // Data received from the serial port int count; void setup() { size(200, 200); // List all the available serial ports printArray(Serial.list()); String portName = Serial.list()[2]; myPort = new Serial(this, portName, 9600); delay(1000); } void draw() { count++; byte data = byte(count); myPort.write(data); // send an H to indicate mouse is over square println(char(data)); if(data == '!') background(255, 255, 0); else if (data == '~') background(0); }
This works!
I did not have a bank of LEDs so just using LED13 on Arduino and turned it ON and OFF and also changed Processing background.
I used an integer counter converted to a byte to send data from Processing and looked for the equivalent ASCII value (character) received on Arduino.
The important thing is I added a delay() in setup():
delay(1000)
The Arduino will reboot when you make an initial serial connection and you should wait until it is ready to receive serial data first otherwise you will flood the buffers.
I have a topic about this in the forum.
You can look for it.
:)
This is one of the first projects I did with Processing:
I later adapted the same code to Processing on Android with the Ketai library and used the sliders as a general controller for LEDs, my robots etc.
Once you grasp serial communications and serial over Bluetooth this gets easier to work with.
Start with simple examples and build on that.
I used the Processing slider examples and not a library for this.
:)
maybe you didn’t understand I am making a dimmer on one led and the possibility through buttons to turn on three other leds the problem is that when I move the slider to dim the led the other 3 also turn on randomly according to the code that arrives
[image]
We understood. We are saying that your arduino code is not good. You will likely have something failing when you try to second guess when the data will arrive since you have two different reads in your arduino code.
Hello,
Consider sending comma delimited data terminated with a ‘\n’:
import processing.serial.*; Serial myPort; // Create object from Serial class int data; void setup() { size(200, 200); // List all the available serial ports printArray(Serial.list()); String portName = Serial.list()[2]; myPort = new Serial(this, portName, 9600); delay(1000); } void draw() { if(frameCount%10 == 0) //Sends every 1/6 second { String sData = nf(byte(data), 3); // Only the LSB (least significant byte) //myPort.write(sData); //This will send 0 to 255 and repeat; it is only sending the LSB (least significatn byte of the int) //myPort.write(','); String R = nf(int(random(256)), 3); //myPort.write(R); //R //myPort.write(','); String G = nf(int(random(256)), 3); //myPort.write(G); //R //myPort.write(','); String B = nf(int(random(256)), 3); //myPort.write(B); //R //myPort.write('\n'); //This replaces commented data above: myPort.write(sData + ',' + R + ',' + G + ',' + B + '\n'); data++; } }
This is an example of the data strings it was sending:
000,183,109,014
001,170,200,221
002,243,029,109
003,200,061,115
004,180,070,156
005,080,132,113
…
The above is one way to do it; I found it easy to get substrings if data had leading 0’s and went with that.
On the Arduino side you will need to:
:)
That is my approach as well to monitor data.
I once used the network library as well for this.
You can also send the data (received from Arduino) back to the same sketch on same COM port and view it in the console; I did this in my exploration of this topic since my other hardware was not available and removed that in my example to keep it simple.
:)
Thank you so much this is a great solution but there is a problem
what you posted is in java mode and i am in android mode
how can I do?
This is not a problem… it is an opportunity for you.
You will have to do this in Android mode.
:)
couple typos:
The pins were chosen for an Android mini pro
I supposed you meant an Arduino Pro Mini
in the fritzing drawing:
it’s probably the Common Cathode that is connected to GND.
just for clarity, the 3 resistors are red, red, brown so 220Ω
(on my screen they could be confused with red/red/violet
which would be 220MΩ (20%) so way too much)…
Noel
you are always available thank you very much I apologized if not I answered your questions but the commitments are many Hello and thanks
This was clearly stated
This is a 3.3 V Arduino, thus no voltage divider was used on the Rx input. (Necessary with the 5V model.)
———————
Each LED in an RGB LED has a different forward voltage drop; I would use correct resistor values for each one depending on the RGB LED datasheet to balance the intensity and limit current as required.
What’s the real point of « balancing the intensity »?
(I get the concept but it has no practical use since human eye does not perceive RGB intensities in the same way. That’s what should be taken into account more than forward current and protection alone if you want to match the RGB choice)
Hum seems some posts disappeared
@J_Silva @jay_m The post was flagged by the community. It seems that the rule is, not to paste full codes at least on topics tagged as homework. I respect that, but in the case of Android-related topics, I disagree because there is no Processing for Android documentation teaching to code programmatically. Something necessary, because it would be difficult to implement an XML file structure. The problem is, that for building apps, instead of using P4A in conjunction with the Arduino IDE (which were born to live together); P4A has totally lost the battle against software like “MIT App Inventor” because documentation and handling seem much more simple. And that’s really a pity. Further, I don’t believe that a teacher will use A4P for actually teaching Android. So I believe that this rule in this case has a negative effect. This is why I decided to make a Github repo with some ready code, and you can find the code in question there. | https://discourse.processing.org/t/code-android-e-arduino/26364 | CC-MAIN-2022-27 | refinedweb | 1,505 | 62.48 |
afReadFrames man page
afReadFrames — read sample frames from a track in an audio file
Synopsis
#include <audiofile.h>
AFframecount afReadFrames(AFfilehandle file, int track, void *data, int count);
Description
afReadFrames attempts to read up to count frames of audio data from the audio file handle file into the buffer at data.
Parameters
file is a valid file handle returned by afOpenFile(3).
track is always AF_DEFAULT_TRACK for all currently supported file formats.
data is a buffer of storing count frames of audio sample data.
count is the number of sample frames to be read.
Return Value
afReadFrames returns the number of frames successfully read from file.
Errors
afReadFrames can produce these errors:
- AF_BAD_FILEHANDLE
the file handle was invalid
- AF_BAD_TRACKID
the track parameter is not AF_DEFAULT_TRACK
- AF_BAD_READ
reading audio data from the file failed
- AF_BAD_LSEEK
seeking within the file failed
See Also
afWriteFrames(3)
Author
Michael Pruett <michael@68k.org>
Referenced By
afOpenFile(3), afWriteFrames(3).
03/06/2013 Audio File Library 0.3.6 | https://www.mankier.com/3/afReadFrames | CC-MAIN-2017-17 | refinedweb | 164 | 56.05 |
ACTION-10: Convert member submission to Editor's Draft, with support of JohnArwe and mhausenblas
Convert member submission to Editor's Draft, with support of JohnArwe and mhausenblas
- State:
- closed
- Person:
- Steve Speicher
- Due on:
- September 17, 2012
- Created on:
- September 10, 2012
- Associated Product:
- Linked Data Platform Spec
- Related emails:
- Re: ACTION-10: First Editor's Draft is available! (from michael.hausenblas@deri.org on 2012-09-20)
- Re: ACTION-10: First Editor's Draft is available! (from sspeiche@us.ibm.com on 2012-09-19)
- ACTION-10: First Editor's Draft is available! (from sspeiche@us.ibm.com on 2012-09-19)
- Re: ACTION-10: First Editor's Draft is available! (from lehors@us.ibm.com on 2012-09-19)
Related notes:
adding email manually: On 2012-09-19 12:17 , "Steve K Speicher" <sspeiche@us.ibm.com> wrote
--------------------------------------------------
I have completed ACTION-10 to convert member submission to an Editor's
Draft [1].
This is an "as-is" migration from member submission [2] to follow W3C
draft format. I have also produced a HTML diff [3].
There are a few editorial items I need to address:
- What namespace URI should we use for new terms defined in this document?
- Validate the shortname for this as 'ldbp'
I assume these will either be created as official issues or actions. For
now I just made a reminder for editors in the doc.
Now that ACTION-10 is complete, I will start incorporating some of the
actions and minor editorial issues found since published as member
submission.
Fellow editors (Michael/John), once you have contributed an edit feel free
to add yourself as an editor.
Note: the diff [3] isn't perfect, for example the new draft uses ReSpec.js
which pulls in boilerplate HTML and the diff struggles with this a bit.
Note: [1] and [3] have an annoying red box at the top right since biblio
entries are missing, I have a pull request pending [4]. Robin says he'll
get to it shortly.
[1] -
[2] -
[3] -
[4] -
Thanks,
Steve Speicher
IBM Rational Software
OSLC - Lifecycle integration inspired by the web ->
Consider closing, no work left...see emailSteve Speicher, 19 Sep 2012, 19:41:01
Display change log. | https://www.w3.org/2012/ldp/track/actions/10?changelog | CC-MAIN-2021-43 | refinedweb | 370 | 62.78 |
WSL+Docker: Kubernetes on the Windows Desktop platform for running containerized services and applications in distributed environments. While a wide variety of distributions and installers exist to deploy Kubernetes in the cloud environments (public, private or hybrid), or within the bare metal environments, there is still a need to deploy and run Kubernetes locally, for example, on the developer's workstation.
Kubernetes has been originally designed to be deployed and used in the Linux environments. However, a good number of users (and not only application developers) use Windows OS as their daily driver. When Microsoft revealed WSL - the Windows Subsystem for Linux, the line between Windows and Linux environments became even less visible.
Also, WSL brought an ability to run Kubernetes on Windows almost seamlessly!
Below, we will cover in brief how to install and use various solutions to run Kubernetes locally.
Prerequisites
Since we will explain how to install KinD, we won't go into too much detail around the installation of KinD's dependencies.
However, here is the list of the prerequisites needed and their version/lane:
- OS: Windows 10 version 2004, Build 19041
- WSL2 enabled
- In order to install the distros as WSL2 by default, once WSL2 installed, run the command
wsl.exe --set-default-version 2in Powershell
- WSL2 distro installed from the Windows Store - the distro used is Ubuntu-18.04
- Docker Desktop for Windows, stable channel - the version used is 2.2.0.4
- [Optional] Microsoft Terminal installed from the Windows Store
- Open the Windows store and type "Terminal" in the search, it will be (normally) the first option
And that's actually it. For Docker Desktop for Windows, no need to configure anything yet as we will explain it in the next section.
WSL2: First contact
Once everything is installed, we can launch the WSL2 terminal from the Start menu, and type "Ubuntu" for searching the applications and documents:
Once found, click on the name and it will launch the default Windows console with the Ubuntu bash shell running.
Like for any normal Linux distro, you need to create a user and set a password:
[Optional] Update the
sudoers
As we are working, normally, on our local computer, it might be nice to update the
sudoers and set the group
%sudo to be password-less:
# Edit the sudoers with the visudo command sudo visudo # Change the %sudo group to be password-less %sudo ALL=(ALL:ALL) NOPASSWD: ALL # Press CTRL+X to exit # Press Y to save # Press Enter to confirm
Update Ubuntu
Before we move to the Docker Desktop settings, let's update our system and ensure we start in the best conditions:
# Update the repositories and list of the packages available sudo apt update # Update the system based on the packages installed > the "-y" will approve the change automatically sudo apt upgrade -y
Docker Desktop: faster with WSL2
Before we move into the settings, let's do a small test, it will display really how cool the new integration with Docker Desktop is:
# Try to see if the docker cli and daemon are installed docker version # Same for kubectl kubectl version
You got an error? Perfect! It's actually good news, so let's now move on to the settings.
Docker Desktop settings: enable WSL2 integration
First let's start Docker Desktop for Windows if it's not still the case. Open the Windows start menu and type "docker", click on the name to start the application:
You should now see the Docker icon with the other taskbar icons near the clock:
Now click on the Docker icon and choose settings. A new window will appear:
By default, the WSL2 integration is not active, so click the "Enable the experimental WSL 2 based engine" and click "Apply & Restart":
What this feature did behind the scenes was to create two new distros in WSL2, containing and running all the needed backend sockets, daemons and also the CLI tools (read: docker and kubectl command).
Still, this first setting is still not enough to run the commands inside our distro. If we try, we will have the same error as before.
In order to fix it, and finally be able to use the commands, we need to tell the Docker Desktop to "attach" itself to our distro also:
Let's now switch back to our WSL2 terminal and see if we can (finally) launch the commands:
# Try to see if the docker cli and daemon are installed docker version # Same for kubectl kubectl version
Tip: if nothing happens, restart Docker Desktop and restart the WSL process in Powershell:
Restart-Service LxssManagerand launch a new Ubuntu session
And success! The basic settings are now done and we move to the installation of KinD.
KinD: Kubernetes made easy in a container KinD and create our first cluster.
And as sources are always important to mention, we will follow (partially) the how-to on the official KinD website:
# Download the latest version of KinD curl -Lo ./kind # Make the binary executable chmod +x ./kind # Move the binary to your executable path sudo mv ./kind /usr/local/bin/
KinD: the first cluster
We are ready to create our first cluster:
# Check if the KUBECONFIG is not set echo $KUBECONFIG # Check if the .kube directory is created > if not, no need to create it ls $HOME/.kube # Create the cluster and give it a name (optional) kind create cluster --name wslkind # Check if the .kube has been created and populated with files ls $HOME/.kube
Tip: as you can see, the Terminal was changed so the nice icons are all displayed
The cluster has been successfully created, and because we are using Docker Desktop, the network is all set for us to use "as is".
So we can open the
Kubernetes master URL in our Windows browser:
And this is the real strength from Docker Desktop for Windows with the WSL2 backend. Docker really did an amazing integration.
KinD: counting 1 - 2 - 3
Our first cluster was created and it's the "normal" one node cluster:
# Check how many nodes it created kubectl get nodes # Check the services for the whole cluster kubectl get all --all-namespaces
While this will be enough for most people, let's leverage one of the coolest feature, multi-node clustering:
# Delete the existing cluster kind delete cluster --name wslkind # Create a config file for a 3 nodes cluster cat << EOF > kind-3nodes.yaml kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane - role: worker - role: worker EOF # Create a new cluster with the config file kind create cluster --name wslkindmultinodes --config ./kind-3nodes.yaml # Check how many nodes it created kubectl get nodes
Tip: depending on how fast we run the "get nodes" command, it can be that not all the nodes are ready, wait few seconds and run it again, everything should be ready
And that's it, we have created a three-node cluster, and if we look at the services one more time, we will see several that have now three replicas:
# Check the services for the whole cluster kubectl get all --all-namespaces
KinD: can I see a nice dashboard?
Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.
For that, the Kubernetes Dashboard project has been created. The installation and first connection test is quite fast, so let's do it:
# Install the Dashboard application into our cluster kubectl apply -f # Check the resources it created based on the new namespace created kubectl get all -n kubernetes-dashboard
As it created a service with a ClusterIP (read: internal network address), we cannot reach it if we type the URL in our Windows browser:
That's because we need to create a temporary proxy:
# Start a kubectl proxy kubectl proxy # Enter the URL on your browser:
Finally to login, we can either enter a Token, which we didn't create, or enter the
kubeconfig file from our Cluster.
If we try to login with the
kubeconfig, we will get the error "Internal error (500): Not enough data to create auth info structure". This is due to the lack of credentials in the
kubeconfig file.
So to avoid you ending with the same error, let's follow the recommended RBAC approach.
Let's open a new WSL2 session:
# Create a new ServiceAccount kubectl apply -f - <<EOF apiVersion: v1 kind: ServiceAccount metadata: name: admin-user namespace: kubernetes-dashboard EOF # Create a ClusterRoleBinding for the ServiceAccount kubectl apply -f - <<EOF EOF
# Get the Token for the ServiceAccount kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') # Copy the token and copy it into the Dashboard login and press "Sign in"
Success! And let's see our nodes listed also:
A nice and shiny three nodes appear.
Minikube: Kubernetes from everywhere Minikube and create our first cluster.
And as sources are always important to mention, we will follow (partially) the how-to from the Kubernetes.io website:
# Download the latest version of Minikube curl -Lo minikube # Make the binary executable chmod +x ./minikube # Move the binary to your executable path sudo mv ./minikube /usr/local/bin/
Minikube: updating the host
If we follow the how-to, it states that we should use the
--driver=none flag in order to run Minikube directly on the host and Docker.
Unfortunately, we will get an error about "conntrack" being required to run Kubernetes v 1.18:
# Create a minikube one node cluster minikube start --driver=none
Tip: as you can see, the Terminal was changed so the nice icons are all displayed
So let's fix the issue by installing the missing package:
# Install the conntrack package sudo apt install -y conntrack
Let's try to launch it again:
# Create a minikube one node cluster minikube start --driver=none # We got a permissions error > try again with sudo sudo minikube start --driver=none
Ok, this error cloud be problematic ... in the past. Luckily for us, there's a solution
Minikube: enabling SystemD
In order to enable SystemD on WSL2, we will apply the scripts from Daniel Llewellyn.
I invite you to read the full blog post and how he came to the solution, and the various iterations he did to fix several issues.
So in a nutshell, here are the commands:
# Install the needed packages sudo apt install -yqq daemonize dbus-user-session fontconfig
# Create the start-systemd-namespace script sudo vi /usr/sbin/start-systemd-namespace #!/bin/bash SYSTEMD_PID=$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}') if [ -z "$SYSTEMD_PID" ] || [ "$SYSTEMD_PID" != "1" ]; then export> "$HOME/.systemd-env" exec sudo /usr/sbin/enter-systemd-namespace "$BASH_EXECUTION_STRING" fi if [ -n "$PRE_NAMESPACE_PATH" ]; then export PATH="$PRE_NAMESPACE_PATH" fi
# Create the enter-systemd-namespace sudo vi /usr/sbin/enter-systemd-namespace #!/bin/bash if [ "$UID" != 0 ]; then echo "You need to run $0 through sudo" exit 1 fi SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')" if [ -z "$SYSTEMD_PID" ]; then /usr/sbin/daemonize /usr/bin/unshare --fork --pid --mount-proc /lib/systemd/systemd --system-unit=basic.target while [ -z "$SYSTEMD_PID" ]; do SYSTEMD_PID="$(ps -ef | grep '/lib/systemd/systemd --system-unit=basic.target$' | grep -v unshare | awk '{print $2}')" done fi if [ -n "$SYSTEMD_PID" ] && [ "$SYSTEMD_PID" != "1" ]; then if [ -n "$1" ] && [ "$1" != "bash --login" ] && [ "$1" != "/bin/bash --login" ]; then exec /usr/bin/nsenter -t "$SYSTEMD_PID" -a \ /usr/bin/sudo -H -u "$SUDO_USER" \ /bin/bash -c 'set -a; source "$HOME/.systemd-env"; set +a; exec bash -c '"$(printf "%q" "$@")" else exec /usr/bin/nsenter -t "$SYSTEMD_PID" -a \ /bin/login -p -f "$SUDO_USER" \ $(/bin/cat "$HOME/.systemd-env" | grep -v "^PATH=") fi echo "Existential crisis" fi
# Edit the permissions of the enter-systemd-namespace script sudo chmod +x /usr/sbin/enter-systemd-namespace # Edit the bash.bashrc file sudo sed -i 2a"# Start or enter a PID namespace in WSL2\nsource /usr/sbin/start-systemd-namespace\n" /etc/bash.bashrc
Finally, exit and launch a new session. You do not need to stop WSL2, a new session is enough:
Minikube: the first cluster
We are ready to create our first cluster:
# Check if the KUBECONFIG is not set echo $KUBECONFIG # Check if the .kube directory is created > if not, no need to create it ls $HOME/.kube # Check if the .minikube directory is created > if yes, delete it ls $HOME/.minikube # Create the cluster with sudo sudo minikube start --driver=none
In order to be able to use
kubectl with our user, and not
sudo, Minikube recommends running the
chown command:
# Change the owner of the .kube and .minikube directories sudo chown -R $USER $HOME/.kube $HOME/.minikube # Check the access and if the cluster is running kubectl cluster-info # Check the resources created kubectl get all --all-namespaces
The cluster has been successfully created, and Minikube used the WSL2 IP, which is great for several reasons, and one of them is that we can open the
Kubernetes master URL in our Windows browser:
And the real strength of WSL2 integration, the port
8443 once open on WSL2 distro, it actually forwards it to Windows, so instead of the need to remind the IP address, we can also reach the
Kubernetes master URL via
localhost:
Minikube: can I see a nice dashboard?
Working on the command line is always good and very insightful. However, when dealing with Kubernetes we might want, at some point, to have a visual overview.
For that, Minikube embeded the Kubernetes Dashboard. Thanks to it, running and accessing the Dashboard is very simple:
# Enable the Dashboard service sudo minikube dashboard # Access the Dashboard from a browser on Windows side
The command creates also a proxy, which means that once we end the command, by pressing
CTRL+C, the Dashboard will no more be accessible.
Still, if we look at the namespace
kubernetes-dashboard, we will see that the service is still created:
# Get all the services from the dashboard namespace kubectl get all --namespace kubernetes-dashboard
Let's edit the service and change it's type to
LoadBalancer:
# Edit the Dashoard service kubectl edit service/kubernetes-dashboard --namespace kubernetes-dashboard # Go to the very end and remove the last 2 lines status: loadBalancer: {} # Change the type from ClusterIO to LoadBalancer type: LoadBalancer # Save the file
Check again the Dashboard service and let's access the Dashboard via the LoadBalancer:
# Get all the services from the dashboard namespace kubectl get all --namespace kubernetes-dashboard # Access the Dashboard from a browser on Windows side with the URL: localhost:<port exposed>
Conclusion
It's clear that we are far from done as we could have some LoadBalancing implemented and/or other services (storage, ingress, registry, etc...).
Concerning Minikube on WSL2, as it needed to enable SystemD, we can consider it as an intermediate level to be implemented.
So with two solutions, what could be the "best for you"? Both bring their own advantages and inconveniences, so here an overview from our point of view solely:
We hope you could have a real taste of the integration between the different components: WSL2 - Docker Desktop - KinD/Minikube. And that gave you some ideas or, even better, some answers to your Kubernetes workflows with KinD and/or Minikube on Windows and WSL2.
See you soon for other adventures in the Kubernetes ocean. | https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/ | CC-MAIN-2022-05 | refinedweb | 2,572 | 53.24 |
There is a bug in the way Unity handles the HTTP response headers. A typical HTTP response might look like this:
HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 19 Oct 2014 19:48:07 GMT
Set-Cookie: session-data=a3izTzllfjmJvIAedFWH8_RF1VUoTVbszM-4KtITK8QBZwE
Set-Cookie: AWSELB=8FCF616716267D5222A365F68CB5F524241ADC40F2A79E054E56BD41E6C10E5921CA9D8BE1291E904BD712CDA6EB211CD74D6780AE1E44EE07A58093B97B9C3AA5121EF17E09420E;PATH=/
transfer-encoding: chunked
Connection: keep-alive
However, since Unity parses this as a Dictionary, only one Set-Cookie header gets parsed. In other words, only one cookie will get through and the others will be ignored / removed.
I submitted a bug report about this. In the meantime, does anyone know of a workaround?
Answer by Bunny83
·
Oct 20, 2014 at 02:39 AM
Sure, there are a few, however not all are available in all situations. If you're in a webplayer you can use the browsers webinterface (via site-javascript code) to perform webrequests and read manually read the headers.
For the other platforms you can use any .NET / Mono web class (System.Net) that actually works with headerfields. However since those classes requires Sockets to work, it's not available for Android and iOS Free. Only for the pro version. Standalone builds should work fine.
See the license comparison page (seciont "code": .NET Socket Support)
You probably want to use either the HttpWebRequest if you do single requests or the WebClient class which actually handles cookies itself and is designed for multiple requests on the same domain.
ps: If someone comes across and still has some votes on the feedback site left, feel free to vote it up ;)
I gave it 10 votes. In the meantime, I discovered a workaround that lets you get at all the cookies from the WWW object. It's a gross hack that uses reflection to get at the data of the protected responseHeadersString property. I wrote an extension library here. Enjoy!
Sure ;) That's also possible, however the internal implementation of the WWW class already has changed a few times in the past, so use it with caution.
Answer by fractiv
·
Oct 20, 2014 at 04:32 AM
Answering my own question. Discovered a gross hack that lets me get at the data by using reflection to access the responseHeadersString property of the WWW class, which is a protected member. Below is an extension method class I wrote that provides methods for getting, parsing and sending cookies. Enjoy!
//
// UnityCookies.cs
// by Sam McGrath
//
// Use as you please.
//
// Usage:
// Dictionary<string,string> cookies =;
//
// To send cookies in a WWW response:
// var www = new WWW( url, null, UnityCookies.GetCookieRequestHeader(cookies) );
// (if other headers are needed, merge them with the dictionary returned by GetCookieRequestHeader)
//
using UnityEngine;
using System;
using System.Collections.Generic;
using System.Reflection;
using System.Text;
using System.Text.RegularExpressions;
public static class UnityCookies {
public static string GetRawCookieString( this WWW www ) {
if ( !("SET-COOKIE") ) {
return null;
}
// HACK: workaround for Unity bug that doesn't allow multiple SET-COOKIE headers
var rhsPropInfo = typeof(WWW).GetProperty( "responseHeadersString",BindingFlags.Public|BindingFlags.NonPublic|BindingFlags.Instance );
if ( rhsPropInfo == null ) {
Debug.LogError( " not found in WWW class." );
return null;
}
var headersString = rhsPropInfo.GetValue( www, null ) as string;
if ( headersString == null ) {
return null;
}
// concat cookie headers
var allCookies = new StringBuilder();
string[] lines = headersString.Split( new string[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries );
foreach( var l in lines ) {
var colIdx = l.IndexOf( ':' );
if ( colIdx < 1 ) {
continue;
}
var headerType = l.Substring( 0,colIdx ).Trim();
if ( headerType.ToUpperInvariant() != "SET-COOKIE" ) {
continue;
}
var headerVal = l.Substring( colIdx+1 ).Trim();
if ( allCookies.Length > 0 ) {
allCookies.Append( "; " );
}
allCookies.Append( headerVal );
}
return allCookies.ToString();
}
public static Dictionary<string,string> ParseCookies( this WWW www ) {
return ParseCookies( );
}
public static Dictionary<string,string> ParseCookies( string str ) {
// cookie parsing adapted from node.js cookie module, so it should be pretty robust.
var dict = new Dictionary<string,string>();
if ( str != null ) {
var pairs = Regex.Split( str, "; *" );
foreach( var pair in pairs ) {
var eqIdx = pair.IndexOf( '=' );
if ( eqIdx == -1 ) {
continue;
}
var key = pair.Substring( 0,eqIdx ).Trim();
if ( dict.ContainsKey(key) ) {
continue;
}
var val = pair.Substring( eqIdx+1 ).Trim();
if ( val[0] == '"' ) {
val = val.Substring( 1, val.Length-2 );
}
dict[ key ] =( val );
}
}
return dict;
}
public static Dictionary<string,string> GetCookieRequestHeader( Dictionary<string,string> cookies ) {
var str = new StringBuilder();
foreach( var c in cookies ) {
if ( str.Length > 0 )
str.Append( "; " );
str.Append( c.Key ).Append( '=' ).Append( );
}
return new Dictionary<string,string>{ {"Cookie", str.ToString() } };
}
}
thanks for posting this fractiv, very helpful!
Works like a charm! It's really sad this issue has not been addressed, but in the meantime, your solution works very well. I will use your code in my project, please let me know how to credit you, in the mean time I'll use your user name..
WWW responseHeaders not giving me cookie in web-player, but does in editor
1
Answer
WWW http request loads way to long on IOS
0
Answers
Light Cookie Doubles up Lighting on Transparent Objects ?
1
Answer
UnityWebRequest leaking memory.
2
Answers
Flashlight Cookie Not Displaying
1
Answer
EnterpriseSocial Q&A | https://answers.unity.com/questions/812495/workaround-for-set-cookie-bug-in-wwwresponseheader.html?sort=oldest | CC-MAIN-2022-40 | refinedweb | 825 | 51.44 |
Django models
What we want to create now is something that will store all the posts in our blog. But to be able to do that we need to talk a little bit about things called
objects.
Objects
There is a concept in programming called
object-oriented programming. The idea is that instead of writing everything as a boring sequence of programming instructions, we can model things and define how they interact with each other.
So what is an object? It is a collection of properties and actions. It sounds weird, but we will give you an example.
If we want to model a cat, we will create an object
Cat that has some properties such as
color,
age,
mood (like good, bad, or sleepy ;)), and
owner (which could be assigned a
Person object – or maybe, in case of a stray cat, this property could be empty).
Then the
Cat has some actions:
purr,
scratch, or
feed (in which case, we will give the cat some
CatFood, which could be a separate object with properties, like
taste).
Cat -------- color age mood owner purr() scratch() feed(cat_food)
CatFood -------- taste
So basically the idea is to describe real things in code with properties (called
object properties) and actions (called
methods).
How will we model blog posts then? We want to build a blog, right?
We need to answer the question: What is a blog post? What properties should it have?
Well, for sure our blog post needs some text with its content and a title, right? It would be also nice to know who wrote it – so we need an author. Finally, we want to know when the post was created and published.
Post -------- title text author created_date published_date
What kind of things could be done with a blog post? It would be nice to have some
method that publishes the post, right?
So we will need a
publish method.
Since we already know what we want to achieve, let's start modeling it in Django!
Django model
Knowing what an object is, we can create a Django model for our blog post.
A model in Django is a special kind of object – it is saved in the
database. A database is a collection of data. This is a place in which you will store information about users, your blog posts, etc. We will be using a SQLite database to store our data. This is the default Django database adapter – it'll be enough for us right now.
You can think of a model in the database as a spreadsheet with columns (fields) and rows (data).
Creating an application
To keep everything tidy, we will create a separate application inside our project. It is very nice to have everything organized from the very beginning. To create an application we need to run the following command in the console (from
djangogirls directory where
manage.py file is):
command-line
(myvenv) ~/djangogirls$ python manage.py startapp blog
You will notice that a new
blog directory is created and it contains a number of files now. The directories and files in our project should look like this:
djangogirls ├── blog │ ├── __init__.py │ ├── admin.py │ ├── apps.py │ ├── migrations │ │ └── __init__.py │ ├── models.py │ ├── tests.py │ └── views.py ├── db.sqlite3 ├── manage.py └── mysite ├── __init__.py ├── settings.py ├── urls.py └── wsgi.py
After creating an application, we also need to tell Django that it should use it. We do that in the file
mysite/settings.py. We need to find
INSTALLED_APPS and add a line containing
'blog', just above
]. So the final product should look like this:
mysite/settings.py
INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'blog', ]
Creating a blog post model
In the
blog/models.py file we define all objects called
Models – this is a place in which we will define our blog post.
Let's open
blog/models.py, remove everything from it, and write code like this:
blog/models.py
from django.db import models from django.utils import timezone
Double-check that you use two underscore characters (
_) on each side of
str. This convention is used frequently in Python and sometimes we also call them "dunder" (short for "double-underscore").
It looks scary, right? But don't worry – we will explain what these lines mean!
All lines starting with
from or
import are lines that add some bits from other files. So instead of copying and pasting the same things in every file, we can include some parts with
from ... import ....
class Post(models.Model): – this line defines our model (it is an
object).
classis a special keyword that indicates that we are defining an object.
Postis the name of our model. We can give it a different name (but we must avoid special characters and whitespace). Always start a class name with an uppercase letter.
models.Modelmeans that the Post is a Django Model, so Django knows that it should be saved in the database.
Now we define the properties we were talking about:
title,
text,
created_date,
published_date and
author. To do that we need to define the type of each field (Is it text? A number? A date? A relation to another object, like a User?).
We will not explain every bit of code here since it would take too much time. You should take a look at Django's documentation if you want to know more about Model fields and how to define things other than those described above ().
What about
def publish(self):? This is exactly the
publish method we were talking about before.
def means that this is a function/method and
publish is the name of the method. You can change the name of the method if you want. The naming rule is that we use lowercase and underscores instead of spaces. For example, a method that calculates average price could be called
calculate_average_price.
Methods often
return something. There is an example of that in the
__str__ method. In this scenario, when we call
__str__() we will get a text (string) with a Post title.
Also notice that both
def publish(self): and
def __str__(self): are indented inside our class. Because Python is sensitive to whitespace, we need to indent our methods inside the class. Otherwise, the methods won't belong to the class, and you can get some unexpected behavior.
If something is still not clear about models, feel free to ask your coach! We know it is complicated, especially when you learn what objects and functions are at the same time. But hopefully it looks slightly less magic for you now!
Create tables for models in your database
The last step here is to add our new model to our database. First we have to make Django know that we have some changes in our model. (We have just created it!) Go to your console window and type
python manage.py makemigrations blog. It will look like this:
command-line
(myvenv) ~/djangogirls$ python manage.py makemigrations blog Migrations for 'blog': blog/migrations/0001_initial.py: - Create model Post
Note: Remember to save the files you edit. Otherwise, your computer will execute the previous version which might give you unexpected error messages.
Django prepared a migration file for us that we now have to apply to our database. Type
python manage.py migrate blog and the output should be as follows:
command-line
(myvenv) ~/djangogirls$ python manage.py migrate blog Operations to perform: Apply all migrations: blog Running migrations: Rendering model states... DONE Applying blog.0001_initial... OK
Hurray! Our Post model is now in our database! It would be nice to see it, right? Jump to the next chapter to see what your Post looks like! | https://tutorial.djangogirls.org/en/django_models/ | CC-MAIN-2017-34 | refinedweb | 1,292 | 75.4 |
More from my Mix09 talk “building business applications with Silverlight 3”. Many customers have told me that they love Entity Framework and LinqToSql, but that they are not always able to use them in their projects just yet. In fact the number of folks that are using ADO.NET DataSet, DataReader, etc is very high. So I wanted to show taking my Mix demo and changing it to use the standard ADO.NET classic model of data access.
This allows you to use DataSet with Silverlight AND take advantage of all the cool new features RIA Services offers around data validation, paging, etc.
For the context,.
First, we can remove the Entity Framework model from our project…. we are going to use DataSet as our data access model in this demo. Notice this pattern likely makes the most sense if you already have a lot of infascture built up around DataSet… if not, then using DataReader\Writer might be a good choice.
First, we need to create a type that we return to the client. here that we are able to put the validation metadata directly on the type we are returning. Now we just need to fill up this type from the database..
Let’s start by defining a DomainService
1: [EnableClientAccess()]
2: public class SuperEmployeeDomainService : DomainService
3: {
4: DataSet Context = new DataSet();
5:
6: const int PageSize = 20;
7:
Notice here we are driving directly from DomainService… there is no need to use the EF or L2SDomainService.. We then setup the Context to be a DataSet.. we will populate this DataSet in the methods on the DomainService. Then we define a PageSize for our data.. this gives us a standard chunk to access from the database.
Then I wrote some fairly simply code to deal with populating the DataSet… I’d guess it would be easy to change this to work with whatever pattern you are using to full up DataSets today.
void FillSuperEmployees(DataSet ds, int page, int employeeID)
{
var conn = new SqlConnection();
conn.ConnectionString = ConfigurationManager.ConnectionStrings[“MainConnStr”].ConnectionString;
SqlDataAdapter da;
if (employeeID == -1)
{
da = new SqlDataAdapter(
“FROM SuperEmployees”,
conn);
}
else
{
da = new SqlDataAdapter(
“FROM SuperEmployees “ +
“WHERE EmployeeID=” + employeeID,
conn);
}
if (page == -1) da.Fill(ds, “SuperEmployees”);
else da.Fill(ds, page * PageSize, PageSize, “SuperEmployees”);
}
Next we write a query method..
1: public IQueryable<SuperEmployee> GetSuperEmployees(int pageNumber)
2: {
3: Context = new DataSet();
4: FillSuperEmployees(Context, pageNumber,-1);
5: DataTable superEmployees =
6: Context.Tables[“SuperEmployees”];
7:
8: var query = from row in
9: superEmployees.AsEnumerable()
10: select new SuperEmployee
11: {
12: EmployeeID = row.Field<int>(“EmployeeID”),
13: Name = row.Field<string>(“Name”),
14: Gender = row.Field<string>(“Gender”),
15: Issues = row.Field<int?>(“Issues”),
16: LastEdit = row.Field<DateTime>(“LastEdit”),
17: Origin = row.Field<string>(“Origin”),
18: Publishers = row.Field<string>(“Publishers”),
19: Sites = row.Field<string>(“Sites”),
20: };
21: return query.AsQueryable();
22: }
In line 4 we fill up the DataSet then in line 8-20, we use some LinqToDataSet support to make it easier to create a projection of our DataSet. If you’d rather not use Linq here, no problem, you can simply write a copy method to such the data out the DataSet and into our SuperEmployee type. Any collection can be returned as an IQuerable. Notice we are taking the page number here.. we are going to follow the same explicit paging pattern I introduced in the WCF example.
Then let’s take a look at Update… this method is called when there is a change to one of the fields in our SuperEmployee instance…
1: public void UpdateSuperEmployee(SuperEmployee currentSuperEmployee)
2: {
3:
4: GetSuperEmployee(currentSuperEmployee.EmployeeID);
5:
6: DataRow updateRow = null;
7: foreach (DataRow row in Context.Tables[“SuperEmployees”].Rows) {
8: if (row.Field<int>(“EmployeeID”) == currentSuperEmployee.EmployeeID) {
9: updateRow = row;
10: }
11: }
12:
13: var orgEmp = this.ChangeSet.GetOriginal(currentSuperEmployee);
14:
15: if (orgEmp.Gender != currentSuperEmployee.Gender)
16: updateRow.SetField(“Gender”, currentSuperEmployee.Gender);
17: if (orgEmp.Issues != currentSuperEmployee.Issues)
18: updateRow.SetField(“Issues”, currentSuperEmployee.Issues);
19: if (orgEmp.LastEdit != currentSuperEmployee.LastEdit)
20: updateRow.SetField(“LastEdit”, currentSuperEmployee.LastEdit);
21: if (orgEmp.Name != currentSuperEmployee.Name)
22: updateRow.SetField(“Name”, currentSuperEmployee.Name);
23: if (orgEmp.Origin != currentSuperEmployee.Origin)
24: updateRow.SetField(“Origin”, currentSuperEmployee.Origin);
25: if (orgEmp.Publishers != currentSuperEmployee.Publishers)
26: updateRow.SetField(“Publishers”, currentSuperEmployee.Publishers);
27: if (orgEmp.Sites != currentSuperEmployee.Sites)
28: updateRow.SetField(“Sites”, currentSuperEmployee.Sites);
29:
30: }
First we need to get the DataRow to update. In line 4, we load it up from the Database, then in line 6-11 we find it in the current DataSet (remember, we are doing batch processing so their could be several updates already done in the the DataSet).
Notice the general pattern here is that we compare the original results that the client last saw (from line 13) to what is being sent up from the client. This ensure that we only change the fields that are actually updated. Otherwise we could overwrite another clients changes. This is very much like the code we did in the DTO example.
Finally, in Submit, we need to actually commit these changes to the database.
1: public override void Submit(ChangeSet changeSet)
2: {
3: base.Submit(changeSet);
4: var conn = new SqlConnection();
5: conn.ConnectionString = ConfigurationManager.ConnectionStrings[“MainConnStr”].ConnectionString;
6:
7:
8: SqlDataAdapter da = new SqlDataAdapter(
9: “SELECT * “ +
10: “FROM SuperEmployees “,
11: conn);
12: SqlCommandBuilder com = new SqlCommandBuilder(da);
13: da.Update(Context, “SuperEmployees”);
14:
15: }
Looking at this Submit override gives us some good insights into how RIA Services really works. Submit is called when a request first comes in from the client. It could contain several adds, deletes or updates in the changeSet. calling base.Submit() breaks the changeset out and calls the appropriate update\add\delete methods for each change. Those changes should leave the DataSet populated with the changes we need to commit to the database. Line 13 takes care of that. Notice this is also a really good place to set a breaking point when you are trying to debug your DomainService.
The only real changes to the client are to accommodate the explicit paging pattern we saw in the WCF example… which is great.. that means you can move from this DataSet model, to EF with very minimal changes to the client.
This example showed how to use existing code dealing with DataSet and expose it to Silverlight clients via .NET RIA Services.
Enjoy!
How could I write a generic non-typed Dataset class for the client. Since it appears that data binding cannot bind to an indexed object, the dataset class, I guessing, needs to dynamically create a strong typed class from the xml meta data. Do have have to use Ruby/Python for this? Anyone have any ideas?
BTW, thank-you for posting on how to use datasets. Up to now it seems that MS is not too helpful in providing samples on how to use datasets with Silverlight.
I would like to see how to get the validation data from the db too.
Basically, I’m setting up a project in which data is selected dynamically on the client. Data from the server comes in 2 parts.
1. Meta data such as column names in language of the user, etc.
2. Actual data.
Domald.. Sure… we have a pluggable system where you can get your metadata from anywhere.. This example shows getting hte metadata from a Xml file, but you could easily update it to pull from the database..
Donald – Yup, I’d suggest creating a strongly typed object to bind against in that case..
Nice Approach. So we can use this kind of methodology for supporting custom frameworks.
Thanks,
Thani
Dang, Brad,
I’ve been bundling the posts in this series into an ebook file so I can read it offline on my phone.. and you keep adding to it, so I have to keep updating my file.. (not complaining)..
Thanks, for the extensive info (now if I could just find some time to really dig in)
Jay Kimble
Thanigainathan – That is right! We are hoping folks will be able to fit tihs to any sort of data access model they use today with very little work.
Hi guys,
Here is a solution for you:
Enjoy your coding
Vitaly
Brad: Great stuff. Appreciate your hard work. This is exactly what I hav been looking for.
Quick question: This is the approach I should use to leverage a number of stored procedures I have, correct? Can I use the usually method for adding a parameters collection to the command object, etc. like I always have?
Thanks,
Sean
If you don’t want to have to lay out the datacontract and all of that you can use the tools from. | https://blogs.msdn.microsoft.com/brada/2009/07/27/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-12-dataset/ | CC-MAIN-2017-13 | refinedweb | 1,453 | 58.48 |
unhandled exception
Bug Description
DEBUG:Svammel:
DEBUG:Svammel:
DEBUG:Svammel:Got archive test-rebuild-
INFO:Svammel:
Traceback (most recent call last):
File "file-failures.py", line 163, in <module>
(log_
File "/home/
fail_build_date = fail_build_
AttributeError: 'NoneType' object has no attribute 'datecreated'
Related branches
- Matthias Klose: Pending requested 2011-04-28
- Diff: 45 lines (+13/-4)2 files modifieddata_parsing.py (+5/-1)
file-failures.py (+8/-3)
This situation occurred in the clean up phase, after bug filing. The script found a bug in the log file and goes ahead to check if there is a more recent successful build, in which case the old bug should be closed.
This did fail because the script hadn't found the old build failure during the bug filing phase. I guess the failed build is no longer in the archive if the build is re-run and succeeds? If so, the script needs to handle that.
hmm, that was a run with python file-failures.py --archive=
so maybe the debian-installer build was rescheduled. hte state/logfile had this as already filed as bug #766038
I fixed this in the linked branch by accepting that even if no build failure is found, a successful build may be searched for. This is needed for closing bugs that have been recorded in the log file after the build failure is no longer available.
worked around with:
@@ -197,6 +198,8 @@
def exists_
successful_ later_build( spph, archives, fail_arch): build_record = spph.get_ arch(fail_ arch) build_date = fail_build_ record. datecreated
fail_
+ if fail_build_record == None:
+ return None
fail_
for archive, archive_name in archives:
# Get list of successful builds for the same package | https://bugs.launchpad.net/svammel/+bug/770729 | CC-MAIN-2018-47 | refinedweb | 274 | 71.65 |
I recently saw a Twitter post (a tweet?) with this ArcGIS Pro tip: Nathan Shephard on Twitter: "Random @ArcGISPro tip - you can click the 'Selected Features' count lab...
And I thought to myself, "Well, we see these scattered about all over - wouldn't it be cool to find these in one place?" Wait, somebody must have already done this, right? So I did what I do with so many questions and I Googled it. You can do the same, but I figured I'd help by getting you started: Click here to Google "arcgis pro tips and tricks"
Indeed, we do in fact get a number of results. Like this one by James Sullivan is golden: I love how he calls out "hidden gems" and we see some convergence here with Nathan Shephard's tip from his Twitter post.
Instead of sifting through multiple Google results, and more importantly, to hear from YOU, ArcGIS Pro users in the wild, I wanted to create a place where we can all share our Pro tips and tricks.
I'll kick it off below with another one I learned just this week. We'll ask around the halls here and continue to post some favorites from the the ArcGIS Pro team, but I want to hear from you. Please post YOUR Pro tips and tricks.
Thank you for sharing!
Copy/paste from one table's Fields View into another table's Fields View to quickly update schema:
Tip:
Be very cautious when doing tasks for the first time in Pro. Some things that you have done a million times in ArcMap, and is pretty much muscle memory now, can look very similar but be very different. This one almost tripped me up. Everything got flipped here. Both the questions and choices.
ArcMap Pro
Why in the world did they do that? Thanks for the alert on this. No telling how many users have unknowingly corrupted their edits.
Here's another one that Tom Bole from the ArcGIS Pro Layout team was excited to share (though, to be fair, Aubrianna Kinghorn beat him to it with the Tweet).
When working in a Layout with the map frame activated so that you can pan and zoom the map, hold the 1 key to navigate the page.
Find this shortcut along with many others organized by functional areas of the application here: ArcGIS Pro Shortcuts
If you create and recreate locators like I do, you can run the tool once and then copy the parameters so you can put them into a python script for automation In this case I'm creating a locator using street centerlines with an alt names table.
I paste them into a text editor to pretty them up a bit, and then copy that into my python IDE. In the end it looks like this:
#create the centerlines alt names locator
def createCenterlinesLocator(scratchGDB,locatorsDir):
countryCode = 'USA'
primaryData = '{}\\Centerlines StreetAddress'.format(scratchGDB)
fieldMapping = "'StreetAddress.STREET_NAME_JOIN_ID Centerlines.JOINID';\
'StreetAddress.HOUSE_NUMBER_FROM_LEFT Centerlines.FROMADDR_L';\
'StreetAddress.HOUSE_NUMBER_TO_LEFT Centerlines.TOADDR_L';\
'StreetAddress.HOUSE_NUMBER_FROM_RIGHT Centerlines.FROMADDR_R';\
'StreetAddress.HOUSE_NUMBER_TO_RIGHT Centerlines.TOADDR_R';\
'StreetAddress.STREET_PREFIX_DIR Centerlines.PREDIR';\
'StreetAddress.STREET_NAME Centerlines.NAME';\
'StreetAddress.STREET_SUFFIX_TYPE Centerlines.POSTTYPE';\
'StreetAddress.STREET_SUFFIX_DIR Centerlines.POSTDIR'"
outLocator = '{}\\Centerlines_Pro'.format(locatorsDir)
languageCode = 'ENG'
altNames = '{}\\CenterlinesAltNames AlternateStreetName'.format(scratchGDB)
altFieldMapping = "'AlternateStreetName.STREET_NAME_JOIN_ID CenterlinesAltNames.JOINID';\
'AlternateStreetName.STREET_PREFIX_DIR CenterlinesAltNames.PREDIR';\
'AlternateStreetName.STREET_NAME CenterlinesAltNames.AN_NAME';\
'AlternateStreetName.STREET_SUFFIX_TYPE CenterlinesAltNames.POSTTYPE';\
'AlternateStreetName.STREET_SUFFIX_DIR CenterlinesAltNames.AN_POSTDIR'"
try:
arcpy.geocoding.CreateLocator(countryCode,primaryData,fieldMapping,outLocator,languageCode,
altNames,altFieldMapping)
print('Success: created single role centerlines locator')
except Exception as err:
print (err)
print('Error: unable to create single role centerlines locator. Exiting' )
time.sleep(5)
#sendEmail(err)
exit()
Have a Dedicated ArcGIS Pro Project to act as Arc Catalog. One day while watching movies, I followed this older blog Dude, where's my Catalog? and Created an ArcGIS Pro Project to act as ArcCatalog. I loaded all of my GDB connections, server connections, and frequent folders into the project and into my favorites. This truly made data management manageable in ArcGIS Pro.
One that I just learned about this year that is in plain sight is the View Tab->Windows group->Reset Panes for Mapping (Default). As we know, AGP uses Panes and can become numerous on the left or right side of the application depending how you set up your AGP. By clicking Reset Panes for Mapping (Default), it closes all panes except for the Catalog Pane. Brilliant! So other tips/tricks from a AGP 2.3 blog can be found here.
So I was reading about How To Make This Paper Terrain Map of Germany by John Nelson and saw the following GIS how-to-ism:
I guess he's heard from somebody who was freaking out because they thought their work disappeared, and I've heard that as well, so thought it was worth posting as a quick tip.
Remember that an ArcGIS Pro project can contain many maps and many layouts, so feel free to close and open the views as needed. They're still there! | https://community.esri.com/t5/arcgis-pro-questions/arcgis-pro-tips-and-tricks-i-share-mine-you-share/m-p/610719 | CC-MAIN-2021-39 | refinedweb | 832 | 53.92 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.